Our new channel - Dataset repositories for Data Science Projects
👇👇
https://news.1rj.ru/str/DataPortfolio
Buy ads: https://telega.io/c/DataPortfolio
For Promotions/ads: @coderfun @love_data
ChatGPT for Data Scientists
👇👇
https://www.linkedin.com/posts/sql-analysts_chatgpt-for-data-science-activity-7128583314378043393-kYzL
Like the post and share it with your friends so that it reaches more data aspirants 😄
Useful Telegram Channels to boost your career 😄👇
Free Courses with Certificate
Web Development
Data Science & Machine Learning
Programming books
Python Free Courses
Data Analytics
Ethical Hacking & Cyber Security
English Speaking & Communication
Stock Market & Investment Banking
Excel
ChatGPT Hacks
SQL
Tableau & Power BI
Coding Projects
Data Science Projects
Jobs & Internship Opportunities
Coding Interviews
Udemy Free Courses with Certificate
Cryptocurrency & Bitcoin
Python Projects
Data Analyst Interview
Data Analyst Jobs
Python Interview
ChatGPT Hacks
ENJOY LEARNING 👍👍
How to get into Data Science
👉Start with the basics: Learn programming languages like Python and R to master data analysis and machine learning techniques. Familiarize yourself with tools such as TensorFlow, scikit-learn, and Tableau to build a strong foundation.
👉Choose your target field: From healthcare to finance, marketing, and more, data scientists play a pivotal role in extracting valuable insights from data. You should choose which field you want to become a data scientist in and start learning more about it.
👉Build a portfolio: Start building small projects and add them to your portfolio. This will help you build credibility and showcase your skills (a minimal starter sketch follows below).
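For illustration only, here is a minimal starter sketch in Python using scikit-learn's bundled breast-cancer dataset. The dataset and model choice are just assumptions for the example, not a required project:

```python
# Minimal portfolio-project starter: train and evaluate a simple classifier.
# Uses scikit-learn's bundled breast-cancer dataset so it runs without downloads.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load features X and labels y
X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scale features, then fit a logistic-regression baseline
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Report precision/recall/F1 on the held-out set
print(classification_report(y_test, model.predict(X_test)))
```

Swap in your own dataset and iterate on the features and model; even a small end-to-end project like this is enough to put on a portfolio.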
This channel is for Programmers, Coders, and Software Engineers.
0- Python
1- Data Science
2- Machine Learning
3- Data Visualization
4- Artificial Intelligence
5- Data Analysis
6- Statistics
7- Deep Learning
8- Programming Languages
✅ Best channels on Telegram:
https://news.1rj.ru/str/addlist/JbC2D8X2g700ZGMx
✅ Free Courses with Certificate:
https://news.1rj.ru/str/free4unow_backup
The Data Science skill no one talks about...
Every aspiring data scientist I talk to thinks their job starts when someone else gives them:
1. a dataset, and
2. a clearly defined metric to optimize for, e.g. accuracy
But it doesn’t.
It starts with a business problem you need to understand, frame, and solve. This is the key data science skill that separates senior from junior professionals.
Let’s go through an example.
Example
Imagine you are a data scientist at Uber. And your product lead tells you:
👩💼: “We want to decrease user churn by 5% this quarter”
We say that a user churns when she decides to stop using Uber.
But why?
There are different reasons why a user would stop using Uber. For example:
1. “Lyft is offering better prices for that geo” (pricing problem)
2. “Car waiting times are too long” (supply problem)
3. “The Android version of the app is very slow” (client-app performance problem)
You build this list ↑ by asking the right questions to the rest of the team. You need to understand the user’s experience using the app, from HER point of view.
Typically there is no single reason behind churn, but a combination of a few of these. The question is: which one should you focus on?
This is when you pull out your great data science skills and EXPLORE THE DATA 🔎.
You explore the data to understand how plausible each of the above explanations is. The output from this analysis is a single hypothesis you should consider further. Depending on the hypothesis, you will solve the data science problem differently.
For example…
Scenario 1: “Lyft Is Offering Better Prices” (Pricing Problem)
One solution would be to detect/predict the segment of users who are likely to churn (possibly using an ML model) and send them personalized discounts via push notifications. To test whether your solution works, you will need to run an A/B test, splitting a percentage of Uber users into 2 groups (a rough sketch of this setup follows below):
The A group: no user in this group receives any discount.
The B group: users in this group whom the model flags as likely to churn receive a price discount on their next trip.
You could add more groups (e.g. C, D, E…) to test different pricing points.
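For illustration, here is a rough Python sketch of that setup: score users for churn risk with a simple model, then randomly assign the likely churners to A/B groups. Every column name, the model choice, and the 0.5 threshold are made-up assumptions, not Uber's actual pipeline:

```python
# Hypothetical sketch: score users for churn risk, then split likely churners
# into A/B groups for the discount experiment. All names and numbers are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake historical data: usage features plus a churned/retained label
users = pd.DataFrame({
    "trips_last_30d": rng.poisson(6, 5000),
    "avg_wait_minutes": rng.normal(5, 2, 5000),
    "days_since_last_trip": rng.integers(0, 60, 5000),
})
users["churned"] = (users["days_since_last_trip"] > 30).astype(int)

X = users.drop(columns="churned")
y = users["churned"]
X_train, X_current, y_train, _ = train_test_split(X, y, test_size=0.3, random_state=42)

# Train a churn classifier on the historical portion
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score "current" users and keep those the model thinks are likely to churn
scores = model.predict_proba(X_current)[:, 1]
likely_churners = X_current[scores > 0.5].copy()

# Randomly assign likely churners to control (A: no discount) or treatment (B: discount)
likely_churners["group"] = rng.choice(["A", "B"], size=len(likely_churners))
print(likely_churners["group"].value_counts())
```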
In a nutshell
1. Translating business problems into data science problems is the key data science skill that separates a senior from a junior data scientist.
2. Ask the right questions, list possible solutions, and explore the data to narrow down the list to one.
3. Solve that one data science problem.
1. How can we deal with problems that arise when the data flows in from a variety of sources?
There are many ways to handle multi-source problems, but most approaches aim to solve two problems (a small pandas sketch follows the list):
Identifying the presence of similar/same records and merging them into a single record
Restructuring the schema to ensure there is good schema integration
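As a small illustration of both steps, here is a pandas sketch; the column names are invented for the example:

```python
# Illustrative sketch: reconcile two sources with different schemas,
# then merge duplicate records. All column names are made up.
import pandas as pd

source_a = pd.DataFrame({
    "customer_id": [1, 2, 2],
    "email": ["a@x.com", "b@x.com", "b@x.com"],
})
source_b = pd.DataFrame({
    "cust_id": [2, 3],
    "mail": ["b@x.com", "c@x.com"],
})

# Schema integration: rename source B's columns to match source A's schema
source_b = source_b.rename(columns={"cust_id": "customer_id", "mail": "email"})

# Combine the sources, then merge same/similar records into a single record
combined = pd.concat([source_a, source_b], ignore_index=True)
deduplicated = combined.drop_duplicates(subset=["customer_id", "email"])
print(deduplicated)
```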
2. Where is Time Series Analysis used?
Time series analysis (TSA) has a wide scope of usage and appears in multiple domains. Here are some of the areas where TSA plays an important role:
Statistics
Signal processing
Econometrics
Weather forecasting
Earthquake prediction
Astronomy
Applied science
3. What are the ideal situations in which t-test or z-test can be used?
As a standard practice, a t-test is used when the sample size is small (less than 30) or the population standard deviation is unknown, while a z-test is used when the sample size is large (30 or more) and the population standard deviation is known.
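For example, here is a quick illustration with SciPy: a one-sample t-test on a small sample, and a z-test computed by hand on a larger sample with an assumed known standard deviation (the data is simulated just for the demo):

```python
# Illustrative one-sample t-test (small sample, unknown population sigma)
# versus a z-test computed by hand (large sample, sigma assumed known).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu0 = 50  # hypothesized population mean

# t-test: small sample (n < 30), population std dev unknown
small_sample = rng.normal(52, 10, size=20)
t_stat, t_p = stats.ttest_1samp(small_sample, popmean=mu0)
print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")

# z-test: large sample (n >= 30), population std dev assumed known (sigma = 10)
large_sample = rng.normal(52, 10, size=200)
sigma = 10
z_stat = (large_sample.mean() - mu0) / (sigma / np.sqrt(len(large_sample)))
z_p = 2 * (1 - stats.norm.cdf(abs(z_stat)))  # two-sided p-value
print(f"z-test: z = {z_stat:.2f}, p = {z_p:.3f}")
```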
4. What is the usage of the NVL() function?
The NVL() function replaces a NULL value with another value: it returns the second parameter if the first parameter is NULL, and returns the first parameter unchanged otherwise. NVL() is specific to Oracle; MySQL provides IFNULL() and SQL Server provides ISNULL() for the same purpose.
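Trying NVL() itself requires an Oracle database, but the behaviour is easy to reproduce locally: SQLite (via Python's built-in sqlite3 module) provides IFNULL() and the standard COALESCE(), which work the same way. This is an analogue for illustration, not Oracle's NVL():

```python
# NVL() needs Oracle, but SQLite's IFNULL()/COALESCE() behave the same way:
# return the second argument when the first is NULL, else the first unchanged.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, bonus INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Alice", 500), ("Bob", None)],
)

# IFNULL(bonus, 0) substitutes 0 wherever bonus is NULL
for row in conn.execute("SELECT name, IFNULL(bonus, 0) FROM employees"):
    print(row)  # ('Alice', 500), ('Bob', 0)
```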
5. What is the difference between DROP and TRUNCATE commands?
If a table is dropped, everything associated with it is removed as well: the relationships defined with other tables, the table's access privileges and grants, and its integrity checks and constraints.
If a table is truncated, only the data is removed; the table retains its original structure, constraints, and privileges, and can be repopulated.
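As a small illustration of the difference, the sketch below uses SQLite via Python's sqlite3 module. SQLite has no TRUNCATE statement, so DELETE FROM stands in for it here; the point is the same: emptying a table keeps its structure, while dropping it removes the table entirely:

```python
# DROP removes the table itself; emptying a table keeps its structure.
# SQLite has no TRUNCATE statement, so DELETE FROM stands in for it here.
import sqlite3

conn = sqlite3.connect(":memory:")
for name in ("orders", "invoices"):
    conn.execute(f"CREATE TABLE {name} (id INTEGER)")
    conn.execute(f"INSERT INTO {name} VALUES (1)")

conn.execute("DELETE FROM orders")   # TRUNCATE-like: rows gone, table remains
conn.execute("DROP TABLE invoices")  # table and its definition are gone

# 'orders' still exists (and is empty); selecting from 'invoices' now errors
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone())  # (0,)
try:
    conn.execute("SELECT * FROM invoices")
except sqlite3.OperationalError as err:
    print("invoices:", err)  # no such table: invoices
```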
Learn Data Science in 2024
𝟭. 𝗔𝗽𝗽𝗹𝘆 𝗣𝗮𝗿𝗲𝘁𝗼'𝘀 𝗟𝗮𝘄 𝘁𝗼 𝗟𝗲𝗮𝗿𝗻 𝗝𝘂𝘀𝘁 𝗘𝗻𝗼𝘂𝗴𝗵 📚
Pareto's Law states that "80% of consequences come from 20% of the causes".
This law should serve as a guiding framework for the volume of content you need to know to be proficient in data science.
Rookies often make the mistake of overspending their time on algorithms that are rarely applied in production. Advanced algorithms such as XLNet, Bayesian SVD++, and BiLSTMs are cool to learn.
But, in reality, you will rarely apply such algorithms in production (unless your job demands research and application of state-of-the-art algos).
For most ML applications in production, especially in the MVP phase, simple algorithms like logistic regression, K-Means, random forest, and XGBoost provide the biggest bang for the buck because they are simple to train, interpret, and productionize.
So, invest more time learning topics that provide immediate value now, not a year later.
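To make the point concrete, a reasonable baseline with a simple model is only a few lines; the dataset and model below are arbitrary choices for illustration:

```python
# A simple, interpretable baseline takes only a few lines -
# often enough for an MVP before reaching for state-of-the-art models.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```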
𝟮. 𝗙𝗶𝗻𝗱 𝗮 𝗠𝗲𝗻𝘁𝗼𝗿 ⚡
There’s a Japanese proverb that says “Better than a thousand days of diligent study is one day with a great teacher.” This proverb directly applies to learning data science quickly.
Mentors can teach you about how to build a model in production and how to manage stakeholders - stuff that you don’t often read about in courses and books.
So, find a mentor who can teach you practical knowledge in data science.
𝟯. 𝗗𝗲𝗹𝗶𝗯𝗲𝗿𝗮𝘁𝗲 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲 ✍️
If you are serious about excelling in data science, you have to put in the time to nurture your knowledge. This means spending less time watching mindless videos on TikTok and more time reading books and watching video lectures.
Join @datasciencefree for more
ENJOY LEARNING 👍👍
🎯5 Certifications for Data Science:
📍Python free certification :
https://imp.i115008.net/5bK93j
📍SQL Course :
https://bit.ly/3FxxKPz
📍Data Science Certification :
https://365datascience.pxf.io/q4m66g
📍Data Analysis :
https://imp.i115008.net/gb6ZJ2
Hope this was helpful for you
Optimize your resume to get more interviews
Many job seekers don’t get enough interviews even after applying for dozens of jobs. Why? Companies use Applicant Tracking Systems (ATS) to search and filter resumes by keywords. The Jobscan resume scanner helps you optimize your resume keywords for each job listing so that your application gets found by recruiters.
Link -> https://jobscanco.pxf.io/KjGgAa
ENJOY LEARNING 👍👍