Machine Learning & Artificial Intelligence | Data Science Free Courses – Telegram
Perfect channel to learn Data Analytics, Data Science, Machine Learning & Artificial Intelligence

Admin: @coderfun
Many data scientists don't know how to push ML models to production. Here's the recipe 👇

𝗞𝗲𝘆 𝗜𝗻𝗴𝗿𝗲𝗱𝗶𝗲𝗻𝘁𝘀

🔹 𝗧𝗿𝗮𝗶𝗻 / 𝗧𝗲𝘀𝘁 𝗗𝗮𝘁𝗮𝘀𝗲𝘁 - Ensure Test is representative of Online data
🔹 𝗙𝗲𝗮𝘁𝘂𝗿𝗲 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 - Generate features in real-time
🔹 𝗠𝗼𝗱𝗲𝗹 𝗢𝗯𝗷𝗲𝗰𝘁 - Trained scikit-learn or TensorFlow model
🔹 𝗣𝗿𝗼𝗷𝗲𝗰𝘁 𝗖𝗼𝗱𝗲 𝗥𝗲𝗽𝗼 - Save the model project code to GitHub
🔹 𝗔𝗣𝗜 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 - Use FastAPI or Flask to build a model API
🔹 𝗗𝗼𝗰𝗸𝗲𝗿 - Containerize the ML model API
🔹 𝗥𝗲𝗺𝗼𝘁𝗲 𝗦𝗲𝗿𝘃𝗲𝗿 - Choose a cloud service, e.g. AWS SageMaker
🔹 𝗨𝗻𝗶𝘁 𝗧𝗲𝘀𝘁𝘀 - Test inputs & outputs of functions and APIs
🔹 𝗠𝗼𝗱𝗲𝗹 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 - Evidently AI, a simple open-source tool for ML monitoring

𝗣𝗿𝗼𝗰𝗲𝗱𝘂𝗿𝗲

𝗦𝘁𝗲𝗽 𝟭 - 𝗗𝗮𝘁𝗮 𝗣𝗿𝗲𝗽𝗮𝗿𝗮𝘁𝗶𝗼𝗻 & 𝗙𝗲𝗮𝘁𝘂𝗿𝗲 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴

Don't push a model just because it hits 90% accuracy on the training set. Judge it on the test set, and only if the test set is representative of the online data. Use an sklearn Pipeline to chain preprocessing steps such as null handling.
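As a minimal sketch of that chaining idea (the toy data and model choice here are hypothetical, not from any particular project):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Chain null handling, scaling and the model into one object,
# so the exact same preprocessing runs at train and serve time.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # null handling
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])

X = np.array([[1.0, 2.0], [np.nan, 3.0], [2.0, np.nan], [3.0, 1.0]])
y = np.array([0, 1, 0, 1])
pipe.fit(X, y)  # NaNs are imputed inside the pipeline
```

Because the imputer lives inside the pipeline, online requests with missing fields get the same treatment the training data did.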

𝗦𝘁𝗲𝗽 𝟮 - 𝗠𝗼𝗱𝗲𝗹 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁

Train your model with frameworks like scikit-learn or TensorFlow. Push the model code, including the preprocessing, training and validation scripts, to GitHub for reproducibility.
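Alongside the code, the trained model object itself needs to be persisted so the API can load it later. A hedged sketch with joblib (the file name and toy data are made up for illustration):

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a tiny model, then serialize it to disk.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

joblib.dump(model, "model.joblib")   # artifact the API will load at startup
loaded = joblib.load("model.joblib")
```

Versioning this artifact next to the code that produced it keeps training and serving in sync.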

𝗦𝘁𝗲𝗽 𝟯 - 𝗔𝗣𝗜 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁 & 𝗖𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻

Your model needs a "/predict" endpoint, which receives a JSON object in the request and returns a JSON object with the model score in the response. You can use frameworks like FastAPI or Flask. Containerize this API so that it's agnostic to the server environment.
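A minimal "/predict" sketch with Flask; the `score` function is a hypothetical stand-in for a real `model.predict_proba` call, and the payload shape is assumed:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def score(features: dict) -> float:
    # Placeholder for loading the trained model and calling predict_proba.
    return sum(features.values()) / max(len(features), 1)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()            # e.g. {"age": 30, "income": 50}
    return jsonify({"score": score(payload)})
```

A Dockerfile around this app then only needs Python, the requirements file, and a `CMD` that starts the server.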

𝗦𝘁𝗲𝗽 𝟰 - 𝗧𝗲𝘀𝘁𝗶𝗻𝗴 & 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁

Write tests that validate the inputs and outputs of your API functions to catch errors early. Then deploy the code to a remote service such as AWS SageMaker.
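A sketch of what such tests can look like; `validate_input` and its feature set are hypothetical helpers standing in for real project code:

```python
def validate_input(payload: dict) -> dict:
    """Reject payloads with missing or non-numeric features."""
    required = {"age", "income"}            # hypothetical feature set
    if not required <= payload.keys():
        raise ValueError("missing features")
    if not all(isinstance(v, (int, float)) for v in payload.values()):
        raise ValueError("non-numeric feature")
    return payload

def test_valid_payload_passes():
    assert validate_input({"age": 30, "income": 50000})

def test_missing_feature_raises():
    try:
        validate_input({"age": 30})
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Run with pytest; the same pattern extends to testing the API endpoint itself through a test client.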

𝗦𝘁𝗲𝗽 𝟱 - 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴

Set up monitoring tools like Evidently AI, or use the built-in monitoring in AWS SageMaker. I use such tools to track performance metrics and data drift on online data.
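Evidently's exact API varies by version, so as an illustration of what drift monitoring measures underneath, here is a hand-rolled population stability index (PSI) sketch on one feature:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a training sample and an online sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 1000)
online = rng.normal(0.5, 1, 1000)        # mean shift: drifted feature
```

A common rule of thumb treats PSI above roughly 0.2 as significant drift worth investigating.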
AI Agents Course
by Hugging Face 🤗


This free course will take you on a journey, from beginner to expert, in understanding, using and building AI agents.

https://huggingface.co/learn/agents-course/unit0/introduction
How do you handle null, 0, and blank values in your data during the cleaning process?

Sometimes interview questions are based on this topic. Many data aspirants, and even some professionals, make the mistake of simply deleting missing values or filling them without proper analysis. This can damage the integrity of the analysis. It's essential to find out the reason behind the missing values, whether from the project head, the client, or your own investigation.

𝘼𝙣𝙨𝙬𝙚𝙧:

Handling null, 0, and blank values is crucial for ensuring the accuracy and reliability of data analysis. Here’s how to approach it:

1. 𝙄𝙙𝙚𝙣𝙩𝙞𝙛𝙮𝙞𝙣𝙜 𝙖𝙣𝙙 𝙐𝙣𝙙𝙚𝙧𝙨𝙩𝙖𝙣𝙙𝙞𝙣𝙜 𝙩𝙝𝙚 𝘾𝙤𝙣𝙩𝙚𝙭𝙩:
   - 𝙉𝙪𝙡𝙡 𝙑𝙖𝙡𝙪𝙚𝙨: These represent missing or undefined data. Identify them using functions like 'ISNULL' or filters in Power Query.
   - 0 𝙑𝙖𝙡𝙪𝙚𝙨: These can be legitimate data points but may also indicate missing data in some contexts. Understanding the context is important.
   - 𝘽𝙡𝙖𝙣𝙠 𝙑𝙖𝙡𝙪𝙚𝙨: These can be spaces or empty strings. Identify them using 'LEN', 'TRIM', or filters.

2. 𝙃𝙖𝙣𝙙𝙡𝙞𝙣𝙜 𝙏𝙝𝙚𝙨𝙚 𝙑𝙖𝙡𝙪𝙚𝙨 𝙐𝙨𝙞𝙣𝙜 𝙋𝙧𝙤𝙥𝙚𝙧 𝙏𝙚𝙘𝙝𝙣𝙞𝙦𝙪𝙚𝙨:
   - 𝙉𝙪𝙡𝙡 𝙑𝙖𝙡𝙪𝙚𝙨: Typically decide whether to impute, remove, or leave them based on the dataset’s context and the analysis requirements. Common imputation methods include using mean, median, or a placeholder.
   - 0 𝙑𝙖𝙡𝙪𝙚𝙨: If 0s are valid data, leave them as is. If they indicate missing data, treat them similarly to null values.
   - 𝘽𝙡𝙖𝙣𝙠 𝙑𝙖𝙡𝙪𝙚𝙨: Convert blanks to nulls or handle them as needed. This involves using 'IF' statements or Power Query transformations.

3. 𝙐𝙨𝙞𝙣𝙜 𝙀𝙭𝙘𝙚𝙡 𝙖𝙣𝙙 𝙋𝙤𝙬𝙚𝙧 𝙌𝙪𝙚𝙧𝙮:
   - 𝙀𝙭𝙘𝙚𝙡: Use formulas like 'IFERROR', 'IF', and 'VLOOKUP' to handle these values.
   - 𝙋𝙤𝙬𝙚𝙧 𝙌𝙪𝙚𝙧𝙮: Use transformations to filter, replace, or fill null and blank values. Steps like 'Fill Down', 'Replace Values', and custom columns help automate the process.

By carefully considering the context and using appropriate methods, the data cleaning process maintains the integrity and quality of the data.
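The same Excel/Power Query steps translate directly to pandas; a short sketch with hypothetical column names:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":   [25, None, 40, 31],
    "sales": [100, 0, 250, 0],        # 0 may be valid or may mean missing
    "city":  ["NY", "", "  ", "LA"],  # blanks and whitespace-only strings
})

# Blank values -> nulls (the pandas analogue of 'TRIM' + Replace Values)
df["city"] = df["city"].str.strip().replace("", np.nan)

# Null values -> impute with the median (one common choice)
df["age"] = df["age"].fillna(df["age"].median())

# 0 values: only treat as missing if the context says so, e.g.
# df["sales"] = df["sales"].replace(0, np.nan)
```

As in the answer above, the 0-handling line stays commented out until the business context confirms that 0 really means "missing".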

Hope it helps :)
Will LLMs always hallucinate?

As large language models (LLMs) become more powerful and pervasive, it's crucial that we understand their limitations.

A new paper argues that hallucinations - where the model generates false or nonsensical information - are not just occasional mistakes, but an inherent property of these systems.

While the idea of hallucinations as features isn't new, the researchers' explanation is.

They draw on computational theory and Gödel's incompleteness theorems to show that hallucinations are baked into the very structure of LLMs.

In essence, they argue that the process of training and using these models involves undecidable problems - meaning there will always be some inputs that cause the model to go off the rails.

This would have big implications. It suggests that no amount of architectural tweaks, data cleaning, or fact-checking can fully eliminate hallucinations.

So what does this mean in practice? For one, it highlights the importance of using LLMs carefully, with an understanding of their limitations.

It also suggests that research into making models more robust and understanding their failure modes is crucial.

No matter how impressive the results, LLMs are not oracles - they're tools with inherent flaws and biases.

LLM & Generative AI Resources: https://whatsapp.com/channel/0029VaoePz73bbV94yTh6V2E
A high-level overview of becoming a machine learning engineer
Machine learning algorithms
Preparing for a machine learning interview as a data analyst is a great step.

Here are some common machine learning interview questions :-

1. Explain the steps involved in a machine learning project lifecycle.

2. What is the difference between supervised and unsupervised learning? Give examples of each.

3. What evaluation metrics would you use to assess the performance of a regression model?

4. What is overfitting and how can you prevent it?

5. Describe the bias-variance tradeoff.

6. What is cross-validation, and why is it important in machine learning?

7. What are some feature selection techniques you are familiar with?

8. What are the assumptions of linear regression?

9. How does regularization help in linear models?

10. Explain the difference between classification and regression.

11. What are some common algorithms used for dimensionality reduction?

12. Describe how a decision tree works.

13. What are ensemble methods, and why are they useful?

14. How do you handle missing or corrupted data in a dataset?

15. What are the different kernels used in Support Vector Machines (SVM)?


These questions cover a range of fundamental concepts and techniques in machine learning that are important for a data analyst or data scientist role.
Good luck with your interview preparation!
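As a concrete illustration of question 6, a minimal cross-validation sketch with scikit-learn (dataset and model chosen only for the example):

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: train on 4 folds, score on the held-out fold,
# and repeat so every sample is used for validation exactly once.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())  # average accuracy across the 5 folds
```

Averaging over folds gives a more stable performance estimate than a single train/test split, which is the point of the question.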


Best Data Science & Machine Learning Resources: https://topmate.io/coding/914624

Like if you need similar content 😄👍
Free Session to learn Data Analytics, Data Science & AI
👇👇
https://tracking.acciojob.com/g/PUfdDxgHR

Register fast, only for first few users
🔗 Become a Machine Learning Expert in 7 Steps