Mike's ML Forge
Welcome to this channel! Here we're diving deep into the world of Data Science and ML, along with a bit of my personal journey toward becoming the person who says "I designed the board, collected the data, trained the model, and deployed it."
Dealing with Missing Data in Python

Missing data? No problem
I explore 2 powerful methods to handle it:

1️⃣ Filling Missing Data with NumPy/Pandas
✔️ Use .fillna() to replace missing values.

    Replace categorical values with "missing".
    Replace numerical values with a constant or the column's mean.
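Here's a minimal sketch of the .fillna() approach, using a made-up toy DataFrame (the column names "Make" and "Odometer" are just placeholders):

```python
import numpy as np
import pandas as pd

# Made-up toy DataFrame with missing values
df = pd.DataFrame({
    "Make": ["Toyota", np.nan, "Honda", "BMW"],        # categorical
    "Odometer": [35000.0, 120000.0, np.nan, 62000.0],  # numerical
})

# Categorical column: replace missing values with the string "missing"
df["Make"] = df["Make"].fillna("missing")

# Numerical column: replace missing values with the column's mean
df["Odometer"] = df["Odometer"].fillna(df["Odometer"].mean())

print(df.isna().sum())  # both columns now have zero missing values
```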

2️⃣ Filling Missing Data with Scikit-Learn
✔️ Use SimpleImputer for flexible, scalable imputation.

    Define strategies like constant (e.g., "missing", 4) or mean.
    Handle categorical, numerical, and mixed datasets easily.

🔗 Combine Scikit-learn with ColumnTransformer to handle multi-type columns in one step.
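And a minimal sketch of the SimpleImputer + ColumnTransformer combo, again on a made-up mixed-type DataFrame (column names and fill values are just for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer

# Made-up mixed-type DataFrame with missing values
df = pd.DataFrame({
    "Make": ["Toyota", np.nan, "Honda", "BMW"],
    "Doors": [4, np.nan, 4, np.nan],
    "Odometer": [35000.0, 120000.0, np.nan, 62000.0],
})

# One imputer per column type: constant "missing", constant 4, and the mean
cat_imputer = SimpleImputer(strategy="constant", fill_value="missing")
door_imputer = SimpleImputer(strategy="constant", fill_value=4)
num_imputer = SimpleImputer(strategy="mean")

# ColumnTransformer applies the right imputer to the right columns in one step
imputer = ColumnTransformer([
    ("cat", cat_imputer, ["Make"]),
    ("doors", door_imputer, ["Doors"]),
    ("num", num_imputer, ["Odometer"]),
])

filled = imputer.fit_transform(df)
filled_df = pd.DataFrame(filled, columns=["Make", "Doors", "Odometer"])
print(filled_df.isna().sum())  # no missing values left
```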
📊 Master these methods and make your data analysis more robust!

#Python #DataScience #ScikitLearn #NumPy
3
Spent 4+ hours today and it feels like it ain't enough. Gn, I'm going to improve that for sure 🙌🏽
👏3
You can check it on the sklearn website
This is Kendrick vs Drake for nerds ngl
😁7
Forwarded from Pythonate
😂😭
😁4
Mike's ML Forge
You can check it on the sklearn website
I can't just see this and keep my mouth shut, so here's the thing:
How to Pick the Right Machine Learning Algorithm

One of the hardest parts of machine learning is choosing the right algorithm for the job. Different algorithms are suited for different types of problems. Here’s a simple way to break it down:

Step 1: What kind of problem are you solving?

Everything starts with understanding what you want to predict or classify. Your problem will fall into one of these categories:

1. Classification – When you need to categorize things (e.g., "Is this email spam or not?").

2. Regression – When you need to predict a number (e.g., "How much will a house cost?").

3. Clustering – When you want the computer to group things automatically without labels (e.g., "Group customers by similar behavior").

4. Dimensionality Reduction – When you have too much data and need to simplify it while keeping the important parts (e.g., "Compress hundreds of features into a few that keep most of the information").
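As a rough illustration only, here's how those four problem types map to common scikit-learn starting points (these are typical defaults I'd reach for, not the one "right" algorithm for every dataset):

```python
# Rough illustration: common scikit-learn starting points per problem type.
# These are typical defaults, not the one "right" algorithm for every dataset.
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

starting_points = {
    "classification": RandomForestClassifier(),       # e.g. spam or not spam
    "regression": RandomForestRegressor(),            # e.g. house price
    "clustering": KMeans(n_clusters=3),               # e.g. customer segments
    "dimensionality_reduction": PCA(n_components=2),  # e.g. compress many features
}
```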
If we look at this simple example:

1. Data Preparation 
   - disease.drop("target", axis=1): Extracts feature variables (X). 
   - disease["target"]: Extracts the target variable (y). 
   - train_test_split(x, y, train_size=0.2): Splits the data into training and test sets, with 20% allocated for training.

2. Model Training and Evaluation 
   - Linear Support Vector Classifier (LinearSVC) 
     - LinearSVC() is initialized and trained using fit(x_train, y_train).
     - Model accuracy on the test set: 76.95% (0.7695).

   - Random Forest Classifier 
     - RandomForestClassifier(n_estimators=100): A Random Forest model with 100 decision trees. 
     - Model accuracy on the test set: 83.54% (0.8354), which is better than LinearSVC.

### Observations:
- Random Forest performs better than LinearSVC on this dataset.
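A sketch reconstructing that workflow (the CSV file name is a guess; the "target" column and the split mirror the steps above). The reported accuracies of 76.95% and 83.54% came from one particular split, so your numbers will differ:

```python
# Sketch of the workflow described above. The CSV file name is an assumption;
# the "target" column and the split come from the post.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

disease = pd.read_csv("heart-disease.csv")  # assumed file name

# 1. Data preparation
x = disease.drop("target", axis=1)  # feature variables (X)
y = disease["target"]               # target variable (y)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, train_size=0.2            # 20% allocated for training, as in the post
)

# 2. Model training and evaluation
svc = LinearSVC()
svc.fit(x_train, y_train)
print("LinearSVC accuracy:", svc.score(x_test, y_test))        # post reports ~0.7695

forest = RandomForestClassifier(n_estimators=100)
forest.fit(x_train, y_train)
print("RandomForest accuracy:", forest.score(x_test, y_test))  # post reports ~0.8354
```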
👍1
Forwarded from Dagmawi Babi
Lost a lot along the way but Jesus found me in the process.
6
😁9
📌 Model Comparison & Evaluation in Machine Learning

When building classification models, evaluating them properly ensures the best performance. Here’s how to do it effectively: 

🔹 Key Evaluation Metrics 
Accuracy – Measures overall correctness but isn’t ideal for imbalanced datasets. 
AUC-ROC – Higher AUC means better class separation. 
Confusion Matrix – Shows the breakdown of correct & incorrect predictions. 
Classification Report – Includes Precision, Recall, and F1-score for deeper insights. 
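A tiny illustration of these four metrics with hand-made labels (the numbers are invented, just to show the calls):

```python
# Tiny illustration of the four metrics above, using hand-made labels.
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, roc_auc_score)

y_true  = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred  = [0, 1, 0, 0, 1, 1, 1, 1]                    # hard class predictions
y_proba = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95]   # predicted P(class = 1)

print(accuracy_score(y_true, y_pred))         # overall correctness
print(roc_auc_score(y_true, y_proba))         # class separation, needs scores/probabilities
print(confusion_matrix(y_true, y_pred))       # correct vs. incorrect breakdown
print(classification_report(y_true, y_pred))  # precision, recall, F1 per class
```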

🔹 Comparing Multiple Models 
1️⃣ Train different models (Logistic Regression, SVM, Random Forest, etc.). 
2️⃣ Use Cross-Validation to get reliable performance scores. 
3️⃣ Optimize with Hyperparameter Tuning (GridSearchCV, RandomizedSearchCV).
4️⃣ Compare models using AUC-ROC, F1-score, or accuracy for better decision-making. 
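A minimal sketch of those four steps on a built-in toy dataset (the models and the parameter grid are illustrative picks, not recommendations):

```python
# Minimal sketch of the 4 steps above on a built-in toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# 1️⃣ + 2️⃣ Train different models and cross-validate them
models = {
    "LogisticRegression": LogisticRegression(max_iter=5000),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")

# 3️⃣ Hyperparameter tuning for one candidate (grid is illustrative)
grid = GridSearchCV(
    RandomForestClassifier(),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
    scoring="roc_auc",
    cv=5,
)
grid.fit(X, y)

# 4️⃣ Compare on the chosen metric
print("Best params:", grid.best_params_, "| best ROC AUC:", grid.best_score_)
```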


#MachineLearning #AI #ModelEvaluation #DataScience #AUCROC #ConfusionMatrix 🚀
Key Takeaway: No single metric or model is best—always compare multiple models and use multiple evaluation metrics for better insights!
Forwarded from Tech Nerd (Tech Nerd)
Can you feel the Aura

@selfmadecoder
🔥4
Ever trained a machine learning model and wondered… how do I know if it's actually good? Well, that's where model evaluation comes in. We already saw the estimator score method, so today we'll see the last two of them! Let's break it down, super simple.
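A quick sketch of the three routes side by side, assuming "the last two" refer to the scoring parameter and the sklearn.metrics functions (that's my reading; a generated toy dataset stands in for real data):

```python
# Quick sketch of the three evaluation routes on a generated toy dataset.
# Assumption: "the last two" means the `scoring` parameter and sklearn.metrics functions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier().fit(X_train, y_train)

print(clf.score(X_test, y_test))                              # 1) estimator score method (already covered)
print(cross_val_score(clf, X, y, scoring="f1", cv=5).mean())  # 2) the `scoring` parameter
print(f1_score(y_test, clf.predict(X_test)))                  # 3) a metric function
```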