Mike's ML Forge
252 subscribers
Welcome! In this channel, we're diving deep into the world of Data Science and ML, along with a bit of my personal journey of becoming a person who says, "I designed the board, collected the data, trained the model, and deployed it."
Forwarded from Oops, My Brain Did That (Mike)
My personality was basically shaped by Monica's OCD, Chandler's sarcasm, and Ross's awkwardness
heyy
🚀 Ever wondered why your machine learning model struggles—even when your data looks clean?
It might be because your features are *speaking different languages*

Some values are in thousands, others in decimals—and your model? It's just trying to make sense of the chaos.
That's where feature scaling becomes a game-changer. Let's break it down. 👇
🎯 What is Feature Scaling in Machine Learning?

Imagine you're training a model and one feature is in kilometers (like 1000s) while another is in centimeters (like 1s). Models trained with gradient descent, or distance-based algorithms like KNN, will get confused, treating the big numbers as if they matter more.

That’s where feature scaling steps in—bringing all features to the same level.
There are two common ways to get all attributes onto the same scale: min-max scaling and standardization.

Min-max scaling (many people call this normalization) is quite simple: values are shifted and rescaled so that they end up ranging from 0 to 1. We do this by subtracting the min value and dividing by the max minus the min.

Standardization is quite different: first it subtracts the mean value (so standardized values always have a zero mean), and then it divides by the standard deviation so that the resulting distribution has unit variance. Unlike min-max scaling, standardization does not bound values to a specific range, which may be a problem for some algorithms (e.g., neural networks often expect an input value ranging from 0 to 1).
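Here's a minimal sketch of both approaches using scikit-learn's MinMaxScaler and StandardScaler on a made-up two-feature matrix (the numbers are purely illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Toy feature matrix: one column in the thousands, one around 1
# (made-up numbers, just to show the scale mismatch).
X = np.array([[1000.0, 1.2],
              [2500.0, 0.8],
              [4000.0, 1.5]])

# Min-max scaling: (x - min) / (max - min) squashes each column into [0, 1].
X_minmax = MinMaxScaler().fit_transform(X)

# Standardization: (x - mean) / std gives each column zero mean, unit variance.
X_standard = StandardScaler().fit_transform(X)

print(X_minmax)
print(X_standard)
```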
Just opened Instagram after a while and this was the first reel that showed up, and jeez, I really needed this
Forwarded from The Blogrammer
Feels like a personal attack, but yes
Forwarded from Tech Nerd (Tech Nerd)
What’s wrong with this country, fr? I was genuinely shocked when I heard that @A2SVOfficial has run out of funding. Emre (the founder) is doing everything he can to keep it running, but no one is willing to help. Now they’re moving to Rwanda. It’s honestly so sad and disappointing. This guy is changing lives and has raised the bar for so many developers’ expectations and ambitions.

@selfmadecoder
Hey 👋
It is because of the LORD's mercy that we are not consumed, for His compassions never fail.
They are new every morning; great is Your faithfulness.

Lamentations 3:23
So today let's talk about one of the most important real-world data science questions:

"How do I know which features matter and which ones don't?"

Let's break this down step by step, nice and clear.
🔗 1. How to Find Correlation Between Features

Correlation shows how strongly two variables move together.
In pandas, the easiest way:
```python
import pandas as pd

# Load your dataset
df = pd.read_csv("your_data.csv")

# Get the correlation matrix (numeric columns only)
correlation_matrix = df.corr(numeric_only=True)

# View it
print(correlation_matrix)
```

📊 Visualize it:

```python
import seaborn as sns
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 8))
sns.heatmap(df.corr(numeric_only=True), annot=True, cmap='coolwarm')
plt.title("Correlation Matrix")
plt.show()
```

Values close to 1 = strong positive correlation
Values close to -1 = strong negative correlation
Close to 0 = no linear relationship
2. How to Know Which Columns Are Useful?
Here are the main ways to decide which features matter:

A. Correlation with Target Variable (for regression)

Check how each feature correlates with the target (label).

```python
df.corr(numeric_only=True)['target_column'].sort_values(ascending=False)
```

→ The ones with strong correlation (positive or negative) are often useful.
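As a quick sketch of how you might act on this (assuming the df from the earlier snippet and a hypothetical 'target_column'; the 0.3 cutoff is just illustrative, not a universal rule):

```python
# Correlation of every numeric feature with the target.
corr_with_target = (
    df.corr(numeric_only=True)['target_column']
      .drop('target_column')  # don't compare the target with itself
)

# Shortlist features whose |correlation| exceeds an arbitrary cutoff.
strong = corr_with_target[corr_with_target.abs() > 0.3]
print(strong.sort_values(ascending=False))
```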
I just packed my bag, hyped myself up to study my academic courses, and went to the library feeling like a scholar. Man, guess what?
I forgot everything: notebooks, even my pen... just vibes.
So there I was, sitting like a monk with no scrolls, and then I started learning ML out of spite 😂

Sometimes forgetfulness leads to unexpected learning paths. Let's roll with it 😁
#MLChoseMe
lemme pretend this was the plan😁
Let's do some simple questions related to "preparing data for training"
Forwarded from Negus channel 🦅 (NEGUS)
profound