Data science/ML/AI
Data science and machine learning hub

Python, SQL, stats, ML, deep learning, projects, PDFs, roadmaps and AI resources.

For beginners, data scientists and ML engineers
👉 https://rebrand.ly/bigdatachannels

DMCA: @disclosure_bds
Contact: @mldatascientist
Data Drift: The Reason Good Models Go Bad

You built a model that performed amazingly last month.
Now? Accuracy tanked. The confusion matrix looks like a crime scene.

Welcome to Data Drift. The silent model killer.

📉 What Is Data Drift?

It’s when the data your model sees today is different from the data it was trained on.

Imagine you trained a model on pre-COVID shopping data, then tried to predict online purchases in 2021.
People’s behavior changed. Your model didn’t.

That’s drift. Reality shifted, but your math stayed still.

🧠 The Core Types

➡️ Covariate Drift: Input features change (e.g., user age distribution shifts).
➡️ Prior Drift: The target variable’s distribution changes (e.g., fewer defaults now).
➡️ Concept Drift: The relationship between input and output changes entirely.
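
In distribution terms: covariate drift is a change in P(X), prior drift a change in P(y), and concept drift a change in P(y | X).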

The last one is deadly: your model’s logic literally stops making sense.

🚨 Why It’s Dangerous

Models decay quietly.
By the time you notice lower performance, the damage (business or otherwise) is already done.

That’s why top teams monitor models like live systems, not static code.

🧩 The Fix

1. Track feature distributions over time (use the KS test, PSI, or histograms; see the sketch after this list).
2. Monitor prediction confidence — sudden uncertainty = red flag.
3. Retrain models periodically with fresh data.
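
A minimal sketch of step 1, assuming you hold the same feature from training time and from today as two NumPy arrays; the psi() helper and the 0.2 alarm threshold are common conventions, not something prescribed here:

import numpy as np
from scipy.stats import ks_2samp

def psi(expected, actual, bins=10):
    # Population Stability Index over shared bins (built from the training data).
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

train_age = np.random.normal(35, 8, 10_000)  # user age as the model saw it (simulated)
live_age = np.random.normal(42, 8, 10_000)   # user age today (simulated shift)

stat, p = ks_2samp(train_age, live_age)
print(f"KS p-value: {p:.3g}")                  # tiny p-value => distributions differ
print(f"PSI: {psi(train_age, live_age):.3f}")  # PSI > 0.2 is a common drift alarm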

AI isn’t “build once.” It’s “maintain forever.”

A model is only as good as the world it was trained in,
and the world never stops changing.
Phases To Master Agentic AI
📚 Data Science Riddle

You're building a chatbot, but it gives generic answers. What's the root issue?
Anonymous Quiz
Model is too deep (8%)
Training data lacks context (68%)
Wrong loss function (9%)
Poor tokenization (15%)
Cheatsheet: Imbalanced Data In Classification
The Data Analyst Cheatsheet
📚 Data Science Riddle

Model accuracy improves after dropping half the features. Why?
Anonymous Quiz
Model became smaller (11%)
Overfitting reduced (72%)
Data size shrank (11%)
Training faster (7%)
Understanding the Forecast Statistics and Four Moments (4P).pdf
181.8 KB
Statistical Moments (M1, M2) for Data Analysis

Here are 5 curated PDFs diving into the mean (M1), variance (M2), and their applications in crafting research questions and sourcing data.
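
A tiny sketch of the first four moments in code (scipy.stats); the sample numbers below are invented for illustration:

import numpy as np
from scipy.stats import skew, kurtosis

data = np.array([12.1, 9.8, 11.4, 10.6, 13.2, 8.9, 10.1, 11.7])  # toy sample

print("M1 (mean):    ", np.mean(data))         # central tendency
print("M2 (variance):", np.var(data, ddof=1))  # spread (sample variance)
print("M3 (skewness):", skew(data))            # asymmetry
print("M4 (kurtosis):", kurtosis(data))        # tail weight (excess kurtosis)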

A channel member requested resources on this topic and we delivered.

If you have a topic you want resources on, let us know and we’ll make it happen!

@datascience_bds
Excel vs SQL vs Python
Basic SQL Commands
📚 Data Science Riddle

Why do we use Batch Normalization?
Anonymous Quiz
Speeds up training (28%)
Prevents overfitting (45%)
Adds non-linearity (9%)
Reduces dataset size (18%)
LLM Cheatsheet
📚 Data Science Riddle

Your object detection model misses small objects. Easiest fix?
Anonymous Quiz
Use larger input images (21%)
Add more classes (34%)
Reduce learning rate (30%)
Train longer (15%)
🤖 AI that creates AI: ASI-ARCH finds 106 new SOTA architectures

ASI-ARCH is an experimental ASI that autonomously researches and designs neural nets: it hypothesizes, codes, trains, and tests models.

💡 Scale:
1,773 experiments → 20,000+ GPU-hours.
Stage 1 (20M params, 1B tokens): 1,350 candidates beat DeltaNet.
Stage 2 (340M params): 400 models → 106 SOTA winners.
Top 5 trained on 15B tokens vs Mamba2 & Gated DeltaNet.

📊 Results:
PathGateFusionNet: 48.51 avg (Mamba2: 47.84, Gated DeltaNet: 47.32).
BoolQ: 60.58 vs 60.12 (Gated DeltaNet).
Consistent gains across tasks.

🔍 Insights:
Prefers proven tools (gating, convs), refines them iteratively.
Ideas come from: 51.7% literature, 38.2% self-analysis, 10.1% originality.
Among the SOTA winners: self-analysis ↑ to 44.8%, literature ↓ to 48.6%.

@datascience_bds