Data Science & Machine Learning – Telegram
Join this channel to learn data science, artificial intelligence, and machine learning with fun quizzes, interesting projects, and amazing resources for free

For collaborations: @love_data
What are precision, recall, and F1-score?

Precision and recall are classification evaluation metrics:
P = TP / (TP + FP) and R = TP / (TP + FN),

where TP is true positives, FP is false positives, and FN is false negatives.

A score of 1 is the best in both cases: precision of 1 means no false positives, and recall of 1 means no false negatives.

F1 combines precision and recall in one score as their harmonic mean:
F1 = 2 * P * R / (P + R).
F1 ranges from 0 to 1, with 1 being the best.
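A minimal sketch of these formulas in Python (assuming scikit-learn is installed; the labels below are made up for illustration):

```python
# Precision, recall, and F1 computed by hand and checked against scikit-learn.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # made-up ground-truth labels
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # made-up model predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)                            # P = TP / (TP + FP)
recall = tp / (tp + fn)                               # R = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)    # harmonic mean

print(precision, recall, f1)
print(precision_score(y_true, y_pred),
      recall_score(y_true, y_pred),
      f1_score(y_true, y_pred))
```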
What is unsupervised learning?

Unsupervised learning aims to detect patterns in the data where no labels are given.
Would you prefer a gradient boosting trees model or logistic regression when doing text classification with bag of words?

Usually logistic regression is better: bag of words creates a sparse matrix with a very large number of columns, and logistic regression trains much faster than gradient boosting trees on that kind of input.
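A rough sketch of the comparison (assuming scikit-learn is available; the four texts and labels are toy data, not from the channel):

```python
# Bag of words creates a wide, sparse matrix; logistic regression copes with that well,
# while gradient boosting slows down as the number of columns grows.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

texts = ["great movie", "terrible plot", "loved the acting", "boring and slow"]
labels = [1, 0, 1, 0]

X = CountVectorizer().fit_transform(texts)   # sparse matrix, one column per word
print(X.shape)                               # on real corpora: few rows, many columns

# Logistic regression trains directly on the sparse matrix and stays fast.
LogisticRegression(max_iter=1000).fit(X, labels)

# Gradient boosting evaluates many columns per split, which becomes slow
# as the vocabulary grows (densified here for simplicity).
GradientBoostingClassifier().fit(X.toarray(), labels)
```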
What is clustering? When do we need it?

Clustering algorithms group objects such that similar feature points are put into the same groups (clusters) and dissimilar feature points are put into different clusters. We need it when we want to discover structure in unlabeled data, for example for customer segmentation, anomaly detection, or grouping documents by topic.
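A minimal clustering sketch (assuming scikit-learn; the blobs are synthetic data used only to illustrate k-means):

```python
# Group synthetic 2-D points into three clusters with k-means.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)   # unlabeled points
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print(labels[:10])   # cluster index assigned to each of the first ten points
```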
What is bag of words? How can we use it for text classification?

Bag of Words is a representation of text that describes the occurrence of words within a document. The order or structure of the words is not considered. For text classification, we look at the histogram of the words within the text and consider each word count as a feature.
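A small sketch of the representation itself (assuming a recent scikit-learn; the two documents are made up):

```python
# Each column is a word from the vocabulary, each value is its count in the document.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog ate the cat"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())   # the vocabulary, i.e. the feature names
print(X.toarray())                          # word counts per document
```

These count vectors are then fed to a classifier such as logistic regression.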
Free Data Science courses from Udemy and Udacity
👇👇

Intro to Data Science

https://imp.i115008.net/rn2beD

Data Analysis and Visualization

https://imp.i115008.net/JrBjZR

Data Analysis with R by Facebook

https://imp.i115008.net/gbJr5r

Introduction to Data Science using Python

https://ern.li/OP/1qvkxbfaxqj

Intro to Data for Data Science

https://ern.li/OP/1qvkxbfbmf8

Data Science with Analogies, Algorithms and Solved Problems

https://ern.li/OP/1qvkxbfcehz

Introduction to Data Science for Complete Beginners

https://bit.ly/3sh4oPO

ENJOY LEARNING 👍👍
Data Science & Machine Learning pinned «Some helpful Data science projects for beginners https://www.kaggle.com/c/house-prices-advanced-regression-techniques https://www.kaggle.com/c/digit-recognizer https://www.kaggle.com/c/titanic BEST RESOURCES TO LEARN DATA SCIENCE AND MACHINE LEARNING FOR…»
Some interview questions related to data science

1- What is the difference between structured data and unstructured data?

2- What is multicollinearity, and how do you remove it?

3- Which algorithms would you use to find the most correlated features in a dataset?

4- Define entropy.

5- What is the workflow of principal component analysis?

6- What are the applications of principal component analysis other than dimensionality reduction?

7- What is a convolutional neural network? Explain how it works.
Fake_News_Detection_Machine_learning_project.rar
8.3 MB
Fake News Detection machine learning project with 92% accuracy
The archive contains the Jupyter notebook and the dataset.
dice_roll.py
445 B
🎲 Dice roll simulator GUI with Python in 2 minutes 😊
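The attached dice_roll.py is not reproduced here; a hypothetical minimal sketch of the rolling logic (without the GUI part) might look like this:

```python
# Hypothetical dice-roll sketch; the actual attached script may differ.
import random

def roll_dice(sides=6):
    """Return a random face value from a fair die with the given number of sides."""
    return random.randint(1, sides)

print(roll_dice())
```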
numpy.pdf
1.4 MB
Data Science NumPy cheat sheet
Dimensionality reduction techniques

Singular Value Decomposition (SVD)
Principal Component Analysis (PCA)
Linear Discriminant Analysis (LDA)
T-distributed Stochastic Neighbor Embedding (t-SNE)
Autoencoders
Fourier and Wavelet Transforms
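As a quick illustration, a PCA sketch on a built-in dataset (assuming scikit-learn; the digits data stands in for any high-dimensional features):

```python
# Reduce 64-dimensional digit images to 2 principal components.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)           # shape (1797, 64)
X_2d = PCA(n_components=2).fit_transform(X)   # shape (1797, 2)
print(X.shape, "->", X_2d.shape)
```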
What is the curse of dimensionality? Why do we care about it?

Data in only one dimension is relatively tightly packed. Adding a dimension stretches the points across that dimension, pushing them further apart. Additional dimensions spread the data even further, making high-dimensional data extremely sparse. We care about it because it is difficult for machine learning methods to work well in such sparse spaces.
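A small NumPy sketch of this effect (the point count and dimensions are arbitrary choices for illustration):

```python
# Average pairwise distance between random points in the unit cube grows with the
# number of dimensions, so a fixed number of points covers the space ever more sparsely.
import numpy as np

rng = np.random.default_rng(0)
for d in (1, 10, 100, 1000):
    points = rng.random((100, d))                    # 100 random points in d dimensions
    diffs = points[:, None, :] - points[None, :, :]  # pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))       # pairwise Euclidean distances
    print(d, round(float(dists.mean()), 2))
```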
K-means vs DBSCAN ML algorithms

DBSCAN is more robust to noise.
DBSCAN is better when the number of clusters is difficult to guess.
K-means has lower complexity, i.e. it is much faster, especially with a larger number of points.
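A sketch of the difference (assuming scikit-learn; the two-moons data and the eps value are illustrative choices):

```python
# k-means needs the number of clusters up front; DBSCAN infers clusters from density.
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

print(set(km_labels))   # exactly the two clusters we asked for
print(set(db_labels))   # clusters discovered from density; -1 would mark noise points
```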
Start working on a project if you are a beginner and want to grow your career as a data scientist.
You will learn much more as you practice and work on projects on your own.
You can find datasets in this channel, or go to Kaggle, pick any dataset, and just work on it.
Learning concepts is fine, but most of the learning comes from projects.
I know that might feel boring at first, but as you move forward it becomes interesting.
👉The Ultimate Guide to the Pandas Library for Data Science in Python
👇👇

https://www.freecodecamp.org/news/the-ultimate-guide-to-the-pandas-library-for-data-science-in-python/amp/

A Visual Intro to NumPy and Data Representation
Link : 👇👇
https://jalammar.github.io/visual-numpy/

Matplotlib Cheatsheet 👇👇

https://github.com/rougier/matplotlib-cheatsheet

SQL Cheatsheet 👇👇

https://websitesetup.org/sql-cheat-sheet/
Seeing Theory : A visual introduction to probability and statistics

Link :👇👇
https://seeing-theory.brown.edu/

“The Projects You Should Do to Get a Data Science Job” by Ken Jee
👇👇
https://link.medium.com/Q2DnxSGRO6