AI and Machine Learning
91.3K subscribers
266 photos
69 videos
363 files
179 links
Learn Data Science, Data Analysis, Machine Learning, Artificial Intelligence, and Python with TensorFlow, Pandas & more!
🤝 Build AI Model From Scratch
🌟 DeepSearcher: AI Harvester for Your Data.

The project combines LLMs and vector databases to perform search, evaluation, and reasoning over user-provided data (files, text, and other sources).

The developers position it as a tool for enterprise knowledge management, intelligent QA systems, and information-retrieval scenarios.

DeepSearcher can pull in information from the Internet when needed. It is compatible with the Milvus vector database and its managed service Zilliz Cloud (via Pymilvus), and supports OpenAI and VoyageAI embeddings. DeepSeek and OpenAI LLMs can be connected via API, either directly or through TogetherAI and SiliconFlow.
Loading local files is supported, as are the web crawlers FireCrawl, Crawl4AI, and Jina Reader.

The developers' immediate plans include adding a web clipper feature, expanding the list of supported vector databases, and providing a RESTful API.

▶️ Local installation and launch:

# Clone the repository
git clone https://github.com/zilliztech/deep-searcher.git


# Create a Python venv
python3 -m venv .venv
source .venv/bin/activate


# Install dependencies
cd deep-searcher
pip install -e .


# Quick start demo
from deepsearcher.configuration import Configuration, init_config
from deepsearcher.online_query import query

config = Configuration()


# Customize your config here
config.set_provider_config("llm", "OpenAI", {"model": "gpt-4o-mini"})
init_config(config=config)


# Load your local data
from deepsearcher.offline_loading import load_from_local_files
load_from_local_files(paths_or_directory=your_local_path)


# (Optional) Load from web crawling (FIRECRAWL_API_KEY env variable required)
from deepsearcher.offline_loading import load_from_website
load_from_website(urls=website_url)


# Query
result = query("Write a report about xxx.") # Your question here


🌐 GitHub: https://github.com/zilliztech/deep-searcher
🔅 Building Blocks for Deep Learning in the Wolfram Language

📝 Learn how to construct neural networks in the Wolfram Language.

🌐 Author: Wolfram Research
🔰 Level: Advanced
Duration: 54m

📋 Topics: Wolfram Language, Deep Learning, Artificial Intelligence

🔗 Join Artificial intelligence for more courses
Building Blocks for Deep Learning in the Wolfram Language.zip
113.5 MB
📱Artificial intelligence
📱Building Blocks for Deep Learning in the Wolfram Language
If you're getting started with AI and AI agents, save these key terms related to AI agents...
Resume keywords for a data scientist role, explained in points:

1. Data Analysis:
   - Proficient in extracting, cleaning, and analyzing data to derive insights.
   - Skilled in using statistical methods and machine learning algorithms for data analysis.
   - Experience with tools such as Python, R, or SQL for data manipulation and analysis.

2. Machine Learning:
   - Strong understanding of machine learning techniques such as regression, classification, clustering, and neural networks.
   - Experience in model development, evaluation, and deployment.
   - Familiarity with libraries like TensorFlow, scikit-learn, or PyTorch for implementing machine learning models.

3. Data Visualization:
   - Ability to present complex data in a clear and understandable manner through visualizations.
   - Proficiency in tools like Matplotlib, Seaborn, or Tableau for creating insightful graphs and charts.
   - Understanding of best practices in data visualization for effective communication of findings.

4. Big Data:
   - Experience working with large datasets using technologies like Hadoop, Spark, or Apache Flink.
   - Knowledge of distributed computing principles and tools for processing and analyzing big data.
   - Ability to optimize algorithms and processes for scalability and performance.

5. Problem-Solving:
   - Strong analytical and problem-solving skills to tackle complex data-related challenges.
   - Ability to formulate hypotheses, design experiments, and iterate on solutions.
   - Aptitude for identifying opportunities for leveraging data to drive business outcomes and decision-making.
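The data-analysis and machine-learning skills above can be illustrated with a minimal sketch: pandas for cleaning and scikit-learn for a simple regression. The toy dataset here is invented for illustration.

```python
# A minimal sketch of the analysis-to-model workflow described above:
# clean data with pandas, then fit a regression with scikit-learn.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy dataset with a missing value to clean
df = pd.DataFrame({
    "hours": [1, 2, 3, 4, 5, None],
    "score": [52, 55, 61, 64, 70, 73],
})
df = df.dropna()  # data cleaning: drop incomplete rows

X, y = df[["hours"]], df["score"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
print(f"slope: {model.coef_[0]:.2f}")  # positive: more hours, higher score
```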


Resume keywords for a data analyst role:

1. SQL (Structured Query Language):
   - SQL is a programming language used for managing and querying relational databases.
   - Data analysts often use SQL to extract, manipulate, and analyze data stored in databases, making it a fundamental skill for the role.

2. Python/R:
   - Python and R are popular programming languages used for data analysis and statistical computing.
   - Proficiency in Python or R allows data analysts to perform various tasks such as data cleaning, modeling, visualization, and machine learning.

3. Data Visualization:
   - Data visualization involves presenting data in graphical or visual formats to communicate insights effectively.
   - Data analysts use tools like Tableau, Power BI, or Python libraries like Matplotlib and Seaborn to create visualizations that help stakeholders understand complex data patterns and trends.

4. Statistical Analysis:
   - Statistical analysis involves applying statistical methods to analyze and interpret data.
   - Data analysts use statistical techniques to uncover relationships, trends, and patterns in data, providing valuable insights for decision-making.

5. Data-driven Decision Making:
   - Data-driven decision making is the process of making decisions based on data analysis and evidence rather than intuition or gut feelings.
   - Data analysts play a crucial role in helping organizations make informed decisions by analyzing data and providing actionable insights that drive business strategies and operations.
🧠 Machine Learning Mindmap
🔅 Hugging Face Transformers: Introduction to Pretrained Models

📝 Learn how to build natural language processing (NLP) applications with pretrained transformers in Hugging Face, the popular machine learning platform.

🌐 Author: Kumaran Ponnambalam
🔰 Level: Advanced
Duration: 54m

📋 Topics: Hugging Face Products, Natural Language Processing, Transformers

🔗 Join Artificial intelligence for more courses
Hugging Face Transformers: Introduction to Pretrained Models.zip
107.4 MB
📱Artificial intelligence
📱Hugging Face Transformers: Introduction to Pretrained Models
📌 Llama3 from scratch: extended version

The "Deepdive Llama3 from scratch" project is an extended fork of the step-by-step guide repository for building Llama 3 from scratch.

The original project has been reworked, updated, and optimized to help readers understand and master the implementation principles and detailed reasoning behind the Llama 3 model.

▶️ Changes and improvements in this fork:

🟢 The order of the material has been changed and the structure adjusted to make learning more transparent and the code easier to follow step by step;

🟢 Added a large number of detailed annotations to the code;

🟢 The changes in matrix dimensions at each stage of the calculation are fully annotated;

🟢 Detailed explanations of the principles have been added to fully understand the design concept of the model.

🟢 An additional chapter on the KV-cache has been added, covering its basic concepts, how it works, and how it is applied within the attention mechanism.
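The KV-cache idea that the extra chapter covers can be sketched in a few lines of NumPy: during autoregressive decoding, keys and values for past tokens are computed once, cached, and reused, so each step only projects the newest token. This is a generic toy illustration, not code from the repository.

```python
# Toy single-head attention with a KV-cache.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # head dimension
Wk, Wv, Wq = (rng.normal(size=(d, d)) for _ in range(3))

k_cache, v_cache = [], []

def decode_step(x):
    """Attend the new token x over all cached keys/values."""
    k_cache.append(x @ Wk)              # K, V computed only for the new token
    v_cache.append(x @ Wv)
    K, V = np.stack(k_cache), np.stack(v_cache)
    q = x @ Wq
    scores = K @ q / np.sqrt(d)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                  # softmax over all past positions
    return attn @ V                     # weighted sum of cached values

for _ in range(4):
    out = decode_step(rng.normal(size=d))

print(len(k_cache))  # one cached K vector per generated token → 4
```

Without the cache, every step would recompute K and V for the entire prefix; with it, per-step cost grows linearly rather than quadratically in sequence length.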


📌 Licensing: MIT License.


🔜 Repository on GitHub
🌟 olmOCR: a tool for processing PDF documents.

olmOCR is a project designed to convert PDF files and document images into structured Markdown text. It can handle equations, tables, and handwritten text, preserving the correct reading order even in the most complex multi-column layouts.

olmOCR is trained with heuristics to handle common parsing and metadata errors, and it supports SGLang and vLLM, scaling from one to hundreds of GPUs, which makes it well suited to large-scale tasks.

The key advantage of olmOCR is its cost-effectiveness. Processing 1 million PDF pages will cost only $190 (with GPU rental), which is about 1/32 of the cost of using the GPT-4o API for the same volume.

The development team created a method called "document anchoring" to improve the quality of the extracted text. It uses text and metadata from the PDF itself to improve processing accuracy: image regions and text blocks are extracted, concatenated, and inserted into the model prompt. When the VLM is asked for a plain-text version of the document, this "anchored" text is used alongside the rasterized page image.
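The anchoring idea can be sketched as follows: text blocks extracted from the PDF, together with their page coordinates, are serialized into the prompt next to the page image. The helper name and prompt format below are illustrative, not olmOCR's actual API.

```python
# Toy illustration of "document anchoring": serialize PDF text blocks
# with their coordinates so the VLM can align them against the page image.
blocks = [
    {"x": 72, "y": 700, "text": "Quarterly Report"},
    {"x": 72, "y": 660, "text": "Revenue grew 12% year over year."},
]

def build_anchored_prompt(blocks):
    # One line per block: "[x,y] text", followed by the task instruction.
    anchors = "\n".join(f"[{b['x']},{b['y']}] {b['text']}" for b in blocks)
    return f"Page text anchors:\n{anchors}\n\nReturn the page as Markdown."

prompt = build_anchored_prompt(blocks)
print(prompt.splitlines()[1])  # → [72,700] Quarterly Report
```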

In tests, olmOCR showed high results compared to Marker, MinerU and GOT-OCR 2.0. During testing, olmOCR was preferred in 61.3% of cases against Marker, in 58.6% against GOT-OCR and in 71.4% against MinerU.

▶️ olmOCR release:

🟢 The olmOCR-7B-0225-preview model: Qwen2-VL-7B-Instruct fine-tuned on the olmOCR-mix-0225 dataset;

🟢 The olmOCR-mix-0225 dataset: over 250,000 pages of public-domain digital books and documents, recognized with gpt-4o-2024-08-06 and a special prompt strategy that preserves each page's full digital content;

🟢 Code for inference and training.


▶️ Recommended environment for inference:

🟠 an NVIDIA GPU (RTX 4090 or better)
🟠 30 GB of free SSD/HDD space
🟠 the poppler-utils package installed
🟠 sglang with flashinfer for GPU inference

▶️ Local installation and launch:

# Install dependencies
sudo apt-get update
sudo apt-get install poppler-utils ttf-mscorefonts-installer msttcorefonts fonts-crosextra-caladea fonts-crosextra-carlito gsfonts lcdf-typetools

# Set up a conda env
conda create -n olmocr python=3.11
conda activate olmocr

git clone https://github.com/allenai/olmocr.git
cd olmocr
pip install -e .

# Convert a Single PDF
python -m olmocr.pipeline ./localworkspace --pdfs tests/gnarly_pdfs/test.pdf

# Convert Multiple PDFs
python -m olmocr.pipeline ./localworkspace --pdfs tests/gnarly_pdfs/*.pdf


📌 Licensing: Apache 2.0 License.


🟡 Article
🟡 Demo
🟡 Model
🟡 Arxiv
🟡 Discord Community
🖥 GitHub
Basic skills needed for an AI engineer

1. Programming Skills (Essential)
Learn Python (most widely used in AI).
Basics of libraries like NumPy, Pandas (for data handling).
Understanding of loops, functions, and OOP concepts.

2. Mathematics & Statistics (Basic Level)
Linear Algebra (Vectors, Matrices, Dot Product).
Probability & Statistics (Mean, Variance, Standard Deviation).
Basic Calculus (Derivatives, Integrals – useful for ML models)

3. Machine Learning Fundamentals
Understand what Supervised & Unsupervised Learning are.
Learn about Regression, Classification, and Clustering.
Introduction to Neural Networks and Deep Learning.

4. Data Handling & Processing
How to collect, clean, and process data for AI models.
Using Pandas & NumPy to manipulate datasets.

5. AI Libraries & Frameworks
Learn Scikit-learn for ML models.
Introduction to TensorFlow or PyTorch for Deep Learning.
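The ML-fundamentals and scikit-learn points above can be combined into one minimal sketch: a supervised classification model trained on scikit-learn's bundled iris dataset.

```python
# A minimal supervised-classification example with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a classifier and evaluate on held-out data
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The same fit/score pattern carries over to the regression and clustering estimators mentioned above.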
🔅 Complete Guide to NLP with R

📝 Find out how to use the R programming language to implement natural language processing (NLP) algorithms.

🌐 Author: Mark Niemann-Ross
🔰 Level: Advanced
Duration: 5h 4m

📋 Topics: Natural Language Processing, R

🔗 Join Artificial intelligence for more courses