AI and Machine Learning
90.3K subscribers
241 photos
66 videos
355 files
160 links
Learn Data Science, Data Analysis, Machine Learning, Artificial Intelligence, and Python with Tensorflow, Pandas & more!
Advanced_LLMs_with_Retrieval_Augmented_Generation_RAG:_Practical.zip
364.7 MB
📱Artificial intelligence
📱Advanced LLMs with Retrieval Augmented Generation (RAG): Practical Projects for AI Applications
🔆 Random Forest explained
💡 20 Concepts In LLMs
🔅 AI Projects with Python, TensorFlow, and NLTK

📝 Supercharge your technical know-how and start building AI projects using Python, TensorFlow, and NLTK.

🌐 Author: Dhhyey Desai
🔰 Level: Intermediate
Duration: 24m

📋 Topics: TensorFlow, Artificial Intelligence, NLTK

🔗 Join Artificial intelligence for more courses
🚀 TrajectoryCrafter (Moving-Camera Diffusion) is a new tool from Tencent for redirecting camera trajectories in monocular videos.

How the model works:
🌟 Initialization:
The model starts from an existing camera trajectory or even pure noise. This sets the initial state that the model gradually refines.

The model uses two types of input data simultaneously: rendered point clouds (3D representations of scenes) and source videos.

🌟 Diffusion process:
The model learns to "clean up" random noise step by step, turning it into a trajectory sequence. At each step it performs iterative refinement: the model predicts what a more realistic trajectory should look like, given conditions such as smoothness of motion and consistency of the scene.

Instead of relying only on videos shot from different angles, the authors built the training set by combining abundant monocular videos (shot with a regular camera) with a limited amount of high-quality multi-view video. This mixed-data strategy is enabled by a technique called "double reprojection", which helps the model generalize to different scenes.

🌟 Generating the final trajectory:
After a series of iterations, once the noise is removed, the model generates a new camera trajectory that satisfies the given conditions with high-quality visual dynamics.
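The loop described above can be sketched abstractly. This is a hypothetical toy version, not TrajectoryCrafter's actual code: start from pure noise and repeatedly step toward the model's prediction of a cleaner trajectory.

```python
import random

def denoise(predict, steps=50, dim=3):
    # Initialization: start from pure noise, as described above.
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for t in range(steps):
        # The "model" predicts a cleaner trajectory given the current state.
        target = predict(x, t)
        # Iterative refinement: step toward the prediction, with step size
        # growing as the remaining noise shrinks.
        alpha = 1.0 / (steps - t)
        x = [xi + alpha * (ti - xi) for xi, ti in zip(x, target)]
    return x

# Toy "model": always predicts the same smooth straight-line trajectory.
traj = denoise(lambda x, t: [0.0, 0.5, 1.0])
print([round(v, 3) for v in traj])  # converges toward [0.0, 0.5, 1.0]
```

Real diffusion samplers condition the prediction on the rendered point clouds and source video; here a fixed target stands in for that conditioning.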

Installation:
git clone --recursive https://github.com/TrajectoryCrafter/TrajectoryCrafter.git
cd TrajectoryCrafter


🖥 Github
🟡 Article
🟡 Project
🟡 Demo
🟡 Video
🔅 AI for Beginners: Inside Large Language Models

3 hours 📁 326 Lessons

📔 Understand how LLMs actually work under the hood, from scratch, with practical and fun lessons. No prior knowledge required!
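For a taste of what's under the hood: the core of every LLM layer is scaled dot-product attention. A minimal pure-Python sketch (illustrative only, not taken from the course):

```python
import math

def attention(q, k, v):
    # Scaled dot-product attention for a single query over a tiny sequence.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, key)) / math.sqrt(d) for key in k]
    # Softmax the scores into attention weights (max-subtracted for stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the weight-averaged mix of the value vectors.
    return [sum(w * val[i] for w, val in zip(weights, v)) for i in range(len(v[0]))]

out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
print([round(x, 3) for x in out])  # → [1.66, 2.66]
```

The query matches the first key more strongly, so the output leans toward the first value vector.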


🎙 Taught by: Scott Kerr

📤 Download All Courses
🔰 aiopandas is a lightweight patch for Pandas that adds native async support to its most popular data-processing methods: map, apply, applymap, aggregate, and transform.

It lets you pass async functions to these methods seamlessly: the library runs them asynchronously and caps the number of tasks executing at once via the max_parallel parameter.

Key features:

▪️ Easy integration: Use as a replacement for standard Pandas functions, but now with full support for async functions.
▪️ Controlled parallelism: Automatically execute your coroutines asynchronously, with the ability to limit the maximum number of parallel tasks (max_parallel). Ideal for managing the load on external services!
▪️ Flexible error handling: Built-in options for managing runtime errors: raise, ignore, or log.
▪️ Progress Indication: Built-in tqdm support for visually tracking the progress of long operations in real time.
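Under the hood, the idea is bounded-concurrency mapping. A minimal stdlib-only sketch of the concept (not aiopandas' actual implementation), using an asyncio.Semaphore the way max_parallel is described above:

```python
import asyncio

async def amap(func, items, max_parallel=4):
    # A semaphore caps how many coroutines run at once, like max_parallel.
    sem = asyncio.Semaphore(max_parallel)

    async def run(x):
        async with sem:
            return await func(x)

    # Results come back in input order, even though calls overlap.
    return await asyncio.gather(*(run(x) for x in items))

async def double(x):
    await asyncio.sleep(0)  # stand-in for an external service call
    return x * 2

results = asyncio.run(amap(double, [1, 2, 3], max_parallel=2))
print(results)  # → [2, 4, 6]
```

aiopandas applies the same pattern to the values of a Series or DataFrame instead of a plain list.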

🌐 Github : https://github.com/telekinesis-inc/aiopandas
🔅 Building Deep Learning Applications with Keras

📝 Get a thorough introduction to Keras, a versatile deep learning framework, and learn how to build, deploy, and monitor robust deep learning models.

🌐 Author: Isil Berkun
🔰 Level: Intermediate
Duration: 1h 50m

📋 Topics: Keras, Deep Learning, Application Development

🔗 Join Artificial intelligence for more courses
🔥 Voice mode + video chat mode are now available in chat.qwenlm.ai

Moreover, the Qwen team has open-sourced Qwen2.5-Omni-7B, a single omni-model that understands text, audio, images, and video.

They developed a "thinker-talker" architecture that lets the model think and talk simultaneously.

They promise to soon release open-source models with even larger parameter counts.

Simply top-notch; go run and test it.

🟢 Try it : https://chat.qwenlm.ai
🟢 Paper : https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf
🟢 Blog : https://qwenlm.github.io/blog/qwen2.5-omni
🟢 GitHub : https://github.com/QwenLM/Qwen2.5-Omni
🟢 Hugging Face : https://huggingface.co/Qwen/Qwen2.5-Omni-7B
🟢 ModelScope : https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B
🌟 ChatTTS — a generative text2speech model with an emphasis on realism

import ChatTTS
from IPython.display import Audio

# Load the pretrained ChatTTS models.
chat = ChatTTS.Chat()
chat.load_models()

texts = ["<PUT YOUR TEXT HERE>",]

# Synthesize speech and play the first result at 24 kHz.
wavs = chat.infer(texts, use_decoder=True)
Audio(wavs[0], rate=24_000, autoplay=True)


ChatTTS is a text-to-speech model designed specifically for conversational scenarios such as LLM assistants. It supports both English and Chinese.

🖥 GitHub
🤗 Play Hugging Face
🟡 ChatTTS Page