map, apply, applymap, aggregate and transform: the library lets you pass async functions to all of these methods without any problems. It runs them asynchronously for you, capping the number of concurrently executing tasks with the max_parallel parameter.
✨ Key features:
▪️ Easy integration: a drop-in replacement for the standard Pandas methods, now with full support for async functions.
▪️ Controlled parallelism: your coroutines are executed asynchronously, with a limit on the maximum number of parallel tasks (max_parallel). Ideal for managing the load on external services (see the sketch after this list).
▪️ Flexible error handling: built-in options for managing runtime errors: raise, ignore, or log.
▪️ Progress indication: built-in tqdm support for tracking the progress of long operations in real time.
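The post doesn't show the library's actual API, so here is a minimal sketch of the underlying technique in plain pandas + asyncio; the apply_async helper and the fetch coroutine are illustrative stand-ins, not the library's interface.

import asyncio
import pandas as pd

async def apply_async(series: pd.Series, func, max_parallel: int = 10) -> pd.Series:
    # A semaphore caps how many coroutines run at once (the max_parallel idea)
    sem = asyncio.Semaphore(max_parallel)

    async def run_one(value):
        async with sem:
            return await func(value)

    results = await asyncio.gather(*(run_one(v) for v in series))
    return pd.Series(results, index=series.index)

# A stand-in I/O-bound coroutine, e.g. a call to an external service
async def fetch(value):
    await asyncio.sleep(0.1)
    return value * 2

s = pd.Series([1, 2, 3, 4])
print(asyncio.run(apply_async(s, fetch, max_parallel=2)))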
🔥 Voice mode and video chat mode are now available at chat.qwenlm.ai.
Moreover, the Qwen team has posted the code of Qwen2.5-Omni-7B, a single omni-model that understands text, audio, images and video.
They developed a "thinker-talker" architecture that lets the model think and talk at the same time.
They promise to soon release open-source models with even more parameters.
Simply top-notch; go run and test it (a usage sketch follows the links below).
🟢 Try it: https://chat.qwenlm.ai
🟢 Paper: https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf
🟢 Blog: https://qwenlm.github.io/blog/qwen2.5-omni
🟢 GitHub: https://github.com/QwenLM/Qwen2.5-Omni
🟢 Hugging Face: https://huggingface.co/Qwen/Qwen2.5-Omni-7B
🟢 ModelScope: https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B
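For the record, the release-time Hugging Face model card showed usage along these lines; treat it as a sketch rather than the definitive API: class names have shifted between transformers versions, process_mm_info comes from the qwen_omni_utils helper in the Qwen repo, and the image path below is a placeholder.

import soundfile as sf
from transformers import Qwen2_5OmniModel, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info  # helper shipped in the Qwen repo

model = Qwen2_5OmniModel.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

# The conversation can mix text, audio, images and video
conversation = [
    {"role": "user", "content": [
        {"type": "text", "text": "Describe this picture."},
        {"type": "image", "image": "path/to/your/image.jpg"},  # placeholder
    ]},
]

text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=False)
inputs = processor(text=text, audios=audios, images=images, videos=videos,
                   return_tensors="pt", padding=True).to(model.device)

# The "talker" half returns speech alongside the generated token ids
text_ids, audio = model.generate(**inputs)
print(processor.batch_decode(text_ids, skip_special_tokens=True)[0])
sf.write("reply.wav", audio.reshape(-1).detach().cpu().numpy(), samplerate=24000)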
import ChatTTS
from IPython.display import Audio

# Load the pretrained ChatTTS models (weights are downloaded on first run)
chat = ChatTTS.Chat()
chat.load_models()

# Synthesize speech for a batch of texts (use_decoder=True for higher-quality output)
texts = ["<PUT YOUR TEXT HERE>",]
wavs = chat.infer(texts, use_decoder=True)

# Play the first waveform in the notebook (ChatTTS generates 24 kHz audio)
Audio(wavs[0], rate=24_000, autoplay=True)
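Outside a notebook you can write the clip to disk instead; a small follow-on sketch, assuming the soundfile package is installed (ChatTTS outputs 24 kHz audio, and squeeze() drops the batch dimension if the returned array is 2-D):

import soundfile as sf

# Follow-on to the snippet above: save the first generated clip as a wav file
sf.write("output.wav", wavs[0].squeeze(), 24_000)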
ChatTTS is a text-to-speech model designed specifically for conversational scenarios such as LLM assistants.
ChatTTS supports both English and Chinese.
🤗 Try it on Hugging Face
🌟 DeepSearcher: AI Harvester for Your Data.
The project combines LLMs and vector databases to perform search, evaluation, and reasoning tasks over the data you provide (files, text, other sources).
The developers position it as a tool for enterprise knowledge management, intelligent QA systems, and information retrieval scenarios.
DeepSearcher can pull in information from the Internet when needed. It is compatible with the Milvus vector database and its managed service Zilliz Cloud (via Pymilvus), and with OpenAI and VoyageAI embeddings. DeepSeek and OpenAI LLMs can be connected via API, either directly or through TogetherAI and SiliconFlow.
Loading local files is supported, as are the FireCrawl, Crawl4AI and Jina Reader web crawlers.
The immediate roadmap includes a web clipper feature, a wider list of supported vector databases, and a RESTful API interface.
▶️ Local installation and launch:
# Clone the repository
git clone https://github.com/zilliztech/deep-searcher.git
# Create a Python venv
python3 -m venv .venv
source .venv/bin/activate
# Install dependencies
cd deep-searcher
pip install -e .
# Quick start demo
from deepsearcher.configuration import Configuration, init_config
from deepsearcher.online_query import query
config = Configuration()
# Customize your config here
config.set_provider_config("llm", "OpenAI", {"model": "gpt-4o-mini"})
init_config(config=config)

# Load your local data
from deepsearcher.offline_loading import load_from_local_files
load_from_local_files(paths_or_directory=your_local_path)
# (Optional) Load from web crawling (FIRECRAWL_API_KEY env variable required)
from deepsearcher.offline_loading import load_from_website
load_from_website(urls=website_url)
# Query
result = query("Write a report about xxx.")  # Your question here

🌐 GitHub: https://github.com/zilliztech/deep-searcher