ML Research Hub
32.7K subscribers
3.99K photos
226 videos
23 files
4.29K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Mistral released Ministral 3, a new line of reasoning and instruct models!

Ministral 3 is available in 3B, 8B, and 14B sizes, with vision support and top performance in its class.

The 14B model can run locally on a machine with 24 GB of RAM.

Guide + notebook: https://docs.unsloth.ai/new/ministral-3

GGUF builds: https://huggingface.co/collections/unsloth/ministral-3
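
A minimal sketch of trying one of these GGUF builds locally with llama-cpp-python; the repo id and quant filename below are assumptions, so check the Unsloth collection above for the exact names:

```python
# Minimal local-inference sketch with llama-cpp-python.
# Repo id and filename pattern are assumptions; see the Unsloth GGUF collection.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Ministral-3-14B-GGUF",  # hypothetical repo id
    filename="*Q4_K_M.gguf",                 # a 4-bit quant that fits in ~24 GB RAM
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize attention in two sentences."}]
)
print(out["choices"][0]["message"]["content"])
```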

👉 https://news.1rj.ru/str/DataScienceT
Qwen3-VL Technical Report

📝 Summary:
Qwen3-VL is a highly capable vision-language model that achieves superior performance across multimodal benchmarks. Through key architectural upgrades, it supports interleaved contexts of up to 256K tokens and offers strong text understanding, robust long-context comprehension, and advanced multimodal reasoning.

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21631
• PDF: https://arxiv.org/pdf/2511.21631

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VisionLanguageModel #MultimodalAI #AI #DeepLearning #LLM
ViDiC: Video Difference Captioning

📝 Summary:
The ViDiC task and ViDiC-1K dataset evaluate MLLMs' ability to describe differences between video pairs, moving beyond the limits of static image difference captioning. They assess motion and event evolution, revealing significant performance gaps in current models' comparative video understanding.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03405
• PDF: https://arxiv.org/pdf/2512.03405
• Project Page: https://vidic-1k.github.io/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VideoCaptioning #MLLM #VideoUnderstanding #ComputerVision #AIResearch
PretrainZero: Reinforcement Active Pretraining

📝 Summary:
PretrainZero is a reinforcement active learning framework that pretrains large models on unlabeled general corpora using RL. It significantly improves general reasoning ability and benchmark performance, helping to break the verification data wall on the path to artificial general intelligence.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03442
• PDF: https://arxiv.org/pdf/2512.03442

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ReinforcementLearning #ActiveLearning #AGI #Pretraining #MachineLearning
OneThinker: All-in-one Reasoning Model for Image and Video

📝 Summary:
OneThinker is an all-in-one model unifying image and video understanding across diverse tasks like QA, captioning, and tracking. It employs a new training corpus and RL method for balanced optimization, achieving strong performance and knowledge transfer across 31 benchmarks. This advances toward...

🔹 Publication Date: Published on Dec 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03043
• PDF: https://arxiv.org/pdf/2512.03043
• Github: https://github.com/tulerfeng/OneThinker

🔹 Models citing this paper:
https://huggingface.co/OneThink/OneThinker-8B
https://huggingface.co/OneThink/OneThinker-SFT-Qwen3-8B

Datasets citing this paper:
https://huggingface.co/datasets/OneThink/OneThinker-train-data
https://huggingface.co/datasets/OneThink/OneThinker-eval

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#AI #ComputerVision #MultimodalAI #DeepLearning #VideoUnderstanding
Rethinking Prompt Design for Inference-time Scaling in Text-to-Visual Generation

📝 Summary:
PRIS adaptively revises prompts during text-to-visual generation inference to better align outputs with user intent. It reviews visual failures and redesigns prompts using fine-grained feedback, showing that jointly scaling prompts and visual outputs improves accuracy and quality.
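
A rough sketch of the generate-review-revise loop described above; the `generate`, `critique`, and `revise` callables are placeholders for illustration, not PRIS's actual components:

```python
def prompt_revision_loop(prompt, generate, critique, revise, steps=3):
    # Generic inference-time prompt-scaling loop in the spirit of PRIS.
    # Assumed interfaces (not the paper's API):
    #   generate(prompt) -> image
    #   critique(prompt, image) -> failure feedback, or None if acceptable
    #   revise(prompt, feedback) -> redesigned prompt
    image = generate(prompt)
    for _ in range(steps):
        feedback = critique(prompt, image)   # fine-grained visual-failure review
        if feedback is None:                 # no failures found: stop early
            break
        prompt = revise(prompt, feedback)    # redesign the prompt
        image = generate(prompt)
    return prompt, image
```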

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03534
• PDF: https://arxiv.org/pdf/2512.03534
• Project Page: https://subin-kim-cv.github.io/PRIS

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#PromptEngineering #TextToImage #GenerativeAI #DeepLearning #AIResearch
Thinking with Programming Vision: Towards a Unified View for Thinking with Images

📝 Summary:
CodeVision enhances MLLM robustness and tool-based reasoning by generating code for image operations. It overcomes brittleness and improves performance through supervised fine-tuning and reinforcement learning, enabling flexible tool composition and error recovery.
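
To illustrate the idea of thinking with images via code, here is the kind of simple image operation a model could emit and execute; this is an illustrative snippet, not the paper's toolkit:

```python
from PIL import Image

def crop_and_zoom(path, box, scale=2):
    # The kind of operation an MLLM could generate code for:
    # crop a region of interest and upsample it for closer inspection.
    img = Image.open(path)
    region = img.crop(box)          # box = (left, upper, right, lower)
    w, h = region.size
    return region.resize((w * scale, h * scale))

# e.g. inspect a suspected text region in the top-left corner:
# zoomed = crop_and_zoom("chart.png", (0, 0, 256, 128))
```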

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03746
• PDF: https://arxiv.org/pdf/2512.03746

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#CodeVision #MLLM #ComputerVision #AIResearch #DeepLearning
RELIC: Interactive Video World Model with Long-Horizon Memory

📝 Summary:
RELIC is a unified framework enabling real-time, memory-aware exploration of scenes with user control. It integrates long-horizon memory and spatial consistency using video-diffusion distillation, achieving 16 FPS generation with robust 3D coherence.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04040
• PDF: https://arxiv.org/pdf/2512.04040
• Project Page: https://relic-worldmodel.github.io/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#WorldModels #VideoDiffusion #DeepLearning #RealTimeAI #ComputerVision
SpaceTools: Tool-Augmented Spatial Reasoning via Double Interactive RL

📝 Summary:
SpaceTools introduces Double Interactive Reinforcement Learning (DIRL), a two-phase RL framework that enables vision-language models to coordinate multiple tools for precise spatial reasoning, achieving state-of-the-art performance on benchmarks and real-world robot tasks.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04069
• PDF: https://arxiv.org/pdf/2512.04069
• Project Page: https://spacetools.github.io/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ReinforcementLearning #VisionLanguageModels #Robotics #SpatialReasoning #AI
SR-GRPO: Stable Rank as an Intrinsic Geometric Reward for Large Language Model Alignment

📝 Summary:
This paper proposes stable rank, an intrinsic quality signal derived from LLM representations, to improve alignment without external supervision. Stable rank measures the effective dimensionality of a representation matrix and serves as the reward in SR-GRPO, boosting LLM performance on reasoning tasks.
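
For reference, the stable rank of a matrix is its squared Frobenius norm divided by its squared spectral norm. A minimal sketch of computing it for a batch of hidden states (standard definition, not the paper's training code):

```python
import torch

def stable_rank(h: torch.Tensor) -> torch.Tensor:
    # h: (num_tokens, hidden_dim) matrix of hidden states.
    # stable rank = ||h||_F^2 / ||h||_2^2, a smooth proxy for effective rank.
    fro_sq = h.pow(2).sum()
    spec = torch.linalg.matrix_norm(h, ord=2)  # largest singular value
    return fro_sq / spec.pow(2)

h = torch.randn(128, 4096)
print(stable_rank(h))  # lies between 1 and min(128, 4096)
```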

🔹 Publication Date: Published on Dec 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02807
• PDF: https://arxiv.org/pdf/2512.02807

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#StableRank #LLMAlignment #LargeLanguageModels #AIResearch #DeepLearning
CookAnything: A Framework for Flexible and Consistent Multi-Step Recipe Image Generation

📝 Summary:
CookAnything is a diffusion framework generating coherent, multi-step recipe image sequences from instructions. It uses step-wise regional control, flexible positional encoding, and cross-step consistency for consistent, high-quality visual synthesis.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03540
• PDF: https://arxiv.org/pdf/2512.03540
• Github: https://github.com/zhangdaxia22/CookAnything

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#CookAnything #ImageGeneration #DiffusionModels #AI #RecipeGeneration
AlignBench: Benchmarking Fine-Grained Image-Text Alignment with Synthetic Image-Caption Pairs

📝 Summary:
AlignBench is a new benchmark for fine-grained image-text alignment built from detailed synthetic image-caption pairs. It reveals that CLIP-based models struggle with compositional reasoning and that detector-based evaluators exhibit self-preference bias.
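
As context, the kind of CLIP-based alignment scoring the benchmark stresses can be reproduced in a few lines; this is a generic baseline, not AlignBench's evaluation protocol:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_alignment(image: Image.Image, caption: str) -> float:
    # Cosine similarity between CLIP image and text embeddings.
    inputs = proc(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

# score = clip_alignment(Image.open("example.jpg"), "a red cube left of a blue ball")
```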

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20515
• PDF: https://arxiv.org/pdf/2511.20515
• Project Page: https://dahlian00.github.io/AlignBench/

Datasets citing this paper:
https://huggingface.co/datasets/omron-sinicx/AlignBench

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ImageTextAlignment #MultimodalAI #ComputerVision #Benchmarking #CLIPModels
SkillFactory: Self-Distillation For Learning Cognitive Behaviors

📝 Summary:
SkillFactory fine-tunes models to learn cognitive skills using self-generated data before reinforcement learning. This self-distillation method enhances robustness and generalization post-RL, enabling models to effectively utilize acquired cognitive skills.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04072
• PDF: https://arxiv.org/pdf/2512.04072

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#SelfDistillation #ReinforcementLearning #CognitiveAI #MachineLearning #AIResearch
In-Context Representation Hijacking

📝 Summary:
Doublespeak is an in-context attack that hijacks LLM representations. It replaces harmful keywords with benign ones in examples, making LLMs interpret innocuous prompts as harmful, bypassing safety. This highlights a need for representation-level alignment.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03771
• PDF: https://arxiv.org/pdf/2512.03771

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLM #AISafety #AIsecurity #InContextLearning #RepresentationLearning
UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

📝 Summary:
UniQL unifies quantization and low-rank compression to deploy LLMs on mobile devices. It reduces memory by 4x-5.7x and improves token throughput by 2.7x-3.4x, maintaining accuracy across various model types.
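
The generic idea of unifying quantization with a low-rank correction can be sketched as follows; this illustrates the general recipe, not UniQL's actual algorithm:

```python
import torch

def quant_plus_lowrank(W: torch.Tensor, rank: int = 16, bits: int = 4):
    # Split a weight matrix into a low-rank part (kept in full precision)
    # and a quantized residual; generic pattern, not UniQL itself.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    L = U[:, :rank] * S[:rank]          # (out, rank)
    R = Vh[:rank]                       # (rank, in)
    resid = W - L @ R
    qmax = 2 ** (bits - 1) - 1
    scale = resid.abs().max() / qmax    # symmetric per-tensor quantization
    q = (resid / scale).round().clamp(-qmax - 1, qmax).to(torch.int8)
    return q, scale, L, R

def reconstruct(q, scale, L, R):
    # Dequantize the residual and add back the low-rank correction.
    return q.float() * scale + L @ R
```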

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03383
• PDF: https://arxiv.org/pdf/2512.03383
• Project Page: https://hychiang.info/projects/uniql/
• Github: https://github.com/enyac-group/UniQL

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLMs #EdgeAI #Quantization #ModelCompression #DeepLearning
ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration

📝 Summary:
ToolOrchestra uses reinforcement learning to train small orchestrators that coordinate intelligent tools. The method enables an 8B model to outperform GPT-5 on complex tasks such as Humanity's Last Exam, achieving higher accuracy at significantly lower cost and with better efficiency.
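
The orchestration pattern itself is simple to sketch: a small policy model reads the task, picks a tool, and delegates. The `small_llm` callable and tool registry below are assumptions for illustration, not NVIDIA's trained orchestrator:

```python
def orchestrate(query: str, small_llm, tools: dict):
    # Generic tool-routing skeleton: the small model selects one tool by name.
    # small_llm: assumed callable, str -> str; tools: {name: callable}.
    menu = "\n".join(f"- {name}: {fn.__doc__}" for name, fn in tools.items())
    choice = small_llm(
        f"Task: {query}\nAvailable tools:\n{menu}\n"
        "Reply with the single best tool name."
    ).strip()
    return tools[choice](query) if choice in tools else None
```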

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21689
• PDF: https://arxiv.org/pdf/2511.21689
• Project Page: https://research.nvidia.com/labs/lpr/ToolOrchestra/
• Github: https://github.com/NVlabs/ToolOrchestra/

🔹 Models citing this paper:
https://huggingface.co/nvidia/Orchestrator-8B
https://huggingface.co/Mungert/Orchestrator-8B-GGUF
https://huggingface.co/cyankiwi/Orchestrator-8B-AWQ-4bit

Datasets citing this paper:
https://huggingface.co/datasets/nvidia/ToolScale
https://huggingface.co/datasets/victor/ToolScale
https://huggingface.co/datasets/FranckAbgrall/ToolScale

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ToolOrchestra #ModelOrchestration #ReinforcementLearning #LLMs #AI
Echo-4o: Harnessing the Power of GPT-4o Synthetic Images for Improved Image Generation

📝 Summary:
Echo-4o-Image is a 180K-sample synthetic dataset generated with GPT-4o. It enhances image generation by covering rare scenarios and providing clean text-to-image supervision, improving model performance and transferability across various foundation models.

🔹 Publication Date: Published on Aug 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2508.09987
• PDF: https://arxiv.org/pdf/2508.09987
• Project Page: https://yejy53.github.io/Echo-4o/

Datasets citing this paper:
https://huggingface.co/datasets/Yejy53/Echo-4o-Image

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ImageGeneration #GPT4o #SyntheticData #AIResearch #FoundationModels
MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech Translation

📝 Summary:
MultiMed-ST, a large-scale multilingual medical speech translation dataset, is introduced. With 290,000 samples across five languages, it is the largest medical machine translation (MT) and multilingual speech translation (ST) dataset to date. The work also provides an extensive comparative analysis.

🔹 Publication Date: Published on Apr 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2504.03546
• PDF: https://arxiv.org/pdf/2504.03546
• Github: https://github.com/leduckhai/MultiMed-ST

🔹 Models citing this paper:
https://huggingface.co/leduckhai/MultiMed-ST

Datasets citing this paper:
https://huggingface.co/datasets/leduckhai/MultiMed-ST

Spaces citing this paper:
https://huggingface.co/spaces/HaoVuong/MedicalASR

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#SpeechTranslation #MedicalAI #MultilingualNLP #MachineTranslation #Dataset
Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach

📝 Summary:
TACO enhances VLA model stability and success rates by preventing distribution shifts at inference. It uses a lightweight pseudo-count estimator to verify and select optimal action chunks at test-time. This gradient-free method significantly improves performance in downstream tasks.
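
A toy version of pseudo-count-based chunk selection, using a kernel-density proxy rather than the paper's estimator:

```python
import numpy as np

def pseudo_count(chunk, reference, bandwidth=0.5):
    # Kernel-density proxy: how close is this action chunk to the
    # in-distribution reference set? Higher = more in-distribution.
    d = np.linalg.norm(reference - chunk, axis=1)
    return np.exp(-(d / bandwidth) ** 2).sum()

def select_chunk(candidates, reference):
    # Anti-exploration at test time: among sampled action chunks,
    # keep the one the pseudo-count deems most in-distribution.
    scores = [pseudo_count(c, reference) for c in candidates]
    return candidates[int(np.argmax(scores))]

# reference: (N, d) array of flattened in-distribution action chunks
# candidates: list of (d,) arrays sampled from the VLA policy
```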

🔹 Publication Date: Published on Dec 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02834
• PDF: https://arxiv.org/pdf/2512.02834
• Project Page: https://vla-anti-exploration.github.io/
• Github: https://github.com/breez3young/TACO

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VLAModels #AntiExploration #AIResearch #MachineLearning #RoboticsAI