ML Research Hub – Telegram
ML Research Hub
32.7K subscribers
3.99K photos
226 videos
23 files
4.29K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Rethinking Prompt Design for Inference-time Scaling in Text-to-Visual Generation

📝 Summary:
PRIS adaptively revises prompts during text-to-visual generation inference to better align outputs with user intent. It reviews visual failures and redesigns prompts using fine-grained feedback, showing that jointly scaling prompts and visuals improves accuracy and quality.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03534
• PDF: https://arxiv.org/pdf/2512.03534
• Project Page: https://subin-kim-cv.github.io/PRIS

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#PromptEngineering #TextToImage #GenerativeAI #DeepLearning #AIResearch
Thinking with Programming Vision: Towards a Unified View for Thinking with Images

📝 Summary:
CodeVision improves the robustness and tool-based reasoning of MLLMs by generating code for image operations. It overcomes brittleness and improves performance through supervised fine-tuning and reinforcement learning, enabling flexible tool composition and error recovery.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03746
• PDF: https://arxiv.org/pdf/2512.03746

==================================

#CodeVision #MLLM #ComputerVision #AIResearch #DeepLearning
RELIC: Interactive Video World Model with Long-Horizon Memory

📝 Summary:
RELIC is a unified framework enabling real-time, memory-aware exploration of scenes with user control. It integrates long-horizon memory and spatial consistency using video-diffusion distillation, achieving 16 FPS generation with robust 3D coherence.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04040
• PDF: https://arxiv.org/pdf/2512.04040
• Project Page: https://relic-worldmodel.github.io/

==================================

#WorldModels #VideoDiffusion #DeepLearning #RealTimeAI #ComputerVision
SpaceTools: Tool-Augmented Spatial Reasoning via Double Interactive RL

📝 Summary:
SpaceTools introduces Double Interactive Reinforcement Learning (DIRL). This two-phase RL framework enables Vision Language Models to coordinate multiple tools for precise spatial reasoning, achieving state-of-the-art performance on benchmarks and real-world robot tasks.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04069
• PDF: https://arxiv.org/pdf/2512.04069
• Project Page: https://spacetools.github.io/

==================================

#ReinforcementLearning #VisionLanguageModels #Robotics #SpatialReasoning #AI
SR-GRPO: Stable Rank as an Intrinsic Geometric Reward for Large Language Model Alignment

📝 Summary:
This paper proposes stable rank, an intrinsic quality signal from LLM representations, to improve alignment without external supervision. Stable rank measures effective dimensionality and is used as a reward in SR-GRPO, boosting LLM performance on reasoning tasks.
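
Stable rank has a standard closed form: the squared Frobenius norm divided by the squared spectral norm of a matrix. A minimal sketch of that quantity (the function name and toy matrices are ours, not from the paper):

```python
import numpy as np

def stable_rank(A: np.ndarray) -> float:
    """Stable rank: squared Frobenius norm divided by squared spectral norm."""
    fro2 = np.linalg.norm(A, "fro") ** 2
    spec2 = np.linalg.norm(A, 2) ** 2  # ord=2 -> largest singular value
    return fro2 / spec2

# Energy spread evenly across directions: stable rank matches full rank.
print(stable_rank(np.eye(4)))      # ~4.0
# All energy in one direction: stable rank collapses to ~1.
u = np.ones((4, 1))
print(stable_rank(u @ u.T))        # ~1.0
```

Unlike the integer rank, this ratio varies smoothly with the spectrum, which is what makes it usable as a dense reward signal.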

🔹 Publication Date: Published on Dec 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02807
• PDF: https://arxiv.org/pdf/2512.02807

==================================

#StableRank #LLMAlignment #LargeLanguageModels #AIResearch #DeepLearning
CookAnything: A Framework for Flexible and Consistent Multi-Step Recipe Image Generation

📝 Summary:
CookAnything is a diffusion framework generating coherent, multi-step recipe image sequences from instructions. It uses step-wise regional control, flexible positional encoding, and cross-step consistency for consistent, high-quality visual synthesis.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03540
• PDF: https://arxiv.org/pdf/2512.03540
• Github: https://github.com/zhangdaxia22/CookAnything

==================================

#CookAnything #ImageGeneration #DiffusionModels #AI #RecipeGeneration
AlignBench: Benchmarking Fine-Grained Image-Text Alignment with Synthetic Image-Caption Pairs

📝 Summary:
AlignBench is a new benchmark for fine-grained image-text alignment, built from detailed synthetic image-caption pairs. It reveals that CLIP-based models struggle with compositional reasoning and that detector models exhibit self-preference bias.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20515
• PDF: https://arxiv.org/pdf/2511.20515
• Project Page: https://dahlian00.github.io/AlignBench/

Datasets citing this paper:
https://huggingface.co/datasets/omron-sinicx/AlignBench

==================================

#ImageTextAlignment #MultimodalAI #ComputerVision #Benchmarking #CLIPModels
SkillFactory: Self-Distillation For Learning Cognitive Behaviors

📝 Summary:
SkillFactory fine-tunes models to learn cognitive skills using self-generated data before reinforcement learning. This self-distillation method enhances robustness and generalization post-RL, enabling models to effectively utilize acquired cognitive skills.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04072
• PDF: https://arxiv.org/pdf/2512.04072

==================================

#SelfDistillation #ReinforcementLearning #CognitiveAI #MachineLearning #AIResearch
In-Context Representation Hijacking

📝 Summary:
Doublespeak is an in-context attack that hijacks LLM representations. It replaces harmful keywords with benign ones in examples, making LLMs interpret innocuous prompts as harmful, bypassing safety. This highlights a need for representation-level alignment.

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03771
• PDF: https://arxiv.org/pdf/2512.03771

==================================

#LLM #AISafety #AIsecurity #InContextLearning #RepresentationLearning
UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

📝 Summary:
UniQL unifies quantization and low-rank compression to deploy LLMs on mobile devices. It reduces memory by 4x-5.7x and improves token throughput by 2.7x-3.4x, maintaining accuracy across various model types.
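
UniQL's actual decomposition and quantizer are described in the paper; as a generic illustration of the underlying idea of pairing a low-rank factor with a quantized residual (all shapes, the rank, and the bit-width below are arbitrary choices of ours):

```python
import numpy as np

def compress(W, rank=8, bits=4):
    # Low-rank part: truncated SVD captures the dominant structure.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Quantized part: symmetric uniform quantization of the residual.
    R = W - L
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(R).max() / qmax
    q = np.clip(np.round(R / scale), -qmax - 1, qmax).astype(np.int8)
    return L.astype(np.float32), q, np.float32(scale)

def decompress(L, q, scale):
    return L + q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)
err = np.linalg.norm(W - decompress(*compress(W))) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

The low-rank factor absorbs the large-magnitude structure that uniform quantization handles poorly, so the residual quantizes with much less error than the raw weights would.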

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03383
• PDF: https://arxiv.org/pdf/2512.03383
• Project Page: https://hychiang.info/projects/uniql/
• Github: https://github.com/enyac-group/UniQL

==================================

#LLMs #EdgeAI #Quantization #ModelCompression #DeepLearning
ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration

📝 Summary:
ToolOrchestra uses reinforcement learning to train small orchestrators that coordinate intelligent tools. This method enables an 8B model to outperform GPT-5 on complex tasks like Humanity's Last Exam, achieving higher accuracy at significantly lower cost.

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21689
• PDF: https://arxiv.org/pdf/2511.21689
• Project Page: https://research.nvidia.com/labs/lpr/ToolOrchestra/
• Github: https://github.com/NVlabs/ToolOrchestra/

🔹 Models citing this paper:
https://huggingface.co/nvidia/Orchestrator-8B
https://huggingface.co/Mungert/Orchestrator-8B-GGUF
https://huggingface.co/cyankiwi/Orchestrator-8B-AWQ-4bit

Datasets citing this paper:
https://huggingface.co/datasets/nvidia/ToolScale
https://huggingface.co/datasets/victor/ToolScale
https://huggingface.co/datasets/FranckAbgrall/ToolScale

==================================

#ToolOrchestra #ModelOrchestration #ReinforcementLearning #LLMs #AI
Echo-4o: Harnessing the Power of GPT-4o Synthetic Images for Improved Image Generation

📝 Summary:
Echo-4o-Image is a 180K-sample synthetic dataset generated by GPT-4o. It enhances image generation by covering rare scenarios and providing clean text-to-image supervision, improving model performance and transferability across various foundation models.

🔹 Publication Date: Published on Aug 13

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2508.09987
• PDF: https://arxiv.org/pdf/2508.09987
• Project Page: https://yejy53.github.io/Echo-4o/

Datasets citing this paper:
https://huggingface.co/datasets/Yejy53/Echo-4o-Image

==================================

#ImageGeneration #GPT4o #SyntheticData #AIResearch #FoundationModels
MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech Translation

📝 Summary:
MultiMed-ST, a large-scale multilingual medical speech-translation dataset, is introduced. With 290,000 samples across five languages, it is the largest medical machine-translation (MT) and multilingual speech-translation (ST) dataset to date. The work also provides an extensive comparative analysis.

🔹 Publication Date: Published on Apr 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2504.03546
• PDF: https://arxiv.org/pdf/2504.03546
• Github: https://github.com/leduckhai/MultiMed-ST

🔹 Models citing this paper:
https://huggingface.co/leduckhai/MultiMed-ST

Datasets citing this paper:
https://huggingface.co/datasets/leduckhai/MultiMed-ST

Spaces citing this paper:
https://huggingface.co/spaces/HaoVuong/MedicalASR

==================================

#SpeechTranslation #MedicalAI #MultilingualNLP #MachineTranslation #Dataset
Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach

📝 Summary:
TACO enhances VLA model stability and success rates by preventing distribution shifts at inference. It uses a lightweight pseudo-count estimator to verify and select optimal action chunks at test-time. This gradient-free method significantly improves performance in downstream tasks.
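
TACO's estimator operates on VLA action chunks; a toy sketch of the general pseudo-count idea, scoring candidate chunks by kernel density against previously seen actions and keeping the most in-distribution one (the shapes, Gaussian kernel, and bandwidth are our illustrative choices, not the paper's):

```python
import numpy as np

def pseudo_count(chunk, seen, bandwidth=0.5):
    # Kernel-density pseudo-count: large when the candidate chunk lies
    # near many action chunks observed during training.
    d2 = ((seen - chunk) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).sum()

def select_chunk(candidates, seen):
    # Gradient-free test-time selection: keep the most in-distribution chunk.
    scores = [pseudo_count(c, seen) for c in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(1)
seen = rng.normal(0.0, 0.1, size=(500, 4))   # previously observed chunks
in_dist = rng.normal(0.0, 0.1, size=4)
out_dist = np.full(4, 3.0)                   # far from the data distribution
chosen = select_chunk([out_dist, in_dist], seen)
print(np.allclose(chosen, in_dist))          # True
```

Selection happens entirely at inference, so the policy itself is never updated, which matches the gradient-free framing in the summary.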

🔹 Publication Date: Published on Dec 2

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02834
• PDF: https://arxiv.org/pdf/2512.02834
• Project Page: https://vla-anti-exploration.github.io/
• Github: https://github.com/breez3young/TACO

==================================

#VLAModels #AntiExploration #AIResearch #MachineLearning #RoboticsAI
Flowing Backwards: Improving Normalizing Flows via Reverse Representation Alignment

📝 Summary:
A novel alignment strategy improves Normalizing Flows by aligning their generative reverse pass with vision foundation models. This boosts generative quality, classification accuracy, and training speed, achieving new state-of-the-art results for NFs.
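
Representation alignment of this kind is typically implemented as a cosine-similarity loss between intermediate generative features and frozen foundation-model features; a toy numpy version (the feature shapes and the absence of a learned projection are our simplifications):

```python
import numpy as np

def alignment_loss(gen_feats, target_feats):
    # Negative mean cosine similarity between features from the generative
    # reverse pass and features from a frozen vision foundation model.
    g = gen_feats / np.linalg.norm(gen_feats, axis=-1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=-1, keepdims=True)
    return -np.mean(np.sum(g * t, axis=-1))

rng = np.random.default_rng(0)
feats = rng.standard_normal((32, 128))
print(alignment_loss(feats, feats))                           # ~-1.0 aligned
print(alignment_loss(feats, rng.standard_normal((32, 128))))  # ~0.0 unrelated
```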

🔹 Publication Date: Published on Nov 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22345
• PDF: https://arxiv.org/pdf/2511.22345
• Github: https://github.com/MCG-NJU/FlowBack

==================================

#NormalizingFlows #GenerativeAI #DeepLearning #ComputerVision #MachineLearning
Deep Research: A Systematic Survey

📝 Summary:
This survey systematically reviews Deep Research systems that integrate LLMs with external tools to enhance complex problem-solving. It provides a roadmap, key components, optimization techniques, and challenges for these advanced research agents.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02038
• PDF: https://arxiv.org/pdf/2512.02038
• Project Page: https://deep-research-survey.github.io/
• Github: https://github.com/mangopy/Deep-Research-Survey

==================================

#DeepResearch #LLMs #AI #ResearchAgents #SystematicSurvey
Adversarial Confusion Attack: Disrupting Multimodal Large Language Models

📝 Summary:
The Adversarial Confusion Attack systematically disrupts multimodal LLMs, causing incoherent or confidently incorrect outputs. This basic adversarial technique transfers to diverse models, including proprietary ones, potentially hindering AI Agent reliability.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20494
• PDF: https://arxiv.org/pdf/2511.20494

==================================

#AdversarialAttack #MultimodalAI #LLMs #AISecurity #AIResearch
Jina-VLM: Small Multilingual Vision Language Model

📝 Summary:
Jina-VLM is a 2.4B vision-language model achieving top multilingual VQA among open 2B-scale models. It couples a SigLIP2 vision encoder with a Qwen3 language backbone via an attention-pooling connector for efficient arbitrary-resolution image processing.
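
An attention-pooling connector reduces a variable number of patch embeddings to a fixed token budget; a minimal single-head sketch (the dimensions and the omission of key/value projections are our simplifications, not Jina-VLM's actual connector):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(patches, queries):
    # Each learned query attends over every patch embedding, so any number
    # of input patches is reduced to a fixed len(queries) output tokens.
    attn = softmax(queries @ patches.T / np.sqrt(patches.shape[1]))
    return attn @ patches

rng = np.random.default_rng(0)
queries = rng.standard_normal((16, 64))    # 16 learned query vectors
for n_patches in (196, 1024):              # two different image resolutions
    out = attention_pool(rng.standard_normal((n_patches, 64)), queries)
    print(out.shape)                       # (16, 64) regardless of patch count
```

Because the output token count depends only on the queries, the language backbone sees a constant-length visual prefix at any input resolution.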

🔹 Publication Date: Published on Dec 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.04032
• PDF: https://arxiv.org/pdf/2512.04032

==================================

#VLM #MultilingualAI #ComputerVision #DeepLearning #VQA
Economies of Open Intelligence: Tracing Power & Participation in the Model Ecosystem

📝 Summary:
Analysis of Hugging Face data reveals a rebalancing of the open model economy. US industry dominance has declined as Chinese influence and community developers grow, alongside shifts in model properties and declining data transparency.

🔹 Publication Date: Published on Nov 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03073
• PDF: https://arxiv.org/pdf/2512.03073

Spaces citing this paper:
https://huggingface.co/spaces/economies-open-ai/open-model-evolution

==================================

#OpenModels #AIEconomy #HuggingFace #AIGeopolitics #DataTransparency