ML Research Hub – Telegram
ML Research Hub
32.7K subscribers
4.01K photos
229 videos
23 files
4.32K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Image-Free Timestep Distillation via Continuous-Time Consistency with Trajectory-Sampled Pairs

📝 Summary:
TBCM is a self-contained method that distills diffusion models by extracting latent representations directly from the teacher model's trajectory. This eliminates the need for external data, greatly improving efficiency and quality for few-step generation with reduced resources.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20410
• PDF: https://arxiv.org/pdf/2511.20410
• Github: https://github.com/hustvl/TBCM

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#DiffusionModels #ModelDistillation #GenerativeAI #AIResearch #MachineLearning
RAISECity: A Multimodal Agent Framework for Reality-Aligned 3D World Generation at City-Scale

📝 Summary:
RAISECity uses an agentic framework with multimodal tools for reality-aligned, high-quality, city-scale 3D world generation. It iteratively refines scenes, achieving superior precision and fidelity compared to existing methods.

🔹 Publication Date: Published on Nov 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18005
• PDF: https://arxiv.org/pdf/2511.18005
• Github: https://github.com/tsinghua-fib-lab/RAISECity

==================================


#3DGeneration #GenerativeAI #MultimodalAI #VirtualWorlds #ComputerGraphics
Multimodal Evaluation of Russian-language Architectures

📝 Summary:
Mera Multi is the first open multimodal evaluation framework for Russian-language AI, addressing a lack of such benchmarks. It introduces 18 new instruction-based tasks across text, image, audio, and video, created with attention to Russian cultural specificity and a leakage-prevention methodology.

🔹 Publication Date: Published on Nov 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.15552
• PDF: https://arxiv.org/pdf/2511.15552
• Project Page: https://mera.a-ai.ru/en/multi
• Github: https://github.com/MERA-Evaluation/MERA_MULTIMODAL/tree/main

==================================


#MultimodalAI #RussianAI #AIEvaluation #Benchmarks #AIResearch
WizardCoder: Empowering Code Large Language Models with Evol-Instruct

📝 Summary:
WizardCoder is a Code LLM fine-tuned using Evol-Instruct for complex instructions. It significantly outperforms open-source and major closed LLMs on code generation benchmarks.

🔹 Publication Date: Published on Jun 14, 2023

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2306.08568
• PDF: https://arxiv.org/pdf/2306.08568
• Github: https://github.com/nlpxucan/WizardLM

🔹 Models citing this paper:
https://huggingface.co/WizardLMTeam/WizardCoder-Python-34B-V1.0
https://huggingface.co/WizardLMTeam/WizardCoder-15B-V1.0
https://huggingface.co/alpindale/WizardLM-2-8x22B

Datasets citing this paper:
https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_V2_196k
https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1
https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k

Spaces citing this paper:
https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard
https://huggingface.co/spaces/FallnAI/Quantize-HF-Models
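The core Evol-Instruct idea — iteratively rewriting seed instructions into harder variants with an LLM, then fine-tuning on the evolved set — can be sketched as follows. The prompt templates and the `fake_llm` stub are illustrative assumptions, not the paper's actual prompts or pipeline:

```python
import random

# Illustrative evolution prompts in the spirit of Evol-Instruct
# (hypothetical wording, not the paper's actual templates).
EVOLVE_TEMPLATES = [
    "Add one more constraint or requirement to this task:\n{instruction}",
    "Rewrite this task so it requires multi-step reasoning:\n{instruction}",
    "Rewrite this task to ask for a rarer, more advanced solution:\n{instruction}",
]

def evolve_instruction(seed, llm, rounds=3, rng=random):
    """Iteratively deepen one instruction; returns the seed plus all evolved variants."""
    variants = [seed]
    for _ in range(rounds):
        prompt = rng.choice(EVOLVE_TEMPLATES).format(instruction=variants[-1])
        variants.append(llm(prompt))  # llm: callable mapping prompt -> rewritten instruction
    return variants

# Stub "LLM" for demonstration only: tags the instruction each round
# instead of actually rewriting it.
fake_llm = lambda prompt: prompt.splitlines()[-1] + " [harder]"

chain = evolve_instruction("Write a function that sorts a list.", fake_llm)
```

In the paper's setting, every evolved variant (paired with an LLM-generated answer) becomes fine-tuning data for the code model.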

==================================


#CodeLLM #LLM #AI #CodeGeneration #EvolInstruct
UniGame: Turning a Unified Multimodal Model Into Its Own Adversary

📝 Summary:
UniGame is a self-adversarial post-training framework that improves unified multimodal models. It resolves inconsistencies between understanding and generation by using a lightweight perturber to make the model its own adversary. This boosts consistency, understanding, generation, and robustness.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19413
• PDF: https://arxiv.org/pdf/2511.19413
• Github: https://github.com/AIFrontierLab/UniGame

==================================


#MultimodalAI #AdversarialLearning #AIResearch #MachineLearning #ModelRobustness
Reinforcing Action Policies by Prophesying

📝 Summary:
ProphRL improves Vision-Language-Action policies by overcoming imitation-learning limits. It uses Prophet, a learned world-model simulator, together with tailored reinforcement learning (FA-GRPO and FlowScale) for data-efficient, stable post-training, yielding significant success gains on benchmarks.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20633
• PDF: https://arxiv.org/pdf/2511.20633
• Project Page: https://logosroboticsgroup.github.io/ProphRL/
• Github: https://github.com/LogosRoboticsGroup/ProphRL

==================================


#ReinforcementLearning #ProphRL #WorldModels #Robotics #DeepLearning
Position: The Complexity of Perfect AI Alignment -- Formalizing the RLHF Trilemma

📝 Summary:
RLHF faces an Alignment Trilemma: representativeness, tractability, and robustness are proven intractable to achieve simultaneously. Current RLHF sacrifices representativeness globally, causing biases and pathologies.

🔹 Publication Date: Published on Nov 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19504
• PDF: https://arxiv.org/pdf/2511.19504

==================================


#AIAlignment #RLHF #AISafety #MachineLearning #AIResearch
Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild

📝 Summary:
Gradio is an open-source Python package that creates visual interfaces for ML models, making them accessible to non-specialized users via a URL. This improves collaboration by allowing easy interaction, feedback, and trust-building in interdisciplinary settings.

🔹 Publication Date: Published on Jun 6, 2019

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/1906.02569
• PDF: https://arxiv.org/pdf/1906.02569
• Github: https://github.com/gradio-app/gradio

🔹 Models citing this paper:
https://huggingface.co/CxECHO/CE

Datasets citing this paper:
https://huggingface.co/datasets/society-ethics/papers

Spaces citing this paper:
https://huggingface.co/spaces/orYx-models/Nudge_Generator
https://huggingface.co/spaces/society-ethics/about
https://huggingface.co/spaces/mindmime/gradio

==================================


#Gradio #MachineLearning #MLOps #Python #DataScience
NAF: Zero-Shot Feature Upsampling via Neighborhood Attention Filtering

📝 Summary:
NAF upsamples Vision Foundation Model features zero-shot by learning adaptive spatial-and-content weights. It outperforms VFM-specific upsamplers without retraining, achieving state-of-the-art performance across various tasks efficiently.

🔹 Publication Date: Published on Nov 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18452
• PDF: https://arxiv.org/pdf/2511.18452
• Github: https://github.com/valeoai/NAF?tab=readme-ov-file
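As a rough illustration of the idea — upsampling low-resolution features with content-dependent weights over a local neighborhood — here is a simplified NumPy sketch. NAF itself learns adaptive spatial-and-content weights with neighborhood attention; this stand-in instead derives weights from dot-product similarity on assumed high- and low-resolution guidance features:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def neighborhood_attention_upsample(feats_lr, guide_hr, guide_lr, k=3):
    """Upsample feats_lr (h, w, c) to guide_hr's resolution (H, W, g).

    For each high-res position, attend over a k x k low-res neighborhood;
    weights come from similarity between the high-res guidance vector and
    the low-res guidance vectors (a simplified, non-learned stand-in for
    NAF's adaptive spatial-and-content weights).
    """
    h, w, c = feats_lr.shape
    H, W, _ = guide_hr.shape
    r = k // 2
    out = np.zeros((H, W, c))
    for i in range(H):
        for j in range(W):
            ci, cj = i * h // H, j * w // W            # nearest low-res anchor
            ys = np.clip(np.arange(ci - r, ci + r + 1), 0, h - 1)
            xs = np.clip(np.arange(cj - r, cj + r + 1), 0, w - 1)
            neigh_g = guide_lr[np.ix_(ys, xs)].reshape(-1, guide_lr.shape[-1])
            neigh_f = feats_lr[np.ix_(ys, xs)].reshape(-1, c)
            attn = softmax(neigh_g @ guide_hr[i, j])   # content similarity weights
            out[i, j] = attn @ neigh_f                 # convex combination of features
    return out
```

Because the weights are a softmax, each upsampled feature is a convex combination of its low-resolution neighborhood; the learned version replaces the similarity function, not this overall structure.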

==================================


#ZeroShotLearning #ComputerVision #FeatureUpsampling #DeepLearning #AIResearch
G^2VLM: Geometry Grounded Vision Language Model with Unified 3D Reconstruction and Spatial Reasoning

📝 Summary:
G^2VLM integrates 3D geometry learning into vision-language models to overcome their spatial intelligence deficits. It unifies 3D reconstruction and spatial reasoning, leveraging learned 3D features to achieve strong performance in both tasks.

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21688
• PDF: https://arxiv.org/pdf/2511.21688
• Project Page: https://gordonhu608.github.io/g2vlm.github.io/
• Github: https://github.com/InternRobotics/G2VLM

🔹 Models citing this paper:
https://huggingface.co/InternRobotics/G2VLM-2B-MoT

==================================


#VisionLanguageModels #3DReconstruction #SpatialReasoning #ComputerVision #ArtificialIntelligence
MIRA: Multimodal Iterative Reasoning Agent for Image Editing

📝 Summary:
MIRA is a multimodal iterative reasoning agent that enhances diffusion-based image editing. It tackles complex instructions by breaking them into atomic edits via a perception-reasoning-action loop with visual feedback, improving semantic consistency and perceptual quality over other approaches.

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21087
• PDF: https://arxiv.org/pdf/2511.21087
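The perception-reasoning-action loop described above can be sketched as a small control loop. Every callable below is a hypothetical stand-in (the real agent uses a multimodal LLM for perception and planning and a diffusion editor for the action step), not MIRA's code:

```python
# Hypothetical sketch of a perception-reasoning-action editing loop;
# all components are illustrative stand-ins.

def iterative_edit(image, instruction, perceive, plan, edit, max_steps=8):
    """Apply one atomic edit per iteration until the planner is satisfied.

    perceive(image) -> textual scene description (visual feedback)
    plan(description, instruction, done) -> next atomic edit, or None to stop
    edit(image, atomic_edit) -> edited image
    """
    done = []
    for _ in range(max_steps):
        description = perceive(image)                  # perception
        step = plan(description, instruction, done)    # reasoning
        if step is None:                               # instruction satisfied
            break
        image = edit(image, step)                      # action
        done.append(step)
    return image, done

# Toy stand-ins: the "image" is just the list of edits applied so far.
instruction = "add a red hat; make the sky blue"
atomic = [s.strip() for s in instruction.split(";")]
plan = lambda desc, instr, done: atomic[len(done)] if len(done) < len(atomic) else None
perceive = lambda img: f"image with {len(img)} edits applied"
edit = lambda img, step: img + [step]

final, done = iterative_edit([], instruction, perceive, plan, edit)
```

The visual feedback enters through `perceive`: after each atomic edit the planner re-inspects the current image before choosing the next step, which is what lets the loop catch and correct drift from the instruction.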

==================================


#AI #ImageEditing #MultimodalAI #DiffusionModels #ComputerVision
Multi-Crit: Benchmarking Multimodal Judges on Pluralistic Criteria-Following

📝 Summary:
Multi-Crit evaluates multimodal models as judges on following diverse criteria using novel metrics. Findings reveal that current models struggle with consistent adherence and flexibility to pluralistic criteria, highlighting capability gaps and laying a foundation for building reliable AI evaluators.

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21662
• PDF: https://arxiv.org/pdf/2511.21662
• Project Page: https://multi-crit.github.io/

==================================


#MultimodalAI #AIEvaluation #BenchmarkingAI #AIJudges #MachineLearning
Agentic Learner with Grow-and-Refine Multimodal Semantic Memory

📝 Summary:
MLLMs often repeat errors due to insufficient multimodal memory. ViLoMem is a dual-stream memory framework that builds schema-based knowledge by separately encoding visual distractions and logical errors. This method significantly improves accuracy and reduces repeated errors across multiple benc...

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21678
• PDF: https://arxiv.org/pdf/2511.21678

==================================


#MLLMs #MultimodalAI #AIMemory #DeepLearning #AIResearch
Canvas-to-Image: Compositional Image Generation with Multimodal Controls

📝 Summary:
Canvas-to-Image unifies diverse controls like text, poses, and layouts into a single canvas image for high-fidelity compositional image generation. Its multi-task training helps it understand and integrate these controls, outperforming existing methods in adherence and identity.

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21691
• PDF: https://arxiv.org/pdf/2511.21691
• Project Page: https://snap-research.github.io/canvas-to-image/

==================================


#ImageGeneration #GenerativeAI #MultimodalAI #ComputerVision #DeepLearning
Video Generation Models Are Good Latent Reward Models

📝 Summary:
Traditional video reward models are inefficient, operating in pixel space. PRFL uses pre-trained video generation models as latent reward models, optimizing preferences entirely in latent space. This significantly improves human alignment and reduces memory and training time.

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21541
• PDF: https://arxiv.org/pdf/2511.21541
• Project Page: https://kululumi.github.io/PRFL/

==================================


#VideoGeneration #ReinforcementLearning #LatentSpace #AIResearch #MachineLearning
Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free

📝 Summary:
Applying a head-specific sigmoid gate after Scaled Dot-Product Attention in large language models significantly improves performance, stability, and scaling. This simple modification mitigates attention sink and enhances long-context extrapolation by introducing non-linearity and sparse gating.

🔹 Publication Date: Published on May 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2505.06708
• PDF: https://arxiv.org/pdf/2505.06708
• Github: https://github.com/qiuzh20/gated_attention
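The modification the summary describes — a head-specific sigmoid gate applied after scaled dot-product attention — can be sketched in a minimal single-layer NumPy form. The gate projection `Wg` and the per-dimension gating shown here are illustrative assumptions about the setup, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_attention(x, Wq, Wk, Wv, Wg, n_heads):
    """Scaled dot-product attention followed by a head-specific sigmoid gate.

    Wg projects the input to one gate vector per head; the gate multiplies
    the SDPA output elementwise before head outputs are concatenated.
    A minimal sketch of the idea, not the paper's code.
    """
    T, d = x.shape
    dh = d // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    gate = sigmoid(x @ Wg)                           # (T, d): per-head, per-dim gate in (0, 1)
    out = np.zeros_like(x)
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        attn = softmax(q[:, s] @ k[:, s].T / np.sqrt(dh))
        out[:, s] = gate[:, s] * (attn @ v[:, s])    # gate applied after SDPA
    return out
```

Because the gate depends on the query token's own input, a head can suppress its output for tokens where it has nothing useful to contribute, which is the mechanism the paper links to removing attention sinks.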

==================================


#LLM #AttentionMechanism #DeepLearning #NLP #AIResearch
Paper2Video: Automatic Video Generation from Scientific Papers

📝 Summary:
PaperTalker is a multi-agent framework for automatic academic video production, integrating slides, subtitles, speech, and talking heads. It produces more faithful, informative videos than existing methods, simplifying labor-intensive research communication.

🔹 Publication Date: Published on Oct 6

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.05096
• PDF: https://arxiv.org/pdf/2510.05096
• Project Page: https://showlab.github.io/Paper2Video/

Datasets citing this paper:
https://huggingface.co/datasets/ZaynZhu/Paper2Video

==================================


#VideoGeneration #AI #AcademicCommunication #MachineLearning #MultimodalAI
What does it mean to understand language?

📝 Summary:
Deep language understanding involves more than just surface meaning. It requires transferring information from the core language system to other brain regions for mental models, world knowledge, and memories. This offers a new strategy to study language comprehension.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19757
• PDF: https://arxiv.org/pdf/2511.19757

==================================


#LanguageUnderstanding #CognitiveScience #Neuroscience #MentalModels #NLP