ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
PPTAgent: Generating and Evaluating Presentations Beyond Text-to-Slides

📝 Summary:
PPTAgent improves presentation generation with a two-stage approach that analyzes reference presentations to ensure structural and content consistency. It outperforms traditional methods across content, design, and coherence.
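
The two-stage idea lends itself to a small sketch: stage one distills a reference deck into a structural outline, stage two drafts slides that follow it. Everything below (function names, slide fields) is a hypothetical stand-in, not PPTAgent's actual API.

```python
# Illustrative two-stage, reference-guided pipeline; data shapes are assumed.

def analyze_reference(reference_slides: list) -> dict:
    """Stage 1: extract a structural outline (slide roles and layouts)."""
    return {
        "roles": [s.get("role", "content") for s in reference_slides],
        "layouts": [s.get("layout", "title+body") for s in reference_slides],
    }

def generate_presentation(document: str, outline: dict) -> list:
    """Stage 2: draft slides that mirror the reference structure."""
    sections = [p for p in document.split("\n\n") if p.strip()]
    slides = []
    for role, layout, text in zip(outline["roles"], outline["layouts"], sections):
        slides.append({"role": role, "layout": layout, "body": text[:200]})
    return slides

reference = [{"role": "opening", "layout": "title"},
             {"role": "content", "layout": "title+body"}]
doc = "PPTAgent overview.\n\nTwo-stage generation keeps structure consistent."
print(generate_presentation(doc, analyze_reference(reference)))
```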

🔹 Publication Date: Jan 7, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2501.03936
• PDF: https://arxiv.org/pdf/2501.03936
• Github: https://github.com/icip-cas/PPTAgent

Datasets citing this paper:
https://huggingface.co/datasets/Forceless/Zenodo10K

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#AIPresentations #GenerativeAI #MachineLearning #NLP #TechResearch
WorldVLA: Towards Autoregressive Action World Model

📝 Summary:
WorldVLA unifies vision-language-action (VLA) models and world models, showing mutual enhancement in image understanding and action generation. It addresses autoregressive action prediction errors with an attention mask strategy that significantly improves performance.
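
One hedged reading of that mask strategy, sketched below: action tokens may attend to the vision/text context but not to previously generated action tokens, so errors in earlier actions cannot propagate. The token layout and mask semantics are assumptions, not the paper's exact scheme.

```python
import torch

def build_action_mask(n_ctx: int, n_act: int) -> torch.Tensor:
    """True = attention allowed. Context tokens come first, then action tokens."""
    n = n_ctx + n_act
    mask = torch.tril(torch.ones(n, n)).bool()           # causal base mask
    act = slice(n_ctx, n)
    # Block action-to-action attention, keeping only self-attention.
    mask[act, act] = torch.eye(n_act, dtype=torch.bool)
    return mask

print(build_action_mask(n_ctx=4, n_act=3).int())
```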

🔹 Publication Date: Jun 26, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2506.21539
• PDF: https://arxiv.org/pdf/2506.21539
• Github: https://github.com/alibaba-damo-academy/WorldVLA

🔹 Models citing this paper:
https://huggingface.co/Alibaba-DAMO-Academy/WorldVLA
https://huggingface.co/jcenaa/WorldVLA-ActionModel-LIBERO-Goal-256
https://huggingface.co/jcenaa/WorldVLA-ActionModel-LIBERO-10-256

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#AI #MachineLearning #Robotics #ComputerVision #WorldModels
Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer

📝 Summary:
Z-Image is an efficient 6B-parameter diffusion transformer achieving state-of-the-art image generation with significantly reduced computational cost. It enables sub-second inference and consumer hardware compatibility, challenging the scale-at-all-costs paradigm.

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22699
• PDF: https://arxiv.org/pdf/2511.22699
• Project Page: https://tongyi-mai.github.io/Z-Image-blog/
• Github: https://github.com/Tongyi-MAI/Z-Image

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ImageGeneration #DiffusionModels #EfficientAI #FoundationModels #MachineLearning
DiP: Taming Diffusion Models in Pixel Space

📝 Summary:
DiP is an efficient pixel space diffusion framework addressing the quality-efficiency trade-off without VAEs. It combines a Diffusion Transformer for global structure and a Patch Detailer Head for local details, achieving high-quality images up to 10x faster.
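
A structural sketch of that two-component split, with toy sizes: a Transformer backbone operates on patch tokens for global structure, and a small per-patch head decodes local pixel detail. The wiring below is an illustrative assumption, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyDiP(nn.Module):
    def __init__(self, dim=64, patch=8, img=32):
        super().__init__()
        self.patch, self.img = patch, img
        self.embed = nn.Conv2d(3, dim, patch, stride=patch)      # patchify
        self.backbone = nn.TransformerEncoder(                   # global structure
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=2)
        self.detailer = nn.Linear(dim, 3 * patch * patch)        # local detail head

    def forward(self, x_noisy):
        tokens = self.embed(x_noisy).flatten(2).transpose(1, 2)  # (B, N, dim)
        patches = self.detailer(self.backbone(tokens))           # per-patch pixels
        B, g = x_noisy.shape[0], self.img // self.patch
        out = patches.view(B, g, g, 3, self.patch, self.patch)
        return out.permute(0, 3, 1, 4, 2, 5).reshape(B, 3, self.img, self.img)

print(TinyDiP()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 3, 32, 32])
```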

🔹 Publication Date: Nov 24, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18822
• PDF: https://arxiv.org/pdf/2511.18822

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#DiffusionModels #GenerativeAI #ImageGeneration #DeepLearning #ComputerVision
Architecture Decoupling Is Not All You Need For Unified Multimodal Model

📝 Summary:
Unified multimodal models struggle with task conflicts. This paper introduces an Attention Interaction Alignment (AIA) loss, which learns task-specific cross-modal attention patterns. The AIA loss improves generation and understanding performance without decoupling the model architecture.
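
A minimal sketch of what an attention-alignment term could look like: penalize divergence between the model's cross-modal attention map and a task-specific target pattern. The KL form and tensor shapes are assumptions for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def aia_loss(attn: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """attn, target: (B, heads, query_len, key_len) attention distributions."""
    return F.kl_div(attn.clamp_min(1e-8).log(),
                    target.clamp_min(1e-8), reduction="batchmean")

attn = torch.softmax(torch.randn(2, 4, 5, 10), dim=-1)    # model attention
target = torch.softmax(torch.randn(2, 4, 5, 10), dim=-1)  # task-specific pattern
print(aia_loss(attn, target))
```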

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22663
• PDF: https://arxiv.org/pdf/2511.22663
• Project Page: https://zhengdian1.github.io/AIA-project/
• Github: https://github.com/zhengdian1/AIA

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#MultimodalAI #DeepLearning #AttentionMechanisms #AIResearch #ArtificialIntelligence
DualVLA: Building a Generalizable Embodied Agent via Partial Decoupling of Reasoning and Action

📝 Summary:
DualVLA tackles action degeneration in VLAs by boosting action performance while retaining reasoning. It uses dual-layer data pruning and dual-teacher adaptive distillation, balancing precise action execution with multimodal understanding and leading to high success rates.
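
Dual-teacher adaptive distillation can be sketched as below: the student is pulled toward an action expert and a reasoning expert, with per-sample weights favoring the more confident teacher. The confidence-based weighting is an illustrative assumption, not the paper's rule.

```python
import torch
import torch.nn.functional as F

def dual_teacher_loss(student_logits, action_logits, reason_logits):
    def kd(teacher_logits):  # per-sample soft-label distillation term
        return F.kl_div(F.log_softmax(student_logits, -1),
                        F.softmax(teacher_logits, -1),
                        reduction="none").sum(-1)
    conf_a = F.softmax(action_logits, -1).max(-1).values  # teacher confidences
    conf_r = F.softmax(reason_logits, -1).max(-1).values
    w = conf_a / (conf_a + conf_r)                        # adaptive per-sample weight
    return (w * kd(action_logits) + (1 - w) * kd(reason_logits)).mean()

s, a, r = (torch.randn(8, 32) for _ in range(3))
print(dual_teacher_loss(s, a, r))
```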

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22134
• PDF: https://arxiv.org/pdf/2511.22134
• Project Page: https://costaliya.github.io/DualVLA/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#EmbodiedAI #VLAs #AIagents #DeepLearning #AIResearch
AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement

📝 Summary:
AnyTalker generates scalable multi-person talking videos using an identity-aware Diffusion Transformer. It trains mostly on single-person videos and refines interactivity with minimal multi-person data, achieving strong lip sync and naturalness.

🔹 Publication Date: Nov 28, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.23475
• PDF: https://arxiv.org/pdf/2511.23475
• Project Page: https://hkust-c4g.github.io/AnyTalker-homepage/
• Github: https://github.com/HKUST-C4G/AnyTalker

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VideoGeneration #GenerativeAI #DiffusionModels #ComputerVision #DeepLearning
Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models

📝 Summary:
This paper introduces Hierarchical Sparse Attention (HSA) to enable Transformers to handle ultra-long contexts efficiently. The HSA-UltraLong model achieves over 90 percent accuracy on 16M-token retrieval tasks while matching full attention on shorter contexts. It lays a foundation for future long-context research.
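
The core mechanism admits a compact sketch: keys are grouped into chunks, each query scores chunk-level summaries, and full attention runs only inside the top-k selected chunks, so cost grows with k rather than with context length. Chunking and scoring details below are assumptions, not the paper's exact design.

```python
import torch

def hierarchical_sparse_attention(q, k, v, chunk=16, topk=2):
    B, N, d = k.shape
    kc = k.view(B, N // chunk, chunk, d)                  # chunked keys
    vc = v.view(B, N // chunk, chunk, d)
    summaries = kc.mean(dim=2)                            # chunk-level summary keys
    idx = torch.einsum("bqd,bcd->bqc", q, summaries).topk(topk, -1).indices
    out = torch.zeros(B, q.shape[1], d)
    for b in range(B):                                    # naive loops for clarity
        for i in range(q.shape[1]):
            ks = kc[b, idx[b, i]].reshape(-1, d)          # keys of selected chunks
            vs = vc[b, idx[b, i]].reshape(-1, d)
            w = torch.softmax(ks @ q[b, i] / d ** 0.5, dim=0)
            out[b, i] = w @ vs
    return out

q, k, v = torch.randn(1, 4, 32), torch.randn(1, 64, 32), torch.randn(1, 64, 32)
print(hierarchical_sparse_attention(q, k, v).shape)  # torch.Size([1, 4, 32])
```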

🔹 Publication Date: Nov 28, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.23319
• PDF: https://arxiv.org/pdf/2511.23319

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLM #LongContext #SparseAttention #Transformers #AIResearch
Captain Safari: A World Engine

📝 Summary:
Captain Safari is a pose-conditioned world engine that generates high-quality, 3D-consistent long videos with precise camera paths. It uses a dynamic memory and retriever of pose-aligned world tokens to outperform existing methods in quality, consistency, and trajectory following.
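
The dynamic-memory idea can be sketched as pose-keyed retrieval: world tokens are stored together with the camera pose they were generated under, and generation for a new pose pulls the nearest-pose tokens back into context. The pose distance and memory layout are illustrative assumptions.

```python
import numpy as np

class PoseAlignedMemory:
    def __init__(self):
        self.poses, self.tokens = [], []

    def add(self, pose: np.ndarray, token: np.ndarray):
        self.poses.append(pose)
        self.tokens.append(token)

    def retrieve(self, query_pose: np.ndarray, k: int = 2):
        dists = np.linalg.norm(np.stack(self.poses) - query_pose, axis=1)
        return [self.tokens[i] for i in np.argsort(dists)[:k]]

mem = PoseAlignedMemory()
for t in range(5):  # toy trajectory: pose = (x, y, z, yaw)
    mem.add(np.array([t, 0.0, 0.0, 0.1 * t]), np.random.randn(8))
print(len(mem.retrieve(np.array([2.2, 0.0, 0.0, 0.2]))))  # 2 nearest-pose tokens
```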

🔹 Publication Date: Nov 28, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22815
• PDF: https://arxiv.org/pdf/2511.22815
• Project Page: https://johnson111788.github.io/open-safari/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#GenerativeAI #3DVideo #ComputerVision #WorldEngine #AIResearch
Test-time scaling of diffusions with flow maps

📝 Summary:
The Flow Map Trajectory Tilting (FMTT) algorithm enhances diffusion models at test time by using flow maps to align sampling with user-specified rewards. This approach addresses the ill-posed problem of reward gradients, achieving superior reward ascent for improved sampling and novel image editing.
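
The tilting idea can be sketched with toy stand-ins: a flow map jumps from the current noisy state to a clean-sample estimate, the reward is evaluated there, and its gradient nudges the trajectory. The flow map and reward below are placeholders, not the paper's models.

```python
import torch

def flow_map(x_t, t):                 # toy flow map: shrink toward the origin
    return x_t * (1 - t)

def reward(x):                        # toy reward: prefer samples near 1.0
    return -((x - 1.0) ** 2).sum()

def tilted_step(x_t, t, step=0.1, guidance=0.5):
    x_t = x_t.detach().requires_grad_(True)
    r = reward(flow_map(x_t, t))              # reward at predicted endpoint
    (grad,) = torch.autograd.grad(r, x_t)     # well-defined via the flow map
    drift = -x_t                              # toy base dynamics
    return x_t.detach() + step * (drift + guidance * grad)

x = torch.randn(4)
for t in torch.linspace(0.9, 0.1, 5):
    x = tilted_step(x, t.item())
print(x)  # samples nudged toward the high-reward region
```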

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22688
• PDF: https://arxiv.org/pdf/2511.22688
• Project Page: https://flow-map-trajectory-tilting.github.io/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#DiffusionModels #GenerativeAI #ImageEditing #MachineLearning #FlowMaps
1
REASONEDIT: Towards Reasoning-Enhanced Image Editing Models

📝 Summary:
REASONEDIT integrates MLLM reasoning (thinking and reflection) into image editing models. This enables a thinking-editing-reflection loop that improves instruction understanding and editing accuracy by interpreting abstract instructions and correcting results. The approach achieves significant performance gains.
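
The loop itself is simple to sketch. The three components below (a planner, an editor, a verifier) are hypothetical callables standing in for MLLM-backed modules, not REASONEDIT's actual interface.

```python
def reason_edit(image, instruction, think, edit, reflect, max_rounds=3):
    plan = think(image, instruction)          # thinking: ground the instruction
    for _ in range(max_rounds):
        image = edit(image, plan)             # editing: apply the concrete plan
        ok, feedback = reflect(image, instruction)
        if ok:                                # reflection: accept or revise
            break
        plan = think(image, feedback)
    return image

# Toy stand-ins so the loop runs end to end.
think = lambda img, text: f"plan({text})"
edit = lambda img, plan: img + [plan]
reflect = lambda img, text: (len(img) >= 2, "increase contrast")
print(reason_edit([], "make it moodier", think, edit, reflect))
```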

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22625
• PDF: https://arxiv.org/pdf/2511.22625

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ImageEditing #AIReasoning #MLLM #ComputerVision #AI
The Collapse of Patches

📝 Summary:
Patch collapse is a novel image modeling perspective where observing certain patches reduces uncertainty in others. An autoencoder learns patch dependencies to determine an optimal realization order. This improves masked image modeling and promotes vision efficiency, achieving high accuracy with ...

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22281
• PDF: https://arxiv.org/pdf/2511.22281
• Github: https://github.com/wguo-ai/CoP

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ImageModeling #ComputerVision #Autoencoders #DeepLearning #MaskedImageModeling
Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information

📝 Summary:
Focused Chain-of-Thought (F-CoT) is an input-centric method that improves LLM reasoning efficiency. It structures query information into a concise context that guides the model to focus its reasoning, reducing token usage by 2-3x while maintaining accuracy on arithmetic problems.
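
A sketch of the input-centric idea: condense the narrative problem into explicit facts and a goal before the model reasons, so tokens are not spent re-reading prose. The field names and template are illustrative assumptions, not the paper's format.

```python
def structure_query(facts: dict, goal: str) -> str:
    lines = "\n".join(f"{k} = {v}" for k, v in facts.items())
    return f"Facts:\n{lines}\nGoal: {goal}\nReason step by step."

# Narrative form: "Ann had 12 apples, gave away 5, then bought twice what remained."
prompt = structure_query(
    facts={"start": 12, "given_away": 5, "bought": "2 * remaining"},
    goal="total apples now",
)
print(prompt)  # compact, structured context handed to the LLM
```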

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22176
• PDF: https://arxiv.org/pdf/2511.22176

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLM #ChainOfThought #AI #NLP #Efficiency
SO-Bench: A Structural Output Evaluation of Multimodal LLMs

📝 Summary:
SO-Bench is a new benchmark evaluating MLLMs' ability to generate schema-compliant structured outputs from visual inputs. It reveals significant gaps in current models' performance, highlighting the need for better multimodal structured reasoning.
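
The kind of check such a benchmark implies is easy to sketch: an output must parse as JSON and carry the required typed fields. The schema and checker below are illustrative, not SO-Bench's actual harness.

```python
import json

SCHEMA = {"title": str, "price": float, "in_stock": bool}  # hypothetical schema

def is_schema_compliant(model_output: str) -> bool:
    try:
        obj = json.loads(model_output)
    except json.JSONDecodeError:
        return False
    return all(isinstance(obj.get(k), t) for k, t in SCHEMA.items())

print(is_schema_compliant('{"title": "Lamp", "price": 19.5, "in_stock": true}'))  # True
print(is_schema_compliant('{"title": "Lamp", "price": "cheap"}'))                 # False
```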

🔹 Publication Date: Nov 23, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21750
• PDF: https://arxiv.org/pdf/2511.21750

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#MultimodalLLMs #StructuredOutput #LLMEvaluation #AIResearch #ComputerVision
Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield

📝 Summary:
This study challenges the prevailing understanding of Distribution Matching Distillation (DMD) for text-to-image generation. It reveals that CFG Augmentation is the primary driver of few-step distillation, while distribution matching acts as a regularizer. This new insight enables improved distillation methods.
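
The spear-and-shield reading can be sketched as a loss with two terms: the student regresses onto a CFG-amplified teacher prediction (the driver), while a distribution-matching term regularizes (the shield). The toy tensors below stand in for real model outputs.

```python
import torch

def cfg_teacher(eps_cond, eps_uncond, scale=5.0):
    # Standard classifier-free guidance combination of teacher predictions.
    return eps_uncond + scale * (eps_cond - eps_uncond)

def decoupled_distill_loss(student_eps, eps_cond, eps_uncond, fake_score, lam=0.1):
    target = cfg_teacher(eps_cond, eps_uncond)
    spear = ((student_eps - target) ** 2).mean()       # CFG augmentation term
    shield = ((student_eps - fake_score) ** 2).mean()  # distribution-matching regularizer
    return spear + lam * shield

s, c, u, f = (torch.randn(2, 4, 8, 8) for _ in range(4))
print(decoupled_distill_loss(s, c, u, f))
```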

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22677
• PDF: https://arxiv.org/pdf/2511.22677
• Project Page: https://tongyi-mai.github.io/Z-Image-blog/
• Github: https://github.com/Tongyi-MAI/Z-Image/tree/main

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#TextToImage #GenerativeAI #DiffusionModels #ModelDistillation #AIResearch
FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning

📝 Summary:
FedRE is a federated learning framework for model-heterogeneous environments. Clients create and upload entangled representations and entangled-label encodings to train a global classifier. This method enhances performance, protects privacy, and reduces communication overhead.
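
Representation entanglement can be sketched as random convex mixing: a client blends several samples' features and one-hot labels before upload, so the server trains a global classifier on entangled pairs without seeing any single sample's representation. The mixing rule is an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

def entangle(features: np.ndarray, labels: np.ndarray, n_classes: int):
    """features: (n, d); labels: (n,) -> one entangled (representation, label) pair."""
    w = np.random.dirichlet(np.ones(len(features)))   # random convex weights
    one_hot = np.eye(n_classes)[labels]
    return w @ features, w @ one_hot                  # entangled rep + soft label

feats = np.random.randn(4, 16)
labels = np.array([0, 2, 2, 1])
rep, soft_label = entangle(feats, labels, n_classes=3)
print(rep.shape, soft_label)  # (16,) plus a soft label over 3 classes
```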

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22265
• PDF: https://arxiv.org/pdf/2511.22265

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#FederatedLearning #MachineLearning #AI #PrivacyPreservingAI #RepresentationLearning
Vision Bridge Transformer at Scale

📝 Summary:
Vision Bridge Transformer (ViBT) is a large-scale model for conditional generation. Unlike diffusion models, it translates data efficiently by directly modeling input-to-output trajectories. ViBT scales to billions of parameters, achieving robust performance in image and video editing tasks.

🔹 Publication Date: Nov 28, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.23199
• PDF: https://arxiv.org/pdf/2511.23199
• Project Page: https://yuanshi9815.github.io/ViBT_homepage/
• Github: https://github.com/Yuanshi9815/ViBT

Spaces citing this paper:
https://huggingface.co/spaces/Yuanshi/ViBT

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VisionTransformer #GenerativeAI #ComputerVision #DeepLearning #AI
OralGPT-Omni: A Versatile Dental Multimodal Large Language Model

📝 Summary:
OralGPT-Omni is the first dental MLLM for comprehensive image analysis, using TRACE-CoT reasoning. It introduces the MMOral-Uni benchmark and dramatically outperforms GPT-5, advancing intelligent dentistry.

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22055
• PDF: https://arxiv.org/pdf/2511.22055
• Github: https://github.com/isbrycee/OralGPT

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#DentalAI #MLLM #GenerativeAI #HealthcareTech #MedicalImaging
World in a Frame: Understanding Culture Mixing as a New Challenge for Vision-Language Models

📝 Summary:
LVLMs struggle to preserve cultural identities in mixed visual scenes. Researchers created CultureMix, a VQA benchmark, and found consistent failures and reliance on background cues. Supervised fine-tuning with diverse culture-mixing data significantly improves model consistency and reduces background sensitivity.

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22787
• PDF: https://arxiv.org/pdf/2511.22787

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VisionLanguageModels #CulturalAI #ComputerVision #AIML #AIResearch
RefineBench: Evaluating Refinement Capability of Language Models via Checklists

📝 Summary:
RefineBench evaluates language models' self-refinement and guided-refinement capabilities using 1,000 problems with accompanying checklists. It finds that LMs perform poorly at self-refinement, often failing to improve without guidance, but excel at guided refinement given targeted feedback.
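
Checklist-based scoring is straightforward to sketch: a judge marks which checklist items a response satisfies, and refinement is measured as the checklist-score gain from draft to revision. The judge below is a toy substring matcher standing in for an LLM grader.

```python
def checklist_score(response: str, checklist: list, judge) -> float:
    return sum(judge(response, item) for item in checklist) / len(checklist)

def refinement_gain(draft: str, revised: str, checklist: list, judge) -> float:
    return checklist_score(revised, checklist, judge) - \
           checklist_score(draft, checklist, judge)

judge = lambda resp, item: item.lower() in resp.lower()
checklist = ["cites the theorem", "handles the edge case"]
print(refinement_gain("draft answer",
                      "Revised: cites the theorem and handles the edge case",
                      checklist, judge))  # 1.0
```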

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22173
• PDF: https://arxiv.org/pdf/2511.22173

Datasets citing this paper:
https://huggingface.co/datasets/RefineBench/RefineBench

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLM #AI #NLP #ModelEvaluation #Refinement
From Pixels to Feelings: Aligning MLLMs with Human Cognitive Perception of Images

📝 Summary:
MLLMs struggle with human cognitive perception of images, such as memorability or aesthetics. CogIP-Bench evaluates this gap and shows that post-training significantly improves alignment, enhancing human-like perception and improving creative AI tasks.

🔹 Publication Date: Nov 27, 2025

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22805
• PDF: https://arxiv.org/pdf/2511.22805
• Project Page: https://follen-cry.github.io/MLLM-Cognition-project-page/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#MLLM #CognitiveAI #ImagePerception #AIAlignment #AIResearch