ML Research Hub – Telegram
ML Research Hub
32.7K subscribers
4.01K photos
229 videos
23 files
4.32K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
In-Video Instructions: Visual Signals as Generative Control

📝 Summary:
This paper introduces In-Video Instruction for controllable image-to-video generation. It embeds visual signals such as text or arrows directly into frames as instructions, offering precise, spatially aware control over object actions. Experiments show that video models reliably execute these visual cues.
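
The core idea — writing the instruction into the pixels themselves — can be illustrated with a toy sketch. The function name and arrow rendering below are illustrative, not the paper's implementation:

```python
import numpy as np

def embed_arrow(frame, start, end, value=255):
    """Rasterize a straight line segment (a minimal 'arrow') into a frame,
    a toy stand-in for embedding a visual instruction in pixel space."""
    x0, y0 = start
    x1, y1 = end
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1
    for t in np.linspace(0.0, 1.0, n):
        r = int(round(y0 + t * (y1 - y0)))  # row
        c = int(round(x0 + t * (x1 - x0)))  # column
        frame[r, c] = value
    return frame

frame = np.zeros((64, 64), dtype=np.uint8)
# Point from (x=10) to (x=50) along row 32: "move this object rightward".
frame = embed_arrow(frame, start=(10, 32), end=(50, 32))
```

Once drawn, the instruction lives in the same frame the video model conditions on, so no separate control channel is needed.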

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19401
• PDF: https://arxiv.org/pdf/2511.19401
• Project Page: https://fangggf.github.io/In-Video/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VideoGeneration #GenerativeAI #ComputerVision #AIResearch #DeepLearning
AutoEnv: Automated Environments for Measuring Cross-Environment Agent Learning

📝 Summary:
AutoEnv and AutoEnv-36 provide a standardized framework and dataset for measuring cross-environment agent learning. Their evaluations show that fixed learning methods do not scale across diverse environments, highlighting current limitations in agent generalization.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19304
• PDF: https://arxiv.org/pdf/2511.19304

==================================

#AI #MachineLearning #AgentLearning #Generalization #ReinforcementLearning
DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation

📝 Summary:
DeCo is a frequency-decoupled pixel diffusion framework that improves image generation by separating high-frequency details from low-frequency semantics. It uses a lightweight pixel decoder for details and a DiT for semantics, achieving superior efficiency and quality over existing pixel diffusion methods.
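
DeCo's actual split is learned; as a minimal illustration of frequency decoupling, an FFT low-pass mask separates an image into a smooth semantic component and a detail residual (the cutoff and function name are illustrative, not from the paper):

```python
import numpy as np

def frequency_decouple(img: np.ndarray, cutoff: float = 0.1):
    """Split a 2-D image into a low-frequency (semantics) component and a
    high-frequency (details) residual using a circular FFT mask.
    `cutoff` is the fraction of the spectrum radius kept in the low branch."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    low_mask = radius <= cutoff * max(h, w)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(spec * low_mask)))
    high = img - low  # residual carries the fine detail
    return low, high

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
low, high = frequency_decouple(image)
# By construction the two branches sum back to the original image.
```

In DeCo's terms, the DiT would model something like `low` while a lightweight pixel decoder handles `high`.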

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19365
• PDF: https://arxiv.org/pdf/2511.19365
• Project Page: https://zehong-ma.github.io/DeCo/
• Github: https://github.com/Zehong-Ma/DeCo

🔹 Models citing this paper:
https://huggingface.co/zehongma/DeCo

Spaces citing this paper:
https://huggingface.co/spaces/zehongma/DeCo

==================================

#ImageGeneration #DiffusionModels #ComputerVision #DeepLearning #DeCo
Budget-Aware Tool-Use Enables Effective Agent Scaling

📝 Summary:
Tool-augmented agents struggle to scale with more tool calls due to a lack of budget awareness. This paper introduces Budget Tracker for continuous budget awareness and BATS for adaptive planning, dynamically adjusting strategy based on remaining resources. These methods significantly improve cost-effectiveness as tool budgets grow.
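
The mechanism can be sketched in a few lines: track spend per tool call and switch strategy as the budget depletes. The class, thresholds, and strategy names below are hypothetical, not the paper's API:

```python
class BudgetTracker:
    """Toy sketch of continuous budget awareness for a tool-using agent."""

    def __init__(self, total_calls: int):
        self.total = total_calls
        self.used = 0

    def spend(self, n: int = 1) -> None:
        """Record n tool calls against the budget."""
        self.used += n

    @property
    def remaining(self) -> int:
        return self.total - self.used

    def strategy(self) -> str:
        """Adaptive planning: explore while budget is plentiful,
        then exploit, then wrap up as calls run out."""
        frac = self.remaining / self.total
        if frac > 0.5:
            return "explore"
        if frac > 0.2:
            return "exploit"
        return "finalize"

tracker = BudgetTracker(total_calls=10)
for _ in range(6):
    tracker.spend()
print(tracker.remaining, tracker.strategy())  # prints: 4 exploit
```

The agent would consult `strategy()` before each planning step instead of following a fixed policy regardless of remaining resources.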

🔹 Publication Date: Published on Nov 21

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.17006
• PDF: https://arxiv.org/pdf/2511.17006

==================================

#AIAgents #ToolUse #ResourceManagement #AgentScaling #AIResearch
UltraFlux: Data-Model Co-Design for High-quality Native 4K Text-to-Image Generation across Diverse Aspect Ratios

📝 Summary:
UltraFlux overcomes diffusion transformer failures at 4K resolution and diverse aspect ratios through data-model co-design. It uses enhanced positional encoding, VAE improvements, gradient rebalancing, and aesthetic curriculum learning to achieve superior 4K text-to-image generation, outperforming existing approaches.

🔹 Publication Date: Published on Nov 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18050
• PDF: https://arxiv.org/pdf/2511.18050
• Github: https://github.com/W2GenAI-Lab/UltraFlux

==================================

#TextToImage #GenerativeAI #4KGeneration #DiffusionModels #AIResearch
Controllable Layer Decomposition for Reversible Multi-Layer Image Generation

📝 Summary:
Controllable Layer Decomposition (CLD) enables fine-grained, controllable separation of raster images into editable RGBA layers, overcoming traditional compositing limitations. Using LD-DiT and MLCA, CLD surpasses existing methods in quality and control, producing layers directly usable in design workflows.

🔹 Publication Date: Published on Nov 20

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.16249
• PDF: https://arxiv.org/pdf/2511.16249
• Github: https://github.com/monkek123King/CLD

==================================

#ImageGeneration #DeepLearning #ComputerVision #ImageEditing #LayerDecomposition
PRInTS: Reward Modeling for Long-Horizon Information Seeking

📝 Summary:
PRInTS is a generative process reward model that improves AI agents' information seeking. It provides dense scoring of step quality and summarizes long trajectories to manage context. PRInTS enhances agent performance, matching or surpassing frontier models with a smaller backbone.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19314
• PDF: https://arxiv.org/pdf/2511.19314

==================================

#RewardModeling #InformationSeeking #AIagents #GenerativeAI #MachineLearning
Plan-X: Instruct Video Generation via Semantic Planning

📝 Summary:
Plan-X improves instruction-aligned video generation by integrating a Semantic Planner with diffusion models. The planner generates semantic tokens that guide video synthesis, reducing visual hallucinations. The framework combines language models for reasoning with diffusion models for photorealistic synthesis.

🔹 Publication Date: Published on Nov 22

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.17986
• PDF: https://arxiv.org/pdf/2511.17986
• Project Page: https://byteaigc.github.io/Plan-X/

==================================

#VideoGeneration #DiffusionModels #AI #ComputerVision #DeepLearning
Target-Bench: Can World Models Achieve Mapless Path Planning with Semantic Targets?

📝 Summary:
Target-Bench evaluates world models for mapless robot path planning to semantic targets in real-world environments. It reveals off-the-shelf models perform poorly, but fine-tuning significantly improves their planning capability.

🔹 Publication Date: Published on Nov 21

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.17792
• PDF: https://arxiv.org/pdf/2511.17792
• Project Page: https://target-bench.github.io/
• Github: https://github.com/TUM-AVS/target-bench

==================================

#Robotics #PathPlanning #WorldModels #ArtificialIntelligence #MachineLearning
SyncMV4D: Synchronized Multi-view Joint Diffusion of Appearance and Motion for Hand-Object Interaction Synthesis

📝 Summary:
SyncMV4D generates realistic and consistent multi-view 3D Hand-Object Interaction videos and 4D motions. It unifies visual priors, motion dynamics, and multi-view geometry, using a joint diffusion model and a point aligner for robust generation.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19319
• PDF: https://arxiv.org/pdf/2511.19319
• Project Page: https://droliven.github.io/SyncMV4D/

==================================

#HandObjectInteraction #DiffusionModels #3DGeneration #ComputerVision #GenerativeAI
Continuous Thought Machines

📝 Summary:
The Continuous Thought Machine (CTM) reintroduces neural timing and synchronization to deep learning for complex sequential reasoning and biologically plausible AI. It uses neuron-level temporal processing and synchronization as a latent representation, performing well on diverse tasks with adaptive compute.
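
To give intuition for "synchronization as a latent representation": a simple stand-in is the pairwise correlation of neuron activation traces over internal ticks, so neurons firing in phase read as synchronized. This is a toy sketch, not the CTM architecture:

```python
import numpy as np

def synchronization_matrix(history: np.ndarray) -> np.ndarray:
    """history: (num_neurons, num_ticks) activation traces.
    Returns the pairwise temporal correlation of neuron activity —
    a simple proxy for a synchronization-based latent."""
    return np.corrcoef(history)

ticks = np.linspace(0, 2 * np.pi, 100)
# Two neurons firing in phase, one firing in anti-phase.
history = np.stack([np.sin(ticks),
                    np.sin(ticks),
                    -np.sin(ticks)])
sync = synchronization_matrix(history)
# In-phase pair ≈ +1, anti-phase pair ≈ -1.
```

Downstream layers could then read out from `sync` rather than from a single-tick activation snapshot.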

🔹 Publication Date: Published on May 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2505.05522
• PDF: https://arxiv.org/pdf/2505.05522
• Github: https://github.com/SakanaAI/continuous-thought-machines

🔹 Models citing this paper:
https://huggingface.co/SakanaAI/ctm-imagenet
https://huggingface.co/SakanaAI/ctm-maze-large

Spaces citing this paper:
https://huggingface.co/spaces/Uday/ctm-energy-based-halting

==================================

#AI #DeepLearning #NeuralNetworks #BiologicallyInspiredAI #TemporalAI
LucidFlux: Caption-Free Universal Image Restoration via a Large-Scale Diffusion Transformer

📝 Summary:
LucidFlux is a caption-free universal image restoration framework built on a large diffusion transformer. It employs a dual-branch conditioner and adaptive modulation for robust restoration, replacing text prompts with SigLIP features. This approach outperforms existing methods by conditioning restoration on image content rather than captions.

🔹 Publication Date: Published on Sep 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2509.22414
• PDF: https://arxiv.org/pdf/2509.22414
• Project Page: https://w2genai-lab.github.io/LucidFlux/
• Github: https://github.com/W2GenAI-Lab/LucidFlux

🔹 Models citing this paper:
https://huggingface.co/W2GenAI/LucidFlux

==================================

#ImageRestoration #DiffusionModels #ComputerVision #DeepLearning #GenerativeAI
Seeing the Forest and the Trees: Query-Aware Tokenizer for Long-Video Multimodal Language Models

📝 Summary:
QTSplus is a query-aware token selector for long-video multimodal language models. It dynamically selects the most important visual tokens based on the text query, significantly compressing vision data and reducing latency. The method maintains overall accuracy while enhancing temporal understanding.
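
The selection step can be sketched as scoring each visual token against the query embedding and keeping only the top-k. QTSplus uses a learned selector; the dot-product scoring below is a hypothetical stand-in:

```python
import numpy as np

def select_tokens(vision_tokens: np.ndarray, query_emb: np.ndarray, keep: int):
    """Score each visual token by similarity to the text-query embedding
    and keep the `keep` highest-scoring tokens (best first)."""
    scores = vision_tokens @ query_emb           # (num_tokens,)
    top = np.argsort(scores)[-keep:][::-1]       # indices, best-first
    return vision_tokens[top], top

rng = np.random.default_rng(1)
tokens = rng.standard_normal((1000, 32))  # e.g. tokens pooled from many frames
query = rng.standard_normal(32)           # embedded text query
kept, idx = select_tokens(tokens, query, keep=64)
# 1000 tokens compressed to 64 before they ever reach the LLM.
```

Because selection happens before the language model, the LLM's context cost scales with `keep`, not with video length.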

🔹 Publication Date: Published on Nov 14

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.11910
• HF Collection: https://huggingface.co/collections/AlpachinoNLP/qtsplus
• PDF: https://arxiv.org/pdf/2511.11910
• Project Page: https://qtsplus.github.io/
• Github: https://github.com/Siyou-Li/QTSplus

🔹 Models citing this paper:
https://huggingface.co/AlpachinoNLP/QTSplus-3B
https://huggingface.co/AlpachinoNLP/QTSplus-3B-FT

Spaces citing this paper:
https://huggingface.co/spaces/AlpachinoNLP/QTSplus-3B

==================================

#MultimodalAI #VideoAI #LLM #Tokenization #ComputerVision
DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research

📝 Summary:
RLER trains deep research models for long-form tasks using rubrics that co-evolve with the policy model, enabling DR Tulu-8B to outperform open models and match proprietary systems while being more cost-effective.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19399
• PDF: https://arxiv.org/pdf/2511.19399
• Github: https://github.com/rlresearch/dr-tulu

🔹 Models citing this paper:
https://huggingface.co/rl-research/DR-Tulu-8B
https://huggingface.co/rl-research/DR-Tulu-SFT-8B

Datasets citing this paper:
https://huggingface.co/datasets/rl-research/dr-tulu-sft-data
https://huggingface.co/datasets/rl-research/dr-tulu-rl-data

==================================

#ReinforcementLearning #LLMs #DeepLearning #AIResearch #MachineLearning
HunyuanVideo 1.5 Technical Report

📝 Summary:
HunyuanVideo 1.5 is a lightweight, open-source video generation model achieving state-of-the-art visual quality and motion coherence. It employs an advanced DiT architecture with SSTA and an efficient video super-resolution network, enabling high-quality video creation on consumer GPUs.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18870
• PDF: https://arxiv.org/pdf/2511.18870
• Github: https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5

==================================

#VideoGeneration #AI #DeepLearning #OpenSource #DiffusionModels
Flow Map Distillation Without Data

📝 Summary:
This paper introduces a data-free framework for flow map distillation, eliminating the need for external datasets. By sampling only from the prior distribution, it avoids data mismatch risks and achieves state-of-the-art fidelity with minimal sampling steps, surpassing all data-based alternatives.
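
The idea — distill a multi-step teacher into a one-step student using only samples drawn from the prior, never a dataset — can be shown on a toy linear flow. Everything here (the ODE, the scalar student, the fit) is an illustrative assumption, not the paper's method:

```python
import numpy as np

def teacher_flow(x0: np.ndarray, steps: int = 100) -> np.ndarray:
    """Multi-step Euler integration of dx/dt = -x from t=0 to t=1,
    standing in for a pretrained teacher's flow map."""
    x, dt = x0.copy(), 1.0 / steps
    for _ in range(steps):
        x = x + dt * (-x)
    return x

# Data-free distillation: inputs come only from the prior distribution.
rng = np.random.default_rng(3)
x0 = rng.standard_normal(10_000)     # prior samples, no external dataset
target = teacher_flow(x0)            # teacher's many-step output

# One-step student x1 = a * x0, fit by least squares on prior samples.
a = (x0 @ target) / (x0 @ x0)
# `a` recovers ~exp(-1) ≈ 0.368, the exact one-step flow map for this ODE.
```

Since the student only ever sees prior samples, there is no train/inference data mismatch — the property the paper argues data-based distillation risks losing.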

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19428
• PDF: https://arxiv.org/pdf/2511.19428

==================================

#FlowMapDistillation #DataFreeLearning #MachineLearning #DeepLearning #AIResearch
Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens

📝 Summary:
Chain-of-Visual-Thought (COVT) enables VLMs to improve dense visual perception by reasoning through continuous visual tokens. These tokens capture rich perceptual cues such as 2D appearance and 3D geometry from lightweight vision experts. COVT consistently boosts VLM performance across diverse benchmarks.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19418
• PDF: https://arxiv.org/pdf/2511.19418
• Project Page: https://wakalsprojectpage.github.io/comt-website/

==================================

#VLMs #ComputerVision #AI #MachineLearning #VisualReasoning
MASS: Motion-Aware Spatial-Temporal Grounding for Physics Reasoning and Comprehension in Vision-Language Models

📝 Summary:
VLMs struggle with physics-driven video reasoning. This paper introduces MASS, a method that injects spatial-temporal signals and motion tracking into VLMs, along with the MASS-Bench dataset. MASS significantly improves VLM performance on physics tasks, outperforming baselines and achieving state-of-the-art results.

🔹 Publication Date: Published on Nov 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18373
• PDF: https://arxiv.org/pdf/2511.18373

==================================

#VLMs #PhysicsAI #ComputerVision #AIResearch #MachineLearning
Pillar-0: A New Frontier for Radiology Foundation Models

📝 Summary:
Pillar-0 is a new radiology foundation model pretrained on diverse CT/MRI scans, utilizing RATE for scalable label extraction. It significantly outperforms existing models across various radiology tasks and extends to new applications like lung cancer risk prediction and brain hemorrhage detection.

🔹 Publication Date: Published on Nov 21

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.17803
• PDF: https://arxiv.org/pdf/2511.17803
• Github: https://github.com/YalaLab/rate-evals

==================================

#Radiology #FoundationModels #AI #MedicalImaging #MachineLearning