ML Research Hub – Telegram
ML Research Hub
32.7K subscribers
4.03K photos
230 videos
23 files
4.34K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Reg-DPO: SFT-Regularized Direct Preference Optimization with GT-Pair for Improving Video Generation

📝 Summary:
This paper presents GT-Pair for automatic preference data construction and Reg-DPO, which adds SFT loss to DPO for stable training. Combined with memory optimizations, it significantly improves video generation quality, outperforming existing methods.
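
💻 Code sketch: a minimal PyTorch rendering of adding an SFT term to the DPO objective. The weight lambda_sft and function names are illustrative assumptions, not the paper's exact formulation.
```python
# Illustrative SFT-regularized DPO loss (names and lambda_sft are assumptions).
import torch
import torch.nn.functional as F

def reg_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps,
                 beta=0.1, lambda_sft=1.0):
    # Standard DPO term: log-sigmoid of the scaled log-ratio margin.
    margin = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    dpo_term = -F.logsigmoid(margin).mean()
    # SFT regularizer: negative log-likelihood of the preferred samples,
    # which anchors the policy to good data and stabilizes training.
    sft_term = -policy_chosen_logps.mean()
    return dpo_term + lambda_sft * sft_term

# Toy usage with per-sample sequence log-probabilities.
lp = torch.randn(4)
print(reg_dpo_loss(lp, lp - 1.0, lp.detach(), lp.detach() - 1.0))
```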

🔹 Publication Date: Published on Nov 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01450
• PDF: https://arxiv.org/pdf/2511.01450
• Github: https://github.com/JieDuTQS/Reg-DPO

🔹 Models citing this paper:
https://huggingface.co/dujielvtqs/Reg-DPO

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VideoGeneration #GenerativeAI #DeepLearning #DPO #AIResearch
AyurParam: A State-of-the-Art Bilingual Language Model for Ayurveda

📝 Summary:
AyurParam-2.9B is a bilingual language model for Ayurveda, outperforming smaller models and competing with larger ones on medical tasks. This highlights the need for domain adaptation and high-quality data.
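
💻 Code sketch: how one might query the released checkpoint with standard Hugging Face APIs, assuming the repo linked below loads as an ordinary causal LM (an assumption, not verified here).
```python
# Assumes bharatgenai/AyurParam (linked below) works with standard
# transformers causal-LM loading; adjust if the repo uses a custom format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bharatgenai/AyurParam"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What does Ayurveda recommend for improving digestion?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```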

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.02374
• PDF: https://arxiv.org/pdf/2511.02374

🔹 Models citing this paper:
https://huggingface.co/bharatgenai/AyurParam

Spaces citing this paper:
https://huggingface.co/spaces/Swanand3/BharatGen_AyurParam

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#Ayurveda #LanguageModel #BilingualAI #NLP #HealthcareAI
3D Gaussian Splatting for Real-Time Radiance Field Rendering

📝 Summary:
This paper introduces a method using 3D Gaussians for scene representation to achieve state-of-the-art, high-quality real-time novel-view synthesis at 1080p resolution. It optimizes anisotropic Gaussians and uses a fast rendering algorithm, outperforming previous radiance field methods.
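
💻 Code sketch: the anisotropic covariance at the heart of the representation, built as Σ = R S Sᵀ Rᵀ from a quaternion rotation and per-axis scales. A NumPy illustration of the formulation, not the authors' CUDA rasterizer.
```python
# Build an anisotropic 3D Gaussian covariance (Sigma = R S S^T R^T).
import numpy as np

def quat_to_rotmat(q):
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance_3d(scales, quat):
    R = quat_to_rotmat(np.asarray(quat, dtype=float))
    S = np.diag(scales)            # per-axis standard deviations
    return R @ S @ S.T @ R.T       # positive semi-definite by construction

sigma = covariance_3d([0.05, 0.02, 0.01], [0.9, 0.1, 0.3, 0.2])
print(np.linalg.eigvalsh(sigma))   # all eigenvalues are non-negative
```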

🔹 Publication Date: Published on Aug 8, 2023

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2308.04079
• PDF: https://arxiv.org/pdf/2308.04079
• Github: https://github.com/graphdeco-inria/gaussian-splatting

Datasets citing this paper:
https://huggingface.co/datasets/Voxel51/gaussian_splatting

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#3DGaussianSplatting #RadianceFields #ComputerGraphics #RealTimeRendering #NovelViewSynthesis
🤖🧠 Krea Realtime 14B: Redefining Real-Time Video Generation with AI

🗓️ 05 Nov 2025
📚 AI News & Trends

The field of artificial intelligence is undergoing a remarkable transformation and one of the most exciting developments is the rise of real-time video generation. From cinematic visual effects to immersive virtual environments, AI is rapidly blurring the boundaries between imagination and reality. At the forefront of this innovation stands Krea Realtime 14B, an advanced open-source ...

#AI #RealTimeVideo #ArtificialIntelligence #OpenSource #VideoGeneration #KreaRealtime14B
DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion

📝 Summary:
DyPE enhances diffusion transformers for ultra-high-resolution image generation by dynamically adjusting positional encodings. This training-free method allows pre-trained models to synthesize images far beyond their training resolution, achieving state-of-the-art fidelity without extra sampling cost.
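
💻 Code sketch: the general mechanism behind training-free positional extrapolation, i.e. rescaling sinusoidal frequencies so a model trained at one resolution covers a larger position range. The scaling rule here is illustrative, not DyPE's dynamic schedule.
```python
# Resolution-aware scaling of sinusoidal position frequencies (illustrative).
import numpy as np

def scaled_sincos_1d(num_pos, dim, train_pos, base=10000.0):
    scale = num_pos / train_pos                  # how far past the training resolution
    freqs = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    freqs = freqs / scale                        # stretch wavelengths to cover the new range
    angles = np.outer(np.arange(num_pos), freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Position embeddings for a 4096-long axis from a model trained on 1024 positions.
pe = scaled_sincos_1d(num_pos=4096, dim=64, train_pos=1024)
print(pe.shape)  # (4096, 64)
```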

🔹 Publication Date: Published on Oct 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.20766
• PDF: https://arxiv.org/pdf/2510.20766
• Project Page: https://noamissachar.github.io/DyPE/
• Github: https://github.com/guyyariv/DyPE

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#DiffusionModels #ImageGeneration #HighResolution #DeepLearning #ComputerVision
MME-CC: A Challenging Multi-Modal Evaluation Benchmark of Cognitive Capacity

📝 Summary:
MME-CC is a new vision-grounded benchmark that evaluates multimodal large language models' cognitive capacity on spatial, geometric, and knowledge-based reasoning tasks. It reveals that while some models lead, spatial and geometric reasoning remain broadly weak. This highlights the need for better evaluation of these capabilities.

🔹 Publication Date: Published on Nov 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.03146
• PDF: https://arxiv.org/pdf/2511.03146
• Project Page: https://randomtutu.github.io/MME-CC/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#MultimodalAI #LLMs #Benchmarking #CognitiveAI #ComputerVision
LEGO-Eval: Towards Fine-Grained Evaluation on Synthesizing 3D Embodied Environments with Tool Augmentation

📝 Summary:
The paper introduces LEGO-Eval, a tool-augmented framework, and LEGO-Bench, a detailed instruction benchmark, to improve 3D scene evaluation. It shows that LEGO-Eval accurately assesses scene-instruction alignment, outperforming VLMs, and that current generation methods largely fail to create realistic scenes.

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.03001
• PDF: https://arxiv.org/pdf/2511.03001
• Project Page: https://gyeomh.github.io/LEGO-Eval/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#EmbodiedAI #3DGeneration #EvaluationMetrics #VLMs #Benchmarking
Let Multimodal Embedders Learn When to Augment Query via Adaptive Query Augmentation

📝 Summary:
M-Solomon is a multimodal embedder that adaptively decides when to augment queries. It uses a multimodal LLM to generate augmentations for queries that require them, learning to augment only when necessary. This approach improves performance and significantly reduces embedding latency compared to always-augmenting baselines.
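
💻 Code sketch: the adaptive-gating idea, i.e. pay for MLLM-generated augmentation only when a gate predicts the query needs it. All components below are hypothetical stand-ins, not M-Solomon's actual modules.
```python
# Hypothetical adaptive query augmentation: gate first, augment only if needed.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AdaptiveEmbedder:
    embed: Callable[[str], List[float]]        # base embedder
    should_augment: Callable[[str], bool]      # learned gate (stubbed here)
    augment_with_mllm: Callable[[str], str]    # expensive MLLM call (stubbed here)

    def encode(self, query: str) -> List[float]:
        if self.should_augment(query):
            query = query + " " + self.augment_with_mllm(query)
        return self.embed(query)

# Toy usage with stub components.
embedder = AdaptiveEmbedder(
    embed=lambda q: [float(len(q))],
    should_augment=lambda q: len(q.split()) < 4,  # e.g. augment short, vague queries
    augment_with_mllm=lambda q: "(expanded description of the query)",
)
print(embedder.encode("red shoes"))
```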

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.02358
• PDF: https://arxiv.org/pdf/2511.02358

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#MultimodalAI #LLM #Embeddings #MachineLearning #DeepLearning
LiveTradeBench: Seeking Real-World Alpha with Large Language Models

📝 Summary:
LiveTradeBench evaluates LLMs in live trading environments with real-time data, multi-asset portfolios, and multiple markets. It reveals that strong static benchmark scores don't predict trading success, and that some LLMs can adapt to live market signals. This highlights a gap in current LLM evaluations.
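
💻 Code sketch: the shape of a live-trading evaluation loop, where an agent proposes target portfolio weights at each step and the harness rebalances and marks to market. Names and mechanics are illustrative, not the benchmark's API.
```python
# Toy portfolio-rebalancing loop for evaluating a trading agent (illustrative).
import numpy as np

def run_episode(agent, prices, cash=1_000.0):
    holdings = np.zeros(prices.shape[1])        # units held per asset
    for t in range(len(prices) - 1):
        weights = agent(prices[: t + 1])        # target allocation (sums to <= 1)
        value = cash + holdings @ prices[t]
        holdings = weights * value / prices[t]  # rebalance at current prices
        cash = value - holdings @ prices[t]
    return cash + holdings @ prices[-1]         # final portfolio value

uniform_agent = lambda history: np.full(history.shape[1], 0.5)   # 50/50 over 2 assets
prices = np.abs(np.random.default_rng(0).normal(100, 5, size=(30, 2)))
print(run_episode(uniform_agent, prices))
```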

🔹 Publication Date: Published on Nov 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.03628
• PDF: https://arxiv.org/pdf/2511.03628
• Project Page: https://trade-bench.live/
• Github: https://github.com/ulab-uiuc/live-trade-bench

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLM #AlgorithmicTrading #FinancialAI #QuantitativeFinance #AIResearch
Kinematify: Open-Vocabulary Synthesis of High-DoF Articulated Objects

📝 Summary:
Kinematify is an automated framework that synthesizes high-DoF articulated objects from images or text. It infers kinematic topologies and estimates joint parameters, combining MCTS search with geometry-driven optimization for physically consistent models.

🔹 Publication Date: Published on Nov 3

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01294
• PDF: https://arxiv.org/pdf/2511.01294

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#3DModeling #ComputerVision #Robotics #AIResearch #Kinematics
Diffusion Language Models are Super Data Learners

📝 Summary:
Diffusion language models (DLMs) consistently outperform autoregressive models, especially in low-data settings. This is due to any-order modeling, iterative bidirectional denoising, and Monte Carlo augmentation. DLMs maintain their advantages at scale, achieving strong performance even by repeating limited data.
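
💻 Code sketch: one masked-denoising training step of the kind diffusion LMs use — sample a corruption ratio, mask tokens, and train a bidirectional model to recover them. Illustrative of the objective family, not the paper's exact recipe.
```python
# Minimal masked-denoising step for a diffusion-style language model.
import torch
import torch.nn.functional as F

def masked_denoising_loss(model, tokens, mask_token_id):
    ratio = torch.rand(())                           # random corruption level per step
    mask = torch.rand(tokens.shape) < ratio
    corrupted = torch.where(mask, torch.full_like(tokens, mask_token_id), tokens)
    logits = model(corrupted)                        # (batch, seq, vocab), bidirectional
    if not mask.any():
        return logits.sum() * 0.0
    return F.cross_entropy(logits[mask], tokens[mask])

# Toy usage with a stand-in "model".
vocab, seq = 100, 16
toy_model = torch.nn.Sequential(torch.nn.Embedding(vocab + 1, 32), torch.nn.Linear(32, vocab))
tokens = torch.randint(0, vocab, (4, seq))
print(masked_denoising_loss(toy_model, tokens, mask_token_id=vocab))
```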

🔹 Publication Date: Published on Nov 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.03276
• PDF: https://arxiv.org/pdf/2511.03276
• Project Page: https://github.com/JinjieNi/dlms-are-super-data-learners
• Github: https://github.com/JinjieNi/OpenMoE2

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#DiffusionModels #LanguageModels #MachineLearning #LowDataLearning #AI
Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning

📝 Summary:
Orion-MSP is a novel tabular in-context learning architecture addressing limitations in existing models. It incorporates multi-scale processing, block-sparse attention, and a Perceiver-style memory. Orion-MSP achieves state-of-the-art performance on various benchmarks while scaling effectively to high-dimensional tables.
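
💻 Code sketch: building a block-sparse attention mask (local blocks plus a few global tokens), the generic pattern behind block-sparse attention. Orion-MSP's actual sparsity layout and multi-scale scheme may differ.
```python
# Generic block-sparse attention mask: local blocks + global tokens.
import numpy as np

def block_sparse_mask(seq_len, block_size, num_global):
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for start in range(0, seq_len, block_size):      # local block-diagonal attention
        end = min(start + block_size, seq_len)
        mask[start:end, start:end] = True
    mask[:, :num_global] = True                      # every token attends to globals
    mask[:num_global, :] = True                      # globals attend to every token
    return mask

mask = block_sparse_mask(seq_len=12, block_size=4, num_global=2)
print(int(mask.sum()), "allowed pairs out of", mask.size)
```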

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.02818
• PDF: https://arxiv.org/pdf/2511.02818

🔹 Models citing this paper:
https://huggingface.co/Lexsi/Orion-MSP

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#TabularLearning #SparseAttention #MachineLearning #DeepLearning #AI
TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models

📝 Summary:
TabTune is a unified library that standardizes the workflow for tabular foundation models. It provides consistent access to state-of-the-art models, diverse adaptation strategies, and integrated evaluation for performance, calibration, and fairness.
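
💻 Code sketch: the three evaluation axes the library integrates (performance, calibration, fairness), computed directly with scikit-learn/NumPy on any model's predictions. These are the underlying metrics, not TabTune's own API.
```python
# Performance, calibration (ECE), and a fairness gap from raw predictions.
import numpy as np
from sklearn.metrics import accuracy_score

def expected_calibration_error(y_true, y_prob, n_bins=10):
    bins = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        idx = bins == b
        if idx.any():  # weight each bin by its share of samples
            ece += idx.mean() * abs(y_prob[idx].mean() - y_true[idx].mean())
    return ece

def demographic_parity_gap(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_prob = np.clip(y_true * 0.6 + rng.uniform(0, 0.4, 200), 0, 1)
group = rng.integers(0, 2, 200)
y_pred = (y_prob > 0.5).astype(int)
print(accuracy_score(y_true, y_pred),
      expected_calibration_error(y_true, y_prob),
      demographic_parity_gap(y_pred, group))
```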

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.02802
• PDF: https://arxiv.org/pdf/2511.02802
• Github: https://github.com/Lexsi-Labs/TabTune

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#TabularData #FoundationModels #MachineLearning #DataScience #AIResearch
UniAVGen: Unified Audio and Video Generation with Asymmetric Cross-Modal Interactions

📝 Summary:
UniAVGen uses dual Diffusion Transformers and Asymmetric Cross-Modal Interaction for unified audio-video generation. This framework ensures precise spatiotemporal synchronization and semantic consistency. It outperforms existing methods in sync and consistency with far fewer training samples.
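
💻 Code sketch: one way to picture asymmetric cross-modal interaction — dense cross-attention in one direction and only a pooled summary in the other. This is an illustrative reading of the asymmetry, not UniAVGen's actual block design.
```python
# Asymmetric cross-modal interaction between audio and video token streams.
import torch
import torch.nn as nn

d, n_audio, n_video = 64, 20, 50
audio = torch.randn(1, n_audio, d)
video = torch.randn(1, n_video, d)

# Dense direction: audio tokens attend to all video tokens.
cross_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
audio_ctx, _ = cross_attn(query=audio, key=video, value=video)

# Lightweight direction: video receives only a pooled audio summary.
audio_summary = audio_ctx.mean(dim=1, keepdim=True)
video_ctx = video + audio_summary

print(audio_ctx.shape, video_ctx.shape)  # (1, 20, 64) (1, 50, 64)
```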

🔹 Publication Date: Published on Nov 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.03334
• PDF: https://arxiv.org/pdf/2511.03334
• Project Page: https://mcg-nju.github.io/UniAVGen/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#GenerativeAI #AudioVideoGeneration #DiffusionModels #CrossModalAI #DeepLearning
MemOS: A Memory OS for AI System

📝 Summary:
MemOS is a memory operating system that unifies plaintext, activation-based, and parameter-level memories for LLMs. It manages memory as a system resource with MemCubes, enabling efficient storage, retrieval, continual learning, and personalized modeling.
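
💻 Code sketch: the "memory as a managed system resource" idea in miniature — typed memory units with insert, retrieval, and eviction. The MemCube name mirrors the paper's terminology; the fields and scoring here are illustrative, not MemOS's implementation.
```python
# Toy memory store with typed units, keyword retrieval, and least-used eviction.
import time
from dataclasses import dataclass, field

@dataclass
class MemCube:
    kind: str                      # e.g. "plaintext", "activation", "parameter"
    payload: str
    created: float = field(default_factory=time.time)
    hits: int = 0

class MemoryStore:
    def __init__(self, capacity=100):
        self.capacity, self.items = capacity, []

    def insert(self, cube):
        if len(self.items) >= self.capacity:       # evict the least-used cube
            self.items.remove(min(self.items, key=lambda c: c.hits))
        self.items.append(cube)

    def retrieve(self, query, k=3):
        score = lambda c: sum(w in c.payload for w in query.split())
        top = sorted(self.items, key=score, reverse=True)[:k]
        for cube in top:
            cube.hits += 1                          # usage feeds back into eviction
        return top

store = MemoryStore()
store.insert(MemCube("plaintext", "user prefers concise answers"))
print(store.retrieve("concise answer style"))
```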

🔹 Publication Date: Published on Jul 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2507.03724
• PDF: https://arxiv.org/pdf/2507.03724
• Project Page: https://memos.openmem.net/
• Github: https://github.com/MemTensor/MemOS

🔹 Models citing this paper:
https://huggingface.co/kagvi13/HMP

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#MemOS #LLMs #MemoryManagement #OperatingSystems #AI
FG-CLIP: Fine-Grained Visual and Textual Alignment

📝 Summary:
FG-CLIP enhances fine-grained multimodal understanding, overcoming CLIP's limitations with coarse captions. It uses large models to generate long captions, a high-quality dataset with region boxes and detailed captions, and hard negative samples. FG-CLIP outperforms existing methods on fine-grained and general tasks.
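
💻 Code sketch: an image-text contrastive loss extended with explicit hard-negative captions, the general mechanism fine-grained alignment training builds on. Shapes and weighting are illustrative, not FG-CLIP's exact objective.
```python
# InfoNCE-style contrastive loss with extra hard-negative captions per image.
import torch
import torch.nn.functional as F

def contrastive_with_hard_negatives(img, txt, hard_txt, temperature=0.07):
    img, txt = F.normalize(img, dim=-1), F.normalize(txt, dim=-1)
    hard_txt = F.normalize(hard_txt, dim=-1)
    logits = img @ txt.T / temperature                                  # (B, B) in-batch captions
    hard_logits = (img.unsqueeze(1) * hard_txt).sum(-1) / temperature   # (B, H) hard negatives
    all_logits = torch.cat([logits, hard_logits], dim=1)
    targets = torch.arange(img.size(0))          # the paired caption is the positive
    return F.cross_entropy(all_logits, targets)

B, H, D = 8, 4, 512
loss = contrastive_with_hard_negatives(torch.randn(B, D), torch.randn(B, D), torch.randn(B, H, D))
print(loss.item())
```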

🔹 Publication Date: Published on May 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2505.05071
• PDF: https://arxiv.org/pdf/2505.05071
• Github: https://github.com/360CVGroup/FG-CLIP

🔹 Models citing this paper:
https://huggingface.co/qihoo360/fg-clip2-base
https://huggingface.co/qihoo360/fg-clip-large
https://huggingface.co/qihoo360/fg-clip-base

Datasets citing this paper:
https://huggingface.co/datasets/qihoo360/FineHARD
https://huggingface.co/datasets/qihoo360/DCI-CN
https://huggingface.co/datasets/qihoo360/DOCCI-CN

Spaces citing this paper:
https://huggingface.co/spaces/qihoo360/FG-CLIP-Retrieval-demo
https://huggingface.co/spaces/qihoo360/FG-CLIP-Densefeature-demo
https://huggingface.co/spaces/qihoo360/FG-CLIP2-Retrieval-demo

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#FGCLIP #FineGrainedAI #MultimodalLearning #ComputerVision #DeepLearning
The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute

📝 Summary:
Sequential scaling for language model reasoning consistently outperforms parallel self-consistency at matched compute, achieving significant accuracy gains. The paper introduces inverse-entropy weighted voting to further enhance sequential scaling, establishing it as the superior test-time strategy.
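
💻 Code sketch: inverse-entropy weighted voting — each chain votes for its final answer with weight 1/(entropy + eps), so more confident (lower-entropy) chains count more. How the per-chain entropy is estimated follows the paper; here it is taken as given.
```python
# Inverse-entropy weighted voting over candidate answers.
from collections import defaultdict

def inverse_entropy_vote(answers, entropies, eps=1e-6):
    scores = defaultdict(float)
    for ans, h in zip(answers, entropies):
        scores[ans] += 1.0 / (h + eps)       # low entropy -> large weight
    return max(scores, key=scores.get)

answers   = ["42", "42", "41"]
entropies = [2.5, 1.8, 0.2]                  # one chain is far more confident
print(inverse_entropy_vote(answers, entropies))   # -> "41"
```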

🔹 Publication Date: Published on Nov 4

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.02309
• PDF: https://arxiv.org/pdf/2511.02309

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLM #AIReasoning #SelfConsistency #SequentialScaling #InverseEntropy
In-the-Flow Agentic System Optimization for Effective Planning and Tool Use

📝 Summary:
AgentFlow is a trainable agentic framework that optimizes its planner in-the-flow within multi-turn interactions. It uses Flow-GRPO to train its modules and significantly outperforms top baselines and GPT-4o on various reasoning and tool-use tasks.
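
💻 Code sketch: the group-relative advantage computation that GRPO-style methods build on — rewards from several rollouts of the same task are normalized within the group, so no learned critic is needed. Flow-GRPO's in-the-flow credit assignment adds more on top of this; see the paper for details.
```python
# Group-relative advantages: normalize rollout rewards within each task group.
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Four rollouts of the same query, scored by task success.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```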

🔹 Publication Date: Published on Oct 7

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.05592
• PDF: https://arxiv.org/pdf/2510.05592
• Project Page: https://agentflow.stanford.edu/
• Github: https://github.com/lupantech/AgentFlow

Spaces citing this paper:
https://huggingface.co/spaces/AgentFlow/agentflow
https://huggingface.co/spaces/bioliveir4/agentflow2
https://huggingface.co/spaces/bioliveir4/agentflow

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#AI #MachineLearning #AIagents #ToolUse #Planning
Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning

📝 Summary:
PaperCoder is a multi-agent LLM framework that automates converting machine learning papers into functional code repositories. It uses planning, analysis, and generation stages with specialized agents. Evaluations show it effectively creates high-quality implementations, outperforming strong baselines.
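
💻 Code sketch: the planning → analysis → generation pipeline shape, with the LLM call left as a stub. Stage prompts, agent roles, and the assumed one-spec-per-line format are illustrative, not PaperCoder's actual implementation.
```python
# Three-stage paper-to-repo pipeline with a stubbed LLM call (hypothetical).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def paper_to_repo(paper_text: str) -> dict:
    plan = call_llm(f"Draft a file-level implementation plan for:\n{paper_text}")
    analysis = call_llm(f"For this plan, specify interfaces and data flow:\n{plan}")
    files = {}
    for spec in analysis.splitlines():               # assumes one file spec per line
        if spec.strip():
            name = spec.split(":")[0].strip()
            files[name] = call_llm(f"Write the full source for: {spec}")
    return files                                     # mapping: filename -> source code
```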

🔹 Publication Date: Published on Apr 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2504.17192
• PDF: https://arxiv.org/pdf/2504.17192
• Project Page: https://huggingface.co/papers/2504.15080
• Github: https://github.com/going-doer/Paper2Code

Datasets citing this paper:
https://huggingface.co/datasets/iaminju/paper2code

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#CodeGeneration #MachineLearning #LLM #AI #Automation
Grounded Misunderstandings in Asymmetric Dialogue: A Perspectivist Annotation Scheme for MapTask

📝 Summary:
This paper introduces a perspectivist annotation scheme for the MapTask corpus. It separately tracks speaker and addressee interpretations to reveal how understanding emerges and diverges. Findings show subtle discrepancies cause referential misalignment despite apparent agreement.

🔹 Publication Date: Published on Nov 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.03718
• PDF: https://arxiv.org/pdf/2511.03718

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#Dialogue #NLP #Communication #Pragmatics #CorpusLinguistics