ML Research Hub – Telegram
ML Research Hub
32.7K subscribers
4.01K photos
229 videos
23 files
4.32K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
UI-S1: Advancing GUI Automation via Semi-online Reinforcement Learning

📝 Summary:
UI-S1 introduces Semi-online RL for GUI automation, simulating online RL on offline trajectories to overcome the limitations of current methods. It achieved SOTA performance on dynamic benchmarks, bridging the training efficiency of offline methods with online-style reasoning.
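
The core idea can be sketched in plain Python, assuming a simple (state, action, reward) log; this toy version ends a rollout on the first mismatch, whereas UI-S1 patches mismatched actions to continue along the trajectory:

```python
def semi_online_rollout(trajectory, policy):
    """Replay a logged (state, action, reward) trajectory, querying the live
    policy at each step. While the policy reproduces the logged action, the
    logged outcome is reused, so the rollout behaves as if it were online.
    A mismatch ends the rollout here; UI-S1 instead patches the trajectory."""
    total_reward = 0.0
    for state, logged_action, reward in trajectory:
        if policy(state) != logged_action:
            break  # off-trajectory: no logged outcome to reuse
        total_reward += reward
    return total_reward

# toy example: a 3-step logged GUI trajectory
traj = [("home", "open_app", 1.0), ("app", "tap_search", 1.0),
        ("search", "type_query", 1.0)]
greedy = {"home": "open_app", "app": "tap_search", "search": "type_query"}
print(semi_online_rollout(traj, greedy.get))  # → 3.0
```

A policy that stays on-trajectory earns the full logged return without a live environment, which is what lets offline data stand in for online interaction.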

🔹 Publication Date: Published on Sep 15

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2509.11543
• PDF: https://arxiv.org/pdf/2509.11543
• Github: https://github.com/X-PLUG/MobileAgent/tree/main/UI-S1

🔹 Models citing this paper:
https://huggingface.co/mPLUG/UI-S1-7B

🔹 Datasets citing this paper:
https://huggingface.co/datasets/mPLUG/UI_S1_dataset

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ReinforcementLearning #GUIAutomation #AI #MachineLearning #IntelligentAgents
PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex Task Automation on PC

📝 Summary:
PC-Agent is a hierarchical multi-agent framework improving MLLM-based GUI agents for complex PC tasks. It uses an Active Perception Module and a hierarchical decision-making architecture with Manager, Progress, and Decision agents, while a Reflection agent provides feedback. It achieved a 32% improvement in task success rate.
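
A toy sketch of the hierarchy follows; each role is stubbed with a rule (the paper uses MLLM calls per agent, and the Progress agent, which tracks task state, is omitted here for brevity):

```python
def manager(instruction):
    # Manager agent: decompose the instruction into subtasks (rule-based stand-in)
    return [s.strip() for s in instruction.split(" then ")]

def decision(subtask, screen_elements):
    # Decision agent: choose a concrete GUI action given the perceived screen
    return ("click", subtask) if subtask in screen_elements else ("search", subtask)

def reflection(action, succeeded):
    # Reflection agent: convert the execution outcome into feedback for a retry
    return None if succeeded else ("retry", action)

def run_task(instruction, screen_elements):
    trace = []
    for sub in manager(instruction):             # top level: Manager
        act = decision(sub, screen_elements)     # bottom level: Decision
        fb = reflection(act, act[0] == "click")  # feedback loop: Reflection
        trace.append((sub, act, fb))
    return trace
```

Separating decomposition, per-step decisions, and post-hoc reflection is the design choice the paper argues for: each agent sees only the context its role needs.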

🔹 Publication Date: Published on Feb 20

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2502.14282
• PDF: https://arxiv.org/pdf/2502.14282
• Github: https://github.com/X-PLUG/MobileAgent/tree/main/PC-Agent

🔹 Spaces citing this paper:
https://huggingface.co/spaces/junyangwang0410/PC-Agent

==================================


#MultiAgentSystems #AIAgents #MLLMs #PCAutomation #DeepLearning
Look Before You Leap: A GUI-Critic-R1 Model for Pre-Operative Error Diagnosis in GUI Automation

📝 Summary:
GUI automation is prone to critical execution errors. This paper introduces GUI-Critic-R1, a pre-operative critic model trained with Suggestion-aware Gradient Relative Policy Optimization, which provides feedback and diagnoses errors before execution. It significantly improves critic accuracy, enhancing automation reliability.
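
The "look before you leap" pattern amounts to gating every action on a critic's verdict. A minimal sketch, with a hypothetical rule-based critic standing in for the trained model:

```python
def pre_operative_gate(proposed_action, critic, threshold=0.5):
    """Consult the critic *before* executing a GUI action.

    `critic` returns (confidence that the action is correct, suggestion);
    only confident actions reach the environment."""
    score, suggestion = critic(proposed_action)
    if score >= threshold:
        return ("execute", proposed_action)
    return ("revise", suggestion)

def toy_critic(action):
    # hypothetical critic: flags destructive actions as risky
    if "delete" in action:
        return 0.2, "confirm target before deleting"
    return 0.9, None
```

Catching a bad action before execution is cheaper than recovering from it afterwards, which is why the critic runs pre-operatively rather than as a post-hoc verifier.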

🔹 Publication Date: Published on Jun 5

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2506.04614
• PDF: https://arxiv.org/pdf/2506.04614
• Github: https://github.com/X-PLUG/MobileAgent

🔹 Models citing this paper:
https://huggingface.co/BonnieOne/GUI-Critic-R1

==================================


#GUIAutomation #ErrorDiagnosis #AI #MachineLearning #SoftwareTesting
Mobile-Agent-v3: Foundamental Agents for GUI Automation

📝 Summary:
GUI-Owl and Mobile-Agent-v3 are open-source GUI agent models achieving state-of-the-art performance on GUI benchmarks. GUI-Owl introduces large-scale environment infrastructure, diverse agent capabilities, and scalable reinforcement learning, with Mobile-Agent-v3 further improving these results.

🔹 Publication Date: Published on Aug 21

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2508.15144
• PDF: https://arxiv.org/pdf/2508.15144
• Github: https://github.com/X-PLUG/MobileAgent

🔹 Models citing this paper:
https://huggingface.co/mPLUG/GUI-Owl-7B
https://huggingface.co/mPLUG/GUI-Owl-32B
https://huggingface.co/mPLUG/GUI-Owl-7B-Desktop-RL

==================================


#GUIAgent #Automation #ReinforcementLearning #AIResearch #OpenSourceAI
MSRNet: A Multi-Scale Recursive Network for Camouflaged Object Detection

📝 Summary:
MSRNet proposes a Multi-Scale Recursive Network for camouflaged object detection. It uses a Pyramid Vision Transformer and recursive feature refinement to overcome challenges with small and multiple objects, achieving state-of-the-art results.
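
The recursion can be caricatured in a few lines; 1-D lists stand in for feature maps, and the fixed mixing step stands in for the network's convolutional refinement blocks:

```python
def recursive_refine(prediction, features, steps=3, alpha=0.5):
    """Each pass mixes the current detection map with feature evidence,
    sharpening it coarse-to-fine. Purely illustrative: MSRNet performs the
    analogous update with learned multi-scale convolutional blocks."""
    for _ in range(steps):
        prediction = [alpha * p + (1 - alpha) * f
                      for p, f in zip(prediction, features)]
    return prediction

# a flat initial guess converges toward the feature evidence
print(recursive_refine([0.0], [1.0]))  # → [0.875] after 3 steps
```

Repeating the same refinement operator is what lets the network recover small or multiple camouflaged objects that a single forward pass misses.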

🔹 Publication Date: Published on Nov 16

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.12810
• PDF: https://arxiv.org/pdf/2511.12810

🔹 Models citing this paper:
https://huggingface.co/linaa98/MSRNet

==================================


#CamouflagedObjectDetection #ObjectDetection #ComputerVision #DeepLearning #AIResearch
Representational Stability of Truth in Large Language Models

📝 Summary:
This paper introduces representational stability to assess how robustly LLMs encode truth. It found that stability is more influenced by epistemic familiarity than linguistic form. Unfamiliar statements cause larger shifts in truth judgments, highlighting a diagnostic for auditing LLMs.
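
The diagnostic can be sketched as follows; the probe here is a keyword rule, not a trained linear probe as in the paper, and the stability score is simply the mean shift under rewordings:

```python
def representational_stability(probe, statement, rewordings):
    """Mean absolute shift of a truth probe's score under rewordings.

    `probe` maps text to a truth score in [0, 1]; small shifts indicate the
    model's truth encoding is stable for this statement (toy version of the
    paper's auditing diagnostic)."""
    base = probe(statement)
    return sum(abs(probe(r) - base) for r in rewordings) / len(rewordings)

def toy_probe(text):
    # hypothetical probe: "knows" one familiar phrasing, scores others at chance
    return 0.95 if "water boils" in text else 0.5
```

Under this toy probe, a familiar rewording leaves the score unchanged while an unfamiliar one shifts it sharply, mirroring the paper's finding that epistemic familiarity drives stability more than linguistic form.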

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19166
• PDF: https://arxiv.org/pdf/2511.19166

🔹 Datasets citing this paper:
https://huggingface.co/datasets/carlomarxx/trilemma-of-truth
https://huggingface.co/datasets/samanthadies/representational_stability

==================================


#LLMs #RepresentationalStability #AITruthfulness #AISafety #AIResearch
Upsample Anything: A Simple and Hard to Beat Baseline for Feature Upsampling

📝 Summary:
Upsample Anything is a novel test-time optimization framework that enhances low-resolution features to high-resolution outputs without training. It learns an anisotropic Gaussian kernel per image, acting as a universal edge-aware operator. This method achieves state-of-the-art results across a range of tasks.
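
The per-pixel Gaussian idea can be sketched in 1-D plain Python. This is a simplification: the kernel here is isotropic and fixed, whereas the paper optimizes an anisotropic kernel per image at test time:

```python
import math

def edge_aware_upsample(lowres_feats, guide, sigma_s=1.0, sigma_r=0.1):
    """Upsample 1-D features with Gaussian weights over both spatial distance
    and guidance-image similarity, so feature values do not bleed across
    edges present in the high-res guide."""
    scale = len(guide) / len(lowres_feats)
    out = []
    for i, g in enumerate(guide):
        src_pos = i / scale  # position of this output pixel in low-res space
        wsum = fsum = 0.0
        for j, f in enumerate(lowres_feats):
            g_j = guide[min(len(guide) - 1, int(j * scale))]
            w = math.exp(-((src_pos - j) ** 2) / (2 * sigma_s ** 2)
                         - ((g - g_j) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            fsum += w * f
        out.append(fsum / wsum)
    return out
```

Upsampling a two-value feature map against a guide with a sharp step keeps the step sharp instead of linearly blurring across it, which is the edge-aware behavior the learned kernel provides.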

🔹 Publication Date: Published on Nov 20

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.16301
• PDF: https://arxiv.org/pdf/2511.16301

==================================


#Upsampling #ComputerVision #ImageProcessing #DeepLearning #AI
EvoVLA: Self-Evolving Vision-Language-Action Model

📝 Summary:
EvoVLA is a self-supervised VLA framework tackling stage hallucination in long-horizon robotic manipulation. It uses triplet contrastive learning, pose-based exploration, and memory to prevent shortcuts. EvoVLA significantly improves success rates and sample efficiency and reduces hallucination in both simulation and real-world settings.
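
The triplet contrastive component can be written down directly. Embeddings are plain lists here, and the anchor/positive/negative pairing by task stage follows the paper's description; EvoVLA applies this to learned visual-action embeddings:

```python
def stage_triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-form triplet objective with squared-Euclidean distance: pull the
    anchor stage embedding toward the matching stage and push it away from a
    hallucinated/shortcut stage by at least `margin`."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

When the negative is already farther than the positive by the margin, the loss is zero; otherwise it penalizes the embedding for confusing stages, which is how the objective suppresses stage hallucination.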

🔹 Publication Date: Published on Nov 20

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.16166
• PDF: https://arxiv.org/pdf/2511.16166
• Project Page: https://aigeeksgroup.github.io/EvoVLA/

==================================


#Robotics #VisionLanguageAction #SelfSupervisedLearning #AI #DeepLearning
One4D: Unified 4D Generation and Reconstruction via Decoupled LoRA Control

📝 Summary:
One4D is a unified framework for 4D generation and reconstruction, producing synchronized RGB frames and pointmaps. It uses Unified Masked Conditioning for varying input sparsities and Decoupled LoRA Control to achieve high-quality results across diverse tasks.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18922
• PDF: https://arxiv.org/pdf/2511.18922
• Project Page: https://mizhenxing.github.io/One4D

==================================


#4DGeneration #4DReconstruction #ComputerVision #DeepLearning #GenerativeAI
Beyond Multiple Choice: Verifiable OpenQA for Robust Vision-Language RFT

📝 Summary:
ReVeL converts multiple-choice questions to verifiable open-form questions to address unreliable MCQA metrics and answer guessing. This framework improves data efficiency and robustness for multimodal language models, revealing significant score inflation in MCQA benchmarks.
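
The conversion-plus-verification loop can be sketched as below; the string-matching verifier is a deliberately simple stand-in (ReVeL uses stronger verification), and the dict keys are illustrative:

```python
def to_open_form(mcq):
    """Convert a multiple-choice item to an open-form question by dropping
    the options, keeping the gold answer only for verification."""
    return mcq["question"], mcq["options"][mcq["answer_idx"]]

def verify(response, gold):
    # rule-based verifier: normalize both strings, then check that the
    # gold answer is actually stated in the free-form response
    norm = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
    return norm(gold) in norm(response)
```

Because the model can no longer guess among listed options, any score it earns must come from producing the answer itself, which is what exposes the MCQA score inflation the paper reports.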

🔹 Publication Date: Published on Nov 21

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.17405
• PDF: https://arxiv.org/pdf/2511.17405
• Project Page: https://flageval-baai.github.io/ReVeL/

==================================


#OpenQA #VisionLanguage #LanguageModels #AIEvaluation #MachineLearning
Language Model Council: Benchmarking Foundation Models on Highly Subjective Tasks by Consensus

📝 Summary:
Benchmarking LLMs on subjective tasks like emotional intelligence is challenging. The Language Model Council (LMC) uses a democratic process among 20 LLMs to formulate, administer, and evaluate tests. This yields more robust, less biased rankings that align better with human leaderboards.
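
A minimal sketch of consensus ranking, using a Borda count as a simple stand-in for the council's aggregation procedure:

```python
from collections import defaultdict

def council_rank(judge_rankings):
    """Aggregate each judge's ranked list of models by Borda count: a model
    earns n points for first place down to 1 for last, summed over judges.
    A toy stand-in for the LMC's democratic aggregation of 20 LLM judges."""
    scores = defaultdict(int)
    for ranking in judge_rankings:
        n = len(ranking)
        for position, model in enumerate(ranking):
            scores[model] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# three judges, three models: "a" wins despite one dissenting judge
print(council_rank([["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]))
```

Aggregating across many judges dilutes any single model's idiosyncratic bias, which is the paper's argument for a council over a lone LLM-as-judge.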

🔹 Publication Date: Published on Jun 12, 2024

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2406.08598
• PDF: https://arxiv.org/pdf/2406.08598
• Github: https://github.com/llm-council/llm-council

🔹 Datasets citing this paper:
https://huggingface.co/datasets/llm-council/emotional_application

🔹 Spaces citing this paper:
https://huggingface.co/spaces/llm-council/llm-council
https://huggingface.co/spaces/llm-council/sandbox

==================================


#LLM #Benchmarking #AIEvaluation #FoundationModels #ConsensusAI
SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation

📝 Summary:
SteadyDancer is an Image-to-Video framework that solves identity drift and motion control challenges in human image animation. It achieves robust first-frame preservation via condition reconciliation, adaptive pose, and hierarchical training, outperforming others while using fewer resources.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19320
• PDF: https://arxiv.org/pdf/2511.19320
• Project Page: https://mcg-nju.github.io/steadydancer-web
• Github: https://github.com/MCG-NJU/SteadyDancer

🔹 Models citing this paper:
https://huggingface.co/MCG-NJU/SteadyDancer-14B

🔹 Datasets citing this paper:
https://huggingface.co/datasets/MCG-NJU/X-Dance

==================================


#HumanImageAnimation #ImageToVideo #FirstFramePreservation #GenerativeAI #ComputerVision
GigaWorld-0: World Models as Data Engine to Empower Embodied AI

📝 Summary:
GigaWorld-0 is a unified world model framework that generates high-quality, diverse, and physically plausible VLA data by integrating video and 3D modeling. This synthetic data enables embodied AI models to achieve strong real-world performance on physical robots without any real-world training.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19861
• PDF: https://arxiv.org/pdf/2511.19861

==================================


#EmbodiedAI #WorldModels #SyntheticData #AI #Robotics
Unified all-atom molecule generation with neural fields

📝 Summary:
FuncBind uses neural fields and computer vision models to generate diverse all-atom molecules across various systems, from small molecules to antibodies. This modality-agnostic framework achieves competitive performance in structure-conditioned molecular design and can generate novel binders.
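
The neural-field view treats a molecule as a continuous function over 3-D space. In this toy sketch a fixed sum of Gaussians stands in for FuncBind's learned network, and all names are illustrative:

```python
import math

def occupancy_field(coord, atom_centers, width=1.0):
    """Map a 3-D coordinate to soft atomic occupancy: high near atoms, near
    zero elsewhere. Representing molecules as such functions, rather than as
    atom lists, is what makes the framework modality-agnostic."""
    return sum(
        math.exp(-sum((c - a) ** 2 for c, a in zip(coord, center))
                 / (2 * width ** 2))
        for center in atom_centers
    )
```

Because the representation is a function rather than a fixed-length atom list, the same model can describe a small molecule or an antibody without changing its input format.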

🔹 Publication Date: Published on Nov 19

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.15906
• PDF: https://arxiv.org/pdf/2511.15906
• Github: https://github.com/prescient-design/funcbind/

🔹 Models citing this paper:
https://huggingface.co/mkirchmeyer/funcbind

==================================


#MoleculeGeneration #NeuralFields #DrugDiscovery #AIforScience #ComputationalChemistry
Does Understanding Inform Generation in Unified Multimodal Models? From Analysis to Path Forward

📝 Summary:
UniSandbox evaluates Unified Multimodal Models, revealing a gap between understanding and generation in reasoning and knowledge transfer. Chain-of-Thought and self-training effectively bridge this gap, providing insights for future model design.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20561
• PDF: https://arxiv.org/pdf/2511.20561
• Github: https://github.com/PKU-YuanGroup/UniSandBox

==================================


#MultimodalAI #AIUnderstanding #ChainOfThought #LLMs #AIResearch
MedSAM3: Delving into Segment Anything with Medical Concepts

📝 Summary:
MedSAM-3 is a text-promptable medical segmentation model fine-tuned on SAM 3 using semantic conceptual labels. It enables precise, open-vocabulary text-based segmentation of anatomical structures and integrates MLLMs for advanced reasoning. This approach significantly outperforms existing models.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19046
• PDF: https://arxiv.org/pdf/2511.19046
• Github: https://github.com/Joey-S-Liu/MedSAM3

==================================


#MedicalAI #ImageSegmentation #DeepLearning #MLLMs #FoundationModels
HunyuanOCR Technical Report

📝 Summary:
HunyuanOCR is a lightweight Vision-Language Model for OCR with a unified end-to-end ViT + LLM architecture. It achieves state-of-the-art performance across diverse tasks, outperforming larger models and commercial APIs, powered by data-driven and RL strategies.

🔹 Publication Date: Published on Nov 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.19575
• PDF: https://arxiv.org/pdf/2511.19575
• Github: https://github.com/Tencent-Hunyuan/HunyuanOCR

==================================


#OCR #VisionLanguageModel #LLM #AI #MachineLearning
iMontage: Unified, Versatile, Highly Dynamic Many-to-many Image Generation

📝 Summary:
iMontage repurposes pre-trained video models to generate high-quality, diverse image sets. It uses a unified framework and minimal adaptation, combining temporal coherence with image diversity for natural transitions and expanded dynamics across many tasks.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20635
• PDF: https://arxiv.org/pdf/2511.20635
• Project Page: https://kr1sjfu.github.io/iMontage-web/

==================================


#ImageGeneration #DeepLearning #ComputerVision #AIMethods #VideoModels
PhysChoreo: Physics-Controllable Video Generation with Part-Aware Semantic Grounding

📝 Summary:
PhysChoreo generates physically realistic and controllable videos from a single image. It reconstructs part-aware physical properties and simulates dynamic behavior, outperforming existing methods.

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20562
• PDF: https://arxiv.org/pdf/2511.20562

==================================


#VideoGeneration #PhysicalSimulation #ComputerVision #DeepLearning #AIResearch