ML Research Hub – Telegram
ML Research Hub
32.7K subscribers
4.03K photos
230 videos
23 files
4.34K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
NURBGen: High-Fidelity Text-to-CAD Generation through LLM-Driven NURBS Modeling

📝 Summary:
NURBGen generates high-fidelity 3D CAD models directly from text using Non-Uniform Rational B-Splines (NURBS). It fine-tunes an LLM to translate text into NURBS parameters, enabling robust modeling with a hybrid representation. NURBGen outperforms existing text-to-CAD methods in geometric fidelity ...

🔹 Publication Date: Published on Nov 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06194
• PDF: https://arxiv.org/pdf/2511.06194

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#TextToCAD #LLM #NURBS #3DModeling #GenerativeAI
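To make the NURBS representation concrete, here is a minimal sketch of evaluating a rational Bézier curve (a single-span NURBS curve): the weighted control points are exactly the kind of parameters an LLM would be asked to emit. The quarter-circle control points and weights are a textbook example, not taken from the paper.

```python
import math

def rational_bezier(ctrl, weights, t):
    """Evaluate a rational Bezier curve (a single-span NURBS) at parameter t."""
    n = len(ctrl) - 1
    num = [0.0, 0.0]
    den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        b = math.comb(n, i) * t**i * (1 - t)**(n - i)  # Bernstein basis
        num[0] += b * w * x
        num[1] += b * w * y
        den += b * w
    return num[0] / den, num[1] / den

# A rational quadratic with weight 1/sqrt(2) on the middle control point
# traces an exact quarter of the unit circle -- something polynomial
# (non-rational) curves cannot do.
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, 1.0 / math.sqrt(2), 1.0]
x, y = rational_bezier(ctrl, weights, 0.5)
```

Every evaluated point lies on the unit circle, which illustrates why rational splines are the standard CAD primitive.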
📚 Professional Academic Writing & Simulation Services

Looking for high-quality academic assistance? We specialize in research papers, theses, and simulations tailored to your needs. All work is original, plagiarism-free, and aligned with top journal standards. Prices are competitive and flexible—contact us for custom quotes!

Nature Journal Papers: Premium publication-ready manuscripts for top-tier Nature family journals. 
  Price: $2,000

Q1 & Q2 Journal Papers: In-depth research for high-impact SCI/Scopus Q1-Q2 journals (e.g., engineering, sciences). 
  Price: $1,000

Q3 & Q4 Journal Papers: Solid, peer-review optimized articles for mid-tier journals. 
  Price: $500

Complete Doctoral Thesis: Full PhD dissertation writing, from proposal to defense-ready document (up to 100 pages). 
  Price: $700

M.S. Thesis: Comprehensive master's thesis support, including literature review, methodology, and analysis. 
  Price: $300

Paper Simulation: Custom simulations (e.g., MATLAB, ANSYS, Python models) for research validation and results. 
  Price: $200

Ready to elevate your research? DM me at @husseinsheikho for a free consultation and fast turnaround! 📚💛
Grounding Computer Use Agents on Human Demonstrations

📝 Summary:
GroundCUA is a large desktop grounding dataset built from expert human demonstrations. It enables GroundNext models to achieve state-of-the-art performance in mapping instructions to UI elements with less training data and strong agentic capabilities.

🔹 Publication Date: Published on Nov 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.07332
• PDF: https://arxiv.org/pdf/2511.07332
• Project Page: https://groundcua.github.io/
• Github: https://groundcua.github.io/

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#AI #Agents #HCI #Datasets #HumanDemonstrations
Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence

📝 Summary:
This work converts pretrained non-recurrent language models into depth-recurrent ones. Using a curriculum of recurrences improves performance on tasks like mathematics at a lower compute budget compared to standard post-training.

🔹 Publication Date: Published on Nov 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.07384
• PDF: https://arxiv.org/pdf/2511.07384
• Github: https://github.com/mcleish7/retrofitting-recurrence

Datasets citing this paper:
https://huggingface.co/datasets/smcleish/retrofitting-llama-fineweb-edu-tokenized

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLM #DeepLearning #AIResearch #NeuralNetworks #ComputationalEfficiency
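A toy sketch of the depth-recurrence idea: one shared "core" block is applied repeatedly, and a curriculum increases the number of recurrences. A simple contraction map stands in for the transformer core here; this is purely illustrative, not the paper's implementation.

```python
def recurrent_forward(block, x, num_recurrences):
    """Apply the same core block repeatedly (weights tied across depth)."""
    for _ in range(num_recurrences):
        x = block(x)
    return x

# Toy stand-in for a transformer core: a contraction with fixed point 2.0
block = lambda h: 0.5 * h + 1.0

# Curriculum of recurrences: progressively deepen the unrolled computation
for r in [1, 2, 4, 8]:
    out = recurrent_forward(block, 0.0, r)
```

More recurrences push the output closer to the fixed point, mirroring how extra recurrent depth buys more "thinking" without adding parameters.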
RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments

📝 Summary:
RLVE improves language model reasoning by dynamically adjusting problem difficulty in verifiable environments. This adaptive approach significantly outperforms static environments and traditional RL, yielding a 3.37% average improvement on reasoning benchmarks.

🔹 Publication Date: Published on Nov 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.07317
• PDF: https://arxiv.org/pdf/2511.07317
• Github: https://github.com/Zhiyuan-Zeng/RLVE

🔹 Models citing this paper:
https://huggingface.co/hamishivi/Nemotron-Research-Reasoning-Qwen-1.5B-v2-RLVE
https://huggingface.co/hamishivi/OpenThinker3-1.5B-RLVE

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ReinforcementLearning #LLMs #AI #AIReasoning #AdaptiveLearning
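The adaptive-difficulty loop can be sketched in a few lines: raise problem difficulty when the model solves too many instances, lower it when it solves too few. The target rate and step size below are illustrative assumptions, not values from the paper.

```python
def adapt_difficulty(difficulty, solve_rate, target=0.5, step=1):
    """Nudge problem difficulty so the model's solve rate tracks a target."""
    if solve_rate > target:
        return difficulty + step          # too easy: make problems harder
    if solve_rate < target:
        return max(1, difficulty - step)  # too hard: back off
    return difficulty

d = 5
for rate in [0.9, 0.8, 0.3, 0.9]:  # observed solve rates per round
    d = adapt_difficulty(d, rate)
```

Keeping problems near the edge of the model's ability is what distinguishes this regime from training on a static problem pool.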
Llama-Embed-Nemotron-8B: A Universal Text Embedding Model for Multilingual and Cross-Lingual Tasks

📝 Summary:
Llama-Embed-Nemotron-8B is an open-source text embedding model achieving state-of-the-art performance, especially in multilingual tasks. Its success comes from a novel data mix and detailed ablation studies, making it a universal solution.

🔹 Publication Date: Published on Nov 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.07025
• PDF: https://arxiv.org/pdf/2511.07025

🔹 Models citing this paper:
https://huggingface.co/nvidia/llama-embed-nemotron-8b

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#TextEmbeddings #MultilingualNLP #CrossLingual #LanguageModels #AIResearch
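Text embeddings like these are typically compared with cosine similarity; a minimal sketch with toy vectors (the real model outputs high-dimensional vectors, of course):

```python
import math

def cosine(u, v):
    """Cosine similarity: the standard way to compare text embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 2-D "embeddings": identical direction vs. orthogonal direction
same = cosine([1.0, 0.0], [2.0, 0.0])
diff = cosine([1.0, 0.0], [0.0, 1.0])
```

In a cross-lingual setting, a good universal embedder maps translations of the same sentence to nearby vectors, so their cosine similarity stays high across languages.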
Long Grounded Thoughts: Distilling Compositional Visual Reasoning Chains at Scale

📝 Summary:
Researchers developed a new framework to generate over 1M high-quality synthetic vision-centric reasoning questions with complex traces. Finetuning models on this data significantly improves vision-centric performance and surprisingly boosts text and audio reasoning, demonstrating strong cross-modal transfer.

🔹 Publication Date: Published on Nov 7

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.05705
• PDF: https://arxiv.org/pdf/2511.05705

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#VisualReasoning #AI #MachineLearning #MultimodalAI #ComputerVision
Reinforcement Learning Improves Traversal of Hierarchical Knowledge in LLMs

📝 Summary:
Reinforcement learning improves LLMs' ability to recall hierarchical knowledge without degrading existing facts. It enhances models' procedural skills in navigating knowledge, rather than changing the knowledge representation itself. This leads to better performance on structured prompting and deep...

🔹 Publication Date: Published on Nov 8

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.05933
• PDF: https://arxiv.org/pdf/2511.05933

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ReinforcementLearning #LLMs #ArtificialIntelligence #DeepLearning #KnowledgeRetrieval
Generating an Image From 1,000 Words: Enhancing Text-to-Image With Structured Captions

📝 Summary:
This paper introduces FIBO, a text-to-image model trained on long structured captions to enhance prompt alignment and controllability. It proposes DimFusion for efficient processing and the TaBR evaluation protocol, achieving state-of-the-art results.

🔹 Publication Date: Published on Nov 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06876
• PDF: https://arxiv.org/pdf/2511.06876

🔹 Models citing this paper:
https://huggingface.co/briaai/FIBO

Spaces citing this paper:
https://huggingface.co/spaces/galdavidi/FIBO-Mashup
https://huggingface.co/spaces/briaai/FIBO
https://huggingface.co/spaces/briaai/Fibo-local

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#TextToImage #GenerativeAI #DiffusionModels #AI #MachineLearning
🤖🧠 The Transformer Architecture: How Attention Revolutionized Deep Learning

🗓️ 11 Nov 2025
📚 AI News & Trends

The field of artificial intelligence has witnessed a remarkable evolution and at the heart of this transformation lies the Transformer architecture. Introduced by Vaswani et al. in 2017, the paper “Attention Is All You Need” redefined the foundations of natural language processing (NLP) and sequence modeling. Unlike its predecessors – recurrent and convolutional neural networks, ...

#TransformerArchitecture #AttentionMechanism #DeepLearning #NaturalLanguageProcessing #NLP #AIResearch
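The attention mechanism the article describes reduces to one formula, softmax(QK^T / sqrt(d)) V. A minimal pure-Python sketch with tiny hand-picked matrices (illustrative values, not from the paper):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)  # attention weights over the keys
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                       # one query
K = [[1.0, 0.0], [0.0, 1.0]]           # two keys
V = [[10.0, 0.0], [0.0, 10.0]]         # two values
O = attention(Q, K, V)
```

The query aligns with the first key, so the output is pulled toward the first value: this content-based weighting, computed in parallel over all positions, is what replaced recurrence.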
🤖🧠 BERT: Revolutionizing Natural Language Processing with Bidirectional Transformers

🗓️ 11 Nov 2025
📚 AI News & Trends

In the ever-evolving landscape of artificial intelligence and natural language processing (NLP), BERT (Bidirectional Encoder Representations from Transformers) stands as a monumental breakthrough. Developed by researchers at Google AI in 2018, BERT introduced a new way of understanding the context of language by using deep bidirectional training of the Transformer architecture. Unlike previous models that ...

#BERT #NaturalLanguageProcessing #TransformerArchitecture #BidirectionalLearning #DeepLearning #AIStrategy
🤖🧠 vLLM Semantic Router: The Next Frontier in Intelligent Model Routing for LLMs

🗓️ 11 Nov 2025
📚 AI News & Trends

As large language models (LLMs) continue to evolve, organizations face new challenges in optimizing performance, accuracy and cost across various AI workloads. Running multiple models efficiently – each specialized for specific tasks has become essential for scalable AI deployment. Enter vLLM Semantic Router, an open-source innovation that introduces a new layer of intelligence to the ...

#vLLMSemanticRouter #LargeLanguageModels #AIScaling #ModelRouting #OpenSourceAI #LLMOptimization
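The routing idea can be sketched with a toy dispatcher. A real semantic router matches queries against task descriptions via embeddings; simple keyword matching stands in for that below, and the model names and keyword lists are made up for illustration.

```python
def route(query, routes, default="general-model"):
    """Dispatch a query to a specialized model.

    Keyword matching is a stand-in for the embedding-similarity matching
    a real semantic router would use.
    """
    q = query.lower()
    for model, keywords in routes.items():
        if any(kw in q for kw in keywords):
            return model
    return default

routes = {
    "code-model": ["python", "bug", "function"],
    "math-model": ["integral", "equation", "prove"],
}
m = route("Fix this Python function", routes)
```

The payoff is the same as in the article: cheap, specialized models handle the queries they are good at, and only unmatched traffic falls through to the general model.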
🤖🧠 Plandex AI: The Future of Autonomous Coding Agents for Large-Scale Development

🗓️ 11 Nov 2025
📚 AI News & Trends

As software development becomes increasingly complex, developers are turning to AI tools that can manage, understand and automate large portions of the coding workflow. Among the most promising innovations in this space is Plandex AI, an open-source terminal-based coding agent designed for real-world, large-scale projects. Unlike simple AI coding assistants that handle small snippets, Plandex ...

#PlandexAI #AutonomousCoding #LargeScaleDevelopment #AICoding #OpenSourceAI #CodeAutomation
FLEX: Continuous Agent Evolution via Forward Learning from Experience

📝 Summary:
FLEX is a gradient-free paradigm allowing LLM agents to continuously evolve by building an experience library from successes and failures. This leads to substantial performance improvements in tasks like math, chemistry, and protein prediction, demonstrating scalable growth and experience inheritance.

🔹 Publication Date: Published on Nov 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06449
• PDF: https://arxiv.org/pdf/2511.06449

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLMAgents #AI #MachineLearning #ContinuousLearning #ReinforcementLearning
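A gradient-free experience library can be sketched as a store of (task, action, outcome) records that the agent consults before acting. This is a minimal illustration of the idea, with exact-match retrieval standing in for whatever similarity search the paper actually uses.

```python
class ExperienceLibrary:
    """Gradient-free 'learning': record successes/failures, reuse them later."""

    def __init__(self):
        self.entries = []

    def record(self, task, action, success):
        self.entries.append({"task": task, "action": action, "success": success})

    def advice(self, task):
        # Prefer the most recent action that succeeded on this task;
        # a real system would retrieve by similarity, not exact match.
        hits = [e for e in self.entries if e["task"] == task and e["success"]]
        return hits[-1]["action"] if hits else None

lib = ExperienceLibrary()
lib.record("integrate x^2", "use power rule", True)
lib.record("integrate x^2", "guess", False)
a = lib.advice("integrate x^2")
```

Because the library is plain data rather than model weights, it can grow indefinitely and be handed to another agent, which is what makes experience inheritance possible.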
Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B

📝 Summary:
VibeThinker-1.5B, a 1.5B-parameter model, uses the Spectrum-to-Signal Principle to achieve superior reasoning. It outperforms much larger models on math and coding benchmarks, proving small models can deliver advanced AI at low cost.

🔹 Publication Date: Published on Nov 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06221
• PDF: https://arxiv.org/pdf/2511.06221
• Github: https://github.com/WeiboAI/VibeThinker

🔹 Models citing this paper:
https://huggingface.co/WeiboAI/VibeThinker-1.5B
https://huggingface.co/Mungert/VibeThinker-1.5B-GGUF

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#SLM #AIReasoning #ModelOptimization #MachineLearning #EfficientAI
VideoSSR: Video Self-Supervised Reinforcement Learning

📝 Summary:
VideoSSR is a novel self-supervised reinforcement learning framework that leverages intrinsic video information to generate high-quality training data. It uses three pretext tasks and the VideoSSR-30K dataset, improving MLLM performance across 17 benchmarks by over 5%.

🔹 Publication Date: Published on Nov 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06281
• PDF: https://arxiv.org/pdf/2511.06281
• Project Page: https://github.com/lcqysl/VideoSSR
• Github: https://github.com/lcqysl/VideoSSR

🔹 Models citing this paper:
https://huggingface.co/yhx12/VideoSSR

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#ReinforcementLearning #SelfSupervisedLearning #VideoAI #MachineLearning #DeepLearning
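Self-supervised pretext tasks mine labels from the video itself. Frame-order restoration, sketched below, is one classic example of the genre (the paper's three specific pretext tasks are not detailed in the summary, so treat this as illustrative only):

```python
import random

def make_frame_order_task(frames, seed=0):
    """Pretext task: shuffle frames; the label is the permutation
    that restores temporal order."""
    rng = random.Random(seed)  # seeded for reproducibility
    idx = list(range(len(frames)))
    rng.shuffle(idx)
    shuffled = [frames[i] for i in idx]
    restore = sorted(range(len(idx)), key=lambda j: idx[j])
    return shuffled, restore

frames = ["f0", "f1", "f2", "f3"]
shuffled, restore = make_frame_order_task(frames)
recovered = [shuffled[j] for j in restore]
```

No human labels are needed: the supervision signal (the correct ordering) comes free from the video's intrinsic temporal structure.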
Walking the Tightrope of LLMs for Software Development: A Practitioners' Perspective

📝 Summary:
This study investigated software developers' perspectives on Large Language Models, identifying benefits like improved workflow and entrepreneurship, alongside risks to personal well-being and reputation. It highlights key trade-offs and best practices for adopting LLMs in software development.

🔹 Publication Date: Published on Nov 9

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.06428
• PDF: https://arxiv.org/pdf/2511.06428

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#LLMs #SoftwareDevelopment #AIinDevelopment #DeveloperExperience #TechResearch
Adaptive Multi-Agent Response Refinement in Conversational Systems

📝 Summary:
This paper presents a multi-agent framework for refining conversational responses across factuality, personalization, and coherence. It employs dynamic agent coordination, outperforming single LLM approaches on challenging conversational datasets.

🔹 Publication Date: Published on Nov 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.08319
• PDF: https://arxiv.org/pdf/2511.08319

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#MultiAgentSystems #ConversationalAI #LLMs #NLP #AIResearch
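The coordination pattern can be sketched as a pipeline that applies only the refinement agents a given response needs. The lambda "agents" below are trivial stand-ins for what would really be LLM calls, and the agent names are taken from the summary's three axes.

```python
def refine(draft, agents, needs):
    """Dynamically apply only the refinement agents a response needs."""
    for name in needs:
        draft = agents[name](draft)
    return draft

# Toy agents; in a real system each would be an LLM call with its own prompt.
agents = {
    "factuality": lambda t: t + " [fact-checked]",
    "coherence": lambda t: t.strip().capitalize(),
}

out = refine("the moon landing was in 1969", agents, ["coherence", "factuality"])
```

Selecting `needs` per response (rather than always running every agent) is the dynamic-coordination part that the paper credits for beating single-LLM refinement.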
KLASS: KL-Guided Fast Inference in Masked Diffusion Models

📝 Summary:
KLASS accelerates masked diffusion model inference by using KL divergence to identify stable, high-confidence predictions. It unmasks multiple tokens per iteration, significantly speeding up generation and improving quality across text, image, and molecular tasks.

🔹 Publication Date: Published on Nov 7

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.05664
• PDF: https://arxiv.org/pdf/2511.05664
• Github: https://github.com/shkim0116/KLASS

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#DiffusionModels #GenerativeAI #MachineLearning #AIResearch #ModelAcceleration
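The selection rule can be sketched directly: a masked position is safe to unmask when its predicted distribution has stopped moving between iterations (low KL divergence) and is confident. The thresholds and toy distributions below are illustrative, not the paper's settings.

```python
import math

def kl(p, q):
    """KL divergence between two categorical distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def select_unmask(prev_probs, curr_probs, kl_thresh=0.01, conf_thresh=0.9):
    """Pick masked positions whose predictions are stable (low KL) and confident,
    so several tokens can be unmasked in one iteration."""
    picks = []
    for i, (p, q) in enumerate(zip(prev_probs, curr_probs)):
        if kl(q, p) < kl_thresh and max(q) > conf_thresh:
            picks.append(i)
    return picks

# Per-position token distributions at two consecutive denoising steps
prev = [[0.95, 0.03, 0.02], [0.4, 0.35, 0.25]]
curr = [[0.96, 0.02, 0.02], [0.5, 0.3, 0.2]]
picks = select_unmask(prev, curr)
```

Position 0 is stable and confident, so it unmasks now; position 1 is still uncertain and keeps iterating, which is how the method trades almost no quality for fewer sampling steps.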
The Path Not Taken: RLVR Provably Learns Off the Principals

📝 Summary:
RLVR learns by modifying parameters off principal directions in low-curvature subspaces, appearing sparse due to optimization bias. This distinct optimization regime contrasts with SFT, meaning SFT-era fine-tuning methods are flawed for RLVR.

🔹 Publication Date: Published on Nov 11

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.08567
• PDF: https://arxiv.org/pdf/2511.08567

==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT

#RLVR #MachineLearning #Optimization #DeepLearning #AIResearch
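The geometric claim, that updates live off the principal directions, can be illustrated by projecting an update onto the orthogonal complement of a principal direction. This is only a 2-D cartoon of the idea, assuming a unit-norm principal direction `v`; the paper's analysis concerns actual weight-matrix spectra.

```python
def project_off(u, v):
    """Remove from update u its component along principal direction v
    (v assumed unit-norm), leaving the off-principal part."""
    dot = sum(a * b for a, b in zip(u, v))
    return [a - dot * b for a, b in zip(u, v)]

u = [3.0, 4.0]   # a candidate parameter update
v = [1.0, 0.0]   # a principal (high-curvature) direction
w = project_off(u, v)
```

The residual `w` is orthogonal to `v`: all of its motion happens in the low-curvature subspace, which is the regime the paper argues RLVR occupies and SFT does not.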