✨ Title: UniREditBench: A Unified Reasoning-based Image Editing Benchmark
📝 Summary:
UniREditBench is a new benchmark for reasoning-based image editing. It covers diverse scenarios including multi-object interactions and game-worlds, using multimodal evaluation to assess generative models. This helps improve their performance on complex editing tasks.
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01295
• PDF: https://arxiv.org/pdf/2511.01295
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: LongCat-Flash-Omni Technical Report
📝 Summary:
LongCat-Flash-Omni is a 560B parameter open-source omni-modal model excelling at low-latency real-time audio-visual interaction. It employs a progressive training strategy and achieves state-of-the-art performance across diverse multimodal benchmarks.
🔹 Publication Date: Published on Oct 31
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.00279
• PDF: https://arxiv.org/pdf/2511.00279
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: TIR-Bench: A Comprehensive Benchmark for Agentic Thinking-with-Images Reasoning
📝 Summary:
TIR-Bench introduces a comprehensive benchmark for evaluating agentic thinking-with-images in AI. It features 13 tasks requiring novel tool use for image processing. The benchmark is universally challenging, demanding genuine thinking-with-images capabilities.
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01833
• PDF: https://arxiv.org/pdf/2511.01833
• Github: https://github.com/agents-x-project/TIR-Bench
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process
📝 Summary:
This paper introduces Unified Diffusion VLA (UD-VLA), a vision-language-action model that jointly optimizes image generation and action prediction. It uses a Joint Discrete Denoising Diffusion Process (JD3P) for intrinsic synergy across modalities. UD-VLA achieves state-of-the-art results on multiple...
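As a rough illustration of joint discrete denoising over a shared token sequence (MaskGIT-style iterative unmasking), here is a minimal sketch; the stand-in model, vocabulary size, mask id, and schedule are assumptions for illustration only, not UD-VLA's actual JD3P.
```python
import torch

MASK_ID = 0  # reserved mask token (assumption)

def denoise_step(model, tokens: torch.Tensor, frac_to_unmask: float) -> torch.Tensor:
    """Predict all masked positions, commit the most confident fraction, keep the rest masked."""
    logits = model(tokens)                               # (B, L, vocab), stand-in model call
    conf, pred = logits.softmax(-1).max(-1)              # per-position confidence and argmax
    masked = tokens.eq(MASK_ID)
    conf = conf.masked_fill(~masked, -1.0)               # only masked slots compete
    k = max(1, int(frac_to_unmask * int(masked.sum())))
    keep = torch.zeros_like(masked)
    keep.view(-1)[conf.flatten().topk(k).indices] = True
    return torch.where(keep, pred, tokens)

# Toy usage: in a VLA setting the image tokens would be observed and only the
# action (and future image) slots masked; here everything starts masked.
B, L, V = 1, 64, 1024
dummy_model = lambda t: torch.randn(t.shape[0], t.shape[1], V)
tokens = torch.full((B, L), MASK_ID)
for frac in (0.25, 0.5, 1.0):                            # a few denoising iterations
    tokens = denoise_step(dummy_model, tokens, frac)
```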
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01718
• PDF: https://arxiv.org/pdf/2511.01718
• Project Page: https://irpn-eai.github.io/UD-VLA.github.io/
• Github: https://github.com/OpenHelix-Team/UD-VLA
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: The Underappreciated Power of Vision Models for Graph Structural Understanding
📝 Summary:
Vision models show surprising power for graph understanding, matching GNNs on benchmarks and outperforming them on global structural perception. Our new GraphAbstract benchmark reveals vision models excel at holistic graph properties and scale-invariant reasoning, suggesting their use for graph f...
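The workflow the summary implies (render a graph as an image, then ask a vision model about its global structure) can be sketched as below; the rendering uses networkx and matplotlib, while the downstream vision-model query is left as a comment since the paper's exact setup is not reproduced here.
```python
# Minimal sketch: render a graph to an image a vision model could consume.
import networkx as nx
import matplotlib.pyplot as plt

def render_graph(G: nx.Graph, path: str = "graph.png") -> str:
    """Draw a graph to a PNG for visual graph-structure reasoning."""
    plt.figure(figsize=(4, 4))
    nx.draw(G, pos=nx.spring_layout(G, seed=0), node_size=80, with_labels=False)
    plt.axis("off")
    plt.savefig(path, dpi=150, bbox_inches="tight")
    plt.close()
    return path

G = nx.barabasi_albert_graph(n=60, m=2, seed=0)   # example graph
image_path = render_graph(G)
# image_path would then be passed to a vision model with a question such as
# "How many connected components does this graph have?"
```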
🔹 Publication Date: Published on Oct 27
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24788
• PDF: https://arxiv.org/pdf/2510.24788
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation
📝 Summary:
ROVER is a new benchmark evaluating reciprocal cross-modal reasoning in unified multimodal models. It tests how models use one modality to guide or verify outputs in another, in both verbal and visual generation tasks. Experiments show cross-modal reasoning is vital for visual generation, but mod...
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01163
• PDF: https://arxiv.org/pdf/2511.01163
• Project Page: https://roverbench.github.io/
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: Trove: A Flexible Toolkit for Dense Retrieval
📝 Summary:
Trove is an open-source toolkit for dense retrieval that simplifies research. It offers efficient on-the-fly data management, reducing memory use and allowing flexible dataset experiments. Trove is highly customizable and provides a unified, scalable pipeline for evaluation.
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01857
• PDF: https://arxiv.org/pdf/2511.01857
• Project Page: https://ir-trove.dev/
• Github: https://github.com/BatsResearch/trove
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: Data-Efficient RLVR via Off-Policy Influence Guidance
📝 Summary:
This paper proposes CROPI, a new method for efficient data selection in Reinforcement Learning with Verifiable Rewards (RLVR). It uses off-policy influence estimation and sparse random projection to identify the most valuable data points. CROPI significantly accelerates training, achieving a 2.66x speedup...
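A minimal sketch of the two ingredients named in the summary, assuming the common recipe of projecting per-example gradients with a sparse random matrix and scoring them against a reference gradient; this is a generic influence-selection sketch, not CROPI's actual objective.
```python
import numpy as np
from scipy.sparse import random as sparse_random

def sparse_projection(dim: int, k: int, density: float = 0.01, seed: int = 0):
    """Return a k x dim sparse random projection matrix with signed entries."""
    rng = np.random.default_rng(seed)
    P = sparse_random(k, dim, density=density, random_state=seed,
                      data_rvs=lambda n: rng.choice([-1.0, 1.0], size=n)).tocsr()
    return P / np.sqrt(k * density)

def select_top_k(per_example_grads: np.ndarray, ref_grad: np.ndarray, proj, k: int) -> np.ndarray:
    """Indices of the k examples whose projected gradients align best with the reference gradient."""
    g_proj = (proj @ per_example_grads.T).T      # (n_examples, k_proj)
    ref_proj = proj @ ref_grad                   # (k_proj,)
    scores = g_proj @ ref_proj                   # influence proxy: <g_i, g_ref> in the sketch space
    return np.argsort(-scores)[:k]

# Toy usage with random stand-ins for the gradients.
n, dim = 512, 20_000
grads = np.random.randn(n, dim).astype(np.float32)
ref = np.random.randn(dim).astype(np.float32)    # e.g., a validation-set gradient
P = sparse_projection(dim, k=256)
chosen = select_top_k(grads, ref, P, k=32)
```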
🔹 Publication Date: Published on Oct 30
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.26491
• PDF: https://arxiv.org/pdf/2510.26491
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: How Far Are Surgeons from Surgical World Models? A Pilot Study on Zero-shot Surgical Video Generation with Expert Assessment
📝 Summary:
This study introduces SurgVeo and the Surgical Plausibility Pyramid to evaluate video generation models in surgery. Experts found Veo-3 visually convincing but lacking in actual surgical understanding. This highlights a critical gap between visual mimicry and causal knowledge in surgical AI.
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01775
• PDF: https://arxiv.org/pdf/2511.01775
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: UME-R1: Exploring Reasoning-Driven Generative Multimodal Embeddings
📝 Summary:
UME-R1 introduces generative multimodal embeddings, unifying embedding tasks within a generative paradigm. Its two-stage MLLM training creates reasoning-driven embeddings that significantly outperform conventional discriminative methods, offering a foundation for new interpretability.
🔹 Publication Date: Published on Nov 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.00405
• PDF: https://arxiv.org/pdf/2511.00405
• Github: https://github.com/DeepLearnXMU/UME-R1
🔹 Models citing this paper:
• https://huggingface.co/zhibinlan/UME-R1-2B
• https://huggingface.co/zhibinlan/UME-R1-7B
✨ Datasets citing this paper:
• https://huggingface.co/datasets/zhibinlan/UME-sft-train
• https://huggingface.co/datasets/zhibinlan/UME-rl-train
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: World Simulation with Video Foundation Models for Physical AI
📝 Summary:
Cosmos-Predict2.5 is a new world foundation model for physical AI, unifying text-, image-, and video-to-world generation with enhanced quality and control for robotics. It works with Cosmos-Transfer2.5 for Sim2Real translation. Both are open-source to accelerate embodied intelligence research.
🔹 Publication Date: Published on Oct 28
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.00062
• PDF: https://arxiv.org/pdf/2511.00062
• Github: https://github.com/nvidia-cosmos/cosmos-transfer2.5
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: Do Vision-Language Models Measure Up? Benchmarking Visual Measurement Reading with MeasureBench
📝 Summary:
Current VLMs struggle with visual measurement reading, especially indicator localization. We introduce MeasureBench, a new benchmark with real-world and synthetic images, and a data synthesis pipeline. VLMs show poor fine-grained spatial grounding, leading to significant numeric errors despite pl...
🔹 Publication Date: Published on Oct 30
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.26865
• PDF: https://arxiv.org/pdf/2510.26865
• Project Page: https://flageval-baai.github.io/MeasureBenchPage/
• Github: https://github.com/flageval-baai/MeasureBench
✨ Datasets citing this paper:
• https://huggingface.co/datasets/FlagEval/MeasureBench
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback
📝 Summary:
UniLumos is a fast, unified image and video relighting framework. It uses RGB-space geometry feedback to ensure physically plausible results, unlike prior diffusion models. It achieves state-of-the-art quality with a 20x speedup.
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01678
• PDF: https://arxiv.org/pdf/2511.01678
• Github: https://github.com/alibaba-damo-academy/Lumos-Custom
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models
📝 Summary:
Multimodal LLMs struggle with detailed 3D spatial reasoning and cross-view consistency. This paper introduces Viewpoint Learning with the Viewpoint-100K dataset and a two-stage fine-tuning strategy. Their method significantly activates MLLM spatial reasoning, improving performance on various tasks.
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01618
• PDF: https://arxiv.org/pdf/2511.01618
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: ToolScope: An Agentic Framework for Vision-Guided and Long-Horizon Tool Use
📝 Summary:
ToolScope is an agentic framework for MLLMs that unifies global planning with local multimodal perception, using a specialized Perceive tool to manage visual context in long-horizon VQA tasks. It improves performance on VQA benchmarks by an average of 6.69%.
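A rough sketch of how a planner loop with a dedicated Perceive tool could be wired up, purely for illustration; the tool names, planner heuristic, and state layout here are assumptions, not ToolScope's actual API.
```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    question: str
    image: object                                        # raw image handle
    visual_notes: list = field(default_factory=list)     # local perception results
    steps: list = field(default_factory=list)            # global plan trace

def perceive(image, query: str) -> str:
    """Stand-in for a Perceive tool: return a focused visual observation."""
    return f"[observation for '{query}']"                # placeholder

def plan_next_action(state: AgentState) -> dict:
    """Stand-in for the MLLM planner: decide whether to perceive again or answer."""
    if len(state.visual_notes) < 2:
        return {"tool": "perceive", "arg": f"detail {len(state.visual_notes) + 1}"}
    return {"tool": "answer", "arg": "final answer drafted from notes"}

def run(state: AgentState, max_steps: int = 8) -> str:
    for _ in range(max_steps):
        action = plan_next_action(state)
        state.steps.append(action)
        if action["tool"] == "perceive":
            state.visual_notes.append(perceive(state.image, action["arg"]))
        else:
            return action["arg"]
    return "no answer within budget"

print(run(AgentState(question="What is on the table?", image=None)))
```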
🔹 Publication Date: Published on Oct 31
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.27363
• PDF: https://arxiv.org/pdf/2510.27363
• Github: https://github.com/dengmengjie/ToolScope
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: PHUMA: Physically-Grounded Humanoid Locomotion Dataset
📝 Summary:
PHUMA is a new dataset for humanoid locomotion, leveraging large-scale human video while eliminating physical artifacts. Through careful curation and physics-constrained retargeting, PHUMA provides reliable motions. Policies trained on PHUMA significantly outperform those trained on existing datasets in imitation...
🔹 Publication Date: Published on Oct 30
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.26236
• PDF: https://arxiv.org/pdf/2510.26236
• Project Page: https://davian-robotics.github.io/PHUMA/
• Github: https://github.com/davian-robotics/PHUMA
✨ Datasets citing this paper:
• https://huggingface.co/datasets/DAVIAN-Robotics/PHUMA
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: MotionStream: Real-Time Video Generation with Interactive Motion Controls
📝 Summary:
MotionStream enables real-time video generation with interactive motion controls, achieving sub-second latency and 29 FPS streaming. It distills a motion-controlled text-to-video teacher into a causal student, using novel attention mechanisms for infinite-length, high-quality video.
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01266
• PDF: https://arxiv.org/pdf/2511.01266
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: OpenSIR: Open-Ended Self-Improving Reasoner
📝 Summary:
OpenSIR is a self-play framework where LLMs improve reasoning by alternating teacher and student roles. It generates novel math problems without external supervision, optimizing for difficulty and diversity. This enables open-ended learning and significant performance gains on benchmarks.
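A toy sketch of the alternating teacher/student loop the summary describes, with stand-in generate/solve functions and a simplified difficulty-plus-diversity reward; the actual RL update of the shared model is omitted.
```python
import random

def generate_problem(history: list[str]) -> str:
    """Teacher role: propose a new math problem (stand-in)."""
    return f"problem_{random.randint(0, 10_000)}"

def solve(problem: str) -> bool:
    """Student role: attempt the problem; return whether it was solved (stand-in)."""
    return random.random() < 0.5

def reward(problem: str, solved: bool, history: list[str]) -> float:
    difficulty = 0.0 if solved else 1.0          # crude proxy: favor problems the student fails
    diversity = 0.0 if problem in history else 1.0
    return 0.5 * difficulty + 0.5 * diversity

history: list[str] = []
for step in range(100):
    p = generate_problem(history)                # teacher turn
    ok = solve(p)                                # student turn
    r = reward(p, ok, history)                   # would drive an RL update of both roles
    history.append(p)
```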
🔹 Publication Date: Published on Nov 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.00602
• PDF: https://arxiv.org/pdf/2511.00602
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: NaviTrace: Evaluating Embodied Navigation of Vision-Language Models
📝 Summary:
NaviTrace is a new benchmark to evaluate vision-language models for robotic navigation using 2D trace prediction. It uses a semantic-aware score across diverse scenarios and embodiment types. VLMs consistently show poor spatial grounding and goal localization, falling short of human performance.
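As a hedged illustration of what scoring a predicted 2D trace against a reference might involve, the sketch below combines a geometric distance with a simple traversability-cost term; the real semantic-aware score used by NaviTrace is not reproduced here, and the weights are arbitrary.
```python
import numpy as np

def trace_score(pred: np.ndarray, ref: np.ndarray, cost_map: np.ndarray) -> float:
    """pred, ref: (N, 2) pixel coordinates; cost_map: (H, W) in [0, 1], 1 = untraversable."""
    # Geometric term: mean distance from each predicted point to the closest reference point.
    d = np.linalg.norm(pred[:, None, :] - ref[None, :, :], axis=-1)
    geometric = d.min(axis=1).mean()
    # Semantic term: average traversability cost under the predicted trace.
    xs = np.clip(pred[:, 0].astype(int), 0, cost_map.shape[1] - 1)
    ys = np.clip(pred[:, 1].astype(int), 0, cost_map.shape[0] - 1)
    semantic = cost_map[ys, xs].mean()
    return float(geometric + 100.0 * semantic)   # lower is better; weights are illustrative

H, W = 240, 320
cost = np.zeros((H, W)); cost[:, 150:170] = 1.0                      # a wall to avoid
ref = np.stack([np.linspace(10, 140, 50), np.full(50, 120)], axis=1)
pred = ref + np.random.randn(50, 2) * 3.0
print(trace_score(pred, ref, cost))
```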
🔹 Publication Date: Published on Oct 30
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.26909
• PDF: https://arxiv.org/pdf/2510.26909
• Project Page: https://leggedrobotics.github.io/navitrace_webpage/
• Github: https://github.com/leggedrobotics/navitrace_evaluation
✨ Datasets citing this paper:
• https://huggingface.co/datasets/leggedrobotics/navitrace
✨ Spaces citing this paper:
• https://huggingface.co/spaces/leggedrobotics/navitrace_leaderboard
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: Vote-in-Context: Turning VLMs into Zero-Shot Rank Fusers
📝 Summary:
Vote-in-Context (ViC) turns VLMs into zero-shot rank fusers and rerankers. It serializes content and retriever data into prompts, enabling adaptive reasoning. ViC achieves state-of-the-art zero-shot video retrieval, greatly surpassing baselines.
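The serialization idea can be sketched as follows: ranked lists from several retrievers are flattened into one prompt that a VLM answers with a fused ordering; the prompt format and the downstream parsing are illustrative assumptions, not ViC's exact template.
```python
def build_fusion_prompt(query: str, ranked_lists: dict[str, list[str]], top_k: int = 5) -> str:
    lines = [f"Query: {query}", "", "Candidate rankings from independent retrievers:"]
    for retriever, candidates in ranked_lists.items():
        lines.append(f"- {retriever}:")
        for rank, cand_id in enumerate(candidates[:top_k], start=1):
            lines.append(f"    {rank}. {cand_id}")
    lines += [
        "",
        "Considering both the candidates' content and their positions across lists,",
        "output a single fused ranking of candidate ids, best first.",
    ]
    return "\n".join(lines)

prompt = build_fusion_prompt(
    "a dog catching a frisbee",
    {"clip_retriever": ["vid_12", "vid_07", "vid_33"],
     "caption_retriever": ["vid_07", "vid_12", "vid_51"]},
)
# The prompt would be sent to a VLM together with candidate frames; the model's ordered
# reply is then parsed as the fused ranking.
print(prompt)
```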
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01617
• PDF: https://arxiv.org/pdf/2511.01617
• Github: https://github.com/mohammad2012191/ViC
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
✨ Title: Multi-Step Knowledge Interaction Analysis via Rank-2 Subspace Disentanglement
📝 Summary:
A novel rank-2 subspace disentangles Parametric and Context Knowledge in LLM multi-step explanations. It enables the first detailed analysis of how these knowledge types interact, showing distinct patterns in faithful versus hallucinated outputs.
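A minimal sketch of the linear-algebra core, assuming two direction vectors (one per knowledge type) are already available: build an orthonormal rank-2 basis and decompose hidden states onto it. How the directions are estimated in the paper is not shown here.
```python
import numpy as np

def rank2_basis(v_pk: np.ndarray, v_ck: np.ndarray) -> np.ndarray:
    """Orthonormal basis (2, d) spanning the two knowledge directions (via QR)."""
    Q, _ = np.linalg.qr(np.stack([v_pk, v_ck], axis=1))   # (d, 2)
    return Q.T

def decompose(h: np.ndarray, basis: np.ndarray):
    """Split a hidden state into its rank-2 subspace component and the residual."""
    coords = basis @ h                 # (2,) coordinates in the subspace
    in_subspace = basis.T @ coords     # projection back into model space
    return coords, h - in_subspace

d = 4096
v_pk, v_ck = np.random.randn(d), np.random.randn(d)       # stand-in direction vectors
B = rank2_basis(v_pk, v_ck)
coords, residual = decompose(np.random.randn(d), B)
# coords[0] vs. coords[1] would be compared across reasoning steps to see whether an
# explanation leans on parametric or context knowledge.
```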
🔹 Publication Date: Published on Nov 3
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.01706
• PDF: https://arxiv.org/pdf/2511.01706
• Github: https://github.com/copenlu/pk-ck-knowledge-disentanglement
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT