✨POLARIS: Projection-Orthogonal Least Squares for Robust and Adaptive Inversion in Diffusion Models
📝 Summary:
POLARIS minimizes approximate noise errors in diffusion models during image inversion. It robustly treats the guidance scale as a step-wise variable, significantly improving image editing and restoration accuracy by reducing errors at each step.
🔹 Publication Date: Published on Nov 29
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.00369
• PDF: https://arxiv.org/pdf/2512.00369
• Project Page: https://polaris-code-official.github.io/
• Github: https://github.com/Chatonz/POLARIS
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#DiffusionModels #ImageProcessing #AI #MachineLearning #ComputerVision
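The "step-wise guidance scale" idea above can be illustrated with a minimal least-squares sketch (not the paper's actual algorithm; all names are illustrative): given unconditional and conditional noise predictions and a target noise, the scale that minimizes the squared error at one step has a closed form via orthogonal projection.

```python
import numpy as np

def stepwise_guidance_scale(eps_uncond, eps_cond, eps_target):
    """Closed-form least-squares guidance scale for one denoising step.

    Minimizes ||eps_target - (eps_uncond + w * (eps_cond - eps_uncond))||^2
    over the scalar w, i.e. an orthogonal projection of the residual onto
    the guidance direction (an illustrative sketch, not POLARIS itself).
    """
    d = (eps_cond - eps_uncond).ravel()    # guidance direction
    r = (eps_target - eps_uncond).ravel()  # residual to explain
    return float(d @ r / (d @ d + 1e-12))  # projection coefficient

# Toy check: if the target lies exactly at w = 2.5 along the guidance
# direction, the projection recovers that scale.
rng = np.random.default_rng(0)
eu = rng.normal(size=(4, 4))
ec = eu + rng.normal(size=(4, 4))
et = eu + 2.5 * (ec - eu)
print(round(stepwise_guidance_scale(eu, ec, et), 3))  # → 2.5
```

Solving this tiny problem independently at each step is what makes the scale "step-wise" rather than a single global constant.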
✨Flow Straighter and Faster: Efficient One-Step Generative Modeling via MeanFlow on Rectified Trajectories
📝 Summary:
Rectified MeanFlow enables efficient one-step generative modeling. It achieves this by modeling the mean velocity field on a single-step rectified trajectory with a truncation heuristic, improving both sample quality and training efficiency over prior methods.
🔹 Publication Date: Published on Nov 28
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.23342
• PDF: https://arxiv.org/pdf/2511.23342
• Github: https://github.com/Xinxi-Zhang/Re-MeanFlow
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#GenerativeAI #MachineLearning #DeepLearning #AIResearch #MeanFlow
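Why a mean velocity field enables one-step sampling can be sketched in a few lines (a toy illustration under the assumption of a straight trajectory, not the Re-MeanFlow implementation):

```python
import numpy as np

def one_step_sample(x0, mean_velocity):
    """One-step generation on a rectified (straight-line) trajectory.

    If u(x, t) is the *mean* velocity over [t, 1], a single Euler-style
    step x1 = x0 + (1 - 0) * u(x0, 0) lands exactly on the endpoint when
    the trajectory is straight -- the core idea behind one-step sampling.
    """
    return x0 + mean_velocity(x0, 0.0)

# Toy example: the linear flow x_t = (1 - t) * x0 + t * target has
# constant velocity (target - x), so its mean velocity equals it.
target = np.array([1.0, -2.0])
u = lambda x, t: target - x
x0 = np.zeros(2)
print(one_step_sample(x0, u))  # → [ 1. -2.]
```

On curved trajectories a learned mean velocity replaces the analytic one, which is where the truncation heuristic mentioned above comes in.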
✨MEGConformer: Conformer-Based MEG Decoder for Robust Speech and Phoneme Classification
📝 Summary:
Conformer-based decoders were adapted for MEG signals to perform Speech Detection and Phoneme Classification. Using MEG-oriented augmentations and normalization, the resulting systems achieved high performance, surpassing competition baselines and ranking within the top-10 in both tasks.
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.01443
• PDF: https://arxiv.org/pdf/2512.01443
• Github: https://github.com/neural2speech/libribrain-experiments
🔹 Models citing this paper:
• https://huggingface.co/zuazo/megconformer-speech-detection
• https://huggingface.co/zuazo/megconformer-phoneme-classification
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#MEGConformer #MEG #SpeechProcessing #Neuroscience #AI
✨Generative Video Motion Editing with 3D Point Tracks
📝 Summary:
This paper presents a track-conditioned video-to-video framework for precise joint camera and object motion editing. It uses 3D point tracks to maintain spatiotemporal coherence and handle occlusions through explicit depth cues. This enables diverse motion edits.
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02015
• PDF: https://arxiv.org/pdf/2512.02015
• Project Page: https://edit-by-track.github.io/
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#VideoEditing #GenerativeAI #ComputerVision #3DTracking #DeepLearning
✨ORION: Teaching Language Models to Reason Efficiently in the Language of Thought
📝 Summary:
ORION compresses reasoning into compact, structured tokens, inspired by Mentalese. This reduces reasoning steps by 4-16x, cuts inference latency by 5x, and cuts training costs by 7-9x while maintaining high accuracy.
🔹 Publication Date: Published on Nov 28
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22891
• PDF: https://arxiv.org/pdf/2511.22891
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#LLM #AI #AIReasoning #CognitiveAI #DeepLearning
✨A Hierarchical Framework for Humanoid Locomotion with Supernumerary Limbs
📝 Summary:
A hierarchical control framework enables stable humanoid locomotion with supernumerary limbs. It combines learning-based gait with model-based limb balancing, improving stability and reducing the CoM trajectory Dynamic Time Warping distance by 47%. This decoupled design effectively mitigates dyna...
🔹 Publication Date: Published on Nov 25
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.00077
• PDF: https://arxiv.org/pdf/2512.00077
• Github: https://github.com/heyzbw/HuSLs
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#Robotics #HumanoidRobotics #Locomotion #ControlSystems #SupernumeraryLimbs
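The CoM-trajectory metric mentioned above, Dynamic Time Warping distance, is a standard dynamic program; a minimal 1-D version (the paper's exact evaluation setup is assumed, not reproduced) looks like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D trajectories.

    Classic O(len(a) * len(b)) dynamic program over absolute differences;
    warping lets one sequence stretch or compress in time to match the
    other, so time-shifted but similar trajectories score low.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Identical trajectories have zero DTW distance.
t = np.linspace(0, 1, 50)
print(dtw_distance(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t)))  # → 0.0
```

A 47% reduction in this distance means the controlled CoM trajectory tracks the reference far more closely after warping-based alignment.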
✨DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models
📝 Summary:
DeepSeek-V3.2 introduces DeepSeek Sparse Attention and a scalable reinforcement learning framework. This allows it to achieve superior reasoning and agent performance, with its Speciale variant surpassing GPT-5 and matching Gemini-3.0-Pro in complex tasks.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02556
• PDF: https://arxiv.org/pdf/2512.02556
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#LLM #AI #DeepLearning #ReinforcementLearning #GenerativeAI
✨Does Hearing Help Seeing? Investigating Audio-Video Joint Denoising for Video Generation
📝 Summary:
This paper shows audio-video joint denoising significantly improves video generation quality. By using audio as a privileged signal, the AVFullDiT model regularizes video dynamics, leading to better video quality beyond just synchrony.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02457
• PDF: https://arxiv.org/pdf/2512.02457
• Project Page: https://jianzongwu.github.io/projects/does-hearing-help-seeing/
• Github: https://github.com/jianzongwu/Does-Hearing-Help-Seeing
✨ Datasets citing this paper:
• https://huggingface.co/datasets/jianzongwu/ALT-Merge
• https://huggingface.co/datasets/jianzongwu/VGGSound-T2AV
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#VideoGeneration #MultimodalAI #DeepLearning #ComputerVision #AIResearch
✨PAI-Bench: A Comprehensive Benchmark For Physical AI
📝 Summary:
PAI-Bench is a new benchmark evaluating multi-modal LLMs and video generative models for physical AI perception and prediction. It reveals current models struggle with physical coherence, forecasting, and causal reasoning in real-world dynamics. This highlights significant gaps for future physica...
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.01989
• PDF: https://arxiv.org/pdf/2512.01989
• Github: https://github.com/SHI-Labs/physical-ai-bench
✨ Spaces citing this paper:
• https://huggingface.co/spaces/shi-labs/physical-ai-bench-leaderboard
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#PhysicalAI #LLMs #Benchmarking #GenerativeAI #ComputerVision
✨Revisiting the Necessity of Lengthy Chain-of-Thought in Vision-centric Reasoning Generalization
📝 Summary:
Concise Chain-of-Thought steps, specifically minimal visual grounding, are most effective for achieving generalizable visual reasoning in vision-language models. Longer or visual CoT primarily accelerates training but does not improve final performance or generalization across tasks.
🔹 Publication Date: Published on Nov 27
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.22586
• PDF: https://arxiv.org/pdf/2511.22586
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#ChainOfThought #VisionLanguageModels #VisualReasoning #AIGeneralization #DeepLearning
✨GUI Exploration Lab: Enhancing Screen Navigation in Agents via Multi-Turn Reinforcement Learning
📝 Summary:
GUI Exploration Lab is a simulation environment to train GUI agents for screen navigation. It finds supervised fine-tuning establishes basics, single-turn reinforcement learning improves generalization, and multi-turn RL enhances exploration for superior navigation performance.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02423
• PDF: https://arxiv.org/pdf/2512.02423
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#ReinforcementLearning #GUIAgents #AINavigation #MachineLearning #AIResearch
✨Benchmarking Scientific Understanding and Reasoning for Video Generation using VideoScience-Bench
📝 Summary:
VideoScience-Bench introduces a new benchmark evaluating video models' scientific reasoning. It assesses their ability to generate phenomena consistent with undergraduate physics and chemistry, filling a critical gap. It is the first to evaluate video models as scientific reasoners.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02942
• PDF: https://arxiv.org/pdf/2512.02942
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#VideoGeneration #AIResearch #ScientificReasoning #AIModels #Benchmarking
✨UnicEdit-10M: A Dataset and Benchmark Breaking the Scale-Quality Barrier via Unified Verification for Reasoning-Enriched Edits
📝 Summary:
This paper tackles image-editing performance gaps caused by data scarcity by introducing UnicEdit-10M, a 10M-scale, high-quality dataset built with a lightweight verification pipeline. It also proposes UnicBench, a new benchmark with novel metrics to diagnose reasoning limitations in models.
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02790
• PDF: https://arxiv.org/pdf/2512.02790
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#ImageEditing #AI #Dataset #Benchmark #ComputerVision
✨Guided Self-Evolving LLMs with Minimal Human Supervision
📝 Summary:
R-Few enables stable LLM self-evolution using a guided Self-Play Challenger-Solver framework with minimal human input. It leverages human examples for synthetic data and a curriculum for training, consistently improving math and reasoning.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02472
• PDF: https://arxiv.org/pdf/2512.02472
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#LLM #SelfEvolvingAI #MachineLearning #DeepLearning #AIResearch
✨DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation
📝 Summary:
DualCamCtrl is a novel diffusion model for camera-controlled video generation. It employs a dual-branch framework and Semantic Guided Mutual Alignment to generate consistent RGB and depth, better disentangling appearance and geometry for accurate camera trajectories.
🔹 Publication Date: Published on Nov 28
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.23127
• PDF: https://arxiv.org/pdf/2511.23127
• Project Page: https://soyouthinkyoucantell.github.io/dualcamctrl-page/
• Github: https://github.com/EnVision-Research/DualCamCtrl
🔹 Models citing this paper:
• https://huggingface.co/FayeHongfeiZhang/DualCamCtrl
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#DiffusionModels #VideoGeneration #ComputerVision #GenerativeAI #DeepLearning
✨DiG-Flow: Discrepancy-Guided Flow Matching for Robust VLA Models
📝 Summary:
DiG-Flow enhances VLA model robustness by using geometric regularization to align observation and action embeddings. It measures embedding discrepancy, applies residual updates, and consistently boosts performance on complex tasks and with limited data.
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.01715
• PDF: https://arxiv.org/pdf/2512.01715
• Project Page: https://beingbeyond.github.io/DiG-Flow/
• Github: https://beingbeyond.github.io/DiG-Flow
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#VLAModels #RobustAI #FlowMatching #MachineLearning #DeepLearning
✨Glance: Accelerating Diffusion Models with 1 Sample
📝 Summary:
Glance accelerates diffusion models with a phase-aware strategy using lightweight LoRA adapters. This method applies varying speedups across denoising stages, achieving up to 5x acceleration and strong generalization with minimal retraining on just 1 sample.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02899
• PDF: https://arxiv.org/pdf/2512.02899
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#DiffusionModels #ModelAcceleration #LoRA #DeepLearning #GenerativeAI
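The "varying speedups across denoising stages" idea can be sketched as a phase-aware step schedule (the phase split and speedup factors here are assumptions for illustration, not Glance's actual configuration):

```python
def phase_aware_schedule(num_steps=50, speedups=(2, 4, 8)):
    """Sketch of a phase-aware step schedule (factors are illustrative).

    Splits the denoising trajectory into equal phases and keeps every
    k-th step in each phase, so later phases are skipped more
    aggressively than the structure-defining early phase.
    """
    phase_len = num_steps // len(speedups)
    kept = []
    for p, k in enumerate(speedups):
        start = p * phase_len
        end = num_steps if p == len(speedups) - 1 else start + phase_len
        kept.extend(range(start, end, k))  # keep every k-th step
    return kept

steps = phase_aware_schedule()
print(len(steps))  # → 15 kept steps out of 50
```

Pairing such a schedule with per-phase LoRA adapters is one way a model could compensate for the coarser steps in each phase.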
✨Video4Spatial: Towards Visuospatial Intelligence with Context-Guided Video Generation
📝 Summary:
Video4Spatial uses video diffusion models with only visual data to perform complex spatial tasks like navigation and object grounding. It demonstrates strong spatial understanding, planning, and generalization, advancing visuospatial reasoning.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.03040
• PDF: https://arxiv.org/pdf/2512.03040
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#Video4Spatial #VisuospatialAI #DiffusionModels #SpatialReasoning #ComputerVision
✨YingVideo-MV: Music-Driven Multi-Stage Video Generation
📝 Summary:
YingVideo-MV is the first framework to generate high-quality, music-driven long performance videos with synchronized camera motion. It uses audio analysis, diffusion transformers, and a camera adapter, achieving precise music-motion-camera synchronization.
🔹 Publication Date: Published on Dec 2
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.02492
• PDF: https://arxiv.org/pdf/2512.02492
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#VideoGeneration #MusicAI #GenerativeAI #DiffusionModels #ComputerVision
✨SimScale: Learning to Drive via Real-World Simulation at Scale
📝 Summary:
SimScale is a simulation framework synthesizing diverse driving scenarios from logs. Co-training with this data significantly improves autonomous driving robustness and generalization, scaling with simulation data even without new real-world input.
🔹 Publication Date: Published on Nov 28
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.23369
• PDF: https://arxiv.org/pdf/2511.23369
• Project Page: https://opendrivelab.com/SimScale
• Github: https://github.com/OpenDriveLab/SimScale
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#AutonomousDriving #Simulation #AI #MachineLearning #Robotics
✨TRivia: Self-supervised Fine-tuning of Vision-Language Models for Table Recognition
📝 Summary:
TRivia is a self-supervised fine-tuning method for vision-language models to learn table recognition from unlabeled data. It uses a question-answering reward mechanism to autonomously optimize the model. This open-source solution outperforms state-of-the-art systems on popular benchmarks.
🔹 Publication Date: Published on Dec 1
🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2512.01248
• PDF: https://arxiv.org/pdf/2512.01248
• Github: https://github.com/opendatalab/TRivia
🔹 Models citing this paper:
• https://huggingface.co/opendatalab/TRivia-3B
✨ Spaces citing this paper:
• https://huggingface.co/spaces/opendatalab/TRivia-3B
==================================
For more data science resources:
✓ https://news.1rj.ru/str/DataScienceT
#TableRecognition #VisionLanguageModels #SelfSupervisedLearning #AI #DeepLearning