💚 #META 3D Casual Captures 💚
👉#META unveils ShapeR, a novel approach for conditional 3D object shape generation from casually captured sequences. Impressive results. Repo under CC BY-NC 4.0💙
👉Review https://t.ly/j08sJ
👉Paper arxiv.org/pdf/2601.11514
👉Project facebookresearch.github.io/ShapeR/
👉Repo github.com/facebookresearch/ShapeR
💊Foundation Medical SAM3 💊
👉Medical SAM3: a foundation model for universal prompt-driven medical image segmentation, built by fully fine-tuning SAM3 on large-scale, heterogeneous 2D/3D medical imaging datasets with paired segmentation masks and text prompts. Repo & Demo announced💙
👉Review https://t.ly/C6jcy
👉Paper https://arxiv.org/pdf/2601.10880
👉Project chongcongjiang.github.io/MedicalSAM3/#
👉Repo github.com/AIM-Research-Lab/Medical-SAM3
🦧Mask-Guided Matting🦧
👉VideoMaMa is a novel diffusion-based model that converts binary masks into continuous alpha mattes. Repo, Dataset & Demo💙
👉Review https://t.ly/l_0f8
👉Paper arxiv.org/pdf/2601.14255
👉Project cvlab-kaist.github.io/VideoMaMa
👉Repo github.com/cvlab-kaist/VideoMaMa
👉Demo huggingface.co/spaces/SammyLim/VideoMaMa
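Why an alpha matte instead of a binary mask matters can be seen from the standard compositing equation, I = αF + (1−α)B. A minimal NumPy sketch (toy data, not VideoMaMa's actual pipeline):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Standard matting equation: I = alpha*F + (1-alpha)*B.

    alpha is a continuous matte in [0, 1]; a binary mask is the
    special case where alpha is only ever 0 or 1, which yields
    hard, jagged edges around hair and motion blur.
    """
    alpha = alpha[..., None]  # broadcast matte over RGB channels
    return alpha * fg + (1.0 - alpha) * bg

# Toy 2x2 example: a soft matte blends foreground and background.
fg = np.full((2, 2, 3), 1.0)   # white foreground
bg = np.zeros((2, 2, 3))       # black background
alpha = np.array([[1.0, 0.5],
                  [0.5, 0.0]])
out = composite(fg, bg, alpha)
print(out[0, 1])  # [0.5 0.5 0.5] — a half-transparent edge pixel
```

A mask-to-matte model like VideoMaMa predicts the fractional α values that a binary mask throws away.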
💜MoRo: Human Motion💜
👉Masked modeling for human motion Recovery under Occlusions. Given a monocular video captured from a static camera, MoRo (by ETHZ & META) robustly reconstructs accurate/physically plausible human motion, even under challenging occlusions. Repo released💙
👉Review https://t.ly/kK_je
👉Paper arxiv.org/pdf/2601.16079
👉Project mikeqzy.github.io/MoRo/
👉Repo github.com/mikeqzy/MoRo
🔥 BBoxMaskPose v2 is fire 🔥
👉BBoxMaskPose v2 by ČVUT offers SOTA performance in detection, segmentation & 2D pose in crowded scenes. It enables 3D human reconstruction even in scenes with complex interactions. Code, Models & data available💙
👉Review https://t.ly/GkkDl
👉Paper arxiv.org/pdf/2601.15200
👉Project https://lnkd.in/dQ_3hxjC
👉Repo https://lnkd.in/dVqwD3jN
🦠Generalized-Scale Counting🦠
👉GeCo2 (Ljubljana) is a novel e2e SOTA few-shot counting method that explicitly addresses object-scale issues. Repo & Demo 💙
👉Review https://t.ly/2_7I8
👉Paper https://arxiv.org/pdf/2511.08048
👉Repo https://github.com/jerpelhan/GECO2
👉Demo huggingface.co/spaces/jerpelhan/GECO2-demo
🔥🔥Super-Hard Poll folks🔥🔥
👉 This dilemma is driving me crazy. Vote: https://www.linkedin.com/posts/visionarynet_activity-7421974594917588992-YNAG
(and of course comment here)
🌻MLLMs Fine Segmentation🌻
👉SimpleSeg: MLLMs with native pixel-level perception. Repo & Model available💙
👉Review https://t.ly/eVguh
👉Paper arxiv.org/pdf/2601.19228
👉Project simpleseg.github.io/
👉Repo github.com/songtianhui/SimpleSeg
🔥 DeepSeek-OCR 2 is out 🔥
👉DeepSeek-AI announced the new version of its powerful SOTA OCR. A new architectural approach with the potential to achieve genuine 2D reasoning. Code & weights💙
👉Review https://t.ly/gX4bX
👉Paper https://arxiv.org/pdf/2601.20552
👉Repo github.com/deepseek-ai/DeepSeek-OCR-2
📊 SOTA Style Transfer 📊
👉TeleAI unveils TeleStyle, a lightweight yet effective model for image/video stylization. Built upon Qwen-Image-Edit, TeleStyle leverages the base model’s robust capabilities in content preservation & style customization. Code & Model released💙
👉Review https://t.ly/viVR0
👉Paper arxiv.org/pdf/2601.20175
👉Project tele-ai.github.io/TeleStyle/
👉Repo github.com/Tele-AI/TeleStyle
🍑 Metric Anything is out 🍑
👉Metric Anything (Li Auto Inc.) is a simple and scalable pretraining framework that learns metric depth from noisy, diverse 3D sources without manually engineered prompts, camera-specific modeling, or task-specific architectures. Impressive. Code announced 💙
👉Review https://t.ly/54Ccr
👉Paper arxiv.org/pdf/2601.22054
👉Project metric-anything.github.io/metric-anything-io/
👉Repo github.com/metric-anything/metric-anything
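The metric-vs-relative distinction can be made concrete: relative-depth models are correct only up to a per-image scale and shift, which evaluation pipelines typically fit by least squares, while a metric model must output meters directly. A hedged NumPy sketch of that standard alignment step (toy data, not Metric Anything's own code):

```python
import numpy as np

def align_scale_shift(rel_depth, metric_depth):
    """Least-squares fit of scale s and shift t so that
    s * rel_depth + t best matches metric ground truth.
    A relative-depth prediction is only defined up to (s, t);
    a metric-depth model predicts meters with no such fit.
    """
    A = np.stack([rel_depth.ravel(), np.ones(rel_depth.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, metric_depth.ravel(), rcond=None)
    return s, t

rel = np.array([0.1, 0.2, 0.4])      # unitless relative depth
metric = 10.0 * rel + 2.0            # ground truth in meters: s=10, t=2
s, t = align_scale_shift(rel, metric)
print(round(s, 6), round(t, 6))  # 10.0 2.0
```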
🌈Segment Any Events by Language🌈
👉SEAL (by NUS) is the first Semantic-aware Segment Any Events framework that addresses Open-Vocabulary Event Instance Segmentation. Code announced💙
👉Review https://t.ly/1ZMF0
👉Paper https://arxiv.org/pdf/2601.23159
👉Project https://0nandon.github.io/SEAL/
👉Repo https://github.com/0nandon/SEAL
👉RAM prices skyrocketing
👉Me acting like a rich kid.
Let's talk: https://www.linkedin.com/posts/visionarynet_ai-ram-ddr5-activity-7424127924020072448-NbaO
🐮CoWTracker: Track-Warping🐮
👉CoWTracker (VGG + META) is a novel dense point tracker that eschews cost volumes in favor of warping. Code/Models under FAIR NC💙
👉Review https://t.ly/6bAn9
👉Paper https://arxiv.org/pdf/2602.04877
👉Project https://cowtracker.github.io/
👉Repo https://github.com/facebookresearch/cowtracker
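The "warping instead of cost volumes" idea rests on backward warping: each output pixel samples the source image at a displaced location with bilinear interpolation. A minimal NumPy sketch of that primitive (illustrative only, not CoWTracker's implementation):

```python
import numpy as np

def bilinear_warp(img, flow):
    """Backward-warp a grayscale image (H, W) by a dense field
    `flow` (H, W, 2) holding (dx, dy) per pixel: each output pixel
    reads img at (x + dx, y + dy) via bilinear interpolation.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
    bot = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
    return (1 - wy) * top + wy * bot

# A flow of (+1, 0) everywhere makes each pixel read its right neighbor.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
flow = np.zeros((4, 4, 2)); flow[..., 0] = 1.0
print(bilinear_warp(img, flow)[0, 0])  # 1.0 (the value that was at x=1)
```

Because the sample locations are continuous, the operation is differentiable in the flow, which is what makes warping usable as a learning primitive.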
🌈TrajVG Trajectory-Geometry🌈
👉TrajVG is a novel reconstruction framework that makes cross-frame 3D correspondence an explicit prediction by estimating camera-coordinate 3D trajectories. Code announced💙
👉Review https://t.ly/yVi01
👉Paper arxiv.org/pdf/2602.04439
👉Project xingy038.github.io/TrajVG/
👉Repo github.com/xingy038/TrajVG
🪙MOMENTUM #NeurIPS 2025 🪙
👉MOMENTUM by Google (H/T Huguens Jean, Ph.D.) is a production multimodal agent architecture built on the Google ADK. It orchestrates 22 specialized tools (Gemini for reasoning, Imagen 4.0 for image generation, and Veo 3.1 for synthesis). Code announced💙
👉Review https://t.ly/06h7Q
👉Paper https://momentum-project-page-232993426383.us-central1.run.app/momentum_paper.pdf
👉Project https://momentum-project-page-232993426383.us-central1.run.app/
👉Repo TBA
😶🌫️ SOTA Full-Head Synthesis 😶🌫️
👉HyPlaneHead is the new SOTA in full-head image synthesis, delivering HQ results with significantly fewer artifacts than existing 3D-aware models. Repo announced💙
👉Review https://t.ly/WYfP3
👉Paper arxiv.org/pdf/2509.16748
👉Project https://lhyfst.github.io/hyplanehead/
👉Repo github.com/lhyfst/HyPlaneHead
🍟 AnyTouch 2 is out 🍟
👉AnyTouch 2 is a general tactile representation learning framework for diverse optical tactile sensors that unifies object-level understanding with fine-grained, force-aware dynamic perception. Repo, Model & Data💙
👉Review https://t.ly/fP4dP
👉Paper https://arxiv.org/pdf/2602.09617
👉Project gewu-lab.github.io/AnyTouch2/
👉Repo github.com/GeWu-Lab/AnyTouch2