💚 MatAnyone 2 is out! 💚
👉MatAnyone 2 is the most advanced human video matting framework: it preserves fine details by avoiding segmentation-like boundaries, while also showing enhanced robustness under challenging real-world conditions. Repo & Dataset announced💙
👉Review https://t.ly/vxOBO
👉Paper arxiv.org/pdf/2512.11782
👉Project pq-yang.github.io/projects/MatAnyone2
👉Repo github.com/pq-yang/MatAnyone2
💷 SOTA Zero-Shot Stereo Matching💷
👉Fast-FoundationStereo by #Nvidia is a novel family of architectures that achieves, for the first time, strong zero-shot generalization at real-time frame rates via divide-&-conquer acceleration. Code & Data announced💙
👉Review https://t.ly/XD6pO
👉Paper https://lnkd.in/d9_YKW2A
👉Project https://lnkd.in/dKDxm7EX
👉Repo https://lnkd.in/dR4-PdsW
👀DriverGaze360: Driver SOTA👀
👉DriverGaze360 is a large-scale 360° field-of-view driver attention dataset containing ~1M gaze-labeled frames. Code & Dataset announced💙
👉Review https://t.ly/ZcoUw
👉Paper arxiv.org/pdf/2512.14266
👉Project av.dfki.de/drivergaze360/
👉Repo github.com/dfki-av/drivergaze360
👉Data av.dfki.de/drivergaze360/dataset
🫠FlexAvatar: 3D Heads🫠
👉TUM introduces FlexAvatar, a novel method for creating HQ and complete 3D head avatars from a single image. Code announced💙
👉Review https://t.ly/Rkdtd
👉Paper arxiv.org/pdf/2512.15599
👉Project tobias-kirschstein.github.io/flexavatar/
👉Repo TBA
🏜️ Depth Any Panoramas 🏜️
👉DAP is the new SOTA foundation model for panoramic depth estimation, released with a large-scale dataset. Data & Repo under MIT💙
👉Review https://t.ly/LaUmd
👉Paper arxiv.org/pdf/2512.16913
👉Project https://lnkd.in/dvqNV9jx
👉Repo https://lnkd.in/dmNzhb-7
👉Demo https://lnkd.in/dDwjMF3u
🎯Generative Refocusing is out🎯
👉Generative Refocusing is a two-step process that uses DeblurNet to recover all-in-focus images from various inputs and BokehNet to create controllable bokeh (in semi-supervised mode). Repo under Apache 2.0💙
👉Review https://t.ly/8t7PA
👉Paper arxiv.org/pdf/2512.16923
👉Project generative-refocusing.github.io/
👉Repo github.com/rayray9999/Genfocus
👉Demo huggingface.co/spaces/nycu-cplab/Genfocus-Demo
⭐TOP 5 Papers you loved in 2025⭐
👉 In 2025, novel architectures redefined efficiency and accuracy, and almost every day brought a new SOTA in image understanding, tracking, and GenAI. It’s been an inspiring ride, and 2026 will be even wilder. This community (LinkedIn + Telegram) now counts 80,000+ people.
𝐏𝐚𝐩𝐞𝐫𝐬 (𝐛𝐲 𝐲𝐨𝐮𝐫 𝐩𝐫𝐞𝐟𝐞𝐫𝐞𝐧𝐜𝐞):
⭐3D LLM https://t.ly/ejr1s
⭐DynOMo https://t.ly/t5pCf
⭐Track Transf. https://t.ly/NPyW4
⭐YOLOv12 https://t.ly/jj1oR
⭐G-Surface Tracking https://t.ly/udpMq
Thank you all💙
🦙 Depth as Neural Implicit 🦙
👉InfiniDepth represents depth as neural implicit fields, enabling "infinite" (i.e. 16K) resolution and fine geometric detail. Repo under Apache 2.0💙
👉Review https://t.ly/4we5t
👉Paper https://lnkd.in/dpiHQExj
👉Project https://lnkd.in/dy3JxKye
👉Repo https://lnkd.in/dAXbnK5z
🌍Label Any Object in 3D 🌍
👉LabelAny3D: a novel analysis-by-synthesis framework that reconstructs holistic 3D scenes from 2D to efficiently produce HQ 3D bounding-box annotations. Repo under CC-BY-4.0 license💙
👉Review https://t.ly/bO93j
👉Paper https://lnkd.in/dYb97zWG
👉Project https://lnkd.in/dJ9UKERb
👉Repo https://lnkd.in/d9SxtmiA
🔥 New #AI Startups in 2026? 🔥
In 2026, which area would you focus on?
🤖Agents → workflows, copilots, etc.
🏭Vertical AI → Pharma, Automotive, Energy ...
🧠Infrastructure → MLOps, Security, Cost Control ...
🎨AI for Creators/Media → Video, avatars, contents ...
Please help me understand what's next via this poll on LinkedIn :)
https://www.linkedin.com/posts/visionarynet_ai-ai-deeplearning-activity-7415377341779996672-sQO1
LUV U \m/
🔥Orient Anything V2 is out🔥
👉Orient Anything V2 is a foundation model for unified understanding of 3D object orientation and rotation from single or paired images. Repo under CC-BY-4.0💙
👉Review https://t.ly/Ht7Xd
👉Paper arxiv.org/pdf/2601.05573
👉Project orient-anythingv2.github.io/
👉Repo github.com/SpatialVision/Orient-Anything-V2