⭐TOP 5 Papers you loved in 2025⭐
👉 In 2025, novel architectures redefined efficiency and accuracy, and almost every day brought a new SOTA in image understanding, tracking, and GenAI. It’s been an inspiring ride, and 2026 will be even wilder. This community (LinkedIn + Telegram) now counts more than 80,000 people.
𝐏𝐚𝐩𝐞𝐫𝐬 (𝐛𝐲 𝐲𝐨𝐮𝐫 𝐩𝐫𝐞𝐟𝐞𝐫𝐞𝐧𝐜𝐞):
⭐3D LLM https://t.ly/ejr1s
⭐DynOMo https://t.ly/t5pCf
⭐Track Transf. https://t.ly/NPyW4
⭐YOLOv12 https://t.ly/jj1oR
⭐G-Surface Tracking https://t.ly/udpMq
Thank you all💙
🦙 Depth as Neural Implicit 🦙
👉InfiniDepth represents depth as neural implicit fields, enabling "infinite" (i.e., 16K) resolution and fine geometric detail. Repo under Apache 2.0💙
👉Review https://t.ly/4we5t
👉Paper https://lnkd.in/dpiHQExj
👉Project https://lnkd.in/dy3JxKye
👉Repo https://lnkd.in/dAXbnK5z
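The core idea behind "depth as a neural implicit field" can be sketched in a few lines: instead of storing a fixed-resolution depth map, a small network maps continuous pixel coordinates to depth, so the field can be queried at any sampling density. This is only an illustrative sketch with random weights (the MLP shape, Fourier encoding, and all names here are assumptions, not InfiniDepth's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fourier positional encoding: lift 2D coords to higher frequencies so a
# small MLP can represent fine detail. B is a random frequency matrix.
B = rng.normal(scale=4.0, size=(2, 32))

def encode(xy):
    proj = 2 * np.pi * xy @ B                                    # (N, 32)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)  # (N, 64)

# Tiny 2-layer MLP: encoded coords -> scalar depth. Weights are random
# here; a real model would be fit to depth observations.
W1 = rng.normal(scale=0.1, size=(64, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, 1));  b2 = np.zeros(1)

def depth_field(xy):
    """xy: (N, 2) coordinates in [0, 1]^2 -> (N,) depth values."""
    h = np.maximum(encode(xy) @ W1 + b1, 0.0)  # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)

# The field is continuous: querying at a 4x4 or a 16K-wide grid is just
# a choice of sample coordinates, not a property of the stored model.
for res in (4, 16384):
    u = (np.arange(3) + 0.5) / res             # first 3 samples of the grid
    xy = np.stack(np.meshgrid(u, u), axis=-1).reshape(-1, 2)
    print(res, depth_field(xy).shape)
```

Because output resolution is decoupled from storage, "16K depth" costs no more memory than any other query grid, only more forward passes.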
🌍Label Any Object in 3D 🌍
👉LabelAny3D: a novel analysis-by-synthesis framework that reconstructs holistic 3D scenes from 2D images to efficiently produce high-quality 3D bounding-box annotations. Repo under CC-BY-4.0 license💙
👉Review https://t.ly/bO93j
👉Paper https://lnkd.in/dYb97zWG
👉Project https://lnkd.in/dJ9UKERb
👉Repo https://lnkd.in/d9SxtmiA
🔥 New #AI Startups in 2026? 🔥
In 2026, which area would you focus on?
🤖Agents → workflows, copilots, etc.
🏭Vertical AI → Pharma, Automotive, Energy ...
🧠Infrastructure → MLOps, Security, Cost Control ...
🎨AI for Creators/Media → Video, avatars, contents ...
Please help me understand what's next with this poll on LinkedIn :)
https://www.linkedin.com/posts/visionarynet_ai-ai-deeplearning-activity-7415377341779996672-sQO1
LUV U \m/
🔥Orient Anything V2 is out🔥
👉Orient Anything V2 is a foundation model for unified understanding of object 3D orientation and rotation from single or paired images. Repo under CC-BY-4.0💙
👉Review https://t.ly/Ht7Xd
👉Paper arxiv.org/pdf/2601.05573
👉Project orient-anythingv2.github.io/
👉Repo github.com/SpatialVision/Orient-Anything-V2
🫛Active Object Reconstruction🫛
👉ObjSplat (Beijing) autonomously plans viewpoints and progressively reconstructs an unknown object into a Hi-Fi Gaussian model and watertight mesh, enabling direct use in physics simulations. Tough paper; repo announced💙
👉Review https://t.ly/au6HE
👉Paper arxiv.org/pdf/2601.06997
👉Project li-yuetao.github.io/ObjSplat-page/
👉Repo https://github.com/Li-Yuetao/ObjSplat
In 2026, who should we keep an eye on?
Vote: https://www.linkedin.com/posts/visionarynet_ai-deeplearning-aiwithpapers-activity-7416886610795077632-qQeP/
👉Games Workshop (Warhammer) is banning the use of AI in creative and design processes to protect IP and human creativity. A decision that goes against the current hype of widespread AI adoption.
And what about your organization? I need your help👇
Vote: https://www.linkedin.com/posts/visionarynet_ai-activity-7417106327019196417-TpGL
💚Segment Anything Geometry💚
👉3AM (NYCU + #Nvidia) offers cross-view correspondence even under large viewpoint changes, cluttered scenes, and variations in capture conditions, enabling robust object tracking from both videos & casual multi-view images. Repo (coming) & Demo available💙
👉Review https://t.ly/olZwE
👉Paper https://arxiv.org/pdf/2601.08831
👉Project https://jayisaking.github.io/3AM-Page/
👉Repo https://github.com/jayisaking
👉Demo https://huggingface.co/spaces/nycu-cplab/3AM
🎇 Multi-target SAM3 🎇
👉SAM3-DMS is a novel training-free, decoupled strategy that performs fine-grained memory selection per object, yielding robust identity preservation and tracking stability. Repo under SAM License💙
👉Review https://t.ly/jJOAr
👉Paper https://arxiv.org/pdf/2601.09699
👉Repo https://github.com/FudanCVL/SAM3-DMS