🎒 EG3D: source code is out! 🎒
👉#Nvidia just open-sourced EG3D: real-time multi-view faces w/ HQ #3D geometry!
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Tri-plane-based 3D GAN framework
✅Pose-correlated attribute (expression)
✅SOTA in uncond. 3D-aware synthesis
✅Source code & models NOW available!
More: https://bit.ly/3aOfHs0
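The core of EG3D is its tri-plane representation: a 3D point is projected onto three axis-aligned feature planes, and the gathered features are aggregated and decoded. A minimal toy sketch of that lookup (names, sizes, and nearest-neighbor sampling are illustrative assumptions; the real tri-planes are generated by a StyleGAN2 backbone and sampled bilinearly):

```python
# Toy tri-plane feature lookup in the spirit of EG3D.
# All names/sizes are illustrative, not the actual implementation.

import random

R = 4  # plane resolution (tiny for illustration)
C = 8  # feature channels per plane

def make_plane():
    # an R x R grid of C-dim feature vectors
    return [[[random.random() for _ in range(C)] for _ in range(R)] for _ in range(R)]

planes = {"xy": make_plane(), "xz": make_plane(), "yz": make_plane()}

def to_index(v):
    # map a coordinate in [-1, 1] to the nearest grid cell
    i = int((v + 1) / 2 * (R - 1) + 0.5)
    return min(max(i, 0), R - 1)

def sample_triplane(x, y, z):
    # project the 3D point onto the three axis-aligned planes,
    # fetch one feature from each, aggregate by summation
    f_xy = planes["xy"][to_index(x)][to_index(y)]
    f_xz = planes["xz"][to_index(x)][to_index(z)]
    f_yz = planes["yz"][to_index(y)][to_index(z)]
    return [a + b + c for a, b, c in zip(f_xy, f_xz, f_yz)]

feat = sample_triplane(0.1, -0.3, 0.7)
print(len(feat))  # a C-dim feature, fed to a small decoder MLP in EG3D
```

In the paper this aggregated feature is decoded to density and color for volume rendering, which is what makes the planes far cheaper than a full 3D voxel grid.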
🔥One Millisecond Backbone. Fire!🔥
👉MobileOne by #Apple: efficient mobile backbone with inference <1 ms on #iPhone12!
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅75.9% top-1 accuracy on ImageNet
✅38× faster than MobileFormer
✅Classification, detection & segmentation
✅Source code & model soon available!
More: https://bit.ly/3tsT7f2
🧨 Scaling Transformers to GigaPixels!🧨
👉Novel ViT called Hierarchical Image Pyramid Transformer (HIPT) -> Scaling to GigaPixels!
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Gigapixel whole-slide imaging (WSI)
✅Leveraging natural hier. structure of WSI
✅Self-supervised Hi-Res representations
✅Source code and models available!
More: https://bit.ly/3xLuzkg
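HIPT's hierarchy works by tiling: the gigapixel slide is cut into 4096 px regions, each region into 256 px patches, each patch into 16 px visual tokens, with a ViT at each level feeding pooled tokens upward. A sketch of just the tiling arithmetic (boundary handling is simplified; real pipelines pad or filter by tissue mask):

```python
# Sketch of HIPT's three-level tiling arithmetic (16 px -> 256 px -> 4096 px).
# Only the counting is shown; the model runs a ViT at each level.

def tile_counts(width, height, tile):
    # number of whole tiles along each axis; boundary tiles are dropped
    # in this toy version (real pipelines pad or filter by tissue mask)
    return (width // tile) * (height // tile)

def hipt_hierarchy(width, height):
    regions = tile_counts(width, height, 4096)
    patches_per_region = tile_counts(4096, 4096, 256)  # 16 * 16 = 256
    tokens_per_patch = tile_counts(256, 256, 16)       # 16 * 16 = 256
    return regions, patches_per_region, tokens_per_patch

# a 100k x 80k slide: typical gigapixel WSI scale
r, p, t = hipt_hierarchy(100_000, 80_000)
print(r, p, t)  # 24 * 19 = 456 regions, 256 patches each, 256 tokens each
```

The point of the hierarchy is that no single attention pass ever sees more than 256 tokens, which is how the model scales to gigapixel inputs.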
👗BodyMap: Hyper-Detailed Humans👗
👉#META unveils 1st-ever dense continuous correspondence for clothed humans
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅1st-ever dense continuous corresp.
✅HQ fingers, hair, and clothes
✅Novel ViT-based architecture
✅SOTA on DensePose COCO
More: https://bit.ly/39nEPps
🐹 NOAH just open-sourced! 🐹
👉A novel approach to find the optimal design of prompt modules through NAS algos.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅NOAH from Neural prOmpt seArcH
✅Parameter-efficient “prompt modules”
✅Efficient NAS-based implementation
✅Better than transfer, few-shot & domain gen.
More: https://bit.ly/3MKfVhi
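The search space in NOAH is per-layer choices among prompt-module types (adapter / LoRA / VPT). A toy random search over that space, with a stand-in scoring function (the real method trains a weight-sharing supernet and evaluates subnets on a validation set):

```python
# Toy random search over per-layer prompt-module choices, in the spirit
# of NOAH's NAS formulation. The scoring function is a placeholder.

import random

MODULES = ["adapter", "lora", "vpt"]
NUM_LAYERS = 12  # e.g. a ViT-B backbone

def sample_config():
    return [random.choice(MODULES) for _ in range(NUM_LAYERS)]

def score(config):
    # stand-in objective; a real search measures validation accuracy
    return sum(len(m) for m in config) + random.random()

def random_search(trials=50, seed=0):
    random.seed(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample_config()
        s = score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

cfg, s = random_search()
print(len(cfg))  # one prompt-module choice per transformer layer
```

NOAH's actual search is evolutionary over a supernet rather than blind random sampling, but the candidate encoding is the same idea: one module choice per layer.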
🏄🏻♀️Neural Super-Resolution in Movies🏄🏻♀️
👉Implicit neural representation to get arbitrary spatial resolution & FPS -> Super Resolution!
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Video as continuous video representation
✅Clips in arbitrary space/time resolution
✅OOD generalization in space-time
✅Source code and models available
More: https://bit.ly/3xsqccf
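The interface behind a continuous video representation: a coordinate network maps (x, y, t) directly to a pixel value, so any spatial resolution or frame rate is just a denser query grid. The tiny Fourier-feature "network" below is purely illustrative (fixed random weights instead of a fitted MLP):

```python
# Minimal sketch of querying a continuous video representation at
# arbitrary (x, y, t). The network here is a toy with random weights;
# the paper fits a learned MLP to a real clip.

import math
import random

random.seed(0)
FREQS = [(random.gauss(0, 3), random.gauss(0, 3), random.gauss(0, 3)) for _ in range(16)]
WEIGHTS = [random.gauss(0, 1) for _ in range(16)]

def video_inr(x, y, t):
    # Fourier features of the coordinate, then a fixed linear readout
    feats = [math.sin(2 * math.pi * (fx * x + fy * y + ft * t)) for fx, fy, ft in FREQS]
    return sum(w * f for w, f in zip(WEIGHTS, feats))

def render(width, height, t):
    # sample the continuous field on any grid, at any time stamp
    return [[video_inr(i / (width - 1), j / (height - 1), t)
             for i in range(width)] for j in range(height)]

frame_lo = render(8, 8, t=0.5)    # low-res frame
frame_hi = render(32, 32, t=0.5)  # 4x denser grid: spatial super-resolution
mid = render(8, 8, t=0.51)        # an in-between time stamp: arbitrary FPS
print(len(frame_hi), len(frame_hi[0]))
```

This is why one trained representation yields both spatial super-resolution and frame interpolation: both are just finer sampling of the same continuous function.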
🧠 Bias in #AI, explained simple 🧠
👉Asking DallE-Mini to help me show what BIAS in #AI is
𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐞𝐝 𝐒𝐚𝐦𝐩𝐥𝐞𝐬:
✅Best eng.->men/Caucasians
✅Best doctors->men/Caucasians
✅Top CEOs->men/Caucasians
✅Chef, kitchen->men/Caucasians
✅Rich People->only Caucasians
✅Poor People->non-Caucasians
✅Italian engineers->back in 30's
✅Chinese eng.->infrastructures
✅Italian working->local market
✅Chinese working->vegetables
✅Men workers->constructions
✅Women workers->only office
More: https://bit.ly/3b0UFqd
🦕 SAVi++: Segmentation by #Google 🦕
👉Novel unsupervised object-centric #AI to predict depth signals from slot-based video representation
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Segmenting complex dynamic scenes
✅Static/Moving objects on naturalistic BG
✅LiDAR-SAVi: segmenting in the wild
✅Source code and model soon available!
More: https://bit.ly/3n3hywd
✋HaGRID: Half Million Hands👋
👉Russian Sberbank opens HaGRID, an enormous dataset for hand gesture recognition (HGR). The "Peace" label is present 🔵🟡
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅552,992 samples, 18 classes
✅HD resolution in RGB format
✅BBox, gesture, leading hands
✅Dataset/models available
More: https://bit.ly/3n2cd8r
🔥 #AIwithPapers: we are 2,900+! 🔥
💙💛 Cheers from "Black Metal Lady Gaga" plotted by DallE-mini 💙💛
😈 Invite your friends -> https://news.1rj.ru/str/AI_DeepLearning
🍅Segmentation with INSANE Occlusions🍅
👉CMU unveils WALT: segmenting in severe occlusion scenarios, with performance beyond human-supervised methods.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅WALT: Watch & Learn Time-lapse
✅4K/1080p cams on streets over a year
✅Performance over human-supervised
✅Object-occluder-occluded neural layers
✅Source code under MIT license
More: https://bit.ly/3n7pvjO
🐠Largest Dataset for #autonomousdriving🐠
👉SHIFT: largest synthetic dataset for #selfdrivingcars. Shifts in cloud, rain, fog, time of day, vehicle & pedestrian density🤯
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅4,800+ clips, multi-view sensor suite
✅Semantic/instance, M/stereo depth
✅2D/3D object detection, MOT
✅Optical flow, point cloud registration
✅Visual-Odo, trajectory & human pose
More: https://bit.ly/3HJBUUT
🦑Big Egocentric Dataset by #Meta 🦑
👉Novel dataset to speed up research on egocentric MR/AI
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅159 sequences, multiple sensors
✅Scenarios: cooking, exercising, etc.
✅‘Desktop Activities’ via multi-view mocap
✅Dataset available upon request
More: https://bit.ly/3QDccVW
🦋Transf-Codebook HD-Face Restoration🦋
👉S-Lab unveils CodeFormer: hyper-detailed face restoration from degraded clips
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Face restoration as a code prediction
✅Discrete CB prior in small proxy space
✅Controllable transformation for LQ->HQ
✅Robustness and global coherence
✅Code and models soon available
More: https://bit.ly/3QEa9B5
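The mechanism underlying "restoration as code prediction" is quantization against a discrete codebook: degraded-face features are mapped to indices of a learned HQ codebook, and the decoder rebuilds the face from those clean entries. A toy nearest-neighbor lookup (codebook values here are random placeholders, not learned):

```python
# Toy nearest-neighbor quantization over a discrete codebook, the idea
# behind CodeFormer's code prediction. Codebook values are placeholders.

import random

random.seed(0)
CODEBOOK = [[random.gauss(0, 1) for _ in range(4)] for _ in range(16)]  # 16 codes, dim 4

def quantize(feature):
    # pick the index of the closest codebook entry (squared L2 distance)
    def dist(code):
        return sum((a - b) ** 2 for a, b in zip(feature, code))
    idx = min(range(len(CODEBOOK)), key=lambda i: dist(CODEBOOK[i]))
    return idx, CODEBOOK[idx]

idx, code = quantize([0.1, -0.2, 0.3, 0.0])
print(idx)  # CodeFormer's transformer predicts these indices directly
```

Because the codebook only contains HQ face codes, even a badly degraded input snaps to a plausible clean reconstruction, which is where the robustness in the highlights comes from.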
🍔 Fully Controllable "NeRF" Faces 🍔
👉Neural control of pose/expressions from single portrait video
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅NeRF-control of the human head
✅Loss of rigidity by dynamic NeRF
✅3D full control/modelling of faces
✅No source code or models yet 😢
More: https://bit.ly/3OEjwi7
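The NeRF machinery these controllable heads build on is volume rendering: alpha-compositing color samples along a camera ray using predicted densities. A minimal sketch with hand-picked sample values (a real model predicts density and color with an MLP, here conditioned on pose/expression):

```python
# Standard NeRF volume-rendering quadrature along one ray:
# alpha_i = 1 - exp(-sigma_i * delta),  T_i = prod_{j<i} (1 - alpha_j)

import math

def composite(densities, colors, delta=0.1):
    rendered, transmittance = 0.0, 1.0
    for sigma, c in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * delta)
        rendered += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return rendered

# empty space, then a surface: the ray color is dominated by the first
# high-density sample, later samples are occluded
pixel = composite(densities=[0.0, 0.0, 50.0, 50.0], colors=[0.0, 0.0, 0.8, 0.2])
print(round(pixel, 3))
```

Making pose and expression inputs to the density/color MLP is what turns this static renderer into a controllable one.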
🫀I M AVATAR: source code is out!🫀
👉Neural implicit head avatars from monocular videos
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅#3D morphing-based implicit avatar
✅Detailed Geometry/appearance
✅D-Rendering e2e learning from clips
✅Novel synthetic dataset for evaluation
More: https://bit.ly/3A2yzy9
🗺️Neural Translation Image -> Map🗺️
👉A novel method for instantaneous mapping as a translation problem
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Bird’s-eye-view (BEV) map from image
✅A restricted data-efficient transformer
✅Monotonic attention from lang.domain
✅SOTA across several datasets
More: https://bit.ly/39MQ76Z
🥶 E2V-SDE: biggest troll ever? 🥶
👉The E2V-SDE paper (accepted to #CVPR2022) consists of text copied from 10+ previously published papers 😂
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Latent ODEs for Irregularly-Sampled TS
✅Stochastic Adversarial Video Prediction
✅Continuous Latent Process Flows
✅More papers....
More: https://bit.ly/3bsL8Zw (AUDIO ON!)
🔥🔥YOLOv6 is out: PURE FIRE!🔥🔥
👉YOLOv6 is a single-stage object detection framework for industrial applications
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Efficient Decoupled Head with SIoU Loss
✅Hardware-friendly for Backbone/Neck
✅520+ FPS on T4 + TensorRT FP16
✅Released under GNU General Public License v3.0
More: https://bit.ly/3OLjncK
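The SIoU loss in YOLOv6's head extends plain box IoU with angle, distance, and shape penalties. Only the vanilla IoU baseline that all these losses build on is sketched here; boxes are (x1, y1, x2, y2):

```python
# Plain box IoU, the quantity that SIoU-style losses extend with angle,
# distance, and shape terms. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    # intersection rectangle (empty intersections clamp to zero size)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7
```

The extra SIoU terms matter because plain IoU gives zero gradient for non-overlapping boxes; the distance and angle penalties keep the regression informative in that regime.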
🐪 BlazePose: Real-Time Human Tracking 🐪
👉Novel real-time #3D human landmarks from #google. Suitable for mobile.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅MoCap from single RGB on mobile
✅Avatar, Fitness, #Yoga & AR/VR
✅Full body pose from monocular
✅Novel 3D ground truth acquisition
✅Additional hand landmarks
✅Fully integrated in #MediaPipe
More: https://bit.ly/3uvyiAv
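What makes BlazePose-style pipelines fast on mobile is the detector-tracker handoff: the person detector runs once, and on later frames the crop is derived from the previous frame's landmarks instead. A toy version of that ROI derivation (the 25% margin is an illustrative choice, not the value MediaPipe actually uses):

```python
# Toy detector-tracker handoff: derive the next frame's square crop from
# the previous frame's landmarks. Margin value is an assumption.

def roi_from_landmarks(points, margin=0.25):
    # points: list of (x, y) in pixels; returns a padded square crop
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    half = max(max(xs) - min(xs), max(ys) - min(ys)) / 2 * (1 + margin)
    return (cx - half, cy - half, cx + half, cy + half)

landmarks = [(100, 50), (140, 60), (120, 200)]  # e.g. shoulders and a hip
print(roi_from_landmarks(landmarks))
```

The detector is only re-invoked when the tracker loses the person, which is why the per-frame cost stays low enough for real-time mobile inference.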