🦑Big Egocentric Dataset by #Meta 🦑
👉Novel dataset to speed up research on egocentric MR/AI
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅159 sequences, multiple sensors
✅Scenarios: cooking, exercising, etc.
✅‘Desktop Activities’ via multi-view mocap
✅Dataset available upon request
More: https://bit.ly/3QDccVW
🦋Transf-Codebook HD-Face Restoration🦋
👉S-Lab unveils CodeFormer: hyper-detailed face restoration from degraded clips
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Face restoration as a code prediction
✅Discrete CB prior in small proxy space
✅Controllable transformation for LQ->HQ
✅Robustness and global coherence
✅Code and models soon available
More: https://bit.ly/3QEa9B5
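To make the code-prediction idea above concrete, here is a minimal sketch (assumed shapes and module names, not the official CodeFormer code): a transformer reads low-quality face tokens and predicts discrete indices into a learned HQ codebook, whose entries a decoder would then render back into a face.

```python
# Minimal sketch, assuming a pre-trained VQ autoencoder for HQ faces;
# all names/shapes here are illustrative, not CodeFormer's actual code.
import torch
import torch.nn as nn

class CodePredictor(nn.Module):
    def __init__(self, num_codes=1024, dim=256):
        super().__init__()
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=9,
        )
        self.to_logits = nn.Linear(dim, num_codes)  # one logit per codebook entry

    def forward(self, lq_tokens):                   # (B, T, dim) LQ encoder tokens
        return self.to_logits(self.transformer(lq_tokens))

predictor = CodePredictor()
lq_tokens = torch.randn(1, 256, 256)                # hypothetical LQ features
indices = predictor(lq_tokens).argmax(dim=-1)       # (B, T) discrete HQ codes
codebook = torch.randn(1024, 256)                   # stand-in learned HQ codebook
quantized = codebook[indices]                       # (B, T, dim) -> feed a decoder
```

Restoring from discrete codes keeps outputs on the HQ-face manifold, which is where the robustness to heavy degradation comes from.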
🍔 Fully Controllable "NeRF" Faces 🍔
👉Neural control of pose/expressions from a single portrait video
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅NeRF-control of the human head
✅Non-rigid deformation via dynamic NeRF
✅Full 3D control/modelling of faces
✅No source code or models yet 😢
More: https://bit.ly/3OEjwi7
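For intuition, a hedged sketch of how such control typically works (illustrative only, not the paper's code): the radiance MLP is conditioned on per-frame pose/expression codes, e.g. 3DMM coefficients, so changing the code re-poses the head.

```python
# Toy conditioned dynamic-NeRF forward pass; dimensions are assumptions.
import torch
import torch.nn as nn

class ConditionedNeRF(nn.Module):
    def __init__(self, pos_dim=63, cond_dim=56, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                   # (r, g, b, sigma) per sample
        )

    def forward(self, x_enc, cond):
        # x_enc: positionally-encoded 3D ray samples; cond: pose+expression code
        cond = cond.expand(x_enc.shape[0], -1)
        return self.mlp(torch.cat([x_enc, cond], dim=-1))

model = ConditionedNeRF()
x_enc = torch.randn(4096, 63)                       # encoded ray samples
cond = torch.randn(1, 56)                           # hypothetical pose+expr vector
rgb_sigma = model(x_enc, cond)                      # (4096, 4)
```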
🫀I M AVATAR: source code is out!🫀
👉Neural implicit head avatars from monocular videos
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅#3D morphing-based implicit avatar
✅Detailed geometry/appearance
✅Differentiable rendering, e2e learning from clips
✅Novel synthetic dataset for evaluation
More: https://bit.ly/3A2yzy9
🗺️Neural Translation Image -> Map🗺️
👉A novel method for instantaneous mapping as a translation problem
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Bird’s-eye-view (BEV) map from image
✅A restricted data-efficient transformer
✅Monotonic attention from the language domain
✅SOTA across several datasets
More: https://bit.ly/39MQ76Z
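A rough sketch of the translation framing (assumed shapes, not the authors' implementation): each vertical image column is treated as a source sequence and decoded into the corresponding polar ray of the BEV map, just as a sentence is decoded in machine translation.

```python
# Toy column-to-ray "translation"; dims and query scheme are assumptions.
import torch
import torch.nn as nn

dim, img_h, bev_depth = 128, 32, 48
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)
ray_queries = nn.Parameter(torch.randn(1, bev_depth, dim))  # one query per depth bin

column_feats = torch.randn(1, img_h, dim)      # features of one vertical image column
bev_ray = decoder(ray_queries, column_feats)   # (1, bev_depth, dim): one BEV ray
```

Repeating this per column (with camera geometry fixing the column-to-ray pairing) fills the whole bird's-eye-view map.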
🥶 E2V-SDE: biggest troll ever? 🥶
👉E2V-SDE paper (accepted to #CVPR2022) consists of text copied from 10+ previously published papers 😂
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Latent ODEs for Irregularly-Sampled TS
✅Stochastic Adversarial Video Prediction
✅Continuous Latent Process Flows
✅More papers...
More: https://bit.ly/3bsL8Zw (AUDIO ON!)
🔥🔥YOLOv6 is out: PURE FIRE!🔥🔥
👉YOLOv6 is a single-stage object detection framework for industrial applications
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Efficient Decoupled Head with SIoU Loss
✅Hardware-friendly for Backbone/Neck
✅520+ FPS on T4 + TensorRT FP16
✅Released under GNU General Public License v3.0
More: https://bit.ly/3OLjncK
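As background on the first bullet, here is what a decoupled head looks like in general (an illustrative PyTorch sketch, not YOLOv6's exact layers): classification and box regression run in separate branches instead of sharing one conv stack.

```python
# Generic decoupled detection head; channel counts are assumptions.
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    def __init__(self, in_ch=256, num_classes=80):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, in_ch, 1)
        self.cls_branch = nn.Sequential(          # class scores only
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, num_classes, 1),
        )
        self.reg_branch = nn.Sequential(          # box offsets + objectness
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, 4 + 1, 1),
        )

    def forward(self, x):
        x = self.stem(x)
        return self.cls_branch(x), self.reg_branch(x)

head = DecoupledHead()
cls_out, reg_out = head(torch.randn(1, 256, 20, 20))
```

Separating the two tasks avoids the classification/localization conflict of coupled heads; the SIoU loss then adds angle/distance/shape penalties on top of plain IoU for the regression branch.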
🐪 BlazePose: Real-Time Human Tracking 🐪
👉Novel real-time #3D human landmarks from #google. Suitable for mobile.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅MoCap from single RGB on mobile
✅Avatar, Fitness, #Yoga & AR/VR
✅Full body pose from monocular
✅Novel 3D ground truth acquisition
✅Additional hand landmarks
✅Fully integrated in #MediaPipe
More: https://bit.ly/3uvyiAv
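BlazePose can be tried in a few lines via MediaPipe's Python solutions API (API as of the 2022 releases; defaults may differ across versions):

```python
import cv2
import mediapipe as mp

# model_complexity=2 selects the heaviest (most accurate) BlazePose variant
with mp.solutions.pose.Pose(static_image_mode=True,
                            model_complexity=2) as pose:
    image = cv2.imread("person.jpg")
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_world_landmarks:              # 3D landmarks, metric space
        for lm in results.pose_world_landmarks.landmark:
            print(lm.x, lm.y, lm.z, lm.visibility)
```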
🔥YOLOv7: YOLO for segmentation🔥
👉YOLOv7 adds a host of new capabilities to the YOLO architecture family.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅YOLOv7 is not a successor of the YOLO family!
✅Framework for detection & segmentation
✅Applications based on #META detectron2
✅DETR & ViT detection out-of-box
✅Easy pipeline support through #ONNX
✅YOLOv4 + instance segmentation in a single stage
✅The latest YOLOv6 training is supported!
✅Source code under GPL license.
More: https://bit.ly/3ysSJAp
🔥🔥 HD Dichotomous Segmentation 🔥🔥
👉 A new task: highly accurate object segmentation from natural images.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅5,000+ HD images + accurate binary masks
✅IS-Net baseline in high-dim feature spaces
✅HCE metric: model outputs vs. human correction efforts
✅Source code (should be) available soon
More: https://bit.ly/3ah2BDO
🔥🔥 Neural Segmentation on fire 🔥🔥
👉Novel method for segmentation with mask calibration. Robustness++ in VOS.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Study: VOS robustness vs. perturbations
✅Adaptive object proxy (AOP) aggregation
✅Fewer errors from unstable pixel-level matching
✅Code/models (should be) available soon
More: https://bit.ly/3yhIY6Q
😊😎 Seq-DeepFake via Transformers 😎😊
👉S-Lab open-sources Seq-DeepFake: Detecting Sequential DeepFake Manipulation
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Seq-DeepFake: sequences of facial edits
✅Dataset: 85k #deepfake manipulations
✅Powerful Seq-DeepFake Transformer
✅Code, dataset and models available!
More: https://bit.ly/3ACQXhi
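The detection-as-sequence-prediction idea, in a hedged sketch (hypothetical edit vocabulary and shapes, not the released code): a transformer decoder attends over image tokens and autoregressively emits the chain of facial edits.

```python
# Toy sequential-manipulation predictor; all sizes are assumptions.
import torch
import torch.nn as nn

num_ops, dim = 30, 256                         # hypothetical edit vocabulary
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)
op_embed = nn.Embedding(num_ops, dim)
to_logits = nn.Linear(dim, num_ops)

img_tokens = torch.randn(1, 49, dim)           # encoder features of a face image
ops_so_far = torch.tensor([[1, 5]])            # previously predicted edit tokens
logits = to_logits(decoder(op_embed(ops_so_far), img_tokens))
next_op = logits[:, -1].argmax(-1)             # next predicted manipulation
```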
🦒 Text2LIVE: Text-Driven Neural Editing 🦒
👉#Amazon unveils a novel #AI for text-driven edit of videos. Insane! 🤯
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Semantic edits of real-world videos
✅RGBA edit layer representing the target (sketch below)
✅Edit layers synthesized from a single input
✅No masks or pre-trained generator needed
More: https://bit.ly/3NVP6aE
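The edit-layer idea reduces to standard alpha compositing; a toy NumPy illustration (not the training pipeline):

```python
# The generator outputs colors + alpha; the edit is composited over
# the untouched source frame, so the original content is preserved.
import numpy as np

frame = np.random.rand(256, 256, 3)            # original video frame
edit_rgb = np.random.rand(256, 256, 3)         # generated edit colors
alpha = np.random.rand(256, 256, 1)            # generated opacity

composite = alpha * edit_rgb + (1.0 - alpha) * frame
```

Because only the layer is synthesized, the method needs no masks and no pre-trained generator, as the bullets above note.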
📟📟AI-Designed Circuits with Deep RL📟📟
👉#Nvidia unveils an #AI to design circuits from scratch, smaller and faster than SOTA ones
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Parallel prefix circuits for Hi-Perf
✅RL framework to explore the circuit space
✅Smaller, Faster, Power-- from scratch
More: https://bit.ly/3yY9dk7
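For context on "parallel prefix circuits": they compute all running results in logarithmic depth. A toy Kogge-Stone-style scan in Python (the paper's RL agent searches the far larger space of such circuit topologies):

```python
def kogge_stone_scan(xs, op):
    """Log-depth parallel prefix: out[i] == op-fold of xs[0..i]."""
    out, d = list(xs), 1
    while d < len(out):                        # log2(n) levels
        out = [out[i] if i < d else op(out[i - d], out[i])
               for i in range(len(out))]
        d *= 2
    return out

print(kogge_stone_scan([1, 2, 3, 4, 5], lambda a, b: a + b))  # [1, 3, 6, 10, 15]
```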
👽 Neural I2I with a few shots 👽
👉#Alibaba unveils a novel portrait stylization. Limited samples (∼100) -> HD outputs
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Calibration first, translation later
✅Balanced distribution to calibrate bias
✅Spatially semantic constraints via geometry
✅Source code and models soon available!
More: https://bit.ly/3IwOmHO
🤹‍♂️ K-Means Mask Transformer 🤹‍♂️
👉#Google AI unveils kMaX-DeepLab, novel E2E method for segmentation
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅kMaX-DeepLab: k-means Mask Xformer
✅Rethinking the pixel/object-query relationship (sketch below)
✅Cross-attention -> k-means clustering
✅The new SOTA on several datasets
More: https://bit.ly/3O2QV5I
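The core substitution, sketched (assumed shapes, not the official DeepLab2 code): swap softmax-over-pixels cross-attention for a k-means-style hard assignment of pixels to object queries, then update each query from its assigned pixels.

```python
import torch
import torch.nn.functional as F

pixels = torch.randn(4096, 128)                # N pixel features
queries = torch.randn(128, 128)                # K object queries (cluster centers)

logits = pixels @ queries.t()                  # (N, K) affinities
# k-means twist: hard-assign each pixel to its argmax query
assign = F.one_hot(logits.argmax(dim=1), queries.shape[0]).float()

# update each query as the mean of its assigned pixel features
counts = assign.sum(dim=0).clamp(min=1).unsqueeze(1)
queries_updated = (assign.t() @ pixels) / counts
```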
☀️ 4D Neural Relightable Humans ☀️
👉Relighting4D: free-viewpoint relighting of humans under unknown illuminations
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Relights dynamic humans from free viewpoints
✅Disentangled reflectance/geometry
✅SOTA on synthetic/real datasets
✅Code/models under MIT License
More: https://bit.ly/3RF3yH9
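Why disentanglement enables relighting, in a toy Lambertian example (illustration only; the paper uses a full neural-field pipeline): once albedo and normals are recovered, any new light direction re-shades the person.

```python
import numpy as np

albedo = np.random.rand(256, 256, 3)           # recovered reflectance (stand-in)
normals = np.random.rand(256, 256, 3) * 2 - 1  # recovered normals (stand-in)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
light = np.array([0.0, 0.5, 1.0])
light /= np.linalg.norm(light)

shading = np.clip(normals @ light, 0, None)[..., None]  # Lambertian n·l term
relit = albedo * shading                       # re-shaded under the new light
```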
🍰 Long-Term Object Segmentation 🍰
👉XMem: object segmentation for long clips with unified feature memory stores
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Inspired by Atkinson–Shiffrin model
✅Stores with different temporal scales
✅Memory consolidation algorithm
✅Compact/powerful long-term memory
✅Source code and models available
More: https://bit.ly/3PP0EOn
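A toy sketch of the multi-store mechanic (hypothetical structure and policies; see the paper for the actual consolidation algorithm): a bounded working memory that consolidates its most-used entry into long-term memory before evicting.

```python
import torch

class TwoStoreMemory:
    def __init__(self, work_cap=8):
        self.work, self.usage, self.long_term = [], [], []
        self.work_cap = work_cap

    def add(self, key, value):
        self.work.append((key, value))
        self.usage.append(0)
        if len(self.work) > self.work_cap:     # consolidate before evicting
            top = max(range(len(self.usage)), key=self.usage.__getitem__)
            self.long_term.append(self.work[top])
            self.work.pop(0)
            self.usage.pop(0)

    def read(self, query):
        entries = self.work + self.long_term
        keys = torch.stack([k for k, _ in entries])
        best = (keys @ query).argmax().item()  # similarity-based lookup
        if best < len(self.work):
            self.usage[best] += 1              # track usage for consolidation
        return entries[best][1]

mem = TwoStoreMemory()
for _ in range(10):                            # e.g. one (key, value) per frame
    mem.add(torch.randn(64), torch.randn(64))
value = mem.read(torch.randn(64))
```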
AI with Papers - Artificial Intelligence & Deep Learning
🦔 CogVideo: insane text-to-clip 🦔
👉CogVideo: 9B-parameters world's first large scale open-source text-to-video 😵
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Largest open-source T2C transformer
✅Finetuning of text-to-image model
✅Multi-frame-rate hierarchical training
✅From pretrained…
🔥🔥 Update 🔥🔥
👉Code https://github.com/THUDM/CogVideo
👉Demo https://wudao.aminer.cn/cogvideo/
More: https://bit.ly/3yP86BQ
🔥Grand Unification of Object Tracking🔥
👉UNICORN: unified method for SOT, MOT, VOS, & MOTS with a single neural net. 🤯
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Great unification for 4 tracking tasks
✅Bridges methods via pixel-wise correspondence (sketch below)
✅SOTA on 8 challenging benchmarks
✅Source code under MIT License
More: https://bit.ly/3o74h6g
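A guess at the shared primitive, sketched (assumed shapes, not the released code): dense pixel-wise correspondence between reference and current-frame embeddings, on top of which box-level (SOT/MOT) or mask-level (VOS/MOTS) readouts serve all four tasks.

```python
import torch

ref = torch.randn(1, 64, 32, 32)               # reference-frame embedding
cur = torch.randn(1, 64, 32, 32)               # current-frame embedding

# dense all-pairs correspondence between the two frames
corr = torch.einsum("bchw,bcxy->bhwxy", ref, cur)
# each reference pixel -> distribution over current-frame pixels
corr = corr.flatten(3).softmax(dim=-1)         # (1, 32, 32, 1024)
```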