I recently realized that you can make GIFs like these in Telegram out of stickers and photos.
Forwarded from CGIT_Vines (CGIT_Vines)
I was surprised to discover that some people still don't know, so just a reminder that Blender has Grease Pencil.
Jama has been championing it for ages, and now this guy has shown up as well; the video is his.
There are plenty of tutorials, of course, and the tool is useful for concept work.
Forwarded from CGIT_Vines (CGIT_Vines)
A very interesting new release from Facebook AI Research.
A mocap system driven by video analysis: you can get the body alone, the hands alone, or body and hands together.
If anyone works it out and tests it, please share the results; I'd love to take a look.
https://github.com/facebookresearch/frankmocap
Forwarded from Karim Iskakov - канал (LFP bot)
Deep generative model writes programs to build novel 3D chairs from cuboids. A well-learned latent space supports interpolation between programs. No doubt the next step is to train an embedder on IKEA assembly instructions.
🌐 rkjones4.github.io/shapeAssembly
📝 arxiv.org/abs/2009.08026
📉 @loss_function_porn
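To make the "programs that build chairs from cuboids" idea concrete, here is a toy interpreter sketch in plain Python. It is not the actual ShapeAssembly DSL; the statement format, the `attach` semantics, and the single handled face are all my own simplifications for illustration.

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    name: str
    w: float
    h: float
    d: float
    origin: tuple = (0.0, 0.0, 0.0)

def attach(base: Cuboid, part: Cuboid, face: str) -> Cuboid:
    """Place `part` against one face of `base` (only 'top' handled here)."""
    x, y, z = base.origin
    if face == "top":
        part.origin = (x, y + base.h, z)
    return part

def run_program(statements):
    """Interpret a list of ('cuboid', ...) / ('attach', ...) statements."""
    parts = {}
    for stmt in statements:
        if stmt[0] == "cuboid":
            _, name, w, h, d = stmt
            parts[name] = Cuboid(name, w, h, d)
        elif stmt[0] == "attach":
            _, part, base, face = stmt
            attach(parts[base], parts[part], face)
    return parts

# A toy "chair": a seat with a back attached on top of it.
chair = run_program([
    ("cuboid", "seat", 2.0, 0.5, 2.0),
    ("cuboid", "back", 2.0, 2.0, 0.3),
    ("attach", "back", "seat", "top"),
])
print(chair["back"].origin)  # the back sits on top of the 0.5-high seat
```

The appeal of the program representation is exactly this: a chair is a short, editable list of statements, so interpolating in a latent space of programs yields structurally valid shapes rather than melted point clouds.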
Forwarded from Sgryob
SimCLR
Abstract
SimCLR is a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework.
We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
* Available in pytorch-lightning-bolts with a couple of lines of code
* Paper
* Colab implementation
* Explanation and implementation details in a series of short videos
#self_supervised
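The contrastive objective the abstract describes (NT-Xent) is easy to sketch. Below is a minimal NumPy re-implementation of my own, not the paper's code: each image yields two augmented views, the matching view is the positive, and every other embedding in the batch is a negative.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of paired view embeddings (2N rows total)."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # positives: row i pairs with row i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logits = sim - sim.max(axis=1, keepdims=True)      # numerically stable softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))
loss_rand = nt_xent(z1, rng.normal(size=(4, 8)))             # unrelated "views"
loss_pos = nt_xent(z1, z1 + 0.01 * rng.normal(size=(4, 8)))  # well-aligned views
print(loss_pos < loss_rand)  # aligned views should give the lower loss
```

This also makes the paper's batch-size finding intuitive: the softmax denominator is built from in-batch negatives, so larger batches mean a harder, more informative contrastive task.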
Forwarded from Graph Machine Learning
NYC Deep Learning Course: Structured Prediction
Final lecture of the course on deep learning led by Yann LeCun. It covers structured prediction, energy-based factor graphs, and graph transformer networks.
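A core inference routine behind energy-based factor graphs over chains is min-sum (Viterbi-style) dynamic programming: pick the label sequence minimizing the total energy of unary and pairwise factors. Here is a small self-contained sketch; the energy tables are invented toy numbers, not anything from the lecture.

```python
import numpy as np

def min_energy_path(unary, pairwise):
    """Min-sum dynamic programming over a chain of energy factors.

    unary:    (T, K) per-step label energies
    pairwise: (K, K) transition energies shared across steps
    Returns the label sequence minimizing the total energy.
    """
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cost of reaching label k at step t from each label j at step t-1
        trans = cost[:, None] + pairwise          # shape (K, K)
        back[t] = trans.argmin(axis=0)
        cost = trans.min(axis=0) + unary[t]
    # backtrack the minimizing path from the best final label
    path = [int(cost.argmin())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two labels; the pairwise term penalizes switching, so the smooth
# all-zeros path beats the locally tempting 0-1-0 path.
unary = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
pairwise = np.array([[0.0, 2.0], [2.0, 0.0]])
print(min_energy_path(unary, pairwise))  # → [0, 0, 0]
```

The point of the structured view is visible in the toy numbers: step 1 locally prefers label 1, but the transition energies make the globally best sequence stay at label 0 throughout.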
Week 14 – Lecture: Structured prediction with energy based models
Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Yann LeCun
Week 14: http://bit.ly/pDL-en-14
0:00:00 – Week 14 – Lecture
LECTURE Part A: http://bit.ly/pDL-en-14-1
https://poloclub.github.io/ganlab/
A lovely interactive demo explaining GANs, built on TensorFlow.js.
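The losses that a demo like this visualizes are just the two standard GAN objectives computed from discriminator scores. A minimal NumPy sketch, assuming the common non-saturating formulation (not taken from GAN Lab's source):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_real_logits, d_fake_logits):
    """Discriminator and (non-saturating) generator losses from raw logits."""
    d_loss = -(np.log(sigmoid(d_real_logits)).mean()
               + np.log(1.0 - sigmoid(d_fake_logits)).mean())
    # the generator wants fakes to be classified as real
    g_loss = -np.log(sigmoid(d_fake_logits)).mean()
    return d_loss, g_loss

# A confident discriminator: high logits on real samples, low on fakes.
d_loss, g_loss = gan_losses(np.array([3.0, 2.5]), np.array([-3.0, -2.5]))
# A fooled discriminator: fakes score as highly as reals.
d_loss2, g_loss2 = gan_losses(np.array([3.0, 2.5]), np.array([3.0, 2.5]))
print(d_loss < d_loss2, g_loss > g_loss2)
```

This is the tug-of-war the demo animates: as the generator's samples fool the discriminator, the generator loss falls and the discriminator loss rises.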
https://github.com/HavenFeng/photometric_optimization
It looks like they jointly optimize the mesh and texture from an image using differentiable rendering (PyTorch3D).
It takes ~20 s on a single GTX 1080 Ti GPU to optimize for one image. The texture comes from a PCA model that is over 1 GB in size. It doesn't handle Asian faces well, because it optimizes only from landmarks projected onto the 3D mesh.
But the results sometimes look quite promising.
#differentiable_rendering #face #face_reconstruction #morphable_model
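The core loop in this style of work is gradient descent on a photometric (per-pixel) loss through a differentiable renderer. As a toy stand-in for the real mesh/texture setup, here is a sketch with a fake one-line "renderer" (affine shading with unknown gain and offset) and hand-derived gradients; everything here is illustrative, not the repo's pipeline.

```python
import numpy as np

def render(albedo, gain, offset):
    """Toy differentiable 'renderer': per-pixel affine shading."""
    return gain * albedo + offset

# Target image produced with unknown gain/offset that we want to recover.
rng = np.random.default_rng(1)
albedo = rng.uniform(0.0, 1.0, size=(16, 16))
target = render(albedo, 0.7, 0.2)

gain, offset, lr = 1.0, 0.0, 0.5
for _ in range(200):
    pred = render(albedo, gain, offset)
    residual = pred - target                       # photometric error
    # analytic gradients of the mean squared pixel loss
    gain -= lr * 2.0 * (residual * albedo).mean()
    offset -= lr * 2.0 * residual.mean()

print(round(gain, 3), round(offset, 3))  # ≈ 0.7 and 0.2
```

The Asian-faces caveat above also makes sense in these terms: if the loss is driven mainly by sparse projected landmarks rather than by dense pixels, identities far from the PCA model's training distribution get little signal to correct them.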
Unity launches Open Projects: creating a game in the open, where anybody can contribute (programmers, artists, crocodiles).
https://youtu.be/jrQimv_7gcc
⭐️ Github repository for Project #1
⭐️ Roadmap for Project #1
Unity Open Projects (Launch Trailer)
Welcome to Open Projects, an open-source initiative where we will expose the game development journey as it unfolds, and welcome you to the team as an active participant.
⭐ Join the #UnityOpenProjects forum! https://on.unity.com/35UzPEp
BYOL: Bootstrap Your Own Latent [DeepMind, Imperial College]
* abs
* official code
https://github.com/lucidrains/byol-pytorch
Very good paper overview:
https://youtu.be/YPfUiOMYOEE
#self_supervised
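The distinctive piece of BYOL is that the target network is never trained by gradients; it is an exponential moving average (EMA) of the online network's weights. A minimal NumPy sketch of just that update, with made-up numbers standing in for real weight tensors:

```python
import numpy as np

def ema_update(target, online, tau=0.99):
    """BYOL-style target update: slow exponential moving average of online weights."""
    return tau * target + (1.0 - tau) * online

online = np.zeros(4)
target = np.zeros(4)
for step in range(1000):
    online = online + 0.01            # stand-in for gradient steps on the online net
    target = ema_update(target, online)

print(target[0] < online[0])  # the target lags behind, moving smoothly
```

The slow-moving target is what keeps the method from collapsing without negative pairs: the online network chases a stable, smoothed copy of itself rather than its own latest weights.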
Forwarded from Data Science by ODS.ai 🦜
NVidia released a technology to change face alignment on video
Nvidia has unveiled AI face-alignment that means you're always looking at the camera during video calls. Its new Maxine platform uses GANs to reconstruct the unseen parts of your head — just like a deepfake.
Link: https://www.theverge.com/2020/10/5/21502003/nvidia-ai-videoconferencing-maxine-platform-face-gaze-alignment-gans-compression-resolution
#NVidia #deepfake #GAN