SimCLR
Abstract
SimCLR is a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework.
We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
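Since finding (2) hinges on the contrastive objective, here is a minimal sketch of SimCLR's NT-Xent loss, assuming z1 and z2 are the projection-head outputs for two augmented views of the same N images (the names and the temperature default are mine, not the paper's code):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent over 2N views: each view's positive is its counterpart view."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))            # exclude self-pairs
    # the positive for row i is row i + n, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In the full pipeline, z1 and z2 come from the encoder plus projection head applied to two random augmentations of each image in the batch; the linear evaluation is done on the encoder output, not the projection.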
* Available in pytorch-lightning-bolts with a couple of lines of code (see the sketch after this list)
* Paper
* Colab implementation
* Explanation and implementation details in a series of short videos
#self_supervised
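For reference, the "couple of lines" looks roughly like this. A hedged sketch, assuming the pl_bolts API at the time of posting; the checkpoint path is a placeholder, not a real URL:

```python
# Rough sketch: load a pretrained SimCLR from pytorch-lightning-bolts.
# `load_from_checkpoint` is the standard LightningModule classmethod;
# the `.encoder` attribute is an assumption based on the bolts docs.
from pl_bolts.models.self_supervised import SimCLR

simclr = SimCLR.load_from_checkpoint("path/to/simclr_imagenet.ckpt", strict=False)
backbone = simclr.encoder  # ResNet-50 backbone trained with SimCLR
backbone.eval()            # use as a frozen feature extractor
```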
GitHub
GitHub - Lightning-Universe/lightning-bolts: Toolbox of models, callbacks, and datasets for AI/ML researchers.
Forwarded from Graph Machine Learning
NYU Deep Learning Course: Structured Prediction
Final lecture of the course on deep learning led by Yann LeCun. It covers structured prediction, energy-based factor graphs, and graph transformer networks.
YouTube
Week 14 – Lecture: Structured prediction with energy based models
Course website: http://bit.ly/pDL-home
Playlist: http://bit.ly/pDL-YouTube
Speaker: Yann LeCun
Week 14: http://bit.ly/pDL-en-14
0:00:00 – Week 14 – Lecture
LECTURE Part A: http://bit.ly/pDL-en-14-1
In this section, we discussed structured prediction.…
https://poloclub.github.io/ganlab/
A cute interactive demo explaining GANs, built on tf.js.
https://github.com/HavenFeng/photometric_optimization
It seems they jointly optimize the mesh and texture from an image with differentiable rendering (PyTorch3D); rough sketch after this post.
It takes ~20 s on a single GTX 1080 Ti GPU to optimize for one image. The texture comes from a PCA model that is more than 1 GB in size. It doesn't handle Asian faces well, because it optimizes only based on landmarks projected onto the 3D mesh.
But the results sometimes look quite promising.
#differentiable_rendering #face #face_reconstruction #morphable_model
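Rough sketch of the analysis-by-synthesis loop described above, built from stock PyTorch3D pieces. This is a toy stand-in (sphere mesh, per-vertex colors, random target image) rather than the repo's actual FLAME-plus-PCA-texture pipeline:

```python
import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    look_at_view_transform, FoVPerspectiveCameras, RasterizationSettings,
    MeshRenderer, MeshRasterizer, SoftPhongShader, PointLights, TexturesVertex,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-ins: a sphere instead of a face mesh, noise instead of a photo.
src = ico_sphere(level=3, device=device)
verts, faces = src.verts_packed(), src.faces_packed()
target = torch.rand(1, 256, 256, 3, device=device)

R, T = look_at_view_transform(dist=2.5, elev=0.0, azim=0.0)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=256),
    ),
    shader=SoftPhongShader(cameras=cameras, lights=PointLights(device=device), device=device),
)

# Jointly optimized geometry (vertex offsets) and "texture" (vertex colors).
offsets = torch.zeros_like(verts, requires_grad=True)
colors = torch.full((1, verts.shape[0], 3), 0.5, device=device, requires_grad=True)
opt = torch.optim.Adam([offsets, colors], lr=1e-2)

for _ in range(200):
    mesh = Meshes(
        verts=[verts + offsets], faces=[faces],
        textures=TexturesVertex(verts_features=colors),
    )
    image = renderer(mesh)[..., :3]            # differentiable RGB render
    loss = (image - target).abs().mean()       # photometric L1 loss
    opt.zero_grad()
    loss.backward()                            # gradients flow through the renderer
    opt.step()
```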
Unity launches Open Projects: games developed in the open, where anybody can contribute (programmers, artists, crocodiles).
https://youtu.be/jrQimv_7gcc
⭐️ Github repository for Project #1
⭐️ Roadmap for Project #1
YouTube
Unity Open Projects (Launch Trailer)
Welcome to Open Projects, an open-source initiative where we will expose the game development journey as it unfolds, and welcome you to the team as an active participant.
⭐ Join the #UnityOpenProjects forum! https://on.unity.com/35UzPEp
⭐ Github repository…
BYOL: Bootstrap Your Own Latent [DeepMind, Imperial College]
* abs
* official code
https://github.com/lucidrains/byol-pytorch
Very good paper overview:
https://youtu.be/YPfUiOMYOEE
#self_supervised
Forwarded from Data Science by ODS.ai 🦜
NVidia released a technology to change face alignment in video calls
Nvidia has unveiled AI face-alignment that means you're always looking at the camera during video calls. Its new Maxine platform uses GANs to reconstruct the unseen parts of your head — just like a deepfake.
Link: https://www.theverge.com/2020/10/5/21502003/nvidia-ai-videoconferencing-maxine-platform-face-gaze-alignment-gans-compression-resolution
#NVidia #deepfake #GAN
Found via CG_Vines.
Original link on Reddit.
https://ebsynth.com/
If I manage to play with the app on macOS, I'll let you know.
We've seen the idea itself before, but the app is new.
Tutorial: https://youtu.be/0RLtHuu5jV4
I tried making a short video with a single keyframe. An important caveat: the contours in the painted-over frame and in the source video have to match. You can't draw on horns that weren't in the video, or turn a shawarma into a knife.
AAAA!!!
This guy makes simply excellent paper reviews.
It's a lot of fun.
https://youtu.be/TrdevFK_am4
YouTube
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)
#ai #research #transformers
Transformers are Ruining Convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks…
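The core trick behind the title, sketched minimally: the image is cut into 16x16 patches, each linearly projected into a token, and the resulting sequence goes through a standard Transformer encoder. Shapes below assume ViT-Base on a 224x224 input:

```python
import torch
import torch.nn as nn

patch, dim = 16, 768                                            # ViT-Base patch size / width
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify + linear projection

img = torch.randn(1, 3, 224, 224)
tokens = to_tokens(img).flatten(2).transpose(1, 2)  # (1, 196, 768): 14x14 patch tokens

cls = torch.zeros(1, 1, dim)                 # learnable [class] token in the real model
pos = torch.zeros(1, 197, dim)               # learnable position embeddings
x = torch.cat([cls, tokens], dim=1) + pos    # sequence fed to a standard Transformer encoder
```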