Forwarded from TechSparks
And more about art ;)
Researchers at Stanford have been quite successful at teaching an algorithm to predict what people feel (not just which objects they see) when contemplating works of art. An alternative framing, sensational but misleading, would be that they taught a machine to understand emotions. It doesn't actually understand anything, but the comments it produces about paintings are remarkably human and quite emotional ;)
To train the algorithm, thousands of people had to be recruited to annotate the training dataset: they produced 440,000 emotionally charged responses to 8,100 paintings. Labeling training material for algorithms is, by the way, another profession of the future, and both the material and the labeling guidelines themselves keep getting more complex.
https://hai.stanford.edu/news/artists-intent-ai-recognizes-emotions-visual-art
hai.stanford.edu
Artist’s Intent: AI Recognizes Emotions in Visual Art | Stanford HAI
A team of AI researchers has trained its algorithms to see the emotional intent behind great works of art, possibly leading to computers that see much deeper than current technologies.
PyTorch Profiler
* blogpost
Along with PyTorch 1.8.1 release, we are excited to announce PyTorch Profiler – the new and improved performance debugging profiler for PyTorch. Developed as part of a collaboration between Microsoft and Facebook, the PyTorch Profiler is an open-source tool that enables accurate and efficient performance analysis and troubleshooting for large-scale deep learning models.
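For context, a minimal usage sketch (the toy model and the "forward_pass" label are illustrative, not taken from the blog post): profile a few CPU-side forward passes and print the operator summary.

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

# Toy workload; on a GPU you would add ProfilerActivity.CUDA to `activities`.
model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())
inputs = torch.randn(32, 512)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("forward_pass"):  # custom label that shows up in the trace
        for _ in range(10):
            model(inputs)

# Summary table of the most expensive operators; the collected trace can also
# be exported for TensorBoard or chrome://tracing.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```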
#tools
Mip-NeRF: A Multiscale Representation
for Anti-Aliasing Neural Radiance Fields
[Google, UC Berkeley]
* youtube
* project page
* paper
The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip-NeRF" (à la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale. By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being 7% faster than NeRF and half the size. Compared to NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset that we present. mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.
YouTube
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
project page: https://jonbarron.info/mipnerf/
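The central ingredient is the integrated positional encoding: each conical frustum is approximated by a Gaussian, and the network is fed the expected sine/cosine features of that Gaussian instead of the encoding of a single point. A rough NumPy sketch of that idea (not the authors' code; shapes and the frequency count are arbitrary):

```python
import numpy as np

def integrated_positional_encoding(mean, var, num_freqs=16):
    """Expected sin/cos features of a Gaussian over positions.

    mean, var: arrays of shape (..., 3) holding the per-axis mean and variance
    of the Gaussian that approximates one conical frustum along a pixel's cone.
    """
    mean, var = np.asarray(mean), np.asarray(var)
    freqs = 2.0 ** np.arange(num_freqs)                   # 1, 2, 4, ...
    scaled_mean = mean[..., None, :] * freqs[:, None]     # (..., L, 3)
    scaled_var = var[..., None, :] * freqs[:, None] ** 2  # (..., L, 3)
    # For x ~ N(mu, sigma^2): E[sin(x)] = sin(mu) * exp(-sigma^2 / 2); same for cos.
    damping = np.exp(-0.5 * scaled_var)
    features = np.concatenate(
        [np.sin(scaled_mean) * damping, np.cos(scaled_mean) * damping], axis=-1
    )
    return features.reshape(*mean.shape[:-1], -1)
```

High frequencies get damped more strongly for wide frustums, which is what gives the anti-aliasing behaviour across scales.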
Forwarded from Machine Learning World (StatsBot)
Towards Ultra-Resolution Neural Style Transfer via Thumbnail Instance Normalization
📦 Github: https://github.com/czczup/URST
📄 Paper: https://arxiv.org/abs/2103.11784
Forwarded from Just links
Our new(ish) paper, Contrast To Divide. TL;DR: self-supervised pre-training is a very strong instrument when working with noisy labels. Like+retweet are more than welcome
https://twitter.com/evgeniyzhe/status/1375486632728616969
Twitter
Evgenii Zheltonozhskii
Our new paper, C2D (https://t.co/AhrDVP8C0I, https://t.co/UcdS4nYTqH) shows how self-supervised pre-training boosts learning with noisy labels, achieves SOTA performance and provides in-depth analysis. Authors @evgeniyzhe @ChaimBaskin Avi Mendelson, Alex…
Forwarded from Karim Iskakov - канал (Vladimir Ivashkin)
Realtime NeRF inference in browser! Try it out:
🌐 https://phog.github.io/snerg/#demos
📉 @loss_function_porn
Repurposing GANs for One-shot Semantic Part Segmentation
* abs
* project page
* unofficial code
- another similar work from NVIDIA
Do GANs learn meaningful structural parts of objects during their attempt to reproduce those objects? In this work, we test this hypothesis and propose a simple and effective approach based on GANs for semantic part segmentation that requires as few as one label example along with an unlabeled dataset. Our key idea is to leverage a trained GAN to extract pixel-wise representation from the input image and use it as feature vectors for a segmentation network. Our experiments demonstrate that GANs representation is "readily discriminative" and produces surprisingly good results that are comparable to those from supervised baselines trained with significantly more labels. We believe this novel repurposing of GANs underlies a new class of unsupervised representation learning that is applicable to many other tasks.
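A rough sketch of the key idea under stated assumptions (a pre-trained generator whose intermediate blocks can be hooked, and a latent code already recovered for the input image, e.g. via GAN inversion); none of the names below come from the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def extract_pixelwise_features(generator, latent, layer_ids, out_size=256):
    """Stack upsampled activations from several generator layers into one
    per-pixel feature map (hypothetical `generator.blocks` attribute)."""
    feats = []
    hooks = [generator.blocks[i].register_forward_hook(
                 lambda _m, _inp, out: feats.append(out))
             for i in layer_ids]
    with torch.no_grad():
        generator(latent)
    for h in hooks:
        h.remove()
    # Resize every captured activation map to a common resolution and
    # concatenate along channels, giving one feature vector per pixel.
    ups = [F.interpolate(f, size=out_size, mode="bilinear", align_corners=False)
           for f in feats]
    return torch.cat(ups, dim=1)  # (1, C_total, out_size, out_size)

# A tiny per-pixel classifier trained on the single labelled example
# (960 is a placeholder that must match C_total; 10 is the number of parts).
segmentation_head = nn.Conv2d(in_channels=960, out_channels=10, kernel_size=1)
```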
#gan #semantic_seg
Forwarded from эйай ньюз
Someone on Reddit posted a special Colab notebook that gives you a Tesla P100 GPU and 25 GB of RAM every time.
You can copy it and use it. Hurry up before they shut it down.
Link: https://colab.research.google.com/drive/1D6krVG0PPJR2Je9g5eN_2h6JP73_NUXz
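Once the runtime starts, a quick sanity check (a generic PyTorch snippet, nothing specific to that notebook) shows which accelerator you actually got:

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(torch.cuda.get_device_name(0))             # e.g. "Tesla P100-PCIE-16GB"
    print(f"{props.total_memory / 1e9:.1f} GB of GPU memory")
else:
    print("No GPU allocated to this runtime")
```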
High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation
[Facebook Reality Labs]
* youtube
* pdf
* abs
3D video avatars can empower virtual communications by providing compression, privacy, entertainment, and a sense of presence in AR/VR. Best 3D photo-realistic AR/VR avatars driven by video, that can minimize uncanny effects, rely on person-specific models. However, existing person-specific photo-realistic 3D models are not robust to lighting, hence their results typically miss subtle facial behaviors and cause artifacts in the avatar. This is a major drawback for the scalability of these models in communication systems (e.g., Messenger, Skype, FaceTime) and AR/VR. This paper addresses previous limitations by learning a deep learning lighting model, that in combination with a high-quality 3D face tracking algorithm, provides a method for subtle and robust facial motion transfer from a regular video to a 3D photo-realistic avatar. Extensive experimental validation and comparisons to other state-of-the-art methods demonstrate the effectiveness of the proposed framework in real-world scenarios with variability in pose, expression, and illumination.
#face_tracking
YouTube
(CVPR 2021) High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation
3D video avatars can empower virtual communications by providing compression, privacy, entertainment, and a sense of presence in AR/VR. Best 3D photo-realistic AR/VR avatars driven by video, that can minimize uncanny effects, rely on person-specific models…
Forwarded from Being Danil Krivoruchko
Matt Winckelmann really is a remarkable person.
On top of working at two of the best motion design studios on the planet (and a great introductory UE course on Entagma), he also has personal projects. Today I learned about a recent one, and it is genuinely beautiful. Matt launched a bot called Rachael (hello, Blade Runner) that spent a year generating 3D dailies, which to my eye are not that different from 99% of other dailies, and posting them to an Instagram account set up just for it.
The result: the bot has one and a half times more followers than Matt himself. To me it's a perfect artistic commentary on the "attention economy", "influencers" and the rest of IG culture.
https://www.mwinckelmann.com/rachaelisnotreal
Conway's Game of Life in Blender nodes. See the thread for the node setup.
https://twitter.com/GelamiSalami/status/1375139627351220234
#b3d
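The node graph implements the standard update rule; for reference, a minimal NumPy/SciPy sketch of that rule (just the cellular automaton itself, not the Blender node setup):

```python
import numpy as np
from scipy.signal import convolve2d

def life_step(grid):
    """One Game of Life update; `grid` is a 2-D array of 0/1 cells."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    # Count live neighbours with a wrap-around (toroidal) boundary.
    neighbours = convolve2d(grid, kernel, mode="same", boundary="wrap")
    # Live cells survive with 2 or 3 neighbours; dead cells are born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)
```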
Twitter
GelamiSalami
Found out a way to have image buffers in the node editor Here's Conway's Game of Life with nodes #blender #b3d #eevee https://t.co/hddRPT90MP
Unreal and Unity released proper good bois today
* Unreal MetaPet
* Unity pettable object (poor Unity...)
Twitter
Unreal Engine
Say hello to MetaPets 🐾 the next-generation of fur-ever friends from Unreal Engine. Creating #MetaPets is as easy as a walk in the park using the new 🐶 MetaPet Creator. #UE4 Unleash your potential and see the pawsibilities 👇
Forwarded from Data Science by ODS.ai 🦜
EfficientNetV2: Smaller Models and Faster Training
A new paper from Google Brain with a new SOTA architecture called EfficientNetV2. The authors develop a new family of CNN models that are optimized both for accuracy and training speed. The main improvements are:
- an improved training-aware neural architecture search with new building blocks and ideas to jointly optimize training speed and parameter efficiency;
- a new approach to progressive learning that adjusts regularization along with the image size;
As a result, the new models reach SOTA results while training up to 11x faster and being up to 6.8x smaller.
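A minimal sketch of what such a progressive schedule could look like (all numbers are made-up placeholders, not the paper's settings): image size and regularization strength are ramped up together across training stages.

```python
def progressive_stage_settings(stage, num_stages=4,
                               min_size=128, max_size=300,
                               min_dropout=0.1, max_dropout=0.3,
                               min_randaug=5, max_randaug=15):
    """Linearly interpolate image size and regularization per training stage."""
    t = stage / (num_stages - 1)
    image_size = int(min_size + t * (max_size - min_size))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    randaug_magnitude = min_randaug + t * (max_randaug - min_randaug)
    return image_size, dropout, randaug_magnitude

for stage in range(4):
    # Early stages: small images with weak regularization; later stages: the opposite.
    print(progressive_stage_settings(stage))
```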
Paper: https://arxiv.org/abs/2104.00298
Code will be available here:
https://github.com/google/automl/efficientnetv2
A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-effnetv2
#cv #sota #nas #deeplearning