Regularizing Generative Adversarial Networks under Limited Data
[Google research, UC Merced, Waymo, Yonsei University]
* pdf, abs
* code
The success of the GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate that the proposed regularization scheme 1) improves the generalization performance and stabilizes the learning dynamics of GAN models under limited training data, and 2) complements the recent data augmentation methods. These properties facilitate training GAN models to achieve state-of-the-art performance when only limited training data of the ImageNet benchmark is available.
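Below is a rough PyTorch-style sketch of what such an anchor-based regularizer can look like: exponential moving averages of the discriminator's outputs serve as anchors, and the discriminator is penalized for drifting away from them. The exact form and all names here (lecam_reg, the EMA decay, the weight) are my assumptions from the abstract, not the authors' code.
```python
import torch

class EMA:
    """Scalar exponential moving average, used as an anchor."""
    def __init__(self, decay=0.99):
        self.decay, self.value = decay, 0.0

    def update(self, x):
        self.value = self.decay * self.value + (1.0 - self.decay) * float(x)
        return self.value

# anchors tracked over training: EMA of D's mean output on real and fake batches
ema_real, ema_fake = EMA(), EMA()

def lecam_reg(d_real, d_fake, weight=0.01):
    """Assumed regularizer: keep D(real) near the fake anchor and D(fake) near
    the real anchor, limiting how far apart D can push the two distributions."""
    alpha_r = ema_real.update(d_real.mean().item())
    alpha_f = ema_fake.update(d_fake.mean().item())
    reg = (torch.relu(d_real - alpha_f) ** 2).mean() + (torch.relu(alpha_r - d_fake) ** 2).mean()
    return weight * reg

# usage in the discriminator step, with d_real / d_fake being raw D outputs on the batch:
# d_loss = adversarial_loss(d_real, d_fake) + lecam_reg(d_real, d_fake)   # adversarial_loss is hypothetical
```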
- related work
#gan #limited_data
Series of lectures from Samsung AI, in Russian
- Overview of neural network applications in computer graphics, Gleb Sterkin
- Neural rendering: generating new images without reconstructing the scene geometry, Gleb Sterkin
- The whole lecture series is here
#courses
Recently I discovered Toronto Geometry Colloquium channel.
They also have twitter and website.
- episode from the poster
"It is a weekly hour-long webseries showcasing geometry processing research, including a 10-min opener speaker and a 50-min headliner in the style of live comedy!"I have to say, this is an amazing series. I will share some featured talks I found there in the following posts.
COALESCE: Component Assembly by Learning to Synthesize Connections
[Simon Fraser University, Adobe research]
* pdf, abs
* youtube talk
Kitbashing with deep learning.
We introduce COALESCE, the first data-driven framework for component-based shape assembly which employs deep learning to synthesize part connections. To handle geometric and topological mismatches between parts, we remove the mismatched portions via erosion, and rely on a joint synthesis step, which is learned from data, to fill the gap and arrive at a natural part joint. Given a set of input parts extracted from different objects, COALESCE automatically aligns them and synthesizes plausible joints to connect the parts into a coherent 3D object represented by a mesh. The joint synthesis network, designed to focus on joint regions, reconstructs the surface between the parts by predicting an implicit shape representation that agrees with existing parts, while generating a smooth and topologically meaningful connection.
#3d #implicit_geometry
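A tiny sketch of how the erosion step could look on point clouds, assuming parts are already aligned and the joint region is known. This is just one plausible reading of the abstract; the paper's actual pipeline operates on meshes and learns the joint synthesis that fills the resulting gap.
```python
import numpy as np

def erode_part(points, joint_center, radius=0.08):
    """Drop the portion of a part's point cloud that lies within `radius`
    of the (aligned) joint region; the learned joint synthesis network
    would later fill this gap with an implicit surface."""
    dist = np.linalg.norm(points - joint_center, axis=1)
    return points[dist > radius]

# toy usage with two hypothetical aligned parts meeting near a joint point
leg   = np.random.rand(2048, 3) * [0.2, 1.0, 0.2]
seat  = np.random.rand(2048, 3) * [1.0, 0.2, 1.0] + [0.0, 1.0, 0.0]
joint = np.array([0.1, 1.0, 0.1])
leg_eroded, seat_eroded = erode_part(leg, joint), erode_part(seat, joint)
```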
The Cycles-X rendering engine is available in an experimental branch and works significantly faster than Cycles on both CPU and GPU.
There is much to be done. We expect it will take at least 6 months until this work is part of an official Blender release.
- Official blog post
#blender
New Facebook Oculus avatars
Appearing first in three games for Quest.
- source
Oculus is beginning to roll out redesigned avatars that are more expressive and customizable than those that launched in 2016.
By the end of 2021, Oculus will have opened its new avatar SDK to all developers, and these VR personas will be supported in Facebook Horizon, the company’s own expansive social VR playground. Though, games are just one application for these refreshed avatars. Oculus says the avatar you create will eventually appear in some form within the Facebook app, Messenger, Instagram, and more, but only if you choose to.
#avatars #VR
Softwrap - Dynamics For Retopology.
- available on blendermarket
Softwrap works by running a custom softbody simulation while snapping in a way similar to the shrinkwrap modifier.
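The description suggests a per-frame snapping step roughly like the sketch below: after each softbody step, pull every vertex part of the way toward its nearest point on the target surface, in the spirit of the shrinkwrap modifier. This is purely my guess at the mechanism (including snap_strength and the KD-tree query), not the add-on's actual code.
```python
import numpy as np
from scipy.spatial import cKDTree

def shrinkwrap_snap(verts, target_points, snap_strength=0.5):
    """Pull simulated vertices part of the way toward their nearest point
    on the target surface (approximated here by a dense point sample)."""
    _, idx = cKDTree(target_points).query(verts)
    return verts + snap_strength * (target_points[idx] - verts)

# toy usage: one frame would be a softbody step (not shown) followed by a snap
verts  = np.random.rand(500, 3)     # simulated retopo-mesh vertices
target = np.random.rand(5000, 3)    # samples of the sculpt being wrapped onto
verts  = shrinkwrap_snap(verts, target)
```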
#simulation #physics #tools #blender
Forwarded from Denis Sexy IT 🤖
A logical continuation of the neural network that became popular for how nicely it animates photos with faces: moving black-and-white photos, portraits, memes, all of that is the output of an algorithm called First Order Model.
Although the algorithm works well, animating anything other than faces with it is quite hard, even though it does support that: the glitches and artifacts produce a rather unpleasant effect.
And now, thanks to a group of researchers, memes will soon be animatable at full height: the new algorithm can already figure out which body parts should move in a photo, and how, based on the driving video. I put together a cut from the video, it speaks for itself (the animated figurine came out especially creepy).
Project page:
https://snap-research.github.io/articulated-animation/
(the project code will be released a bit later)
Sketch-based Normal Map Generation with Geometric Sampling
* pdf, abs
A normal map is an important and efficient way to represent complex 3D models. A designer may benefit from the auto-generation of high-quality and accurate normal maps from freehand sketches in 3D content creation. This paper proposes a deep generative model for generating normal maps from users' sketches with geometric sampling. Our generative model is based on a Conditional Generative Adversarial Network with curvature-sensitive point sampling of the conditional masks. This sampling process can help eliminate the ambiguity of the generation results given the network input. In addition, we adopt a U-Net structured discriminator to help the generator train better. It is verified that the proposed framework can generate more accurate normal maps.
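My guess at what curvature-sensitive sampling of the conditional mask could mean in practice, sketched with numpy/scipy: approximate curvature by the Laplacian magnitude of the blurred sketch and sample condition points with probability proportional to it, so high-curvature strokes get denser coverage. Everything here is assumed for illustration, not taken from the paper.
```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def curvature_sample(mask, n_points=256):
    """Sample pixel coordinates on a binary sketch, biased toward
    high-curvature strokes (curvature approximated by |Laplacian|)."""
    curv = np.abs(laplace(gaussian_filter(mask.astype(float), sigma=1.0)))
    weights = (curv + 1e-6) * mask            # only pixels on the sketch are eligible
    probs = weights.ravel() / weights.sum()
    idx = np.random.choice(mask.size, size=n_points, replace=False, p=probs)
    ys, xs = np.unravel_index(idx, mask.shape)
    return np.stack([ys, xs], axis=1)

# toy usage: a circular "sketch" gets denser samples where curvature is strongest
yy, xx = np.mgrid[:128, :128]
mask = (np.abs(np.hypot(yy - 64, xx - 64) - 40) < 1.5).astype(np.uint8)
points = curvature_sample(mask)
```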
#gan #sketch
Few-shot Image Generation via Cross-domain Correspondence [Adobe Research, UC Davis, UC Berkeley] * project page * pdf * code Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting.…
code for this paper is available ☺️
https://github.com/utkarshojha/few-shot-gan-adaptation
NVIDIA Omniverse Audio2Face is now available in open beta. Unfortunately, it works only on Windows right now, and it requires an RTX GPU. For some reason I think that this kind of product would be much more consumer-friendly as a web app, like Mixamo or the MetaHuman Creator.
- download
- tutorial
Audio2Face simplifies animation of a 3D character to match any voice-over track, whether you’re animating characters for a game, film, real-time digital assistants, or just for fun. You can use the app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out, it’s up to you.
#speech2animation
dualFace: Two-Stage Drawing Guidance for Freehand Portrait Sketching (CVMJ)
[JAIST, University of Tokyo]
* youtube
* project page
* code
* abs, pdf
In this paper, we propose dualFace, a portrait drawing interface to assist users with different levels of drawing skill in completing recognizable and authentic face sketches. dualFace consists of two-stage drawing assistance that provides global and local visual guidance: global guidance, which helps users draw the contour lines of the portrait (i.e., its geometric structure), and local guidance, which helps users draw the details of facial parts (conforming to the user-drawn contour lines), inspired by traditional artist workflows in portrait drawing. In the global guidance stage, the user draws several contour lines, and dualFace then retrieves several relevant images from an internal database and displays the suggested face contour lines over the background of the canvas. In the local guidance stage, we synthesize detailed portrait images with a deep generative model from the user-drawn contour lines, but use the synthesized results only as detailed drawing guidance.
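The global-guidance stage is essentially sketch-based retrieval. A toy version of that matching, assuming the database portraits are stored as binary contour images and using a chamfer-style distance (my choice for illustration, the paper may match differently):
```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def retrieve_contours(user_sketch, database, k=5):
    """Rank binary contour images by a chamfer-style distance to the user's strokes."""
    dist_to_user = distance_transform_edt(1 - user_sketch)   # distance to nearest user stroke
    scores = []
    for contour in database:
        ys, xs = np.nonzero(contour)
        scores.append(dist_to_user[ys, xs].mean() if len(ys) else np.inf)
    return np.argsort(scores)[:k]                            # indices of the best matches

# toy usage with random "stroke" images standing in for contour drawings
rng = np.random.default_rng(0)
user = (rng.random((256, 256)) > 0.995).astype(np.uint8)
db = [(rng.random((256, 256)) > 0.995).astype(np.uint8) for _ in range(20)]
best = retrieve_contours(user, db)
```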
#sketch #retrieval #face
Sceneformer: Indoor Scene Generation with Transformers
[Technical University of Munich]
* project page
* github
* pdf, abs
We address the task of indoor scene generation by generating a sequence of objects, along with their locations and orientations conditioned on a room layout. Large-scale indoor scene datasets allow us to extract patterns from user-designed indoor scenes, and generate new scenes based on these patterns. Existing methods rely on the 2D or 3D appearance of these scenes in addition to object positions, and make assumptions about the possible relations between objects. In contrast, we do not use any appearance information, and implicitly learn object relations using the self-attention mechanism of transformers. Our method is also flexible, as it can be conditioned not only on the room layout but also on text descriptions of the room, using only the cross-attention mechanism of transformers.
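A stripped-down sketch of the autoregressive idea: flatten each object into a few discrete tokens (category, quantized position, orientation) and let a causal transformer predict the next token. The vocabularies, token layout, and single shared model here are assumptions for illustration; the actual Sceneformer conditions on the room layout and is organized differently.
```python
import torch
import torch.nn as nn

# assumed toy tokenization: each object becomes 4 tokens [category, x_bin, y_bin, angle_bin]
VOCAB, MAX_LEN = 256, 64                      # shared token space, up to 16 objects

class SceneTransformer(nn.Module):
    """Causal transformer over the flattened object-token sequence."""
    def __init__(self, d_model=128, nhead=4, nlayers=4):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(MAX_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=256, batch_first=True)
        self.body = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, seq):                                   # seq: (B, T) token ids
        T = seq.size(1)
        x = self.tok(seq) + self.pos(torch.arange(T, device=seq.device))
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=seq.device), 1)
        return self.head(self.body(x, mask=causal))           # (B, T, VOCAB) next-token logits

@torch.no_grad()
def sample_scene(model, start_token=0, n_objects=8):
    """Greedy autoregressive sampling of n_objects * 4 tokens."""
    model.eval()
    seq = torch.tensor([[start_token]])
    for _ in range(4 * n_objects - 1):
        next_tok = model(seq)[:, -1].argmax(-1, keepdim=True)
        seq = torch.cat([seq, next_tok], dim=1)
    return seq.view(-1, 4)                    # rows of [category, x_bin, y_bin, angle_bin]

tokens = sample_scene(SceneTransformer())     # untrained here, so the output is arbitrary
```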
#indoor
Explaining in Style: Training a GAN to explain a classifier in StyleSpace
[Google research]
* project page
* pdf
Image classification models can depend on multiple different semantic attributes of the image. An explanation of the decision of the classifier needs to both discover and visualize these properties. Here we present StylEx, a method for doing this, by training a generative model to specifically explain multiple attributes that underlie classifier decisions. We apply StylEx to multiple domains, including animals, leaves, faces and retinal images. For these, we show how an image can be modified in different ways to change its classifier output. Our results show that the method finds attributes that align well with semantic ones, generates meaningful image-specific explanations, and is human-interpretable, as measured in user studies.
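The core idea, stripped down to a toy: rank style coordinates by how much a small perturbation changes the classifier's output. The linear "generator" and logistic "classifier" below are stand-ins so the snippet runs on its own; the real StylEx trains a StyleGAN with the classifier in the loop and works in its StyleSpace.
```python
import numpy as np

rng = np.random.default_rng(0)
STYLE_DIM, IMG_DIM = 64, 256

# stand-ins: a random linear "generator" and a logistic-regression "classifier"
G = rng.normal(size=(IMG_DIM, STYLE_DIM))
w = rng.normal(size=IMG_DIM)
generate = lambda s: G @ s
classify = lambda img: 1.0 / (1.0 + np.exp(-(w @ img) / IMG_DIM))

def rank_style_coords(style, delta=1.0):
    """Score each style coordinate by how much nudging it moves the classifier output."""
    base = classify(generate(style))
    shifts = []
    for i in range(STYLE_DIM):
        s = style.copy()
        s[i] += delta
        shifts.append(abs(classify(generate(s)) - base))
    return np.argsort(shifts)[::-1]           # most classifier-relevant coordinates first

style = rng.normal(size=STYLE_DIM)
top_coords = rank_style_coords(style)[:5]     # the "attributes" a StylEx-like method would visualize
```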
#gan
KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control
[University of Oxford, UC Berkeley, Stanford University, Google Research]
* project page
* demo
* pdf, abs
* code (plan to release on April 30)
We present KeypointDeformer, a novel unsupervised method for shape control through automatically discovered 3D keypoints. Our approach produces intuitive and semantically consistent control of shape deformations. Moreover, our discovered 3D keypoints are consistent across object category instances despite large shape variations. Since our method is unsupervised, it can be readily deployed to new object categories without requiring expensive annotations for 3D keypoints and deformations. Our method also works on real-world 3D scans of shoes from Google scanned objects.
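A toy illustration of keypoint-driven shape control: drag one discovered keypoint and let nearby surface points follow with a Gaussian falloff. The paper's actual deformation model is learned (cage-based), so treat this only as the control idea, with every name and parameter assumed.
```python
import numpy as np

def deform_by_keypoints(points, keypoints, moved_keypoints, sigma=0.15):
    """Move each surface point by a Gaussian-weighted sum of keypoint offsets,
    so dragging one keypoint only deforms its neighborhood."""
    d2 = ((points[:, None, :] - keypoints[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))                                 # (N, K) falloff weights
    return points + w @ (moved_keypoints - keypoints)

# toy usage: drag one of 8 keypoints on a random point cloud
points = np.random.rand(4096, 3)
kp = np.random.rand(8, 3)
kp_moved = kp.copy()
kp_moved[0] += np.array([0.3, 0.0, 0.0])
deformed = deform_by_keypoints(points, kp, kp_moved)
```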
#3d #unsupervised