Deep Gravity

AI

Contact: DeepL.Gravity@gmail.com

120 #AI Predictions For 2020

Link

🔭 @DeepGravity
Tune #Hyperparameters for Classification #MachineLearning Algorithms

The seven classification algorithms we will look at are as follows (a grid-search sketch follows the list):

Logistic Regression
Ridge Classifier
K-Nearest Neighbors (KNN)
Support Vector Machine (SVM)
Bagged Decision Trees (Bagging)
Random Forest
Stochastic Gradient Boosting
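
As a rough illustration of the tuning workflow the article describes, here is a minimal scikit-learn grid search over two of the listed algorithms; the parameter grids and synthetic dataset are our own illustrative assumptions, not the article's exact values.

```python
# Hedged sketch: grid-searching hyperparameters for two of the listed
# classifiers with scikit-learn. Grids are illustrative, not the article's.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)

searches = {
    "logistic_regression": (
        LogisticRegression(max_iter=1000),
        {"C": [0.01, 0.1, 1.0, 10.0], "solver": ["lbfgs", "liblinear"]},
    ),
    "knn": (
        KNeighborsClassifier(),
        {"n_neighbors": list(range(1, 21, 2)), "weights": ["uniform", "distance"]},
    ),
}

for name, (model, grid) in searches.items():
    search = GridSearchCV(model, grid, scoring="accuracy", cv=cv, n_jobs=-1)
    result = search.fit(X, y)
    print(f"{name}: best={result.best_score_:.3f} using {result.best_params_}")
```

The same pattern extends to the other five algorithms: swap in the estimator and its grid, and keep the cross-validation setup fixed so scores stay comparable.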

Article

🔭 @DeepGravity
#SelfDrivingCar Steering Angle Prediction Based on Image Recognition

Self-driving vehicle research has expanded dramatically over the last few years. Udacity has released a dataset containing, among other data, a set of images with the steering angle captured during driving. The Udacity challenge aimed to predict the steering angle based only on the provided images. We explore two different models that produce high-quality steering-angle predictions from images, using deep learning techniques including Transfer Learning, 3D CNN, #LSTM and ResNet. If the Udacity challenge were still ongoing, both of our models would have placed in the top ten of all entries.
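
To make the task concrete, here is a toy PyTorch sketch of image-to-steering-angle regression; the architecture, input size, and loss are illustrative assumptions and do not reproduce the paper's Transfer Learning, 3D CNN, LSTM, or ResNet models.

```python
# Hedged sketch of steering-angle regression from camera frames (PyTorch).
# Architecture and input size are assumptions, not the paper's exact models.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 1)
        )

    def forward(self, x):                    # x: (batch, 3, H, W) frames
        return self.head(self.features(x))   # (batch, 1) steering angle

model = SteeringCNN()
frames = torch.randn(8, 3, 66, 200)          # dummy batch of camera images
angles = model(frames)
loss = nn.functional.mse_loss(angles, torch.zeros(8, 1))  # vs. ground truth
loss.backward()
```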

Paper

🔭 @DeepGravity
#Speech2Face: Learning the Face Behind a Voice

How much can we infer about a person's looks from the way they speak? In this paper, we study the task of reconstructing a facial image of a person from a short audio recording of that person speaking. We design and train a deep neural network to perform this task using millions of natural videos of people speaking from the Internet/YouTube. During training, our model learns audiovisual voice-face correlations that allow it to produce images capturing various physical attributes of the speakers, such as age, gender and ethnicity. This is done in a self-supervised manner, by utilizing the natural co-occurrence of faces and speech in Internet videos, without the need to model attributes explicitly. Our reconstructions, obtained directly from audio, reveal the correlations between faces and voices. We evaluate and numerically quantify how, and in what manner, our Speech2Face reconstructions from audio resemble the true face images of the speakers.
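
A toy sketch of the self-supervised idea described above, under our own assumptions about module shapes: a voice encoder is trained so that its embedding of an audio clip matches the embedding a frozen, pretrained face encoder assigns to the co-occurring video frame, so face/speech co-occurrence itself provides the supervision.

```python
# Toy sketch of the self-supervised setup: the voice encoder is trained to
# match the embedding a frozen, pretrained face encoder assigns to the face
# that co-occurs with the audio. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

EMB = 256

face_encoder = nn.Sequential(      # stand-in for a pretrained face model
    nn.Flatten(), nn.Linear(3 * 64 * 64, EMB)
)
for p in face_encoder.parameters():
    p.requires_grad = False        # frozen: supplies the training target

voice_encoder = nn.Sequential(     # maps a spectrogram into the same space
    nn.Flatten(), nn.Linear(80 * 100, 512), nn.ReLU(), nn.Linear(512, EMB)
)

opt = torch.optim.Adam(voice_encoder.parameters(), lr=1e-4)

# One step on a dummy (face frame, spectrogram) pair from the same video.
faces = torch.randn(16, 3, 64, 64)
spectrograms = torch.randn(16, 80, 100)

target = face_encoder(faces)              # no labels needed:
pred = voice_encoder(spectrograms)        # co-occurrence is the supervision
loss = nn.functional.mse_loss(pred, target)
opt.zero_grad()
loss.backward()
opt.step()
```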

Paper

🔭 @DeepGravity
Learning human objectives by evaluating hypothetical behaviours

TL;DR: We present a method for training #ReinforcementLearning agents from human feedback in the presence of unknown unsafe states.
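
For context, a compact sketch of the reward-modelling core that such human-feedback methods build on (in the style of Christiano et al. 2017, not DeepMind's exact method here): a reward network is fit to pairwise human preferences with a Bradley-Terry / logistic loss.

```python
# Hedged sketch of learning a reward model from human feedback: fit a
# reward net to pairwise preferences with a logistic (Bradley-Terry) loss.
# Not the paper's exact method; dimensions and data are dummies.
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

# Dummy data: states from two behaviours, plus a human label saying
# which one was preferred (1.0 = first preferred, 0.0 = second).
s_a = torch.randn(32, 8)
s_b = torch.randn(32, 8)
preferred_a = torch.randint(0, 2, (32, 1)).float()

r_a, r_b = reward_net(s_a), reward_net(s_b)
# P(a preferred over b) = sigmoid(r_a - r_b); cross-entropy to the label.
loss = nn.functional.binary_cross_entropy_with_logits(r_a - r_b, preferred_a)
opt.zero_grad()
loss.backward()
opt.step()
```

The paper's contribution, querying humans about hypothetical behaviours to locate unsafe states before the agent visits them, sits on top of this basic loop.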

#DeepMind

Link

🔭 @DeepGravity
At #OpenAI, we’ve used the multiplayer video game #Dota 2 as a research platform for general-purpose AI systems. Our Dota 2 #AI, called OpenAI Five, learned by playing over 10,000 years of games against itself. It demonstrated the ability to achieve expert-level performance, learn human–AI cooperation, and operate at internet scale.

Link

🔭 @DeepGravity
#Gartner Hype Cycle for #AI, 2019

🔭 @DeepGravity
#ReinforcementLearning for ArtiSynth

This repository holds the plugin for the #biomechanical simulation environment ArtiSynth. The purpose of this work is to bridge the biomechanical and reinforcement learning research domains.
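
Schematically, such a bridge usually exposes the simulator behind a Gym-style interface; the sketch below is purely illustrative and stubs out the physics backend rather than reproducing the plugin's actual ArtiSynth interface.

```python
# Illustrative Gym-style wrapper showing how an RL library can be bridged
# to a biomechanical simulator. `SimulatorStub` stands in for the real
# ArtiSynth interface, which this sketch does not reproduce.
import numpy as np

class SimulatorStub:
    """Placeholder physics backend (the real plugin talks to ArtiSynth)."""
    def reset(self):
        self.state = np.zeros(4)
        return self.state
    def step(self, muscle_excitations):
        self.state = self.state + 0.1 * np.tanh(muscle_excitations)
        return self.state

class BiomechEnv:
    """Gym-like API: reset() -> obs, step(action) -> (obs, reward, done)."""
    def __init__(self, target):
        self.sim = SimulatorStub()
        self.target = np.asarray(target, dtype=float)
        self.t = 0
    def reset(self):
        self.t = 0
        return self.sim.reset()
    def step(self, action):
        obs = self.sim.step(np.asarray(action, dtype=float))
        reward = -np.linalg.norm(obs - self.target)  # closer to target = better
        self.t += 1
        return obs, reward, self.t >= 100

env = BiomechEnv(target=[1.0, 0.0, 0.0, 0.0])
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(np.random.uniform(-1, 1, size=4))
```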

Link

🔭 @DeepGravity
#StyleGANv2 Explained!

This video explores changes to the StyleGAN architecture to remove certain artifacts, increase training speed, and achieve a much smoother latent space interpolation! This paper also presents an interesting Deepfake detection algorithm enabled by their improvements to latent space interpolation.
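
As a toy illustration of what latent-space interpolation means here (with a stub generator, not StyleGAN2 itself): walk between two latent codes and decode each intermediate point; the smoother the latent space, the more gradually neighbouring frames change.

```python
# Toy latent-space interpolation. The generator is a stub, not StyleGAN2;
# it only stands in for "latent code -> image".
import numpy as np

def generator_stub(w):
    """Stand-in for a synthesis network: latent vector -> dummy 'image'."""
    return np.outer(np.sin(w), np.cos(w))

w0, w1 = np.random.randn(512), np.random.randn(512)
for t in np.linspace(0.0, 1.0, num=8):
    w = (1 - t) * w0 + t * w1        # linear interpolation in latent space
    frame = generator_stub(w)
    # A smooth latent space means neighbouring frames change gradually;
    # StyleGAN2's path-length regularization encourages exactly that.
    print(f"t={t:.2f}, frame mean={frame.mean():+.4f}")
```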

YouTube

🔭 @DeepGravity
Self-regularizing restricted #Boltzmann machines

Focusing on the grand-canonical extension of the ordinary restricted Boltzmann machine, we suggest an energy-based model for feature extraction that uses a layer of hidden units of varying size. By an appropriate choice of the chemical potential, and given a sufficiently large number of hidden resources, the generative model is able to efficiently deduce the optimal number of hidden units required to learn the target data with exceedingly small generalization error. The formal simplicity of the grand-canonical ensemble, combined with a rapidly converging ansatz in mean-field theory, enables us to recycle well-established numerical algorithms during training, like contrastive divergence, with only minor changes. As a proof of principle, and to demonstrate the novel features of grand-canonical Boltzmann machines, we train our generative models on data from the Ising theory and #MNIST.
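
A rough sketch of how one might fold a chemical potential into standard CD-1 training of a binary RBM: the potential mu is subtracted from the hidden biases, so activating a hidden unit carries a cost and the model is nudged to switch off units it does not need. This is our interpretation for illustration, not the paper's exact grand-canonical formulation.

```python
# Hedged sketch: CD-1 for a binary RBM with a chemical potential mu
# subtracted from the hidden biases -- a rough nod to the grand-canonical
# idea, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h, mu, lr = 784, 128, 0.5, 0.01

W = 0.01 * rng.standard_normal((n_v, n_h))
b_v, b_h = np.zeros(n_v), np.zeros(n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    # Positive phase: hidden activations given data; mu penalizes each
    # active hidden unit, letting the model switch off unneeded ones.
    p_h0 = sigmoid(v0 @ W + b_h - mu)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to the visibles and up again.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h - mu)
    # Contrastive-divergence gradient estimates.
    return (v0.T @ p_h0 - p_v1.T @ p_h1,
            (v0 - p_v1).mean(axis=0),
            (p_h0 - p_h1).mean(axis=0))

v = (rng.random((64, n_v)) < 0.5).astype(float)   # dummy binary batch
dW, db_v, db_h = cd1_step(v)
W += lr * dW / len(v)
b_v += lr * db_v
b_h += lr * db_h
```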

Paper

🔭 @DeepGravity