Deep Gravity – Telegram
Deep Gravity
393 subscribers
60 photos
35 videos
17 files
495 links
AI

Contact:
DeepL.Gravity@gmail.com
Prediction of Physical Load Level by #MachineLearning Analysis of Heart Activity after Exercises

Paper

🔭 @DeepGravity
Triple #GenerativeAdversarialNetworks

Generative adversarial networks (GANs) have shown promise in image generation and classification given limited supervision. Existing methods extend the unsupervised GAN framework to incorporate supervision heuristically. Specifically, a single discriminator plays two incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. The formulation intrinsically causes two problems: (1) the generator and the discriminator (i.e., the classifier) may not converge to the data distribution at the same time; and (2) the generator cannot control the semantics of the generated samples. In this paper, we present the triple generative adversarial network (Triple-GAN), which consists of three players—a generator, a classifier, and a discriminator. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible objective functions to ensure that the distributions characterized by the generator and the classifier converge to the data distribution. We evaluate Triple-GAN in two challenging settings, namely, semi-supervised learning and the extreme low data regime. In both settings, Triple-GAN can achieve state-of-the-art classification results among deep generative models and generate meaningful samples in a specific class simultaneously.
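The three-player game described above can be sketched numerically. The function below is a hedged illustration of the discriminator's objective over the three kinds of (image, label) pairs — real pairs, (real image, classifier label) pairs, and (generated image, real label) pairs — with `alpha` mixing the two fake sources as in the three-player formulation; it is not the authors' implementation.

```python
import numpy as np

def triple_gan_d_loss(d_real, d_class_pairs, d_gen_pairs, alpha=0.5):
    """Discriminator objective in the Triple-GAN game (hedged sketch).

    d_real        : D's scores on real (image, label) pairs
    d_class_pairs : D's scores on (real image, classifier label) pairs
    d_gen_pairs   : D's scores on (generated image, real label) pairs
    alpha         : mixing weight between the two fake sources
    """
    eps = 1e-8  # numerical safety for log
    real_term  = -np.mean(np.log(d_real + eps))
    class_term = -alpha * np.mean(np.log(1.0 - d_class_pairs + eps))
    gen_term   = -(1.0 - alpha) * np.mean(np.log(1.0 - d_gen_pairs + eps))
    return real_term + class_term + gen_term
```

A discriminator that scores real pairs near 1 and both fake sources near 0 minimizes this loss; the generator and classifier are trained against it with compatible objectives so both converge to the data distribution.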

Paper

🔭 @DeepGravity
Hyperparameter Tuning On #Google Cloud Platform With #Scikit_Learn

Google Cloud Platform’s AI Platform (formerly ML Engine) offers a hyperparameter tuning service for your models. Why should you take the extra time and effort to learn how to use it instead of just running the code you already have on a virtual machine? Are the benefits worth it?
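The pattern the service expects can be sketched as follows: each tuning trial invokes your training script with hyperparameters passed as command-line flags, and the script reports its metric back to the service. The `train` function below is a stand-in for a real scikit-learn fit, and the metric-reporting call (left as a comment) uses the `cloudml-hypertune` package from Google's documentation.

```python
import argparse
import random

def train(learning_rate, n_estimators):
    # Stand-in for a real scikit-learn training run;
    # returns a validation score in [0, 1].
    random.seed(int(learning_rate * 1e6) + n_estimators)
    return random.random()

def main(argv=None):
    # AI Platform passes each trial's hyperparameters as command-line flags.
    parser = argparse.ArgumentParser()
    parser.add_argument('--learning_rate', type=float, default=0.1)
    parser.add_argument('--n_estimators', type=int, default=100)
    args = parser.parse_args(argv)

    score = train(args.learning_rate, args.n_estimators)

    # On AI Platform you would report the metric to the tuning service:
    #   import hypertune
    #   hpt = hypertune.HyperTune()
    #   hpt.report_hyperparameter_tuning_metric(
    #       hyperparameter_metric_tag='val_score', metric_value=score)
    return score
```

The tuning service then chooses the next trial's flags (e.g. via Bayesian optimization) based on the reported metrics, which is the main advantage over a hand-rolled grid search on a VM.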

Link

🔭 @DeepGravity
secml: A #Python Library for Secure and Explainable #MachineLearning

We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including not only test-time evasion attacks to generate adversarial examples against deep neural networks, but also training-time poisoning attacks against support vector machines and many other algorithms. These attacks enable evaluating the security of learning algorithms and of the corresponding defenses under both white-box and black-box threat models. To this end, secml provides built-in functions to compute security evaluation curves, showing how quickly classification performance decreases against increasing adversarial perturbations of the input data. secml also includes explainability methods to help understand why adversarial attacks succeed against a given model, by visualizing the most influential features and training prototypes contributing to each decision. It is distributed under the Apache License 2.0, and hosted at https://gitlab.com/secml/secml.
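The security evaluation curves secml computes can be illustrated with a hand-rolled worst-case attack on a linear classifier: accuracy is measured as the allowed perturbation budget grows. This is a generic sketch of the idea, not secml's own API.

```python
import numpy as np

def security_evaluation_curve(w, b, X, y, epsilons):
    """Accuracy of a linear classifier sign(w.x + b) under worst-case
    L-inf perturbations of size eps -- the kind of curve secml plots
    (hedged sketch, not the secml API).  Labels y are in {-1, +1}."""
    accs = []
    for eps in epsilons:
        # The worst-case L-inf attack on a linear model shifts every
        # feature by eps against the true class: x' = x - eps*y*sign(w).
        X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
        preds = np.sign(X_adv @ w + b)
        accs.append(np.mean(preds == y))
    return np.array(accs)
```

The curve is non-increasing in the perturbation budget; how quickly it collapses is the security measure the library reports for both white-box and black-box threat models.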

Paper

🔭 @DeepGravity
A Gentle Introduction to #ProbabilityDensityEstimation

After completing this tutorial, you will know:

* Histogram plots provide a fast and reliable way to visualize the probability density of a data sample.
* Parametric probability density estimation involves selecting a common distribution and estimating the parameters for the density function from a data sample.
* Nonparametric probability density estimation involves using a technique to fit a model to the arbitrary distribution of the data, like kernel density estimation.
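The three approaches above can be sketched with NumPy on a synthetic sample. The Gaussian kernel density estimate is written out by hand here for self-containment; `scipy.stats.gaussian_kde` provides the same idea with automatic bandwidth selection.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=50.0, scale=5.0, size=1000)

# 1) Histogram: density=True normalizes bin heights to a density estimate.
hist, edges = np.histogram(sample, bins=20, density=True)

# 2) Parametric: assume a common distribution (normal) and estimate
#    its parameters from the sample.
mu, sigma = sample.mean(), sample.std()
def normal_pdf(x):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# 3) Nonparametric: a Gaussian kernel density estimate -- average a
#    small Gaussian bump centered on every data point.
def kde(x, data, bandwidth=1.0):
    z = (x[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(30, 70, 200)
```

Plotting `normal_pdf(grid)` and `kde(grid, sample)` over the histogram shows the parametric and nonparametric estimates side by side; all three integrate to one.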

Link

🔭 @DeepGravity
#Evolution of #NeuralNetworks


Today, #AI is living its golden age, and neural networks are a major contributor to it. Neural networks change our lives without us even realizing it: they lie behind image, face, and speech recognition, language translation, and even forecasting. However, they did not reach their present form in a day. Let's travel to the past and trace their earlier forms.

Link

🔭 @DeepGravity
#MultiAgent Manipulation via Locomotion using Hierarchical Sim2Real

Link

🔭 @DeepGravity
Model-Based #ReinforcementLearning: Theory and Practice

Article

#Berkeley

🔭 @DeepGravity
Positive-Unlabeled #RewardLearning

Learning #Reward functions from data is a promising path towards achieving scalable #ReinforcementLearning ( #RL ) for #robotics. However, a major challenge in training agents from learned reward models is that the agent can learn to exploit errors in the reward model to achieve high reward behaviors that do not correspond to the intended task. These reward delusions can lead to unintended and even dangerous behaviors. On the other hand, adversarial imitation learning frameworks tend to suffer the opposite problem, where the discriminator learns to trivially distinguish agent and expert behavior, resulting in reward models that produce low reward signal regardless of the input state. In this paper, we connect these two classes of reward learning methods to positive-unlabeled (PU) learning, and we show that by applying a large-scale PU learning algorithm to the reward learning problem, we can address both the reward under- and over-estimation problems simultaneously. Our approach drastically improves both GAIL and supervised reward learning, without any additional assumptions.
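A representative large-scale PU algorithm is the non-negative risk estimator of Kiryo et al. (2017). The sketch below shows its structure applied to discriminator scores over expert (positive) and agent (unlabeled) samples, as an illustration of the estimator rather than the paper's implementation; the class `prior` is an assumption supplied by the user.

```python
import numpy as np

def hinge(s, t):
    # Margin loss for label t in {-1, +1} on score s.
    return np.maximum(0.0, 1.0 - t * s)

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimate (Kiryo et al., 2017) -- hedged sketch.

    scores_pos : discriminator scores on positive (expert) samples
    scores_unl : scores on unlabeled (agent) samples
    prior      : assumed fraction of positives among the unlabeled data
    """
    risk_pos = hinge(scores_pos, +1).mean()      # positives labeled positive
    risk_pos_neg = hinge(scores_pos, -1).mean()  # positives labeled negative
    risk_unl_neg = hinge(scores_unl, -1).mean()  # unlabeled labeled negative
    # The negative-class risk is estimated from unlabeled data, corrected
    # by the positive prior and clipped at zero so it cannot go negative
    # (the clipping is what prevents the over-estimation failure mode).
    return prior * risk_pos + max(0.0, risk_unl_neg - prior * risk_pos_neg)
```

Treating agent experience as unlabeled rather than strictly negative is what lets the discriminator avoid trivially separating agent and expert behavior, addressing both the under- and over-estimation problems described above.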

Paper

🔭 @DeepGravity
Yoshua #Bengio, Revered Architect of #AI, Has Some Ideas About What to Build Next

Article

🔭 @DeepGravity
AI Debate 2019: Yoshua Bengio vs Gary Marcus

This is an #AI Debate between Yoshua #Bengio and #GaryMarcus from Dec 23, 2019, organized by Montreal.AI and Mila - Institut Québécois d'Intelligence Artificielle.
Facebook video: https://www.facebook.com/MontrealAI/v...
Reading material: http://www.montreal.ai/aidebate.pdf

YouTube

🔭 @DeepGravity