Deep Gravity
393 subscribers
AI

Contact:
DeepL.Gravity@gmail.com
Progressive #VAE Training on Highly Sparse and Imbalanced Data

In this paper, we present a novel approach for training a #VariationalAutoencoder (VAE) on a highly imbalanced data set. The proposed training of a high-resolution VAE model begins with the training of a low-resolution core model, which can be successfully trained on an imbalanced data set. In subsequent training steps, new convolutional, upsampling, deconvolutional, and downsampling layers are iteratively attached to the model. In each iteration, the additional layers are trained on top of the intermediate pretrained model, the result of the previous training iterations. Thus, the resolution of the model is progressively increased up to the required resolution level. In this paper, progressive VAE training is exploited to learn a latent representation of imbalanced, highly sparse data sets and, consequently, to generate routes in a constrained 2D space. Routing problems (e.g., the vehicle routing problem, the travelling salesman problem, and arc routing) are of special significance in many modern applications (e.g., route planning, network maintenance, and the development of high-performance nanoelectronic systems) and are typically associated with sparse, imbalanced data. In this paper, the critical problem of routing billions of components in nanoelectronic devices is considered. The proposed approach exhibits a significant training speedup as compared with state-of-the-art existing VAE training methods, while generating the expected image outputs from unseen input data. Furthermore, the final progressive VAE models produce much more precise output representations than #GenerativeAdversarialNetwork ( #GAN ) models trained with comparable training time. The proposed method is expected to be applicable to a wide range of applications, including but not limited to image inpainting, sentence interpolation, and semi-supervised learning.
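The abstract describes the mechanism at a high level; the snippet below is a minimal PyTorch sketch of the same idea, assuming an 8x8 core resolution, a single growth step to 16x16, one-channel images, a standard Gaussian VAE loss, and one optimisation step per stage. None of these concrete choices come from the paper.

```python
# Minimal sketch of progressive VAE training: first train a low-resolution
# core, then attach new down/upsampling layers around it and keep training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoreVAE(nn.Module):
    """Low-resolution core VAE (8x8 inputs), trained first."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1),  # 8 -> 4
                                 nn.ReLU(), nn.Flatten())
        self.fc_mu = nn.Linear(32 * 4 * 4, latent_dim)
        self.fc_logvar = nn.Linear(32 * 4 * 4, latent_dim)
        self.dec_fc = nn.Linear(latent_dim, 32 * 4 * 4)
        self.dec = nn.Sequential(nn.Unflatten(1, (32, 4, 4)),
                                 nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))  # 4 -> 8

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        return self.dec(self.dec_fc(z)), mu, logvar

class ProgressiveVAE(nn.Module):
    """Wraps the pretrained core with new layers, doubling resolution (8x8 -> 16x16)."""
    def __init__(self, core):
        super().__init__()
        self.core = core                                              # intermediate pretrained model
        self.down = nn.Conv2d(1, 1, 3, stride=2, padding=1)           # 16 -> 8
        self.up = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)    # 8 -> 16

    def forward(self, x):
        recon_low, mu, logvar = self.core(self.down(x))
        return self.up(recon_low), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Stage 1: train the core on low-resolution (8x8) versions of the sparse data.
core = CoreVAE()
opt = torch.optim.Adam(core.parameters(), lr=1e-3)
x_low = torch.rand(64, 1, 8, 8)                 # stand-in for a real low-res batch
opt.zero_grad()
recon, mu, logvar = core(x_low)
vae_loss(recon, x_low, mu, logvar).backward()
opt.step()

# Stage 2: attach new layers, reuse the pretrained core, continue at 16x16.
model = ProgressiveVAE(core)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_high = torch.rand(64, 1, 16, 16)
opt.zero_grad()
recon, mu, logvar = model(x_high)
vae_loss(recon, x_high, mu, logvar).backward()
opt.step()
```

In a real run, each stage would loop over many batches before the next growth step, and the growth step would be repeated until the target resolution is reached.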

Paper

🔭 @DeepGravity
Image Data Augmentation for #TensorFlow 2, #Keras and #PyTorch with Albumentations in #Python

TL;DR Learn how to create new examples for your dataset using image augmentation techniques. Load a scanned document image and apply various augmentations. Create an augmented dataset for Object Detection.
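A minimal sketch of the kind of pipeline the article walks through, using the Albumentations API; the file path, bounding boxes, and transform choices are illustrative assumptions.

```python
# Augment a scanned document image together with its object-detection boxes.
import albumentations as A
import cv2

# Load a scanned document image (path is hypothetical).
image = cv2.cvtColor(cv2.imread("scanned_document.jpg"), cv2.COLOR_BGR2RGB)

# Bounding boxes as (x_min, y_min, x_max, y_max), plus a label per box.
bboxes = [(50, 40, 400, 120), (60, 200, 380, 260)]
labels = ["header", "paragraph"]

transform = A.Compose(
    [
        A.Rotate(limit=5, p=0.5),              # slight skew, as in real scans
        A.RandomBrightnessContrast(p=0.5),     # lighting variation
        A.GaussNoise(p=0.3),                   # scanner noise
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

# Each call produces a new augmented example with consistently transformed
# boxes; repeat to grow the detection dataset.
augmented = [transform(image=image, bboxes=bboxes, labels=labels) for _ in range(10)]
```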

Article

🔭 @DeepGravity
A Gentle Introduction to Imbalanced #Classification

After completing this tutorial, you will know:

* Imbalanced classification is the problem of classification when there is an unequal distribution of classes in the training dataset.
* The imbalance in the class distribution may vary, but a severe imbalance is more challenging to model and may require specialized techniques.
* Many real-world classification problems have an imbalanced class distribution, such as fraud detection, spam detection, and churn prediction.
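As a quick, concrete illustration of the first two points, the snippet below builds a synthetic binary dataset with a severe (roughly 99:1) class distribution using scikit-learn; the sample count and split are arbitrary choices for demonstration.

```python
# Create a severely imbalanced binary classification dataset.
from collections import Counter
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=10_000,
    n_features=10,
    weights=[0.99, 0.01],   # ~1% minority class
    random_state=0,
)
print(Counter(y))           # e.g. roughly Counter({0: 9900, 1: 100})
```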

Link

🔭 @DeepGravity
Deep Gravity pinned «Our reinforcement learning architect's designs have just been published in the #NeurIPS2019 AI Art Gallery: https://lnkd.in/dFZ37BN It seems to draw like a baby now, but it is growing and will hopefully be a skillful #RL artist very soon. #reinforcementlearning…»
5 #Financial Services Tech Trends to Watch in 2020

1. The Role of #ArtificialIntelligence in Finance Will Expand
2. Financial Services Firms Will Grow Their Use of #Data Analytics
3. #Blockchain Will Be a Key Security Solution
4. More #Bank Branches Will Undergo Digital Transformations
5. Automation Will Take Over More Financial Services

Link

🔭 @DeepGravity
Inside #DeepMind 's epic mission to solve science's trickiest problem
DeepMind's AI has beaten chess grandmasters and #Go champions. But founder and CEO Demis Hassabis now has his sights set on bigger, real-world problems that could change lives

Link

🔭 @DeepGravity
Explaining #ReinforcementLearning: Active vs Passive

We examine the required elements to solve an RL problem, compare passive and active reinforcement learning, and review common active and passive RL techniques.
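For readers who prefer code to prose, here is a compact toy sketch of that passive/active distinction on a 5-state chain; the environment, step sizes, and episode counts are illustrative assumptions rather than anything from the article.

```python
# Passive RL: evaluate a fixed policy with TD(0).
# Active RL: choose actions (epsilon-greedy) and improve them with Q-learning.
import random

N_STATES, GOAL = 5, 4          # states 0..4, reward 1 on reaching state 4

def step(s, a):                # a: 0 = left, 1 = right
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, float(s2 == GOAL), s2 == GOAL

# Passive: the policy is fixed (always move right); the agent only evaluates it.
V, alpha, gamma = [0.0] * N_STATES, 0.1, 0.9
for _ in range(200):
    s, done = 0, False
    while not done:
        s2, r, done = step(s, 1)                       # fixed policy
        V[s] += alpha * (r + gamma * V[s2] - V[s])     # TD(0) update
        s = s2

# Active: the agent picks its own actions and improves its behaviour.
Q, eps = [[0.0, 0.0] for _ in range(N_STATES)], 0.1
for _ in range(200):
    s, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])   # Q-learning update
        s = s2

print("V under the fixed policy:", [round(v, 2) for v in V])
print("Greedy policy learned by Q-learning:", [int(Q[s][1] >= Q[s][0]) for s in range(N_STATES)])
```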

Article

🔭 @DeepGravity
Prediction of Physical Load Level by #MachineLearning Analysis of Heart Activity after Exercises

Paper

🔭 @DeepGravity
Triple #GenerativeAdversarialNetworks

Generative adversarial networks (GANs) have shown promise in image generation and classification given limited supervision. Existing methods extend the unsupervised GAN framework to incorporate supervision heuristically. Specifically, a single discriminator plays two incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. The formulation intrinsically causes two problems: (1) the generator and the discriminator (i.e., the classifier) may not converge to the data distribution at the same time; and (2) the generator cannot control the semantics of the generated samples. In this paper, we present the triple generative adversarial network (Triple-GAN), which consists of three players—a generator, a classifier, and a discriminator. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible objective functions to ensure that the distributions characterized by the generator and the classifier converge to the data distribution. We evaluate Triple-GAN in two challenging settings, namely, semi-supervised learning and the extreme low data regime. In both settings, Triple-GAN can achieve state-of-the-art classification results among deep generative models and generate meaningful samples in a specific class simultaneously.
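To make the three-player structure concrete, below is a stripped-down, single-step PyTorch sketch. The MLP architectures, flattened 784-dimensional images, loss weights, and the exact form of the classifier's adversarial term are illustrative assumptions; consult the paper for the precise objectives.

```python
# Triple-GAN-style setup: generator G, classifier C, discriminator D, where D
# only judges (image, label) pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG, Z, CLS = 784, 64, 10          # flattened image size, noise size, classes

def mlp(n_in, n_out, act=None):
    layers = [nn.Linear(n_in, 256), nn.ReLU(), nn.Linear(256, n_out)]
    return nn.Sequential(*(layers + ([act] if act else [])))

G = mlp(Z + CLS, IMG, nn.Tanh())   # generator: images conditioned on labels
C = mlp(IMG, CLS)                  # classifier: label logits for an image
D = mlp(IMG + CLS, 1)              # discriminator: judges (image, label) pairs

opt_g, opt_c, opt_d = [torch.optim.Adam(m.parameters(), lr=2e-4) for m in (G, C, D)]
bce = nn.BCEWithLogitsLoss()
onehot = lambda y: F.one_hot(y, CLS).float()

# Dummy data: a small labeled batch and a larger unlabeled batch.
x_lab, y_lab = torch.rand(16, IMG), torch.randint(0, CLS, (16,))
x_unl = torch.rand(64, IMG)

# --- Discriminator step: only true (image, label) pairs count as "real". ---
y_gen = torch.randint(0, CLS, (64,))
x_gen = G(torch.cat([torch.randn(64, Z), onehot(y_gen)], dim=1))
y_cls = C(x_unl).argmax(dim=1)                          # labels proposed by C
d_real = D(torch.cat([x_lab, onehot(y_lab)], dim=1))
d_gen = D(torch.cat([x_gen.detach(), onehot(y_gen)], dim=1))
d_cls = D(torch.cat([x_unl, onehot(y_cls)], dim=1))
loss_d = (bce(d_real, torch.ones_like(d_real))
          + 0.5 * bce(d_gen, torch.zeros_like(d_gen))
          + 0.5 * bce(d_cls, torch.zeros_like(d_cls)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- Generator step: make generated (image, label) pairs look real to D. ---
loss_g = bce(D(torch.cat([x_gen, onehot(y_gen)], dim=1)), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# --- Classifier step: fit the labeled data and make its own (image, label)
#     pairs look real; the adversarial term is taken in expectation over
#     p_C(y | x) so that gradients reach the classifier. ---
probs = F.softmax(C(x_unl), dim=1)                      # (64, CLS)
d_per_class = torch.stack(
    [D(torch.cat([x_unl, onehot(torch.full((64,), k, dtype=torch.long))], dim=1)).squeeze(1)
     for k in range(CLS)], dim=1)                       # (64, CLS) logits
adv_c = -(probs * F.logsigmoid(d_per_class)).sum(dim=1).mean()
loss_c = F.cross_entropy(C(x_lab), y_lab) + adv_c
opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```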

Paper

🔭 @DeepGravity
Hyperparameter Tuning On #Google Cloud Platform With #Scikit_Learn

Google Cloud Platform’s AI Platform (formerly ML Engine) offers a hyperparameter tuning service for your models. Why should you take the extra time and effort to learn how to use it instead of just running the code you already have on a virtual machine? Are the benefits worth the extra time and effort?
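For orientation, a trainer script for such a tuning job typically looks like the sketch below: the service passes each trial's hyperparameters as command-line flags, and the script reports the objective back via the cloudml-hypertune helper. The flag names, model, and dataset are illustrative assumptions; the search space itself would be declared separately in the job's hyperparameter configuration.

```python
# Minimal AI Platform-style trainer: read trial hyperparameters from flags,
# fit a scikit-learn model, report the metric to the tuning service.
import argparse

import hypertune                      # pip install cloudml-hypertune
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

parser = argparse.ArgumentParser()
parser.add_argument("--n_estimators", type=int, default=100)   # tuned flag (hypothetical)
parser.add_argument("--max_depth", type=int, default=8)        # tuned flag (hypothetical)
args = parser.parse_args()

X, y = load_digits(return_X_y=True)
model = RandomForestClassifier(n_estimators=args.n_estimators,
                               max_depth=args.max_depth, random_state=0)
score = cross_val_score(model, X, y, cv=3).mean()

# Report the objective so the tuning service can pick the next trial.
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag="accuracy",
    metric_value=score,
    global_step=1,
)
```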

Link

🔭 @DeepGravity