Deep Gravity

AI

Contact:
DeepL.Gravity@gmail.com
Making progress in #MachineLearning is not possible without a strong grasp of the #Mathematics behind it.

To learn the math behind ML, just Google it.

🔭 @DeepGravity
#Quantum #GenerativeAdversarialNetworks for learning and loading random distributions

Abstract
Quantum algorithms have the potential to outperform their classical counterparts in a variety of tasks. The realization of the advantage often requires the ability to load classical data efficiently into quantum states. However, the best known methods require O(2^n) gates to load an exact representation of a generic data structure into an n-qubit state. This scaling can easily dominate the complexity of a quantum algorithm and, thereby, impair potential quantum advantage. Our work presents a hybrid quantum-classical algorithm for efficient, approximate quantum state loading. More precisely, we use quantum Generative Adversarial Networks (qGANs) to facilitate efficient learning and loading of generic probability distributions - implicitly given by data samples - into quantum states. Through the interplay of a quantum channel, such as a variational quantum circuit, and a classical neural network, the qGAN can learn a representation of the probability distribution underlying the data samples and load it into a quantum state. The loading requires O(poly(n)) gates and can thus enable the use of potentially advantageous quantum algorithms, such as Quantum Amplitude Estimation. We implement the qGAN distribution learning and loading method with Qiskit and test it using a quantum simulation as well as actual quantum processors provided by the IBM Q Experience. Furthermore, we employ quantum simulation to demonstrate the use of the trained quantum channel in a quantum finance application.
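
To make the gate-count argument concrete, here is a minimal Qiskit sketch of the generator side only: a shallow variational circuit whose parameter and gate counts grow polynomially with the number of qubits. The adversarial training loop against the classical discriminator is omitted, and the parameter values are random placeholders rather than trained values, so treat this as an illustration of the circuit structure, not the paper's implementation.

```python
# Generator side of a qGAN, sketched: a shallow variational circuit with
# O(poly(n)) gates whose measurement statistics approximate a target
# distribution after training. Parameter values here are placeholders.
import numpy as np
from qiskit.circuit.library import TwoLocal
from qiskit.quantum_info import Statevector

n = 3  # 3 qubits encode a distribution over 2^3 = 8 discrete values

# RY rotations + CZ entanglers on linear connectivity: the gate count is
# polynomial in n, in contrast to the O(2^n) cost of exact state loading.
generator = TwoLocal(n, "ry", "cz", entanglement="linear", reps=2)

# In a real qGAN these angles come from adversarial training against a
# classical neural-network discriminator; random values stand in here.
rng = np.random.default_rng(seed=0)
angles = rng.uniform(0, 2 * np.pi, generator.num_parameters)
bound = generator.assign_parameters(angles)

# The learned distribution lives in the measurement probabilities.
probs = Statevector(bound).probabilities()
print({format(i, f"0{n}b"): round(p, 3) for i, p in enumerate(probs)})
```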

Link to the paper on Nature

🔭 @DeepGravity
AI-Play
Unlike other environments such as gym, AI-Play lets humans and AI agents play with one another. AI-Play is an educational and research framework for experimenting with #AI #agents.

To run it, open your terminal, change into the directory where you placed this repository, and type the following command.

#ReinforcementLearning

Link to the GitHub repo

🔭 @DeepGravity
If you are seeking a job in #DataScience worldwide, check out this Facebook group

#Job

🔭 @DeepGravity
How to Perform #FeatureSelection with Categorical Data

After completing this tutorial, you will know:

* The breast cancer predictive modeling problem, which has categorical inputs and a binary #classification target variable.
* How to evaluate the importance of categorical features using the chi-squared and mutual information statistics.
* How to perform feature selection for categorical data when fitting and evaluating a classification model.
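
As a sketch of the last two points, the snippet below scores ordinal-encoded categorical features with both statistics and keeps the top k, using scikit-learn. The toy data is made up for illustration; it is not the tutorial's breast cancer dataset.

```python
# Score ordinal-encoded categorical features against a binary target with
# chi-squared and mutual information, then select the best k features.
from functools import partial

import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.preprocessing import LabelEncoder, OrdinalEncoder

X_raw = np.array([
    ["low", "red", "yes"], ["high", "blue", "no"],
    ["mid", "red", "no"], ["high", "red", "yes"],
    ["low", "blue", "yes"], ["mid", "blue", "no"],
    ["high", "red", "no"], ["low", "red", "yes"],
])
y_raw = np.array(["benign", "malignant", "malignant", "benign",
                  "benign", "malignant", "malignant", "benign"])

# chi2 requires non-negative inputs, which ordinal encoding guarantees.
X = OrdinalEncoder().fit_transform(X_raw)
y = LabelEncoder().fit_transform(y_raw)

# Treat the encoded columns as discrete when estimating mutual information.
mi_discrete = partial(mutual_info_classif, discrete_features=True)

for name, score_fn in [("chi2", chi2), ("mutual_info", mi_discrete)]:
    selector = SelectKBest(score_func=score_fn, k=2).fit(X, y)
    print(name, "scores:", np.round(selector.scores_, 3),
          "selected columns:", selector.get_support(indices=True))
```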

Link

🔭 @DeepGravity
Emerging Technologies 2018

#Gartner

🔭 @DeepGravity
#Facebook #AI Residency program

The AI Residency program will pair you with an AI Researcher and Engineer who will both guide your project. With the team, you will pick a research problem of mutual interest and then devise new deep learning techniques to solve it. We also encourage collaborations beyond the assigned mentors. The research will be communicated to the academic community by submitting papers to top academic venues (for example, NeurIPS, ICML, ICLR, CVPR, ICCV, ACL, EMNLP, etc.), as well as through open-source code releases and/or product impact.

Link

#Job

🔭 @DeepGravity
How to Develop #MultilayerPerceptronModels for #TimeSeries Forecasting

After completing this tutorial, you will know:

* How to develop #MLP models for univariate time series forecasting.
* How to develop MLP models for multivariate time series forecasting.
* How to develop MLP models for multi-step time series forecasting.
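
Here is a minimal sketch of the univariate case, assuming TensorFlow/Keras: frame the series as supervised learning with a sliding window, then fit a small MLP. The series and hyperparameters are illustrative only.

```python
# Univariate MLP forecast, sketched: turn a 1-D series into
# (window, next value) pairs and fit a small dense network.
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

def split_sequence(seq, n_steps):
    """Split a 1-D series into (window, next-value) training pairs."""
    X, y = [], []
    for i in range(len(seq) - n_steps):
        X.append(seq[i:i + n_steps])
        y.append(seq[i + n_steps])
    return np.array(X), np.array(y)

series = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
n_steps = 3
X, y = split_sequence(series, n_steps)

model = Sequential([
    Dense(100, activation="relu", input_shape=(n_steps,)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2000, verbose=0)

# Forecast the value after [70, 80, 90]; expect something close to 100.
print(model.predict(np.array([[70.0, 80.0, 90.0]]), verbose=0))
```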

Link

🔭 @DeepGravity
Spark #NLP 101: LightPipeline

A Pipeline is specified as a sequence of stages, and each stage is either a Transformer or an Estimator. These stages are run in order, and the input DataFrame is transformed as it passes through each stage. Now let’s see how this can be done in Spark NLP using Annotators and Transformers.
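
A hedged sketch of what that looks like end to end, assuming pyspark and spark-nlp are installed: a deliberately tiny pipeline (document assembly plus tokenization) is fitted once, then wrapped in a LightPipeline so it can be driven from plain Python strings without building a DataFrame per request. The example sentence is made up.

```python
# Fit a two-stage Spark NLP pipeline, then reuse it via LightPipeline,
# which runs the fitted stages directly on Python strings.
import sparknlp
from sparknlp.base import DocumentAssembler, LightPipeline
from sparknlp.annotator import Tokenizer
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Each stage is a Transformer or an Estimator; fit() runs them in order.
pipeline = Pipeline(stages=[document_assembler, tokenizer])
empty_df = spark.createDataFrame([[""]]).toDF("text")
pipeline_model = pipeline.fit(empty_df)

# Same stages, no DataFrame round trip: annotate a string directly.
light = LightPipeline(pipeline_model)
print(light.annotate("Spark NLP makes pipelines easy to test."))
```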

Link

🔭 @DeepGravity
#MachineLearning #CheatSheet

This cheat sheet contains many classical equations and diagrams on machine learning, which will help you quickly recall knowledge and ideas in the field.

The cheat sheet will also appeal to someone who is preparing for a job interview related to machine learning.

Link

🔭 @DeepGravity
Awesome #MonteCarlo Tree Search Papers

Link

🔭 @DeepGravity
When #Bayes, #Ockham, and #Shannon come together to define #MachineLearning

A beautiful idea that binds together concepts from statistics, information theory, and philosophy to lay the foundations of machine learning.
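
The core identity, sketched in the article's spirit: maximizing the Bayesian posterior is the same as minimizing two Shannon code lengths, and the term penalizing hypothesis complexity is exactly Ockham's razor.

```latex
% Bayes' MAP estimate rewritten as a minimum-description-length objective.
\begin{align*}
h_{\mathrm{MAP}} &= \arg\max_{h} P(h \mid D)
                  = \arg\max_{h} P(D \mid h)\, P(h) \\
                 &= \arg\min_{h} \Bigl[
                      \underbrace{-\log_2 P(D \mid h)}_{\text{Shannon: cost of encoding the data under } h}
                      \;+\;
                      \underbrace{-\log_2 P(h)}_{\text{Ockham: cost of describing } h}
                    \Bigr]
\end{align*}
```

A hypothesis wins by jointly compressing the data and being short to describe; that trade-off is the foundation the article builds on.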

Link

🔭 @DeepGravity
A Recipe for Training #NeuralNetworks

A few weeks ago I posted a tweet on “the most common neural net mistakes”, listing a few common gotchas related to training neural nets. The tweet got quite a bit more engagement than I anticipated (including a webinar :)). Clearly, a lot of people have personally encountered the large gap between “here is how a convolutional layer works” and “our convnet achieves state of the art results”.

Link


🔭 @DeepGravity
How Much Over-parameterization Is Sufficient to Learn Deep #ReLU Networks?

A recent line of research on #DeepLearning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high-degree polynomial of the training sample size n and the inverse of the target accuracy ϵ^-1, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it was shown that under a certain margin assumption on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2019). However, how much over-parameterization is sufficient to guarantee optimization and generalization for deep neural networks remains an open question. In this work, we establish sharp optimization and generalization guarantees for deep ReLU networks. Under various assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in n and ϵ^-1. Our results push the study of over-parameterized deep neural networks towards more practical settings.

Link

🔭 @DeepGravity
Noise Robust Generative Adversarial Networks

#GenerativeAdversarialNetworks (#GANs) are neural networks that learn data distributions through adversarial training. In intensive studies, recent GANs have shown promising results in reproducing training data. However, when the training data are noisy, they faithfully reproduce the noise along with the signal. As an alternative, we propose a novel family of GANs called noise-robust GANs (NR-GANs), which can learn a clean image generator even when training data are noisy. In particular, NR-GANs can solve this problem without having complete noise information (e.g., the noise distribution type, noise amount, or signal-noise relation). To achieve this, we introduce a noise generator and train it along with a clean image generator. As it is difficult to generate an image and noise separately without constraints, we propose distribution and transformation constraints that encourage the noise generator to capture only the noise-specific components. In particular, considering such constraints under different assumptions, we devise two variants of NR-GANs for signal-independent noise and three variants of NR-GANs for signal-dependent noise. On three benchmark datasets, we demonstrate the effectiveness of NR-GANs in noise-robust image generation. Furthermore, we show the applicability of NR-GANs in image denoising.
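
To make the two-generator idea concrete, here is a heavily hedged PyTorch sketch of the signal-independent case: a clean-image generator and a separate noise generator are trained jointly, and the discriminator only ever sees their sum. The architectures, sizes, and the constraint term below are illustrative placeholders, not the paper's.

```python
# NR-GAN core idea, sketched: generate clean image and noise separately,
# show the discriminator only their sum, and constrain the noise branch
# so image content cannot leak into it. All details are placeholders.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

def mlp(out_dim):
    return nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

g_clean = mlp(img_dim)   # learns the underlying clean image
g_noise = mlp(img_dim)   # learns only the noise component
disc = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                     nn.Linear(256, 1))

z_img = torch.randn(8, latent_dim)
z_noise = torch.randn(8, latent_dim)

clean = g_clean(z_img)
noise = g_noise(z_noise)
fake = clean + noise      # the discriminator never sees `clean` alone

# Illustrative stand-in for the paper's distribution constraint: pull the
# noise toward zero mean so it captures noise-specific structure only.
constraint = noise.mean(dim=1).pow(2).mean()
adv_score = disc(fake).mean()  # plug into your preferred GAN loss
print(fake.shape, float(constraint), float(adv_score))
```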

Link

🔭 @DeepGravity
A Quick Guide to #FeatureEngineering

Feature engineering plays a key role in machine learning, data mining, and data analytics. This article provides a general definition for feature engineering, together with an overview of the major issues, approaches, and challenges of the field.

Link

🔭 @DeepGravity