Deep Gravity
AI

Contact:
DeepL.Gravity@gmail.com
Major trends in #NLP : a review of 20 years of #ACL research

The 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) is starting this week in Florence, Italy. We took the opportunity to review major research trends in the animated NLP space and formulate some implications from the business perspective. The article is backed by a statistical and — guess what — NLP-based analysis of ACL papers from the last 20 years.

Link

🔭 @DeepGravity
Generalized Coefficient of Correlation for Non-Linear Relationships

What is the best correlation coefficient R(X, Y) to measure non-linear dependencies between two variables X and Y? Let's say that you want to assess whether there is a linear or quadratic relationship between X and Y. One way to do it is to perform a polynomial regression such as Y = a + bX + cX^2, and then measure the standard coefficient of correlation between the predicted and observed values. How good is this approach?
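
Below is a minimal Python sketch of that approach; the toy quadratic data and the polynomial degree are illustrative assumptions, not from the post:

import numpy as np

# Fit Y = a + bX + cX^2, then measure the standard (Pearson) correlation
# between predicted and observed values. Toy data is an assumption.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(scale=0.5, size=200)

coeffs = np.polyfit(x, y, deg=2)   # polynomial regression: a + bX + cX^2
y_pred = np.polyval(coeffs, x)     # predicted values from the fit

r = np.corrcoef(y, y_pred)[0, 1]   # Pearson R between observed and predicted
print(f"R(observed, predicted) = {r:.3f}")

One caveat of this approach: for a least-squares fit with an intercept, the correlation between fitted and observed values is always non-negative, so it hides the direction of the relationship.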

Link

🔭 @DeepGravity
Research Fellow in Deep Reinforcement Learning for Machine Theory of Mind @ Oxford Brookes

New Post-doc Opening at U. of Toronto on Deep Learning / RL for Traffic Prediction and Control

Seeking a Postdoctoral Fellow in Machine Learning (Survival Prediction; Medical Informatics), University of Alberta

Two full-time academic position vacancies in Data Science and related topics at ULB, Brussels, Belgium

Postdoc at Monash University (Melbourne) for probabilistic & deep learning

MERL is seeking a motivated and qualified individual to conduct research in safe reinforcement learning (RL) and deep learning algorithms for robotics applications.

Fully funded Postdoctoral Position at InterDigital: Information Theory for Understanding and Designing Flexible Deep Neural Networks

AI Scientist positions at AI Singapore

RL and LfD research positions (now including interns) at Bosch / UT Austin, focusing on autonomous vehicles

Looking for an Integrated Master's-cum-PhD studentship position anywhere in the world in the areas of Artificial Intelligence, Machine Learning, Data Science, and Natural Language Processing

Permanent academic position - Lecturer/Senior Lecturer/Reader in Media & Data Science, University of Glasgow, School of Computing Science

PhD positions in Machine Learning in ECE at George Washington University, USA

2 PhD Candidates in Computer Science, paluno - The Ruhr Institute for Software Technology, Universität Duisburg-Essen

3-year fully funded PhD position on Multimodal Machine Learning for Mental Health (CNRS GREYC, France)

Research Fellow / Senior Research Fellow at the intersection of machine learning and robotics

Two postdoctoral positions are available in the lab of Carlos Fernandez-Granda at the Courant Institute and Center for Data Science at NYU

#Job

🔭 @DeepGravity
#DeepLearning models tend to increase their accuracy as the amount of training data grows, whereas traditional #MachineLearning models such as #SVM and the Naive #Bayes classifier stop improving after a saturation point.
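
One way to see this saturation empirically is a learning curve. A minimal scikit-learn sketch, using synthetic data as an assumption:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB

# Measure validation accuracy of a Naive Bayes classifier as the training
# set grows; on many problems the curve flattens well before all data is used.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
sizes, train_scores, val_scores = learning_curve(
    GaussianNB(), X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5
)
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} samples -> validation accuracy {score:.3f}")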

Link

🔭 @DeepGravity
#VariationalAutoencoder Theory

The Variational Autoencoder has taken the #MachineLearning community by storm since Kingma and Welling's seminal paper was released in 2013.

Link

🔭 @DeepGravity
#DecisionTree vs #RandomForest vs #GradientBoostingMachines: Explained Simply

Decision Trees, Random Forests and Boosting are among the top 16 #data science and machine learning tools used by data scientists. The three methods are similar, with a significant amount of overlap; a short code comparison follows the list. In a nutshell:

* A decision tree is a simple decision-making diagram.
* Random forests are a large number of trees, combined (using averages or "majority rules") at the end of the process.
* Gradient boosting machines also combine decision trees, but start the combining process at the beginning, instead of at the end.
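
A side-by-side sketch of the three methods in scikit-learn; the dataset and hyperparameters are illustrative choices, not from the post:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# One tree, a bagged ensemble of trees, and a sequentially boosted ensemble,
# evaluated with the same 5-fold cross-validation.
X, y = load_breast_cancer(return_X_y=True)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:18s} mean CV accuracy = {scores.mean():.3f}")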

Link

🔭 @DeepGravity
Semantic Image #Segmentation with #DeepLab in #TensorFlow

Semantic image segmentation, the task of assigning a semantic label, such as “road”, “sky”, “person”, “dog”, to every pixel in an image, enables numerous new applications, such as the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones, and mobile real-time video segmentation. Assigning these semantic labels requires pinpointing the outline of objects, and thus imposes much stricter localization accuracy requirements than other visual entity recognition tasks, such as image-level classification or bounding box-level detection.
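
The per-pixel labeling itself reduces to an argmax over class scores. A tiny illustrative sketch (shapes and class names are assumptions, not DeepLab's actual API):

import numpy as np

# A segmentation network emits a score for every class at every pixel;
# the predicted label map is the argmax over the class dimension.
H, W, NUM_CLASSES = 4, 4, 3
class_names = ["road", "sky", "person"]

logits = np.random.rand(H, W, NUM_CLASSES)  # stand-in for network output
label_map = logits.argmax(axis=-1)          # one class index per pixel

print(label_map)                            # (H, W) array of class ids
print(class_names[label_map[0, 0]])         # label of the top-left pixel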

Link

🔭 @DeepGravity
Forwarded from Apply Time Positions
Dr. Mahdi Imani, an assistant professor in the Department of Electrical and Computer Engineering at the George Washington University, is seeking multiple PhD students with interests in Machine Learning, Reinforcement Learning, and Statistics. Ideal candidates may have:
- A Master's degree in electrical/computer engineering or computer science.
- A strong background in mathematics and statistics.
- Good programming skills (e.g., Python).
Prospective students may email their CV, transcripts, and English test scores to imani.gwu@gmail.com. For more information, see https://web.seas.gwu.edu/imani/.
Self-supported postdoctoral and visiting scholars are encouraged to get in touch as well.
--
Mahdi Imani, Ph.D.
Assistant Professor
Dept. of Electrical and Computer Eng.
George Washington University
https://web.seas.gwu.edu/imani/

✔️ @ApplyTime
A very interesting paper by #Harvard University and #OpenAI

#DeepDoubleDescent: Where Bigger Models and More Data Hurt

ABSTRACT
We show that a variety of modern deep learning tasks exhibit a “double-descent” phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.

Paper

Related article

#DeepLearning

🔭 @DeepGravity
#Google #DeepMind gamifies memory with its latest #AI work

Google DeepMind scientists built a computer program that sends signals from the future back to the past, a kind of theoretical model that resembles how people learn from their mistakes. Just remember, it's only a game.

Link

🔭 @DeepGravity
How #NeuralNetworks work—and why they’ve become a big business

The last decade has seen remarkable improvements in the ability of computers to understand the world around them. Photo software automatically recognizes people's faces. Smartphones transcribe spoken words into text. Self-driving cars recognize objects on the road and avoid hitting them.

Underlying these breakthroughs is an artificial intelligence technique called deep learning. Deep learning is based on neural networks, a type of data structure loosely inspired by networks of biological neurons. Neural networks are organized in layers, with the outputs of one layer feeding into the inputs of the next.
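
A minimal numpy sketch of that layered structure, with illustrative sizes and activation (not tied to the article):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 3]  # input -> hidden -> output
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # The output of each layer feeds the input of the next.
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

print(forward(rng.normal(size=4)))  # 3 output values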

Link

🔭 @DeepGravity
#ReinforcementLearning to Reduce Building Energy Consumption

#ModelPredictiveControl (MPC)
The basic #MPC concept can be summarized as follows. Suppose that we wish to control a multiple-input, multiple-output process while satisfying inequality constraints on the input and output variables. If a reasonably accurate dynamic model of the process is available, the model and current measurements can be used to predict future values of the outputs. The appropriate changes in the input variables can then be computed from both the predictions and the measurements.
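
A minimal receding-horizon sketch of that loop, using an assumed scalar model x_{t+1} = a*x + b*u and scipy for the optimization (all constants are illustrative):

import numpy as np
from scipy.optimize import minimize

a, b = 1.1, 1.0           # assumed (approximate) plant model
HORIZON, U_MAX = 10, 2.0  # prediction horizon and input constraint

def cost(u_seq, x0):
    # Use the model to predict future outputs under a candidate input sequence.
    x, total = x0, 0.0
    for u in u_seq:
        x = a * x + b * u
        total += x**2 + 0.1 * u**2  # penalize output deviation and input effort
    return total

x = 1.5  # current measurement
for t in range(20):
    res = minimize(cost, np.zeros(HORIZON), args=(x,),
                   bounds=[(-U_MAX, U_MAX)] * HORIZON)
    u = res.x[0]        # apply only the first planned input...
    x = a * x + b * u   # ...then re-measure and re-optimize at the next step
    print(f"t={t:2d}  u={u:+.3f}  x={x:+.4f}")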

Link

🔭 @DeepGravity
#StarGAN v2: Diverse Image Synthesis for Multiple Domains

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at this https URL.

Paper

#GANs

🔭 @DeepGravity
A thousand ways to deploy #MachineLearning models

You have done great work building that awesome 99%-accurate machine learning model, but most of the time your work is not done until the model is deployed. Our models usually need to be integrated with existing web apps, mobile apps, or other systems. How do we make this happen?
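
One common path is to wrap the trained model in a small HTTP service. A minimal Flask sketch; the model file name and feature layout are assumptions for illustration:

import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)
with open("model.pkl", "rb") as f:  # hypothetical pre-trained model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

A client would then POST JSON such as {"features": [[5.1, 3.5, 1.4, 0.2]]} to /predict and read the prediction from the response.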

Link

🔭 @DeepGravity
How to Use Out-of-Fold Predictions in #MachineLearning

Machine learning algorithms are typically evaluated using resampling techniques such as k-fold cross-validation.

During the k-fold cross-validation process, predictions are made on test sets made up of data not used to train the model. These predictions are referred to as out-of-fold predictions, a type of out-of-sample prediction.

Out-of-fold predictions play an important role in machine learning, both in estimating the performance of a model when making predictions on new data in the future (the so-called generalization performance of the model) and in the development of ensemble models.

In this tutorial, you will discover a gentle introduction to out-of-fold predictions in machine learning.

After completing this tutorial, you will know:

* Out-of-fold predictions are a type of out-of-sample prediction made on data not used to train a model.
* Out-of-fold predictions are most commonly used to estimate the performance of a model when making predictions on unseen data.
* Out-of-fold predictions can be used to construct an ensemble model called a stacked generalization or stacking ensemble (a minimal example follows this list).
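
A minimal scikit-learn sketch; the dataset and model are illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict

# cross_val_predict returns, for each sample, the prediction made by the
# model that did NOT see that sample during training (out-of-fold).
X, y = make_classification(n_samples=1000, random_state=0)
oof_preds = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=10)

# Score the out-of-fold predictions to estimate generalization performance;
# the same predictions could also serve as inputs to a stacking meta-model.
print(f"out-of-fold accuracy: {accuracy_score(y, oof_preds):.3f}")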

Link

🔭 @DeepGravity