#Evolution of #NeuralNetworks
Today, #AI is living its golden age, and neural networks are a major contributor to it. Neural networks change our lives without our even realizing it. They lie behind image, face, and speech recognition, language translation, and even forecasting. However, they did not reach their present form in a day. Let’s travel to the past and trace their earlier forms.
Link
🔭 @DeepGravity
Positive-Unlabeled #RewardLearning
Learning #Reward functions from data is a promising path towards achieving scalable #ReinforcementLearning ( #RL ) for #robotics. However, a major challenge in training agents from learned reward models is that the agent can learn to exploit errors in the reward model to achieve high reward behaviors that do not correspond to the intended task. These reward delusions can lead to unintended and even dangerous behaviors. On the other hand, adversarial imitation learning frameworks tend to suffer the opposite problem, where the discriminator learns to trivially distinguish agent and expert behavior, resulting in reward models that produce low reward signal regardless of the input state. In this paper, we connect these two classes of reward learning methods to positive-unlabeled (PU) learning, and we show that by applying a large-scale PU learning algorithm to the reward learning problem, we can address both the reward under- and over-estimation problems simultaneously. Our approach drastically improves both GAIL and supervised reward learning, without any additional assumptions.
Paper
🔭 @DeepGravity
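As a rough illustration of the PU idea (a sketch of the widely used non-negative PU risk of Kiryo et al., not necessarily the paper's exact algorithm), expert data can be treated as positives and agent data as unlabeled:

```python
import math

def sigmoid_loss(score, label):
    # l(z, y) = sigmoid(-y * z): a smooth surrogate for the 0-1 loss.
    return 1.0 / (1.0 + math.exp(label * score))

def nn_pu_risk(pos_scores, unl_scores, prior):
    """Non-negative PU risk estimate.

    pos_scores: classifier scores on positive (e.g. expert) samples
    unl_scores: classifier scores on unlabeled (e.g. agent) samples
    prior: assumed fraction of positives hidden in the unlabeled data
    """
    r_p_pos = sum(sigmoid_loss(s, +1) for s in pos_scores) / len(pos_scores)
    r_p_neg = sum(sigmoid_loss(s, -1) for s in pos_scores) / len(pos_scores)
    r_u_neg = sum(sigmoid_loss(s, -1) for s in unl_scores) / len(unl_scores)
    # Clipping at zero keeps the estimated negative-class risk from going
    # negative, which is what stops the discriminator from trivially
    # separating positive and unlabeled data (the over-estimation failure).
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)
```

The class prior here is an assumed hyperparameter; the `max(0, ·)` clipping is the part that counteracts discriminator overfitting.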
Yoshua #Bengio, Revered Architect of #AI, Has Some Ideas About What to Build Next
Article
🔭 @DeepGravity
AI Debate 2019: Yoshua Bengio vs Gary Marcus
This is an #AI Debate between Yoshua #Bengio and #GaryMarcus from Dec 23, 2019, organized by Montreal.AI and Mila - Institut Québécois d'Intelligence Artificielle.
Facebook video: https://www.facebook.com/MontrealAI/v...
Reading material: http://www.montreal.ai/aidebate.pdf
YouTube
🔭 @DeepGravity
#TensorNetworks in #NeuralNetworks
Here is a small toy example of how to use a tensor network (TN) inside a fully connected neural network.
Colab
🔭 @DeepGravity
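For intuition, here is a minimal NumPy sketch (not the Colab's own code) of the core trick: a 16×16 dense weight matrix is replaced by the contraction of two smaller tensor cores joined by a bond index.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense layer mapping 16 inputs to 16 outputs needs a 16x16 weight
# matrix (256 parameters). Instead, view both sides as 4x4 and factor
# the weight into two cores connected by a bond of size r.
r = 2
a = rng.normal(size=(4, 4, r))   # (input index i, output index k, bond)
b = rng.normal(size=(r, 4, 4))   # (bond, input index j, output index l)

def tn_layer(x):
    # x: batch of 16-dim inputs, reshaped to (batch, 4, 4).
    x = x.reshape(-1, 4, 4)
    # Contract the input with both cores; einsum sums over the two
    # input indices i, j and the bond index r.
    y = np.einsum('bij,ikr,rjl->bkl', x, a, b)
    return y.reshape(-1, 16)
```

With bond dimension 2 the factored layer has 4·4·2 + 2·4·4 = 64 parameters instead of 256; larger bonds trade compression for expressiveness.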
#MNIST- Exploration to Execution
Outline:
Understanding the statistics/distribution of the data set.
Dimensionality-reduction visualization.
Finding and fine-tuning the best model.
Comparing optimizers on the data set.
Understanding the distribution of trained weights.
Visualizing gradients of the trained model.
Visualizing the trained hidden layers.
GAN training.
Transfer learning on MNIST.
Link
🔭 @DeepGravity
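As a taste of the dimensionality-reduction step, here is a minimal from-scratch PCA sketch on made-up stand-in data (the notebook itself may use other tools, e.g. t-SNE):

```python
import numpy as np

def pca_2d(x):
    """Project data onto its top two principal components.

    On real MNIST you would pass the (n_samples, 784) pixel matrix
    and scatter-plot the result, colored by digit label.
    """
    x = x - x.mean(axis=0)                  # center the data
    cov = (x.T @ x) / (len(x) - 1)          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top2 = eigvecs[:, -2:][:, ::-1]         # the two largest components
    return x @ top2

# Toy stand-in for MNIST: 100 random 10-dimensional points.
points = np.random.default_rng(0).normal(size=(100, 10))
embedding = pca_2d(points)
```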
During the last two days, some famous #MachineLearning researchers shared their own definitions of #DeepLearning. You might check the related links to read the full definition and discussion of each.
Yann LeCun:
#DL is constructing networks of parameterized functional modules & training them from examples using gradient-based optimization. That's it.
This definition is orthogonal to the learning paradigm: reinforcement, supervised, or self-supervised.
https://www.facebook.com/722677142/posts/10156463919392143/
Andriy Burkov:
Looks like in late 2019, people still need a definition of deep learning, so here's mine: deep learning is finding parameters of a nested parametrized non-linear function by minimizing an example-based differentiable cost function using gradient descent.
https://www.linkedin.com/posts/andriyburkov_looks-like-in-late-2019-people-still-need-activity-6615377527147941888-ce68/
François Chollet:
Deep learning refers to an approach to representation learning where your model is a chain of modules (typically a stack / pyramid, hence the notion of depth), each of which could serve as a standalone feature extractor if trained as such.
https://twitter.com/fchollet/status/1210031900695449600
Link
🔭 @DeepGravity
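LeCun's definition can be illustrated in a few lines: a single parameterized module (here y = w·x + b) trained from examples by gradient-based optimization. The data and hyperparameters below are made up for illustration.

```python
import random

# Noiseless toy data from the target function y = 3x + 1.
random.seed(0)
data = [(x, 3.0 * x + 1.0) for x in [random.uniform(-1, 1) for _ in range(50)]]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb
```

Stacking many such modules and computing the gradients by backpropagation is, in this view, all that "deep" adds.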
Training Agents using Upside-Down #ReinforcementLearning
Traditional Reinforcement Learning (RL) algorithms either predict rewards with value functions or maximize them using policy search. We study an alternative: Upside-Down Reinforcement Learning (Upside-Down RL or #UDRL), that solves RL problems primarily using supervised learning techniques. Many of its main principles are outlined in a companion report [34]. Here we present the first concrete implementation of UDRL and demonstrate its feasibility on certain episodic learning problems. Experimental results show that its performance can be surprisingly competitive with, and even exceed that of traditional baseline algorithms developed over decades of research.
#JürgenSchmidhuber
Paper
🔭 @DeepGravity
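The core trick can be sketched in a few lines: past episodes are relabeled into supervised examples whose inputs include the return actually achieved and the remaining horizon. This is a simplified illustration, not the paper's implementation.

```python
def udrl_training_examples(trajectory):
    """Turn one episode into supervised (input, target) pairs.

    trajectory: list of (state, action, reward) steps. For each step,
    the 'command' is the return actually obtained from that step on,
    plus the remaining horizon; the action taken becomes the label.
    A behavior function trained on such pairs learns to answer: 'in
    this state, to get this return within this horizon, act like this.'
    """
    examples = []
    for t in range(len(trajectory)):
        desired_return = sum(r for _, _, r in trajectory[t:])
        horizon = len(trajectory) - t
        state, action, _ = trajectory[t]
        examples.append(((state, desired_return, horizon), action))
    return examples
```

At evaluation time, the agent simply asks the trained behavior function for actions while commanding a high desired return.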
Swiss people eat more chocolate than any other nation and have won the highest number of Nobel prizes :)
Swedes have received many Nobel prizes but eat less chocolate; they can be considered an outlier :)
Germans eat a lot of chocolate but have been awarded fewer Nobel prizes. So, German chocolate is not good :)
Although the data is not fake, this paper is a joke! Read more here.
#Fun
🔭 @DeepGravity
A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning
Introduction
Humans have an inherent ability to transfer knowledge across tasks. What we acquire as knowledge while learning about one task, we utilize in the same way to solve related tasks. The more related the tasks, the easier it is for us to transfer, or cross-utilize, our knowledge. Some simple examples would be:
Know how to ride a motorbike ⮫ Learn how to ride a car
Know how to play classic piano ⮫ Learn how to play jazz piano
Know math and statistics ⮫ Learn machine learning
Article
🔭 @DeepGravity
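A toy sketch of the idea (made-up extractor and data, not code from the article): keep a "pretrained" feature extractor frozen and train only a new linear head on the target task.

```python
import random

random.seed(0)

def frozen_features(x):
    # Stand-in for a pretrained network body: a fixed nonlinear map
    # whose parameters are never updated.
    return [x, x * x]

# Target task: y = 2x^2 - x, which is linear in the frozen features.
data = [(x, 2 * x * x - x) for x in [random.uniform(-1, 1) for _ in range(50)]]

head = [0.0, 0.0]
for _ in range(2000):
    for x, y in data:
        f = frozen_features(x)
        err = sum(w * fi for w, fi in zip(head, f)) - y
        # SGD step on the head only; the extractor stays frozen.
        head = [w - 0.1 * err * fi for w, fi in zip(head, f)]
```

Because only the small head is trained, far less target-task data is needed; unfreezing some extractor layers for fine-tuning is the usual next step.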
An overview of model explainability in modern machine learning
Towards a better understanding of why machine learning models make the decisions they do, and why it matters
Link
🔭 @DeepGravity
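One simple model-agnostic technique in this space is permutation importance; here is a minimal sketch, with made-up stand-ins for the model and data:

```python
import random

def permutation_importance(predict, X, y, feature, metric):
    """Shuffle one feature column and measure how much the model's
    error grows. A large increase means the model relies on that
    feature; an unused feature scores zero. Works with any fitted
    model exposed through `predict`."""
    base = metric([predict(row) for row in X], y)
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.seed(0)
    random.shuffle(column)
    for row, v in zip(shuffled, column):
        row[feature] = v
    return metric([predict(row) for row in shuffled], y) - base

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)
```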
#Bayesian Model Selection: As A Feature Reduction Technique
A gentle introduction to the application of Bayesian Model Selection to identify important features for machine learning model generation.
Article
🔭 @DeepGravity
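One common, simple proxy for Bayesian model selection is the Bayesian Information Criterion (BIC), which approximates the model evidence; a minimal sketch on made-up data (the article's own method may differ):

```python
import numpy as np

def bic(X, y):
    """BIC for a least-squares fit: n*log(RSS/n) + k*log(n).

    Lower is better. The k*log(n) term penalizes extra features,
    which is what drives feature reduction.
    """
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n + 1e-12) + k * np.log(n)

# Toy data: y depends on feature 0 only; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# Compare the two single-feature models; BIC should prefer feature 0.
scores = [bic(X[:, [j]], y) for j in range(2)]
```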
#Facebook 's Head of #AI Says the Field Will Soon ‘Hit the Wall’
Jerome Pesenti leads the development of artificial intelligence at one of the world’s most influential—and controversial—companies. As VP of artificial intelligence at Facebook, he oversees hundreds of scientists and engineers whose work shapes the company’s direction and its impact on the wider world.
Link
🔭 @DeepGravity
Wired: Jerome Pesenti is encouraged by progress in artificial intelligence, but sees the limits of the current approach to deep learning.
Yoshua Bengio:
Deep learning is inspired by neural networks of the brain to build learning machines which discover rich and useful internal representations, computed as a composition of learned features and functions.
Full definition:
https://www.facebook.com/yoshua.bengio/posts/2269432439828350
🔭 @DeepGravity