Zero to One — A Ton of Awe-Inspiring Deep Learning Demos with Code for Beginners
https://medium.com/@SamPutnam/deep-learning-download-and-run-a9a1e374d2d9
Medium
Zero to One — A Ton of Awe-Inspiring Deep Learning Demos with Code for Beginners
Check it out!
How to actually build a neural network from blocks? - with notMNIST in Keras [webinar]
https://www.crowdcast.io/e/neural-network-blocks/register
Crowdcast
How to actually build a neural network from blocks? - Crowdcast
Deep learning is about building machine learning models from composable blocks called layers. In this webinar you will learn convolutional neural network architectures for image classification.
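For a sense of what "blocks" means in practice, here is a minimal Keras sketch of a convolutional classifier; the 28x28 grayscale input and 10 classes match notMNIST, but the layer sizes are illustrative, not the webinar's exact model:

    # Minimal sketch: a small convolutional classifier built from Keras "blocks".
    # Input shape (28, 28, 1) and 10 classes match notMNIST; sizes are illustrative.
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dense(10, activation='softmax'),  # one output per letter class
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()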
Projected Gradient Descent for finding Max(Min) Eigenvalues
https://sudeepraja.github.io/PGD/
Sudeep Raja
Projected Gradient Descent - Max(Min) Eigenvalues(vectors)
This post is about finding the minimum and maximum eigenvalues and the corresponding eigenvectors of a matrix A using Projected Gradient Descent. There are several algorithms to find the eigenvalues of a given matrix (see Eigenvalue algorithms). Although…
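The idea is compact enough to sketch: take gradient steps on x^T A x and project back onto the unit sphere after each step. A minimal NumPy version (step size and iteration count are arbitrary choices, not the post's):

    import numpy as np

    def pgd_eigen(A, lr=0.01, steps=5000, maximize=True):
        """Projected gradient ascent/descent on x^T A x over the unit sphere.
        For a symmetric A and small lr, converges to the eigenvector of the
        largest (or smallest) eigenvalue."""
        x = np.random.randn(A.shape[0])
        x /= np.linalg.norm(x)            # start on the unit sphere
        sign = 1.0 if maximize else -1.0
        for _ in range(steps):
            grad = 2.0 * A @ x            # gradient of x^T A x
            x = x + sign * lr * grad      # gradient step
            x /= np.linalg.norm(x)        # project back onto ||x|| = 1
        return x @ A @ x, x               # (eigenvalue estimate, eigenvector)

    A = np.random.randn(5, 5)
    A = (A + A.T) / 2                     # symmetrize
    lam, v = pgd_eigen(A, maximize=True)
    print(lam, np.linalg.eigvalsh(A)[-1]) # the two should roughly agree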
Modeling documents with Generative Adversarial Networks
http://blog.aylien.com/modeling-documents-generative-adversarial-networks/
AYLIEN
Modeling documents with Generative Adversarial Networks - AYLIEN
In this post I provide a brief overview of the Generative Adversarial Networks paper and walk through some of the code.
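For orientation before reading the walkthrough, the adversarial game at the heart of any GAN fits in a few lines. This is a generic Keras skeleton with made-up dense layers and sizes, not the document-modeling architecture from the paper:

    # Generic GAN skeleton in Keras; all sizes are illustrative.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Adam

    latent_dim, data_dim = 32, 100

    generator = Sequential([
        Dense(64, activation='relu', input_dim=latent_dim),
        Dense(data_dim, activation='tanh'),
    ])
    discriminator = Sequential([
        Dense(64, activation='relu', input_dim=data_dim),
        Dense(1, activation='sigmoid'),   # P(input is real)
    ])
    discriminator.compile(optimizer=Adam(1e-4), loss='binary_crossentropy')

    # Standard Keras idiom: freeze D inside the stacked model so that
    # training the stack only updates G.
    discriminator.trainable = False
    gan = Sequential([generator, discriminator])
    gan.compile(optimizer=Adam(1e-4), loss='binary_crossentropy')

    def train_step(real_batch):
        n = len(real_batch)
        fake_batch = generator.predict(np.random.normal(size=(n, latent_dim)))
        # D learns to separate real from generated samples...
        discriminator.train_on_batch(real_batch, np.ones(n))
        discriminator.train_on_batch(fake_batch, np.zeros(n))
        # ...while G learns to make D label its samples as "real".
        gan.train_on_batch(np.random.normal(size=(n, latent_dim)), np.ones(n))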
Interpreting neurons in an LSTM network
http://yerevann.github.io/2017/06/27/interpreting-neurons-in-an-LSTM-network/
Performance RNN: Generating Music with Expressive Timing and Dynamics
https://magenta.tensorflow.org/performance-rnn
Magenta
Performance RNN: Generating Music with Expressive Timing and Dynamics
We present Performance RNN, an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. Here’s an example...
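Performance RNN treats a performance as a sequence of discrete events (note-on, note-off, time-shift, velocity), so at its core sits an ordinary next-event sequence model. A hedged Keras sketch of that core; the vocabulary size and layer widths here are placeholders, and all of Magenta's data pipeline is omitted:

    # Sketch of the next-event model at the core of a Performance-RNN-style setup.
    # EVENT_VOCAB would enumerate note-on/note-off/time-shift/velocity events;
    # its size here is a placeholder, as are all layer widths.
    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense

    EVENT_VOCAB = 388   # placeholder: number of distinct performance events
    SEQ_LEN = 128

    model = Sequential([
        Embedding(EVENT_VOCAB, 64, input_length=SEQ_LEN),
        LSTM(256, return_sequences=True),
        Dense(EVENT_VOCAB, activation='softmax'),  # distribution over next event
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    # Training pairs: x = events[t : t+SEQ_LEN], y = events[t+1 : t+SEQ_LEN+1]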
Using keras-vis to debug a self-driving model (toy example).
https://github.com/raghakot/keras-vis/blob/master/applications/self_driving/visualize_attention.ipynb
GitHub
raghakot/keras-vis
Neural network visualization toolkit for keras.
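The notebook's core trick is a saliency map over the model's regression output: which input pixels most change the predicted steering angle. Something along these lines; the tiny stand-in model, layer name, and random image below are placeholders, so check the notebook for the exact calls:

    # Sketch of saliency visualization with keras-vis; the model and image
    # are placeholder stand-ins for the notebook's trained steering model.
    import numpy as np
    import matplotlib.pyplot as plt
    from keras.models import Sequential
    from keras.layers import Conv2D, Flatten, Dense
    from vis.utils import utils
    from vis.visualization import visualize_saliency

    model = Sequential([
        Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)),
        Flatten(),
        Dense(1, name='steering'),       # regression output: steering angle
    ])

    layer_idx = utils.find_layer_idx(model, 'steering')
    img = np.random.rand(64, 64, 3)      # placeholder for a real camera frame

    # Saliency: gradient of the steering output w.r.t. each input pixel.
    heatmap = visualize_saliency(model, layer_idx, filter_indices=0,
                                 seed_input=img)
    plt.imshow(heatmap)                  # bright pixels influenced the prediction most
    plt.show()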
OpenAI open sources a high-performance Python library for robotic simulation
https://blog.openai.com/faster-robot-simulation-in-python/
OpenAI Blog
Faster Physics in Python
We're open-sourcing a high-performance Python library for robotic simulation using the MuJoCo engine, developed over our past year of robotics research. This library is one of our core tools for deep learning robotics research, which we've…
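A minimal usage sketch of the library (mujoco-py), assuming a licensed MuJoCo install and a model XML on disk:

    # Minimal mujoco-py sketch: load a model, step the simulation, read state.
    # Assumes MuJoCo itself is installed and 'humanoid.xml' is a valid model file.
    import mujoco_py

    model = mujoco_py.load_model_from_path('humanoid.xml')
    sim = mujoco_py.MjSim(model)

    for _ in range(1000):
        sim.data.ctrl[:] = 0.0   # zero actuation; a policy would write here
        sim.step()               # advance the physics one timestep

    print(sim.data.qpos)         # generalized joint positions after simulation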
Attention in Long Short-Term Memory Recurrent Neural Networks
http://machinelearningmastery.com/attention-long-short-term-memory-recurrent-neural-networks/
MachineLearningMastery.com
Attention in Long Short-Term Memory Recurrent Neural Networks - MachineLearningMastery.com
The Encoder-Decoder architecture is popular because it has demonstrated state-of-the-art results across a range of domains. A limitation of the architecture is that it encodes the input sequence to a fixed-length internal representation. This imposes limits…
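Attention removes that bottleneck by replacing the single fixed-length vector with a weighted mix of all encoder states, recomputed at every decoder step. The mechanism itself is a few lines of NumPy; dot-product scoring is shown here, while the post also covers learned alignments:

    import numpy as np

    def attend(encoder_states, decoder_state):
        """Dot-product attention: mix all encoder states instead of
        compressing the input sequence into one fixed-length vector.
        encoder_states: (seq_len, hidden), decoder_state: (hidden,)"""
        scores = encoder_states @ decoder_state          # one score per input position
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                         # softmax over positions
        context = weights @ encoder_states               # weighted sum, shape (hidden,)
        return context, weights

    enc = np.random.randn(7, 16)     # 7 input timesteps, hidden size 16
    dec = np.random.randn(16)
    context, weights = attend(enc, dec)
    print(weights)                   # which input steps the decoder "looks at"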
Autoencoders in Keras, Part 5: GANs (Generative Adversarial Networks) and TensorFlow
https://habrahabr.ru/post/332000/
Habr
Autoencoders in Keras, Part 5: GANs (Generative Adversarial Networks) and TensorFlow
Contents: Part 1: Introduction; Part 2: Manifold learning and latent variables; Part 3: Variational autoencoders (VAE); Part 4: Conditional VAE...
Autoencoders in Keras, Part 6: VAE + GAN
https://habrahabr.ru/post/332074/
Habr
Autoencoders in Keras, Part 6: VAE + GAN
Contents: Part 1: Introduction; Part 2: Manifold learning and latent variables; Part 3: Variational autoencoders (VAE); Part 4: Conditional VAE; Part 5: GAN (Generative Adversarial...
Gentle Introduction to the Adam Optimization Algorithm for Deep Learning
http://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/
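The update rule the post introduces is short enough to write out in full; a NumPy sketch using the defaults from the original Kingma & Ba paper:

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update with the paper's default hyperparameters."""
        m = beta1 * m + (1 - beta1) * grad        # 1st-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * grad ** 2   # 2nd-moment (uncentered) estimate
        m_hat = m / (1 - beta1 ** t)              # bias correction for warm-up
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

    # Usage: minimize f(x) = x^2 starting from x = 5
    theta, m, v = 5.0, 0.0, 0.0
    for t in range(1, 2001):
        theta, m, v = adam_step(theta, 2 * theta, m, v, t)
    print(theta)   # close to 0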
2nd Place Solution to 2017 Data Science Bowl
https://dhammack.github.io/kaggle-ndsb2017/
dhammack.github.io
2nd Place Solution to 2017 DSB - Daniel Hammack and Julian de Wit
Tips for Training Recurrent Neural Networks
http://danijar.com/tips-for-training-recurrent-neural-networks/
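Posts like this typically lead with gradient clipping as the first line of defense against exploding gradients; in Keras it is a one-argument change (the threshold 1.0 and the model below are arbitrary examples, not the post's):

    # Gradient clipping, a standard remedy for exploding gradients in RNNs.
    from keras.models import Sequential
    from keras.layers import LSTM, Dense
    from keras.optimizers import RMSprop

    model = Sequential([
        LSTM(128, input_shape=(50, 10)),   # 50 timesteps, 10 features (illustrative)
        Dense(1),
    ])
    # clipnorm rescales each gradient so its L2 norm never exceeds 1.0
    model.compile(optimizer=RMSprop(lr=0.001, clipnorm=1.0), loss='mse')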
How to Visualize Your Recurrent Neural Network with Attention in Keras
https://medium.com/datalogue/attention-in-keras-1892773a4f22
Medium
How to Visualize Your Recurrent Neural Network with Attention in Keras
A technical discussion and tutorial
Improving LSTM-CTC based ASR performance in domains with limited training data
https://github.com/jb1999/eesen/blob/master/papers/LSTM-CTCperf2017.pdf
GitHub
jb1999/eesen
eesen - The official repository of the Eesen project
YellowFin: An automatic tuner for momentum SGD
http://cs.stanford.edu/~zjian/project/YellowFin/
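For context, YellowFin's job is to tune the two hyperparameters of plain momentum SGD (learning rate and momentum) on the fly. The update being tuned is just the following; the tuning rule itself is the substance of the project and is not reproduced here:

    import numpy as np

    def momentum_sgd_step(theta, grad, velocity, lr=0.01, mu=0.9):
        """Plain momentum SGD. YellowFin's contribution is adapting
        lr and mu automatically rather than fixing them as here."""
        velocity = mu * velocity - lr * grad
        theta = theta + velocity
        return theta, velocity

    # Usage: minimize f(x) = (x - 3)^2
    theta, v = 0.0, 0.0
    for _ in range(200):
        theta, v = momentum_sgd_step(theta, 2 * (theta - 3), v)
    print(theta)   # approaches 3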
Tensorflow Implementation of PathNet: Evolution Channels Gradient Descent in Super Neural Networks
https://github.com/jaesik817/pathnet
GitHub
jaesik817/pathnet
pathnet - Tensorflow Implementation of PathNet: Evolution Channels Gradient Descent in Super Neural Networks