By getting the RNN to complete our sentences, we can effectively ask the model questions. Ilya Sutskever and Geoff Hinton trained a character-level RNN on Wikipedia and asked it to complete the phrase “The meaning of life is”. The RNN essentially answered “human reproduction”. It’s funny that you can have an RNN read Wikipedia for a month and have it essentially tell you that the meaning of life is to have sex. It’s probably also a correct answer from a biological perspective.
http://news.mit.edu/2015/siting-wind-farms-quickly-cheaply-0717
MIT researchers forecast better, faster, and cheaper: the placement of a wind farm, or the probability of a student being banned.
MIT News
Siting wind farms more quickly, cheaply
Researchers at MIT’s Computer Science and Artificial Intelligence Lab have devised a new statistical technique that yields better wind-speed predictions than existing techniques — even when it uses fewer data.
btw, there are 181 people reading this public chat, it's awesome
plz do not hesitate to forward messages to your friends interested in Data Science
Karpathy's research, which he will be presenting at the RE•WORK Deep Learning Summit in January
https://github.com/ryankiros/neural-storyteller
The recent release of a neural net which can TELL STORIES ABOUT IMAGES
GitHub
GitHub - ryankiros/neural-storyteller: A recurrent neural network for generating little stories about images
A recurrent neural network for generating little stories about images - ryankiros/neural-storyteller
Torch:
Twitter Cortex releases a new Autograd package, where the granularity of automatic differentiation is at the level of torch.* tensor operations.
This finer level of granularity allows one to quickly prototype new functions without worrying about writing the gradient computations as well. The autograd package is compatible with our nn framework, so the heaviest parts of your neural networks are still very fast.
https://github.com/twitter/torch-autograd
GitHub
GitHub - twitter-archive/torch-autograd: Autograd automatically differentiates native Torch code
Autograd automatically differentiates native Torch code - twitter-archive/torch-autograd
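The idea behind torch-autograd is that you write only the forward computation, the library records which operations were applied, and gradients come out of replaying that record backwards. Here is a minimal sketch of that reverse-mode autodiff idea in Python; this is NOT the torch-autograd API, and the `Var` class and its methods are made up purely for illustration.

```python
class Var:
    """A scalar value that records the operations applied to it."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents          # Vars this one was computed from
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value, (self, other),
                   (other.value, self.value))

    def backward(self, upstream=1.0):
        """Accumulate gradients by walking the recorded graph backwards."""
        self.grad += upstream
        for parent, local in zip(self.parents, self.local_grads):
            parent.backward(upstream * local)

# Differentiate f(x, y) = x * y + x without writing any gradient code.
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
print(x.grad)  # df/dx = y + 1 = 5.0
print(y.grad)  # df/dy = x   = 3.0
```

torch-autograd does the same bookkeeping, but at the level of torch.* tensor operations rather than scalars, which is what makes prototyping new functions cheap while the heavy tensor math stays fast.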
http://officialblog.yelp.com/2015/08/digesting-yelp-photos-just-got-easier-now-browse-by-category.html
Now Yelp can also classify food photos
Yelp
Yelp Official Blog: Digesting Yelp Photos Just Got Easier: Now Browse by Category
One of the best parts about working at Yelp is knowing that every day we get to contribute to people having great experiences at amazing local businesses. Whether it’s the perfect sandwich for your lunch or the right plumber when...