#Python programming language creator retires, saying: 'It's been an amazing ride'
Guido van #Rossum, the creator of the hugely popular Python programming language, is leaving cloud file storage firm Dropbox and heading into retirement.
That ends his six and a half years with the company, which hired him in 2013 because so much of its functionality was built on Python. And, after stepping down last year from his leadership role over Python decision-making, the Python creator is now officially retiring.
His recruitment made sense for the tech company: Dropbox has about four million lines of Python code, and Python is the most heavily used language for its back-end services and desktop app.
Read the article here
🔭 @DeepGravity
ZDNet
Python programming language creator retires, saying: 'It's been an amazing ride'
The creator of the world's most popular programming language goes into retirement.
#MIT Technology Review:
A #NeuralNet solves the three-body problem 100 million times faster
#MachineLearning provides an entirely new way to tackle one of the classic problems of applied #mathematics.
Link to the article
Link to the paper
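The idea behind results like this is to treat the problem as supervised regression: train a network on trajectories produced by a conventional numerical integrator so that it maps initial conditions and a query time directly to the bodies' positions. A toy sketch of that setup (my own illustration; the shapes and the random placeholder data are made up, this is not the paper's code):
```python
# Toy sketch: the three-body problem as supervised regression.
# Real training data would come from a classical ODE integrator;
# random arrays stand in for it here, and all sizes are illustrative.
import numpy as np
from tensorflow import keras

N = 1024
X = np.random.randn(N, 7).astype("float32")   # e.g. initial conditions + query time t
y = np.random.randn(N, 6).astype("float32")   # e.g. planar positions of the three bodies at t

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(7,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(6),                     # predicted coordinates
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```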
🔭 @DeepGravity
MIT Technology Review
A neural net solves the three-body problem 100 million times faster
Machine learning provides an entirely new way to tackle one of the classic problems of applied mathematics.
@DeepGravity - Ali Ghodsi Lectures.part1.rar
1000 MB
Ali Ghodsi (a CS professor at the University of Waterloo) is a truly great #AI teacher. Download some of his lectures on #DeepLearning here (part1) (part2).
If you are interested, you can find all his lectures on his YouTube channel.
⚠️ Before downloading: the zip files include:
02 - Feedforward neural network
03 - Overfitting
04 - Introduction to Keras
05 - Regularization
06 - Batch Normalization
07 - Convolutional neural network and a simple implementation in Keras (a minimal sketch follows after this list)
08 - Recurrent neural network
09 - LSTM, GRU
10 - Variational Autoencoder
11 - Generative Adversarial Network
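For the sketch promised in item 07: a minimal Keras CNN of the kind that lecture covers (my own illustration, not code from the lectures):
```python
# Minimal convolutional network in Keras: two conv/pool stages and a softmax head.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```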
🔭 @DeepGravity
@DeepGravity - Ali Ghodsi Lectures.part2.rar
772.6 MB
Ali Ghodsi (a CS professor at the University of Waterloo) is a truly great #AI teacher. Download some of his lectures on #DeepLearning here (part1) (part2).
If you are interested, you can find all his lectures on his YouTube channel.
⚠️ Before downloading: the zip files include:
02 - Feedforward neural network
03 - Overfitting
04 - Introduction to Keras
05 - Regularization
06 - Batch Normalization
07 - Convolutional neural network and a simple implementation in Keras
08 - Recurrent neural network
09 - LSTM, GRU
10 - Variational Autoencoder
11 - Generative Adversarial Network
🔭 @DeepGravity
#Tensorflow 2.0 coding workshop notebooks
At our meetup Data Science for Internet of Things, Dan Howarth conducted a workshop on TensorFlow 2.0.
We plan to convert it into another book on Data Science Central; for a set of all previous free books, see the free data science books collection. The notebooks are:
TensorFlow 2.0: Notebook 1: 'Hello World' Deep Learning with Tensor...
TensorFlow 2.0: Notebook 2: Computer Vision with CNNs
TensorFlow 2.0: Notebook 3: Transfer Learning
You can also see the TensorFlow 2.0 roadmap and the overall features of TensorFlow 2.0. Comments welcome. We hope you like them. (A minimal 'hello world' sketch follows below.)
Link to the notebooks
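As a taste of what Notebook 1 covers, here is the canonical TensorFlow 2.0 "hello world" (a standard MNIST classifier, not Dan Howarth's notebook):
```python
# TensorFlow 2.0 "hello world": a small dense classifier on MNIST.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```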
🔭 @DeepGravity
Google
Google Colaboratory
#Netflix Open Sources Polynote to Make #DataScience Notebooks Better
The new notebook environment provides substantial improvements to streamline experimentation in #MachineLearning workflows.
Link to the article
🔭 @DeepGravity
Medium
Netflix Open Sources Polynote to Make Data Science Notebooks Better
Notebooks are the data scientist's best friend and can also be a nightmare to work with. For someone accustomed to working with modern…
Don’t Ever Ignore #ReinforcementLearning Again
#Supervised or #unsupervised learning is not everything. Everyone knows that. Get started with #OpenAI #Gym.
Link to the article
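To make "get started with OpenAI Gym" concrete, this is the basic interaction loop: a random agent on CartPole, using the classic Gym API current at the time of this post (my own snippet, not from the article):
```python
# Minimal OpenAI Gym loop: a random agent on CartPole.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()           # replace with a learned policy
    obs, reward, done, info = env.step(action)   # classic 4-tuple step API
    total_reward += reward
env.close()
print("episode return:", total_reward)
```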
🔭 @DeepGravity
Medium
Don’t Ever Ignore Reinforcement Learning Again
Supervised or unsupervised learning is not everything. Everyone knows that. Get started with OpenAI Gym.
Top 10 roles in AI and data science
0 Data Engineer
1 Decision-Maker
2 Analyst
3 Expert Analyst
4 Statistician
5 Applied Machine Learning Engineer
6 Data Scientist
7 Analytics Manager / Data Science Leader
8 Qualitative Expert / Social Scientist
9 Researcher
10+ Additional personnel
Read the article here
🔭 @DeepGravity
Hackernoon
Top 10 roles in AI and data science | HackerNoon
When you think of the perfect data science team, are you imagining 10 copies of the same professor of computer science and statistics, hands delicately stained with whiteboard marker? I hope not!
The Fundamentals of #Matplotlib
Having a good grasp of these basics will greatly ease your foray into the expansive world of data visualization.
Link to the article
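As a small taste of those basics, the figure/axes pattern that most matplotlib work starts from (my own minimal example, not taken from the article):
```python
# Minimal matplotlib example: one figure, one axes, one labelled line.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("A minimal matplotlib figure")
ax.legend()
plt.show()
```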
🔭 @DeepGravity
Medium
The Fundamentals of Matplotlib
Having a good grasp of these basics will greatly ease your foray into the expansive world of data visualization.
Postdoctoral / Research Staff Member Position at IBM Research (Zurich) on Unifying Learning and Reasoning
Post-doc position in machine listening/Inria Nancy -- Grand Est, France
PhD Position in Deep Learning for Robotics at Istanbul Technical University (Turkey), in collaboration with Halmstad University (Sweden)
Positions in Machine Learning and Game Playing
Robot/Reinforcement/Deep Learning Research Associate Positions in Cyprus
#Job
🔭 @DeepGravity
IBM
IBM Research Zurich, Careers
careers, jobs, IBM research zurich
A new tool uses #AI to spot text written by AI
AI algorithms can generate #text convincing enough to fool the average human—potentially providing a way to mass-produce fake news, bogus reviews, and phony social accounts. Thankfully, AI can now be used to identify fake text, too.
Link to the article
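One common way to detect machine-generated text is to measure how predictable each token is under a language model, since generated text tends to consist of tokens the model itself ranks as very likely. A rough sketch of that idea (not the tool from the article) using GPT-2 via a recent version of Hugging Face transformers:
```python
# Rough sketch of language-model-based fake-text detection:
# compute the average rank of each actual token under GPT-2's predictions.
# A low average rank suggests very "predictable" (possibly machine-generated) text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_rank(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                 # (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.size(1) - 1):
        next_id = ids[0, pos + 1]
        # rank of the actual next token among the model's predictions at this position
        rank = int((logits[0, pos] > logits[0, pos, next_id]).sum()) + 1
        ranks.append(rank)
    return sum(ranks) / len(ranks)

print(mean_token_rank("The quick brown fox jumps over the lazy dog."))
```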
🔭 @DeepGravity
MIT Technology Review
A new tool uses AI to spot text written by AI
AI algorithms can generate text convincing enough to fool the average human—potentially providing a way to mass-produce fake news, bogus reviews, and phony social accounts. Thankfully, AI can now be used to identify fake text, too. The news: Researchers from…
Keras Tuner is a new hyperparameter-tuning library for #Keras; the related article below shows a similar workflow that uses #sklearn grid search for tuning the hyperparameters of #deepLearning models
Link to Tuner documentation
Link to a related article
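A minimal sketch of what a search with Keras Tuner looks like (assuming the keras_tuner package; the hyperparameter names, ranges, and model are illustrative):
```python
# Keras Tuner sketch: define a model-building function over a hyperparameter space,
# then let a tuner (here, random search) pick the best configuration.
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=5)
# With data available: tuner.search(x_train, y_train, epochs=3, validation_split=0.2)
```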
🔭 @DeepGravity
Medium
Keras Hyperparameter Tuning using Sklearn Pipelines & Grid Search with Cross Validation
Tuning Keras Models with Sklearn Grid Search
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
Link to the main paper
Link to a related article
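Schematically, the learned model described in the abstract has three parts: a representation function that encodes an observation into a hidden state, a dynamics function that predicts a reward and the next hidden state for an action, and a prediction function that outputs a policy and a value. A toy sketch of one imagined rollout (stand-in functions for illustration only, not DeepMind's implementation):
```python
# Toy schematic of MuZero's learned model (stand-in functions, not DeepMind's code).
import numpy as np

def representation(observation):             # observation -> hidden state
    return np.tanh(observation)

def dynamics(state, action):                 # (state, action) -> (reward, next state)
    next_state = np.tanh(state + action)     # toy latent transition
    reward = float(next_state.mean())        # toy predicted reward
    return reward, next_state

def prediction(state):                       # state -> (policy logits, value)
    return state[:4], float(state.sum())

# Plan entirely inside the learned latent model, never consulting the real
# environment's rules -- the key property highlighted in the abstract.
obs = np.random.randn(8)
state = representation(obs)
for action in [0.1, -0.2, 0.3]:              # an imagined action sequence
    reward, state = dynamics(state, action)
    policy_logits, value = prediction(state)
    print(f"predicted reward={reward:.3f}, value={value:.3f}")
```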
#MuZero
#DeepMind
#ReinforcementLearning
🔭 @DeepGravity
arXiv.org
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in...
OpenAI releases Safety Gym for reinforcement learning
To study constrained #RL for safe exploration, we developed a new set of environments and tools called #SafetyGym. By comparison to existing environments for constrained RL, Safety #Gym environments are richer and feature a wider range of difficulty and complexity.
Link to the Safety Gym
Link to a related article
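A minimal interaction loop (assuming the safety-gym package is installed; the environment id and the "cost" entry in info follow the standard Safexp benchmark naming as far as I recall, so treat both as assumptions):
```python
# Constrained RL with Safety Gym: the environment returns a cost signal
# (constraint violations) alongside the usual reward.
import gym
import safety_gym  # noqa: F401 -- importing registers the Safexp-* environments

env = gym.make("Safexp-PointGoal1-v0")       # assumed benchmark id
obs = env.reset()
done = False
episode_reward, episode_cost = 0.0, 0.0
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    episode_reward += reward
    episode_cost += info.get("cost", 0.0)    # assumed key for the constraint signal
print("return:", episode_reward, "cost:", episode_cost)
```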
#OpenAI
#ReinforcementLearning
🔭 @DeepGravity
OpenAI
Safety Gym
We’re releasing Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training.
A collection of #DataScience cheat sheets, covering #DeepLearning, #Python, #Docker, and more
Link to the Github repo
🔭 @DeepGravity
#DeepLearning with #PyTorch
Download a free copy of the book and learn how to get started with #AI / #ML development using PyTorch
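To give a flavour of what getting started with PyTorch looks like, a minimal tensors-plus-autograd example (my own snippet, not taken from the book):
```python
# Minimal PyTorch example: tensors, autograd, and a hand-rolled gradient-descent loop
# fitting y = 3x with a single weight and bias.
import torch

x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 3 * x + 0.1 * torch.randn_like(x)

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for step in range(200):
    loss = ((x * w + b - y) ** 2).mean()     # mean squared error
    loss.backward()                          # autograd fills w.grad and b.grad
    with torch.no_grad():
        w -= 0.5 * w.grad
        b -= 0.5 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())                    # should approach 3 and 0
```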
#Python
🔭 @DeepGravity
#MachineLearning for Scent: Learning Generalizable Perceptual Representations of Small Molecules
Predicting the relationship between a molecule’s structure and its odor remains a difficult, decades-old task. This problem, termed quantitative structure-odor relationship (QSOR) modeling, is an important challenge in chemistry, impacting human nutrition, manufacture of synthetic fragrance, the environment, and sensory neuroscience. We propose the use of graph neural networks for QSOR, and show they significantly outperform prior methods on a novel data set labeled by olfactory experts. Additional analysis shows that the learned embeddings from graph neural networks capture a meaningful odor space representation of the underlying relationship between structure and odor, as demonstrated by a strong performance on two challenging transfer learning tasks. Machine learning has already had a large impact on the senses of sight and sound. Based on these early results with graph neural networks for molecular properties, we hope machine learning can eventually do for olfaction what it has already done for vision and hearing.
Link to the paper by #Google Research and ...
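The core ingredient, a graph neural network that passes messages along molecular bonds and pools the result into a molecule-level embedding, can be sketched in a few lines of plain numpy (a toy illustration, nothing like the paper's actual architecture):
```python
# Toy message-passing step on a molecular graph, followed by mean pooling.
import numpy as np

A = np.array([[0, 1, 0],                     # adjacency of a 3-atom chain
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.randn(3, 8)                    # per-atom feature vectors
W = np.random.randn(8, 8)                    # shared (here: random) weight matrix

def gnn_layer(A, H, W):
    # Each atom averages its neighbours' features (plus its own), then applies
    # a shared linear map and a ReLU -- one round of message passing.
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum(0.0, (A_hat / deg) @ H @ W)

H = gnn_layer(A, H, W)
graph_embedding = H.mean(axis=0)             # pooled molecule representation
print(graph_embedding.shape)                 # this vector would feed an odor classifier
```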
🔭 @DeepGravity
arXiv.org
Machine Learning for Scent: Learning Generalizable Perceptual...
Predicting the relationship between a molecule's structure and its odor remains a difficult, decades-old task. This problem, termed quantitative structure-odor relationship (QSOR) modeling, is an...
3 Ways to Encode Categorical Variables for #DeepLearning
The two most popular techniques are integer encoding and one-hot encoding, although a newer technique called learned embedding may provide a useful middle ground between these two methods.
Link to the article
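A compact sketch of all three options on a toy colour feature (my own example; the article's code may differ):
```python
# Integer encoding, one-hot encoding, and a learned embedding for one categorical column.
import numpy as np
from sklearn.preprocessing import LabelEncoder
from tensorflow import keras

colors = np.array(["red", "green", "blue", "green", "red"])

# 1) Integer (ordinal) encoding
int_enc = LabelEncoder().fit_transform(colors)            # e.g. [2, 1, 0, 1, 2]

# 2) One-hot encoding
one_hot = keras.utils.to_categorical(int_enc)             # shape (5, 3)

# 3) Learned embedding: integer codes index a trainable lookup table
inputs = keras.Input(shape=(1,))
x = keras.layers.Embedding(input_dim=3, output_dim=4)(inputs)   # 3 categories -> 4-dim vectors
x = keras.layers.Flatten()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)                      # embedding is learned during training

print(int_enc, one_hot.shape, model.output_shape)
```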
🔭 @DeepGravity
MachineLearningMastery.com
3 Ways to Encode Categorical Variables for Deep Learning - MachineLearningMastery.com
Machine learning and deep learning models, like those in Keras, require all input and output variables to be numeric. This means that if your data contains categorical data, you must encode it to numbers before you can fit and evaluate a model. The two most…