✍🏻Need a quick rewrite? Try NLPAug!
NLPAug is a Python library that lets you boost the performance of neural networks on NLP tasks without changing their architecture or fine-tuning them. With it, you can synthesize new text from your available data by replacing some words with synonyms, including by cosine similarity in vector representations such as word2vec or GloVe. NLPAug also performs context-aware word substitution using transformer models such as BERT, and supports back-translation: translating text into another language and back. https://github.com/makcedward/nlpaug
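The synonym-replacement principle can be sketched in a few lines of plain Python. This is a toy illustration with a hypothetical synonym table, not nlpaug's own API (which draws synonyms from WordNet or from embedding neighbours):

```python
import random

# Hypothetical mini synonym table; a real augmenter would draw synonyms
# from WordNet or from word2vec/GloVe nearest neighbours.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "increase": ["boost", "raise"],
    "text": ["passage", "wording"],
}

def synonym_augment(sentence, p=0.5, rng=None):
    """Replace each word that has a synonym with probability p."""
    rng = rng or random.Random(0)
    out = []
    for word in sentence.split():
        options = SYNONYMS.get(word.lower())
        if options and rng.random() < p:
            out.append(rng.choice(options))
        else:
            out.append(word)
    return " ".join(out)

print(synonym_augment("increase the quick text", p=1.0))
```

Running the augmenter several times over the same corpus yields many paraphrased variants, which is exactly how such libraries enlarge a training set without new labeling effort.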
☀️ML for prediction of Solar Radiation
From a practical agronomic point of view, an accurate assessment of solar radiation is vital because it is a key factor in crop development. Most weather stations around the world have temperature and rain sensors, but only some of them measure solar radiation: the measurement is usually very expensive due to complex sensors (pyranometers and radiometers), and reliable data are scarce. Therefore, a research team at the University of Córdoba has developed ML models to predict solar radiation at nine locations in southern Spain and North Carolina (USA).
The models are based not only on actual measurements but are also enriched with data on the geoclimatic conditions of the area (aridity, distance to the sea, altitude, etc.). To estimate daily solar radiation, the proposed neural network algorithms need only current air temperature data, which is relatively cheap to collect thanks to inexpensive sensors and IoT technologies. Bayesian algorithms are used to optimize hyperparameters, and the models themselves can be adapted to any terrain depending on its aridity.
https://techxplore.com/news/2021-07-machine-based-thermal-solar.html
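The article does not publish the researchers' models. As background, a classical temperature-only estimate of the same quantity is the Hargreaves-Samani formula, which shows why the diurnal temperature range alone carries a solar-radiation signal (the coefficient values below are the commonly cited defaults, not from this study):

```python
import math

def hargreaves_samani(t_max, t_min, ra, k_rs=0.16):
    """Estimate daily solar radiation (same units as ra, e.g. MJ/m^2/day)
    from the diurnal temperature range alone.

    ra   : extraterrestrial radiation, computable from latitude and day of year
    k_rs : empirical coefficient (~0.16 inland, ~0.19 coastal)
    """
    return k_rs * math.sqrt(t_max - t_min) * ra

# Example: a clear summer day with Ra = 40 MJ/m^2/day and a 30/15 degree C range
rs = hargreaves_samani(30.0, 15.0, ra=40.0)
print(round(rs, 2))
```

The ML models in the study effectively learn a richer, location-aware version of this mapping, replacing the fixed coefficient with terrain-dependent parameters.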
🌦Why doesn't the rain match the forecast, and how Yandex Meteum 2.0 deals with it
The story of replacing MatrixNet with CatBoost and of new datasets for training NN models. Meteum neural nets now learn not only from data from professional weather stations, but also from information about terrain features and user reports. https://tekdeeps.com/yandex-has-launched-meteum-2-0-a-new-technology-for-weather-forecasting-based-on-machine-learning/
✈️AI will schedule flight crews for the US Air Force
The AI system from MIT helps US Air Force pilots plan the crew workload on cargo flights, taking into account many factors: airspace availability, pilot qualifications, work and rest requirements, etc. Combining integer-programming optimization with RL neural networks, the system generates flight schedules that satisfy both explicit and implicit constraints. Developed under the Department of the Air Force–MIT AI Accelerator partnership, the tool targets the widely used C-17 cargo aircraft. https://news.mit.edu/2021/us-air-force-pilots-artificial-intelligence-assist-scheduling-aircrews-0708
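MIT's actual formulation is not given in this note. As an illustration only, the integer-programming idea, pick the assignment that minimizes cost subject to hard constraints, can be sketched by brute force on a toy instance (the flight names, costs and the two-flight limit below are all hypothetical):

```python
from itertools import product

# Toy instance: assign each of 3 flights to one of 2 pilots, subject to a
# hard constraint (max 2 flights per pilot, a stand-in for rest
# requirements), minimizing total cost (the integer-program objective).
flights = ["F1", "F2", "F3"]
pilots = ["A", "B"]
cost = {("F1", "A"): 2, ("F1", "B"): 3,
        ("F2", "A"): 4, ("F2", "B"): 1,
        ("F3", "A"): 3, ("F3", "B"): 3}
MAX_FLIGHTS = 2  # rest constraint

best = None
for assign in product(pilots, repeat=len(flights)):
    if any(assign.count(p) > MAX_FLIGHTS for p in pilots):
        continue  # infeasible: violates the rest constraint
    total = sum(cost[(f, p)] for f, p in zip(flights, assign))
    if best is None or total < best[0]:
        best = (total, dict(zip(flights, assign)))

print(best)  # → (6, {'F1': 'A', 'F2': 'B', 'F3': 'A'})
```

Real solvers replace the brute-force loop with branch-and-bound over millions of variables; the RL component in MIT's system additionally learns the implicit preferences that schedulers cannot write down as constraints.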
🔥Video translation from Yandex
On July 16, 2021, Yandex showed the world's first prototype of machine video translation, based on AI technologies for biometrics, speech recognition and speech synthesis. With it, users of the desktop Yandex Browser can already watch English-language videos with voice-over translation; other languages will be supported in the future. https://yandex.ru/company/services_news/2021/2021-07-16
Video:
https://disk.yandex.ru/d/7DYUm9QSfTPn5A
https://www.youtube.com/playlist?list=PLkMNi_iVG-shtwkqd918VUJ80NIOJ2pQf
👀Looking for an enterprise AI solution?
Try NVIDIA's NGC™ Catalog, a registry of GPU-optimized software for high-performance computing and big data analytics across industries, from retail chatbots to medical imaging and recommender systems. NGC contains enterprise-grade application containers, pre-trained AI models, and industry-specific SDKs that can be deployed on-premises, in the cloud, or at the edge. For example, NVIDIA TAO is a platform for training, adapting and optimizing AI models that lets you create enterprise-grade AI applications without deep expert knowledge or large training datasets. https://ngc.nvidia.com/
😎Neural networks of the Cloud Mail.ru service help preserve memories
ML algorithms automatically find pictures taken on a specific day and display them as stories, generating a custom photo calendar that illustrates memorable events. Thanks to image recognition, only successful shots are included in the result, and if the user doesn't like a picture, it can be removed directly from the video story. You can share the animated photo gallery with friends by sending it in a message or posting it on VK and Instagram. The update is already available in the iOS and Android apps. https://corp.mail.ru/ru/press/releases/10947/
…
…
💦The 3 main tools to build an ML pipeline
There are just 3 basic tools needed to build an effective machine learning pipeline:
• Feature Store to handle offline and online feature transformations. It supports version control and integration with data lakes and DWHs, and enables fast serving and rapid deployment of code to production. Examples: Tecton, Hopsworks, Michelangelo Palette, Zipline, and the Feature Stores in Amazon SageMaker and Databricks.
• Model Store as a central registry of models and experiments. It provides version reproducibility and tracks the history of ML models and related artifacts such as Git commits, pickle files and evaluation scores. Examples: Weights and Biases, MLflow, Neptune.ai, EthicalML, and solutions by Amazon, Azure and Google.
• Evaluation Store for monitoring and improving model performance. It tracks performance metrics for each ML model in any environment, from training to production, and includes A/B testing tools and visual dashboards. Examples: Arize and Neptune.ai.
Additionally, data annotation platforms (Appen), ML model serving (Kubeflow, Algorithmia) and AI orchestration (Spell) will be useful to all teams participating in MLOps processes.
https://towardsdatascience.com/the-only-3-ml-tools-you-need-1aa750778d33
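How the three stores interlock can be shown with a toy sketch; every class and method name below is illustrative, not any vendor's actual interface:

```python
# Minimal stand-ins for the three stores; real products (Tecton, MLflow,
# Arize, ...) expose far richer, versioned, distributed APIs.

class FeatureStore:
    def __init__(self):
        self._features = {}          # feature name -> values
    def put(self, name, values):
        self._features[name] = values
    def get(self, name):
        return self._features[name]

class ModelStore:
    def __init__(self):
        self._versions = []          # append-only model registry
    def register(self, model, meta):
        self._versions.append((model, meta))
        return len(self._versions)   # version number
    def latest(self):
        return self._versions[-1][0]

class EvaluationStore:
    def __init__(self):
        self.metrics = []            # (model version, metric, value)
    def log(self, model_version, metric, value):
        self.metrics.append((model_version, metric, value))

# Wiring the pipeline together:
fs, ms, es = FeatureStore(), ModelStore(), EvaluationStore()
fs.put("temperature", [21.0, 19.5, 23.1])
version = ms.register(model=lambda x: sum(x) / len(x), meta={"commit": "abc123"})
prediction = ms.latest()(fs.get("temperature"))
es.log(version, "mean_prediction", prediction)
print(version, round(prediction, 2))
```

The point of the separation is that each store can evolve independently: features are recomputed without retraining, models are rolled back without touching features, and metrics outlive both.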
😎The 10 most interesting DS conferences around the world in August 2021
09.08 – 2nd Workshop on Knowledge Guided Machine Learning (KGML2021). Online event by the University of Minnesota https://sites.google.com/umn.edu/kgmlworkshop/workshop
09.08 – International Conference on Sports Analytics and Data Science. New York, United States. https://waset.org/sports-analytics-and-data-science-conference-in-august-2021-in-new-york
11.08 - ML Data Engineering Community online meetup by Tecton: Feature Stores, streaming architecture, MLOps and other DS topics. Free registration https://www.applyconf.com/
14.08 - KDD 2021, the premier interdisciplinary data science conference. Singapore, online https://kdd.org/kdd2021/
14.08 - Fragile Earth 2021, on developing radically new technological foundations for advancing and meeting the Sustainable Development Goals. The annual online workshop is part of the Earth Day events at ACM's KDD 2021 conference on research in Machine Learning and its applications. https://ai4good.org/fragile-earth-2021/
17.08 - Ai4 2021. An online conference that brings together business leaders and data practitioners to facilitate the adoption of AI and ML technology. https://ai4.io/2021/
19.08 - IJCAI-21: 30th International Joint Conference on Artificial Intelligence. Montreal-themed virtual reality, online. https://ijcai-21.org/
25.08 – Data Science Salon, Applying ML and AI to Retail and Ecommerce. Online https://www.datascience.salon/retail-and-ecommerce/
25.08 – DataOps Virtual Event by Zaloni, the vendor of the Arena DataOps platform. CDOs and lead DataOps engineers from AWS, KPMG, PwC and others share modern data management and engineering experience across business areas. Free registration https://www.zaloni.com/dataops-virtual-event-second-annual/
26.08 – International Conference on Smart Technologies in Data Science and Communication. Paris, France. https://waset.org/smart-technologies-in-data-science-and-communication-conference-in-august-2021-in-paris
🙌🏻🚗On July 22, 2021, Yandex opened the world's largest self-driving dataset: more than 1,600 hours of driving, divided into 600,000 annotated trip segments on the roads of Russia, Israel and the United States in different weather conditions. The dataset was published for the Shifts Challenge at the NeurIPS 2021 international conference in order to draw attention to the problem of "data shift" in machine learning and to reduce the uncertainty of applying ML models in new conditions. All data are depersonalized: the dataset contains high-precision route maps and the tracks of all surrounding cars and pedestrians (their position, speed, acceleration, etc.), without personal data such as license plates or people's faces. Participants must train ML algorithms on the provided data and check how well they perform under distributional shift. The developers of the best-performing algorithms will receive cash prizes of $5,000, $3,000 and $1,000.
https://research.yandex.com/shifts
https://github.com/yandex-research/shifts
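The "data shift" problem the challenge targets shows up even in a toy example: a classifier fitted on one input distribution degrades when the inputs shift. All numbers below are illustrative:

```python
# A toy illustration of data shift: a threshold classifier trained on one
# distribution loses accuracy when the test inputs shift by a constant
# offset, which is exactly the failure mode the Shifts benchmark measures.

train_x = [0.1, 0.4, 0.5, 0.9, 1.6, 1.8, 2.1, 2.4]  # feature values
train_y = [0,   0,   0,   0,   1,   1,   1,   1]     # labels

# "Train": place the threshold midway between the two class means.
mean0 = sum(x for x, y in zip(train_x, train_y) if y == 0) / 4
mean1 = sum(x for x, y in zip(train_x, train_y) if y == 1) / 4
threshold = (mean0 + mean1) / 2

def accuracy(xs, ys):
    preds = [1 if x > threshold else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

print(accuracy(train_x, train_y))                    # in-distribution: 1.0
shifted_x = [x + 2.0 for x in train_x]               # covariate shift
print(accuracy(shifted_x, train_y))                  # degraded: 0.5
```

A robust model for the challenge must not only stay accurate under such shifts, but also report higher uncertainty when the inputs no longer look like the training data.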
👆🏻What is the AUC-ROC curve, and why is it so important for evaluating the quality of an ML model?
AUC-ROC (Area Under the Receiver Operating Characteristic curve) is an evaluation metric used to check or visualize the performance of a classification model.
The ROC curve measures classification performance at various threshold settings. ROC is a probability curve, and AUC represents the degree of separability: it tells how well the model distinguishes between classes. The higher the AUC, the better the model is at predicting class 0 as 0 and class 1 as 1, for example at distinguishing patients with a disease from those without.
An excellent model has an AUC near 1, meaning a good measure of separability. A poor model has an AUC near 0, meaning the worst separability; in fact, such a model is reciprocating the result, predicting 0s as 1s and 1s as 0s. An AUC of 0.5 means the model has no class-separation capacity at all.
Sensitivity and specificity are inversely related: when we increase one, the other decreases. Lowering the threshold produces more positive predictions, increasing sensitivity and decreasing specificity; raising the threshold produces more negatives, giving higher specificity and lower sensitivity.
For a multi-class model, we can plot N ROC curves for N classes using the one-vs-all methodology. For example, with three classes X, Y and Z, you get one ROC for X classified against Y and Z, another for Y against X and Z, and a third for Z against X and Y.
https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5
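AUC also has a handy probabilistic reading: the chance that a randomly chosen positive is scored above a randomly chosen negative. The from-scratch sketch below computes it directly; in practice you would call a library routine such as scikit-learn's roc_auc_score:

```python
# AUC via its rank interpretation: the probability that a random positive
# scores higher than a random negative (ties count as 0.5).

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
print(auc(labels, [0.1, 0.4, 0.35, 0.8]))   # → 0.75 (one positive-negative pair misordered)

# The extremes described above:
print(auc(labels, [0.1, 0.2, 0.8, 0.9]))    # → 1.0, perfect separability
print(auc(labels, [0.9, 0.8, 0.2, 0.1]))    # → 0.0, predictions reversed
```

Note that AUC depends only on the ordering of the scores, which is why it is threshold-free: recalibrating the probabilities without reordering them leaves AUC unchanged.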
🚗Yandex robots will deliver food to American students
On July 6, 2021, Yandex signed a cooperation agreement with the American food delivery service Grubhub to deliver food on US college campuses using Rovers. Developed by Yandex, these autonomous courier robots are based on self-driving car technology and can operate in any weather, 24/7. Rovers drive on sidewalks and cross roads at pedestrian crossings. Since the beginning of 2021, the robots have delivered thousands of orders from Yandex.Food and Yandex.Lavka in Russia, and since April they have been delivering restaurant orders in Ann Arbor, Michigan (USA).
https://yandex.ru/company/press_releases/2021/07-06-2021
✈️The 2nd release of TF-Ranking by Google AI
In December 2018, Google AI introduced TF-Ranking, an open-source TensorFlow-based library for developing scalable neural learning-to-rank (LTR) models, which produce an ordered list of items in response to a user query. Unlike standard classification models, which classify one item at a time, LTR models take a complete list of items as input and look for an ordering that maximizes the usefulness of the entire list. LTR models are most common in search and recommendation systems, but TF-Ranking is also used in e-commerce and in building smart spaces and cities.
In May 2021, Google AI shipped the second major TF-Ranking release, which provides full support for building LTR models natively with Keras, the high-level TensorFlow 2 API. The Keras ranking model has a new workflow design, including a flexible ModelBuilder and DatasetBuilder for setting up the training data, and a pipeline for training the model. This version of TF-Ranking also supports RaggedTensors, the Orbit training library, and many more improvements.
And through an in-depth study of the library's capabilities, the Google AI team has created the Data Augmented Self-Attentive Latent Cross (DASALC) model, which combines neural feature transformation with data augmentation, ensembling, and ranking losses. DASALC addresses the disadvantages of LTR models compared to gradient-boosted decision trees while retaining the advantages of both approaches.
https://ai.googleblog.com/2021/07/advances-in-tf-ranking.html
https://research.google/pubs/pub50030/
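The listwise idea, scoring the whole list rather than one item at a time, can be illustrated with a ListNet-style softmax cross-entropy loss in plain Python. This is a sketch of the concept, not TF-Ranking's actual API:

```python
import math

# A listwise ranking loss: softmax both the model's scores and the
# relevance labels over the WHOLE list, then take cross-entropy between
# the two distributions (the "ListNet" idea).

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def listwise_loss(scores, relevance):
    """Cross-entropy between softmaxed labels and softmaxed scores."""
    p_true = softmax(relevance)
    p_pred = softmax(scores)
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred))

relevance = [3.0, 1.0, 0.0]                          # graded labels for one query
good = listwise_loss([2.5, 1.0, 0.2], relevance)     # correct ordering
bad = listwise_loss([0.2, 1.0, 2.5], relevance)      # reversed ordering
print(good < bad)                                    # → True
```

Because the loss sees the entire list at once, the gradient pushes the model toward a better ordering of all items jointly, which a pointwise classification loss cannot do.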
💦🏸What is multi-task machine learning?
Usually one ML model solves one task, for example image classification or text synthesis. This is called single-task learning (STL). But some models can make several types of predictions on one sample, for example image classification and semantic segmentation at once. This is multi-task learning (MTL). The main advantages of MTL are as follows:
• a smaller training sample for each individual task, since the tasks share one enlarged combined dataset;
• improved generalization: information from related tasks increases the model's ability to extract useful patterns from the data and reduces overfitting;
• shorter training time: instead of training multiple models for multiple tasks, a single model is trained;
• reduced hardware requirements: ML models have many parameters that must be stored in RAM, so for devices with limited computing power (e.g. IoT) it is better to have one MTL model with shared parameters than several STL models performing related tasks.
The downside to these benefits is possible performance degradation: during MTL training, tasks can compete with each other. For example, when instance segmentation (predicting a separate mask for each distinct object in an image) is trained along with semantic segmentation (classifying objects at the pixel level), the latter task often dominates unless a task-balancing mechanism is used.
In addition, the MTL loss function, being a sum of the individual task losses, is more complex and harder to optimize. This is where so-called negative transfer occurs: with multiple tasks trained together, separate STL models can perform better than a single MTL model.
Looking ahead, multi-task learning holds great promise for natural language processing and medical research, but current implementations do not yet fully overcome these drawbacks.
https://thegradient.pub/how-to-do-multi-task-learning-intelligently/
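The hardware-savings bullet above can be made concrete with a back-of-the-envelope parameter count for a shared-trunk architecture; the layer sizes are hypothetical:

```python
# Parameter counts: two single-task (STL) models, each with its own trunk,
# versus one multi-task (MTL) model where the trunk is shared and only
# the output heads are task-specific.

def dense_params(n_in, n_out):
    return n_in * n_out + n_out            # weights + biases

D, H, C1, C2 = 512, 256, 10, 5             # input dim, hidden dim, two task outputs

# Two STL models, each paying for its own trunk:
stl = 2 * dense_params(D, H) + dense_params(H, C1) + dense_params(H, C2)

# One MTL model: one shared trunk plus the two heads:
mtl = dense_params(D, H) + dense_params(H, C1) + dense_params(H, C2)

print(stl, mtl, round(mtl / stl, 3))       # → 266511 135183 0.507
```

Since the trunk dominates the parameter count, sharing it roughly halves the memory footprint here, and the saving grows with every additional task added to the shared trunk.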
🙌🏻Apache Superset - open source framework for BI and DS analysis
Having become an Apache project in 2017, Superset is a powerful BI tool for big data visualization that allows users to quickly and easily create dashboards using simple code, a free visualization designer, and an advanced SQL editor. For corporate use, its support for different authentication backends (OpenID, LDAP, OAuth, REMOTE_USER) and integration with many SQL-based DBMSs through the SQLAlchemy library are especially important. Superset is based on Python, so before using it you can first install the Anaconda distribution, which includes a set of commonly required DS libraries. Airbnb, Netflix, Twitter, Yahoo! and many other companies include Superset in their DS projects. https://superset.apache.org/
🎯TOP 3 papers from the International Conference on Learning Representations 2021: a brief overview from Zeta Alpha
Using its own AI Research Navigator, Zeta Alpha compiled an overview of more than 800 ICLR 2021 papers, ranked by citations and author popularity.
1. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, et al.). Transformers applied directly to image patches and pretrained on large datasets work very well for image classification and can outperform the best CNNs. https://openreview.net/forum?id=YicbFdNTTy
2. Rethinking Attention with Performers (Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, et al.). Performers are full-rank, linear-attention Transformers that use provably accurate random feature approximation methods and work efficiently without relying on sparsity or low-rankness priors. The authors propose decomposing the self-attention matrix into lower-rank matrices whose combined complexity is linear in the sequence length L: O(Ld² log d) instead of O(L²d). https://openreview.net/forum?id=Ua6zuk0WRH
3. PMI-Masking: Principled Masking of Correlated Spans (Yoav Levine et al.). Jointly masking correlated tokens significantly speeds up and improves BERT pretraining. Instead of masking tokens at random, the authors identify, using only corpus statistics, spans of tokens that are highly correlated. To do this, they extend pointwise mutual information (PMI) from pairs of tokens to spans of arbitrary length and show that BERT pretrains more efficiently with this objective than with alternatives such as uniform masking, whole-word masking, and random-span masking. The strategy works by preventing the model from predicting masked words through very shallow correlations between words that often appear next to each other, forcing it instead to learn the deeper correlations of natural language. https://openreview.net/forum?id=3Aoft6NWFej
Full overview of the ICLR from Zeta Alpha is here: https://www.zeta-alpha.com/post/iclr-2021-10-papers-you-shouldn-t-miss
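The pairwise PMI statistic that PMI-Masking generalizes to longer spans can be sketched on a toy corpus (the corpus and the token pairs below are made up for illustration):

```python
import math
from collections import Counter

# Toy corpus: PMI-Masking ranks candidate spans by how strongly their tokens co-occur.
corpus = [
    "new york is a big city".split(),
    "i moved to new york last year".split(),
    "the city is big".split(),
]

unigrams, bigrams, total = Counter(), Counter(), 0
for sent in corpus:
    unigrams.update(sent)
    bigrams.update(zip(sent, sent[1:]))
    total += len(sent)

def pmi(w1, w2):
    """Pointwise mutual information of an adjacent token pair: log p(x,y) / (p(x) p(y))."""
    p_xy = bigrams[(w1, w2)] / sum(bigrams.values())
    p_x = unigrams[w1] / total
    p_y = unigrams[w2] / total
    return math.log(p_xy / (p_x * p_y))

# "new york" always appears together, so its PMI exceeds that of a loosely coupled pair.
print(pmi("new", "york") > pmi("is", "big"))
```

Spans whose tokens score high on this statistic are exactly the ones the paper proposes masking together, so the model cannot cheat by filling in one half of a collocation from the other.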
Forwarded from Big Data Science [RU]
Reminder!
Today! The first meetup in the Citymobil Data Meet-up series takes place on August 12 at 18:00!
We will talk about logistics, urban data, and smart-city technologies, and discuss the role of geodata and the problems that arise with it.
Join us and let's figure it out together))
Speakers:
- Artem Soloukhin (Citymobil)
- Andrey Kritilin (CIAN)
- Fyodor Lavrentyev (Yandex Go)
Don't forget to prepare questions for the speakers; after the talks there will be time to chat, and you will have a chance to ask them 🙂
Link: https://tulu.la/chat/city-mobil-00002d/meetup-0002fv
How to tune hyperparameters to reliably improve ML model accuracy: a detailed guide
An ML model and its preprocessing are specific to each project: the best hyperparameters depend on the data. For example, the logistic regression algorithm has several hyperparameters (solver, C, penalty), and different combinations of them give different results. The support vector machine likewise has tunable parameters: gamma and C. These hyperparameters are documented on the site of Sklearn, the free Python library. However, a developer often has to go beyond ready-made recommendations to build an ML model with high accuracy, which depends on finding the best combination of hyperparameters. Read the article on testing combinations with grid search, with and without the Sklearn library, validating the results with cross-validation, and conclusions about the efficiency of CPU utilization. https://towardsdatascience.com/evaluating-all-possible-combinations-of-hyperparameter
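A minimal sketch of enumerating a hyperparameter grid without Sklearn; the search space mirrors the logistic regression parameters mentioned above, and the scoring function is a hypothetical stand-in for "fit the model with cross-validation and return the mean accuracy":

```python
from itertools import product

# Hypothetical search space over logistic regression hyperparameters (solver, C, penalty).
grid = {
    "solver": ["liblinear", "lbfgs"],
    "C": [0.01, 0.1, 1.0, 10.0],
    "penalty": ["l2"],
}

def score(params):
    # Toy score that pretends a mid-range C is best; replace with real
    # cross-validated model fitting in practice.
    return -abs(params["C"] - 1.0)

# Every combination of the grid values, as a list of parameter dicts.
names = list(grid)
combos = [dict(zip(names, values)) for values in product(*grid.values())]
best = max(combos, key=score)
print(len(combos), best["C"])
```

This is the core of what GridSearchCV automates: the library adds cross-validation splits, refitting, and parallelism on top of exactly this enumeration.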
✍🏻SoundStream: An End-to-End Neural Audio Codec by Google AI
SoundStream is the first neural network codec to work on speech and music, while being able to run in real-time on a smartphone CPU. It is able to deliver state-of-the-art quality over a broad range of bitrates with a single trained model, which represents a significant advance in learnable codecs.
The main technical ingredient of SoundStream is a neural network, consisting of an encoder, decoder and quantizer, all of which are trained end-to-end. The encoder converts the input audio stream into a coded signal, which is compressed using the quantizer and then converted back to audio using the decoder. SoundStream leverages state-of-the-art solutions in the field of neural audio synthesis to deliver audio at high perceptual quality, by training a discriminator that computes a combination of adversarial and reconstruction loss functions that induce the reconstructed audio to sound like the uncompressed original input. Once trained, the encoder and decoder can be run on separate clients to efficiently transmit high-quality audio over a network. Evaluate SoundStream and learn more about it here
https://ai.googleblog.com/2021/08/soundstream-end-to-end-neural-audio.html
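The quantizer stage can be sketched as a residual vector quantizer, where each stage snaps the remaining residual to its nearest codebook entry. This toy version uses random, fixed codebooks; SoundStream's are learned end-to-end, and the sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# 3 quantizer stages, each with a codebook of 16 four-dimensional codes (toy sizes).
codebooks = [rng.normal(size=(16, 4)) for _ in range(3)]

def residual_vq(x, codebooks):
    residual, codes = x.copy(), []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))  # nearest code
        codes.append(idx)
        residual = residual - cb[idx]   # the next stage quantizes what is left over
    return codes, x - residual          # code indices to transmit, and the reconstruction

x = rng.normal(size=4)                  # one encoder output frame
codes, x_hat = residual_vq(x, codebooks)
print(len(codes))
```

Transmitting only the small integer indices (here, three numbers in 0..15 per frame) instead of the raw vector is what compresses the signal; the decoder rebuilds `x_hat` by summing the chosen codebook entries.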
✈️New algorithm to manage drones by MIT
Aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course in a physical space.
The researchers found that a drone trained with their algorithm flew through a simple obstacle course up to 20 percent faster than a drone trained on conventional planning algorithms. Interestingly, the new algorithm didn’t always keep a drone ahead of its competitor throughout the course. In some cases, it chose to slow a drone down to handle a tricky curve, or save its energy in order to speed up and ultimately overtake its rival.
https://news.mit.edu/2021/drones-speed-route-system-0810
🏸FastMoE: A Fast Mixture-of-Expert Training System
Mixture-of-Experts (MoE) has strong potential for enlarging language models to trillions of parameters. However, training a trillion-scale MoE requires algorithm and system co-design for a well-tuned, high-performance distributed training system. Unfortunately, the only existing platform that meets these requirements depends strongly on Google's hardware (TPU) and software (Mesh TensorFlow) stack and is not open and available to the public, especially to the GPU and PyTorch communities.
FastMoE is a distributed open-source MoE training system based on PyTorch and common accelerators. The system provides a hierarchical interface for both flexible model design and easy adaptation to different applications, such as Transformer-XL and Megatron-LM. Unlike a direct implementation of MoE models in PyTorch, training speed in FastMoE is highly optimized with sophisticated high-performance acceleration techniques. The system supports placing different experts on multiple GPUs across multiple nodes, so the number of experts can grow linearly with the number of GPUs.
https://github.com/laekov/fastmoe
https://arxiv.org/abs/2103.13262
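The core routing idea behind an MoE layer can be sketched with top-1 gating: a gate scores each token, and each token is dispatched to its best expert. The sizes are toy values and the dispatch here is a dense NumPy loop, whereas FastMoE's real implementation dispatches tokens to experts spread across GPUs:

```python
import numpy as np

rng = np.random.default_rng(0)

n_tokens, d_model, n_experts = 6, 8, 4
tokens = rng.normal(size=(n_tokens, d_model))
W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(n_experts)]

# Gate: score every token against every expert, then route each token to its argmax.
gate_logits = tokens @ W_gate
assignment = gate_logits.argmax(axis=1)        # top-1 expert index per token

# Dispatch: each expert processes only the tokens routed to it, so compute per
# token stays constant even as the total number of experts (parameters) grows.
out = np.empty_like(tokens)
for e in range(n_experts):
    mask = assignment == e
    if mask.any():
        out[mask] = tokens[mask] @ experts[e]

print(out.shape, assignment.shape)
```

This sparsity is why MoE scales to trillions of parameters: adding experts enlarges the model without enlarging the work done for any single token.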