Big Data Science
3.75K subscribers
65 photos
9 videos
12 files
637 links
Big Data Science channel gathers together all interesting facts about Data Science.
For cooperation: a.chernobrovov@gmail.com
💼https://news.1rj.ru/str/bds_job — channel about Data Science jobs and career
💻https://news.1rj.ru/str/bdscience_ru — Big Data Science [RU]
🏸Game theory as an engine for large-scale data analysis
A new look at principal component analysis (PCA) as a competitive game, in which each approximate eigenvector is controlled by a player whose goal is to maximize its own utility function. This multi-agent perspective has enabled new ideas and algorithms that make efficient use of the latest computing resources and scale to very large datasets. Brief overview: https://deepmind.com/blog/article/EigenGame and detailed article: https://openreview.net/forum?id=NzTU59SYbNq
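The idea can be sketched in a few lines of NumPy: each "player" performs gradient ascent on a utility that rewards captured variance and penalizes alignment with the players before it. This is a rough sketch of the EigenGame idea, not DeepMind's exact algorithm; the update rule, step size, and iteration count below are illustrative:

```python
import numpy as np

def eigengame_top_k(M, k, steps=2000, lr=0.1, seed=0):
    """Recover the top-k eigenvectors of a symmetric matrix M as a k-player game."""
    rng = np.random.default_rng(seed)
    d = M.shape[0]
    V = rng.standard_normal((d, k))
    V /= np.linalg.norm(V, axis=0)
    for _ in range(steps):
        for i in range(k):
            v = V[:, i]
            # utility gradient: captured variance, minus a penalty for
            # aligning with the eigenvectors claimed by earlier players
            grad = 2 * M @ v
            for j in range(i):
                w = V[:, j]
                Mw = M @ w
                grad -= 2 * (v @ Mw) / (w @ Mw) * Mw
            grad -= (grad @ v) * v          # project onto the unit sphere's tangent
            v = v + lr * grad
            V[:, i] = v / np.linalg.norm(v)
    return V

# demo: a matrix whose true eigenvectors are the standard basis vectors
V = eigengame_top_k(np.diag([5.0, 3.0, 1.0]), k=2)
print(np.round(np.abs(V), 2))  # player 0 recovers e1, player 1 recovers e2
```

Each player's "selfish" update ends up computing a deflation against the previous players, which is why the Nash equilibrium of the game coincides with the PCA solution.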
🥁NetHack Challenge at NeurIPS 2021 from Facebook: open-source project for Reinforcement Learning (RL) as a game
Many advances in RL have been achieved through simulation environments in games such as Dota 2, Minecraft, and StarCraft II. But this requires massive computation: thousands of GPUs at a time for a single experiment. To reduce the cost of RL modeling, Facebook in 2020 initiated the development of the open-source NetHack Learning Environment project. And in 2021, Facebook announced the NetHack Challenge competition at NeurIPS 2021 in conjunction with AIcrowd, an AI crowdsourcing organization. The competition runs from early June to October 15, 2021, and the winners will be announced at NeurIPS in December.
The NetHack game has actually existed since the 1980s. It is visually straightforward and completely free to play, but more complicated than StarCraft II: the interactions of players with their environment and its objects are so intricate that users have to think outside the box or consult the NetHack Wiki. The main difficulty of NetHack is that death is permanent: once a character dies, that player's game session ends. Within this RL environment, researchers hope to find new ways to control agents so that in the future AI can think creatively in difficult situations and help people. Because NetHack runs in a terminal, players can simulate gameplay quickly, training billions of agents a day on just 2 GPUs. This is how the NetHack Challenge tests the latest AI techniques in a complex environment without the enormous power of a supercomputer. https://techxplore.com/news/2021-06-facebook-nethack-neurips.html
✍🏻Need a quick rewrite? Try NLPAug!
NLPAug is a Python library that lets you increase the performance of neural networks on NLP tasks without changing their architecture or fine-tuning them. With it, you can synthesize new text from the available data by replacing some words with synonyms, including by cosine similarity in vector representations such as word2vec or GloVe. NLPAug also performs context-aware word replacement using transformer models such as BERT, and can back-translate text into another language and back again. https://github.com/makcedward/nlpaug
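For intuition, synonym-based augmentation can be mimicked in a few lines of plain Python. This is a toy sketch of the idea only: the dictionary below is hand-made and hypothetical, whereas the real library draws synonyms from WordNet or from word-embedding neighbours:

```python
import random

# hand-made toy synonym dictionary (hypothetical data for the sketch)
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "cheerful"],
    "big": ["large", "huge"],
}

def synonym_augment(text, p=0.5, seed=42):
    """Replace roughly a fraction p of known words with a random synonym."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        options = SYNONYMS.get(word.lower())
        if options and rng.random() < p:
            out.append(rng.choice(options))
        else:
            out.append(word)
    return " ".join(out)

print(synonym_augment("the quick fox is happy"))
```

The augmented sentence keeps its structure but varies its wording, which is exactly what gives a classifier extra training signal without new labeled data.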
☀️ML for prediction of Solar Radiation
From a practical agronomic point of view, an accurate assessment of solar radiation is vital because it is a key factor in crop development. Most existing weather stations around the world have temperature and rain sensors, but only some of them measure solar radiation, which is usually very expensive due to complex sensors (pyranometers and radiometers) and a lack of reliable data. Therefore, a group of researchers from the University of Cordoba has developed ML models to predict solar radiation in southern Spain and the United States.
The created ML models are based not only on actual measurements but are also enriched with data on the geoclimatic conditions of the area (aridity, distance to the sea, altitude, etc.). To estimate daily solar radiation, the proposed neural network algorithms need only current air temperature readings, which are cheap to obtain thanks to inexpensive sensors and IoT technologies. Bayesian algorithms are used to optimize hyperparameters, and the models themselves can be adapted to any terrain depending on its aridity.
https://techxplore.com/news/2021-07-machine-based-thermal-solar.html
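As a toy illustration of why air temperature alone carries useful signal, a Hargreaves-style relationship (radiation roughly proportional to the square root of the daily temperature range) can be recovered by least squares on synthetic data. All coefficients here are invented for the sketch; the actual study fits neural networks with Bayesian hyperparameter optimization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t_min = rng.uniform(5, 18, n)              # synthetic daily minimum temperature
t_max = t_min + rng.uniform(4, 16, n)      # synthetic daily maximum temperature
# Hargreaves-style ground truth: radiation grows with the square root of the
# daily temperature range (coefficients made up for this sketch)
radiation = 0.16 * np.sqrt(t_max - t_min) * 30 + rng.normal(0, 0.3, n)

# least-squares fit that sees ONLY temperature-derived features
X = np.column_stack([np.ones(n), np.sqrt(t_max - t_min)])
coef, *_ = np.linalg.lstsq(X, radiation, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - radiation) ** 2))
print(f"fitted slope: {coef[1]:.2f}, RMSE: {rmse:.2f}")
```

Even this linear toy recovers the temperature-to-radiation link; the study's neural networks add geoclimatic features on top to handle different terrains.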
🌦Why is it raining not as predicted and how Yandex Meteum 2.0 deals with it
The story of replacing MatrixNet with CatBoost and of new datasets for training NN models. Meteum neural nets now learn not only from data from professional weather stations, but also from information about terrain features and user reports. https://tekdeeps.com/yandex-has-launched-meteum-2-0-a-new-technology-for-weather-forecasting-based-on-machine-learning/
✈️AI will schedule flight crews for the US Air Force
The AI system from MIT helps US Air Force pilots plan the workload of personnel on cargo flights based on many factors: airspace availability, pilot tolerances, work and rest requirements, etc. By combining optimization via integer programming with RL neural networks, the system generates flight schedules that respect both explicit and implicit constraints. https://news.mit.edu/2021/us-air-force-pilots-artificial-intelligence-assist-scheduling-aircrews-0708
👀Looking for an enterprise AI solution?
Try NVIDIA's NGC Catalog — a registry of GPU-optimized software for high-performance computing and big data analytics across industries, ranging from retail chatbots to medical imaging and recommender systems. NGC contains enterprise-grade application containers, pre-trained AI models, and industry-specific SDKs that can be deployed on-premises, in the cloud, or at the edge. For example, NVIDIA TAO is a platform for training, adapting, and optimizing AI models that lets you create enterprise-grade AI applications without deep expert knowledge or large training datasets. https://ngc.nvidia.com/
😎Neural networks of the Cloud Mail.ru service help preserve memories
ML algorithms will automatically find pictures taken on a specific day and display them as stories, generating a custom photo calendar that illustrates memorable events. Thanks to image recognition methods, only successful frames are included in the result, and if the user doesn't like a picture, it can be removed directly from the video story. You can share the animated photo gallery with friends by sending it in a message or posting it on VK and Instagram. The update is already available in the iOS and Android apps. https://corp.mail.ru/ru/press/releases/10947/
💦3 main tools to build an ML pipeline
There are only 3 basic tools to build an effective machine learning pipeline:
Feature Store to handle offline and online feature conversions. It supports version control and integration with data lakes and DWHs, and enables fast serving and rapid deployment of code to production. Examples: Tecton, Hopsworks, Michelangelo Palette, Zipline, and the Feature Stores from Amazon SageMaker and Databricks.
Model Store as a central registry of models and experiments. It provides version reproducibility and tracks the history of ML models and related artifacts such as Git commits, pickle files, evaluation scores, etc. Examples: Weights and Biases, MLFlow, Neptune.ai, EthicalML, and solutions by Amazon, Azure, and Google.
Evaluation Store for monitoring and improving the performance of models. It tracks performance metrics for each ML model in any environment, from training to production, and includes A/B testing tools and a visual dashboard. Examples: Arize and Neptune.ai.
Additionally, data annotation platforms (Appen), ML model serving tools (Kubeflow, Algorithmia), and AI orchestration (Spell) will be useful for all teams participating in MLOps processes.
https://towardsdatascience.com/the-only-3-ml-tools-you-need-1aa750778d33
😎Top 10 most interesting DS conferences around the world in August 2021
09.08 – 2nd Workshop on Knowledge Guided Machine Learning (KGML2021). Online event by the University of Minnesota https://sites.google.com/umn.edu/kgmlworkshop/workshop
09.08 – International Conference on Sports Analytics and Data Science. New York, United States. https://waset.org/sports-analytics-and-data-science-conference-in-august-2021-in-new-york
11.08 – ML Data Engineering Community online meetup by Tecton. Feature Stores, streaming architecture, MLOps, and other DS themes. Free registration https://www.applyconf.com/
14.08 – KDD 2021, the premier interdisciplinary data science conference, hosted from Singapore. Online https://kdd.org/kdd2021/
14.08 – Fragile Earth 2021, on developing radically new technological foundations for advancing and meeting the Sustainable Development Goals. This annual online workshop is part of the Earth Day events at ACM's KDD 2021 conference on research in Machine Learning and its applications. https://ai4good.org/fragile-earth-2021/
17.08 - Ai4 2021. Online-conference brings together business leaders and data practitioners to facilitate the adoption of AI and ML technology. https://ai4.io/2021/
19.08 - IJCAI-21: 30th International Joint Conference on Artificial Intelligence. Montreal-themed Virtual Reality, Online. https://ijcai-21.org/
25.08 – Data Science Salon, Applying ML and AI to Retail and Ecommerce. Online https://www.datascience.salon/retail-and-ecommerce/
25.08 – DataOps Virtual Event. Zaloni, the vendor of the Arena DataOps platform, invites CDOs and lead DataOps engineers from AWS, KPMG, PwC, and others to share modern data management and engineering experience across different business areas. Free registration https://www.zaloni.com/dataops-virtual-event-second-annual/
26.08 – International Conference on Smart Technologies in Data Science and Communication. Paris, France. https://waset.org/smart-technologies-in-data-science-and-communication-conference-in-august-2021-in-paris
🙌🏻🚗On July 22, 2021, Yandex opened the world's largest self-driving dataset: more than 1,600 hours of driving, divided into 600,000 annotated segments of trips on the roads of Russia, Israel, and the United States in different weather conditions. The dataset was published for the Shifts Challenge at the NeurIPS 2021 international conference in order to draw attention to the problem of distributional shift in machine learning and to reduce the uncertainty of applying ML models in new conditions. All data are depersonalized: the dataset contains high-precision route maps and tracks of all surrounding cars and pedestrians (position, speed, acceleration, etc.), without personal data such as license plates or people's faces. Participants have to train ML algorithms on the provided data and check the quality of their work under shifted conditions. The developers of the best-performing algorithms will receive cash prizes of 5, 3, and 1 thousand dollars.
https://research.yandex.com/shifts
https://github.com/yandex-research/shifts
👆🏻What is the AUC-ROC curve and why is it so important for evaluating the quality of an ML model?
Area Under the Receiver Operating Characteristic curve (AUC-ROC) is an evaluation metric used to check and visualize the performance of classification models.
The AUC-ROC curve measures classification performance at various threshold settings. ROC is a probability curve, and AUC represents the degree of separability: it tells how well the model distinguishes between classes. The higher the AUC, the better the model is at predicting 0 classes as 0 and 1 classes as 1, for example, at distinguishing patients with a disease from patients without it.
An excellent model has an AUC near 1, which means it has a good measure of separability. A poor model has an AUC near 0, which means it has the worst measure of separability: in fact, it reciprocates the result, predicting 0s as 1s and 1s as 0s. When AUC is 0.5, the model has no class separation capacity whatsoever.
Sensitivity and specificity are inversely related: when we increase one, the other decreases. Decreasing the threshold yields more positive predictions, which increases sensitivity and decreases specificity. Similarly, increasing the threshold yields more negative predictions, giving higher specificity and lower sensitivity.
For a multi-class model, we can plot N AUC-ROC curves for N classes using the one-vs-all methodology. For example, with three classes X, Y, and Z, you will have one ROC for X classified against Y and Z, another for Y classified against X and Z, and a third for Z classified against X and Y.
https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5
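The separability story above has a useful probabilistic reading: AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. Here is a from-scratch NumPy sketch of that computation (in practice you would call sklearn.metrics.roc_auc_score):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as the Mann-Whitney statistic: P(random positive scored above random negative)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # compare every positive score against every negative one; ties count as half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(y, s))  # 0.75: three of the four positive/negative pairs are ordered correctly
```

The pairwise version makes the threshold-independence clear: AUC only depends on how the model ranks examples, not on any particular cut-off.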
🚗Yandex robots will deliver food to American students
On July 6, 2021, Yandex signed a cooperation agreement with the American food delivery service Grubhub to deliver food on US college campuses using Rovers. Developed by Yandex, these autonomous courier robots are based on self-driving car technology and can operate in any weather 24/7. Rovers drive on sidewalks and cross roads at pedestrian crossings. Since the beginning of 2021, robots in Russia have delivered thousands of orders from Yandex.Food and Yandex.Lavka, and since April they have been delivering restaurant orders in the American city of Ann Arbor, Michigan.
https://yandex.ru/company/press_releases/2021/07-06-2021
✈️2nd release of TF-Ranking by Google AI
In December 2018, Google AI introduced TF-Ranking, an open-source library based on TensorFlow for developing scalable neural ranking models (LTR, learning-to-rank) that help produce an ordered list of items in response to a user query. Unlike standard classification models, which score one item at a time, LTR models take a complete list of items as input and look for an ordering that maximizes the usefulness of the entire list. LTR models are most common in search and recommendation systems, but TF-Ranking is also used in e-commerce and in building smart spaces and cities.
In May 2021, Google AI published the second TF-Ranking release, which provides full support for building LTR models natively with Keras, the high-level TensorFlow 2 API. The Keras ranking model has a new workflow design, including a flexible ModelBuilder, a DatasetBuilder for customizing the training set, and a pipeline for training the model. This version of TF-Ranking also supports RaggedTensors, the Orbit training library, and many other improvements.
And through an in-depth study of the capabilities of the TF-Ranking library, the Google AI team has created the Data Augmented Self-Attentive Latent Cross (DASALC) model, which combines neural feature transformation with data augmentation, ensemble methods, and ranking losses. DASALC addresses the disadvantages of neural LTR models relative to gradient-boosted decision trees while retaining the advantages of both methods.
https://ai.googleblog.com/2021/07/advances-in-tf-ranking.html
https://research.google/pubs/pub50030/
💦🏸What is multi-task machine learning?
Usually one ML model solves one problem, for example image classification or text synthesis. This is called single-task learning (STL). But some models can make several types of predictions on one sample, for example image classification and semantic segmentation at once. This is multi-task learning (MTL). The main advantages of MTL are as follows:
a smaller training sample is needed for each individual task, thanks to the enlarged total dataset;
improved generalization of the model: information from related tasks increases the model's ability to extract useful patterns from the dataset and reduces overfitting;
shorter training: instead of spending time training multiple models to solve multiple problems, a single model is trained;
reduced hardware requirements: ML models have many parameters that need to be stored in RAM, so for devices with limited computing power (e.g. IoT), it is better to have one MTL model with shared parameters than several STL models performing a number of related tasks.
The downside of these benefits is possible performance degradation. During MTL training, tasks can compete with each other. For example, when instance segmentation (a separate mask for each distinct object in an image) is trained along with semantic segmentation (classification of objects at the pixel level), the latter task often dominates unless a task-balancing mechanism is used.
In addition, the MTL loss function, obtained by summing the individual task losses, is more complex, which makes optimization difficult. This is where the so-called negative transfer effect occurs: separate STL models can perform better than a single MTL model.
Looking ahead, multi-task machine learning is promising for natural language processing and medical research, but current implementations do not yet fully overcome these drawbacks.
https://thegradient.pub/how-to-do-multi-task-learning-intelligently/
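A minimal hard-parameter-sharing setup can be sketched in NumPy: one shared linear trunk feeds two task-specific heads, and the trunk is updated with gradients from both task losses. The data, sizes, and learning rate below are toy values for illustration; real MTL systems use deep networks and task-balancing schemes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 512, 8, 4
X = rng.standard_normal((n, d))
B = rng.standard_normal((d, h)) / np.sqrt(d)    # structure shared by both tasks
y1 = X @ B @ rng.standard_normal(h)             # task 1 target
y2 = X @ B @ rng.standard_normal(h)             # task 2 target (related)

W = rng.standard_normal((d, h)) / np.sqrt(d)    # shared trunk
a1, a2 = np.zeros(h), np.zeros(h)               # task-specific heads
lr = 0.05

def joint_loss():
    Z = X @ W
    return np.mean((Z @ a1 - y1) ** 2) + np.mean((Z @ a2 - y2) ** 2)

start = joint_loss()
for _ in range(3000):
    Z = X @ W
    e1, e2 = Z @ a1 - y1, Z @ a2 - y2
    # the trunk receives gradients from BOTH task losses: this is the sharing
    W -= lr * 2 * (np.outer(X.T @ e1, a1) + np.outer(X.T @ e2, a2)) / n
    a1 -= lr * 2 * Z.T @ e1 / n
    a2 -= lr * 2 * Z.T @ e2 / n
end = joint_loss()
print(f"joint loss: {start:.2f} -> {end:.4f}")
```

Because both targets depend on the same underlying structure B, gradients from each task pull the shared trunk toward features that help the other, which is the mechanism behind the generalization benefit listed above.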
🙌🏻Apache Superset - open source framework for BI and DS analysis
Becoming an Apache project in 2017, Superset is a powerful BI tool for big data visualization that allows users to quickly and easily create dashboards using simple code, a free visualization designer, and an advanced SQL editor. For corporate use, its support for different authentication backends (OpenID, LDAP, OAuth, REMOTE_USER) and integration with many SQL-based DBMSs through the SQLAlchemy library are especially important. Superset is based on Python, so to use it you should first install the Anaconda distribution, which includes the required set of DS libraries. Airbnb, Netflix, Twitter, Yahoo!, and many other companies include Superset in their DS projects. https://superset.apache.org/
🎯TOP 3 papers from the International Conference on Learning Representations 2021: a brief overview from Zeta Alpha
Using its own AI Research Navigator, Zeta Alpha compiled a digest of over 800 ICLR 2021 papers based on citations and author popularity.
1. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, et al.). Transformers applied directly to patches of images and pretrained on large datasets work well for image classification and can outperform the best CNNs. https://openreview.net/forum?id=YicbFdNTTy
2. Rethinking Attention with Performers (Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, et al.). Performers are linear-complexity transformers that estimate full-rank softmax attention using provable random-feature approximation methods, without relying on sparsity or low rank. The authors propose a decomposition of the self-attention matrix into lower-dimensional matrices whose combined complexity is linear in the sequence length L: O(Ld²log(d)) instead of O(L²d). https://openreview.net/forum?id=Ua6zuk0WRH
3. PMI-Masking: Principled masking of correlated spans (Yoav Levine et al.). Jointly masking correlated tokens significantly speeds up and improves BERT pre-training. Instead of masking tokens at random, the authors identify, using only corpus statistics, spans of tokens that are highly correlated. To do this, they extend pointwise mutual information (PMI) from pairs of tokens to spans of arbitrary length and show that BERT trained with this objective learns more efficiently than with alternatives such as uniform masking, whole-word masking, or random-span masking. The strategy works by preventing the model from predicting masked words through shallow collocations of words that often appear next to each other, forcing it to learn the deeper correlations in natural language. https://openreview.net/forum?id=3Aoft6NWFej
Full overview of the ICLR from Zeta Alpha is here: https://www.zeta-alpha.com/post/iclr-2021-10-papers-you-shouldn-t-miss
Forwarded from Big Data Science [RU]
Reminder!
Today! On August 12 at 18:00, the first meetup in the Citymobil Data Meet-up series takes place!
We will talk about logistics, urban data, and smart city technologies, and discuss the role of geodata and the problems that arise with it.
Join us, and we will figure it all out together))
Speakers:
- Artem Soloukhin (Citymobil)
- Andrey Kritilin (CIAN)
- Fedor Lavrentiev (Yandex Go)
Don't forget to prepare questions for the speakers: after the talks there will be time to chat, and you will have a chance to ask them 🙂
Link: https://tulu.la/chat/city-mobil-00002d/meetup-0002fv
How to tune hyperparameters to reliably improve ML model accuracy: a detailed guide
The ML model and its preprocessing are individual for each project: the best hyperparameters depend on the data. For example, the logistic regression algorithm has several hyperparameters (solver, C, penalty), and different combinations of them give different results. Similarly, support vector machines have tunable parameters: gamma and C. These algorithm hyperparameters are documented on the site of Sklearn, the free Python library. However, a developer often has to build their own solution without relying on ready-made recommendations in order to find the combination of hyperparameters that gives the highest accuracy. Read the article about testing various Grid search combinations with and without the Sklearn library, validating the results with cross-validation, and conclusions about CPU utilization efficiency. https://towardsdatascience.com/evaluating-all-possible-combinations-of-hyperparameter
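The "without Sklearn" variant can be sketched from scratch: k-fold cross-validation scores every value in a hyperparameter grid, shown here for ridge regression with its closed-form solution. The data, grid, and variable names are illustrative, not taken from the article:

```python
import numpy as np

def kfold_mse(X, y, alpha, k=5):
    """Mean validation MSE of ridge regression over k folds."""
    n = len(y)
    idx = np.arange(n)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xt, yt = X[train], y[train]
        # closed-form ridge solution on the training folds
        w = np.linalg.solve(Xt.T @ Xt + alpha * np.eye(X.shape[1]), Xt.T @ yt)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return np.mean(errs)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, 2.0, 0.0, 0.0, -1.0]) + rng.normal(0, 0.5, 100)

grid = [0.01, 0.1, 1.0, 10.0, 100.0]          # hyperparameter grid for alpha
scores = {a: kfold_mse(X, y, a) for a in grid}
best = min(scores, key=scores.get)
print(f"best alpha: {best}, CV MSE: {scores[best]:.3f}")
```

Each grid point is independent of the others, which is why grid search parallelizes so naturally across CPU cores, the efficiency question the article examines.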
✍🏻SoundStream: An End-to-End Neural Audio Codec by Google AI
SoundStream is the first neural network codec to work on speech and music, while being able to run in real-time on a smartphone CPU. It is able to deliver state-of-the-art quality over a broad range of bitrates with a single trained model, which represents a significant advance in learnable codecs.
The main technical ingredient of SoundStream is a neural network, consisting of an encoder, decoder and quantizer, all of which are trained end-to-end. The encoder converts the input audio stream into a coded signal, which is compressed using the quantizer and then converted back to audio using the decoder. SoundStream leverages state-of-the-art solutions in the field of neural audio synthesis to deliver audio at high perceptual quality, by training a discriminator that computes a combination of adversarial and reconstruction loss functions that induce the reconstructed audio to sound like the uncompressed original input. Once trained, the encoder and decoder can be run on separate clients to efficiently transmit high-quality audio over a network. Evaluate SoundStream and learn more about it here
https://ai.googleblog.com/2021/08/soundstream-end-to-end-neural-audio.html
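For intuition about the quantizer sitting between the encoder and decoder, here is a toy residual vector quantizer in NumPy. The codebooks below are frozen and random purely for illustration; SoundStream learns its codebooks end-to-end together with the encoder, decoder, and adversarial losses, and the sizes here are invented:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Quantize each row of x with a cascade of codebooks."""
    residual = x.copy()
    codes = []
    for cb in codebooks:
        # each stage quantizes what the previous stages failed to capture
        dists = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        idx = dists.argmin(axis=1)
        codes.append(idx)
        residual = residual - cb[idx]
    return codes

def rvq_decode(codes, codebooks):
    """Reconstruct by summing the selected codeword from every stage."""
    return sum(cb[idx] for idx, cb in zip(codes, codebooks))

rng = np.random.default_rng(0)
dim, n_codes, n_stages = 8, 16, 4
# include the zero vector in each codebook so a stage can always "pass"
codebooks = [np.vstack([np.zeros(dim), rng.standard_normal((n_codes - 1, dim))])
             for _ in range(n_stages)]
x = rng.standard_normal((32, dim))              # stand-in for encoder output frames
codes = rvq_encode(x, codebooks)
err = np.linalg.norm(x - rvq_decode(codes, codebooks))
print(f"reconstruction error with {n_stages} stages: {err:.2f}")
```

Stacking stages is what lets one trained model serve a range of bitrates: transmitting fewer stages costs fewer bits per frame at the price of a coarser reconstruction.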