Data Science by ODS.ai 🦜 – Telegram
Data Science by ODS.ai 🦜
44.5K subscribers
844 photos
92 videos
7 files
1.91K links
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and the applications of the former. To reach the editors, contact: @malev
​​First movie ever upscaled and enhanced by a couple of neural networks

Arrival of a Train at La Ciotat, upscaled and enhanced to 4K 60 FPS

Algorithms that were used:
* Gigapixel AI by Topaz Labs for upscaling
* DAIN for FPS enhancement

The author on YouTube promises to experiment with colorization and to release an update later.

YouTube: https://m.youtube.com/watch?v=3RYNThid23g
Author’s channel (in Russian): @denissexy

#upscale #dl #videoprocessing
​​Using ‘radioactive data’ to detect if a data set was used for training

The authors have developed a new technique to mark the images in a data set so that researchers can determine whether a particular machine learning model has been trained using those images. This can help researchers and engineers to keep track of which data set was used to train a model so they can better understand how various data sets affect the performance of different neural networks.

The key points:
- the marks are harmless and have no impact on the classification accuracy of models, but are detectable with high confidence in a neural network;
- the image features are moved in a particular direction (the carrier) that has been sampled randomly and independently of the data;
- after a model is trained on such data, its classifier aligns with the direction of the carrier;
- the method makes it difficult both to detect whether a data set is radioactive and to remove the marks from the trained model.
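The carrier idea above can be sketched in a few lines of numpy (a toy illustration with made-up dimensions and mark strength, not the paper's actual pipeline): features are shifted along a random unit direction, and the alignment of a trained classifier with that direction serves as the detector.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512                                  # feature dimension (hypothetical)

# Sample a random unit "carrier" direction, independent of the data.
u = rng.normal(size=d)
u /= np.linalg.norm(u)

def mark_features(feats, u, strength=0.1):
    """Shift each feature vector slightly along the carrier direction."""
    return feats + strength * u

def carrier_alignment(classifier_w, u):
    """Cosine similarity between a classifier's weights and the carrier.
    High alignment suggests the model was trained on radioactive data."""
    return classifier_w @ u / (np.linalg.norm(classifier_w) * np.linalg.norm(u))

feats = rng.normal(size=(100, d))
marked = mark_features(feats, u)

# Toy "classifiers": the mean of the features each was "trained" on.
w_marked = marked.mean(axis=0)
w_clean = feats.mean(axis=0)

print(carrier_alignment(w_marked, u) > carrier_alignment(w_clean, u))  # True
```

Because the carrier is a unit vector, adding a positive multiple of it can only increase the cosine between the weights and the carrier, which is why the detection works regardless of the data.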

blogpost: https://ai.facebook.com/blog/using-radioactive-data-to-detect-if-a-data-set-was-used-for-training/
paper: https://arxiv.org/abs/2002.00937

#cv #cnn #datavalidation #image #data
ODS breakfast in Paris! ☕️ 🇫🇷 See you this Saturday at 10:30 (some people come around 11:00) at Malongo Café, 50 Rue Saint-André des Arts. We are expecting from 9 to 19 people. Tableoverflow 💥 is possible.
​​REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild

New approach for sleep monitoring.

Nowadays a lot of people suffer from sleep disorders that affect their daily functioning, long-term health and longevity. The long-term effects of sleep deprivation and sleep disorders include an increased risk of hypertension, diabetes, obesity, depression, heart attack, and stroke. As a result, sleep monitoring is a very important topic.
Current automatic sleep staging isn't robust against noise (which can be introduced by electrical interference (e.g., power-line) and user motion (e.g., muscle contraction, respiration)) and isn't computationally efficient enough for fast inference on user devices.

The authors offer the following improvements:
- adversarial training and spectral regularization to improve robustness to noise
- sparsity regularization to improve energy and computational efficiency
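The two ingredients above can be combined into one toy objective (an illustrative numpy sketch with a linear model and made-up `noise_std`/`lam` values, not the actual REST loss): fit the targets from both clean and noise-corrupted inputs, plus an L1 penalty that pushes weights toward zero so they can be pruned.

```python
import numpy as np

rng = np.random.default_rng(0)

def rest_style_loss(w, X, y, noise_std=0.1, lam=0.01):
    """Toy objective combining the post's two ideas (a sketch, not REST itself):
    - robustness term: fit the same targets from noise-corrupted inputs
    - sparsity term: L1 penalty making the weights prunable."""
    X_noisy = X + rng.normal(scale=noise_std, size=X.shape)
    clean = np.mean((X @ w - y) ** 2)
    robust = np.mean((X_noisy @ w - y) ** 2)
    sparsity = lam * np.abs(w).sum()
    return clean + robust + sparsity

X = rng.normal(size=(64, 8))
w_true = np.array([1.0, -2.0, 0, 0, 0, 0, 0, 0])   # sparse ground truth
y = X @ w_true

# The sparse, well-fitting weights score far better than a dense bad guess.
print(rest_style_loss(w_true, X, y) < rest_style_loss(np.ones(8) * 3, X, y))  # True
```

In the paper the robustness term comes from adversarial training and spectral regularization rather than plain Gaussian corruption; the Gaussian noise here is just the simplest stand-in.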

REST achieves a macro-F1 score of 0.67 vs. 0.39 for the state-of-the-art model in the presence of Gaussian noise, with a 19× reduction in parameters and a 15× reduction in MFLOPS.
The model was also deployed on a Pixel 2 smartphone, where it achieves a 17× energy reduction and 9× faster inference compared to uncompressed models.

Paper: https://arxiv.org/abs/2001.11363
Code: https://github.com/duggalrahul/REST


#deeplearning #compression #adversarial #sleepstaging
​​📹How Tesla self-driving AI sees the world

#Tesla #selfdriving #cv #dl #video #Autonomous
Please vote in our Mega Important Audience Research!

So far we have collected 384 responses, which is really cool!

But we need more filled questionnaires to know YOU and YOUR PREFERENCES better.

Some to-date data about residency:

* 🇮🇹 Italy — 4.5%
* 🇧🇷 Brazil — 3.2%
* 🇫🇷 France — 1.9%
* 🇳🇬 Nigeria — 1.1%
* 🇪🇸 Spain — 5.6%

Please, fill in the form https://forms.gle/GGNgukYNQbAZPtmk8 to help us provide better and more relevant content for you!
Please, vote: https://forms.gle/GGNgukYNQbAZPtmk8 (this is a scheduled message; we hopefully have more than 400 responses by now)
​​CCMatrix: A billion-scale bitext data set for training translation models

The authors show that margin-based bitext mining in LASER's multilingual sentence space can be applied to monolingual corpora of billions of sentences.

They are using 10 snapshots of a curated common crawl corpus CCNet totaling 32.7 billion unique sentences. Using one unified approach for 38 languages, they were able to mine 3.5 billion parallel sentences, out of which 661 million are aligned with English. 17 language pairs have more than 30 million parallel sentences, 82 more than 10 million, and most more than one million, including direct alignments between many European or Asian languages.

They train NMT systems for most of the language pairs and evaluate them on TED, WMT and WAT test sets. Also, they achieve a new SOTA for a single system on the WMT'19 test set for translation between English and German, Russian and Chinese, as well as German/French.

They will soon provide a script to extract the parallel data from this corpus.

blog post: https://ai.facebook.com/blog/ccmatrix-a-billion-scale-bitext-data-set-for-training-translation-models/
paper: https://arxiv.org/abs/1911.04944
github: https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix

#nlp #multilingual #laser #data #monolingual
​​TyDi QA: A Multilingual Question Answering Benchmark

it's a Q&A corpus covering 11 Typologically Diverse languages: Russian, English, Arabic, Bengali, Finnish, Indonesian, Japanese, Kiswahili, Korean, Telugu, Thai.

the authors collected questions from people who wanted an answer but did not know the answer yet.
they showed people an interesting passage from Wikipedia written in their native language and then had them ask a question, any question, as long as it was not answered by the passage and they actually wanted to know the answer.

blog post: https://ai.googleblog.com/2020/02/tydi-qa-multilingual-question-answering.html?m=1
paper: only pdf

#nlp #qa #multilingual #data
​​DEEP DOUBLE DESCENT
where bigger models and more data hurt

it's really cool & interesting research showing that performance first improves, then gets worse, and then improves again as model size, data size, or training time increases. this effect can often be avoided through careful regularization.

some conclusions from research:
– there is a regime where bigger models are worse
– there is a regime where more samples hurt
– there is a regime where training longer reverses overfitting
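A minimal experiment in the spirit of these observations (a sketch, assuming random cosine features and relying on `np.linalg.lstsq` returning the minimum-norm solution once the model is underdetermined; how pronounced the peak near the interpolation threshold is depends on the noise and feature choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, n_feat):
    """Cosine feature map of increasing width; past n_feat == n_train
    the least-squares fit below becomes the minimum-norm interpolant."""
    return np.cos(np.outer(x, np.arange(1, n_feat + 1)))

x_tr = rng.uniform(-np.pi, np.pi, 20)
y_tr = np.sin(x_tr) + 0.1 * rng.normal(size=20)   # noisy training labels
x_te = rng.uniform(-np.pi, np.pi, 200)
y_te = np.sin(x_te)

test_err = []
for n_feat in range(1, 61):
    w, *_ = np.linalg.lstsq(features(x_tr, n_feat), y_tr, rcond=None)
    test_err.append(np.mean((features(x_te, n_feat) @ w - y_te) ** 2))

# The interpolation threshold sits at n_feat == 20 (the number of training
# points); in setups like this, test error often peaks around it and can
# fall again as the model keeps growing.
print(len(test_err))  # 60
```

Plotting `test_err` against `n_feat` is the quickest way to see whether the double-descent shape shows up for a given noise level.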

blog post: https://openai.com/blog/deep-double-descent/
paper: https://arxiv.org/abs/1912.02292

#deep #train #size #openai
☺️526 responses collected thanks to you!

Now we are looking for a volunteer to perform an #exploratory analysis of the responses and publish it as an example on github in the form of a #jupyter notebook. If you are familiar with git, jupyter and the basics of #exploratory analysis and want to help, write to the @opendatasciencebot bot (make sure you include your username, so we can reach you back).

In the meantime, please spend some free weekend time filling in the questionnaire if you haven't done so yet: https://forms.gle/GGNgukYNQbAZPtmk8 This will help us make the channel better for you.

2020 questionnaire link: https://forms.gle/GGNgukYNQbAZPtmk8
​​Three challenges of Deep Learning according to Yann LeCun
​​Few-shot Video-to-Video Synthesis

it's the pytorch implementation for few-shot photorealistic video-to-video (vid2vid) translation.
it can be used for generating human motions from poses, synthesizing people talking from edge maps, or turning semantic label maps into photo-realistic videos.
the core of vid2vid translation is image-to-image translation.

blog post: https://nvlabs.github.io/few-shot-vid2vid/
paper: https://arxiv.org/abs/1910.12713
youtube: https://youtu.be/8AZBuyEuDqc
github: https://github.com/NVlabs/few-shot-vid2vid

#cv #nips #neurIPS #pattern #recognition #vid2vid #synthesis
1
​​Three challenges of Deep Learning according to Yann LeCun
Yann LeCun's talk slides and video

Slides: https://drive.google.com/file/d/1r-mDL4IX_hzZLDBKp8_e8VZqD7fOzBkF/view

Video of the talks: https://vimeo.com/390347111
- 1:10 in for Geoff Hinton's keynote,
- 1:44 for Yann LeCun's,
- 2:18 for Yoshua Bengio's,
- 2:51 for the panel discussion moderated by Leslie Pack Kaelbling

#talk #meta #master
​​Neighbourhood Components Analysis
a PyTorch implementation of Neighbourhood Components Analysis

NCA learns a linear transformation of the dataset such that the expected leave-one-out performance of kNN in the transformed space is maximized.

The authors propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set.

It can also learn low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, this classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them.
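The stochastic leave-one-out objective can be written down directly (a numpy sketch on made-up 2D data; the real implementation maximizes this with gradient ascent over `A`): each point softly picks a neighbour with probability proportional to exp(-distance²) in the transformed space, and we score the probability of picking a same-class neighbour.

```python
import numpy as np

def nca_objective(A, X, y):
    """Expected leave-one-out kNN accuracy under NCA's soft assignments:
    p_ij ∝ exp(-||Ax_i - Ax_j||^2), with p_ii = 0."""
    Z = X @ A.T
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)            # a point never picks itself
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)
    same = (y[:, None] == y[None, :])
    return (P * same).sum(axis=1).mean()    # prob. of a same-class neighbour

rng = np.random.default_rng(0)
# two classes separated along the first feature; the second feature is noise
X = np.r_[rng.normal(0, 1, (20, 2)) + [4, 0], rng.normal(0, 1, (20, 2))]
y = np.r_[np.zeros(20), np.ones(20)]

A_good = np.array([[1.0, 0.0]])   # keeps the discriminative axis
A_bad = np.array([[0.0, 1.0]])    # keeps only the noise axis
print(nca_objective(A_good, X, y) > nca_objective(A_bad, X, y))  # True
```

With a rectangular `A` (as here, 2D → 1D) the same objective drives the low-dimensional embedding the post mentions.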

The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction.

paper (only pdf): https://www.cs.toronto.edu/~hinton/absps/nca.pdf
github: https://github.com/kevinzakka/nca

#kNN #pca #nca #PyTorch
​​OpenCV ‘dnn’ with NVIDIA GPUs: 1,549% faster YOLO, SSD, and Mask R-CNN

- Object detection and segmentation
- Working Python implementations of each
- Includes pre-trained models

tutorial: https://t.co/Wt0IrJObcE?amp=1

#OpenCV #dl #nvidia
Knowledge Graphs @ AAAI 2020

overview of several topics:
- KG-Augmented Language Models: in different flavours
- Entity Matching in Heterogeneous KGs: finally no manual mappings
- KG Completion and Link Prediction: neuro-symbolic and temporal KGs
- KG-based Conversational AI and Question Answering: going big

Link: https://medium.com/@mgalkin/knowledge-graphs-aaai-2020-c457ad5aafc0

#AAAI2020 #KnowledgeGraph #graph #kg
ODS breakfast in Paris! ☕️ 🇫🇷 See you this Saturday at 10:30 (some people come around 11:00) at Malongo Café, 50 Rue Saint-André des Arts. We are expecting from 6 to 17 people.
​​BERT-of-Theseus: Compressing BERT by Progressive Module Replacing

tl;dr
[ONE loss] + [ONE hyperparameter] + [NO external data] = GREAT PERFORMANCE
with huggingface-compatible weights

take the original BERT and randomly replace some of its layers with new (smaller) ones during distillation. the probability of replacing a module increases over time, resulting in a small model at the end.
the approach uses only one loss function and one hyper-parameter, freeing human effort from hyper-parameter tuning.
also, it outperforms existing knowledge distillation approaches on the GLUE benchmark, showing a new perspective on model compression
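The replacement mechanic is simple enough to sketch (hypothetical linear schedule and stand-in "modules"; the paper's actual schedule, starting probability and module granularity differ): during each forward pass every big module is independently swapped for its smaller successor with a probability that grows over training.

```python
import random

def replace_prob(step, total_steps, p0=0.3):
    """Linearly increasing probability of using the small successor module;
    p0 and the linear shape are illustrative, not the paper's exact values."""
    return min(1.0, p0 + (1 - p0) * step / total_steps)

def forward(x, big_modules, small_modules, p):
    """One training-time forward pass: each big module is independently
    swapped for its successor with probability p."""
    for big, small in zip(big_modules, small_modules):
        module = small if random.random() < p else big
        x = module(x)
    return x

big = [lambda x: x * 2, lambda x: x + 3]      # stand-ins for BERT modules
small = [lambda x: x * 2, lambda x: x + 3]    # successors (same behaviour here)

random.seed(0)
print(replace_prob(0, 100), replace_prob(200, 100))  # 0.3 1.0
print(forward(1.0, big, small, p=0.5))               # 5.0 either way here
```

Once the probability reaches 1.0 only the successor modules remain, which is the small model you keep; the schedule itself is the single hyper-parameter the tl;dr refers to.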

paper: https://arxiv.org/abs/2002.02925
github: https://github.com/JetRunner/BERT-of-Theseus

#nlp #compressing #knowledge #distillation #bert