Visualizing Neural Machine Translation (seq2seq Models with an Attention Mechanism)
https://habr.com/ru/post/486158/
Habr
Visualizing Neural Machine Translation (seq2seq Models with an Attention Mechanism)
Hi, Habr! I present to you a translation of the article "Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention)" by Jay...
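At each decoding step, the model in the post scores every encoder hidden state against the current decoder state, softmaxes the scores, and takes a weighted sum as the context vector. A minimal numpy sketch of that step (dot-product scoring is an assumption; the post leaves the scoring function abstract):

```python
import numpy as np

def attend(decoder_state, encoder_states):
    """One attention step: score each encoder hidden state against the
    current decoder state, softmax over time steps, and return the
    weighted sum (context vector) together with the attention weights.
    decoder_state: (d,), encoder_states: (T, d)."""
    scores = encoder_states @ decoder_state   # (T,) dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax
    context = weights @ encoder_states        # (d,) context vector
    return context, weights
```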
Transformers 2.4.0 is out 🤗
- Training transformers from scratch is now supported
- New models, including *FlauBERT*, Dutch BERT, *UmBERTo*
- Revamped documentation
- First multi-modal model, MMBT from @facebookai, text & images
Bye bye Python 2 🙃
https://t.co/16M3eqAIcy
GitHub
Releases · huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. - huggingface/transformers
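To try one of the newly added models, the usual Auto* classes work; a quick sketch (the hub identifier below is an assumption, check the model hub for the exact FlauBERT / Dutch BERT / UmBERTo checkpoint names):

```python
from transformers import AutoModel, AutoTokenizer

# "flaubert/flaubert_base_cased" is an assumed identifier for one of the
# newly added models; substitute the actual checkpoint name from the hub.
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = AutoModel.from_pretrained("flaubert/flaubert_base_cased")

input_ids = tokenizer.encode("Bonjour, le monde !", return_tensors="pt")
last_hidden_state = model(input_ids)[0]   # (1, seq_len, hidden_dim)
```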
An open "Deep Learning in NLP" course from the creators of DeepPavlov, based on cs224n
https://habr.com/ru/company/ods/blog/487172/
Habr
Materials of the DeepPavlov NLP course
In this article you will find the materials of the in-person "Deep Learning in NLP" courses that the DeepPavlov team ran in 2018-2019 and that were a partial adaptation of the Stanford NLP course cs224n...
TyDi QA: A Multilingual Question Answering Benchmark
https://ai.googleblog.com/2020/02/tydi-qa-multilingual-question-answering.html?m=1
blog.research.google
TyDi QA: A Multilingual Question Answering Benchmark
BERT, ELMo, & GPT-2: How contextual are contextualized word representations?
https://kawine.github.io/blog/nlp/2020/02/03/contextual.html
Kawin Ethayarajh
BERT, ELMo, & GPT-2: How contextual are contextualized word representations?
Incorporating context into word embeddings - as exemplified by BERT, ELMo, and GPT-2 - has proven to be a watershed idea in NLP. Replacing static vectors (e.g., word2vec) with contextualized word representations has led to significant improvements on virtually…
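One of the post's core measurements is a word's self-similarity: the average cosine similarity between its contextualized representations across different contexts (a highly contextual layer yields low self-similarity). A minimal numpy sketch of that metric, assuming the vectors come from a model such as BERT; the post's anisotropy correction is omitted:

```python
import numpy as np

def self_similarity(vectors):
    """Average pairwise cosine similarity between contextualized
    representations of one word drawn from n different contexts.
    vectors: (n, hidden_dim) array, one row per occurrence."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ v.T                           # cosine matrix, diagonal == 1
    n = len(v)
    return (sims.sum() - n) / (n * (n - 1))  # mean over pairs i != j
```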
5 New Features in pandas 1.0 You Should Know About
https://towardsdatascience.com/5-new-features-in-pandas-1-0-you-should-know-about-fc31c83e396b
Medium
5 New Features in pandas 1.0 You Should Know About
Dynamic window functions, faster apply and more.
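The "dynamic window functions" the article refers to are the new pandas.api.indexers.BaseIndexer API, which lets you compute custom per-row window bounds for rolling operations. A minimal sketch with a hypothetical forward-looking indexer:

```python
import numpy as np
import pandas as pd
from pandas.api.indexers import BaseIndexer

class ForwardIndexer(BaseIndexer):
    """Window covering the current row and the next window_size - 1 rows,
    instead of the default backward-looking window."""
    def get_window_bounds(self, num_values=0, min_periods=None,
                          center=None, closed=None, step=None):
        start = np.arange(num_values, dtype=np.int64)
        end = np.minimum(start + self.window_size, num_values)
        return start, end

s = pd.Series([1, 2, 3, 4, 5])
print(s.rolling(ForwardIndexer(window_size=2), min_periods=1).sum())
# 0: 3.0 (1+2), 1: 5.0, 2: 7.0, 3: 9.0, 4: 5.0 (last row only)
```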
Natural Language Processing. 2019 Recap and Trends for 2020
https://habr.com/ru/company/huawei/blog/487730/
Habr
Natural Language Processing. 2019 Recap and Trends for 2020
Hi everyone. With some delay, I decided to publish this article. Every year I try to sum up what has happened in the field of natural language processing. I didn't...
Growing Neural Cellular Automata
Differentiable Model of Morphogenesis
https://distill.pub/2020/growing-ca/
Distill
Growing Neural Cellular Automata
Training an end-to-end differentiable, self-organising cellular automata model of morphogenesis, able to both grow and regenerate specific patterns.
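The heart of the model is a single differentiable update rule applied to every cell: fixed identity/Sobel filters "perceive" the neighbourhood, a small per-pixel network proposes a state change, and only a random subset of cells fires each step. A condensed PyTorch sketch (channel count and layer widths are illustrative; the article's alive-masking step is omitted):

```python
import torch
import torch.nn.functional as F

N_CHANNELS = 16  # RGBA + hidden channels, as in the article

# Fixed perception filters: identity plus Sobel x/y, applied depthwise.
sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
identity = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
kernels = torch.stack([identity, sobel_x, sobel_x.T])    # (3, 3, 3)
kernels = kernels.repeat(N_CHANNELS, 1, 1).unsqueeze(1)  # (3*C, 1, 3, 3)

update_net = torch.nn.Sequential(
    torch.nn.Conv2d(3 * N_CHANNELS, 128, 1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(128, N_CHANNELS, 1),
)
# The article zero-initializes the last layer so the initial update is a no-op.
torch.nn.init.zeros_(update_net[2].weight)
torch.nn.init.zeros_(update_net[2].bias)

def ca_step(state, fire_rate=0.5):
    """One CA step. state: (B, N_CHANNELS, H, W)."""
    perception = F.conv2d(state, kernels, padding=1, groups=N_CHANNELS)
    delta = update_net(perception)
    # Stochastic update: each cell fires independently with prob. fire_rate.
    mask = (torch.rand_like(state[:, :1]) < fire_rate).float()
    return state + delta * mask
```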
How to train a new language model from scratch using Transformers and Tokenizers
https://huggingface.co/blog/how-to-train
huggingface.co
How to train a new language model from scratch using Transformers and Tokenizers
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
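The first step in the post is training a byte-level BPE tokenizer on the raw corpus before pretraining a RoBERTa-like model. A condensed sketch with the tokenizers library (the file path is a placeholder; vocabulary size and special tokens follow the post's recipe):

```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],   # placeholder path to your raw-text corpus
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
# Depending on the tokenizers version this is save_model("out_dir") or
# save("out_dir", "name"); either way it writes vocab.json and merges.txt.
tokenizer.save_model("out_dir")
```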
The Annotated GPT-2
https://amaarora.github.io/2020/02/18/annotatedGPT2.html
Committed towards better future
The Annotated GPT-2
Introduction Prerequisites Language Models are Unsupervised Multitask Learners Abstract Model Architecture (GPT-2) Model Specifications (GPT) Imports Transformer Decoder inside GPT-2 CONV1D Layer Explained FEEDFORWARD Layer Explained ATTENTION Layer Explained…
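Of the pieces the walkthrough covers, the CONV1D layer is the easiest to demystify: despite the name, it is just a linear projection that stores its weight transposed. A condensed sketch paraphrasing that explanation (nx = input features, nf = output features):

```python
import torch
import torch.nn as nn

class Conv1D(nn.Module):
    """GPT-2's 'Conv1D': a plain linear layer whose weight is stored
    transposed, with shape (nx, nf) = (in_features, out_features)."""
    def __init__(self, nf, nx):
        super().__init__()
        self.nf = nf
        self.weight = nn.Parameter(torch.empty(nx, nf).normal_(std=0.02))
        self.bias = nn.Parameter(torch.zeros(nf))

    def forward(self, x):
        size_out = x.size()[:-1] + (self.nf,)
        # Flatten leading dims, compute x @ W + b, then restore the shape.
        x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
        return x.view(size_out)

# e.g. project 768-dim hidden states to 3*768 for Q, K, V in one matmul
c_attn = Conv1D(nf=3 * 768, nx=768)
```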
The Smithsonian Institution (USA) has opened up almost 3 million images for free access. In fact, more have been released, but roughly that many carry the "free use" mark; there are also 3D images, and a platform for accessing them.
https://www.smithsonianmag.com/smithsonian-institution/smithsonian-releases-28-million-images-public-domain-180974263/
Smithsonian Magazine
Smithsonian Releases 2.8 Million Images Into Public Domain
The launch of a new open access platform ushers in a new era of accessibility for the Institution
8 Creators and Core Contributors Talk About Their Model Training Libraries From PyTorch Ecosystem
https://neptune.ai/blog/model-training-libraries-pytorch-ecosystem
neptune.ai
8 Creators and Core Contributors Talk About Their Model Training Libraries From PyTorch Ecosystem
Insights from core contributors on leveraging model training libraries in the PyTorch ecosystem, with an in-depth analysis