Measuring Information Propagation in Literary Social Networks
Annotated dataset of 100 works of fiction to support tasks in natural language processing and the computational humanities.
Code: https://github.com/dbamman/litbank
Paper: https://arxiv.org/pdf/2004.13980v1.pdf
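A hedged sketch for exploring the annotations in a local clone: it assumes the token-per-line TSV layout with BIO tag columns used for the entity annotations (the file path below is hypothetical); check the repo README for the exact format.
```python
# Count entity mentions per type in one LitBank-style annotation file.
# Assumption: tab-separated lines of the form  token \t tag_layer_1 \t ... ,
# with nested entities encoded as BIO tags -- verify against the repo README.
import csv
from collections import Counter

def count_entity_tags(path):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 2:
                continue  # skip blank separator lines
            for tag in row[1:]:
                if tag.startswith("B-"):  # each B- tag marks one entity mention
                    counts[tag[2:]] += 1
    return counts

# Hypothetical path into a local clone of litbank.
print(count_entity_tags("litbank/entities/tsv/some_novel.tsv"))
```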
📈 CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization
Interactive visualization tool designed for non-experts to learn and examine convolutional neural networks (CNNs), a foundational deep learning model architecture.
Video: https://www.youtube.com/watch?v=HnWIHWFbuUQ&feature=youtu.be
Demo: https://poloclub.github.io/cnn-explainer/
Github: https://github.com/poloclub/cnn-explainer
Paper: https://arxiv.org/abs/2004.15004v1
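For readers who want a concrete model to poke at alongside the demo, here is a minimal Keras sketch of a small VGG-style CNN in the spirit of the Tiny VGG that the tool visualizes; the input size, filter counts, and 10-class head are illustrative assumptions, not values taken from the demo.
```python
# Tiny VGG-style CNN: two conv/conv/pool blocks followed by a softmax classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small RGB images (assumed size)
    layers.Conv2D(10, 3, activation="relu"),
    layers.Conv2D(10, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(10, 3, activation="relu"),
    layers.Conv2D(10, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # 10 output classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```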
NUBIA (NeUral Based Interchangeability Assessor) is a new SoTA evaluation metric for text generation
Methodology to build automatic evaluation metrics for text generation using only machine learning models as core components
Blog: https://wl-research.github.io/blog/
Github: https://github.com/wl-research/nubia
Paper: https://arxiv.org/abs/2004.14667v1
Colab: https://colab.research.google.com/drive/1_K8pOB8fRRnkBPwlcmvUNHgCr4ur8rFg
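A hedged usage sketch: the package name, class, and score() call below are assumptions about the interface based on the repo and Colab, so check those links for the exact API.
```python
# Hypothetical NUBIA usage -- names are assumptions, see the repo/Colab.
from nubia_score import Nubia  # assumed module name from the repo

metric = Nubia()
reference = "The dog ran across the park."
candidate = "A dog sprinted through the park."

# Higher scores should mean the candidate is more interchangeable with the reference.
score = metric.score(reference, candidate)
print(score)
```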
Why We Need DevOps for ML Data
https://tecton.ai/blog/devops-ml-data/
Habr (in Russian): https://habr.com/ru/company/itsumma/blog/500272/
An Implementation of ERNIE For Language Understanding (including Pre-training models and Fine-tuning tools)
ERNIE 2.0 is a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through multi-task learning.
ERNIE 2.0 from Baidu: https://github.com/PaddlePaddle/ERNIE
Dataset: https://gluebenchmark.com/tasks
Understanding Language using XLNet with autoregressive pre-training
https://medium.com/@zxiao2015/understanding-language-using-xlnet-with-autoregressive-pre-training-9c86e5bea443
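To make the continual multi-task idea concrete, here is a toy PyTorch sketch (not the ERNIE 2.0 code, which is implemented in PaddlePaddle): a shared Transformer encoder with task-specific heads, trained by alternating batches from different pre-training tasks. All layer sizes and task names are illustrative.
```python
# Toy multi-task pre-training loop: shared encoder, per-task heads and losses.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, ids):
        return self.encoder(self.embed(ids))

encoder = SharedEncoder()
heads = nn.ModuleDict({
    "masked_lm": nn.Linear(256, 30522),    # token-level task head
    "sentence_order": nn.Linear(256, 2),   # sentence-level task head
})
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def training_step(task, ids, labels):
    """One step: shared representation, task-specific head and loss."""
    hidden = encoder(ids)
    if task == "masked_lm":
        logits = heads[task](hidden)                # per-token predictions
        loss = loss_fn(logits.flatten(0, 1), labels.flatten())
    else:
        logits = heads[task](hidden[:, 0])          # first-token pooling
        loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# New tasks can be added to `heads` over time and mixed into the batch schedule.
```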
The Best Deep Learning Papers from the ICLR 2020 Conference
https://neptune.ai/blog/iclr-2020-deep-learning
Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research
Awesome Sentiment Analysis papers: https://github.com/declare-lab/awesome-sentiment-analysis
Paper: https://arxiv.org/abs/2005.00357v1
Global explanations for discovering bias in data
Github: https://github.com/agamiko/gebi
Code: https://github.com/AgaMiko/GEBI/blob/master/notebooks/GEBI.ipynb
Paper: https://arxiv.org/abs/2005.02269v1
Set of Machine Learning Python plugins for GIMP
This paper introduces GIMP-ML, a set of Python plugins for the widely popular GNU Image Manipulation Program (GIMP). It brings recent advances in computer vision into the conventional image editing pipeline.
Github: https://github.com/kritiksoman/GIMP-ML
Paper: https://arxiv.org/abs/2004.13060
Demo: https://www.youtube.com/watch?v=HVwISLRow_0
TK & TKL - Efficient Transformer-based neural re-ranking models
TK employs a small number of low-dimensional Transformer layers to contextualize query and document word embeddings, then scores the interactions of the contextualized representations with simple yet effective soft histograms based on the kernel-pooling technique.
Github: https://github.com/sebastian-hofstaetter/transformer-kernel-ranking
Paper: https://arxiv.org/abs/2005.04908v1
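An illustrative PyTorch sketch of the kernel-pooling step the description refers to (KNRM-style Gaussian kernels over a cosine-similarity matrix); the shapes, kernel centers, and sigma are assumptions for the example, not the authors' exact configuration.
```python
# Soft-histogram (kernel pooling) features from contextualized embeddings.
import torch
import torch.nn.functional as F

def kernel_pool(query_vecs, doc_vecs, mus, sigma=0.1):
    """query_vecs: (q_len, dim), doc_vecs: (d_len, dim)."""
    q = F.normalize(query_vecs, dim=-1)
    d = F.normalize(doc_vecs, dim=-1)
    sim = q @ d.t()                                   # (q_len, d_len) cosine similarities
    # One Gaussian kernel per mu turns each similarity row into a soft histogram.
    kernels = torch.exp(-0.5 * ((sim.unsqueeze(-1) - mus) / sigma) ** 2)  # (q_len, d_len, K)
    soft_tf = kernels.sum(dim=1)                      # soft match counts per query term
    return torch.log1p(soft_tf).sum(dim=0)            # (K,) features for a scoring layer

mus = torch.linspace(-0.9, 1.0, steps=11)             # kernel centers over [-1, 1]
features = kernel_pool(torch.randn(5, 64), torch.randn(40, 64), mus)
print(features.shape)  # torch.Size([11])
```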
The Neural-IR-Explorer is an interactive exploration tool that lets you browse the actual results of a neural re-ranking run.
https://neural-ir-explorer.ec.tuwien.ac.at/
Fine-tuning ResNet with Keras, TensorFlow, and Deep Learning
In this tutorial, you will learn how to fine-tune ResNet using Keras, TensorFlow, and Deep Learning.
https://www.pyimagesearch.com/2020/04/27/fine-tuning-resnet-with-keras-tensorflow-and-deep-learning/
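A minimal tf.keras fine-tuning sketch in the same spirit as the tutorial; the tutorial's own dataset and exact code differ, and the 2-class head and hyperparameters here are placeholder assumptions.
```python
# Freeze a pre-trained ResNet50 backbone and train a new classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for the first training phase

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # e.g. a 2-class problem (assumed)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)
# Afterwards, unfreeze part of `base` and continue training with a lower LR.
```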
Little Ball of Fur
Little Ball of Fur provides methods for sampling graph-structured data.
Documentation: https://little-ball-of-fur.readthedocs.io/en/latest/#little-ball-of-fur-documentation
Github: https://github.com/benedekrozemberczki/littleballoffur
Paper: https://arxiv.org/abs/2005.05257v1
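A hedged usage sketch: the sampler name and constructor argument reflect one reading of the docs and may differ between versions, so consult the linked documentation for the exact API.
```python
# Sample a smaller subgraph from a NetworkX graph (assumed API).
import networkx as nx
from littleballoffur import RandomWalkSampler

graph = nx.watts_strogatz_graph(1000, 10, 0.05)   # any NetworkX graph
sampler = RandomWalkSampler(number_of_nodes=100)  # target size (assumed argument name)
sampled = sampler.sample(graph)                   # returns a smaller NetworkX graph
print(sampled.number_of_nodes(), sampled.number_of_edges())
```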
1008 machine translation models, covering 140 different languages
https://huggingface.co/models?search=Helsinki-NLP%2Fopus-mt
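A quick usage sketch with the transformers library; the English→German checkpoint below is just one example of the Helsinki-NLP/opus-mt-* naming scheme.
```python
# Translate a sentence with one of the Helsinki-NLP OPUS-MT checkpoints.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # example language pair
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Machine translation is now a one-liner."],
                  return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```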
FlowTron: Improved Text to Speech Engine from NVIDIA
Flowtron is an autoregressive flow-based generative network for text-to-speech synthesis with control over speech variation and style transfer.
Paper: https://arxiv.org/abs/2005.05957
Code: https://github.com/NVIDIA/flowtron
Stable Baselines3: a PyTorch version of Stable Baselines with improved implementations of reinforcement learning algorithms.
https://towardsdatascience.com/stable-baselines-a-fork-of-openai-baselines-reinforcement-learning-made-easy-df87c4b2fc82
Documentation: https://stable-baselines3.readthedocs.io
Github: https://github.com/DLR-RM/stable-baselines3
Paper: https://arxiv.org/abs/2005.05719v1
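A minimal sketch of the common Stable Baselines3 interface; CartPole-v1 is just a convenient Gym environment for illustration, not something from the post.
```python
# Train a PPO agent and reuse the learned policy.
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10_000)

env = model.get_env()
obs = env.reset()
for _ in range(100):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```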
An Ethical Application of Computer Vision and Deep Learning — Identifying Child Soldiers Through Automatic Age and Military Fatigue Detection
https://www.pyimagesearch.com/2020/05/11/an-ethical-application-of-computer-vision-and-deep-learning-identifying-child-soldiers-through-automatic-age-and-military-fatigue-detection/
Objects are the secret key to revealing the world between vision and language
https://www.microsoft.com/en-us/research/blog/objects-are-the-secret-key-to-revealing-the-world-between-vision-and-language/
Single-Stage Semantic Segmentation from Image Labels
Github: https://github.com/visinf/1-stage-wseg
Paper: https://arxiv.org/abs/2005.08104
How to Use Quantile Transforms for Machine Learning
https://machinelearningmastery.com/quantile-transforms-for-machine-learning/
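A short scikit-learn sketch of the transform the article covers; the synthetic exponential feature is only for illustration.
```python
# Map a skewed feature to an approximately Gaussian distribution.
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.RandomState(0)
X = rng.exponential(scale=2.0, size=(1000, 1))   # heavily skewed feature

qt = QuantileTransformer(n_quantiles=100, output_distribution="normal",
                         random_state=0)
X_gauss = qt.fit_transform(X)
print(X.mean(), X_gauss.mean(), X_gauss.std())   # transformed values are ~N(0, 1)
```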
👄 Lip2Wav
Generate high-quality speech from lip movements alone. This code accompanies the paper "Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis".
Demo: https://www.youtube.com/watch?v=HziA-jmlk_4
Github: https://github.com/Rudrabha/Lip2Wav
Paper: https://arxiv.org/abs/2005.08209v1