ML Research Hub – Telegram
ML Research Hub
32.7K subscribers
4.01K photos
229 videos
23 files
4.32K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
👍3
Bilingual Corpus Mining and Multistage Fine-Tuning for Improving Machine Translation of Lecture Transcripts

🖥 Github: https://github.com/shyyhs/CourseraParallelCorpusMining

📕 Paper: https://arxiv.org/abs/2311.03696v1

🔥 Datasets: https://paperswithcode.com/dataset/aspec

https://news.1rj.ru/str/DataScienceT
👍1
Large Language Models (in 2023)

An excellent summary of the research progress and developments in LLMs.

Hyung Won Chung of OpenAI (formerly Google; MIT alumnus) made this content publicly available. It's a great way to catch up on important themes such as scaling and optimizing LLMs.

Watch his talk here, and find the slides here.

👍31
🚀 Whisper-V3 / Consistency Decoder

Improved decoding for Stable Diffusion VAEs.

- Whisper paper: https://arxiv.org/abs/2212.04356
- Whisper-V3 checkpoint: https://github.com/openai/whisper/discussions/1762
- Consistency Models: https://arxiv.org/abs/2303.01469
- Consistency Decoder release: https://github.com/openai/consistencydecoder

👍2
NVIDIA just made pandas up to 150x faster with zero code changes.

All you have to do is:
%load_ext cudf.pandas
import pandas as pd


Their RAPIDS cuDF library automatically runs supported pandas operations on the GPU and falls back to standard CPU pandas for everything else, so your code speeds up without modification.
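A quick sketch of what that means in practice: after loading the extension in a notebook, ordinary pandas code like the below runs unchanged (the DataFrame and its column names are made up for illustration; on a machine without a GPU it simply executes on plain pandas with identical results).

```python
# In a notebook you would first run:  %load_ext cudf.pandas
# Everything after that is just regular pandas code.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "b", "a", "b", "a"],
    "value": [1, 2, 3, 4, 5],
})

# A typical groupby aggregation -- the kind of operation cuDF accelerates.
means = df.groupby("group")["value"].mean()
print(means["a"])  # 3.0
```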

You can try it in this Colab notebook.

GitHub repo: https://github.com/rapidsai/cudf

👍72
🪞 Mirror: A Universal Framework for Various Information Extraction Tasks

🖥 Github: https://github.com/Spico197/Mirror

📕 Paper: https://arxiv.org/abs/2311.05419v1

🌐 Dataset: https://paperswithcode.com/dataset/glue

👍62
⚡️ LCM-LoRA: A Universal Stable-Diffusion Acceleration Module

Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference.

pip install diffusers transformers accelerate gradio==3.48.0

🖥 Github: https://github.com/luosiallen/latent-consistency-model

📕 Paper: https://arxiv.org/abs/2311.05556v1

🌐 Project: https://latent-consistency-models.github.io

🤗 Demo: https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model

👍3
Do you enjoy reading this channel?

Perhaps you have thought about placing ads on it?

To do this, follow three simple steps:

1) Sign up: https://telega.io/c/dataScienceT
2) Top up the balance in a convenient way
3) Create an advertising post

If the topic of your post fits our channel, we will publish it with pleasure.
👍4
🔊 Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models

Chat and pretrained large audio-language models proposed by Alibaba Cloud.

🐱 Github: https://github.com/qwenlm/qwen-audio

🚀 Demo: https://qwen-audio.github.io/Qwen-Audio/

📕 Paper: https://arxiv.org/abs/2311.07919v1

Dataset: https://paperswithcode.com/dataset/vocalsound

👍3
🌴Data Science A-Z: Hands-On Exercises & ChatGPT Bonus [2023]🌴

Learn Data Science step by step through real analytics examples: data mining, modeling, Tableau visualization, and more!

Price: $20 (reduced from $120)

Contact @hussein_sheikho
👍2
Inherently Interpretable Time Series Classification via Multiple Instance Learning (MILLET)

🖥 Github: https://github.com/jaearly/miltimeseriesclassification

📕 Paper: https://arxiv.org/pdf/2311.10049v1.pdf

Tasks: https://paperswithcode.com/task/decision-making

👍4
SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks

🖥 Github: https://github.com/OpenGVLab/SAM-Med2D

🖥 Colab: https://colab.research.google.com/github/OpenGVLab/SAM-Med2D/blob/main/predictor_example.ipynb

📕 Paper: https://arxiv.org/abs/2311.11969v1

⭐️ Dataset: https://arxiv.org/abs/2311.11969
👍72