ML Research Hub – Telegram
32.7K subscribers
4K photos
227 videos
23 files
4.3K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
You can now download and watch all paid data science courses for free by subscribing to our new channel

https://news.1rj.ru/str/udemy13
🧔 4DHumans: Reconstructing and Tracking Humans with Transformers

Fully "transformerized" version of a network for human mesh recovery.

🖥 Github: https://github.com/shubham-goel/4D-Humans

⭐️ Colab: https://colab.research.google.com/drive/1Ex4gE5v1bPR3evfhtG7sDHxQGsWwNwby?usp=sharing

📕 Paper: https://arxiv.org/pdf/2305.20091.pdf

🔗 Project: https://shubham-goel.github.io/4dhumans/

https://news.1rj.ru/str/DataScienceT
Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second

🖥 Github: https://github.com/facebookresearch/galactic

Paper: https://arxiv.org/pdf/2306.07552v1.pdf

💨 Dataset: https://paperswithcode.com/dataset/vizdoom

https://news.1rj.ru/str/DataScienceT
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration

Macaw-LLM is a multi-modal language model that brings together state-of-the-art models for processing visual, auditory, and textual information, namely CLIP, Whisper, and LLaMA.
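The general integration idea can be sketched in a few lines: each modality encoder produces feature vectors, a learned projection maps them into the LLM's embedding space, and the projected vectors are prepended to the text sequence. This is an illustrative sketch only; the function and parameter names here are hypothetical, not the repo's API.

```python
def align_and_prepend(image_feats, audio_feats, text_embeds, proj):
    """Illustrative sketch of multi-modal integration:
    - image_feats / audio_feats: vectors from modality encoders
      (e.g. CLIP for images, Whisper for audio)
    - proj: a learned projection into the LLM's embedding space
    - text_embeds: the text token embeddings
    Returns one flat sequence the LLM can attend over."""
    visual = [proj(v) for v in image_feats]   # align image features
    audio = [proj(a) for a in audio_feats]    # align audio features
    return visual + audio + text_embeds       # one sequence for the LLM

# Toy usage with a dummy projection (doubling each coordinate):
seq = align_and_prepend(
    image_feats=[[1.0, 2.0]],
    audio_feats=[[3.0, 4.0]],
    text_embeds=[[0.0, 0.0]],
    proj=lambda v: [2.0 * x for x in v],
)
# seq is [[2.0, 4.0], [6.0, 8.0], [0.0, 0.0]]
```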

🖥 Github: https://github.com/lyuchenyang/macaw-llm

⭐️ Model: https://tinyurl.com/yem9m4nf

📕 Paper: https://tinyurl.com/4rsexudv

🔗 Dataset: https://github.com/lyuchenyang/Macaw-LLM/blob/main/data

https://news.1rj.ru/str/DataScienceT
Semi-supervised learning made simple with self-supervised clustering [CVPR 2023]

🖥 Github: https://github.com/pietroastolfi/suave-daino

Paper: https://arxiv.org/pdf/2306.07483v1.pdf

💨 Dataset: https://paperswithcode.com/dataset/imagenet

https://news.1rj.ru/str/DataScienceT
How do Transformers work?

All the Transformer models mentioned above (GPT, BERT, BART, T5, etc.) have been trained as language models. This means they have been trained on large amounts of raw text in a self-supervised fashion. Self-supervised learning is a type of training in which the objective is automatically computed from the inputs of the model. That means that humans are not needed to label the data!

This type of model develops a statistical understanding of the language it has been trained on, but it’s not very useful for specific practical tasks. Because of this, the general pretrained model then goes through a process called transfer learning. During this process, the model is fine-tuned in a supervised way — that is, using human-annotated labels — on a given task.
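The self-supervised part can be illustrated with a toy masked-language-modeling example: the labels are derived mechanically from the input text itself, so no human annotation is involved. This is a minimal sketch of the idea, not the actual training code of any of the models above.

```python
import random

def make_mlm_example(tokens, mask_token="[MASK]", mask_prob=0.15, seed=1):
    """Build a masked-language-modeling training example.
    The labels come straight from the input, so the objective is
    'automatically computed from the inputs of the model'."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(mask_token)  # model must predict the original token
            labels.append(tok)         # label derived from the input itself
        else:
            inputs.append(tok)
            labels.append(None)        # position not scored
    return inputs, labels

inputs, labels = make_mlm_example("the cat sat on the mat".split())
```

Fine-tuning (the transfer-learning step) simply swaps these automatic labels for human-annotated ones on the target task.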

🔗 Read More

🌸 https://news.1rj.ru/str/DataScienceT
Data Science With Python Workflow Cheat Sheet

Creator: Business Science
Stars ⭐️: 75
Forks: 38

https://github.com/business-science/cheatsheets/blob/master/Data_Science_With_Python_Workflow.pdf

https://news.1rj.ru/str/DataScienceT
80+ Jupyter Notebook tutorials on image classification, object detection and image segmentation in various domains
📌 Agriculture and Food
📌 Medical and Healthcare
📌 Satellite
📌 Security and Surveillance
📌 ADAS and Self Driving Cars
📌 Retail and E-Commerce
📌 Wildlife

Classification library
https://github.com/Tessellate-Imaging/monk_v1

Notebooks - https://github.com/Tessellate-Imaging/monk_v1/tree/master/study_roadmaps/4_image_classification_zoo

Detection and Segmentation Library
https://github.com/Tessellate-Imaging/Monk_Object_Detection

Notebooks: https://github.com/Tessellate-Imaging/Monk_Object_Detection/tree/master/application_model_zoo

https://news.1rj.ru/str/DataScienceT
Choose JOBITT! Receive +10% of your first salary as a bonus from JOBITT!
Find your dream job with JOBITT! Get more, starting with your first paycheck! Find many job options on our Telegram channel: https://news.1rj.ru/str/ujobit
📌 LOMO: LOw-Memory Optimization

A new optimizer, LOw-Memory Optimization (LOMO), enables full-parameter fine-tuning of a 7B model on a single RTX 3090, or of a 65B model on a single machine with 8×RTX 3090 GPUs, each with 24 GB of memory.
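The core memory-saving idea can be sketched as fusing the gradient computation with the parameter update: instead of materializing gradients for every parameter before the optimizer step, each parameter is updated as soon as its gradient is available, so only one gradient is alive at a time. This is a toy illustration of the principle, not the repo's implementation.

```python
def fused_sgd_step(params, grad_of, lr=0.25):
    """Sketch of a fused gradient/update step: standard optimizers
    first store gradients for *all* parameters, then update; here
    each parameter is updated immediately and its gradient is
    discarded, keeping peak gradient memory at one parameter's worth."""
    for i, p in enumerate(params):
        g = grad_of(p)           # one gradient alive at a time
        params[i] = p - lr * g   # in-place SGD update, gradient then freed
    return params

# Toy quadratic loss L = sum(p^2), so dL/dp = 2p:
params = [4.0, -2.0]
fused_sgd_step(params, grad_of=lambda p: 2.0 * p, lr=0.25)
# params is now [2.0, -1.0]
```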

🖥 Github: https://github.com/OpenLMLab/LOMO/tree/main

📕 Paper: https://arxiv.org/pdf/2306.09782.pdf

🔗 Dataset: https://paperswithcode.com/dataset/superglue

https://news.1rj.ru/str/DataScienceT
Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX

Jumanji is helping pioneer a new wave of hardware-accelerated research and development in the field of RL.

🖥 Github: https://github.com/instadeepai/jumanji

📕 Paper: https://arxiv.org/abs/2306.09884v1

🔗 Dataset: https://paperswithcode.com/dataset/mujoco

https://news.1rj.ru/str/DataScienceT
Google just dropped Generative AI learning path with 9 courses:

🤖: Intro to Generative AI
🤖: Large Language Models
🤖: Responsible AI
🤖: Image Generation
🤖: Encoder-Decoder
🤖: Attention Mechanism
🤖: Transformers and BERT Models
🤖: Create Image Captioning Models
🤖: Intro to Gen AI Studio

🌐 Link: https://www.cloudskillsboost.google/paths/118

https://news.1rj.ru/str/DataScienceT