ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.
Admin: @HusseinSheikho || @Hussein_Sheikho
🔹 Title: AgentFold: Long-Horizon Web Agents with Proactive Context Management

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24699
• PDF: https://arxiv.org/pdf/2510.24699
• Github: https://github.com/Alibaba-NLP/DeepResearch

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: WebLeaper: Empowering Efficiency and Efficacy in WebAgent via Enabling Info-Rich Seeking

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24697
• PDF: https://arxiv.org/pdf/2510.24697
• Github: https://github.com/Alibaba-NLP/DeepResearch

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: AgentFrontier: Expanding the Capability Frontier of LLM Agents with ZPD-Guided Data Synthesis

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24695
• PDF: https://arxiv.org/pdf/2510.24695
• Github: https://github.com/Alibaba-NLP/DeepResearch

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: Repurposing Synthetic Data for Fine-grained Search Agent Supervision

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24694
• PDF: https://arxiv.org/pdf/2510.24694
• Github: https://github.com/Alibaba-NLP/DeepResearch

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24645
• PDF: https://arxiv.org/pdf/2510.24645

🔹 Datasets citing this paper:
https://huggingface.co/datasets/Bingguang/FunReason-MT

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: Game-TARS: Pretrained Foundation Models for Scalable Generalist Multimodal Game Agents

🔹 Publication Date: Published on Oct 27

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.23691
• PDF: https://arxiv.org/pdf/2510.23691
• Project Page: https://seed-tars.com/game-tars

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: Generalization or Memorization: Dynamic Decoding for Mode Steering

🔹 Publication Date: Published on Oct 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.22099
• PDF: https://arxiv.org/pdf/2510.22099

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: Rethinking Visual Intelligence: Insights from Video Pretraining

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24448
• PDF: https://arxiv.org/pdf/2510.24448

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: InteractComp: Evaluating Search Agents With Ambiguous Queries

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24668
• PDF: https://arxiv.org/pdf/2510.24668

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: Group Relative Attention Guidance for Image Editing

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24657
• PDF: https://arxiv.org/pdf/2510.24657
• Project Page: https://little-misfit.github.io/GRAG-Image-Editing/
• Github: https://github.com/little-misfit/GRAG-Image-Editing

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: VisCoder2: Building Multi-Language Visualization Coding Agents

🔹 Publication Date: Published on Oct 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.23642
• PDF: https://arxiv.org/pdf/2510.23642
• Project Page: https://tiger-ai-lab.github.io/VisCoder2/
• Github: https://github.com/TIGER-AI-Lab/VisCoder2

🔹 Datasets citing this paper:
https://huggingface.co/datasets/TIGER-Lab/VisPlotBench
https://huggingface.co/datasets/TIGER-Lab/VisCode-Multi-679K

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: STAR-Bench: Probing Deep Spatio-Temporal Reasoning as Audio 4D Intelligence

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24693
• PDF: https://arxiv.org/pdf/2510.24693
• Project Page: https://internlm.github.io/StarBench/
• Github: https://github.com/InternLM/StarBench

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: ATLAS: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, and Decoding the Curse of Multilinguality

🔹 Publication Date: Published on Oct 24

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.22037
• PDF: https://arxiv.org/pdf/2510.22037

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: Uniform Discrete Diffusion with Metric Path for Video Generation

🔹 Publication Date: Published on Oct 28

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.24717
• PDF: https://arxiv.org/pdf/2510.24717
• Github: https://github.com/baaivision/URSA

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
💡 ViT for Fashion MNIST Classification

This lesson demonstrates how to use a pre-trained Vision Transformer (ViT) to classify an image from the Fashion MNIST dataset. ViT treats an image as a sequence of patches, similar to how language models treat sentences, making it a powerful architecture for computer vision tasks. We will use a model from the Hugging Face Hub that is already fine-tuned for this specific dataset.

from transformers import ViTImageProcessor, ViTForImageClassification
from datasets import load_dataset
import torch

# 1. Load a model fine-tuned on Fashion MNIST and its processor
model_name = "abhishek/autotrain-fashion-mnist-283834433"
processor = ViTImageProcessor.from_pretrained(model_name)
model = ViTForImageClassification.from_pretrained(model_name)

# 2. Load the dataset and get a sample image
dataset = load_dataset("fashion_mnist", split="test")
image = dataset[100]['image']  # Get the 100th image

# 3. Preprocess the image and prepare it for the model
inputs = processor(images=image, return_tensors="pt")

# 4. Perform inference to get the classification logits
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits

# 5. Get the predicted class and its label
predicted_class_idx = logits.argmax(-1).item()
predicted_class = model.config.id2label[predicted_class_idx]

# Map the dataset's integer label to its human-readable class name
true_class = dataset.features['label'].int2str(dataset[100]['label'])

print(f"Image is a: {true_class}")
print(f"Model predicted: {predicted_class}")


Code explanation: This script uses the transformers library to load a ViT model fine-tuned for Fashion MNIST classification. It loads the dataset, selects a single sample image, and uses the model's processor to convert it into the expected input format. The model performs inference, and the script selects the most likely class from the output logits, printing both the ground-truth class name and the model's human-readable prediction.
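To sanity-check the model beyond a single image, the same pipeline can be run on a small batch. Below is a minimal sketch, assuming the model's output indices align with the dataset's label order (worth verifying via model.config.id2label before trusting the numbers):

from transformers import ViTImageProcessor, ViTForImageClassification
from datasets import load_dataset
import torch

model_name = "abhishek/autotrain-fashion-mnist-283834433"
processor = ViTImageProcessor.from_pretrained(model_name)
model = ViTForImageClassification.from_pretrained(model_name)
model.eval()

# Take the first 16 test images; convert to RGB as a precaution,
# since Fashion MNIST images are single-channel PIL images
dataset = load_dataset("fashion_mnist", split="test")
batch = dataset[:16]
images = [img.convert("RGB") for img in batch["image"]]

inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Compare predicted indices to ground-truth labels
# (assumes the model's class order matches the dataset's)
preds = logits.argmax(-1).tolist()
correct = sum(p == t for p, t in zip(preds, batch["label"]))
print(f"Correct: {correct}/{len(preds)}")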

#Python #MachineLearning #ViT #ComputerVision #HuggingFace

━━━━━━━━━━━━━━━
By: @DataScienceT
🔹 Title: PartNeXt: A Next-Generation Dataset for Fine-Grained and Hierarchical 3D Part Understanding

🔹 Publication Date: Published on Oct 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.20155
• PDF: https://arxiv.org/pdf/2510.20155
• Project Page: https://authoritywang.github.io/partnext/
• Github: https://github.com/AuthorityWang/PartNeXt

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: VisJudge-Bench: Aesthetics and Quality Assessment of Visualizations

🔹 Publication Date: Published on Oct 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.22373
• PDF: https://arxiv.org/pdf/2510.22373
• Github: https://github.com/HKUSTDial/VisJudgeBench

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: PatenTEB: A Comprehensive Benchmark and Model Family for Patent Text Embedding

🔹 Publication Date: Published on Oct 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.22264
• PDF: https://arxiv.org/pdf/2510.22264
• Github: https://github.com/iliass-y/patenteb

🔹 Datasets citing this paper:
https://huggingface.co/datasets/datalyes/class_bloom
https://huggingface.co/datasets/datalyes/class_nli_oldnew
https://huggingface.co/datasets/datalyes/clusters_ext_full_ipc
https://huggingface.co/datasets/datalyes/class_text2ipc3

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: From Spatial to Actions: Grounding Vision-Language-Action Model in Spatial Foundation Priors

🔹 Publication Date: Published on Oct 20

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.17439
• PDF: https://arxiv.org/pdf/2510.17439
• Project Page: https://falcon-vla.github.io/

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT
🔹 Title: UltraHR-100K: Enhancing UHR Image Synthesis with A Large-Scale High-Quality Dataset

🔹 Publication Date: Published on Oct 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2510.20661
• PDF: https://arxiv.org/pdf/2510.20661

🔹 Datasets citing this paper:
No datasets found

🔹 Spaces citing this paper:
No spaces found
==================================

For more data science resources:
https://news.1rj.ru/str/DataScienceT