Hugging Face – Telegram
Hugging Face (Twitter)

RT @HuggingPapers: Meta just released Action100M on Hugging Face

A massive video dataset with 100M+ hierarchical action annotations.
Every video includes tree-of-captions with action labels, brief and detailed summaries.
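A minimal sketch of peeking at the dataset with 🤗 Datasets in streaming mode so nothing huge is downloaded up front; the repo id and field names are assumptions, not confirmed by the post -- check the dataset card for the real schema.

```python
# Hedged sketch: stream a few samples instead of downloading 100M+ annotations.
from datasets import load_dataset

# Hypothetical repo id -- the announcement does not spell out the exact path.
ds = load_dataset("facebook/Action100M", split="train", streaming=True)

for sample in ds.take(3):
    # Print whatever fields the dataset actually exposes (captions, action labels, ...)
    print(sample.keys())
```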
Hugging Face (Twitter)

RT @lhoestq: Crazy! 🤯

I was sceptical, so I had to check the dataset.

See for yourself, the truth is here: there are indeed whitespace characters indicating the true answer in some examples

(thanks to the AI assistant on @huggingface for the SQL query) https://twitter.com/fujikanaeda/status/2011565035408277996#m
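A minimal sketch of the kind of SQL check described above, assuming a hypothetical benchmark repo and an `answer` column (the tweet names neither): load the split, then let DuckDB scan the DataFrame for trailing whitespace.

```python
import duckdb
from datasets import load_dataset

# Hypothetical dataset and column names -- placeholders only.
df = load_dataset("some-org/some-benchmark", split="test").to_pandas()

leaky = duckdb.sql(
    r"""
    SELECT answer, count(*) AS n
    FROM df                                  -- DuckDB can scan the local pandas DataFrame
    WHERE regexp_matches(answer, '\s+$')     -- trailing whitespace at the end of the answer
    GROUP BY answer
    ORDER BY n DESC
    """
).df()
print(f"{len(leaky)} distinct answers with trailing whitespace")
```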
Hugging Face (Twitter)

RT @calebfahlgren: Benchmark Leaderboards seen in the wild on @huggingface 👀🏆
Hugging Face (Twitter)

RT @RisingSayak: In case anyone missed it -- new models were shipped in Diffusers this week.

1️⃣ Flux.2 Klein - significantly more consumer-friendly than Flux.2

2️⃣ GLM Image - AR + Diffusion Decoder

Check 'em out!
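A hedged sketch of trying one of the new checkpoints through the generic DiffusionPipeline entry point; the repo id is an assumption, so check the Diffusers release notes for the exact names.

```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical repo id -- DiffusionPipeline resolves the right pipeline class
# from the checkpoint's model_index.json.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-Klein",
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("a watercolor fox in a snowy forest", num_inference_steps=28).images[0]
image.save("fox.png")
```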
Hugging Face (Twitter)

RT @mervenoyann: real-time vision in your browser 🔥

try out YOLO26 for pose estimation and detection built on WebGPU ⚡️
Hugging Face (Twitter)

RT @staghado: 🚀 LightOnOCR-2-1B 🦉 is out, a major update to LightOnOCR.
1B parameters, end-to-end multilingual OCR, and it beats models 9× larger on OlmOCR-Bench while being much faster.
PDF/page in, clean ordered Markdown out, with optional image localization (bbox variants).
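A hedged sketch of what running a page through the model could look like with Transformers; the repo id, the Auto classes, and the prompt format are assumptions based on how similar VLM-style OCR models are loaded, so the model card is the authoritative reference.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "lightonai/LightOnOCR-2-1B"  # hypothetical repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id).to("cuda")

# One rendered PDF page in, Markdown out.
page = Image.open("page.png")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Convert this page to Markdown."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(images=page, text=prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(out[0], skip_special_tokens=True))
```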
Hugging Face (Twitter)

RT @evalstate: Open Weights & Open Source vs Claude Code. How does [Toad's] fractal on the noscript page work? Sped up 4x. A mixture of zai-org/GLM-4.7 and openai/gpt-oss via Hugging Face Inference Providers. Join Toad Explorers and get $20 of inference provider credits
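A hedged sketch of calling one of the models mentioned above through Hugging Face Inference Providers from Python; the exact model ids available on providers are assumptions.

```python
from huggingface_hub import InferenceClient

client = InferenceClient()  # reads HF_TOKEN from the environment

resp = client.chat_completion(
    model="openai/gpt-oss-20b",  # example id; the post also mentions zai-org/GLM-4.7
    messages=[{"role": "user", "content": "Write a one-line docstring for a fractal renderer."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```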
Hugging Face (Twitter)

RT @ariG23498: We (w/ @RisingSayak) are going to represent @huggingface at the PyTorch Day India event this year.

Here is the plan: we go to Bengaluru, talk about Transformers and Kernels, converse with like-minded folks, and then leave. Short and sweet!

If you are still on the fence about attending, we'll give you another good reason to. 🤗

See you there.
Hugging Face (Twitter)

RT @RisingSayak: Some notes on what someone can do for building chops in ML x open source x modeling:

• Take a popular pre-trained model implementation, profile it, spot the bottlenecks, & try to improve its speed-memory trade-off -- it's a valuable skill that any sane hiring manager should understand and credit (they are probably not legit if they don't); see the profiling sketch after this list for a minimal starting point.

• GPUs are in short supply. So, try reimplementing it in JAX, leveraging its strengths. Make it run on TPUs, blazing fast 🔥 -- this will help you establish that you care about performance and are comfortable switching stacks when needed. https://github.com/sanchit-gandhi/whisper-jax is an amazing example of this.

• In the context of an organization, communication is key. Make sure you document your work in an easily digestible way so that most folks can understand what you achieved. Provide numbers on benchmarks, mention assumptions, and note whatever limitations you faced and how you approached them.

• Get a pro subscription to whatever AI coding assistant you think works best for your stuff. Make it a part of your workflow, but DO NOT become overly reliant on it. Have enough juice in the process so that you can build muscle memory and objective evidence of your intellect over time.

• Have fun!

It gives me a sense of joy and relief to know that back in the day, we did all of this happily WITHOUT any AI coding assistance. Lots of fun, despair, and anxiety -- but all worth it; 10/10 -- would do it again!
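A minimal profiling sketch for the first bullet, assuming PyTorch and an example checkpoint; swap in whatever model you are actually studying.

```python
import torch
from torch.profiler import profile, ProfilerActivity
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # example model only
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

inputs = tok("Profiling is a skill worth building.", return_tensors="pt")

# Profile a single forward pass on CPU and record op shapes.
with torch.no_grad(), profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(**inputs)

# Sort by self CPU time to see which ops dominate the forward pass.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
```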
Hugging Face (Twitter)

RT @ltx_model: 2,000,000 @huggingface downloads!

LTX-2 reached this milestone the way we believe it should. Built in the open, shaped by real-world use, and driven by the community.

Thank you to everyone experimenting, building, and pushing it forward.

Looking ahead to what’s next.
Hugging Face (Twitter)

RT @overworld_ai: Step in, move around, and see the world update as you act.

We’ve put up a live Hugging Face demo of our real-time world model so you can check it out.
Hugging Face (Twitter)

RT @nvidianewsroom: 🌍 Weather forecasting has always relied on powerful supercomputers running physics-based models.

We are proud to announce the NVIDIA Earth-2 family of open models — the world’s first fully open, accelerated AI weather stack — saving computational time and costs, and enabling more nations, enterprises, and businesses to run application-specific forecasting systems.

Weather AI is now accessible worldwide at every stage. #AMS2026

Read more: nvda.ws/4sWQ2B4
Hugging Face (Twitter)

RT @LysandreJik: Transformers v5's FINAL, stable release is out 🔥 Transformers' biggest release.

The big Ws of this release:
- Performance, especially for MoE (6x-11x speedups)
- No more slow/fast tokenizers -> way simpler API, explicit backends, better performance
- Dynamic weight loading: way faster, and it enables MoE to work with {quants, tp, peft, ...}

We have a migration guide on the main branch; please take a look at it in case you run into issues. Drop by our GH issues if you still hit problems after reading it 😀
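A hedged sketch of the kind of MoE + quantization combination the release notes point at, using the long-standing Auto API; nothing here is v5-specific, and the model id is only an example -- the migration guide is the authoritative reference.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # example MoE checkpoint
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,   # requires bitsandbytes
    device_map="auto",
)

prompt = "Summarize what changed in Transformers v5 in one sentence."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```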
Hugging Face (Twitter)

RT @allen_ai: Molmo 2 (8B) is now available via @huggingface Inference Providers, courtesy of Public AI.

State-of-the-art video understanding with pointing, counting, & multi-frame reasoning. Track objects through scenes and identify where + when events occur. 🧵
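A hedged sketch of querying the model through Inference Providers with a single image frame; the repo id is an assumption (check the Molmo 2 model card), and video inputs may need a different request shape than shown here.

```python
from huggingface_hub import InferenceClient

client = InferenceClient()  # reads HF_TOKEN from the environment

resp = client.chat_completion(
    model="allenai/Molmo-2-8B",  # hypothetical repo id
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/frame.jpg"}},
            {"type": "text", "text": "How many people are in this frame, and where are they?"},
        ],
    }],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```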