DoomPosting – Telegram
Degens Deteriorating
The H-1B crackdown is $100k per visa per year. Absolutely nuked lmao

GGs, scammers

The Indians are not happy

Reddit is also not happy. They are blaming the Indians

Trump signing the H-1B reform Executive Order. Howard Lutnick notes: the $100k fee for an H-1B Visa must be paid PER YEAR

NEW Bloomberg: Trump to Add New $100,000 Fee for H-1B Visas in Latest Crackdown

H-1B Visa fraud is a massive problem in America

More extreme leftist groups to be designated terrorist groups:

"We have others we're going to designate too, but we're going to look at the people that funded Antifa, see who they are, where they came from and why they did it."

Candace Owens says she has now viewed the footage from the camera positioned behind Charlie Kirk at the moment he was taken out.

She claims the video is not gruesome and, from that angle, shows no blood at all.

Owens adds that she saw no sign of a bullet exiting from behind Charlie, leaving her even more confused.

Running Qwen3 8B thinking on an iPhone Air with MLX.

The model is quantized to 4-bit and runs pretty well.
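
For anyone wanting to reproduce the gist on a Mac, here is a minimal sketch using the mlx_lm Python package (the iPhone demo itself would go through the MLX Swift APIs instead); the 4-bit model id is an assumption based on the usual mlx-community naming:

```python
# Minimal sketch: load a 4-bit quantized Qwen3 8B with mlx_lm and generate.
# The repo id below is an assumption (mlx-community naming convention);
# swap in whichever 4-bit conversion you actually have.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-8B-4bit")

prompt = "Briefly explain what 4-bit quantization trades away."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```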

Finetune DeepSeek with two Mac Studios + MLX

We use pipeline parallelism to split the full 671GB model across two devices connected by a single TB5 cable.
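
As a back-of-the-envelope illustration of what that split buys (names and partitioning are illustrative, not EXO's actual scheduler):

```python
# Toy sketch of the two-way pipeline split: each Mac Studio holds a contiguous
# slice of the transformer blocks, so each one only needs to fit roughly half
# of the 671 GB of weights.
NUM_BLOCKS = 61   # DeepSeek-V3/R1 has 61 transformer blocks
MODEL_GB = 671

split = NUM_BLOCKS // 2
stages = {
    "mac-studio-0": range(0, split),            # blocks 0..29
    "mac-studio-1": range(split, NUM_BLOCKS),   # blocks 30..60
}
for device, blocks in stages.items():
    weight_gb = MODEL_GB * len(blocks) / NUM_BLOCKS
    print(f"{device}: {len(blocks)} blocks, ~{weight_gb:.0f} GB of weights")
```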

LoRA reduces the number of trainable parameters from 671 billion down to 37 million, cutting the memory overhead from gradients and optimizer state. After training, these small LoRA adapters can be imported into the EXO 1.0 inference engine (open source coming soon™), demonstrating our successful finetune.
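
To see why the trainable-parameter count collapses like that, here is a minimal LoRA wrapper around an MLX linear layer; the rank, scale, and names are illustrative, not the adapters EXO actually trains:

```python
import mlx.core as mx
import mlx.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen Linear with a low-rank update: y = Wx + (alpha/r) * B(Ax)."""
    def __init__(self, linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        out_dims, in_dims = linear.weight.shape
        linear.freeze()               # base weights stay fixed: no grads, no optimizer state
        self.linear = linear
        self.scale = alpha / r
        # Only these two small matrices are trained: r*(in_dims + out_dims)
        # parameters instead of in_dims*out_dims for the full layer.
        self.lora_a = mx.random.normal((r, in_dims)) * 0.01
        self.lora_b = mx.zeros((out_dims, r))

    def __call__(self, x):
        return self.linear(x) + self.scale * ((x @ self.lora_a.T) @ self.lora_b.T)
```

Freezing the base layer is what removes its gradient and optimizer-state memory; only the two small adapter matrices carry both.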

Using the most basic pipeline schedule, we see a training throughput of ~30 tok/s. Our 2.5M-token dataset finished training in about a day.
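(Sanity check: 2.5M tokens / 30 tok/s ≈ 83,000 s ≈ 23 hours, roughly consistent with "about a day".)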

There’s a lot more we can do to improve throughput. Better pipeline scheduling will let us overlap micro-batches, so adding more devices actually helps. Tensor parallelism will bring near-linear scaling in the number of nodes.
