(Only Russian-speaking people would understand)
Apple has invented Babushkin's archiver for LLMs 💀💀💀
https://neolurk.org/wiki/%D0%90%D0%BB%D0%B5%D0%BA%D1%81%D0%B5%D0%B9_%D0%91%D0%B0%D0%B1%D1%83%D1%88%D0%BA%D0%B8%D0%BD#%D0%90%D1%80%D1%85%D0%B8%D0%B2%D0%B0%D1%82%D0%BE%D1%80
https://machinelearning.apple.com/research/seedlm-compressing
Apple Machine Learning Research
SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators
Large Language Models (LLMs) have transformed natural language processing, but face significant challenges in widespread deployment due to…
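For context on why the Babushkin comparison lands: SeedLM stores, for each block of weights, a seed for a pseudo-random generator plus a handful of coefficients, and regenerates the block at inference time from the PRNG output. Below is a toy sketch of that idea; the real paper uses LFSR-generated matrices and quantized coefficients, and the function names, block size, and NumPy PRNG here are illustrative assumptions, not Apple's implementation.

```python
import numpy as np

def compress_block(w, n_seeds=256, k=4):
    """Toy SeedLM-style compression: pick the PRNG seed whose
    generated basis best reconstructs the weight block w."""
    best = None
    for seed in range(n_seeds):
        U = np.random.default_rng(seed).standard_normal((len(w), k))
        # least-squares coefficients for w ≈ U @ c
        c, *_ = np.linalg.lstsq(U, w, rcond=None)
        err = np.linalg.norm(w - U @ c)
        if best is None or err < best[0]:
            best = (err, seed, c)
    _, seed, c = best
    return seed, c  # store only the seed and a few coefficients

def decompress_block(seed, c, block_size):
    # regenerate the same pseudo-random basis from the seed
    U = np.random.default_rng(seed).standard_normal((block_size, len(c)))
    return U @ c

# usage: one block of 8 weights -> 1 seed + 4 coefficients
w = np.random.randn(8).astype(np.float32)
seed, c = compress_block(w)
w_hat = decompress_block(seed, c, block_size=8)
print(seed, np.linalg.norm(w - w_hat))
```

The joke, of course, is that Babushkin's legendary "archiver" also claimed to shrink data down to a tiny key; the difference is that SeedLM is lossy and only approximates each block.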
The craziest part is not that Meta rigged the public benchmark results for Llama 4, but that people voted for the option on the left as the better one 💀💀💀
https://www.theverge.com/meta/645012/meta-llama-4-maverick-benchmarks-gaming