Linkstream
Various links I find interesting. Mostly hardcore tech :) // by @oleksandr_now. See @notatky for the personal stuff
xz/liblzma backdoor!
the infosec world is getting more and more interesting

https://www.openwall.com/lists/oss-security/2024/03/29/4
https://arxiv.org/abs/2404.09937v1

Compression Represents Intelligence Linearly

There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the success of large language models (LLMs): the development of more advanced language models is essentially enhancing compression which facilitates intelligence.
(...)
Given the abstract concept of "intelligence", we adopt the average downstream benchmark scores as a surrogate, specifically targeting intelligence related to knowledge and commonsense, coding, and mathematical reasoning. Across 12 benchmarks, our study brings together 30 public LLMs that originate from diverse organizations. Remarkably, we find that LLMs' intelligence -- reflected by average benchmark scores -- almost linearly correlates with their ability to compress external text corpora.

These results provide concrete evidence supporting the belief that superior compression indicates greater intelligence.

Furthermore, our findings suggest that compression efficiency, as an unsupervised metric derived from raw text corpora, serves as a reliable evaluation measure that is linearly associated with the model capabilities. We open-source our compression datasets as well as our data collection pipelines to facilitate future researchers to assess compression properly.
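The metric behind the paper is simple: a language model's average negative log2-probability per character is exactly the rate an ideal arithmetic coder would achieve with that model, so "compresses better" and "predicts better" are the same number. A minimal sketch of that metric, using a toy character-bigram model instead of an LLM (the function name and smoothing choice are my own, not from the paper):

```python
import math
from collections import Counter

def bits_per_char(train: str, test: str) -> float:
    """Estimate the cost of encoding `test`, in bits per character,
    under a character-bigram model fit on `train` (add-one smoothing).
    Lower = better compression = better prediction; the paper computes
    the same quantity with LLM token probabilities over large corpora."""
    alphabet = set(train) | set(test)
    V = len(alphabet)
    bigrams = Counter(zip(train, train[1:]))   # counts of (prev, cur) pairs
    unigrams = Counter(train[:-1])             # counts of contexts
    total_bits = 0.0
    for prev, cur in zip(test, test[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)
        total_bits += -math.log2(p)            # ideal arithmetic-coding cost
    return total_bits / max(len(test) - 1, 1)
```

Text the model predicts well compresses to well under a bit per character; text that breaks its expectations costs more.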
Llama3 in your browser via WebGPU, client-side!
(don't forget to pick Llama 3, because the default is TinyLlama)

https://secretllama.com/
Every business process can be improved by speeding it up; these speedups then accumulate and cause phase changes, often irreversible. Beautifully described by Tiago here:

https://wz.ax/tiago/the-throughput-of-learning
Kaggle Expert level reached with an end-to-end competition solver
(the paper claims Grandmaster, but ahem, not quite)
https://wz.ax/agent-k/2411.03562
Alibaba's updated Qwen 2.5 Coder model
a) is at a solid GPT-4 level in code generation (okay),
but also
b) does that with less than 0.1x the resources of the SOTA (Llama 3.1) from just 3 months ago 🤯 (tested both myself)
https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct