Linkstream
Various links I find interesting. Mostly hardcore tech :) // by @oleksandr_now. See @notatky for the personal stuff
TIL this is possible in the general case. Neat!

> SQL-99 allows for nested subqueries at nearly all places within a query. From a user’s point of view, nested queries can greatly simplify the formulation of complex queries.
>
> However, nested queries that are correlated with the outer queries frequently lead to dependent joins with nested loops evaluations and thus poor performance.
>
> We present a generic approach for unnesting arbitrary SQL queries. As a result, the de-correlated queries allow for much simpler and much more efficient query evaluation.

https://btw-2015.informatik.uni-hamburg.de/res/proceedings/Hauptband/Wiss/Neumann-Unnesting_Arbitrary_Querie.pdf
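
To make the correlated-vs-unnested distinction concrete, here is a minimal sqlite3 sketch. The students/exams schema and the manual rewrite are my own illustration of the idea, not the paper's general algorithm: the correlated form re-evaluates the subquery once per outer row (a dependent join with nested-loops flavor), while the unnested form computes the per-student aggregate once and joins it back.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE exams    (sid INTEGER, course TEXT, grade INTEGER);
    INSERT INTO students VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO exams VALUES (1, 'DB', 1), (1, 'OS', 3), (2, 'DB', 2), (2, 'OS', 2);
""")

# Correlated: the subquery references s.id, so a naive plan re-runs it
# for every outer row (a dependent join).
correlated = """
    SELECT s.name, e.course
    FROM students s JOIN exams e ON e.sid = s.id
    WHERE e.grade = (SELECT MIN(e2.grade) FROM exams e2 WHERE e2.sid = s.id)
    ORDER BY s.name, e.course
"""

# Unnested: the per-student minimum is computed once with GROUP BY and
# joined back, so the optimizer is free to pick a hash or merge join.
unnested = """
    SELECT s.name, e.course
    FROM students s
    JOIN exams e ON e.sid = s.id
    JOIN (SELECT sid, MIN(grade) AS best FROM exams GROUP BY sid) m
      ON m.sid = s.id AND e.grade = m.best
    ORDER BY s.name, e.course
"""

assert con.execute(correlated).fetchall() == con.execute(unnested).fetchall()
print(con.execute(unnested).fetchall())  # [('Ann', 'DB'), ('Bob', 'DB'), ('Bob', 'OS')]
```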
👍1
TRUFFLE-1 $1,299
Truffle-1 is an AI inference engine designed to run open-source models at home on 60 watts.

https://preorder.itsalltruffles.com/features
xz/liblzma backdoor!
the infosec world is getting more and more interesting

https://www.openwall.com/lists/oss-security/2024/03/29/4
🤯3
https://arxiv.org/abs/2404.09937v1

Compression Represents Intelligence Linearly

There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the success of large language models (LLMs): the development of more advanced language models is essentially enhancing compression which facilitates intelligence.
(...)
Given the abstract concept of "intelligence", we adopt the average downstream benchmark scores as a surrogate, specifically targeting intelligence related to knowledge and commonsense, coding, and mathematical reasoning. Across 12 benchmarks, our study brings together 30 public LLMs that originate from diverse organizations. Remarkably, we find that LLMs' intelligence -- reflected by average benchmark scores -- almost linearly correlates with their ability to compress external text corpora.

These results provide concrete evidence supporting the belief that superior compression indicates greater intelligence.

Furthermore, our findings suggest that compression efficiency, as an unsupervised metric derived from raw text corpora, serves as a reliable evaluation measure that is linearly associated with the model capabilities. We open-source our compression datasets as well as our data collection pipelines to facilitate future researchers to assess compression properly.
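
The metric is easy to reproduce at toy scale. A minimal sketch, assuming a Hugging Face causal LM ("gpt2" here as a stand-in, not one of the paper's 30 models) and a short made-up string instead of the paper's corpora: via arithmetic coding, a model that assigns probability p to the next token can encode it in -log2(p) bits, so the average cross-entropy over a text directly gives its compression rate in bits per character (BPC).

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in model; the paper evaluates 30 public LLMs
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

text = "Compression and prediction are two sides of the same coin."  # toy corpus
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    # With labels=input_ids the model returns the mean next-token
    # negative log-likelihood (in nats) over the predicted positions.
    nll_nats = model(input_ids=ids, labels=ids).loss.item()

n_predicted = ids.shape[1] - 1                     # the first token is never predicted
total_bits = nll_nats * n_predicted / math.log(2)  # nats -> bits
bpc = total_bits / len(text)                       # bits per character
print(f"{name}: {bpc:.3f} BPC (lower = better compression)")
```

The paper's result is that this number, averaged over large knowledge, code, and math corpora, tracks the corresponding benchmark scores almost linearly across models.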
👾1
Llama3 in your browser via WebGPU, client-side!
(don't forget to pick Llama3, since the default is TinyLlama)

https://secretllama.com/
👍21