insecure boot, huh
https://arstechnica.com/security/2024/02/critical-vulnerability-affecting-most-linux-distros-allows-for-bootkits/
Ars Technica
Critical vulnerability affecting most Linux distros allows for bootkits
Buffer overflow in bootloader shim allows attackers to run code each time devices boot up.
😁2😱2
TIL this is possible in the general case. Neat!
> SQL-99 allows for nested subqueries at nearly all places within a query.
From a user’s point of view, nested queries can greatly simplify the formulation of complex queries.
However, nested queries that are correlated with the outer queries frequently lead to dependent joins with nested loops evaluations and thus poor performance.
We present a generic approach for unnesting arbitrary SQL queries. As a result, the de-correlated queries allow for much simpler and much more efficient query evaluation.
https://btw-2015.informatik.uni-hamburg.de/res/proceedings/Hauptband/Wiss/Neumann-Unnesting_Arbitrary_Querie.pdf
👍1
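To make the idea concrete, here's a toy sqlite3 sketch of the classic rewrite that the paper generalizes: a correlated subquery turned into a join against a grouped aggregate. The schema and data are made up just for the demo; the paper's algorithm handles far more general cases.
```python
# Toy illustration of de-correlating a nested subquery (the simple case the paper generalizes).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (name TEXT, dept TEXT, salary INT);
    INSERT INTO emp VALUES
        ('a', 'eng', 120), ('b', 'eng', 90),
        ('c', 'ops', 70),  ('d', 'ops', 95);
""")

# Correlated form: the subquery references e.dept, so a naive plan re-evaluates it
# per outer row (a dependent join with nested-loops evaluation).
correlated = """
    SELECT e.name FROM emp e
    WHERE e.salary > (SELECT AVG(e2.salary) FROM emp e2 WHERE e2.dept = e.dept)
"""

# De-correlated form: compute the aggregate once per dept, then join.
decorrelated = """
    SELECT e.name
    FROM emp e
    JOIN (SELECT dept, AVG(salary) AS avg_sal FROM emp GROUP BY dept) d
      ON d.dept = e.dept
    WHERE e.salary > d.avg_sal
"""

assert sorted(conn.execute(correlated)) == sorted(conn.execute(decorrelated))
print(sorted(conn.execute(decorrelated)))  # [('a',), ('d',)]
```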
TRUFFLE-1, $1,299
Truffle-1 is an AI inference engine designed to run open-source models at home on 60 watts.
https://preorder.itsalltruffles.com/features
super detailed explanation of the CVE-2024-1086 Linux v5.14-v6.7 privilege escalation exploit
> I hope beginners will learn from my VR [vulnerability research] workflow and the seasoned researchers will learn from my techniques.
https://pwning.tech/nftables/
Pwning Tech
Flipping Pages: An analysis of a new Linux vulnerability in nf_tables and hardened exploitation techniques
A tale about exploiting KernelCTF Mitigation, Debian, and Ubuntu instances with a double-free in nf_tables in the Linux kernel, using novel techniques like Dirty Pagedirectory. All without even having to recompile the exploit for different kernel targets…
🤔1
xz/liblzma backdoor!
the infosec world is getting more and more interesting
https://www.openwall.com/lists/oss-security/2024/03/29/4
🤯3
block-traffic-we-cant-analyze /sigh/
https://community.cloudflare.com/t/russia-blocks-tls-v1-2-requests-to-cloudflare-edges/636460
Cloudflare Community
Russia blocks TLS v1.2 requests to cloudflare edges
There are a lot of reports about connection issues from Russia when: a) Connecting to a Cloudflare-proxied website that has TLS v1.3 explicitly disabled in the Cloudflare dashboard (examples: app.plex.tv, vrchat.com) b) Using specific network stacks like .NET's…
😱1
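If you want to see what actually gets negotiated from your own vantage point, a quick sanity check is to open a connection pinned to each protocol version. This is a generic Python ssl sketch, nothing Cloudflare-specific; swap in whatever host you care about.
```python
# Check which TLS versions a server will negotiate from your network.
# Generic diagnostic sketch; replace the hostname with the site you care about.
import socket
import ssl

def probe(host: str, version: ssl.TLSVersion) -> str:
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version  # pin the handshake to exactly this version
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'
    except (ssl.SSLError, OSError) as e:
        return f"failed: {e}"

host = "example.com"  # put a Cloudflare-proxied host here
for v in (ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    print(v.name, "->", probe(host, v))
```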
Llama 3 released today
https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md
UPD: it seems quantized versions for llama.cpp are already available, though surprisingly not from TheBloke %)
https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct-GGUF
GitHub
llama3/MODEL_CARD.md at main · meta-llama/llama3
The official Meta Llama 3 GitHub site. Contribute to meta-llama/llama3 development by creating an account on GitHub.
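In case anyone wants to try the GGUF right away, here's a rough sketch using huggingface_hub plus llama-cpp-python. I haven't checked the exact file names in that repo, so treat the .gguf filename as a placeholder.
```python
# Rough sketch: run a quantized Llama 3 8B Instruct GGUF locally via llama.cpp bindings.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="NousResearch/Meta-Llama-3-8B-Instruct-GGUF",
    filename="Meta-Llama-3-8B-Instruct-Q4_K_M.gguf",  # placeholder name, verify in the repo's file list
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```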
interesting. TLDR:
they say mimesis (~ conformism) enables cooperation under uncertainty, while Asperger-style first-principles thinking (~ nonconformism) enables diversity; both seem to be important for civilization to operate
https://twitter.com/Altimor/status/1780846658387124551
X (formerly Twitter)
Flo Crivello (@Altimor) on X
Many people are hating on this video, but I actually think it's a fascinating display of the two very distinct modes that exist to relate with reality: mimesis vs. first principles thinking.
95% of people operate by mimesis. Truth doesn't matter to them…
👍3
https://arxiv.org/abs/2404.09937v1
Compression Represents Intelligence Linearly
There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the success of large language models (LLMs): the development of more advanced language models is essentially enhancing compression which facilitates intelligence.
(...)
Given the abstract concept of "intelligence", we adopt the average downstream benchmark scores as a surrogate, specifically targeting intelligence related to knowledge and commonsense, coding, and mathematical reasoning. Across 12 benchmarks, our study brings together 30 public LLMs that originate from diverse organizations. Remarkably, we find that LLMs' intelligence -- reflected by average benchmark scores -- almost linearly correlates with their ability to compress external text corpora.
These results provide concrete evidence supporting the belief that superior compression indicates greater intelligence.
Furthermore, our findings suggest that compression efficiency, as an unsupervised metric derived from raw text corpora, serves as a reliable evaluation measure that is linearly associated with the model capabilities. We open-source our compression datasets as well as our data collection pipelines to facilitate future researchers to assess compression properly.
arXiv.org
Compression Represents Intelligence Linearly
There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the...
👾1
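The headline measurement is simple: treat the model's negative log-likelihood on a held-out corpus as a compression rate (bits per character) and correlate it with average benchmark scores. A back-of-the-envelope sketch of that computation, with made-up numbers standing in for the paper's actual models and datasets:
```python
# Sketch of the paper's headline measurement: compression rate (bits per character)
# vs. average benchmark score, with a linear fit. The numbers below are invented.
import numpy as np

def bits_per_char(total_nll_nats: float, n_chars: int) -> float:
    # Summed NLL in nats over the corpus -> bits, normalized by character count.
    return total_nll_nats / np.log(2) / n_chars

# Hypothetical (BPC, avg benchmark score) pairs for a handful of models -- illustrative only.
bpc    = np.array([0.95, 0.85, 0.78, 0.70, 0.62])
scores = np.array([35.0, 42.0, 48.0, 55.0, 63.0])

# Pearson correlation and a least-squares line, as in the paper's linear-fit plots.
r = np.corrcoef(bpc, scores)[0, 1]
slope, intercept = np.polyfit(bpc, scores, 1)
print(f"pearson r = {r:.3f}, fit: score ~ {slope:.1f} * bpc + {intercept:.1f}")
```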
Wolfram's writings about irreducible complexity are irreducible and hence un-abstractable :)
https://writings.stephenwolfram.com/2024/05/why-does-biological-evolution-work-a-minimal-model-for-biological-evolution-and-other-adaptive-processes/
Stephen Wolfram Writings
Why Does Biological Evolution Work? A Minimal Model for Biological Evolution and Other Adaptive Processes
Stephen Wolfram explores simple models of biological organisms as computational systems. A study of progressive development, multiway graphs of all possible paths and the need for narrowing the framework space.
Llama 3 in your browser via WebGPU, client-side!
(don't forget to pick Llama 3, because the default is TinyLlama)
https://secretllama.com/
👍2⚡1
every business process can be improved by speeding it up; then these speedups accumulate and cause phase changes, often irreversible; beautifully described by Tiago here
https://wz.ax/tiago/the-throughput-of-learning
👀1
you do want to learn how to build apps that work without internet, right? also hilarious slides inside
https://www.youtube.com/watch?v=EAxnA9L5rS8
YouTube
!!Con 2020 - 89 characters of base-11?! Mobile networking in rural Ethiopia! by Ben Kuhn
Suppose you’re trying to build a client-server app that works in rural Ethiopia. Mobile data there doesn’t work most of the time! Of course, you’re not going to let that stop you……
👍1
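The arithmetic in the title is fun on its own: 89 base-11 characters carry about 89 * log2(11) ≈ 308 bits, i.e. roughly 38 bytes per message. A tiny Python sketch of packing bytes into base-11 digits; this is my own toy encoding and symbol set, not necessarily what the talk actually uses.
```python
# Toy bytes -> base-11 packing, just to sanity-check the capacity in the talk title.
# My own illustrative encoding, not the scheme from the talk.
import math

ALPHABET = "0123456789#"  # 11 symbols; the actual symbol set in the talk may differ

def encode(data: bytes) -> str:
    # Leading zero bytes aren't preserved -- good enough for a capacity check.
    n = int.from_bytes(data, "big")
    digits = []
    while True:
        n, r = divmod(n, 11)
        digits.append(ALPHABET[r])
        if n == 0:
            break
    return "".join(reversed(digits))

def capacity_bytes(n_chars: int) -> float:
    return n_chars * math.log2(11) / 8

print(capacity_bytes(89))          # ~38.5 bytes fit in 89 base-11 characters
print(encode(b"hello, ethiopia"))  # a short payload rendered as base-11 digits
```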