TRUFFLE-1 $1,299
Truffle-1 is an AI inference engine designed to run open-source models at home on 60 watts.
https://preorder.itsalltruffles.com/features
super detailed explanation of the CVE-2024-1086 Linux v5.14-v6.7 privilege escalation exploit
https://pwning.tech/nftables/
I hope beginners will learn from my VR (vulnerability research) workflow and seasoned researchers will learn from my techniques.
Pwning Tech
Flipping Pages: An analysis of a new Linux vulnerability in nf_tables and hardened exploitation techniques
A tale about exploiting KernelCTF Mitigation, Debian, and Ubuntu instances with a double-free in nf_tables in the Linux kernel, using novel techniques like Dirty Pagedirectory. All without even having to recompile the exploit for different kernel targets…
🤔1
xz/liblzma backdoor!
the infosec world is getting more and more interesting
https://www.openwall.com/lists/oss-security/2024/03/29/4
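for the record, the backdoored releases are xz/liblzma 5.6.0 and 5.6.1; a rough sketch to check a box (assumes the xz binary is on PATH):

```python
# Rough sketch: flag the known-backdoored xz releases (5.6.0, 5.6.1)
# by parsing the `xz --version` banner. Assumes xz is on PATH.
import re
import subprocess

BACKDOORED = {"5.6.0", "5.6.1"}

banner = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
m = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", banner)
if m is None:
    print("could not parse `xz --version` output")
elif m.group(1) in BACKDOORED:
    print(f"xz {m.group(1)}: known-backdoored release, downgrade/upgrade now")
else:
    print(f"xz {m.group(1)}: not a known-bad release")
```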
🤯3
block-traffic-we-cant-analyze /sigh/
https://community.cloudflare.com/t/russia-blocks-tls-v1-2-requests-to-cloudflare-edges/636460
Cloudflare Community
Russia blocks TLS v1.2 requests to cloudflare edges
There is a lot of reports about connection issues from russia when: a) Connecting to a cloudflare-proxied website that has TLS v1.3 explicitly disabled in cloudflare dashboard (examples: app.plex.tv, vrchat.com) b) Using specific network stacks like .NET’s…
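the symptom, as reported: handshakes pinned to TLS 1.2 get dropped while TLS 1.3 goes through. a minimal sketch to probe this from your own vantage point (stdlib ssl; the hostname is a placeholder, pick a Cloudflare-proxied site you can test against):

```python
# Minimal sketch: check which TLS versions we can negotiate with a host.
# If TLS 1.2 consistently fails while TLS 1.3 succeeds, the interference
# described in the thread may be in play on your network path.
import socket
import ssl

HOST = "example.com"  # placeholder hostname

for version in (ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version  # pin the handshake to exactly one version
    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{version.name}: negotiated {tls.version()}")
    except Exception as exc:
        print(f"{version.name}: failed ({exc!r})")
```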
😱1
Llama 3 released today
https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md
UPD: it seems quantized versions for llama.cpp are already available, though surprisingly not from TheBloke %)
https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct-GGUF
GitHub
llama3/MODEL_CARD.md at main · meta-llama/llama3
The official Meta Llama 3 GitHub site. Contribute to meta-llama/llama3 development by creating an account on GitHub.
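a minimal sketch of running one of those GGUFs locally via the llama-cpp-python bindings (the filename below is illustrative; use whichever quant you downloaded from the HF repo):

```python
# Minimal sketch: run a quantized Llama 3 8B Instruct GGUF locally with
# llama-cpp-python (pip install llama-cpp-python). Model path is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct-Q4_K_M.gguf",
    n_ctx=8192,       # Llama 3 context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```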
interesting. TLDR:
they say mimesis ~ conformism enables cooperation under uncertainty; asperger ~ nonconformism enables diversity; both seem to be important for civilization to operate
https://twitter.com/Altimor/status/1780846658387124551
X (formerly Twitter)
Flo Crivello (@Altimor) on X
Many people are hating on this video, but I actually think it's a fascinating display of the two very distinct modes that exist to relate with reality: mimesis vs. first principles thinking.
95% of people operate by mimesis. Truth doesn't matter to them…
👍3
https://arxiv.org/abs/2404.09937v1
Compression Represents Intelligence Linearly
There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the success of large language models (LLMs): the development of more advanced language models is essentially enhancing compression which facilitates intelligence.
(...)
Given the abstract concept of "intelligence", we adopt the average downstream benchmark scores as a surrogate, specifically targeting intelligence related to knowledge and commonsense, coding, and mathematical reasoning. Across 12 benchmarks, our study brings together 30 public LLMs that originate from diverse organizations. Remarkably, we find that LLMs' intelligence -- reflected by average benchmark scores -- almost linearly correlates with their ability to compress external text corpora.
These results provide concrete evidence supporting the belief that superior compression indicates greater intelligence.
Furthermore, our findings suggest that compression efficiency, as an unsupervised metric derived from raw text corpora, serves as a reliable evaluation measure that is linearly associated with the model capabilities. We open-source our compression datasets as well as our data collection pipelines to facilitate future researchers to assess compression properly.
arXiv.org
Compression Represents Intelligence Linearly
There is a belief that learning to compress well will lead to intelligence. Recently, language modeling has been shown to be equivalent to compression, which offers a compelling rationale for the...
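the metric itself is trivial to reproduce: an LM's cross-entropy on a corpus is its arithmetic-coding code length, so bits per character fall straight out of the loss. a rough sketch (gpt2 here is just a stand-in for the 30 public LLMs they evaluate):

```python
# Rough sketch of the metric: mean NLL on a corpus, converted from nats
# to bits and normalized per character, measures how well the model
# compresses the text. Model choice is illustrative.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

text = "Compression and prediction are two sides of the same coin. " * 20
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    loss = model(ids, labels=ids).loss  # mean NLL in nats per predicted token

n_pred = ids.numel() - 1  # labels are shifted by one inside the model
bits_total = loss.item() / math.log(2) * n_pred
print(f"bits per character: {bits_total / len(text):.3f}")
```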
👾1
Wolfram's writings about computational irreducibility are themselves irreducible and hence un-abstractable :)
https://writings.stephenwolfram.com/2024/05/why-does-biological-evolution-work-a-minimal-model-for-biological-evolution-and-other-adaptive-processes/
Stephen Wolfram Writings
Why Does Biological Evolution Work? A Minimal Model for Biological Evolution and Other Adaptive Processes
Stephen Wolfram explores simple models of biological organisms as computational systems. A study of progressive development, multiway graphs of all possible paths and the need for narrowing the framework space.
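the model in the post is strikingly small: mutate at random, keep the mutant whenever fitness doesn't drop, repeat. a toy sketch of that skeleton (the fitness function below is a stand-in; Wolfram uses the lifetime of a cellular-automaton pattern, not this toy landscape):

```python
# Toy sketch of the adaptive-evolution skeleton from the post: random
# single-point mutations, kept whenever fitness doesn't decrease.
import random

random.seed(0)
GENOME_LEN = 32

def fitness(genome: tuple) -> int:
    # Stand-in fitness: reward the longest run of consecutive 1s.
    best = run = 0
    for g in genome:
        run = run + 1 if g else 0
        best = max(best, run)
    return best

genome = tuple(random.randint(0, 1) for _ in range(GENOME_LEN))
for step in range(10_000):
    i = random.randrange(GENOME_LEN)
    mutant = genome[:i] + (1 - genome[i],) + genome[i + 1:]
    if fitness(mutant) >= fitness(genome):  # neutral moves allowed
        genome = mutant

print("final fitness:", fitness(genome))
```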
Llama 3 in your browser via WebGPU, client-side!
(don't forget to pick Llama 3, because the default is TinyLlama)
https://secretllama.com/
👍2⚡1
every business process can be improved by speeding it up; then these speedups accumulate and cause phase changes, often irreversible; beautifully described by Tiago here
https://wz.ax/tiago/the-throughput-of-learning
👀1
you do want to learn how to build apps that work without internet, right? also hilarious slides inside
https://www.youtube.com/watch?v=EAxnA9L5rS8
YouTube
!!Con 2020 - 89 characters of base-11?! Mobile networking in rural Ethiopia! by Ben Kuhn
Suppose you’re trying to build a client-server app that works in rural Ethiopia. Mobile data there doesn’t work most of the time! Of course, you’re not going to let that stop you……
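the arithmetic behind the title, if you're curious: 89 symbols over an 11-character alphabet carry 89·log2(11) ≈ 307 bits. a sketch of packing arbitrary bytes into such a string (the digits-plus-'*' alphabet is my guess for illustration, not the talk's exact scheme):

```python
# Sketch: pack arbitrary bytes into a string over an 11-symbol alphabet
# and back. Alphabet is a guess (digits plus '*'), for illustration only.
ALPHABET = "0123456789*"
BASE = len(ALPHABET)  # 11

def encode(data: bytes) -> str:
    # Prefix a 0x01 byte so leading zero bytes survive the round-trip.
    n = int.from_bytes(b"\x01" + data, "big")
    digits = []
    while n:
        n, r = divmod(n, BASE)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

def decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * BASE + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # strip the prefix byte

payload = b"hello"
msg = encode(payload)
assert decode(msg) == payload
print(msg, len(msg), "symbols")
```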
👍1
How academia collapses into a self-repeating echo chamber
https://www.writingruxandrabio.com/p/the-weird-nerd-comes-with-trade-offs
writingruxandrabio.com
The Weird Nerd comes with trade-offs
A metascience post of sorts that argues we should take human capital more seriously
🫡1
exolabs is teasing running an AI cluster built out of your Macs and iPhones
https://fxtwitter.com/mo_baioumy/status/1801322369434173860
FxTwitter / FixupX
Mohamed Baioumy (@mo_baioumy)
One more Apple announcement this week: you can now run your personal AI cluster using Apple devices @exolabs_
h/t @awnihannun
🌚1
the brain activity when we talk is better explained by embeddings than by sounds or words
https://www.cell.com/neuron/fulltext/S0896-6273(24)00460-4
Neuron
A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations
Zada et al. use contextual embeddings from large language models to capture linguistic information transmitted from the speaker’s brain to the listener’s brain in real-time, dyadic conversations.
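methodologically it's an encoding model: regress brain activity onto LLM embeddings of the words being spoken, then check held-out predictions against sound- and word-level baselines. a sketch of the embedding→brain step with placeholder data (shapes are illustrative, not the study's):

```python
# Sketch of an encoding model like the paper's: predict neural activity
# from per-word LLM embeddings, score by held-out correlation.
# Arrays are random placeholders, so the expected correlation is ~0.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, emb_dim, n_channels = 2000, 768, 64
X = rng.standard_normal((n_words, emb_dim))      # per-word LLM embeddings
Y = rng.standard_normal((n_words, n_channels))   # per-word neural response

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
pred = model.predict(X_te)

# Per-channel correlation between predicted and actual activity.
r = [np.corrcoef(pred[:, c], Y_te[:, c])[0, 1] for c in range(n_channels)]
print(f"mean held-out r: {np.mean(r):.3f}")
```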
🔥1🤯1👀1
critical TCP/IP RCE (CVSS 9.8) in Windows
https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-38063
🤣6