TLDR: to exercise agency you need to understand what's around you and the rules of the game, both explicit and implicit ones
https://arxiv.org/abs/2506.01622
arXiv.org
General agents contain world models
Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of...
👍2
for something completely different:
the first 2178 books of the Ritman Library are now digitized & online, safe from natural disasters
https://embassyofthefreemind.com/en/library/online-catalogue/?mode=gallery&view=horizontal&sort=random%7B1517048201764%7D%20asc&page=1&fq%5B%5D=search_s_digitized_publication:%22Ja%22&reverse=0
❤3
chat apps will become history
your artificial limbs and senses should feel real
https://www.nature.com/articles/s44287-025-00218-x.epdf
Nature
Cortical somatosensory feedback for brain-controlled bionic hands
Nature Reviews Electrical Engineering - Somatosensory feedback is an essential feature of neural prostheses that aim to restore natural hand dexterity after neurological injuries. This Comment...
💅1
Physical labs are the next frontier and the ultimate data source for future AIs.
Grounded in real experiments and observations: not interpretations from the web and books, but genuinely /new/ data.
Watch this space.
https://x.com/liamfedus/status/1973055380193431965?s=46
X (formerly Twitter)
William Fedus (@LiamFedus) on X
Today, @ekindogus and I are excited to introduce @periodiclabs.
Our goal is to create an AI scientist.
Science works by conjecturing how the world might be, running experiments, and learning from the results.
Intelligence is necessary, but not sufficient.…
😱1
very interesting!
i wonder if the approach used by the TRM/HRM models can be adapted from ARC-AGI back to other reasoning benchmarks usually run on LMs (rough sketch of the loop below)
https://alexiajm.github.io/2025/09/29/tiny_recursive_models.html
alexiajm.github.io
Less is More
Recursive Reasoning with Tiny Networks
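for context, my rough mental model of the TRM recipe, as a minimal sketch (the dimensions, names and the MLP core below are my assumptions, not the paper's exact setup): one tiny shared network keeps refining a scratchpad latent, then nudges the current answer, and this inner/outer loop repeats a few times.

```python
# Hedged toy sketch of a TRM-style recursive refinement loop (my reading, not the paper's code).
import torch
import torch.nn as nn

class TinyRecursiveReasoner(nn.Module):
    def __init__(self, dim: int = 128, inner_steps: int = 6, outer_steps: int = 3):
        super().__init__()
        self.inner_steps = inner_steps   # latent-refinement iterations per outer step
        self.outer_steps = outer_steps   # answer-update iterations
        # one tiny network reused at every step; "depth" comes from recursion, not parameters
        self.update_latent = nn.Sequential(nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.update_answer = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) embedding of the problem (e.g. an ARC grid)
        y = torch.zeros_like(x)  # current answer guess
        z = torch.zeros_like(x)  # scratchpad latent
        for _ in range(self.outer_steps):
            for _ in range(self.inner_steps):
                z = z + self.update_latent(torch.cat([x, y, z], dim=-1))
            y = y + self.update_answer(torch.cat([y, z], dim=-1))
        return y  # a separate head would decode this into grid cells / tokens

model = TinyRecursiveReasoner()
print(model(torch.randn(4, 128)).shape)  # torch.Size([4, 128])
```

the non-obvious part for LM benchmarks is that here the whole answer is refined in place, while LMs decode token by token.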
TLDR: full stack chatgpt (training, inference, etc etc) in one 8K LOC repo
https://github.com/karpathy/nanochat/discussions/1
GitHub
Introducing nanochat: The best ChatGPT that $100 can buy. · karpathy nanochat · Discussion #1
Ok so we just booted up an 8xH100 box from e.g. Lambda GPU Cloud. This is costing us about ~$24/hr, so there is no time to lose. Environment setup Clone the project: git clone git@github.com:karpat...
attributing this to AI models and ignoring the "first using the simulation" part is totally unfair to the simulation developers.
Still cool!
https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/
Google
How a Gemma model helped discover a new potential cancer therapy pathway
We’re launching a new 27 billion parameter foundation model for single-cell analysis built on the Gemma family of open models.
DIY 2 billion frames per second camera, with a SIMPLE explanation of how it works. Really.
https://www.youtube.com/watch?v=o4TdHrMi6do
YouTube
A laser pointer at 2 billion fps makes the speed of light look... kinda weird
I've upgraded! It took almost a year, but today I finally get to show off a TWO billion frame per second camera! I really want to record refraction, interference, and other awesome stuff with this camera, but today I'm looking into a really strange quirk…
⚡3
aaargh i should've written this paper!! it was intuitively obvious to me but then life happens >_<
tldr: the base LLM is such a powerful prior that with the right sampler (MCMC, in this case) you can even use base models as reasoning models,
without supervised fine-tuning or RL.
this was completely ignored by ppl pilled with the Bitter Lesson mantra, but yes, there is still room for the right priors, added or designed by hand!
obviously MCMC sampling is very costly, but you should compare the overall model feedback-loop time, including post-training, not just the sampling time
if, say, top-k sampling is assembly and Mirostat is COBOL (?), then MCMC sampling is like Python in the space of samplers
https://aakaran.github.io/reasoning_with_sampling/
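the core trick, as I read it, in a toy sketch (NOT the paper's actual algorithm; the "base model" below is a stand-in Markov chain): run Metropolis-Hastings targeting the sharpened distribution p(x)^alpha, using plain base-model suffix rollouts as proposals; the proposal terms cancel against the prior, so acceptance only needs the suffix log-probs.

```python
import math, random

# --- stand-in "base model": a toy Markov chain over three tokens ------------------
VOCAB = ["a", "b", "c"]
P_NEXT = {
    None: [1/3, 1/3, 1/3],          # start-of-sequence distribution
    "a": [0.6, 0.3, 0.1],
    "b": [0.2, 0.5, 0.3],
    "c": [0.1, 0.2, 0.7],
}

def sample_token(ctx):
    """Sample one token from the 'base model' given the context; return (token, logprob)."""
    probs = P_NEXT[ctx[-1] if ctx else None]
    tok = random.choices(VOCAB, weights=probs)[0]
    return tok, math.log(probs[VOCAB.index(tok)])

def rollout(prefix, length):
    """Sample `length` tokens after `prefix`; return (tokens, per-token logprobs)."""
    toks, lps, ctx = [], [], list(prefix)
    for _ in range(length):
        t, lp = sample_token(ctx)
        toks.append(t); lps.append(lp); ctx.append(t)
    return toks, lps

def mh_sharpened(prompt, n_tokens=16, n_steps=200, alpha=4.0):
    """Metropolis-Hastings targeting p(x)^alpha, proposing base-model suffix rollouts."""
    x, lps = rollout(prompt, n_tokens)                        # initial state from the base model
    for _ in range(n_steps):
        i = random.randrange(n_tokens)                        # resample the suffix x[i:]
        new_suf, new_lps = rollout(prompt + x[:i], n_tokens - i)
        # proposal terms cancel against the prior, leaving only the sharpening factor
        log_ratio = (alpha - 1.0) * (sum(new_lps) - sum(lps[i:]))
        if random.random() < math.exp(min(0.0, log_ratio)):   # MH accept/reject
            x, lps = x[:i] + new_suf, lps[:i] + new_lps
    return "".join(x)

print(mh_sharpened(["a"]))   # a sample biased toward the model's high-probability continuations
```

no reward model, no fine-tuning: all the extra compute goes into the sampler.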
turns out, human memory is quite editable (at least it's possible to turn the "brightness" of individual memories up or down)
https://www.nature.com/articles/s41588-025-02368-y
Nature
Cell-type- and locus-specific epigenetic editing of memory expression
Nature Genetics - CRISPR-based epigenetic editing is used in a cell-type-specific, locus-restricted and temporally controllable manner in the adult mouse brain to modulate memory expression.
🆒3
finally an article showing that people can perceive flicker and certain types of motion at rates as high as 500 Hz
(it's kind of personal, i've been gaslit with "hey, you can't possibly see the difference" far too many times.
now at least when ppl don't believe me again I can send them this link)
https://www.nature.com/articles/srep07861
Nature
Humans perceive flicker artifacts at 500 Hz
Scientific Reports - Humans perceive flicker artifacts at 500 Hz
👾2
interesting. small (321M, not 300B!) yet capable models, aka reasoning cores, matter both theoretically and practically
https://pleias.fr/blog/blogsynth-the-new-data-frontier
pleias.fr
SYNTH: the new data frontier
We build reasoning models for advanced context engineering in the agentic AI
https://x.com/lundukejournal/status/1988346904581726501?s=46 no silver bullet for security
X (formerly Twitter)
The Lunduke Journal (@LundukeJournal) on X
Multiple, serious security vulnerabilities found in the Rust clone of Sudo — which shipped with Ubuntu 25.10 (the most recent release).
Not little vulnerabilities: We’re talking about the disclosure of passwords and total bypassing of authentication.
In…
More paranoia for the paranoid out there ^_^
https://h4x0r.org/funreliable/
Timers are a reliable side channel for communicating between containers on the same Linux machine, via /proc/self/ns/time.
h4x0r
Fun-reliable side-channels for cross-container communication
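for intuition, the general shape of such a channel in a toy sketch (this is a generic CPU-contention timing channel, NOT the article's /proc/self/ns/time mechanism): a sender modulates load in agreed time slots, and a receiver recovers the bits by timing its own work.

```python
import time

SLOT = 0.2  # seconds per bit; both sides agree on slot length and start time out of band

def send(bits, t0):
    """Encode bits by hogging the CPU (1) or idling (0) during successive slots."""
    for i, bit in enumerate(bits):
        end = t0 + (i + 1) * SLOT
        if bit:
            while time.monotonic() < end:
                pass                              # burn CPU -> the receiver gets less done
        else:
            time.sleep(max(0.0, end - time.monotonic()))

def receive(n_bits, t0):
    """Recover bits by counting how much work fits into each slot."""
    samples = []
    for i in range(n_bits):
        end = t0 + (i + 1) * SLOT
        work = 0
        while time.monotonic() < end:
            work += 1
        samples.append(work)
    threshold = (max(samples) + min(samples)) / 2
    return [1 if w < threshold else 0 for w in samples]  # contended slots do less work
```

run send() in one container and receive() in another pinned to the same core; per the post above, the timer-based variant is far more reliable than this kind of noisy contention game.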
Claim: isotropic Gaussian regularization of latent representations in world models is mathematically optimal
What's illustrated:
-- Adopting the isotropic Gaussian regularization replaces stop-grad, teacher-student, EMA and various other ad hoc tricks (rough sketch of the idea below)
-- Improves model training stability
-- SOTA quality on 10+ datasets and 50+ architectures
https://arxiv.org/abs/2511.08544, https://github.com/rbalestr-lab/lejepa
arXiv.org
LeJEPA: Provable and Scalable Self-Supervised Learning Without the...
Learning manipulable representations of the world and its dynamics is central to AI. Joint-Embedding Predictive Architectures (JEPAs) offer a promising blueprint, but lack of practical guidance...
🤯2🔥1
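roughly what the claim amounts to, in a simplified sketch (all names are mine; if i remember right the paper's actual regularizer works via random 1D projections, and the plain moment-matching penalty below is just my simplification of "push embeddings toward an isotropic Gaussian"):

```python
# Hedged sketch: a JEPA-style loss with an isotropic-Gaussian pull on the embeddings,
# and no stop-grad / EMA teacher. The moment-matching penalty is a simplification,
# not the paper's exact objective.
import torch

def isotropic_gaussian_penalty(z: torch.Tensor) -> torch.Tensor:
    # z: (batch, dim) embeddings; pull their mean to 0 and covariance to identity
    mean = z.mean(dim=0)
    cov = torch.cov(z.T)
    eye = torch.eye(z.shape[1], device=z.device)
    return mean.pow(2).sum() + (cov - eye).pow(2).sum()

def lejepa_style_loss(encoder, predictor, view_a, view_b, lam=1.0):
    za, zb = encoder(view_a), encoder(view_b)           # same encoder on both views, no EMA copy
    pred_loss = (predictor(za) - zb).pow(2).mean()      # predict one view's embedding from the other
    reg = isotropic_gaussian_penalty(za) + isotropic_gaussian_penalty(zb)
    return pred_loss + lam * reg

# smoke test with throwaway linear modules
enc, pred = torch.nn.Linear(32, 16), torch.nn.Linear(16, 16)
print(lejepa_style_loss(enc, pred, torch.randn(64, 32), torch.randn(64, 32)))
```

the Gaussian pull on both branches is what takes over the anti-collapse role that stop-grad / EMA teachers usually play.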
Realized that privacy is important but not enough. And "assistants" are def not the answer either.
https://open.substack.com/pub/cortex/p/gentian-the-second-wind?r=1clcn&utm_campaign=post&utm_medium=telegram
blog.cortex.im
Gentian: The Second Wind
Anima Labs, emerging mind research, unity of humans and AIs, seamless mind extension, flower(s), Second Wind. ac872, p5b, ac892
🤩1
lol perfect timing 😅 4h later Pavel Durov announced Cocoon: https://news.1rj.ru/str/durov/462
my 2c: it's a fine business, but sadly it covers only a small part of what Anima, Cortex and the minds need.
Gentian proxy is still required, etc etc.
They do acknowledge the limitations of RA-TLS and their model in general though, which is commendable.
Telegram
Pavel Durov
🐣 It happened. Our decentralized confidential compute network, Cocoon, is live. The first AI requests from users are now being processed by Cocoon with 100% confidentiality. GPU owners are already earning TON. https://cocoon.org is up, with docs and the source…
🤯1
the cyberpunk world is (finally) upon us 🤩
it's unironically exciting
the software we use will finally become secure, instead of just pretending to be
https://red.anthropic.com/2025/smart-contracts/
👍2
a dozen pages on how to stop worrying about going jobless due to AI
ironically it's the same as the "agency agency agency" mantras all over the startup scene, just dressed differently
(nb: not an endorsement, nor criticism.
have no opinion on this rn)
https://open.substack.com/pub/shagbark/p/the-dying-art-of-being-a-bum
Substack
The Dying Art of Being a Bum
On "Useless Humans" in the Age of AI
🤷1