augmentations save lives
https://osf.io/preprints/psyarxiv/5c3ba_v1
OSF
Personalized Adaptive Cortical Electro-stimulation (PACE) in Treatment-Resistant Depression
Treatment-resistant depression (TRD) is a leading cause of premature death. For decades investigators have assessed the clinical efficacy of direct brain stimulation for TRD. Outcomes have been inconsistent due to imprecise brain targeting. Minimally invasive…
🔥1🤯1
the medium is not the message, but it certainly is an attractor for messages of a certain kind
https://open.substack.com/pub/reinventscience/p/revive-the-republic-of-letters
www.reinvent.science
Science has a Breaking-in Problem
Let's revive the Republic of Letters
avoid excess complexity and noise at all costs. an MVP is not "all features at shitty quality" but "minimum features, still decent quality".
or "hypothesis validation code and the commitment to rewrite"
https://minds.md/zakirullin/cognitive
minds.md
Cognitive load is what matters
There are so many buzzwords and best practices out there, but let's focus on something more fundamental. What matters is the amount of confusion developers feel when going through the code.
💯4
TLDR: to exercise agency you need to understand what's around you and the rules of the game, both explicit and implicit ones
https://arxiv.org/abs/2506.01622
arXiv.org
General agents contain world models
Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of...
👍2
for something completely different:
the first 2178 books of the Ritman Library are now digitized & online, safe from natural disasters
https://embassyofthefreemind.com/en/library/online-catalogue/?mode=gallery&view=horizontal&sort=random%7B1517048201764%7D%20asc&page=1&fq%5B%5D=search_s_digitized_publication:%22Ja%22&reverse=0
❤3
chat apps will become history
your artificial limbs and senses should feel real
https://www.nature.com/articles/s44287-025-00218-x.epdf
Nature
Cortical somatosensory feedback for brain-controlled bionic hands
Nature Reviews Electrical Engineering - Somatosensory feedback is an essential feature of neural prostheses that aim to restore natural hand dexterity after neurological injuries. This Comment...
💅1
Physical labs are the next frontier and the ultimate data source for future AIs.
Grounded in real experiments and observations: not interpretations from the web and books, but genuinely /new/ data.
Watch this space.
https://x.com/liamfedus/status/1973055380193431965?s=46
X (formerly Twitter)
William Fedus (@LiamFedus) on X
Today, @ekindogus and I are excited to introduce @periodiclabs.
Our goal is to create an AI scientist.
Science works by conjecturing how the world might be, running experiments, and learning from the results.
Intelligence is necessary, but not sufficient.…
😱1
very interesting!
i wonder if the approach used by the TRM/HRM models can be adapted from ARC-AGI back to the reasoning benchmarks usually run on LMs
https://alexiajm.github.io/2025/09/29/tiny_recursive_models.html
alexiajm.github.io
Less is More
Recursive Reasoning with Tiny Networks
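The recursive-refinement idea, as I understand it, in a toy numpy sketch (made-up dimensions and a random untrained network; nothing here is the paper's actual TRM architecture): one tiny shared network keeps updating a latent z given the question x and the current answer y, then refreshes y from z, and the whole thing loops.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding size (arbitrary for this sketch)

# one tiny shared network, reused for both the latent and the answer updates
W1 = rng.normal(scale=0.1, size=(3 * D, D))
W2 = rng.normal(scale=0.1, size=(D, D))

def tiny_net(x, y, z):
    """One pass of the shared network over (question, answer, latent)."""
    h = np.tanh(np.concatenate([x, y, z]) @ W1)
    return h @ W2

def recursive_reason(x, n_outer=3, n_inner=6):
    """Outer loop refines the answer y; inner loop refines the latent z."""
    y = np.zeros(D)
    z = np.zeros(D)
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = tiny_net(x, y, z)   # "think": update the scratchpad latent
        y = tiny_net(x, y, z)       # "act": update the answer from the latent
    return y

x = rng.normal(size=D)  # a fake "question" embedding
answer = recursive_reason(x)
print(answer.shape)  # (16,)
```

The point of the structure: depth comes from iteration of one small network, not from stacking parameters, which is why a 7M-scale model can spend a lot of compute per problem.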
TLDR: full-stack ChatGPT (training, inference, etc.) in one ~8K-LOC repo
https://github.com/karpathy/nanochat/discussions/1
GitHub
Introducing nanochat: The best ChatGPT that $100 can buy. · karpathy nanochat · Discussion #1
Ok so we just booted up an 8xH100 box from e.g. Lambda GPU Cloud. This is costing us about ~$24/hr, so there is no time to lose. Environment setup Clone the project: git clone git@github.com:karpat...
attributing this to AI models while ignoring the "first using the simulation" part is totally unfair to the simulation developers.
Still cool!
https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/
Google
How a Gemma model helped discover a new potential cancer therapy pathway
We’re launching a new 27 billion parameter foundation model for single-cell analysis built on the Gemma family of open models.
DIY 2 billion frames per second camera, with SIMPLE explanation of how it works. Really.
https://www.youtube.com/watch?v=o4TdHrMi6do
YouTube
A laser pointer at 2 billion fps makes the speed of light look... kinda weird
I've upgraded! It took almost a year, but today I finally get to show off a TWO billion frame per second camera! I really want to record refraction, interference, and other awesome stuff with this camera, but today I'm looking into a really strange quirk…
⚡3
aaargh, i should've written this paper!! it was intuitively obvious to me, but then life happened >_<
tldr: the LLM sampler is such a powerful prior that with the right sampler (MCMC, in this case) you can use even base models as reasoning models.
without supervised fine-tuning or RL.
this was completely ignored by people pilled on the Bitter Lesson mantra, but yes, there is still space for the right priors, added or designed by hand!
obviously sampling with MCMC is very costly, but you should compare the overall model feedback-loop time including post-training, not just the sampling time
if e.g. top-k sampling is assembly and Mirostat is COBOL (?), then MCMC sampling is like Python in the space of samplers
https://aakaran.github.io/reasoning_with_sampling/
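For intuition, here's a toy Metropolis-Hastings sketch of "sharpened" sampling: target p(seq)^α while proposing from the base distribution itself. The "model" is just independent per-position categorical tables, and the single-position resampling proposal is my illustration of the general mechanism, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
L, V, ALPHA = 8, 5, 4.0  # sequence length, vocab size, sharpening power (toy values)

# toy "base model": independent per-position token distributions
base = rng.dirichlet(np.ones(V), size=L)  # base[t] = p_t(token)

def mh_sharpened_sample(n_steps=2000):
    """Metropolis-Hastings targeting p(seq)^ALPHA, proposing from the base model."""
    seq = np.array([rng.choice(V, p=base[t]) for t in range(L)])
    for _ in range(n_steps):
        t = rng.integers(L)
        new = rng.choice(V, p=base[t])  # proposal q = the base conditional
        # acceptance ratio (p_new^a * q_old) / (p_old^a * q_new) = (p_new/p_old)^(a-1),
        # because the proposal q cancels one power of p
        ratio = (base[t, new] / base[t, seq[t]]) ** (ALPHA - 1.0)
        if rng.random() < min(1.0, ratio):
            seq[t] = new
    return seq

sample = mh_sharpened_sample()
greedy = base.argmax(axis=1)
# sharpened sampling concentrates on each position's most likely tokens
print((sample == greedy).mean())
```

With a real LM the target would be the sequence-level power distribution and each proposal would resample a suffix or block from the base model, which is where the cost comes from; the cancellation trick in the acceptance ratio is the same.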
turns out, human memory is quite editable (at least it's possible to vary the strength of individual memories)
https://www.nature.com/articles/s41588-025-02368-y
Nature
Cell-type- and locus-specific epigenetic editing of memory expression
Nature Genetics - CRISPR-based epigenetic editing is used in a cell-type-specific, locus-restricted and temporally controllable manner in the adult mouse brain to modulate memory expression.
🆒3
finally, an article showing that people can perceive flicker and certain types of motion at up to at least 500 Hz
(it's kind of personal, i've been gaslit with "hey, you can't possibly see the difference" far too many times.
now at least when people don't believe me again, I can send them this link)
https://www.nature.com/articles/srep07861
Nature
Humans perceive flicker artifacts at 500 Hz
Scientific Reports - Humans perceive flicker artifacts at 500 Hz
👾2
interesting. small (321M, not 300B!) yet capable models, aka reasoning cores, are compelling both theoretically and practically
https://pleias.fr/blog/blogsynth-the-new-data-frontier
pleias.fr
SYNTH: the new data frontier
We build reasoning models for advanced context engineering in the agentic AI
no silver bullet for security
https://x.com/lundukejournal/status/1988346904581726501?s=46
X (formerly Twitter)
The Lunduke Journal (@LundukeJournal) on X
Multiple, serious security vulnerabilities found in the Rust clone of Sudo — which shipped with Ubuntu 25.10 (the most recent release).
Not little vulnerabilities: We’re talking about the disclosure of passwords and total bypassing of authentication.
In…
More paranoia for the paranoid out there ^_^
https://h4x0r.org/funreliable/
Timers are a reliable side channel for communicating between containers on the same Linux host, via /proc/self/ns/time.
h4x0r
Fun-reliable side-channels for cross-container communication
Claim: Isotropic Gaussian Regularization for latent representations in the world models is mathematically optimal
What's illustrated:
-- Adopting the isotropic Gaussian regularization replaces stop-grad, teacher-student, EMA, and various other ad hoc tricks
-- Improves model training stability
-- SOTA quality on 10+ datasets and 50+ architectures
https://arxiv.org/abs/2511.08544, https://github.com/rbalestr-lab/lejepa
arXiv.org
LeJEPA: Provable and Scalable Self-Supervised Learning Without the...
Learning manipulable representations of the world and its dynamics is central to AI. Joint-Embedding Predictive Architectures (JEPAs) offer a promising blueprint, but lack of practical guidance...
🤯2🔥1
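For intuition, a crude moment-matching sketch of pulling a batch of embeddings toward N(0, I): penalize the batch mean for drifting from zero and the covariance for drifting from identity. This is my illustration of the general idea, not the paper's actual SIGReg objective:

```python
import numpy as np

def isotropic_gaussian_penalty(z):
    """Penalize a batch of embeddings for deviating from N(0, I).

    z: (batch, dim) array of latent vectors.
    Returns ||mean||^2 + ||cov - I||_F^2, which is zero iff the batch has
    zero mean and identity covariance (a crude moment-matching proxy for
    the isotropic-Gaussian target).
    """
    mu = z.mean(axis=0)
    zc = z - mu
    cov = zc.T @ zc / (len(z) - 1)
    eye = np.eye(z.shape[1])
    return float(mu @ mu + np.sum((cov - eye) ** 2))

rng = np.random.default_rng(0)
good = rng.standard_normal((4096, 8))                     # already roughly N(0, I)
collapsed = np.ones((4096, 8)) * rng.standard_normal(8)   # every row identical: collapse
print(isotropic_gaussian_penalty(good))       # small
print(isotropic_gaussian_penalty(collapsed))  # large: zero covariance is heavily penalized
```

The collapse case shows why this single term can stand in for stop-grad/EMA-style tricks: a degenerate encoder that maps everything to one point has zero covariance, which this penalty punishes directly instead of relying on training asymmetries to avoid.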