Linkstream
Various links I find interesting. Mostly hardcore tech :) // by @oleksandr_now. See @notatky for the personal stuff
Nvidia might be the king of the hill right now, but the future of AI is reconfigurable analog-like electronics: already ~100x more energy efficient, a gain that would take silicon at least another decade of Moore's law to match.

Caveat: no backprop :P forward-forward and other local-training algorithms exist, though (toy sketch after the link)

https://www.nature.com/articles/s41928-023-01042-7
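
for the curious, a toy numpy sketch of the forward-forward idea (my illustration, not from the paper; the logistic loss, lr and theta are arbitrary choices): each layer is trained locally to score real data above a threshold and corrupted data below it, with no gradients flowing between layers.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(784, 256))  # weights of a single layer

def goodness(h):
    # "goodness" of a layer = sum of squared activations, per sample
    return (h ** 2).sum(axis=1)

def ff_step(x_pos, x_neg, W, lr=0.03, theta=2.0):
    # push goodness above theta on positive (real) data and below it on
    # negative (corrupted) data; no backprop signal from other layers
    for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
        h = np.maximum(x @ W, 0.0)  # ReLU activations
        p = 1.0 / (1.0 + np.exp(-sign * (goodness(h) - theta)))
        # hand-derived gradient of -log(p) with respect to W
        dh = (-sign * (1.0 - p))[:, None] * 2.0 * h * (h > 0)
        W -= lr * (x.T @ dh) / len(x)
    return W
```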
generalization, continued:
> We argue that Transformers will generalize to harder instances on algorithmic tasks iff the algorithm can be written in the RASP-L programming language (Weiss et al). By design, each line of RASP-L code can be compiled into weights of 1 Transformer layer.
https://arxiv.org/abs/2310.16028
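
to make the intuition concrete, a toy Python emulation of RASP-style select/aggregate primitives (my paraphrase of Weiss et al., not their code). one select/aggregate pair corresponds to roughly one attention layer, which is why program length maps onto Transformer depth:

```python
import numpy as np

def select(keys, queries, predicate):
    # boolean attention pattern: query q attends to key k iff predicate holds
    return np.array([[predicate(k, q) for k in keys] for q in queries])

def aggregate(attn, values):
    # uniform average of the selected values at each query position
    counts = attn.sum(axis=1)
    return (attn @ np.asarray(values, dtype=float)) / np.maximum(counts, 1)

# one-layer example: histogram (how often does each token appear?)
tokens = list("hello")
same = select(tokens, tokens, lambda k, q: k == q)  # attend to equal tokens
print(dict(zip(tokens, same.sum(axis=1).tolist())))  # {'h': 1, 'e': 1, 'l': 2, 'o': 1}

# another one-layer example: broadcast the first token everywhere
pos = range(len(tokens))
first = select(pos, pos, lambda k, q: k == 0)
print(aggregate(first, [ord(t) for t in tokens]))  # [104. 104. 104. 104. 104.]
```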
now these are really hallucinations lol
> ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Real Image
https://kylesargent.github.io/zeronvs/
LLMs are far from being the first technology met with fear, uncertainty and doubt
https://pessimistsarchive.org
big if it works well: first paper that claims relatively efficient search on encrypted data without revealing what's being searched

https://eprint.iacr.org/2022/1703
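
for flavor, the textbook two-server XOR PIR toy (Chor et al., not the linked paper's scheme): the client learns db[i] while neither server, assuming they don't collude, learns i.

```python
import secrets

db = [0, 1, 1, 0, 1, 0, 0, 1]  # both servers hold a copy of this bit array
n = len(db)

def server_answer(query):
    # a server XORs together the bits at the positions the query selects;
    # the query alone is a uniformly random subset, so it reveals nothing
    acc = 0
    for bit, selected in zip(db, query):
        if selected:
            acc ^= bit
    return acc

def client_fetch(i):
    q1 = [secrets.randbelow(2) for _ in range(n)]  # random subset -> server 1
    q2 = q1.copy()
    q2[i] ^= 1                    # same subset with bit i flipped -> server 2
    return server_answer(q1) ^ server_answer(q2)  # all cancels except db[i]

assert all(client_fetch(i) == db[i] for i in range(n))
```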
related: the value of privacy (2006) in plain English
https://www.schneier.com/blog/archives/2006/05/the_value_of_pr.html
R-Tuning: Teaching Large Language Models to Refuse Unknown Questions
TLDR: LLMs "hallucinate" because the training datasets never included the "I don't know" answer 🤷

https://arxiv.org/pdf/2311.09677.pdf
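
the fix is mostly dataset construction; a sketch of my reading of it (`model_answer` and the exact-match check are placeholders, and the paper's uncertainty estimation is more careful):

```python
REFUSAL = "I don't know."

def build_refusal_dataset(qa_pairs, model_answer):
    # probe the pre-trained model: keep the gold answer where it already
    # "knows" the fact, relabel the rest as a refusal, then fine-tune
    dataset = []
    for question, gold in qa_pairs:
        predicted = model_answer(question)
        if predicted.strip().lower() == gold.strip().lower():
            dataset.append((question, gold))     # known: keep the answer
        else:
            dataset.append((question, REFUSAL))  # unknown: teach refusal
    return dataset
```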
System 2 Attention (S2A).
- Soft attention in Transformers is susceptible to irrelevant/biased info
- S2A uses LLM reasoning to generate what to attend to
Improves factuality & objectivity, decreases sycophancy.
https://arxiv.org/abs/2311.11829
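
in practice it is just two LLM calls; a minimal sketch (the `llm` callable and the prompt wording are placeholders paraphrasing the idea, not the paper's exact templates):

```python
def s2a_answer(llm, context, question):
    # pass 1: regenerate only the relevant, unbiased part of the context,
    # i.e. decide "what to attend to" with explicit reasoning
    cleaned = llm(
        "Rewrite the following text, keeping only facts relevant to the "
        "question and removing opinions or leading statements.\n\n"
        f"Text: {context}\nQuestion: {question}"
    )
    # pass 2: answer from the cleaned context only, so soft attention
    # never sees the distractors
    return llm(f"Context: {cleaned}\n\nAnswer the question: {question}")
```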
In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.

https://arxiv.org/abs/2304.03442
https://github.com/joonspk-research/generative_agents
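
the core mechanism is the memory stream: retrieval ranks memories by recency + importance + relevance. a sketch of that scoring (the decay constant, the equal weights and the 0..1 importance scale are my simplifications; `vec` stands in for the paper's embeddings):

```python
import math

def recency(memory_time, now, decay=0.995):
    # exponential decay per hour since the memory was last accessed
    return decay ** ((now - memory_time) / 3600)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def retrieve(memories, query_vec, now, k=3):
    # memories: dicts with 'time', 'importance' (0..1), 'vec', 'text'
    scored = [
        (recency(m["time"], now) + m["importance"] + cosine(m["vec"], query_vec),
         m["text"])
        for m in memories
    ]
    return [text for _, text in sorted(scored, reverse=True)[:k]]
```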
ComfyUI, if you are into Stable Diffusion:
https://github.com/comfyanonymous/ComfyUI
insanely detailed LLM inference visualization from Brendan Bycroft
https://bbycroft.net/llm
Nuclear Reactor Simulation (interactive!)

https://dalton-nrs.manchester.ac.uk/
https://twitter.com/MistralAI/status/1733150512395038967
beautiful. on a Friday. even more beautiful.
ChatGPT: sometimes "hallucinates" (guesses details that weren't in the training set).
OpenAI: tries to counter that.
Google: hold my beer, let's hallucinate the actual Gemini demo!

https://arstechnica.com/information-technology/2023/12/google-admits-it-fudged-a-gemini-ai-demo-video-which-critics-say-misled-viewers/
Google has good researchers and not-so-good product managers, as always.
Loosely related: Terence Tao has been saying "LLMs help me with math" for a while now.

“The FunSearch paper by DeepMind that was used to discover new mathematics is an example of searching through generative patterns and employing evolutionary methods to creatively conjure up new solutions. This is a very general principle that lies at the core of creativity.”
https://www.nature.com/articles/d41586-023-04043-w
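
a cartoon of the FunSearch loop as the article describes it (`llm_propose` and `evaluate` are placeholders; the real system uses islands of programs and much more machinery):

```python
def funsearch(llm_propose, evaluate, seed_programs, rounds=100, keep=4):
    # evolutionary loop: the LLM mutates/recombines the best-scoring
    # programs, a deterministic evaluator decides what survives
    pool = [(evaluate(p), p) for p in seed_programs]
    for _ in range(rounds):
        pool.sort(reverse=True)       # best-scoring programs first
        parents = [p for _, p in pool[:keep]]
        child = llm_propose(parents)  # ask the LLM for a new candidate
        pool.append((evaluate(child), child))
    return max(pool)[1]               # best program found
```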