Linkstream
Various links I find interesting. Mostly hardcore tech :) // by @oleksandr_now. See @notatky for the personal stuff
ASMesh: Anonymous and Secure Messaging in Mesh Networks Using Stronger, Anonymous Double Ratchet
https://eprint.iacr.org/2023/1053

Interesting, though unclear regarding traffic/metadata analysis and power consumption, as the protocol is rather chatty
NExT-GPT: Any-to-Any Multimodal LLM
TLDR: Vicuna (fine-tuned Llama) extended with basic image, audio and video understanding and generation. Impressive demo videos, though the Gradio-based demo is currently broken. Anybody up for deploying this somewhere to play with?
https://next-gpt.github.io/ + https://arxiv.org/pdf/2309.05519.pdf
A stunning example of how efficient markets and arbitrage trading enabled by mobile phones help avoid waste, stabilize prices, and increase welfare (fish markets in Kerala, India, 1997–2001)
https://www.jstor.org/stable/25098864
Oh. https://ml-jku.github.io/hopfield-layers/
via @kuu_channel

It’s beautiful. I wonder where the catch is, i.e. why Llama et al. don’t use Hopfield layers
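The connection is easy to see in code: the modern (continuous) Hopfield update rule is literally softmax attention over stored patterns. A minimal numpy sketch of retrieval (my own toy example, not the paper’s code):

```python
import numpy as np

def hopfield_update(xi, X, beta=1.0):
    """One update of a modern Hopfield network:
    xi_new = X @ softmax(beta * X.T @ xi).
    X: (d, N) matrix of N stored patterns; xi: (d,) query/state.
    Structurally this is attention with X as both keys and values."""
    scores = beta * X.T @ xi              # (N,) similarity to stored patterns
    p = np.exp(scores - scores.max())
    p /= p.sum()                          # softmax over stored patterns
    return X @ p                          # convex combination of patterns

# A noisy query converges toward the closest stored pattern:
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 5))          # 5 random patterns in 16 dims
xi = X[:, 2] + 0.1 * rng.standard_normal(16)   # noisy copy of pattern 2
for _ in range(3):
    xi = hopfield_update(xi, X, beta=4.0)
```

With large enough beta the softmax collapses onto the nearest pattern, which is exactly the one-shot retrieval the paper advertises.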
Reviewer #2, step back!
82% of authors found the GPT-4 feedback more useful than feedback from (at least some) human reviewers

https://arxiv.org/abs/2310.01783
Nvidia might be the king of the hill right now, but the future of AI is reconfigurable analog-like electronics (already ~100x more energy efficient, a lead that would take silicon at least another 10 years of Moore’s law to catch up to)

Caveat: no backprop :P forward-forward and other algorithms exist though

https://www.nature.com/articles/s41928-023-01042-7
generalization, continued:
> We argue that Transformers will generalize to harder instances on algorithmic tasks iff the algorithm can be written in the RASP-L programming language (Weiss et al). By design, each line of RASP-L code can be compiled into weights of 1 Transformer layer.
https://arxiv.org/abs/2310.16028
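For intuition: a RASP-style program is just select (build a boolean attention pattern) plus aggregate (pool values through it). A toy Python sketch of the two primitives, mine rather than the paper’s, implementing "shift right by one" with a single select/aggregate pair, i.e. one attention layer:

```python
import numpy as np

def select(keys, queries, pred):
    """RASP-style selector: boolean attention matrix A[q, k] = pred(key k, query q)."""
    n = len(keys)
    return np.array([[pred(keys[k], queries[q]) for k in range(n)]
                     for q in range(n)])

def aggregate(A, values, default=None):
    """RASP-style aggregate: pull the selected value per position
    (restricted here to at-most-one selection per row, as in RASP-L)."""
    out = []
    for row in A:
        sel = [v for v, m in zip(values, row) if m]
        out.append(sel[0] if sel else default)
    return out

# "shift right by one": each position attends to the previous index.
tokens = ["a", "b", "c", "d"]
idx = list(range(len(tokens)))
A = select(idx, idx, lambda k, q: k == q - 1)
print(aggregate(A, tokens, default="<bos>"))   # ['<bos>', 'a', 'b', 'c']
```

Each such select/aggregate line is what the paper claims compiles into the weights of one Transformer layer.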
now these are really hallucinations lol
> ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Real Image
https://kylesargent.github.io/zeronvs/
LLMs are far from being the first technology met with fear, uncertainty and doubt
https://pessimistsarchive.org
big if it works well: the first paper claiming relatively efficient search over encrypted data without revealing what is being searched for

https://eprint.iacr.org/2022/1703
related: the value of privacy (2006) in plain English
https://www.schneier.com/blog/archives/2006/05/the_value_of_pr.html
R-Tuning: Teaching Large Language Models to Refuse Unknown Questions
TLDR: LLMs "hallucinate" because the training datasets never included the "I don't know" answer 🤷

https://arxiv.org/pdf/2311.09677.pdf
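The fix is mostly data construction: probe what the base model already gets right, and relabel the rest with an explicit refusal. A rough sketch of that refusal-aware idea (function and field names are mine, not the paper’s):

```python
def build_rtuning_dataset(qa_pairs, model_answers):
    """Split questions by whether the base model answers correctly:
    keep correct ones as-is ("known"), relabel the rest with a
    refusal ("unknown"), then fine-tune on the result."""
    data = []
    for (question, gold), pred in zip(qa_pairs, model_answers):
        if pred.strip().lower() == gold.strip().lower():
            data.append({"question": question, "answer": gold})
        else:
            data.append({"question": question, "answer": "I don't know."})
    return data

pairs = [("Capital of France?", "Paris"), ("My dog's name?", "Rex")]
preds = ["Paris", "Buddy"]          # hypothetical base-model outputs
print(build_rtuning_dataset(pairs, preds))
```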
System 2 Attention (S2A).
- Soft attention in Transformers is susceptible to irrelevant/biased info
- S2A uses LLM reasoning to generate what to attend to
Improves factuality & objectivity, decreases sycophancy.
https://arxiv.org/abs/2311.11829
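S2A is essentially two LLM calls: rewrite the context to keep only what matters, then answer from the rewrite. A sketch assuming `llm` is some prompt-to-text callable (the prompts are paraphrased, not the paper’s exact templates):

```python
def s2a_answer(context, question, llm):
    """System 2 Attention as two calls to a hypothetical `llm` callable:
    1) regenerate the context, stripping irrelevant/opinionated material;
    2) answer the question from the cleaned context only."""
    clean = llm(
        "Extract only the parts of the following text relevant to the "
        "question, removing opinions and irrelevant facts.\n\n"
        f"Text: {context}\nQuestion: {question}"
    )
    return llm(f"Context: {clean}\n\nAnswer the question: {question}")

# Toy stand-in model so the sketch runs end-to-end:
def toy_llm(prompt):
    return f"[model output for: {prompt[:40]}...]"

print(s2a_answer("Paris is lovely. 2+2=4.", "What is 2+2?", toy_llm))
```

The point is that the second call never sees the raw context, so flattery or irrelevant facts in it can’t bias the answer.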
> In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.

https://arxiv.org/abs/2304.03442
https://github.com/joonspk-research/generative_agents
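The core mechanism behind the "remember and reflect" part is a memory stream scored by recency, importance, and relevance. A toy sketch (relevance here is naive word overlap instead of the paper’s embedding similarity; all names are mine):

```python
class MemoryStream:
    """Toy version of the generative-agents memory retrieval:
    score = recency (exponential decay) + importance + relevance."""
    def __init__(self, decay=0.995):
        self.decay = decay
        self.memories = []          # (text, importance 1-10, timestamp)

    def add(self, text, importance, t):
        self.memories.append((text, importance, t))

    def retrieve(self, query, now, k=2):
        qwords = set(query.lower().split())
        def score(m):
            text, imp, t = m
            recency = self.decay ** (now - t)
            relevance = len(qwords & set(text.lower().split()))
            return recency + imp / 10 + relevance
        ranked = sorted(self.memories, key=score, reverse=True)
        return [text for text, _, _ in ranked[:k]]

ms = MemoryStream()
ms.add("cooked breakfast", 2, t=0)
ms.add("discussed the election with Sam", 7, t=50)
ms.add("watered the plants", 1, t=90)
print(ms.retrieve("talk with Sam", now=100))
```

The retrieved memories are then fed back into the LLM prompt when the agent plans or converses, which is what makes the behavior look coherent over days.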