All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI).

owner @Aniaslanyan
Multimodal ArXiv: A Dataset for Improving Scientific Comprehension of Large Vision-Language Models

Presents:
- ArXivCap, a million-scale figure-caption dataset from arXiv papers.
- ArXivQA, a QA dataset generated by prompting GPT-4V based on arXiv figures.

Paper here.
Intel's NPU Acceleration Library goes open source — Meteor Lake CPUs can now run TinyLlama and other lightweight LLMs.
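
Usage is reportedly only a few lines. A minimal sketch, assuming the `intel_npu_acceleration_library` package and its `compile()` entry point as described in the project README (the prompt and generation settings here are illustrative):

```python
# Hedged sketch: load TinyLlama with Hugging Face transformers, then hand the
# model to Intel's library, which offloads supported ops to the Meteor Lake NPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import intel_npu_acceleration_library  # assumed import name, per the project README

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# compile() rewrites the model for NPU execution; int8 keeps the 1.1B weights small.
model = intel_npu_acceleration_library.compile(model, dtype=torch.int8)

inputs = tokenizer("What does an NPU accelerate?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```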
Anthropic announced the Claude 3 model family

The family includes three state-of-the-art models in ascending order of capability:

Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus.

Each successive model offers increasingly powerful performance, allowing users to select the optimal balance of intelligence, speed, and cost for their specific application.
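
In practice, picking a tier is just a model-ID string in the API call. A minimal sketch with the `anthropic` Python SDK; the dated model IDs are the launch-time ones and may change, and an ANTHROPIC_API_KEY is assumed to be set in the environment:

```python
# Hedged sketch: the same request at three price/speed tiers; only the model string changes.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

for model in ("claude-3-haiku-20240307",   # fastest, cheapest
              "claude-3-sonnet-20240229",  # middle tier
              "claude-3-opus-20240229"):   # most capable
    message = client.messages.create(
        model=model,
        max_tokens=256,
        messages=[{"role": "user", "content": "In one sentence, what is a BCI?"}],
    )
    print(model, "->", message.content[0].text)
```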
Claude 3 was trained on synthetic data (“data we generate internally”).

Fairly clear that compute is the bottleneck given that parameter count and data can be scaled.
ByteDance introduced Diffusion Protein Language Models (DPLM), a new suite of discrete diffusion-based protein language models.

With versatility in both generative and predictive tasks, DPLM is poised to set the new SOTA in protein language models, excelling across a spectrum of benchmark tasks.
The Stable Diffusion 3 paper has been released, for mutuals with an interest in such things.
A new article "Creative Flow as Optimized Processing: Evidence from Brain Oscillations During Jazz Improvisations by Expert and Non-Expert Musicians."

This is the first neuroimaging study to isolate the neural correlates of the flow experience during a creative production task, in this case, jazz improvisation. Flow is not hyperfocus. It results from an expert brain network plus release of executive control.
Very cool data analysis from Paradigm showing the breakdown of Ethereum's state.

ERC20s make up 27% of total state, while ERC721s make up 21.6%. Accounts total 14.1%.

XEN makes up 3.5% of Ethereum's state, which is more than any other single protocol.
Andrew Ng: we will control and steer superhuman AI, so if we want humanity to survive and thrive we should develop AI "as fast as possible"
It’s big! A first-of-its-kind supplement clinically proven to slow the effects of aging in dogs is available at LeapYears.com.
The world’s four biggest cloud firms (Amazon AWS, Microsoft, Google, and Meta) will spend a record US$200 billion on capex in 2025, up from $140 billion last year, according to the Wells Fargo Investment Institute.
MindSpeaker BCI has built its “MindSpeaker+MindClick” integrated product concept.

It enables improved communication via in-ear EEG sensing for patients and elderly people with speech disorders.

MindSpeaker builds augmentative and alternative communication (AAC) products. This one will address patients with speech paralysis (dysarthria).
What if you and your friends could see through each other’s eyes all at once?

Researchers have found that elephantnose fish may really do this kind of group sensing with their electrolocation sensory system.
Having a diversity of open-source models is good, of course. But benchmarks suggest it performs worse than even a 34B open LLM.
South Korea’s National Tax Service plans to build a virtual asset management system to prevent tax evasion via virtual assets.

The system is designed to analyze and manage information collected through the mandatory reporting of virtual asset transaction histories, and is scheduled to launch in 2025.
Tether announced today that USDT will launch on Celo, a mobile-first, EVM-compatible blockchain network.

A Celo core contributor has proposed using USDT as a gas currency, and Celo's ecosystem in countries like Kenya and Ghana should help drive adoption and utilization of USDT.
A major paper in AI-driven drug discovery was released

It describes the underlying biology, chemistry, and clinical data supporting the lead candidate in the small molecule AI drug discovery race (INS018_055 by Insilico Medicine).
A cool finding on multilingualism in the brain. An MIT study finds the brains of polyglots expend comparatively little effort when processing their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak.

In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language.

When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort.

Many languages, one network

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes.

In the new study, the researchers wanted to expand on earlier findings and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency.

Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: their language network was more engaged by languages related to one they could understand than by completely unfamiliar languages.

The researchers also found that the multiple demand network, a brain network that turns on whenever the brain performs a cognitively demanding task, becomes activated when listening to languages other than one’s native language.
OpenAI released a tool they've been using internally to analyze transformer internals: the Transformer Debugger.

It combines both automated interpretability and sparse autoencoders, and it allows rapid exploration of models without writing code.

It supports both neurons and attention heads. You can intervene on the forward pass by ablating individual neurons and see what changes.

In short, it's a quick and easy way to discover circuits manually.

This is still an early stage research tool.
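
The underlying ablation idea is easy to reproduce outside the tool. Below is a generic illustration in plain PyTorch, not TDB's own interface: a forward hook zeroes one GPT-2 MLP neuron (the layer and neuron indices are arbitrary) so you can compare next-token logits before and after.

```python
# Generic neuron-ablation sketch (not the Transformer Debugger's API):
# zero a single MLP neuron via a forward hook and measure the logit shift.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER, NEURON = 5, 1234  # arbitrary indices chosen for illustration

def ablate(module, inputs, output):
    output[..., NEURON] = 0.0  # clamp the neuron's pre-activation to zero
    return output

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    base = model(ids).logits[0, -1]

# Hook the MLP up-projection of one block; GELU(0) = 0, so the neuron is silenced.
handle = model.transformer.h[LAYER].mlp.c_fc.register_forward_hook(ablate)
with torch.no_grad():
    ablated = model(ids).logits[0, -1]
handle.remove()

paris = tok.encode(" Paris")[0]
print("logit shift on ' Paris':", (ablated - base)[paris].item())
```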