All about AI, Web 3.0, BCI – Telegram
3.22K subscribers
724 photos
26 videos
161 files
3.08K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
Andrew Ng: we will control and steer superhuman AI, so if we want humanity to survive and thrive we should develop AI "as fast as possible"
It’s big! A first-of-its-kind supplement clinically proven to slow the effects of aging in dogs is available at LeapYears.com
The world’s four biggest cloud firms (Amazon AWS, Microsoft, Google, and Meta) will spend a record-high US$200 billion on capex in 2025, up from $140 billion last year, according to the Wells Fargo Investment Institute.
MindSpeaker BCI has built its integrated “MindSpeaker + MindClick” product.

The integrated product concept improves communication for patients and elderly people with speech disorders via in-ear EEG sensing.

MindSpeaker builds Augmentative and Alternative Communication (AAC) products. This product targets patients with dysarthria, a motor speech disorder.
What if you and your friends could see through each other’s eyes all at once?

Researchers revealed that elephantnose fish may perform this kind of group sensing using their electrolocation sensory system.
Having a diversity of open-source models is good, of course. But benchmarks suggest it performs worse than even a 34B open LLM.
South Korea’s National Tax Service plans to build a virtual asset management system to prevent users from using virtual assets to evade taxes.

The system is designed to effectively analyze and manage information collected through the mandatory submission of virtual asset transaction history, and is scheduled to be launched in 2025.
Tether announced today that USDT will launch on Celo, a mobile-first and EVM-compatible blockchain network.

A Celo core contributor proposed the use of USDT as a gas currency. Celo's ecosystem in countries like Kenya and Ghana will help drive adoption and utilization of USDT.
A major paper in AI-driven drug discovery was released.

It describes the underlying biology, chemistry, and clinical data supporting the lead candidate in the small-molecule AI drug discovery race (INS018_055 by Insilico Medicine).
A cool finding on multilingualism in the brain. An MIT study finds the brains of polyglots expend comparatively little effort when processing their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak.

In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language.

When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort.

Many languages, one network

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes.

In the new study, the researchers wanted to expand on earlier findings and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency.

Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: their language network was more engaged when they listened to languages related to one they could understand than when they listened to completely unfamiliar languages.

The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain is performing a cognitively demanding task, also becomes activated when listening to languages other than one’s native language.
OpenAI released a tool they've been using internally to analyze transformer internals - the Transformer Debugger

It combines both automated interpretability and sparse autoencoders, and it allows rapid exploration of models without writing code.

It supports both neurons and attention heads. You can intervene on the forward pass by ablating individual neurons and see what changes.

In short, it's a quick and easy way to discover circuits manually.

This is still an early stage research tool.
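The ablation workflow described above can be sketched in a few lines. This is a minimal illustration of the idea, not the Transformer Debugger's actual API; a tiny ReLU MLP stands in for a transformer layer, and all names here are hypothetical.

```python
import numpy as np

# Sketch of the intervention idea: zero out ("ablate") one hidden unit
# during a forward pass and measure how the output changes.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))  # input -> hidden weights
W2 = rng.standard_normal((8, 3))  # hidden -> output weights

def forward(x, ablate_unit=None):
    h = np.maximum(x @ W1, 0.0)        # hidden activations (ReLU)
    if ablate_unit is not None:
        h = h.copy()
        h[ablate_unit] = 0.0           # the ablation intervention
    return h @ W2

x = rng.standard_normal(4)
baseline = forward(x)
ablated = forward(x, ablate_unit=3)
# The size of the difference tells you how much unit 3 contributed
# to the output on this particular input.
effect = float(np.abs(baseline - ablated).sum())
```

Repeating this over many inputs (and over attention heads instead of neurons) is the manual circuit-discovery loop the post describes.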
Robotics startup Covariant unveiled RFM-1, an AI platform that brings ChatGPT-like language reasoning to physical robots.

The platform allows robots to learn new skills, adapt to unexpected situations, and interact with humans more naturally.
Caduceus: a bi-directional DNA language model built on Mamba, with long-range modeling that respects the inherent reverse-complement symmetry of the double-helix DNA structure.

Caduceus is SoTA on several benchmarks, including identifying causal SNPs for gene expression.

Project site

Paper here.

Repo here.

HF here.
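The symmetry in question is the reverse-complement (RC) relation: reading the opposite DNA strand reverses the sequence and swaps A↔T and C↔G, so an RC-equivariant model should treat a sequence and its RC consistently. A plain-Python sketch of the RC operation (not the Caduceus implementation):

```python
# Reverse-complement of a DNA sequence: reverse it and swap each base
# for its Watson-Crick partner (A<->T, C<->G).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq))

# RC is an involution: applying it twice returns the original sequence,
# which is the symmetry an RC-equivariant architecture builds in.
assert reverse_complement(reverse_complement("GATTACA")) == "GATTACA"
```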
Cognition AI introduced Devin, the first AI software engineer.

Devin is an autonomous agent that solves engineering tasks through the use of its own shell, code editor, and web browser.

When evaluated on the SWE-Bench benchmark, which asks an AI to resolve GitHub issues found in real-world open-source projects, Devin correctly resolves 13.86% of the issues unassisted, far exceeding the previous state-of-the-art model performance of 1.96% unassisted and 4.80% assisted.
What a day! Physical Intelligence launched: a team of robotics and AI all-stars out to build a universal AI for machines.

$70 million in seed funding from Thrive, Khosla, Lux and Sequoia.

Physical Intelligence is building foundation models that can control any robot for any application, including robots that don't even exist today.
Google presents Synth^2: Boosting Visual-Language Models with Synthetic Captions and Image Embeddings

The paper introduces a method that uses LLMs and image generation to create synthetic image-text pairs, significantly boosting VLM training efficiency.
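The pipeline shape can be sketched as: an LLM proposes captions, a text-to-image model turns them into image embeddings, and the resulting pairs augment VLM training data. Both helper functions below are hypothetical stand-ins for real model calls, not the Synth^2 code.

```python
# Hedged sketch of a Synth^2-style synthetic-data pipeline.
def llm_caption(topic: str) -> str:
    # placeholder for an LLM generating a caption about the topic
    return f"A photo of a {topic} on a wooden table"

def text_to_image_embedding(caption: str, dim: int = 4) -> list[float]:
    # placeholder: deterministic pseudo-embedding derived from the caption,
    # standing in for an image generator + image encoder
    h = sum(ord(c) * (i + 1) for i, c in enumerate(caption))
    return [((h >> (8 * i)) % 256) / 255.0 for i in range(dim)]

def synthesize_pairs(topics: list[str]) -> list[tuple[str, list[float]]]:
    # each (caption, image-embedding) pair becomes a synthetic VLM example
    pairs = []
    for topic in topics:
        caption = llm_caption(topic)
        pairs.append((caption, text_to_image_embedding(caption)))
    return pairs

training_pairs = synthesize_pairs(["cat", "bicycle"])
```

Working in embedding space, rather than rendering and re-encoding pixels, is what makes the approach cheap enough to scale.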
WEF_Metaverse_Identity_Insights_Report_2024_1710335062.pdf
15.2 MB
The World Economic Forum released its "Metaverse Identity: Defining the Self in a Blended Reality" insight report.

A digital identity is not us, but in a growing landscape of virtual worlds it will mirror us in virtual domains, extending the reach and impact of our activities while also potentially opening new individual and societal vulnerabilities.
Google introduced SIMA: the first generalist AI agent that follows natural-language instructions in a broad range of 3D virtual environments and video games.

The SIMA research builds towards more general AI that can understand and safely carry out instructions in both virtual and physical settings.

Such generalizable systems will make AI-powered technology more helpful and intuitive.
Stripe's 2023 annual letter is out: "the output of businesses that run on Stripe sums to roughly 1% of global GDP"!