All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0 and brain-computer interfaces (BCI)

owner @Aniaslanyan
A new study is out today in Nature! Researchers demonstrate a brain-computer interface that turns speech-related neural activity into text, enabling a person with paralysis to communicate at 62 words per minute - 3.4 times faster than prior work.

The researchers have publicly released all data and code, and are hosting a machine learning competition.
Lemur-70B: the SOTA open LLM balancing text & code capabilities

The Lemur project is an open collaborative research effort between XLang Lab and Salesforce Research.

Lemur and Lemur-chat are initialized from Llama 2-70B

1. Pretrain Llama 2 on ~100B tokens of code-focused data > Lemur-70B
2. Finetune Lemur on ~300K examples > Lemur-70B-chat

Lemur outperforms other open-source language models on coding benchmarks, yet remains competitive in textual reasoning and knowledge performance.

Lemur-chat significantly outperforms other open-source supervised fine-tuned models across various dimensions.

Model: huggingface.co/OpenLemur
Blog: xlang.ai/blog/openlemur
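
For anyone who wants to try it, here is a minimal sketch of loading the chat model with Hugging Face transformers; the exact repo id and the plain-string prompt are assumptions, so check the model card at huggingface.co/OpenLemur first.

```python
# Minimal sketch: loading Lemur-70B-chat with Hugging Face transformers.
# The repo id and prompt format are assumptions -- check the model card at
# huggingface.co/OpenLemur. A 70B model needs multiple GPUs (or heavy
# quantization) to run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLemur/lemur-70b-chat-v1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```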
A brain-computer interface helped create a digital avatar of a stroke survivor’s face

A woman who lost her ability to speak after a stroke 18 years ago was able to replicate her voice and even convey a limited range of facial expressions via a computer avatar. A pair of papers published in Nature yesterday about experiments that restored speech to two women via brain implants show just how quickly this field is advancing.

How they did it: Both teams used recording devices implanted into the brain to capture the signals controlling the small movements that provide facial expressions. Then they used AI algorithms to decode them into words, and a language model to adjust for accuracy. One team, led by Edward Chang, a neurosurgeon at the University of California, San Francisco, even managed to capture emotions.
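
The published pipelines are far more sophisticated, but as a rough, hypothetical sketch of the last step described above (a language model adjusting the decoder's output for accuracy), candidate word sequences from the neural decoder can be rescored with a language model; every word and score below is invented.

```python
# Illustrative sketch (not the published pipeline): a neural decoder proposes
# candidate word sequences with scores, and a language model rescores them so
# that linguistically plausible sentences win. All numbers are made up.
import math

def lm_log_prob(sentence: str) -> float:
    # Stand-in for a real language model; here, a toy unigram frequency table.
    freq = {"hello": 0.05, "world": 0.02, "help": 0.01, "word": 0.005}
    return sum(math.log(freq.get(w, 1e-6)) for w in sentence.split())

# Hypothetical decoder output: (candidate sentence, decoder log-probability).
candidates = [
    ("hello word", -2.0),   # neurally likely but linguistically odd
    ("hello world", -2.3),  # slightly worse decoder score, better language
]

lm_weight = 1.0
best = max(candidates, key=lambda c: c[1] + lm_weight * lm_log_prob(c[0]))
print(best[0])  # "hello world"
```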

The caveats: Researchers caution that these results may not hold for other people, and either way, we are still a very long way from tech that’s available to the wider public. Still, these proofs of concept are hugely exciting.
Meta AI released Code Llama, a large language model built on top of Llama 2, fine-tuned for coding and state-of-the-art among publicly available coding tools.
A new startup founded by former members of the team that created TensorFlow.js at Google Brain.

A new Open Source product to analyze, structure and clean data with AI.
Bond Tokenisation. The Hong Kong Monetary Authority released a report titled “Bond Tokenisation in Hong Kong”.

Bond tokenisation is one of the pilot projects announced in the Policy Statement on Development of Virtual Assets in Hong Kong issued by the Financial Services and the Treasury Bureau last October.

In February this year, the HKMA assisted the Government in the successful offering of an HK$800 million tokenised green bond under the Government Green Bond Programme (the Tokenised Green Bond), marking the first tokenised green bond issued by a government globally.

The use of distributed ledger technology has been applied to primary issuance, settlement of secondary trading and coupon payment, and will be tested out in maturity redemption.

The Report:

- sets out details of the Tokenised Green Bond, and suggests available options for the salient aspects of a tokenised bond transaction in Hong Kong, ranging from technology and platform design to deal-structuring considerations.

- serves as a blueprint for potential similar issuances in Hong Kong.

- considers what more could be done to promote tokenisation in the bond market, including exploring further use cases, addressing fragmentation across platforms and systems, and enhancing Hong Kong’s legal and regulatory framework.

- enables market participants to draw on the HKMA’s experience when considering tokenised issuances in Hong Kong.
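
The report itself contains no code, but as a loose illustration of the lifecycle stages DLT was applied to (primary issuance, coupon payment and maturity redemption), here is a toy, purely in-memory sketch; the class, names and figures are all invented.

```python
# Toy illustration of the bond lifecycle stages the report says DLT was applied
# to: primary issuance, coupon payment and maturity redemption. This is a plain
# in-memory ledger, not a distributed one, and all figures are invented.
from dataclasses import dataclass, field

@dataclass
class TokenisedBond:
    face_value: float        # per token, in HKD
    coupon_rate: float       # annual coupon as a fraction of face value
    holdings: dict = field(default_factory=dict)  # investor -> number of tokens
    ledger: list = field(default_factory=list)    # append-only record of events

    def issue(self, investor: str, tokens: int) -> None:
        self.holdings[investor] = self.holdings.get(investor, 0) + tokens
        self.ledger.append(("ISSUE", investor, tokens))

    def pay_coupon(self) -> None:
        for investor, tokens in self.holdings.items():
            self.ledger.append(("COUPON", investor, tokens * self.face_value * self.coupon_rate))

    def redeem(self) -> None:
        for investor, tokens in list(self.holdings.items()):
            self.ledger.append(("REDEEM", investor, tokens * self.face_value))
            self.holdings[investor] = 0

bond = TokenisedBond(face_value=10_000, coupon_rate=0.04)
bond.issue("Bank A", 50_000)   # invented allocation
bond.pay_coupon()
bond.redeem()
print(bond.ledger)
```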
Researchers applied an algorithm from a video game to study the dynamics of molecules in living brain cells

Dr. Tristan Wallis and Professor Frederic Meunier from UQ’s Queensland Brain Institute came up with the idea while in lockdown during the COVID-19 pandemic.

“Combat video games use a very fast algorithm to track the trajectory of bullets, to ensure the correct target is hit on the battlefield at the right time,” Dr Wallis said. “The technology has been optimized to be highly accurate, so the experience feels as realistic as possible. We thought a similar algorithm could be used to analyze tracked molecules moving within a brain cell.”

Until now, technology has only been able to detect and analyze molecules in space, and not how they behave in space and time.

“Scientists use super-resolution microscopy to look into live brain cells and record how tiny molecules within them cluster to perform specific functions,” Dr Wallis said. “Individual proteins bounce and move in a seemingly chaotic environment, but when you observe these molecules in space and time, you start to see order within the chaos. It was an exciting idea – and it worked.”

Dr. Wallis used coding tools to build an algorithm that is now used by several labs to gather rich data about brain cell activity.
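
The published tool goes much further, but as a generic illustration of what “tracking molecules in space and time” means, here is a hypothetical nearest-neighbour sketch that links per-frame detections into trajectories; it is not the authors’ algorithm.

```python
# Generic illustration (not the authors' algorithm): link molecule detections
# across consecutive frames by nearest neighbour within a distance threshold,
# turning per-frame (x, y) positions into trajectories through space and time.
import math

def link_tracks(frames, max_dist=0.5):
    """frames: list of lists of (x, y) detections, one list per time point."""
    tracks = [[p] for p in frames[0]]
    for detections in frames[1:]:
        unused = list(detections)
        for track in tracks:
            if not unused:
                break
            last = track[-1]
            nearest = min(unused, key=lambda p: math.dist(p, last))
            if math.dist(nearest, last) <= max_dist:
                track.append(nearest)
                unused.remove(nearest)
        tracks.extend([p] for p in unused)  # start new tracks for unmatched points
    return tracks

frames = [[(0.0, 0.0), (5.0, 5.0)],
          [(0.2, 0.1), (5.1, 4.9)],
          [(0.3, 0.3), (5.3, 5.0)]]
print(link_tracks(frames))  # two trajectories, one per molecule
```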
⚡️ Google Gemini eats the world – Gemini Smashes GPT-4 By 5X
The GPU-Poors, MosaicML, Together, and Hugging Face
Broken Open-Source
Compute Resources That Make Everyone Look GPU-Poor
Google Cloud TPU wins
OpenAI introduced ChatGPT Enterprise: enterprise-grade security, unlimited high-speed GPT-4 access, extended context windows, and much more.

Bye-bye to a bunch of startups…
Baidu Apollo Go has launched driverless airport transportation services at Wuhan Tianhe International Airport. This expansion links urban and airport travel for the first time in China, and bridges city roads and highways.
Scientists have discovered a previously unknown mechanism by which cells break down proteins that are no longer needed

Their discovery potentially provides a new pathway that could be useful for tackling many diseases including cancer.

It’s an example of the important research that AlphaFold is helping to enable.
Huawei’s graphics chips already match Nvidia’s A100 in capability, and Huawei will soon take the dominant position in computing power away from Nvidia, according to media reports citing the chairman of China’s iFlytek, which is poised to launch a GPT-4 rival.
DeepMind launched SynthID: a digital tool to watermark and identify AI-generated images.

SynthID can embed a digital watermark directly into the pixels of AI-generated images, making it imperceptible to the human eye.

By scanning an image for the watermark, SynthID can assess the likelihood of it being created on Imagen.
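
SynthID’s actual method has not been published and is designed to survive cropping, compression and editing; as a much cruder illustration of the general idea (embedding an imperceptible, keyed signal in pixel values and then scoring how strongly an image matches it), here is a least-significant-bit sketch.

```python
# Crude illustration only -- SynthID's actual method is not public and is far
# more robust than this. The idea shown: tweak pixel values imperceptibly
# according to a keyed pattern, then score how strongly an image matches it.
import numpy as np

rng = np.random.default_rng(seed=42)          # the "key"
H, W = 64, 64
pattern = rng.integers(0, 2, size=(H, W))     # keyed bit pattern

def embed(image: np.ndarray) -> np.ndarray:
    # Force the least-significant bit of each pixel to the keyed pattern.
    return (image & ~np.uint8(1)) | pattern.astype(np.uint8)

def watermark_score(image: np.ndarray) -> float:
    # Fraction of pixels whose LSB matches the key; ~0.5 means "no watermark".
    return float(np.mean((image & 1) == pattern))

original = rng.integers(0, 256, size=(H, W), dtype=np.uint8)
marked = embed(original)
print(watermark_score(original))  # ~0.5
print(watermark_score(marked))    # 1.0
```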
Tesla is about to flip the switch on its new AI cluster, featuring 10,000 Nvidia H100 compute GPUs.

Tesla launched its highly-anticipated supercomputer yesterday. The machine will be used for various AI applications, but the cluster is so powerful that it could also be used for demanding high-performance computing (HPC) workloads.

In fact, the Nvidia H100-based supercomputer will be one of the most powerful machines in the world.

Tesla's new cluster will employ 10,000 Nvidia H100 compute GPUs, offering a peak of 340 FP64 PFLOPS for technical computing and 39.58 INT8 ExaOPS for AI applications. For comparison, Tesla’s 340 FP64 PFLOPS exceeds the 304 FP64 PFLOPS offered by Leonardo, the world’s fourth-highest-performing supercomputer.
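
The aggregate figures are simply per-GPU peak throughput multiplied by the GPU count; a quick back-of-the-envelope check, assuming roughly 34 TFLOPS FP64 and ~3,958 TOPS INT8 (with sparsity) per H100 SXM:

```python
# Back-of-the-envelope check of the quoted aggregate numbers: per-GPU peak
# throughput multiplied by the GPU count. The per-GPU figures below are
# assumed H100 (SXM) peaks, with the INT8 number including sparsity.
num_gpus = 10_000
fp64_tflops_per_gpu = 34      # assumed peak FP64 per H100, in TFLOPS
int8_tops_per_gpu = 3_958     # assumed peak INT8 per H100, in TOPS

fp64_pflops_total = num_gpus * fp64_tflops_per_gpu / 1_000    # TFLOPS -> PFLOPS
int8_exaops_total = num_gpus * int8_tops_per_gpu / 1_000_000  # TOPS -> ExaOPS

print(f"{fp64_pflops_total:.0f} FP64 PFLOPS")  # 340
print(f"{int8_exaops_total:.2f} INT8 ExaOPS")  # 39.58
```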

With its new supercomputer, Tesla is significantly enhancing its computing capabilities to train its full self-driving (FSD) technology faster than ever. This could not only make Tesla more competitive with other automakers, but also make the company the owner of one of the world's fastest supercomputers.
The scaling hypothesis of LLMs in biology - to what degree increasing the size of a model and training set improves performance and enables general purpose algorithms - will be the most interesting experiment in computational biology of the next decade.

This will require a better understanding of what protein language models learn about the underlying biology (e.g. sequence-structure-function relationships), and overcoming the context limitation of training LLMs on a single biological data type in order to capture the broader cellular/physiological context of a biological system.
From Concussion to Alzheimer’s: Mini-Brain Unlocks Time Link
Researchers developed a “TBI on a Chip” to explore links between traumatic brain injury (TBI) and Alzheimer’s.

Study:
- Researchers exposed functioning clusters of cultured mouse neurons to 3 blows of 200 g-force, each approximating the higher end of what a football player receives in a single hit.
- A device with a pendulum delivered the specified g-force to a chamber holding a cluster of a quarter-million neurons.

Key Findings:
1) Within the first 24 hours after a hit, there was a 350% increase in the production of misfolded amyloid-beta 42 (Aβ42), linking the immediate effects of TBI to long-term Alzheimer’s risk.

2) The device uses a microelectrode array to measure neuronal electrical activity.

3) Hydralazine (an FDA-approved drug for lowering blood pressure) was found effective in reducing the levels of acrolein and misfolded Aβ42.

“This amyloid beta pathology started within hours, maybe immediately. That’s never been heard of,” Shi said.
“It’s like attacking the weight-bearing stud in a house wall. If you break that stud, ... the house is going to fall down.”

Future directions:
The device can also be used to test possible therapeutics, including drugs known to reduce acrolein levels.
The biggest surprise at HotChips is Numenta.

Although some of the technical details were undisclosed, they seem to exploit sparsity in software to an extreme degree to beat GPUs in low-batch inference.

They are 3x faster on 48-client, one-shot BERT inference vs an 8-batch A100.

This was achieved with Xeon MAX with AMX and HBM, but the workload is obviously more memory-bound than compute-bound.
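
Numenta has not published its kernels, but as a generic illustration of why extreme weight sparsity helps at low batch sizes, here is a sketch comparing a dense matrix-vector product with a CSR sparse one; the sizes and sparsity level are arbitrary.

```python
# Generic illustration of the principle (Numenta's actual kernels are not
# public): when most weights are zero, a sparse representation only touches
# the non-zeros, which matters most at low batch sizes where the workload is
# memory-bound rather than compute-bound.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
d_in, d_out, sparsity = 4096, 4096, 0.9

dense_w = rng.standard_normal((d_out, d_in)).astype(np.float32)
mask = rng.random((d_out, d_in)) < sparsity
dense_w[mask] = 0.0                      # ~90% of weights zeroed
sparse_w = sparse.csr_matrix(dense_w)    # stores only the non-zeros

x = rng.standard_normal(d_in).astype(np.float32)  # batch size 1

y_dense = dense_w @ x
y_sparse = sparse_w @ x
print(np.allclose(y_dense, y_sparse, atol=1e-4))        # same result
print(f"non-zeros kept: {sparse_w.nnz / dense_w.size:.0%}")
```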