DeepMind released AlphaZeroᵈᵇ, a team of diverse AlphaZero agents that collaborate to solve chess puzzles and demonstrate increased creativity.
AlphaZeroᵈᵇ discovers a group of strong chess players with distinct playing styles. It models the players with a single, player-conditioned architecture and encourages them to play differently using behavioural and response diversity techniques.
Experiments show that players in AlphaZeroᵈᵇ play chess differently, solve more puzzles together, outperform a more homogeneous team and specialize in different openings. Remarkably, some of the players solve the challenging Penrose positions.
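To make "player-conditioned architecture" concrete, here is a minimal sketch of one network whose behaviour varies with a learned per-player embedding. This is not DeepMind's code; the flat board encoding, layer sizes, and 4672-move action space are all assumptions.

```python
# Hypothetical sketch of a player-conditioned policy/value network: one set of
# weights serves many "players", whose styles come from learned embeddings.
# Not DeepMind's code; all dimensions are assumptions.
import torch
import torch.nn as nn

class PlayerConditionedNet(nn.Module):
    def __init__(self, n_players=10, board_dim=8 * 8 * 12, hidden=512, n_moves=4672):
        super().__init__()
        self.player_embed = nn.Embedding(n_players, hidden)   # one style vector per player
        self.trunk = nn.Sequential(nn.Linear(board_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_moves)         # move logits
        self.value_head = nn.Linear(hidden, 1)                # position evaluation

    def forward(self, board, player_id):
        # Shared trunk plus a per-player offset: same weights, different styles.
        h = self.trunk(board) + self.player_embed(player_id)
        return self.policy_head(h), torch.tanh(self.value_head(h))

# The same network, queried as two different players, can prefer different moves.
net = PlayerConditionedNet()
board = torch.randn(1, 8 * 8 * 12)  # toy board encoding
for pid in (0, 1):
    policy_logits, value = net(board, torch.tensor([pid]))
```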
Chess.com
Will This Position Help Understand Human Consciousness?
It's a position computers struggle with, but chess players might solve instantly. Sir Roger Penrose from the Mathematical Institute of Oxford has started a public challenge that involves chess, in order to learn more about the uniqueness of the human brain.…
The newly formed CPC Central Science and Technology Commission (CSTC) has recently held its first meeting.
Although only minimally reported, the first meeting suggests that the top Party leadership has taken over China's industrial policy in order to concentrate national resources on achieving tech breakthroughs.
凤凰网 (Phoenix New Media)
The newly established Central Science and Technology Commission has made its first public appearance
Written by | Yu Hui
Microsoft Mixed Reality Toolkit 3 (MRTK3) moves to an independent organization within GitHub.
TECHCOMMUNITY.MICROSOFT.COM
Microsoft Mixed Reality Toolkit 3 (MRTK3) moves to an independent organization within GitHub | Microsoft Community Hub
Microsoft is excited to share the future vision of Mixed Reality Toolkit 3 as we continue to invest in multi-platform mixed reality development.
The first digital twin of the human immune system highlights the vast societal opportunities in physical-digital fusion.
Virtual replicas of ‘real life’ give humanity exponential insights.
Much of the real world – like the seabed, the innards of cars, or the workings of our bodies – is hidden.
Digital-physical fusion creates dynamic representations of real-world objects, systems, and processes in the digital world, and can make them clearer. And the human immune system is, in many ways, the ultimate example.
Why do we need a digital twin of the human immune system?
Digital twins provide a raft of advantages in all walks of society and business. The immune system project could mean:
1. A clear model of something very complicated
2. The chance to see – and understand – connections that were previously opaque
3. Immediate support for tackling immune-dependent challenges like organ transplantation, cancer, and autoimmune diseases
4. Quicker and cheaper drug discovery
5. A doorway to drugs with fewer harmful side effects
6. The route to personalized medicine
Nokia.com
Digital twin of the human immune system | Nokia.com
The first digital twin of the human immune system highlights the vast societal opportunities in physical-digital fusion. This project in the making could make us safer, healthier and better prepared for the next health crisis. Discover the exponential potential…
NFTs in the city. MoonPay enables Web3 experience at Manhattan’s Seaport
The Howard Hughes Corp., a major real estate development and management company, owns the Seaport. It teamed up with Web3 infrastructure firm MoonPay to create an interactive digital experience in Seaport, New York City's historic waterfront neighborhood.
The Seaport Scavenger Hunt will entertain visitors from August 21 to October 31.
Participants are invited to embark on a quest to discover ten purple “pearls” hidden throughout the Seaport neighborhood.
Each pearl features a QR code that can be scanned using a mobile device. Once scanned, participants will receive a free digital token, designed by 3D artist and NFT creator Johana Kroft, which will be added to their digital wallets.
These digital tokens represent playful interpretations of iconic New York City objects. Collecting all ten tokens will qualify participants to enter a weekly prize drawing. Prizes include VIP tickets to the Summer Concert Series on The Rooftop at Pier 17, complimentary workouts at HIIT the Deck Boxing, curated culinary gift boxes worth up to $350 from the Tin Building, and more. Participants have the opportunity to play multiple times each week, increasing their chances of winning.
Once a bustling maritime and commercial hub, the Seaport has evolved into a vibrant entertainment and dining destination within Lower Manhattan. It serves as a community anchor and attracts creative businesses drawn to the area’s dynamic atmosphere and connectivity.
VentureBeat
MoonPay enables Web3 experience at Manhattan’s Seaport
MoonPay has teamed up with The Howard Hughes Corp. to enable a gamified Web3 experience at Manhattan's Seaport.
Europe spent €600 million to recreate the human brain in a computer. How did it go?
It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end.
Its audacious goal was to understand the human brain by modelling it in a computer.
During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions.
EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments.
Management aside, the HBP has stacked up some important and useful science. By creating and combining 3D maps of around 200 cerebral-cortex and deeper brain structures, HBP scientists made the Human Brain Atlas, which is accessible through EBRAINS. The atlas depicts the multilevel organization of the brain, from its cellular and molecular architecture to its functional modules and connectivity.
OpenAI’s most significant product update since the App Store: GPT-3.5 finetuning API
This will be the largest LoRA-as-a-service ever. GPT-4 fine-tuning is coming in a few months.
Pricing: inference output tokens cost 2x as much as training tokens.
The API is quite simple: submit a file -> create a fine-tuning job -> serve.
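A minimal sketch of that three-step flow, using the pre-1.0 openai Python SDK; the file name and prompt are placeholders:

```python
# Sketch of the submit -> fine-tune -> serve flow described above, using the
# openai Python SDK's pre-1.0 interface. File name and prompt are placeholders.
import openai

# 1. Upload a JSONL file of chat-formatted training examples.
training_file = openai.File.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch a fine-tuning job on GPT-3.5 Turbo.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Serve: once the job finishes, its fine_tuned_model field names the new
#    model, which is called like any other chat model.
job = openai.FineTuningJob.retrieve(job.id)
response = openai.ChatCompletion.create(
    model=job.fine_tuned_model,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```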
OpenAI
GPT-3.5 Turbo fine-tuning and API updates
Developers can now bring their own data to customize GPT-3.5 Turbo for their use cases.
A new startup has been founded by former members of the Imagen team at Google Brain.
Graph of Thoughts: solving elaborate problems with LLMs
- Models LLM generations as an arbitrary graph
- "LLM thoughts" are vertices
- Edges are dependencies between thoughts
- Can combine & enhance LLM thoughts using feedback loops (see the sketch below)
- SoTA on a variety of tasks.
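As an illustrative data-structure sketch of that idea (hypothetical code, not the paper's implementation; all names are invented):

```python
# Hypothetical sketch of a thought graph: vertices are "LLM thoughts", edges
# are dependencies, and aggregation/refinement add new dependent vertices.
from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    parents: list = field(default_factory=list)  # incoming dependency edges

class ThoughtGraph:
    def __init__(self):
        self.vertices = []

    def add(self, text, parents=()):
        t = Thought(text=text, parents=list(parents))
        self.vertices.append(t)
        return t

    def aggregate(self, parents, combine):
        # Combine several thoughts into one, e.g. merging partial solutions.
        return self.add(combine(parents), parents=parents)

    def refine(self, thought, improve):
        # Feedback loop: a new vertex that improves on an existing one.
        return self.add(improve(thought), parents=[thought])

# Toy usage: split a task, then merge the partial results.
g = ThoughtGraph()
a = g.add("Sort the first half of the list")
b = g.add("Sort the second half of the list")
merged = g.aggregate([a, b], lambda ps: "Merge: " + " + ".join(p.text for p in ps))
```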
Nordic Semiconductor is set to buy US startup Atlazo for the company's AI hardware IP. Nordic plans to add on-chip AI acceleration to devices across its portfolio.
EE Times
Nordic Set to Buy U.S. Startup Atlazo for AI Hardware IP
The Norway-based IoT SoC maker plans to add AI acceleration across its entire portfolio.
⚡️ Salesforce is leading a funding round in Hugging Face, valuing the company at more than $4 billion.
The Information
Salesforce Leads Financing of AI Startup at More Than $4 Billion Valuation
Salesforce is leading a financing round in Hugging Face, one of the most highly valued startups helping businesses use artificial intelligence, at a valuation north of $4 billion, according to two people with knowledge of the situation. The roughly $200 million…
New research suggests that our visual memories are not simply what we have just seen, but instead are the result of neural codes dynamically evolving to incorporate how we intend to use that information in the future.
Working memory is an incredibly important aspect of cognition and our daily lives. It enables us to retain small amounts of information to be used later — for example, keeping the elements or sequence of a story in mind while someone tells it, dialing a telephone number you were just told, or tallying your grocery bill as you shop.
With regard to XR, entering an immersive virtual environment that presents novel visual imagery requires us to use our working memory—especially in cognitive training tasks specifically aimed at improving working memory in clinical populations.
The findings of this study suggest that better healthcare outcomes might be achieved when patients encode, alongside the information itself, why they intend to use it, namely their recovery.
“Research makes it clear that memory codes can simultaneously contain information about what we remember seeing and about the future behavior that depends on those visual memories… This means the neural dynamics driving our working memory result from reformatting memories into forms that are closer to later behaviors that rely on visual memories.”
Neuroscience News
Memory’s Future Focus: It’s Not Just What, but Why We Remember
Our ability to process sentences relies on the dynamic nature of working memory, where information is not just stored, but also integrated with our future intentions.
Solana Pay integrates its plug-in with Shopify for USDC payments.
TechCrunch
Solana Pay integrates plug-in with Shopify for USDC payments
Solana Pay, a decentralized payment protocol by Solana Labs, has integrated its plug-in with Shopify, allowing millions of businesses on its platform to use it for payments, TechCrunch was told exclusively.
A new study is out today in Nature! Researchers demonstrate a brain-computer interface that turns speech-related neural activity into text, enabling a person with paralysis to communicate at 62 words per minute - 3.4 times faster than prior work.
The researchers have publicly released all data and code, and are hosting a machine learning competition.
Nature
A high-performance speech neuroprosthesis
Nature - A speech-to-text brain–computer interface that records spiking activity from intracortical microelectrode arrays enabled an individual who cannot speak intelligibly to achieve 9.1...
Lemur-70B: the SOTA open LLM balancing text & code capabilities
The Lemur project is an open collaborative research effort between XLang Lab and Salesforce Research.
Lemur and Lemur-chat are initialized from Llama 2-70B:
1. Pretrain Llama 2 on ~100B tokens of code-focused data → Lemur-70B
2. Finetune Lemur on ~300K examples → Lemur-70B-chat
Lemur outperforms other open-source language models on coding benchmarks, yet remains competitive in textual reasoning and knowledge performance.
Lemur-chat significantly outperforms other open-source supervised fine-tuned models across various dimensions.
Model: huggingface.co/OpenLemur
Blog: xlang.ai/blog/openlemur
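For the curious, a hedged usage sketch via Hugging Face transformers; the exact repo id under huggingface.co/OpenLemur is an assumption:

```python
# Hedged sketch: loading Lemur-chat with transformers. The repo id is an
# assumption; a 70B model also needs multiple GPUs or heavy quantization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLemur/lemur-70b-chat-v1"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```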
Brain-computer interface helped create a digital avatar of a stroke survivor’s face
A woman who lost her ability to speak after a stroke 18 years ago was able to replicate her voice and even convey a limited range of facial expressions via a computer avatar. A pair of papers published in Nature yesterday, about experiments that restored speech to two women via brain implants, shows just how quickly this field is advancing.
How they did it: Both teams used recording devices implanted into the brain to capture the signals controlling the small movements that provide facial expressions. Then they used AI algorithms to decode them into words, and a language model to adjust for accuracy. One team, led by Edward Chang, a neurosurgeon at the University of California, San Francisco, even managed to capture emotions.
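To illustrate the decode-then-rescore pattern in miniature (a toy sketch; the candidates and stand-in language model below are invented, not the papers' pipeline):

```python
# Toy decode-then-rescore: a decoder proposes scored transcriptions, and a
# language model rescores them to correct errors. All values are invented.
def rescore(candidates, lm_log_prob, lm_weight=0.5):
    """candidates: list of (text, decoder_log_prob) pairs; returns best text."""
    return max(candidates, key=lambda c: c[1] + lm_weight * lm_log_prob(c[0]))[0]

# Hypothetical decoder output for one utterance:
candidates = [("I am thirsty", -2.1), ("eye am thirsty", -1.9)]

# Stand-in language model that favors common words (a real system would use
# an n-gram or neural LM).
def lm_log_prob(text):
    common = {"i", "am", "thirsty"}
    return sum(0.0 if w in common else -3.0 for w in text.lower().split())

print(rescore(candidates, lm_log_prob))  # -> "I am thirsty"
```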
The caveats: Researchers caution that these results may not hold for other people, and either way, we are still a very long way from tech that’s available to the wider public. Still, these proofs of concept are hugely exciting.
MIT Technology Review
Brain implants helped create a digital avatar of a stroke survivor’s face
Two new papers show major advances in the effort to translate brain activity into speech.
Meta AI released Code Llama, a large language model built on top of Llama 2 and fine-tuned for coding, achieving state-of-the-art performance among publicly available coding models.
Meta AI
Introducing Code Llama, a state-of-the-art large language model for coding
Code Llama, which is built on top of Llama 2, is free for research and commercial use.
Scalable Bayes nets getting some love (or at least some money).
TechCrunch
Ikigai lands $25M investment to bring generative AI to tabular data
Organizations are awash in data, but struggle with a host of challenges to actually use, organize and analyze that data. According to one estimate, companies will store 100 zettabytes of data in the cloud by 2025. But as of now, just 13% of organizations…