Stripe introduced stablecoin financial accounts. Hold a stablecoin balance. Send and receive funds with fiat and crypto rails. Accessible from 101 countries.
Stripe
Use stablecoins in Financial Accounts
Receive payments, store balances, manage expenses, and send funds globally with stablecoins or fiat currency. Supported currencies include USDC (Arbitrum, Avalanche C-Chain, Base, Ethereum, Optimism, Polygon, Solana, Stellar), USD (ACH, wire transfer), and…
Material? Robot? It’s a metabot.
It can transform between a material and a robot, and it is controllable with an external magnetic field.
Researchers describe how they drew inspiration from origami to create a structure that blurs the lines between robotics and materials.
The invention is a metamaterial, which is a material engineered to feature new and unusual properties that depend on the material’s physical structure rather than its chemical composition.
In this case, the researchers built their metamaterial using a combination of simple plastics and custom-made magnetic composites.
Using a magnetic field, the researchers changed the metamaterial’s structure, causing it to expand, move and deform in different directions, all remotely without touching the metamaterial.
Princeton Engineering
Princeton Engineering - Material? Robot? It's a metabot
Meta introduced LlamaFirewall: An open source guardrail system for building secure AI agents
Mitigates risks such as prompt injection, agent misalignment, and insecure code through three powerful guardrails.
GitHub.
Paper.
Meta AI
Sharing new open source protection tools and advancements in AI privacy and security
Today, we’re releasing new Llama protection tools for the open source AI community.
Alibaba's Qwen team has created ZeroSearch, a way to do search through AI without accessing real-world search engine APIs like Google's.
Chinese users are already turning to AI apps like DeepSeek and Yuanbao for search instead of Baidu.
Improved AI search capabilities will only move users away from Baidu.
Long term, improved AI search tools that don't use Google's API could significantly hurt Alphabet's revenue globally.
GitHub
GitHub - Alibaba-NLP/ZeroSearch: ZeroSearch: Incentivize the Search Capability of LLMs without Searching
ZeroSearch: Incentivize the Search Capability of LLMs without Searching - Alibaba-NLP/ZeroSearch
BIS_DeFiying_gravity_1747043432.pdf
1.1 MB
A new analysis by the Bank for International Settlements sheds light on cross-border BTC, ETH and stablecoin flows.
Cross-border crypto flows have surged from under US $7 billion in Q1 2017 to a peak of US $2.6 trillion in 2021, with #stablecoins (USDT, USDC) now representing nearly half of that volume.
After a post-2021 dip, flows rebounded to around US $1.8 trillion in 2023 and approached US $600 billion by mid-2024, underscoring continuing ecosystem expansion.
A ‘gravity’ analysis shows that unlike traditional trade or banking, geographic distance and borders exert minimal drag on crypto transactions.
Sharing a common language still boosts flows (≈13 %), but far less than in conventional finance, while physical proximity has an almost negligible effect.
This reflects blockchain’s borderless #infrastructure and the rise of digital corridors.
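A stylized version of such a gravity regression, on synthetic data with made-up coefficients (not the BIS dataset), shows the mechanics: regress log flows on log GDPs, log distance, and a common-language dummy, then read off how little distance matters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_gdp_i = rng.normal(10, 1, n)      # log GDP of origin country
log_gdp_j = rng.normal(10, 1, n)      # log GDP of destination country
log_dist = rng.normal(8, 1, n)        # log bilateral distance
common_lang = rng.integers(0, 2, n).astype(float)  # shared-language dummy

# Simulate flows where distance barely matters but a shared language helps,
# mimicking the headline finding (these coefficients are invented).
log_flow = (1.0 * log_gdp_i + 1.0 * log_gdp_j
            - 0.05 * log_dist + 0.13 * common_lang
            + rng.normal(0, 0.1, n))

# Ordinary least squares via the normal equations.
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist, common_lang])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)
print(dict(zip(["const", "gdp_i", "gdp_j", "distance", "common_lang"],
               beta.round(2))))
```

In conventional trade gravity models the distance coefficient is strongly negative; the point here is that for crypto flows it is close to zero while the language effect survives.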
Importantly, global funding conditions and #market sentiment now shape crypto movements.
A 1 % rise in expected volatility corresponds to a ≈2 % jump in Bitcoin flows, highlighting speculative trading motives.
Conversely, tighter credit spreads and #dollar strength dampen volumes, signalling growing integration with traditional #financial cycles.
Distinct use-cases emerge across asset types. Native tokens (BTC, ETH) respond strongly to speculative and funding factors, whereas stablecoins (USDT, USDC) and low-value Bitcoin transfers are sensitive to remittance‐cost differentials.
Corridors with high #fiat remittance fees see larger stablecoin flows, suggesting crypto’s role as an alternative payments rail.
And interestingly, capital‐control measures appear largely ineffective.
Tightening outflow or inflow restrictions often coincide with stable or even higher crypto flows, pointing to circumvention incentives.
Looking Ahead
1. Authorities should enhance real-time monitoring of stablecoin networks and improve attribution methods.
2. #CBDC frameworks may need to incorporate cross-border interoperability and #privacy safeguards to compete with private stablecoins.
3. As crypto flows increasingly substitute traditional remittances, coordination among regulators on transparency, #AML/CFT standards, and capital‐controls becomes critical.
4. Continued growth in #DeFi suggests policy must evolve from asset‐level oversight to network‐level risk management.
Researchers at Tsinghua University introduced Absolute Zero, a new method for AI training
It enables models to learn and master complex reasoning tasks on their own through self-play.
Can be a strong alternative to training with costly human-labeled data.
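The self-play loop can be sketched in a few lines: one role proposes tasks with checkable answers, another solves them, and the reward comes from verification rather than human labels (toy arithmetic stands in for the code-reasoning tasks the paper uses; in the real system the proposer and solver are the same LLM):

```python
import random

def propose_task(rng):
    """Proposer: invent a task together with a verifiable ground truth."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"{a}+{b}", a + b

def solve(task):
    """Solver: attempt the task. A real system samples from the same model."""
    return eval(task)  # stand-in executor; only trusted generated arithmetic

def self_play_round(rng):
    task, truth = propose_task(rng)
    answer = solve(task)
    reward = 1.0 if answer == truth else 0.0  # verifiable reward, zero labels
    return task, reward

rng = random.Random(0)
rewards = [self_play_round(rng)[1] for _ in range(5)]
print(rewards)
```

The learning signal both roles train on is this verifiable reward, which is what removes the need for labeled data.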
Paper.
GitHub.
Models.
arXiv.org
Absolute Zero: Reinforced Self-play Reasoning with Zero Data
Reinforcement learning with verifiable rewards (RLVR) has shown promise in enhancing the reasoning capabilities of large language models by learning directly from outcome-based rewards. Recent...
AG-UI is the Agent-User Interaction Protocol. This is a protocol for building user-facing AI agents. It's a bridge between a backend AI agent and a full-stack application.
Up to this point, most agents are backend automators: form-fillers, summarizers, and schedulers. They are useful as backend tools.
But interactive agents like Cursor open agents up to a whole new set of domains, and they have been extremely hard to build.
If you want to build an agent that co-works with users, you need:
• Real-time updates
• Tool orchestration
• Shared mutable state
• Security boundaries
• UI synchronization
AG-UI gives you all of this.
It’s a lightweight, event-streaming protocol (over HTTP/SSE/webhooks) that creates a unified pipe between your agent backend (OpenAI, Ollama, LangGraph, custom code) and your frontend.
Here is how it works:
• Client sends a POST request to the agent endpoint
• Then listens to a unified event stream over HTTP
• Each event includes a type and a minimal payload
• Agents emit events in real-time
• The frontend can react immediately to these events
• The frontend emits events and context back to the agent
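The request/stream flow above can be sketched as a tiny event parser (the event names and payloads here are illustrative, not the official AG-UI schema, which lives in the repo):

```python
import json

def parse_sse_events(stream_text):
    """Parse a raw SSE-style stream into (type, payload) event tuples.

    Each AG-UI-style event carries a type and a minimal JSON payload,
    so the frontend can react to each one as it arrives.
    """
    events = []
    for block in stream_text.strip().split("\n\n"):
        event_type, data = None, None
        for line in block.splitlines():
            if line.startswith("event:"):
                event_type = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = json.loads(line[len("data:"):].strip())
        if event_type is not None:
            events.append((event_type, data))
    return events

# A simulated agent stream: the client POSTs once, then reads events like these.
raw = (
    "event: text_delta\ndata: {\"delta\": \"Hel\"}\n\n"
    "event: text_delta\ndata: {\"delta\": \"lo\"}\n\n"
    "event: run_finished\ndata: {\"status\": \"ok\"}\n\n"
)

for etype, payload in parse_sse_events(raw):
    print(etype, payload)
```

A real client would hold the HTTP connection open and feed chunks into the parser incrementally; the key design point is that every UI update is just another typed event on the same pipe.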
GitHub
GitHub - ag-ui-protocol/ag-ui: AG-UI: the Agent-User Interaction Protocol. Bring Agents into Frontend Applications.
AG-UI: the Agent-User Interaction Protocol. Bring Agents into Frontend Applications. - ag-ui-protocol/ag-ui
Prime Intellect introduced SYNTHETIC-1: Collaboratively generating the largest synthetic dataset of verified reasoning traces for math, coding and science using DeepSeek-R1. SYNTHETIC-1: - 1.4 million high-quality tasks & verifiers - Public synthetic data…
Prime Intellect released INTELLECT-2: the first 32B-parameter model trained via globally distributed reinforcement learning. It’s open-source.
Report.
HuggingFace.
www.primeintellect.ai
INTELLECT-2 Release: The First 32B Parameter Model Trained Through Globally Distributed Reinforcement Learning
We're excited to release INTELLECT-2, the first 32B parameter model trained via globally distributed reinforcement learning. Unlike traditional centralized training efforts, INTELLECT-2 trains a reasoning language model using fully asynchronous RL across…
OpenAI just released HealthBench — a new eval for AI systems for health.
Developed with 262 physicians who have practiced in 60 countries.
OpenAI
Introducing HealthBench
HealthBench is a new evaluation benchmark for AI in healthcare which evaluates models in realistic scenarios. Built with input from 250+ physicians, it aims to provide a shared standard for model performance and safety in health.
Researchers trained an end-to-end visual navigation policy on 2000 hours of uncurated, crowd-sourced data and evaluated it across 24 environments in 6 countries
Recipe: train a model-based policy on a small amount of clean, short-horizon data → use it to re-label actions for the uncurated dataset → train your favorite end-to-end behavior-cloning (BC) model on those labels!
Open-sourced code & data.
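The re-labeling step of that recipe can be sketched as follows (the model-based policy here is a trivial stub that points at the goal; the real MBRA policy is learned from the clean short-horizon data):

```python
def model_based_policy(obs):
    """Stand-in for the policy trained on clean short-horizon data:
    output an action vector pointing from current position to goal."""
    goal, pos = obs["goal"], obs["pos"]
    return [goal[0] - pos[0], goal[1] - pos[1]]

def relabel(uncurated):
    """Replace noisy or missing actions in crowd-sourced trajectories
    with actions predicted by the model-based policy."""
    return [{"obs": d["obs"], "action": model_based_policy(d["obs"])}
            for d in uncurated]

# A crowd-sourced sample with no usable action label.
uncurated = [{"obs": {"pos": [0, 0], "goal": [1, 2]}, "action": None}]
labeled = relabel(uncurated)
print(labeled[0]["action"])
```

The end-to-end BC model is then trained on the `labeled` pairs exactly as if they were clean demonstrations.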
GitHub
GitHub - NHirose/Learning-to-Drive-Anywhere-with-MBRA
Contribute to NHirose/Learning-to-Drive-Anywhere-with-MBRA development by creating an account on GitHub.
MLE-Dojo: Interactive Environments for Empowering LLM Agents in Machine Learning Engineering
A Gym-style framework for systematically training, evaluating, and improving agents in iterative ML engineering workflows.
Paper.
GitHub.
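A Gym-style interaction loop in this setting looks roughly like the following (class and method names are illustrative, not MLE-Dojo's actual API):

```python
class MLEEnv:
    """Minimal Gym-style environment sketch for iterative ML engineering:
    the agent acts, the env executes and returns feedback plus a score."""

    def __init__(self, target=0.9):
        self.target = target  # validation score the task demands
        self.score = 0.0

    def reset(self):
        self.score = 0.0
        return {"task": "improve the model", "score": self.score}

    def step(self, action):
        # In MLE-Dojo the action is code/commands the env actually runs;
        # here a stub where good actions simply add to the score.
        self.score = min(1.0, self.score + action.get("gain", 0.0))
        obs = {"task": "improve the model", "score": self.score}
        reward = self.score
        done = self.score >= self.target
        return obs, reward, done, {}

env = MLEEnv()
obs = env.reset()
done, steps, reward = False, 0, 0.0
while not done:
    obs, reward, done, info = env.step({"gain": 0.25})
    steps += 1
print(steps, round(reward, 2))
```

The point of the Gym-style shape (`reset`/`step` returning observation, reward, done) is that any RL or agent-training loop can plug in unchanged.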
arXiv.org
MLE-Dojo: Interactive Environments for Empowering LLM Agents in...
We introduce MLE-Dojo, a Gym-style framework for systematically training, evaluating, and improving autonomous large language model (LLM) agents in iterative machine learning...
Notion released AI for Work, a suite of work-centered AI features, including:
— AI Meeting Notes
— Enterprise Search to find answers across tools
— Research Mode to draft docs
— Access to models, including GPT-4.1 & Claude 3.7
Notion
Is Notion's Business or Enterprise plan right for you?
Bringing AI into daily workflows is one of the biggest challenges companies face. Our Business and Enterprise plans meet that head-on with powerful AI tools, custom workflows, and rock-solid security— all in one connected workspace. Find the plan that fits…
Google introduced AlphaEvolve, an AI coding agent.
It’s able to:
1. Design faster matrix multiplication algorithms
2. Find new solutions to open math problems
3. Make data centers, chip design and AI training more efficient across Google.
AlphaEvolve uses:
- LLMs: To synthesize information about problems as well as previous attempts to solve them - and to propose new versions of algorithms
- Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
- Evolution: Iteratively improving the best algorithms found, and re-combining ideas from different solutions to find even better ones.
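The three components combine into a simple evolutionary loop, sketched here on a toy maximization task (a random Gaussian mutation stands in for the LLM's proposals, and the score function stands in for the automated evaluator):

```python
import random

def evaluate(candidate):
    """Automated evaluator: the score must be clearly and systematically
    measurable. Here the 'program' is a 2-D point and the task is toy
    maximization with optimum at (3, -1)."""
    x, y = candidate
    return -(x - 3) ** 2 - (y + 1) ** 2

def llm_propose(parent):
    """Stand-in for the LLM: propose a mutated version of a strong
    candidate. AlphaEvolve instead prompts Gemini with the problem
    description plus prior attempts."""
    return tuple(v + random.gauss(0, 0.5) for v in parent)

random.seed(0)
population = [(0.0, 0.0)]
for _ in range(300):
    parent = max(population, key=evaluate)           # select the best so far
    child = llm_propose(parent)                      # LLM-style proposal
    population.append(child)
    population = sorted(population, key=evaluate)[-10:]  # keep the fittest

best = max(population, key=evaluate)
print(round(best[0], 1), round(best[1], 1))
```

The real system evolves whole programs and recombines ideas across candidates, but the loop structure (propose, evaluate, select, repeat) is the same.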
Google applied AlphaEvolve to a fundamental problem in computer science: discovering algorithms for matrix multiplication. It managed to identify multiple new algorithms.
This significantly advances Google's previous model, AlphaTensor, which AlphaEvolve outperforms thanks to its broader, more general approach.
Google DeepMind
AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
New AI agent evolves algorithms for math and practical applications in computing by combining the creativity of large language models with automated evaluators
Meta just released new models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience.
1. Open Molecules 2025 (OMol25): A dataset for molecular discovery with simulations of large atomic systems.
2. Universal Model for Atoms: A machine learning interatomic potential for modeling atom interactions across a wide range of materials and molecules.
3. Adjoint Sampling: A scalable algorithm for training generative models based on scalar rewards.
4. FAIR and the Rothschild Foundation Hospital partnered on a large-scale study that reveals striking parallels between language development in humans and LLMs.
Meta AI
Sharing new breakthroughs and artifacts supporting molecular property prediction, language processing, and neuroscience
Meta FAIR is sharing new research artifacts that highlight our commitment to advanced machine intelligence (AMI) through focused scientific and academic progress.
#DeepSeek presents: Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures
Elaborates on hardware architecture and model design in achieving cost-efficient large-scale training and inference.
Google introduced a notion of "sufficient context" for examining retrieval-augmented generation (RAG) systems: developing a method to classify instances, analyzing RAG failures, and proposing a way to reduce hallucinations.
research.google
Deeper insights into retrieval augmented generation: The role of sufficient context
Today "a milestone in the evolution of personalized therapies for rare & ultra-rare inborn errors of metabolism"
—the 1st human to undergo custom genome editing
—outgrowth of decades of NIH funded research.
Paper.
NY Times
Baby Is Healed With World’s First Personalized Gene-Editing Treatment
The technique used on a 9½-month-old boy with a rare condition has the potential to help people with thousands of other uncommon genetic diseases.
Agents from scratch
This repo covers the basics of building agents:
+ Fundamentals
+ Build an agent
+ Agent eval
+ Agent w/ human-in-the-loop
+ Agent w/ long-term memory
Code (all open source).
Building agents - Combining a workflow (router) with an agent that can call email tools.
Notebook.
Slides.
Agent evals - Unit tests (Pytest) for triage decisions + tool calls (test structured outputs using a heuristic eval) and LLM-as-judge to eval email responses.
Notebook.
Slides.
Human-in-the-loop - Add a human in the loop for approval/editing of specific tool calls.
Notebook.
Memory - Add memory, so the agent learns email-response preferences from human feedback.
Notebook.
Agent can be hooked into Gmail by swapping out the tools used. Components are also general and can be used w/ various tools / MCP servers.
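The router-plus-tools pattern from the first notebook can be sketched like this (the triage rules and reply tool are stubs; the repo wires these steps to an LLM and real email tools):

```python
def triage(email):
    """Router step: classify an email before any tool calls are made."""
    text = email["subject"].lower() + " " + email["body"].lower()
    if "unsubscribe" in text or "winner" in text:
        return "ignore"
    if "meeting" in text or "schedule" in text:
        return "respond"
    return "notify"

def write_reply(email):
    """Tool stub: a real agent would call an LLM plus calendar/email
    tools here, possibly pausing for human approval first."""
    return f"Re: {email['subject']} - happy to find a time."

def run_agent(email):
    decision = triage(email)
    if decision == "respond":
        return decision, write_reply(email)
    return decision, None

email = {"subject": "Schedule a meeting?", "body": "Next week works."}
print(run_agent(email))
```

Because the router and tools are separate functions, each piece can be unit-tested on its own, which is exactly what the evals notebook does with Pytest.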
Google Docs
Building Ambient Agents: Teaser + LangGraph 101
Building ambient agents with
Qwen introduced Parallel Scaling Law for Language Models
"We introduce the third and more inference-efficient scaling paradigm: increasing the model’s parallel computation during both training and inference time."
"We draw inspiration from classifier-free guidance (CFG)".
"In this paper, we hypothesize that the effectiveness of CFG lies in its double computation."
"We propose a proof-of-concept scaling approach called parallel scaling (PARSCALE) to validate this hypothesis on language models. "
"parallelizing into P streams equates to scaling the model parameters by O(log P)".
"for a 1.6B model, when scaling to P = 8 using PARSCALE, it uses 22× less memory increase and 6× less latency increase compared to parameter scaling that achieves the same model capacity".
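A toy sketch of the PARSCALE idea (a frozen linear map stands in for the LLM; the paper actually uses learned prefixes and a learned dynamic aggregator): run P perturbed copies of the input through the same weights, then aggregate the P output streams.

```python
import numpy as np

rng = np.random.default_rng(0)
d, P = 16, 8  # hidden size and number of parallel streams

# One shared "model" (a frozen linear map here, standing in for the LLM).
W = rng.normal(size=(d, d)) / np.sqrt(d)

# P input transforms (the paper learns these via prefix tuning) and
# aggregation weights over the P output streams (also learned there).
prefixes = rng.normal(size=(P, d)) * 0.1
agg = np.full(P, 1.0 / P)

def parscale_forward(x):
    # Run P perturbed copies of the input through the SAME weights:
    # parameters stay O(model size) while compute grows with P.
    streams = np.stack([W @ (x + prefixes[p]) for p in range(P)])
    # Aggregate the streams into one output.
    return agg @ streams

x = rng.normal(size=d)
y = parscale_forward(x)
print(y.shape)
```

The paper's claim is that this extra parallel compute buys capacity comparable to scaling parameters by O(log P), but with far smaller memory and latency cost, since the P streams share one set of weights and one batched forward pass.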
GitHub
arXiv.org
Parallel Scaling Law for Language Models
It is commonly believed that scaling language models should commit a significant space or time cost, by increasing the parameters (parameter scaling) or output tokens (inference-time scaling). We...
OpenAI introduced Codex, an AI agent.
It is a software engineering agent that runs in the cloud and does tasks for you, like writing a new feature or fixing a bug.
You can run many tasks in parallel. Starting to roll out today to ChatGPT Pro, Enterprise, and Team users.
OpenAI
Introducing Codex
Introducing Codex: a cloud-based software engineering agent that can work on many tasks in parallel, powered by codex-1. With Codex, developers can simultaneously deploy multiple agents to independently handle coding tasks such as writing features, answering…