All about AI, Web 3.0, BCI
This channel is about AI, Web 3.0, and brain–computer interfaces (BCI).

owner @Aniaslanyan
Material? Robot? It’s a metabot.

It can transform between a material and a robot, and it is controllable with an external magnetic field.

Researchers describe how they drew inspiration from origami to create a structure that blurs the lines between robotics and materials.

The invention is a metamaterial, which is a material engineered to feature new and unusual properties that depend on the material’s physical structure rather than its chemical composition.

In this case, the researchers built their metamaterial using a combination of simple plastics and custom-made magnetic composites.

Using a magnetic field, the researchers changed the metamaterial’s structure, causing it to expand, move and deform in different directions, all remotely without touching the metamaterial.
Meta introduced LlamaFirewall: An open source guardrail system for building secure AI agents

Mitigates risks such as prompt injection, agent misalignment, and insecure generated code through three powerful guardrails.
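
For intuition, here is a hedged sketch of the pattern such a guardrail layer implements: scan the inbound prompt before the agent acts and scan generated code before it runs. All names are illustrative placeholders, not the real llamafirewall API (see the GitHub link below for that).

```python
# Illustrative guardrail pattern only; not the actual llamafirewall API.
from dataclasses import dataclass

@dataclass
class ScanResult:
    allowed: bool
    reason: str = "ok"

def prompt_guard(user_input: str) -> ScanResult:
    # Stand-in for a trained prompt-injection / jailbreak classifier.
    if "ignore previous instructions" in user_input.lower():
        return ScanResult(False, "possible prompt injection")
    return ScanResult(True)

def code_shield(generated: str) -> ScanResult:
    # Stand-in for static analysis of agent-generated code.
    if "eval(" in generated:
        return ScanResult(False, "insecure code pattern")
    return ScanResult(True)

def guarded_step(agent, user_input: str) -> str:
    check = prompt_guard(user_input)
    if not check.allowed:
        return f"Blocked: {check.reason}"
    output = agent(user_input)        # the agent's normal step
    post = code_shield(output)        # screen output before it is executed
    return output if post.allowed else f"Blocked: {post.reason}"
```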

GitHub.
Paper.
Alibaba's Qwen team has created ZeroSearch, a way to do search through AI without calling real-world search engine APIs like Google's.

People in China are already using AI apps like DeepSeek & Yuanbao for search instead of Baidu.

Improved AI search capabilities will only move users further away from Baidu.

Long term, improved AI search that doesn't use Google's API could significantly hurt Alphabet's revenue globally.
BIS_DeFiying_gravity_1747043432.pdf (1.1 MB)
A new analysis by the Bank for International Settlements sheds light on cross-border BTC, ETH and stablecoin flows.

Cross-border crypto flows have surged from under US $7 billion in Q1 2017 to a peak of US $2.6 trillion in 2021, with #stablecoins (USDT, USDC) now representing nearly half of that volume.

After a post-2021 dip, flows rebounded to around US $1.8 trillion in 2023 and approached US $600 billion by mid-2024, underscoring continuing ecosystem expansion.

A ‘gravity’ analysis shows that unlike traditional trade or banking, geographic distance and borders exert minimal drag on crypto transactions.

Sharing a common language still boosts flows (≈13 %), but far less than in conventional finance, while physical proximity has an almost negligible effect.

This reflects blockchain’s borderless #infrastructure and the rise of digital corridors.
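
For readers unfamiliar with the method: a gravity regression explains bilateral flows with distance plus border and language dummies. A generic specification of the kind being estimated (illustrative; the BIS paper's exact controls differ):

```latex
% F_{ijt}: crypto flow from country i to j at time t; X_{ijt}: controls.
\log F_{ijt} = \beta_0 + \beta_1 \log(\mathrm{dist}_{ij})
             + \beta_2\,\mathrm{lang}_{ij}
             + \beta_3\,\mathrm{border}_{ij}
             + \gamma^{\top} X_{ijt} + \varepsilon_{ijt}
```

"Minimal drag" means the estimated distance coefficient is close to zero, while the common-language dummy delivers the ≈13% boost quoted above.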

Importantly, global funding conditions and #market sentiment now shape crypto movements.

A 1 % rise in expected volatility corresponds to a ≈2 % jump in Bitcoin flows, highlighting speculative trading motives.

Conversely, tighter credit spreads and #dollar strength dampen volumes, signalling growing integration with traditional #financial cycles.

Distinct use-cases emerge across asset types. Native tokens (BTC, ETH) respond strongly to speculative and funding factors, whereas stablecoins (USDT, USDC) and low-value Bitcoin transfers are sensitive to remittance‐cost differentials.

Corridors with high #fiat remittance fees see larger stablecoin flows, suggesting crypto’s role as an alternative payments rail.

And interestingly, capital‐control measures appear largely ineffective.

Tightening outflow or inflow restrictions often coincides with stable or even higher crypto flows, pointing to circumvention incentives.

Looking Ahead

1. Authorities should enhance real-time monitoring of stablecoin networks and improve attribution methods.

2. #CBDC frameworks may need to incorporate cross-border interoperability and #privacy safeguards to compete with private stablecoins.

3. As crypto flows increasingly substitute traditional remittances, coordination among regulators on transparency, #AML/CFT standards, and capital controls becomes critical.

4. Continued growth in #DeFi suggests policy must evolve from asset‐level oversight to network‐level risk management.
Researchers at Tsinghua University introduced Absolute Zero, a new method for AI training

It enables models to learn and master complex reasoning tasks on their own through self-play.

Can be a strong alternative to training with costly human-labeled data.
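
Schematically, the self-play loop pairs a task proposer and a solver in one model, with a code executor supplying verifiable reward. A rough sketch with placeholder names (the actual system uses RL objectives; this only shows the data flow):

```python
# Placeholder names throughout; only the proposer/solver/verifier data flow
# is faithful to the description, not the training details.
def self_play_step(model, executor, task_buffer):
    # PROPOSE: the model invents a new task, conditioned on past tasks
    # so proposals stay diverse and learnable.
    task = model.propose(reference_tasks=task_buffer.sample())
    gold = executor.run(task)              # executor derives the ground truth
    if gold is None:                       # invalid or non-terminating task
        return
    # SOLVE: the same model then attempts its own task.
    answer = model.solve(task)
    reward = 1.0 if executor.equivalent(answer, gold) else 0.0
    model.rl_update(task, answer, reward)  # verifiable reward, no human labels
    task_buffer.add(task)
```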

Paper.

GitHub.

Models.
AG-UI is the Agent-User Interaction Protocol, a protocol for building user-facing AI agents: a bridge between a backend AI agent and a full-stack application.

Up to this point, most agents are backend automators: form-fillers, summarizers, and schedulers. They are useful as backend tools.

But interactive agents like Cursor can bring agents to a whole new set of domains, and they have been extremely hard to build.

If you want to build an agent that co-works with users, you need:

• Real-time updates
• Tool orchestration
• Shared mutable state
• Security boundaries
• UI synchronization

AG-UI gives you all of this.

It’s a lightweight, event-streaming protocol (over HTTP/SSE/webhooks) that creates a unified pipe between your agent backend (OpenAI, Ollama, LangGraph, custom code) and your frontend.

Here is how it works:

• Client sends a POST request to the agent endpoint
• Then listens to a unified event stream over HTTP
• Each event includes a type and a minimal payload
• Agents emit events in real-time
• The frontend can react immediately to these events
• The frontend emits events and context back to the agent
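
A minimal Python sketch of the client side of that flow, assuming a server-sent-events response; the endpoint path and event field names here are illustrative assumptions, not taken verbatim from the AG-UI spec:

```python
import json
import requests  # pip install requests

# Hypothetical AG-UI-style client: POST the user input, then consume the
# typed event stream the agent emits over SSE.
resp = requests.post(
    "http://localhost:8000/agent",  # illustrative endpoint
    json={"messages": [{"role": "user", "content": "Summarize my inbox"}]},
    stream=True,
)
for line in resp.iter_lines():
    if not line.startswith(b"data:"):
        continue
    event = json.loads(line[len(b"data:"):])
    # Each event carries a type and a minimal payload; the frontend can
    # react immediately (streamed text, tool progress, state patches, ...).
    if event["type"] == "TEXT_MESSAGE_CONTENT":
        print(event["delta"], end="", flush=True)
    elif event["type"] == "STATE_DELTA":
        pass  # apply the patch to shared mutable state (illustrative)
```
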
Researchers trained an end-to-end visual navigation policy on 2000 hours of uncurated, crowd-sourced data and evaluated it across 24 environments in 6 countries

Recipe: Train a model-based policy on a small amount of clean short-horizon data --> use it to re-label actions for the uncurated dataset --> train your favorite end-to-end behavior-cloning (BC) model on these labels!
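
The same recipe as a schematic (helper names are placeholders standing in for the released code):

```python
def relabel_and_train(clean_data, uncurated_data, train_model_based, train_bc):
    # Step 1: model-based policy from a small amount of clean, short-horizon data.
    labeler = train_model_based(clean_data)

    # Step 2: use it to re-label actions across the 2000 h uncurated corpus.
    relabeled = []
    for traj in uncurated_data:                  # traj: sequence of observations
        for obs, next_obs in zip(traj, traj[1:]):
            relabeled.append((obs, labeler.infer_action(obs, next_obs)))

    # Step 3: train any end-to-end behavior-cloning model on the new labels.
    return train_bc(relabeled)
```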

Open-sourced code & data.
MLE-Dojo: Interactive Environments for Empowering LLM Agents in Machine Learning Engineering

A Gym-style framework for systematically training, evaluating, and improving agents in iterative ML engineering workflows.
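
"Gym-style" implies the familiar reset/step loop; a hedged sketch of an episode (method names follow the Gym convention and may not match MLE-Dojo's exact API):

```python
def run_episode(env, agent):
    # The observation is execution feedback: scores, tracebacks, leaderboard
    # position, etc. The agent iterates on its ML engineering workflow.
    obs = env.reset()
    done = False
    while not done:
        action = agent.act(obs)                     # e.g., edit code, run training, submit
        obs, reward, done, info = env.step(action)  # env executes and scores it
        agent.learn(obs, reward)
    return info
```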

Paper.
GitHub.
Cohere presents Aya Vision: Advancing the Frontier of Multilingual Multimodality

- Aya-Vision-8B outperforms Qwen-2.5-VL-7B
- Aya-Vision-32B outperforms Qwen-2.5-VL-72B

Paper.
Models.
Bench.
Notion released AI for Work, a suite of work-centered AI features, including:

— AI Meeting Notes
— Enterprise Search to find answers across tools
— Research Mode to draft docs
— Access to models, including GPT-4.1 & Claude 3.7
Google introduced AlphaEvolve, an AI coding agent

It’s able to:

1. Design faster matrix multiplication algorithms
2. Find new solutions to open math problems
3. Make data centers, chip design and AI training more efficient across Google.

AlphaEvolve uses:
- LLMs: To synthesize information about problems as well as previous attempts to solve them, and to propose new versions of algorithms
- Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
- Evolution: Iteratively improving the best algorithms found, and re-combining ideas from different solutions to find even better ones.
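
Combined, these form an evolutionary loop; an entirely illustrative sketch (not Google's implementation):

```python
def evolve(llm, evaluate, baseline_program, generations=10):
    population = [baseline_program]
    for _ in range(generations):
        # Automated evaluation + selection: keep the best-scoring programs.
        parents = sorted(population, key=evaluate, reverse=True)[:4]
        # LLM proposal: new variants conditioned on previous attempts.
        children = [llm.propose_variant(parents) for _ in range(8)]
        population = parents + children
    return max(population, key=evaluate)
```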

Google applied AlphaEvolve to a fundamental problem in computer science: discovering algorithms for matrix multiplication. It managed to identify multiple new algorithms.

This significantly advances Google's previous system AlphaTensor, which AlphaEvolve outperforms thanks to its broader, more generalist approach.
Meta just released new models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience.

1. Open Molecules 2025 (OMol25): A dataset for molecular discovery with simulations of large atomic systems.

2. Universal Model for Atoms: A machine learning interatomic potential for modeling atom interactions across a wide range of materials and molecules.

3. Adjoint Sampling: A scalable algorithm for training generative models based on scalar rewards.

4. FAIR and the Rothschild Foundation Hospital partnered on a large-scale study that reveals striking parallels between language development in humans and LLMs.
#DeepSeek presents: Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures

Elaborates on hardware architecture and model design in achieving cost-efficient large-scale training and inference.
Google introduced the notion of "sufficient context" to examine retrieval-augmented generation (RAG) systems, developing a method to classify instances, analyzing RAG failure modes, and proposing a way to reduce hallucinations.
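
One practical reading of the idea is selective generation: answer only when the retrieved context is judged sufficient, abstain otherwise. A hedged sketch (the prompt and names are illustrative, not the paper's exact method):

```python
def answer_if_sufficient(question, retrieved_docs, llm):
    context = "\n\n".join(retrieved_docs)
    verdict = llm(
        f"Question: {question}\n\nContext: {context}\n\n"
        "Does the context contain enough information to answer? "
        "Reply SUFFICIENT or INSUFFICIENT."
    )
    if "INSUFFICIENT" in verdict.upper():
        return "I don't know."   # abstain instead of hallucinating
    return llm(f"Answer using only the context.\n\n"
               f"Question: {question}\n\nContext: {context}")
```
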
Today "a milestone in the evolution of personalized therapies for rare & ultra-rare inborn errors of metabolism"
—the 1st human to undergo custom genome editing
—an outgrowth of decades of NIH-funded research.

Paper.
Agents from scratch

This repo covers the basics of building agents:
+ Fundamentals
+ Build an agent
+ Agent eval
+ Agent w/ human-in-the-loop
+ Agent w/ long-term memory
Code (all open source).

Building agents - Combining a workflow (router) with an agent that can call email tools (see the sketch below).
Notebook.
Slides.
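
A bare-bones sketch of that router-plus-agent shape in plain Python (the repo itself uses LangGraph; all names here are placeholders):

```python
def triage(email: str, llm) -> str:
    # Router: a structured decision that is cheap to unit-test.
    label = llm(f"Classify this email as ignore, notify, or respond:\n\n{email}")
    return label.strip().lower()

def handle_email(email: str, llm, tool_calling_agent, tools) -> str:
    decision = triage(email, llm)
    if decision == "respond":
        # Agent: free-form reply using email tools (write_email, check_calendar, ...).
        return tool_calling_agent(email, llm, tools)
    return decision  # "ignore" or "notify" ends the workflow here
```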

Agent evals - Unit tests (Pytest) for the triage decision + tool calls (testing structured outputs with a heuristic eval) and LLM-as-judge to evaluate email responses (sketch below).
Notebook.
Slides.
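
Those two eval styles in miniature, reusing the placeholder helpers from the sketch above (SPAM_EMAIL, MEETING_REQUEST, llm, judge_llm, tools are stand-ins):

```python
def test_triage_ignores_spam():
    # Heuristic eval: assert directly on the structured triage output.
    assert triage(SPAM_EMAIL, llm) == "ignore"

def test_reply_quality_llm_as_judge():
    # LLM-as-judge eval: a second model grades the free-form reply.
    reply = handle_email(MEETING_REQUEST, llm, tool_calling_agent, tools)
    verdict = judge_llm("Does this reply politely propose a meeting time? "
                        f"Answer yes or no.\n\n{reply}")
    assert verdict.strip().lower().startswith("yes")
```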

Human-in-the-loop - Add a human in the loop for approval/editing of specific tool calls.
Notebook.

Memory - Add memory, so the agent learns email response preferences from human feedback.
Notebook.

The agent can be hooked into Gmail by swapping out the tools used. The components are also general and can be used with various tools / MCP servers.
Qwen introduced Parallel Scaling Law for Language Models

"We introduce the third and more inference-efficient scaling paradigm: increasing the model’s parallel computation during both training and inference time."

"We draw inspiration from classifier-free guidance (CFG)".

"In this paper, we hypothesize that the effectiveness of CFG lies in its double computation."

"We propose a proof-of-concept scaling approach called parallel scaling (PARSCALE) to validate this hypothesis on language models. "

"parallelizing into P streams equates to scaling the model parameters by O(log P)".

"for a 1.6B model, when scaling to P = 8 using PARSCALE, it uses 22× less memory increase and 6× less latency increase compared to parameter scaling that achieves the same model capacity".

GitHub.
OpenAI introduced Codex, an AI agent.

It is a software engineering agent that runs in the cloud and does tasks for you, like writing a new feature or fixing a bug.

You can run many tasks in parallel. It is starting to roll out today to ChatGPT Pro, Enterprise, and Team users.