All about AI, Web 3.0, BCI – Telegram
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
The UK tax authority has announced that, starting January 1, 2026, crypto-asset companies operating in the UK must comprehensively report user and transaction data, including each user's identity, address, and tax identification number and the details of every transaction, in compliance with the global Crypto-Asset Reporting Framework (CARF). The goal is to combat tax evasion and improve transparency.

Violators will face a maximum fine of £300 per user.
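A per-user report under such rules would carry fields like the ones below. This is a hypothetical record layout for illustration only; actual CARF filings use the OECD's official XML schema, and every value here is made up.

```python
import json

# Hypothetical CARF-style report record (illustrative shape, not the
# real OECD schema). Field names mirror the data the post lists:
# identity, address, tax identification number, and per-transaction details.
report = {
    "reporting_period": "2026",
    "user": {
        "name": "Jane Doe",
        "address": "1 Example Street, London",
        "tax_identification_number": "AB123456C",
    },
    "transactions": [
        {"date": "2026-03-01", "asset": "BTC", "type": "disposal",
         "amount": "0.50", "fiat_value_gbp": "20000.00"},
    ],
}
print(json.dumps(report, indent=2))
```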
Adobe introduced HUMOTO, a 4D dataset for human-object interaction, built with a combination of wearable motion capture, SOTA 6D pose estimation vision models, LLMs, and professional refinement by multiple animation studios.

HUMOTO features:
1. Over 700 diverse daily activities
2. Interactions with 60+ objects and 70+ articulated parts
3. Fine-grained text annotations
4. Detailed hand and finger movements
Microsoft introduced Magentic-UI — an experimental human-centered web agent

It automates your web tasks while keeping you in control, through co-planning, co-tasking, action guards, and plan learning.

Fully open-source.
Google is launching its own coding agent, Jules, at I/O today

It lets you make changes to your GitHub repos with English prompts in a VM using Gemini 2.5 Pro. It's Google’s version of Devin.

Here's the leaked official repo of prompts it can handle.
ByteDance presents the Pre-trained Model Averaging strategy, a novel framework for model merging during LLM pre-training.

The authors found that merging checkpoints trained with constant learning rates not only yields significant performance improvements but also enables accurate prediction of annealing behavior.

Code might be posted there
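Checkpoint merging of this kind can be sketched in plain Python as a uniform average of parameter values across saved checkpoints. This is a minimal illustration of the idea, not ByteDance's actual implementation; the paper's weighting scheme may differ.

```python
def merge_checkpoints(checkpoints):
    """Average parameter values across training checkpoints.

    `checkpoints` is a list of dicts mapping parameter names to lists
    of floats (a toy stand-in for real weight tensors).
    """
    if not checkpoints:
        raise ValueError("need at least one checkpoint")
    n = len(checkpoints)
    merged = {}
    for name in checkpoints[0]:
        # Element-wise mean over the same parameter in every checkpoint.
        values = [ckpt[name] for ckpt in checkpoints]
        merged[name] = [sum(vs) / n for vs in zip(*values)]
    return merged

# Three toy "checkpoints" saved at different training steps.
ckpts = [
    {"w": [1.0, 2.0], "b": [0.0]},
    {"w": [3.0, 4.0], "b": [3.0]},
    {"w": [5.0, 6.0], "b": [6.0]},
]
print(merge_checkpoints(ckpts))  # {'w': [3.0, 4.0], 'b': [3.0]}
```

With real models the same loop would run over `state_dict()` tensors instead of float lists.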
Anthropic prepares to launch Claude 4

The lineup will reportedly include 2 versions:

1. Claude Sonnet 4 - a faster version for everyday tasks.

2. Claude Opus 4 - a powerful model for complex problems and creative work.

Based on leaked information, these models are currently in a closed testing environment marked "not intended for production use" and subject to strict rate limitations.

Several intriguing features have been discovered in the configuration files:

"show_raw_thinking" / "show_raw_thinking_mechanism" - functionality that potentially allows users to observe the AI's thought formation process.

The models appear to operate in an environment based on a popular web framework.

A sophisticated "feature gates" system is being employed to precisely control which capabilities are available to different users.

Configuration includes specific parameters for various interaction methods and model performance metrics.

The technical JSON contains numerous rule sets and session parameters suggesting enhanced personalization capabilities.
Google DeepMind introduced Gemma 3n, a model that runs on as little as 2GB of RAM

It shares the same architecture as Gemini Nano, and is engineered for incredible performance. Researchers added audio understanding, so now it’s multimodal, fast and lean, and runs on-device (no cloud connection required!).
Coming soon: AI avatars in Google Vids

Just write a script and choose an avatar to deliver your message. It’s a fast, consistent way to create polished video content, for onboarding, announcements, product explainers, and more.
Agents at home. Mistral released Devstral, a SOTA open model designed specifically for coding agents, developed with All Hands AI.
P-1 AI shared their first paper: "On the Evaluation of Engineering AGI"

A paradigm shift has occurred in the AI field over the past year: the focus has moved from algorithms to ever more complex and diverse evals and RL environments.

Together with domain experts, the researchers developed in-house an "equivalent" of SWE-Bench, but for design engineers working at the world's industrial OEMs.

They informally call it the "Archie IQ" eval.

They developed a rich taxonomy of evaluation questions, spanning methodological knowledge to real-world design problems, and test Archie on very complex design workflows that encompass design synthesis, evaluation, and more.
Apple introduced EgoDex, the largest and most diverse dataset of dexterous human manipulation to date: 829 hours of egocentric video plus paired 3D hand poses across 194 tasks.

Unlike teleoperation, egocentric video is passively scalable - like text and images on the Internet.

Researchers use Apple Vision Pro to collect video + precise pose annotations (unlike Ego4D, which lacks native pose data). This unlocks 5x the scale of existing large datasets like DROID.

They also propose new benchmarks and train imitation learning policies for dexterous trajectory prediction. Below are 30 Hz wrist and fingertip trajectories on the test set, where blue = ground truth, red = model predictions, and points get lighter up to 2 seconds into the future.

The full dataset is now publicly available to the community, access details are in the paper. Sample code for data loading is coming soon.
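Until the official sample code lands, the trajectory-prediction setup described above (30 Hz sequences, horizons up to 2 seconds) can be sketched as simple windowing over a pose sequence. This is a hypothetical illustration of the data layout, not Apple's loader.

```python
def make_prediction_pairs(poses, fps=30, horizon_s=2.0, context=1):
    """Split a pose sequence into (context, future) training pairs.

    `poses` is a list of per-frame poses (floats here for brevity; in
    EgoDex these would be 3D wrist/fingertip positions). At 30 Hz, a
    2-second horizon means predicting 60 future frames per example.
    """
    horizon = int(fps * horizon_s)
    pairs = []
    for t in range(context, len(poses) - horizon + 1):
        past = poses[t - context:t]      # frames the policy observes
        future = poses[t:t + horizon]    # frames it must predict
        pairs.append((past, future))
    return pairs

# A toy 90-frame (3-second) clip yields 30 pairs, each predicting
# 60 future frames (2 seconds at 30 Hz).
pairs = make_prediction_pairs([float(i) for i in range(90)])
print(len(pairs), len(pairs[0][1]))
```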
Elon Musk's xAI announced Live Search in the API

The new beta (free for a limited time) feature allows apps leveraging Grok models to search real-time info from X and the internet, including news.

Here's how easy it is to try out Grok 3's new live search:

1/ Grab a key from xAI
2/ Remix our template
3/ Add your API key to Secrets
4/ Click Run and start chatting with Grok.

Since it's built with Agent, you can remix and keep editing with Agent.

Here's the template to get started.
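The steps above boil down to one chat-completions request with a search flag. Below is a minimal sketch of the request payload; the `search_parameters` field and its schema are based on xAI's live-search beta docs and should be treated as assumptions that may change.

```python
import json

# Hypothetical payload for xAI's OpenAI-style chat completions endpoint
# (https://api.x.ai/v1/chat/completions). `search_parameters` is the
# beta field that turns on live search over X, the web, and news.
payload = {
    "model": "grok-3",
    "messages": [
        {"role": "user", "content": "What happened in AI news today?"}
    ],
    "search_parameters": {
        "mode": "auto",  # let the model decide when to search
        "sources": [{"type": "x"}, {"type": "web"}, {"type": "news"}],
    },
}

# The request itself would be a POST with the API key from step 1:
#   headers = {"Authorization": f"Bearer {XAI_API_KEY}"}
#   requests.post("https://api.x.ai/v1/chat/completions",
#                 headers=headers, json=payload)
print(json.dumps(payload, indent=2))
```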
Tencent presented Hunyuan-TurboS

- Hybrid Transformer-Mamba MoE (56B active params) trained on 16T tokens
- Dynamically switches between rapid responses and deep "thinking" modes
- Ranked in the overall top 7 on the LMSYS Chatbot Arena
VanEck will launch a private digital assets fund in June 2025 focused on the Avalanche ecosystem.

The fund will invest in projects with long-term token utility around the TGE stage across sectors such as gaming, financial services, payments, and AI, while allocating idle capital to Avalanche-native RWA products to maintain onchain liquidity.
G42 and OpenAI announced Stargate UAE

#Stargate UAE: a next-generation 1 GW AI compute cluster, to be built by G42 and operated by OpenAI and Oracle.

The collaboration will also include Cisco and SoftBank Group. NVIDIA will supply the latest Blackwell GB300 systems. This will be at the heart of the 5 GW AI campus announced last week.
Researchers introduced MedBrowseComp, a challenging deep research benchmark for LLM agents in medicine

MedBrowseComp is the first benchmark that tests the ability of agents to retrieve & synthesize multi-hop medical facts from oncology knowledge bases.
Claude 4 is here, and it’s Anthropic’s vision for the future of agents
More details about Claude 4:

- Both models are hybrid models
- Opus 4 is great at understanding codebases and “the right choice” for agentic workflows
- Sonnet 4 excels at everyday tasks, and is your “daily go-to”

Coding agents are a huge theme here at the event and clearly a major focus for what’s coming next.

- Claude 4 has significantly greater agentic capabilities
- A new code execution tool
- Claude Code coming to VSCode and JetBrains
- Can now run Claude Code in GitHub

Some more details on Claude 4 Opus:

- Matches or beats the best models in the world
- SOTA for coding, agentic tool use, and writing
- Memory capabilities across sessions
- Extended thinking mode for complex problem-solving
- 200K context window with 32K output tokens
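For reference, those limits map onto an Anthropic Messages API request roughly as shown below. The model ID and the extended-thinking field are assumptions based on Anthropic's public API docs, not something confirmed by this post; verify before use.

```python
# Sketch of a Messages API request exercising the limits above:
# up to 32K output tokens, plus the extended thinking mode.
request = {
    "model": "claude-opus-4-20250514",  # assumed model ID
    "max_tokens": 32000,                # output cap from the post
    "thinking": {                       # extended thinking (assumed schema)
        "type": "enabled",
        "budget_tokens": 16000,
    },
    "messages": [
        {"role": "user", "content": "Plan a refactor of this codebase."}
    ],
}

# With the official SDK this would be sent as:
#   import anthropic
#   client = anthropic.Anthropic()
#   response = client.messages.create(**request)
print(request["model"], request["max_tokens"])
```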

Claude Code:

- Now generally available
- Integrates with VSCode and JetBrains IDEs
- You can now see changes live, inline in your editor
- A new Claude Code SDK for more flexibility

If you want to read more about Sonnet & Opus 4, including a bunch of alignment and reward hacking findings, check out the model card.
ByteDance introduced MMaDA: Multimodal Large Diffusion Language Models

MMaDA is a novel class of multimodal diffusion foundation models designed to achieve superior performance across diverse domains such as textual reasoning, multimodal understanding, and text-to-image generation.

Surpasses LLaMA-3-7B and Qwen2-7B, SDXL and Janus, Show-o and SEED-X.

3 key innovations:
1. a unified diffusion architecture with a shared probabilistic formulation and a modality-agnostic design, eliminating the need for modality-specific components.
2. mixed long chain-of-thought (CoT) fine-tuning strategy that curates a unified CoT format across modalities.
3. UniGRPO, a unified policy-gradient-based RL algorithm specifically tailored for diffusion foundation models.

GitHub.