All about AI, Web 3.0, BCI – Telegram
3.22K subscribers
724 photos
26 videos
161 files
3.08K links
This channel is about AI, Web 3.0, and brain-computer interfaces (BCI)

owner @Aniaslanyan
Anthropic: Claude Sonnet 4 now supports 1 million tokens of context on the Anthropic API—a 5x increase.

Process over 75,000 lines of code or hundreds of documents in a single request.

Long context support is in public beta for API users with Tier 4 and custom rate limits.

Broader availability will roll out over the coming weeks. Available on Amazon Bedrock, and coming soon to Google Cloud's Vertex AI.
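As a sketch of what such a request looks like on the wire, here is a raw Messages API payload with the long-context beta header (the model id and beta flag value below are assumptions; check Anthropic's docs for the current names):

```python
import json

# Hedged sketch of a long-context Messages API request. The model id and the
# beta flag value are assumptions; verify against Anthropic's documentation.
payload = {
    "model": "claude-sonnet-4-20250514",  # assumed model id
    "max_tokens": 1024,
    "messages": [
        {"role": "user",
         "content": "<hundreds of documents here>\n\nSummarize the key points."}
    ],
}
headers = {
    "x-api-key": "YOUR_API_KEY",
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "context-1m-2025-08-07",  # assumed beta header value
}
body = json.dumps(payload)  # POST this to https://api.anthropic.com/v1/messages
```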
Microsoft introduced Dion, a new AI model optimization method that boosts scalability and performance over existing leading methods by orthonormalizing only a top-rank subset of singular vectors, enabling more efficient training of large models such as LLaMA-3 with reduced overhead.

Orthonormal updates appear to roughly double transformer training convergence, and Dion makes them tractable at the largest scales.
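The core idea can be illustrated with a truncated SVD (a toy sketch: the real Dion uses amortized power iteration and distributed-friendly updates rather than a full SVD):

```python
import numpy as np

def dion_style_update(grad, rank):
    # Toy sketch of Dion's idea: orthonormalize only the top-`rank`
    # singular directions of the update matrix by setting their singular
    # values to 1 and dropping the rest. (Illustration only; the actual
    # algorithm avoids computing a full SVD.)
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    return U[:, :rank] @ Vt[:rank, :]

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 32))        # stand-in for a weight-matrix gradient
update = dion_style_update(G, rank=8)
sv = np.linalg.svd(update, compute_uv=False)  # top 8 singular values are 1, rest 0
```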

Code.
Paper.
Matrix-Game 2.0 — the first open-source, real-time, long-sequence interactive world model

Last week, DeepMind's Genie 3 shook the AI world with real-time interactive world models.

But... it wasn't open-sourced.

Matrix-Game 2.0 is Skywork's next-gen interactive world model:

- Real-time: 25FPS generation
- Long-sequence: Minutes of continuous video
- Interactive: Move, rotate, explore
- Multi-scene: City, wild, TempleRun, GTA.

It's the foundation for:

- Game engines
- Embodied AI
- Virtual humans
- Spatial intelligence.

The Tech Stack:

- Data: 1,350 hrs of interactive videos from Unreal Engine + GTA5
- Control: Frame-level keyboard & mouse input
- Model: 1.3B autoregressive diffusion with action control
- Speed: Single GPU → 25FPS
- 3D Causal VAE for space-time compression
- Diffusion Transformer with action conditioning
- KV-Cache for infinite video generation
- DMD training to avoid error accumulation
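The KV-cache point above (reusing past keys/values so generation length isn't bounded by memory) can be sketched as a fixed rolling window over frame states; this is a schematic, not Skywork's actual implementation:

```python
from collections import deque

class RollingKVCache:
    """Schematic fixed-window KV cache for streaming video generation.

    Keeping only the last `max_frames` keys/values bounds memory, which is
    what lets an autoregressive model run indefinitely. (Hypothetical sketch;
    the real cache lives inside the diffusion transformer's attention layers.)
    """

    def __init__(self, max_frames):
        self.keys = deque(maxlen=max_frames)
        self.values = deque(maxlen=max_frames)

    def append(self, k, v):
        # Old frames fall off the left end automatically once full.
        self.keys.append(k)
        self.values.append(v)

    def context(self):
        # Attention for the next frame sees only the retained window.
        return list(self.keys), list(self.values)

cache = RollingKVCache(max_frames=3)
for frame in range(5):
    cache.append(f"k{frame}", f"v{frame}")
keys, values = cache.context()  # only the three most recent frames remain
```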
Now you can run and benchmark evolutionary coding agents on 100+ algorithm optimization tasks from algotune.io
Google is rolling out their version of memory for Gemini today. It is called 'personal context.'

If you want to disable this, toggle off Personal Context in settings.

This works for 2.5 Pro only, not Flash.

It will be interesting to see what effect Gemini's monster context window will have on the implementation.
The revenue from just the AI Labs (publicly reported figures from OpenAI and Anthropic), along with the public AI infrastructure companies, has already eclipsed all public SaaS revenue in 2024 (Nvidia's datacenter revenue drives most of the growth).

It will almost double public SaaS on a net new revenue basis this year. And these figures don't include private AI companies, which would widen the spread even further.

It’s clear that the current set of 100+ public SaaS companies is not yet seeing revenue growth in their AI offerings, and for the most part, AI demand is happening where they are not.
ByteDance & Tsinghua University unveiled ASearcher

ASearcher enables agentic search with long-horizon reasoning, trained via large-scale asynchronous RL.

Goes beyond typical turn limits for complex, knowledge-intensive tasks.

Achieves SOTA performance, with significant gains of up to +46.7% on xBench and GAIA after RL training.
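The asynchronous-RL idea, where long trajectories don't block short ones, can be sketched with asyncio (a schematic stand-in for ASearcher's actual rollout system):

```python
import asyncio

async def rollout(task_id, turns):
    # Stand-in for one agent trajectory: `turns` rounds of search/tool calls.
    for _ in range(turns):
        await asyncio.sleep(0)  # yields control, so trajectories interleave
    return task_id, turns

async def train_step(turn_budgets):
    # Trajectories finish out of lockstep; the trainer consumes each one as
    # soon as it completes instead of waiting for the longest rollout.
    pending = [rollout(i, t) for i, t in enumerate(turn_budgets)]
    finished = []
    for coro in asyncio.as_completed(pending):
        finished.append(await coro)
    return finished

# One long-horizon task (40 turns) among short ones; nothing waits on it.
results = asyncio.run(train_step([2, 40, 5]))
```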

Models & data.
Meta introduced DINOv3, a major release that raises the bar for self-supervised vision foundation models.

DINOv3 is open source. Researchers scaled model size and training data, but here's what makes it special.

What’s in DINOv3?
• 7B ViT foundation model + smaller distilled models
• Trained on 1.7B curated images with no annotations
• Gram anchoring fixes feature-map degradation when very large models are trained for too long
• High-resolution adaptation with relative spatial coords and 2D RoPE.
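A rough sketch of the Gram-anchoring idea: constrain the student's patch-similarity structure (its Gram matrix) to stay close to a healthier reference, rather than matching raw features. This is a schematic loss, not the paper's exact formulation:

```python
import numpy as np

def gram_anchor_loss(student_feats, anchor_feats):
    # feats: (num_patches, dim) patch features from a ViT.
    # Compare pairwise patch similarities (Gram matrices) instead of raw
    # features; a schematic version of DINOv3's Gram anchoring.
    def gram(f):
        f = f / np.linalg.norm(f, axis=1, keepdims=True)  # unit-normalize patches
        return f @ f.T                                    # cosine-similarity Gram matrix
    diff = gram(student_feats) - gram(anchor_feats)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
anchor = rng.standard_normal((16, 8))
loss_same = gram_anchor_loss(anchor, anchor)  # identical structure, zero loss
loss_diff = gram_anchor_loss(rng.standard_normal((16, 8)), anchor)
```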

Training+evaluation code, adapters and notebooks

Collection of pre-trained backbones in HF Transformers

Paper.
OpenAI is preparing ChatGPT to be used in its upcoming AI Browser

The new ChatGPT web app version adds a hidden option to "Use cloud browser" when enabling Agent mode.

However, interestingly, this option is enabled only if the user agent matches "ChatGPT.+Macintosh;.+ Chrome" (likely the new browser from OpenAI) - hinting at the possibility that ChatGPT in Agent mode might be able to control either your browser or the cloud browser.
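The quoted user-agent gate can be checked with a plain regex (the example UA strings below are hypothetical):

```python
import re

# Pattern quoted in the post; the cloud-browser option only appears
# when the user agent matches it.
gate = re.compile(r"ChatGPT.+Macintosh;.+ Chrome")

# Hypothetical UA strings for illustration.
browser_ua = "Mozilla/5.0 ChatGPT/1.0 (Macintosh; Intel Mac OS X 14_0) Chrome/126.0"
plain_chrome_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_0) Chrome/126.0"

enabled = gate.search(browser_ua) is not None   # matches: ChatGPT + Macintosh + Chrome
blocked = gate.search(plain_chrome_ua) is None  # ordinary Chrome doesn't match
```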

"Aura" will be able to run ChatGPT Agent natively and will likely arrive on macOS first.

Watch out for Edge to get it first.
Inworld AI released an AI runtime built to auto-scale consumer apps from prototype to millions of users, automate MLOps, and launch one-click AI experiments.

It delivers:

- Adaptive Graphs: Auto-scale from 10 to 10M users. No rework.

- Automated MLOps: Ops, telemetry, optimizations automatically.

- Live Experiments: Instant A/B tests, no code changes.
Nvidia announced Cosmos Reason 7B, an open-source VLM to enable robots to see, reason, and act in the physical world, solving multistep tasks

The company also made Isaac Sim 5.0 and Isaac Lab 2.2 generally available
DatologyAI Team introduced BeyondWeb, a synthetic data generation framework

BeyondWeb significantly extends the capabilities of traditional web-scale datasets, outperforming SOTA synthetic pretraining datasets such as Cosmopedia and Nemotron-CC's high-quality synthetic subset (Nemotron-Synth) by up to 5.1 percentage points (pp) and 2.6pp, respectively, when averaged across a suite of 14 benchmark evaluations. It delivers up to 7.7x faster training than open web data and 2.7x faster than Nemotron-Synth.

Remarkably, a 3B model trained for 180B tokens on BeyondWeb outperforms an 8B model trained for the same token budget on Cosmopedia.
BlackRock just proposed AlphaAgents for investment research

Equity portfolio management has long relied on human analysts poring over 10-Ks, earnings calls, and market data—a process that’s slow and prone to biases.

Enter AI-powered multi-agent LLMs: teams of specialized agents that collaborate and debate to synthesize market and fundamental data.

This approach can speed up research and surface insights humans might miss.

AlphaAgents also tackle cognitive biases. Loss aversion, overconfidence, and anchoring often lead to suboptimal decisions—but multi-agent AI provides an objective second opinion.

By combining reasoning, memory, and tool usage, this framework helps:

• Aggregate massive datasets
• Reduce human and AI errors
• Improve portfolio decision-making efficiency

In short, multi-agent LLMs could redefine equity research, making it faster, more objective, and more data-driven. The future of alpha hunting may just be collaborative AI.
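The "objective second opinion" can be sketched as simple aggregation over independent agent views (a toy illustration; the AlphaAgents framework adds debate, memory, and tool use on top of this):

```python
from collections import Counter

def second_opinion(agent_views):
    # Toy aggregation: majority vote across independent agents, with a
    # consensus score so low-agreement calls can be flagged for review.
    # (Illustrative only; not AlphaAgents' actual debate protocol.)
    tally = Counter(agent_views)
    decision, votes = tally.most_common(1)[0]
    return decision, votes / len(agent_views)

# Hypothetical views from fundamental, sentiment, and valuation agents.
decision, consensus = second_opinion(["buy", "hold", "buy"])
```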
Nvidia dropped a model that rivals Qwen3 8B, with data and a base model, and a not-that-bad license (could be better, to be clear)

NVIDIA Nemotron Nano v2 - a 9B hybrid SSM that is 6X faster than similarly sized models, while also being more accurate.

Along with the model, NVIDIA also released most of the data researchers used to create it, including the pretraining corpus.
DeepSeek released V3.1, with context expanded to 128K.

You are welcome to try it on the official website, the app, and the mini program. The API calling method remains unchanged.
Digital asset platform Bullish announced that its $1.15 billion IPO proceeds were fully settled in stablecoins, making it the first IPO in the United States to be completed using stablecoin funding.

The stablecoins used include USDCV, EURCV, USDG, PYUSD, RLUSD, among others.