All about AI, Web 3.0, BCI
3.22K subscribers
724 photos
26 videos
161 files
3.08K links
This channel is about AI, Web 3.0, and brain–computer interfaces (BCI)

Owner: @Aniaslanyan
The Hong Kong Stablecoins Bill has been introduced into the Legislative Council of Hong Kong, setting in motion a regulatory framework for stablecoins in Hong Kong.

Under the proposed licensing regime, any person carrying on any of the following activities must be licensed by the Hong Kong Monetary Authority (HKMA):

1. issuing fiat-referenced stablecoins (FRS) in Hong Kong in the course of business;

2. issuing, outside Hong Kong, FRS that purport to maintain a stable value with reference to the Hong Kong dollar, in the course of business; or

3. actively marketing the person's issue of FRS to the public of Hong Kong.

The Bill also seeks to provide the HKMA with the necessary supervision, investigation and enforcement powers for effective implementation of the regime.
Scientists Measure the "Depth" of Human Neurons and Explain Our Cognitive Abilities

In a new study, researchers have developed a method to quantify what makes human brain cells more computationally sophisticated than those of other mammals.

Using AI and detailed neuron modeling, they've introduced the Functional Complexity Index (FCI) - a revolutionary way to measure neuronal sophistication.

Key findings:

• Human cortical neurons are significantly more complex than rat neurons
• This complexity stems from two main factors:
- Larger dendritic surface areas with intricate branching patterns
- More sophisticated synaptic properties, especially in NMDA receptors

The study revealed that human Layer 2/3 neurons, which are more abundant in our species, show greater computational capabilities than other layers - a pattern unique to humans. This could help explain our superior cognitive abilities.

This breakthrough provides the first quantitative framework linking the microscopic properties of individual neurons to the remarkable cognitive capabilities that make us human. It opens new avenues for understanding brain evolution and potentially treating neurological disorders.
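
As an illustration only (not the authors' pipeline), one way to operationalize an FCI-like measure is to ask how large an artificial network must be to mimic a neuron's input/output behavior. The sketch below assumes recorded or simulated input/output pairs and uses scikit-learn; the synthetic data and criterion threshold are placeholders.

```python
# Toy FCI-style estimate: find the smallest MLP that mimics a neuron's
# input/output behavior to a criterion accuracy. Purely illustrative;
# the study's actual FCI definition and fitting pipeline differ.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def functional_complexity(X, y, criterion_r2=0.9):
    """X: synaptic input patterns, y: somatic responses.
    Returns the smallest hidden width that reaches the criterion R^2."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for width in (2, 4, 8, 16, 32, 64, 128):
        net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=2000,
                           random_state=0).fit(X_tr, y_tr)
        if net.score(X_te, y_te) >= criterion_r2:
            return width          # proxy for the neuron's functional complexity
    return float("inf")           # not captured by any tested network

# Synthetic stand-in data: 1000 input patterns across 50 "synapses"
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
y = np.tanh(X @ rng.normal(size=50) / np.sqrt(50))  # smooth nonlinear response
print(functional_complexity(X, y))  # a more complex neuron needs a bigger net
```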
OpenAI Announces Major Structural Changes for 2025

The company plans to transform its current for-profit arm into a Delaware Public Benefit Corporation (PBC), marking a significant evolution from its original 2015 structure.

Key Changes and Motivations:

1. The core reason behind this restructuring is the need for substantially more capital than initially anticipated. While OpenAI began in 2015 expecting that progress would mainly depend on research breakthroughs, they've since realized that developing advanced AI systems requires massive computing resources and corresponding financial investments.

2. Under the new structure, OpenAI will maintain both non-profit and for-profit elements, but with important changes:
- The for-profit entity will become a Delaware Public Benefit Corporation
- The non-profit will receive shares in the PBC at a fair market value
- This transformation aims to make the non-profit one of the best-resourced in history

3. The new PBC structure will allow OpenAI to raise capital with more conventional terms, similar to other major players in the AI space. This is crucial as the company faces competition from well-funded competitors investing hundreds of billions in AI development.

Progress and Impact:

OpenAI has come a long way from its initial research lab status. The company now serves over 300 million weekly ChatGPT users and has made significant strides in AI development, including recent breakthroughs with their o-series models showing new reasoning capabilities.

Looking Forward:
The company views this restructuring as essential for advancing its mission of ensuring artificial general intelligence (AGI) benefits all of humanity. The PBC will handle operations and business aspects, while the non-profit arm will focus on charitable initiatives in sectors like healthcare, education, and science.
OpenAI and Microsoft have revealed their true understanding of AGI, and it's measured not in technological achievements but in dollars.

For a long time, the definition of AGI remained fuzzy and subjective.

OpenAI publicly described it as "automated systems that outperform humans at most economically valuable work." However, thanks to leaked documents, we now have a much more specific definition.

For OpenAI and Microsoft, achieving AGI has a clear financial criterion - the ability of AI systems to generate $100 billion in profits.

This is particularly significant given their partnership terms: once OpenAI reaches this milestone, the company can terminate its collaboration with Microsoft, and the tech giant will lose access to OpenAI's new developments.

This story perfectly illustrates how lofty ideals of creating technology "for the benefit of humanity" have transformed into purely commercial metrics.

AGI has evolved from a philosophical concept into a business indicator, and the question of its achievement has been reduced to a number on a bank account.

As stated in the leaked documents: "For OpenAI and Microsoft, AGI has a very specific definition: the point when OpenAI develops AI systems that can generate at least $100 billion in profits."

This revelation not only provides clarity about the companies' priorities but also raises questions about the future of AI development and the true meaning of technological progress in our increasingly profit-driven world.
Folks, Happy New Year! Live this life in a way that lets you feel happiness in every moment, enjoying everything you have. We wish you success in all your endeavors and projects🦄
Anthropic's Bold Vision: Building the HTTP of AI with Model Context Protocol

Anthropic published a near-term development roadmap for the Model Context Protocol (MCP).

In a strategic move that could reshape the AI landscape, Anthropic has revealed its ambitious plans for the MCP - potentially laying the groundwork for how we'll interact with AI in the years to come.

Just as HTTP revolutionized the web by standardizing how we access and share information, MCP aims to become the universal language for AI interactions.

Anthropic's H1 2025 roadmap reveals a vision that extends far beyond developing individual AI models like Claude. Instead, they're architecting the fundamental infrastructure that could power the next generation of AI interactions.

Here's what makes this approach revolutionary:

1. Building an Open Ecosystem
- Development of an open protocol for standardized AI model interactions
- Inviting other AI providers to shape MCP as an industry standard
- Focus on community-led development and shared governance

2. Enabling Decentralization
- Support for remote MCP connections
- Secure cross-system AI interactions
- Infrastructure for distributed AI systems

3. Scaling for the Future
- Advanced support for hierarchical agent systems
- Preparation for multimodal interactions (text, audio, video)
- Standardized packaging and distribution mechanisms

4. Democratizing Access
- Simplified installation and usage processes
- Creation of a universal server registry
- Open community participation in protocol development

The HTTP Parallel

The comparison to HTTP is particularly apt. Just as HTTP provided the foundational protocol that enabled the modern web to flourish, MCP could serve as the standard protocol for AI interactions. This standardization could:
- Enable seamless communication between different AI systems
- Create a more accessible and interoperable AI ecosystem
- Foster innovation through standardized interfaces
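
To make the "standardized interfaces" point concrete, here is a minimal sketch of an MCP tool server, assuming the official Python SDK's FastMCP helper (the `mcp` package); treat it as a sketch, not canonical usage. Any MCP-capable client could then discover and call this tool over the same protocol, which is exactly the HTTP-style decoupling described above.

```python
# Minimal MCP server sketch: one tool, served over stdio.
# Assumes the official Python SDK (pip install mcp) and its FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a text snippet."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; clients speak the shared protocol
```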

Strategic Implications

This move positions Anthropic not just as an AI company, but as a potential architect of the fundamental infrastructure that could power the future of AI interactions. By focusing on building this foundation, they're taking a long-term view that could significantly influence how AI systems are developed, deployed, and integrated in the years to come.

The success of this initiative could establish MCP as the de facto standard for AI interactions, similar to how HTTP became the backbone of web communications. This would not only benefit the broader AI community but could also cement Anthropic's position as a key player in shaping the future of AI.
Google released a white paper on AI agents

It covers the basics of LLM agents and includes a quick LangChain implementation.
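
The core pattern such primers describe (a model, a set of tools, and an orchestration loop) fits in a few lines of plain Python. Everything below is a hypothetical sketch, not code from the white paper: `call_llm` is a stand-in for any chat-model API, here faked so the loop actually runs.

```python
import json

# Toy tool registry; eval() is fine for a sketch, never for production.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical stand-in for a real chat-model call. This fake policy
    always uses the calculator once, then answers with the tool result."""
    if messages[-1]["role"] == "tool":
        return {"answer": json.loads(messages[-1]["content"])["result"]}
    return {"tool": "calculator", "input": "2 + 2"}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_llm(messages)           # model decides: tool or answer
        if "answer" in action:
            return action["answer"]
        result = TOOLS[action["tool"]](action["input"])  # execute the tool
        messages.append({"role": "tool",
                         "content": json.dumps({"result": result})})
    return "step limit reached"

print(run_agent("What is 2 + 2?"))            # -> 4
```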
Wow! A really big humanoid robotics dataset just got open-sourced: AgiBot World is the first large-scale robotic learning dataset designed to advance multi-purpose humanoid policies

With 1M+ trajectories from 100 robots, AgiBot World spans 100+ real-world scenarios across five target domains, tackling fine-grained manipulation, tool usage, and multi-robot collaboration.

Cutting-edge multimodal hardware features visual tactile sensors, durable 6-DoF dexterous hands, and mobile dual-arm robots with whole-body control, supporting research in imitation learning, multi-agent collaboration, and more.

GitHub.
HuggingFace

Dataset Highlights:
- Cutting-edge sensor and hardware design
- Wide spectrum of scenario coverage
- Quality assurance with human-in-the-loop
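
If you want to poke at the data, a download sketch using huggingface_hub follows; the repo id is an assumption, so check the HuggingFace link above for the canonical name.

```python
# Hypothetical download sketch; verify the exact dataset repo id first.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="agibot-world/AgiBotWorld-Alpha",  # assumed id, check the HF page
    repo_type="dataset",
    allow_patterns=["*.json"],  # fetch metadata first; trajectories are huge
)
print(path)
```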
Nvidia introduced Cosmos, an open-source, open-weight Video World Model

It's trained on 20M hours of video, with model sizes ranging from 4B to 14B parameters. Cosmos offers two flavors: diffusion (continuous tokens) and autoregressive (discrete tokens); and two generation modes: text->video and text+video->video.

Physical AI has a big data problem. Synthetic data to the rescue.

Nvidia applies Cosmos to large-scale synthetic data generation for robotics and autonomous driving, and now you can too.
This paper from DeepMind is mind-blowing:
“Our findings reveal that models fine-tuned on weaker & cheaper generated data consistently outperform those trained on stronger & more-expensive generated data across multiple benchmarks…”

That more low-quality data beats less high-quality data is genuinely surprising.
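
Some intuition for why: the comparison is compute-matched. If the stronger model costs roughly 3x as much per sampled solution (the paper's Gemma setup pairs a 27B with a 9B model), the same budget buys roughly 3x as many solutions from the weaker one, e.g. 30K samples instead of 10K. The paper attributes the win to the resulting higher coverage (more problems solved at least once) and diversity, which outweigh the weaker model's higher false-positive rate.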
SynthLabs + Stanford presents:
Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought


Proposes Meta-CoT, which extends chain-of-thought (CoT) prompting by explicitly modeling the underlying reasoning required to arrive at a particular CoT.
Agent Laboratory: Using LLM Agents as Research Assistants

It enables you to focus on ideation and critical thinking while automating repetitive, time-intensive tasks like coding and documentation.
Microsoft presents rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking

On the MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%.

On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among the top 20% of the brightest high school math students.
Hashdex_2025_Crypto_Investment_Outlook_1736438997.pdf
14.3 MB
Crypto outlook 2025: Infrastructure boom and Institutional adoption based on Hashdex Research

Market Dynamics:

• Bitcoin ETFs hit $24B inflows in first 10 months
• First major pension funds deploy capital ($164M from Wisconsin State Fund)
• Total crypto market cap projected to reach $10T in 2025, up from current $3T

Infrastructure Metrics:

• Ethereum L2 costs ↓99% post-Dencun upgrade
• Network throughput: 50x increase since 2020
• Stablecoin volume: $450B monthly transactions
• DeFi leaders maintain growth: Uniswap ($655B YTD volume)

Key Tech Development Areas:

1. AI-Blockchain Integration
- Focus: Decentralized computing networks
- Target: Training data verification
- Applications: Autonomous AI agents using blockchain for transactions

2. Smart Contract Platforms
- Ethereum: Layer-2 scaling solutions dominate
- Solana: Emerging as serious competitor
- Key metric: Transaction costs <$0.01 on L2s

3. DeFi Infrastructure
- Major protocols show resilience
- Institutional adoption accelerating
- Regulatory clarity expected post-2024 election

Market Catalysts:

Macro:
• Fed rate cuts projected: -1.2% in 2025
• US inflation target: 2.2%
• Global de-dollarization trend accelerates

Regulatory:
• 260+ pro-crypto Congress members
• Clear framework expected for stablecoins
• Potential expansion of crypto ETF products

Risk Factors:
- Geopolitical tensions impact market stability
- Traditional market correlation remains high
- Technical challenges in network scaling
- Regulatory uncertainty in key markets

Infrastructure improvements and institutional adoption are creating the foundation for the next growth phase. The focus is shifting from speculation to practical applications, particularly in finance and AI integration.
Stanford launched a free Google Deep Research clone called STORM.

It uses GPT-4o + Bing Search under the hood to generate long, cited reports from many websites in ~3 minutes.

It's also completely open-source and free to use.

GitHub.
2025_Top_Strategic_Technology_Trends_1736775138.pdf
2.4 MB
Gartner has released its Top Strategic Technology Trends for 2025.

Gartner analysts organized them across three themes:

1. AI imperatives and risks drive organizations to protect themselves.

2. New frontiers of #computing prompt organizations to reconsider how they compute.

3. Human-machine synergy brings together the physical and digital worlds.

The Top Technology Trends for 2025 are:

- Agentic AI

- Post-quantum #Cryptography

- Spatial Computing

- #AIGovernance Platforms

- Ambient Invisible Intelligence

- Polyfunctional #Robots

- Disinformation #Security

- Energy-Efficient Computing

- Neurological Enhancement

- Hybrid Computing
Multiagent Finetuning. Researchers introduced multiagent finetuning, a novel self-improvement approach for language models.

Unlike traditional single-agent finetuning methods that often plateau after a few iterations, this approach uses a society of language models derived from the same base model but independently specialized through multiagent interactions.

The method assigns some models as generation agents that produce initial responses, and others as critic agents that evaluate and refine those responses.

Through this specialization, the system maintains diverse reasoning chains and consistently improves over multiple rounds of fine-tuning.

They demonstrate significant performance gains across various reasoning tasks using both open-source models (Phi-3, Mistral, LLaMA-3) and proprietary models (GPT-3.5).

GitHub.
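
A schematic sketch of the loop as described above: generators draft, critics refine, a consensus answer selects role-specific training data, and each agent is finetuned on its own data. Every function here is a toy stand-in rather than the paper's code; the toy critic even cheats by knowing the right answer, which a real critic model does not.

```python
# Multiagent finetuning, schematically. Each agent starts as a copy of the
# same base model and specializes through role-specific finetuning data.
import random

def make_base_model():
    return {"skill": 0.5}                      # toy stand-in for model weights

def sample(model, task):                       # generation agent drafts an answer
    return task if random.random() < model["skill"] else -task

def critique(model, task, draft):
    # Toy critic: corrects the draft toward the answer with prob. = skill.
    # (It cheats by knowing `task` is the answer; real critics evaluate.)
    return task if random.random() < model["skill"] else draft

def finetune(model, data):                     # "training": nudge skill upward
    return {"skill": min(1.0, model["skill"] + 0.02 * len(data))}

def multiagent_finetune(tasks, n_gen=3, n_crit=2, rounds=3):
    gens = [make_base_model() for _ in range(n_gen)]
    crits = [make_base_model() for _ in range(n_crit)]
    for _ in range(rounds):
        gen_data = [[] for _ in gens]
        crit_data = [[] for _ in crits]
        for task in tasks:
            drafts = [sample(g, task) for g in gens]       # initial responses
            refined = []
            for ci, c in enumerate(crits):
                revs = [critique(c, task, d) for d in drafts]
                crit_data[ci] += [(task, r) for r in revs]  # critic's own data
                refined += revs
            consensus = max(set(refined), key=refined.count)  # majority vote
            for gi, d in enumerate(drafts):
                if d == consensus:             # reinforce only consistent drafts
                    gen_data[gi].append((task, d))
        gens = [finetune(g, d) for g, d in zip(gens, gen_data)]
        crits = [finetune(c, d) for c, d in zip(crits, crit_data)]
    return gens, crits

print(multiagent_finetune(tasks=[1, 2, 3]))
```

Keeping several independently updated copies is what preserves diverse reasoning chains; a single self-trained model tends to collapse onto its own outputs.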
Google presents the successor to the Transformer architecture:
"TITANS: Learning to Memorize at Test Time"


Titans: a new architecture with attention and a meta in-context memory that learns how to memorize at test time. Titans are more effective than Transformers and modern linear RNNs, and can effectively scale to context windows larger than 2M tokens, with better performance than ultra-large models (e.g., GPT-4, Llama3-70B).
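
A toy sketch of the "learning to memorize at test time" idea: a linear associative memory updated by one gradient step on a key-to-value reconstruction loss as each input arrives, so surprising inputs change the memory the most. This is illustrative numpy, not the paper's architecture; the real Titans memory is a neural module with learned gating, momentum, and forgetting.

```python
# Test-time memorization, toy version: memory M adapts at inference time
# via gradient descent on 0.5 * ||M @ k - v||^2; no training loop involved.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # feature dimension
M = np.zeros((d, d))                     # memory: linear map from keys to values

def memorize(M, k, v, lr=1.0):
    err = M @ k - v                      # prediction error = "surprise" signal
    M = M - lr * np.outer(err, k)        # gradient step on the recall loss
    return M, float(np.linalg.norm(err))

def recall(M, k):
    return M @ k

for _ in range(100):                     # a stream of incoming (key, value) pairs
    k = rng.normal(size=d); k /= np.linalg.norm(k)  # unit keys: one-step recall
    v = rng.normal(size=d)
    M, surprise = memorize(M, k, v)      # the memory adapts on the fly

print(np.allclose(recall(M, k), v))      # latest pair is recalled exactly: True
```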