TSMC is so strong that South Korean business and academic leaders have proposed creating a state-backed "KSMC" (Korea Semiconductor Manufacturing Company) to compete, media report. They estimate that a ₩20 trillion (US$13.9 billion) investment could pay off with ₩300 trillion ($208.7 billion) in economic benefits by 2045, as KSMC would help incubate local chip design houses the same way TSMC has helped grow some 250 chip designers in Taiwan.
The Korea Bizwire
Korea Considers Establishing ‘KSMC’ to Bolster Semiconductor Ecosystem
SEOUL, Dec. 19 (Korea Bizwire) – South Korean industry and academia have proposed creating a “KSMC” (Korea Semiconductor Manufacturing Company), modeled after Taiwan’s TSMC, to address challenges facing the nation’s semiconductor industry.
DeepSeek-V3-Base was just open-sourced
- 685B-parameter MoE with 256 experts, top-k = 8, and sigmoid routing (sketched below)
- Outperforms Sonnet 3.5 on the Aider benchmark
The "preview" of DeepSeek's new V3 model now ranks 1st on BigCodeBench-Hard:
Complete: 40.5%
Instruct: 28.4%
Average: 34.5%
Gemini-Exp-1206 average: 34.1%
o1-2024-12-17 (reasoning=medium) average: 32.8%
This was benchmarked against the normal DeepSeek API. Apparently it has been upgraded from V2.5 to V3 Preview.
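For readers unfamiliar with sigmoid routing, here is a minimal sketch of a top-k gate that scores experts with a sigmoid instead of a softmax over all experts, then renormalizes the selected scores. The hidden size and other details are placeholders, not taken from DeepSeek's released code.

```python
import torch
import torch.nn as nn

class SigmoidTopKGate(nn.Module):
    """Toy top-k MoE router with sigmoid scoring (hyperparameters are
    illustrative; this is not DeepSeek's implementation)."""

    def __init__(self, hidden_dim: int, n_experts: int = 256, top_k: int = 8):
        super().__init__()
        self.top_k = top_k
        self.scorer = nn.Linear(hidden_dim, n_experts, bias=False)

    def forward(self, x: torch.Tensor):
        # x: (n_tokens, hidden_dim) -> per-token expert affinities in (0, 1)
        affinities = torch.sigmoid(self.scorer(x))             # (n_tokens, n_experts)
        weights, expert_ids = affinities.topk(self.top_k, -1)  # keep the 8 best
        weights = weights / weights.sum(-1, keepdim=True)      # renormalize to sum to 1
        return expert_ids, weights  # which experts to run, and how to mix them

gate = SigmoidTopKGate(hidden_dim=1024)
expert_ids, weights = gate(torch.randn(4, 1024))  # 4 tokens -> 8 experts each
```

Because sigmoid scores do not compete across experts, each affinity is independent; the renormalization step restores a proper mixture over the selected eight.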
huggingface.co
deepseek-ai/DeepSeek-V3-Base at main
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
The Hong Kong Stablecoins Bill has been introduced into the Legislative Council of Hong Kong, setting in motion a regulatory framework for stablecoins in HK.
Under the proposed licensing regime, any person carrying on any of the following activities must be licensed by the Hong Kong Monetary Authority (HKMA):
1. issuing fiat-referenced stablecoins (FRS) in Hong Kong in the course of business;
2. issuing FRS that purport to maintain a stable value with reference to Hong Kong dollars in the course of business; or
3. actively marketing the person's issue of FRS to the public of Hong Kong.
The Bill also seeks to provide the HKMA with the necessary supervision, investigation and enforcement powers for effective implementation of the regime.
www.info.gov.hk
LegCo: Full text of the speech by the Secretary for Financial Services and the Treasury in moving the Second Reading of the Stablecoins Bill (Chinese only; translated)
The following is the full text of the speech by the Secretary for Financial Services and the Treasury, Mr Christopher Hui, in moving the Second Reading of the Stablecoins Bill at the Legislative Council meeting today (December 18):
President:
I respectfully...
Scientists Measure the "Depth" of Human Neurons and Explain Our Cognitive Abilities
In a new study, researchers have developed a method to quantify what makes human brain cells more sophisticated than those of other mammals.
Using AI and detailed neuron modeling, they've introduced the Functional Complexity Index (FCI), a quantitative measure of neuronal sophistication.
Key findings:
• Human cortical neurons are significantly more complex than rat neurons
• This complexity stems from two main factors:
- Larger dendritic surface areas with intricate branching patterns
- More sophisticated synaptic properties, especially in NMDA receptors
The study revealed that human Layer 2/3 neurons, which are more abundant in our species, show greater computational capabilities than other layers - a pattern unique to humans. This could help explain our superior cognitive abilities.
This breakthrough provides the first quantitative framework linking the microscopic properties of individual neurons to the remarkable cognitive capabilities that make us human. It opens new avenues for understanding brain evolution and potentially treating neurological disorders.
bioRxiv
What makes human cortical pyramidal neurons functionally complex
Humans exhibit unique cognitive abilities within the animal kingdom, but the neural mechanisms driving these advanced capabilities remain poorly understood. Human cortical neurons differ from those of other species, such as rodents, in both their morphological…
OpenAI Announces Major Structural Changes for 2025
The company plans to transform its current for-profit arm into a Delaware Public Benefit Corporation (PBC), marking a significant evolution from its original 2015 structure.
Key Changes and Motivations:
1. The core reason behind this restructuring is the need for substantially more capital than initially anticipated. While OpenAI began in 2015 expecting that progress would mainly depend on research breakthroughs, they've since realized that developing advanced AI systems requires massive computing resources and corresponding financial investments.
2. Under the new structure, OpenAI will maintain both non-profit and for-profit elements, but with important changes:
- The for-profit entity will become a Delaware Public Benefit Corporation
- The non-profit will receive shares in the PBC at a fair market value
- This transformation aims to make the non-profit one of the best-resourced in history
3. The new PBC structure will allow OpenAI to raise capital with more conventional terms, similar to other major players in the AI space. This is crucial as the company faces competition from well-funded competitors investing hundreds of billions in AI development.
Progress and Impact:
OpenAI has come a long way from its initial research lab status. The company now serves over 300 million weekly ChatGPT users and has made significant strides in AI development, including recent breakthroughs with their o-series models showing new reasoning capabilities.
Looking Forward:
The company views this restructuring as essential for advancing its mission of ensuring artificial general intelligence (AGI) benefits all of humanity. The PBC will handle operations and business aspects, while the non-profit arm will focus on charitable initiatives in sectors like healthcare, education, and science.
OpenAI
Why OpenAI’s structure must evolve to advance our mission
A stronger non-profit supported by the for-profit’s success.
OpenAI and Microsoft have revealed their true understanding of AGI, and it's measured not in technological achievements but in dollars.
For a long time, the definition of AGI remained fuzzy and subjective.
OpenAI publicly described it as "highly autonomous systems that outperform humans at most economically valuable work." However, thanks to leaked documents, we now have a much more specific definition.
For OpenAI and Microsoft, achieving AGI has a clear financial criterion - the ability of AI systems to generate $100 billion in profits.
This is particularly significant given their partnership terms: once OpenAI reaches this milestone, the company can terminate its collaboration with Microsoft, and the tech giant will lose access to OpenAI's new developments.
This story perfectly illustrates how lofty ideals of creating technology "for the benefit of humanity" have transformed into purely commercial metrics.
AGI has evolved from a philosophical concept into a business indicator, and the question of its achievement has been reduced to a number on a bank account.
As stated in the leaked documents: "For OpenAI and Microsoft, AGI has a very specific definition: the point when OpenAI develops AI systems that can generate at least $100 billion in profits."
This revelation not only provides clarity about the companies' priorities but also raises questions about the future of AI development and the true meaning of technological progress in our increasingly profit-driven world.
The Information
Microsoft and OpenAI’s Secret AGI Definition
Finally, a verifiable, numbers-based description of artificial general intelligence has arrived! Whether AGI has or hasn't been "achieved" by AI developers has been a hotly debated topic due to its fuzzy and subjective definition. OpenAI has publicly described…
Tencent and Mindray have released the "Qiyuan Critical Care Model" for ICUs
This is the world's first large model for ICUs.
The trillion-parameter model was trained on 7T tokens, using data covering 2.85 million medical entities and, reportedly, 98.5% of medical knowledge.
It supports retrieval-augmented generation (RAG).
IThome
The world's first large model for critical care: Tencent and Mindray release the "Qiyuan Critical Care Model", already in use in ICU wards - IT之家
Its workflow has roughly two steps: first, it integrates the patient's vast data to generate a digital profile; then it applies critical-care reasoning to analyze that profile in depth, predict how the condition will develop, and propose interventions.
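Neither article says how the model's RAG support is wired up, so here is a purely hypothetical sketch of the two-step workflow described above (condense patient data into a profile, then ground the analysis in retrieved knowledge); every function and name below is invented for illustration.

```python
# Hypothetical illustration only; none of this reflects the Qiyuan system.

def build_patient_profile(records: list[str]) -> str:
    """Step 1: condense heterogeneous patient data into a digital profile."""
    return " | ".join(records)  # a real system would summarize, not concatenate

def retrieve(query: str, knowledge: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank knowledge snippets by word overlap with the query."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(knowledge, key=score, reverse=True)[:k]

def build_prompt(profile: str, knowledge: list[str]) -> str:
    """Step 2: ground the model's analysis in retrieved medical knowledge."""
    context = "\n".join(retrieve(profile, knowledge))
    return (f"Context:\n{context}\n\nPatient profile:\n{profile}\n\n"
            "Predict progression and suggest interventions.")

prompt = build_prompt(
    build_patient_profile(["HR 120", "lactate rising", "on vasopressors"]),
    ["Sepsis: rising lactate suggests hypoperfusion",
     "Vasopressor titration guidance"],
)
# `prompt` would then be sent to the critical-care LLM.
```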
Folks, Happy New Year! Live this life in a way that lets you feel happiness in every moment, enjoying everything you have. We wish you success in all your endeavors and projects 🦄
Anthropic's Bold Vision: Building the HTTP of AI with Model Context Protocol
Anthropic has published a near-term development roadmap for the Model Context Protocol (MCP).
In a strategic move that could reshape the AI landscape, Anthropic has revealed its ambitious plans for the MCP - potentially laying the groundwork for how we'll interact with AI in the years to come.
Just as HTTP revolutionized the web by standardizing how we access and share information, MCP aims to become the universal language for AI interactions.
Anthropic's H1 2025 roadmap reveals a vision that extends far beyond developing individual AI models like Claude. Instead, they're architecting the fundamental infrastructure that could power the next generation of AI interactions.
Here's what makes this approach revolutionary:
1. Building an Open Ecosystem
- Development of an open protocol for standardized AI model interactions
- Inviting other AI providers to shape MCP as an industry standard
- Focus on community-led development and shared governance
2. Enabling Decentralization
- Support for remote MCP connections
- Secure cross-system AI interactions
- Infrastructure for distributed AI systems
3. Scaling for the Future
- Advanced support for hierarchical agent systems
- Preparation for multimodal interactions (text, audio, video)
- Standardized packaging and distribution mechanisms
4. Democratizing Access
- Simplified installation and usage processes
- Creation of a universal server registry
- Open community participation in protocol development
The HTTP Parallel
The comparison to HTTP is particularly apt. Just as HTTP provided the foundational protocol that enabled the modern web to flourish, MCP could serve as the standard protocol for AI interactions. This standardization could:
- Enable seamless communication between different AI systems
- Create a more accessible and interoperable AI ecosystem
- Foster innovation through standardized interfaces
Strategic Implications
This move positions Anthropic not just as an AI company, but as a potential architect of the fundamental infrastructure that could power the future of AI interactions. By focusing on building this foundation, they're taking a long-term view that could significantly influence how AI systems are developed, deployed, and integrated in the years to come.
The success of this initiative could establish MCP as the de facto standard for AI interactions, similar to how HTTP became the backbone of web communications. This would not only benefit the broader AI community but could also cement Anthropic's position as a key player in shaping the future of AI.
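Concretely, MCP is specified as JSON-RPC 2.0: a client opens a session with an initialize handshake and can then issue requests such as tools/list. A minimal sketch of those two messages (field values are illustrative; check the spec for the current protocol revision):

```python
import json

# MCP requests are JSON-RPC 2.0 messages. Two from the start of a session:
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # illustrative spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

print(json.dumps(initialize, indent=2))
print(json.dumps(list_tools, indent=2))
```

The roadmap items above, such as remote connections and the server registry, would build on top of this message layer rather than change it.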
Model Context Protocol
Roadmap - Model Context Protocol
Our plans for evolving Model Context Protocol
New research from Meta FAIR: Memory Layers at Scale. This work takes memory layers beyond proof of concept, demonstrating their utility at contemporary scale.
GitHub.
arXiv.org
Memory Layers at Scale
Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense...
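As a rough illustration of the mechanism in the abstract (a trainable key-value lookup whose parameter count grows without a matching FLOPs increase), here is a toy memory layer. The paper uses product-key lookup so the top-k search itself stays cheap; this naive version scores all keys and only shows the shape of the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMemoryLayer(nn.Module):
    """Toy sparse key-value memory: only top-k of n_keys value rows are
    touched per token, so parameters grow with n_keys while per-token
    compute on the values stays flat. (The paper additionally uses
    product keys so the key search itself avoids the O(n_keys) scan
    done naively below.)"""

    def __init__(self, dim: int, n_keys: int = 65536, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_keys, dim) * dim ** -0.5)
        self.values = nn.Embedding(n_keys, dim)  # sparse lookup, not a matmul
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, dim)
        scores = x @ self.keys.T                  # naive key search
        w, idx = scores.topk(self.top_k, dim=-1)  # (n_tokens, top_k)
        w = F.softmax(w, dim=-1)
        # gather only the selected value rows and mix them
        return (w.unsqueeze(-1) * self.values(idx)).sum(dim=1)

layer = ToyMemoryLayer(dim=256)
out = layer(torch.randn(8, 256))  # (8, 256)
```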
Google released a white paper on AI agents
It covers the basics of LLM agents and a quick LangChain implementation (a minimal sketch of the agent loop follows below).
Kaggle
Agents
Authors: Julia Wiesinger, Patrick Marlow and Vladimir Vuskovic
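The paper's examples use LangChain; as a library-free stand-in, here is a minimal sketch of the reason/act/observe loop it covers, with a stubbed model call standing in for any real LLM API:

```python
# Minimal library-free sketch of the agent loop: the model reasons, picks a
# tool, observes the result, and repeats. `call_llm` is a stub.

def call_llm(prompt: str) -> str:
    """Stub: a real agent would call an LLM here."""
    return "ACTION calculator: 2+2" if "ACTION" not in prompt else "FINISH 4"

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # toy tool registry

def run_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}"
    for _ in range(max_steps):
        step = call_llm(history)
        if step.startswith("FINISH"):
            return step.removeprefix("FINISH ").strip()
        tool, arg = step.removeprefix("ACTION ").split(":", 1)
        observation = TOOLS[tool.strip()](arg.strip())  # act, then observe
        history += f"\n{step}\nObservation: {observation}"
    return "gave up"

print(run_agent("What is 2+2?"))  # -> 4
```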
Wow! A truly large humanoid robotics dataset just got open-sourced: AgiBot World is the first large-scale robotic learning dataset designed to advance multi-purpose humanoid policies.
With 1M+ trajectories from 100 robots, AgiBot World spans 100+ real-world scenarios across five target domains, tackling fine-grained manipulation, tool usage, and multi-robot collaboration.
Cutting-edge multimodal hardware features visual tactile sensors, durable 6-DoF dexterous hands, and mobile dual-arm robots with whole-body control, supporting research in imitation learning, multi-agent collaboration, and more.
GitHub.
HuggingFace
Dataset Highlights:
- Cutting-edge sensor and hardware design
- Wide spectrum of scenario coverage
- Quality assurance with human-in-the-loop
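If you want to pull the data, something like the following should work through the Hugging Face hub; the repository id and file layout are assumptions on my part, so check the project's HuggingFace page for the real ones.

```python
# Assumed repo id and layout; verify against the AgiBot World HuggingFace page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="agibot-world/AgiBotWorld-Alpha",  # assumption, not verified
    repo_type="dataset",
    allow_patterns=["*.json"],  # grab metadata first; trajectories are large
)
print(local_dir)
```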
GitHub
GitHub - OpenDriveLab/AgiBot-World: [IROS 2025 Award Finalist] The Large-scale Manipulation Platform for Scalable and Intelligent…
[IROS 2025 Award Finalist] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems - OpenDriveLab/AgiBot-World
Nvidia introduced Cosmos, an open-source, open-weight Video World Model
It's trained on 20M hours of video, with model sizes ranging from 4B to 14B parameters. Cosmos comes in two flavors: diffusion (continuous tokens) and autoregressive (discrete tokens), and supports two generation modes: text->video and text+video->video.
Physical AI has a big data problem. Synthetic data to the rescue.
Nvidia applies Cosmos to large-scale synthetic data generation for robotics and autonomous driving, and now you can too.
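To make the "discrete tokens" flavor concrete, here is a generic sketch of the pattern, not NVIDIA's actual API: autoregressive world models quantize frames into codebook tokens and predict them one at a time, conditioned on text and any prompt frames.

```python
import torch

# Generic sketch of autoregressive video generation over discrete tokens;
# every constant and function here is an invented stand-in, not Cosmos code.
VOCAB = 8192            # size of the learned video-token codebook (assumed)
TOKENS_PER_FRAME = 256  # tokens the video tokenizer emits per frame (assumed)

def next_token_logits(text_tokens, video_tokens):
    """Stub for the world model: logits over the video codebook."""
    return torch.randn(VOCAB)

def generate(text_tokens, prompt_frame_tokens, n_new_frames=2):
    # text->video when prompt_frame_tokens is empty; text+video->video otherwise
    video = list(prompt_frame_tokens)
    for _ in range(n_new_frames * TOKENS_PER_FRAME):
        logits = next_token_logits(text_tokens, video)
        video.append(int(torch.multinomial(logits.softmax(-1), 1)))
    return video  # decode back to pixels with the tokenizer's decoder

tokens = generate(text_tokens=[1, 2, 3], prompt_frame_tokens=[])
```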
GitHub
GitHub - NVIDIA/Cosmos: New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos
New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos - NVIDIA/Cosmos
This paper from DeepMind is mind-blowing:
“Our findings reveal that models fine-tuned on weaker & cheaper generated data consistently outperform those trained on stronger & more-expensive generated data across multiple benchmarks…”
That more lower-quality data can beat less higher-quality data is genuinely surprising.
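One lens that makes the result less mysterious: the comparisons are compute-matched, so at a fixed FLOPs budget the cheaper model contributes several times more samples and broader coverage. A back-of-the-envelope sketch, with invented per-sample costs:

```python
# Back-of-envelope for compute-matched sampling; all costs are invented.
budget = 1e18                  # FLOPs available for data generation
cost_strong = 4e13             # FLOPs per sample from the strong model (assumed)
cost_weak = cost_strong / 3    # the weak model is ~3x cheaper (assumed ratio)

n_strong = budget / cost_strong
n_weak = budget / cost_weak
print(f"strong: {n_strong:.0f} samples, weak: {n_weak:.0f} samples")
# Even at a lower per-sample pass rate, 3x the samples can cover more unique
# problems and yield more diverse correct solutions to train on.
```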
SynthLabs + Stanford present:
Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought
Proposes Meta-CoT (Meta Chain-of-Thought), which extends CoT by explicitly modeling the underlying reasoning required to arrive at a particular CoT.
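In training-data terms, the idea is roughly that the target sequence includes the search that produces the chain of thought, not just the polished chain itself. A hypothetical rendering of one example (the tags and structure are invented for illustration, not taken from the paper):

```python
# Hypothetical Meta-CoT training example: the latent search process is made
# explicit before the final chain of thought. Tag names are invented.
example = (
    "<question>Solve x^2 - 5x + 6 = 0</question>"
    "<meta_reasoning>"
    "Try factoring. Need two numbers with product 6 and sum 5: "
    "(1, 6) fails, (2, 3) works."
    "</meta_reasoning>"
    "<cot>x^2 - 5x + 6 = (x - 2)(x - 3), so x = 2 or x = 3.</cot>"
)
```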
arXiv.org
Towards System 2 Reasoning in LLMs: Learning How to Think With...
We propose a novel framework, Meta Chain-of-Thought (Meta-CoT), which extends traditional Chain-of-Thought (CoT) by explicitly modeling the underlying reasoning required to arrive at a particular...
Agent Laboratory: Using LLM Agents as Research Assistants
Enables you to focus on ideation and critical thinking while automating repetitive and time-intensive tasks like coding and documentation
Agent Laboratory: Using LLMs as Research Assistants
by Samuel Schmidgall at JHU
Microsoft presents rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
On the MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%.
On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among the top 20% of the brightest high school math students.
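The core loop pairs a small policy model with Monte Carlo tree search over solution steps, guided by a learned process reward model. The sketch below stubs both models out and keeps only the search skeleton; it illustrates the technique, not the paper's code:

```python
import math
import random

# Simplified sketch of MCTS over reasoning steps, in the spirit of rStar-Math.
# The real system uses a small policy model and a process preference (reward)
# model, plus code-verified rollouts; both models are stubbed out here.

def propose_steps(state: str) -> list[str]:
    """Policy-model stub: candidate next reasoning steps."""
    return [state + f" step{i};" for i in range(3)]

def reward(state: str) -> float:
    """Process-reward-model stub: how promising this partial solution looks."""
    return random.random()

def uct(stats, parent, child, c):
    n, v = stats[child]
    if n == 0:
        return float("inf")  # always try unvisited children first
    return v / n + c * math.sqrt(math.log(stats[parent][0]) / n)

def search(root: str, n_iters: int = 50, c: float = 1.4) -> str:
    stats = {root: [0, 0.0]}  # node -> [visit count, total value]
    children: dict[str, list[str]] = {}
    for _ in range(n_iters):
        node, path = root, [root]
        while node in children:  # selection: walk down by UCT
            node = max(children[node], key=lambda ch: uct(stats, node, ch, c))
            path.append(node)
        children[node] = propose_steps(node)  # expansion
        for ch in children[node]:
            stats[ch] = [0, 0.0]
        value = reward(node)  # evaluation (a verified rollout in the paper)
        for n in path:  # backpropagation
            stats[n][0] += 1
            stats[n][1] += value
    return max(children[root], key=lambda ch: stats[ch][0])  # most-visited step

print(search("Q: solve x + 1 = 3."))
```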
huggingface.co
Paper page - rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
Join the discussion on this paper page
Hashdex_2025_Crypto_Investment_Outlook_1736438997.pdf
14.3 MB
Crypto outlook 2025: infrastructure boom and institutional adoption, based on Hashdex research
Market Dynamics:
• Bitcoin ETFs hit $24B inflows in first 10 months
• First major pension funds deploy capital ($164M from Wisconsin State Fund)
• Total crypto market cap projected to reach $10T in 2025, up from current $3T
Infrastructure Metrics:
• Ethereum L2 costs ↓99% post-Dencun upgrade
• Network throughput: 50x increase since 2020
• Stablecoin volume: $450B monthly transactions
• DeFi leaders maintain growth: Uniswap ($655B YTD volume)
Key Tech Development Areas:
1. AI-Blockchain Integration
- Focus: Decentralized computing networks
- Target: Training data verification
- Applications: Autonomous AI agents using blockchain for transactions
2. Smart Contract Platforms
- Ethereum: Layer-2 scaling solutions dominate
- Solana: Emerging as serious competitor
- Key metric: Transaction costs <$0.01 on L2s
3. DeFi Infrastructure
- Major protocols show resilience
- Institutional adoption accelerating
- Regulatory clarity expected post-2024 election
Market Catalysts:
Macro:
• Fed rate cuts projected: -1.2% in 2025
• US inflation target: 2.2%
• Global de-dollarization trend accelerates
Regulatory:
• 260+ pro-crypto Congress members
• Clear framework expected for stablecoins
• Potential expansion of crypto ETF products
Risk Factors:
- Geopolitical tensions impact market stability
- Traditional market correlation remains high
- Technical challenges in network scaling
- Regulatory uncertainty in key markets
Infrastructure improvements and institutional adoption are creating the foundation for the next growth phase. The focus is shifting from speculation to practical applications, particularly in finance and AI integration.
There is no more waitlist for GitHub Copilot Workspace
GitHub Next
GitHub Next | Copilot Workspace
GitHub Next Project: An agentic dev environment, designed for everyday tasks.
Stanford launched a free clone of Google's Deep Research called STORM.
It uses GPT-4o + Bing Search under the hood to generate long, cited reports from many websites in ~3 minutes.
It's also completely open-source and free to use.
GitHub.
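STORM's pipeline is roughly: generate questions from multiple simulated perspectives, search for answers, then synthesize an outline and a cited article. The schematic below mirrors that flow with placeholder functions; it is not STORM's actual API, so see the repo for the real package.

```python
# Schematic of a research -> cited-report flow in the style of STORM;
# every function here is a placeholder, not the project's API.

def research(topic: str) -> list[dict]:
    """Ask questions from several simulated perspectives, search the web,
    and keep (snippet, url) pairs as sources."""
    questions = [f"What is {topic}?", f"History of {topic}?"]  # LLM-generated in STORM
    return [{"q": q, "snippet": f"...search result about {topic}...",
             "url": "https://example.com"} for q in questions]

def write_report(topic: str, sources: list[dict]) -> str:
    """Outline, then draft sections with inline [n] citations."""
    body = "\n".join(f"{s['snippet']} [{i + 1}]" for i, s in enumerate(sources))
    refs = "\n".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(sources))
    return f"# {topic}\n{body}\n\n## References\n{refs}"

print(write_report("transformers", research("transformers")))
```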
GitHub
GitHub - stanford-oval/storm: An LLM-powered knowledge curation system that researches a topic and generates a full-length report…
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations. - stanford-oval/storm
2025_Top_Strategic_Technology_Trends_1736775138.pdf
2.4 MB
Gartner has released its Top Strategic Technology Trends for 2025.
Gartner analysts organized them across three themes:
1. AI imperatives and risks drive organizations to protect themselves.
2. New frontiers of #computing prompt organizations to reconsider how they compute.
3. Human-machine synergy brings together the physical and digital worlds.
The Top Technology Trends for 2025 are:
- Agentic AI
- Post-quantum #Cryptography
- Spatial Computing
- #AIGovernance Platforms
- Ambient Invisible Intelligence
- Polyfunctional #Robots
- Disinformation #Security
- Energy-Efficient Computing
- Neurological Enhancement
- Hybrid Computing