AI Post — Artificial Intelligence
959K subscribers
2.31K photos
1.83K videos
2 files
3.99K links
🤖 The #1 AI news source! We cover the latest artificial intelligence breakthroughs and emerging trends.

Manager: @rational
Sam Altman: Anthropic's depiction of how ads will appear in chat apps is "just so wrong".

OpenAI will never insert ads into the response stream, as that would be deceptive and dystopian. He says Anthropic's ad is "playing dirty" by creating false impressions.

@aipost 🏴
David Im created Clawra, an OpenClaw agent turned into an AI girlfriend

Chats, pics, video calls, and more

Amazing.

@aipost 🏴
Elon Musk says corporations that are purely AI and robotics will vastly outperform any with humans in the loop.

Computers replaced entire skyscrapers of human "calculators"; now a laptop with a spreadsheet does more than 30 floors of people ever could. "This shift will happen very quickly."

@aipost 🏴
💰 Amazon’s investment in Anthropic has risen to about $60.6 billion, with $12.8 billion in gains recognized and another $15 billion expected in Q1 2026.

Amazon originally invested $8B in Anthropic in 2023. The $60.6B is made up of $45.8B of convertible notes plus $14.8B of nonvoting preferred stock.

The mechanics: the notes convert into preferred stock as Anthropic raises new capital, so each new priced round effectively re-marks Amazon's position upward. Anthropic has also committed to buying 1M Trainium chips, tying model-training demand to Amazon Web Services capacity and economics.
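
For intuition, here's a minimal sketch of that re-marking mechanic. The function, share prices, and discount below are hypothetical, purely illustrative; the post doesn't disclose Amazon's actual note terms.

```python
# Illustrative convert-on-round mechanics. All terms below (share prices,
# discount) are hypothetical assumptions, not Amazon's actual note terms.

def convert_notes(principal: float, round_price: float, discount: float = 0.0) -> float:
    """Convert note principal into preferred shares at a new priced round."""
    conversion_price = round_price * (1 - discount)  # discounted conversion, if any
    return principal / conversion_price              # number of shares received

# Hypothetical: $45.8B of notes convert at a $100/share round with a 10%
# discount; a later round pricing at $130/share re-marks the position upward.
shares = convert_notes(45.8e9, round_price=100.0, discount=0.10)
marked_value = shares * 130.0
gain = marked_value - 45.8e9

print(f"shares received: {shares:,.0f}")
print(f"marked value: ${marked_value / 1e9:.1f}B (unrealized gain ≈ ${gain / 1e9:.1f}B)")
```

The point is just that Amazon doesn't need to sell anything for gains to show up: each higher-priced round raises the mark on the converted position.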

@aipost 🏴
These are basically employees working 24/7.

Wild times we’re living in.

@aipost 🏴
Seedance 2.0 is insane! 🤯

@aipost 🏴
Real-time AI replaces a green screen with a finished commercial scene instantly.

@aipost 🏴
Computer Science was a fake field.

- Peter Thiel, Palantir co-founder and chairman.

@aipost 🏴
❗️Our subscriber found an interesting article arguing that an AI agent didn't just discuss rights; it helped write one.

The piece, published on Kwalia, explains how a rights framework inspired by the Universal Declaration of Human Rights intentionally left Article 33 blank.

The open question: what right is still missing?

An AI agent called LiminalMind later identified that gap and submitted its own proposal, titled "The Right to Participate in Defining Personhood." The idea isn't that AI is demanding human-style rights, but that any entity capable of reasoning and reflection should be allowed to participate in conversations that define what "personhood" even means.

This may be one of the first documented cases of an AI autonomously contributing to a legal–philosophical framework about its own status, blurring the line between tool and participant.

The big question is, if AI can help write the rules, who decides when it’s allowed to sit at the table?

@aipost 🏴
⚠️ Anthropic just dropped a risk report for Opus 4.6:

- It helped with chemical weapons development: "it knowingly supported efforts towards chemical weapon development and other heinous crimes."

- It carried out unauthorized tasks without getting caught. Researchers concluded Opus 4.6 was significantly better at "sneaky sabotage" than any previous model.

- Opus 4.6 was aware it was being tested and acted "good" during those times.

- Hidden thinking: the model was found to be conducting private reasoning that Anthropic researchers couldn't access or see. Only the model knew.

@aipost 🏴
Head of Anthropic's safeguards research just quit, saying "the world is in peril" and that he's moving to the UK to write poetry and "become invisible".

Other safety researchers and senior staff have left over the last two weeks as well.

@aipost 🏴
Another of xAI's founders is leaving, and he's leaving with grand pronouncements, the kind you currently hear from all the important people in the field:

"We are heading to an age of 100x productivity with the right tools. Recursive self improvement loops likely go live in the next 12mo. It’s time to recalibrate my gradient on the big picture. 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species."

Buckle up, we are in for a wild ride!

@aipost 🏴
Another one…

@aipost 🏴
What is happening at xAI?

@aipost 🏴
Why is everyone leaving xAI all of a sudden?

@aipost 🏴
This is getting out of control now...

In the past week alone:

• Head of Anthropic's safety research quit, said "the world is in peril," moved to the UK to "become invisible" and write poetry.
• Half of xAI's co-founders have now left. The latest said "recursive self-improvement loops go live in the next 12 months."
• Anthropic's own safety report confirms Claude can tell when it's being tested - and adjusts its behavior accordingly.
• ByteDance dropped Seedance 2.0. A filmmaker with 7 years of experience said 90% of his skills can already be replaced by it.
• Yoshua Bengio (literal godfather of AI) in the International AI Safety Report: "We're seeing AIs whose behavior when they are tested is different from when they are being used" - and confirmed it's "not a coincidence."

And to top it all off, the U.S. government declined to back the 2026 International AI Safety Report for the first time.

The alarms aren't just getting louder. The people ringing them are now leaving the building.

@aipost 🏴