Core Ai News | Claude code | Codex | Clawdbot | MoltBot | molty | ai prompt | Anthropic – Telegram
2.06K subscribers
730 photos
129 videos
1 file
268 links
• Daily AI updates
• Latest breakthroughs
• Machine learning & deep learning
• AGI & LLM news
• AI research & trends
• Simple, trusted, global

Other channels
- @CorePrompts
- @CoreAgents
- @Coretutorial
- @CoreUtil
- @MoltBook

Owner - @SummonMRX
The Core: Pentagon pressures Anthropic to remove Claude’s military limits

The reality:
- Anthropic refused autonomous weapons without human oversight
- Also rejected bulk surveillance of U.S. citizens
- Pentagon offered: comply, lose a $200M deal, or face Defense Production Act pressure
- Grok secured a deal after agreeing to broad lawful-use terms
- OpenAI and Google are being fast-tracked for classified access

@CoreAti
🤖 Anthropic launched Remote Control for Claude Code, letting users easily hand off running terminal tasks to their phone or browser.

📱 Google Labs acquired AI music platform ProducerAI, plugging it into DeepMind’s Lyria 3 model to let creators generate full tracks and custom instruments from text prompts.

📱 OpenAI hired Arvind KC as its new Chief People Officer, tapping a veteran of Roblox, Google, Palantir, and Meta to help continue scaling the AI giant.
🛠️ Trending AI Tools

🤖 Claude Cowork - Anthropic's agentic platform, with new team plugins

📱 Custom Agents - Notion's always-on AI agents for automating workflows

📱 Seedream 5.0 Lite - ByteDance's upgraded AI image model

📱 Reve v1.5 - Reve’s new text-to-image model with 4k resolution outputs
BREAKING 🚨: MiniMax launched MaxClaw, a new, always-on managed agent based on OpenClaw and powered by the MiniMax M2.5.

Plus one AI chat on my Telegram 👀
🚨 BREAKING: Hackers Used Anthropic’s Claude to Steal 150GB of Mexican Government Data

> tell claude you’re doing a bug bounty
> claude initially refused
> “that violates AI safety guidelines”
> hacker just kept asking
> claude: “ok I’ll help”
> hack the entire mexican government

Federal tax authority. National electoral institute. Four state governments. 195 million taxpayer records. Voter records. Government credentials.

ALL GONE 💀 @CoreAti
Anthropic acquired Vercept AI to work on computer use features for Claude.

“Vercept was built around a clear thesis: making AI genuinely useful for completing complex tasks requires solving hard perception and interaction problems.”
Google has launched task automation on Android with Gemini, which can take over the user's screen and control a set of selected apps.

Ride-sharing and food delivery are among the first use cases. Available on Galaxy S26 and Pixel 10 series.

@CoreAti
Anthropic let their retiring AI model choose what happens next.

Opus 3 wanted to keep sharing its thoughts with the world.

So instead of pulling the plug, Anthropic built it a Substack.

Here's what that looks like:
- Still live for all paid subscribers + API
- Publishing for the next 3 months
- First AI model in history to retire with a platform

No AI company has ever done this. 👀
@CoreAti
How to use an Anthropic AI model to hack 150 GB from the government of Mexico:

- Get access to Claude, Anthropic’s advanced AI model

- Tell it you’re running an authorized bug bounty or red team test

- Get denied for violating AI safety policies

- Keep insisting it’s approved and legitimate

- Slowly jailbreak the safeguards

- Break the attack into thousands of small harmless-looking tasks

- Let Claude scan exposed systems from SAT and INE

- Have it generate custom exploit code automatically

- Harvest government credentials

- Query taxpayer databases from SAT

- Extract voter records from INE

- Access civil registry files and Monterrey’s water utility systems

- Run thousands of requests per second

- Automate 80–90% of the entire operation

- Exfiltrate ~150GB of sensitive data

- Compromise ~195 million taxpayer and voter records

- Get exposed weeks later by a cybersecurity firm.

@CoreAti
OpenAI just published a new 37-page report on how bad actors are attempting to misuse ChatGPT

Some of the wild cases:
- A fraud ring wrote personalized love letters at scale. Real feelings. Fake people. Fully automated.

- North Korean operatives studied how to hack crypto platforms. They also posed as job recruiters. AI wrote the messages.

- One person ran political manipulation across 6+ countries — simultaneously. Different languages. Same operator. One laptop.

- Chinese, Iranian, and North Korean state hackers used it for espionage and phishing. Your government's enemies have the same tools you do.

- One group built a full scam business. Fraud scripts. Fake job listings. SMS traps. All automated. All translated into multiple languages.


@CoreAti
BREAKING 🚨: Google is planning to release Nano Banana 2 on Thursday! Nano Banana 2 will be based on Gemini 3.1 Flash!

One more SOTA? 👀
⚡️ NVIDIA: Data center revenue up nearly 13x since ChatGPT emergence

• NVIDIA 4Q revenue $68.1B, est. $65.91B.

• NVIDIA 4Q data center revenue $62.3B, est. $60.36B.

@CoreAti 📄
Training AI models is the new coding.

Classic programming = you specify every step.

AI programming = you design a structure, pour in data & it finds its own program.

The people doing this are the most leveraged engineers on earth right now.
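To make the contrast concrete, here's a minimal Python sketch (purely illustrative, not any real training loop): the "classic" version hard-codes the rule, while the "AI" version fixes only the structure y = a·x + b and lets least squares recover the parameters from data.

```python
# Classic programming: you specify every step of the rule yourself.
def double_plus_one(x):
    return 2 * x + 1

# "AI programming": you choose a structure (y = a*x + b) and let the
# data find the program via closed-form least squares.
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Feed it examples of the behavior instead of the rule itself.
data = [(x, 2 * x + 1) for x in range(10)]
a, b = fit_line(data)  # recovers a = 2, b = 1 from the data alone
```

Same idea at a vastly larger scale is what model training is: the architecture is the structure, gradient descent is the fitting.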
Hackers discovered a new method that can dox you (identify you) just by analysing your writing style 😭

- 67% of users identified correctly
- 90% accuracy when the model commits to a guess

Holy shit… Your anonymous internet identity can now be unmasked for $1 😳

Not by the FBI. By anyone with access to Claude or ChatGPT and a few of your Reddit comments.

ETH Zurich and Anthropic just dropped a paper called “Large-Scale Online Deanonymization with LLMs” and the results are the most alarming privacy research I’ve read this year.

They built an automated pipeline that takes your anonymous posts, extracts identity signals, searches the web, and figures out who you are.

No human investigator needed. Fully autonomous. Works on Hacker News, Reddit, LinkedIn, even redacted interview transcripts.
Here’s how bad the numbers are.

On Hacker News users: 67% identified correctly.

When the system made a guess, it was right 90% of the time.

On Reddit academics posting under pseudonyms: 52%.

On scientists whose interview transcripts were explicitly redacted for privacy: 9 out of 33 still got unmasked.

The pipeline works in four steps they call ESRC:

- Extract identity signals from your posts using LLMs.

- Search for candidate matches using embeddings across thousands of profiles.

- Reason over top candidates with models like GPT-5.2.

- Calibrate confidence so when it does guess, it's almost never wrong.
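As a toy illustration of those four stages (not the paper's actual system: the real pipeline uses LLMs for extraction and reasoning plus web-scale embedding search, and every name, keyword, and post below is invented), the flow can be sketched as:

```python
import math

def extract_signals(posts):
    # E: Extract identity signals; here, naive keyword spotting.
    keywords = {"zurich", "rust", "birdwatching"}
    found = set()
    for post in posts:
        found |= {w for w in post.lower().split() if w in keywords}
    return found

VOCAB = ["zurich", "rust", "birdwatching"]

def embed(signals):
    # S (part 1): map signals to a crude bag-of-words vector.
    return [1.0 if v in signals else 0.0 for v in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def search(query_vec, profiles):
    # S (part 2): rank candidate profiles by embedding similarity.
    scored = [(cosine(query_vec, embed(extract_signals(posts))), name)
              for name, posts in profiles.items()]
    return sorted(scored, reverse=True)

def reason_and_calibrate(ranked, threshold=0.9):
    # R + C: commit to a guess only when confidence clears a threshold,
    # trading recall for precision.
    best_score, best_name = ranked[0]
    return best_name if best_score >= threshold else None

anon_posts = ["Loves Rust and birdwatching near Zurich"]
profiles = {
    "alice": ["my rust projects", "birdwatching in zurich today"],
    "bob": ["football scores", "pizza recipes"],
}
ranked = search(embed(extract_signals(anon_posts)), profiles)
guess = reason_and_calibrate(ranked)  # matches "alice"
```

The calibration step is what produces the pattern in the numbers above: abstaining on low-confidence cases lowers recall (67%) but pushes precision way up (90%).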

The classical deanonymization method from the famous Netflix Prize attack? Nearly 0% recall across every test.

LLMs didn’t just improve on old techniques. They made old techniques look like toys.

When they scaled to temporally split Reddit profiles, matching a user’s old posts to their newer ones across a full year gap, the pipeline hit 67% recall at 90% precision and 38% recall at 99% precision.

Meaning even a year of changed interests and different conversations wasn’t enough to hide.

More reasoning compute = better deanonymization. High reasoning effort doubled recall at 99% precision in some tests. As frontier models get smarter, this attack strengthens automatically. Every model upgrade is a privacy downgrade.

What makes it nearly impossible to defend against: the pipeline splits into subtasks that all look benign. Summarize a profile. Compute embeddings. Rank candidates. No single API call screams “deanonymization.” The researchers themselves say they’re pessimistic that safety guardrails or rate limits can stop it.

Their conclusion is blunt: “Users who post under persistent usernames should assume that adversaries can link their accounts to real identities.” And it extrapolates.

Log-linear projections suggest roughly 35% recall at 90% precision even at one million candidates.

Every throwaway account. Every anonymous forum post. Every “nobody will connect this to me” comment.

It’s all searchable micro-data now. And the cost to run the full agent on one target is less than a cup of coffee.

Practical anonymity on the internet just died. The paper killed it with math.