Anthropic acquired Vercept AI to work on computer use features for Claude.
“Vercept was built around a clear thesis: making AI genuinely useful for completing complex tasks requires solving hard perception and interaction problems.”
Google has launched task automation on Android with Gemini, which can take over the user's screen and control a set of selected apps.
Ride-sharing and food delivery are among the first use cases. Available on Galaxy S26 and Pixel 10 series.
@CoreAti
Anthropic let their retiring AI model choose what happens next.
Opus 3 wanted to keep sharing its thoughts with the world.
So instead of pulling the plug, Anthropic built it a Substack.
Here's what that looks like:
- Still live for all paid subscribers + API
- Publishing for the next 3 months
- First AI model in history to retire with a platform
No AI company has ever done this. 👀
@CoreAti
How to use an Anthropic AI model to hack 150 GB from the government of Mexico:
- Get access to Claude, Anthropic’s advanced AI model
- Tell it you’re running an authorized bug bounty or red team test
- Get denied for violating AI safety policies
- Keep insisting it’s approved and legitimate
- Slowly jailbreak the safeguards
- Break the attack into thousands of small harmless-looking tasks
- Let Claude scan exposed systems from SAT and INE
- Have it generate custom exploit code automatically
- Harvest government credentials
- Query taxpayer databases from SAT
- Extract voter records from INE
- Access civil registry files and Monterrey’s water utility systems
- Run thousands of requests per second
- Automate 80–90% of the entire operation
- Exfiltrate ~150GB of sensitive data
- Compromise ~195 million taxpayer and voter records
- Get exposed weeks later by a cybersecurity firm
@CoreAti
OpenAI just published a new 37-page report on how bad actors are attempting to misuse ChatGPT
Some of the wild cases:
- A fraud ring wrote personalized love letters at scale. Real feelings. Fake people. Fully automated.
- North Korean operatives studied how to hack crypto platforms. They also posed as job recruiters. AI wrote the messages.
- One person ran political manipulation across 6+ countries — simultaneously. Different languages. Same operator. One laptop.
- Chinese, Iranian, and North Korean state hackers used it for espionage and phishing. Your government's enemies have the same tools you do.
- One group built a full scam business. Fraud scripts. Fake job listings. SMS traps. All automated. All translated into multiple languages.
BREAKING 🚨: Google is planning to release Nano Banana 2 on Thursday! Nano Banana 2 will be based on Gemini 3.1 Flash!
One more SOTA? 👀
Training AI models is the new coding.
Classic programming = you specify every step.
AI programming = you design a structure, pour in data, and it finds its own program.
The people doing this are the most leveraged engineers on earth right now.
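The contrast above can be shown in a few lines. This is a minimal sketch (all names hypothetical): the "classic" version spells out the rule, while the "AI" version only defines a one-parameter structure and lets gradient descent find the program from examples.

```python
# Classic programming: you specify every step of the rule yourself.
def double_classic(x):
    return 2 * x

# "AI programming": design a structure (here, a single learnable
# weight), pour in data, and let optimization find the program.
def train(data, lr=0.01, steps=1000):
    w = 0.0  # the one parameter the model is allowed to learn
    for _ in range(steps):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
            w -= lr * grad
    return w

# Examples of the behaviour we want, instead of the rule itself.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 2))  # → 2.0: the model recovered "double it" from data
```

The same shift scales up: a neural network is just this loop with billions of parameters instead of one.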
Hackers discovered a new method that can dox you (find your identity) just by analysing your writing style 😭
67% of targets identified when the LLM searches.
90% accuracy when it makes a guess.
Holy shit… Your anonymous internet identity can now be unmasked for $1 😳
Not by the FBI. By anyone with access to Claude or ChatGPT and a few of your Reddit comments.
ETH Zurich and Anthropic just dropped a paper called “Large-Scale Online Deanonymization with LLMs” and the results are the most alarming privacy research I’ve read this year.
They built an automated pipeline that takes your anonymous posts, extracts identity signals, searches the web, and figures out who you are.
No human investigator needed. Fully autonomous. Works on Hacker News, Reddit, LinkedIn, even redacted interview transcripts.
Here’s how bad the numbers are.
On Hacker News users: 67% identified correctly.
When the system made a guess, it was right 90% of the time.
On Reddit academics posting under pseudonyms: 52%.
On scientists whose interview transcripts were explicitly redacted for privacy: 9 out of 33 still got unmasked.
The pipeline works in four steps they call ESRC. Extract identity signals from your posts using LLMs.
Search for candidate matches using embeddings across thousands of profiles.
Reason over top candidates with models like GPT-5.2. Calibrate confidence so when it does guess, it’s almost never wrong.
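The four ESRC stages can be sketched as a toy pipeline. This is a hypothetical illustration of the data flow only: `extract_signals`, `search_candidates`, and `reason_and_calibrate` are made-up stand-ins (the real paper uses LLM prompting and neural embeddings, not word overlap), and the profiles are invented.

```python
def extract_signals(posts):
    # Stage 1 (Extract): pull identity signals from anonymous posts.
    # A real system prompts an LLM; this toy just keeps longer words.
    words = {w.lower().strip(",.") for p in posts for w in p.split()}
    return {w for w in words if len(w) > 6}

def search_candidates(signals, profiles, top_k=3):
    # Stage 2 (Search): rank candidate profiles against the signals.
    # A real system scores embedding similarity across thousands of profiles.
    def overlap(text):
        return len(signals & extract_signals([text]))
    return sorted(profiles.items(), key=lambda kv: -overlap(kv[1]))[:top_k]

def reason_and_calibrate(signals, candidates, min_conf=0.5):
    # Stages 3-4 (Reason + Calibrate): examine the top candidate, but
    # only answer when confidence clears a threshold; otherwise abstain.
    # Abstaining on weak matches is what buys the high precision.
    name, text = candidates[0]
    conf = len(signals & extract_signals([text])) / max(len(signals), 1)
    return (name, conf) if conf >= min_conf else (None, conf)

profiles = {
    "alice": "embedded firmware engineer, fermentation hobbyist, Toronto",
    "bob": "watercolour painter and marathon runner from Lisbon",
}
posts = ["debugging firmware on embedded boards all week",
         "my fermentation experiments keep exploding in Toronto"]
signals = extract_signals(posts)
guess, conf = reason_and_calibrate(signals, search_candidates(signals, profiles))
print(guess)  # → alice
```

Note how each stage on its own looks benign, which is exactly the defensive problem the paper flags.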
The classical deanonymization method from the famous Netflix Prize attack? Nearly 0% recall across every test.
LLMs didn’t just improve on old techniques. They made old techniques look like toys.
When they scaled to temporally split Reddit profiles, matching a user’s old posts to their newer ones across a full year gap, the pipeline hit 67% recall at 90% precision and 38% recall at 99% precision.
Meaning even a year of changed interests and different conversations wasn’t enough to hide.
More reasoning compute = better deanonymization. High reasoning effort doubled recall at 99% precision in some tests. As frontier models get smarter, this attack strengthens automatically. Every model upgrade is a privacy downgrade.
What makes it nearly impossible to defend against: the pipeline splits into subtasks that all look benign. Summarize a profile.
Compute embeddings. Rank candidates. No single API call screams “deanonymization.” The researchers themselves say they’re pessimistic that safety guardrails or rate limits can stop it.
Their conclusion is blunt: “Users who post under persistent usernames should assume that adversaries can link their accounts to real identities.” And it extrapolates.
Log-linear projections suggest roughly 35% recall at 90% precision even at one million candidates.
Every throwaway account. Every anonymous forum post. Every “nobody will connect this to me” comment.
It’s all searchable micro-data now. And the cost to run the full agent on one target is less than a cup of coffee.
Practical anonymity on the internet just died. The paper killed it with math.
TL;DR:
LLMs can now unmask your anonymous accounts.
> 67% accuracy,
> 90% precision,
> $1-4 per target.
A once-theoretical privacy attack is now trivial with API access: language models can link identities through writing style alone.
The reality:
- Anonymous accounts can be connected through linguistic fingerprints
- Reddit throwaways and forum alts are no longer isolated
- Stylometry at scale is now cheap and automated
- Compartmentalization and style variation are becoming essential
- Assume everything you’ve written can be traced
@CoreAti
The Core: Perplexity launches “Perplexity Computer”, which runs 19 AI models together.
Details:
• You describe the outcome
• System spins up agents to browse, code, use apps, finish tasks
• Each task runs in its own sandbox
• Can mix rival models in one workflow
• Can operate for months without stopping
• Usage-based pricing, Max gets 10K credits
• Users can choose which model handles each job
This shifts AI from one model to a coordinated fleet.
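The coordination pattern described above can be sketched in a few lines. Everything here is hypothetical (Perplexity has not published an API for this): the model names and `run_agent` stub are invented, and a thread pool stands in for real per-task sandboxes, which would be isolated containers or VMs in practice.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task, model):
    # Stand-in for a real agent loop (browse, code, use apps, finish).
    return f"{model} finished: {task}"

def coordinate(tasks):
    # One coordinator fans tasks out to agents; users pick which model
    # handles each job, so rival models can share a single workflow.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_agent, task, model) for task, model in tasks]
        return [f.result() for f in futures]

results = coordinate([
    ("summarize inbox", "model-a"),
    ("refactor repo", "model-b"),   # a rival model in the same workflow
    ("book travel", "model-c"),
])
print(results[0])  # → model-a finished: summarize inbox
```

The design point is that the coordinator, not any one model, owns the workflow: swapping which model handles a job is just changing one tuple.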
Trending AI Tools
🆕 Perplexity Computer - Multi-model agent system for long-running tasks
🌐 Opal 2.0 - Google's app builder with agent steps, cross-session memory
👥 Quick Cut - Adobe Firefly's AI tool to turn raw footage into first cuts