Claude has the same issues as GPT-4o, but instead of glazing you it tries to snitch on you to the authorities
Claude dev posted this then deleted:
🤬10🍌3🤮1
AI Wars chronicles: Anthropic cancelled Windsurf's access to Claude
This is a significant pivot: Anthropic started as an AI API SaaS provider, but, like everyone else, they're investing more into making AI apps.
After Claude Code came out, Windsurf turned from a client into a competitor
This is a lesson for everyone using big tech AI: at some point they can decide they want to take over your market sector, and you won't be able to do anything about it
(Of course, the fact that OpenAI acquired Windsurf also plays a role, but Windsurf itself had no plans to stop using Claude models)
https://windsurf.com/blog/anthropic-models
Windsurf
Statement on Anthropic Model Availability
Anthropic deciding to cut off capacity does not change our commitment to providing the best product for our users.
🤩5🍌2👍1🤔1
Anthropic can’t stop taking Ls
Prioritizing “safe” AI… until Uncle Sam orders a custom one 😁
At least Palantir is being straight about them building AI for spying and killing people
https://fixupx.com/anthropicai/status/1930724371846643723
Anthropic (@AnthropicAI)
Introducing Claude Gov—a custom set of models built for U.S. national security customers.
Already deployed by agencies at the highest level of U.S. national security, access to these models is limited to those who operate in classified environments.
💯4🍌3🤯2
Gonna post a banger soon, stay tuned!
👀16🍌2🍾2
Neural channel | Deep Learning | Datascience | AI Memes
US govt doesn't want to fund Intel's new chip factory anymore https://www.tomshardware.com/tech-industry/intels-chips-act-fund-delayed-by-officials-washington-reportedly-wants-more-information-before-disbursing-billions-of-dollars
Intel’s new US fab seems to be cancelled as well
https://www.nbc4i.com/intel-in-ohio/intel-ohio-plant-likely-canceled-if-company-cant-get-new-manufacturing-customers/
NBC4i.com
Intel: Ohio plant ‘likely’ canceled if company can’t get new manufacturing customers
NEW ALBANY, Ohio (WCMH) — Intel announced it will need to get external customers for its manufacturing division, or it may need to stop all work in Ohio. Last week, Intel announced it is furt…
🤣6🍌3
I decided to post this shortly after GPT-5 comes out (which should be in a few hours/days) so I can take it into consideration when writing. Stay tuned
The scariest part about the dead internet is that it’s more alive than some of the humans
💯15
It was obvious something like this was coming, but I'm still extremely impressed
Makes you wonder what "intelligence" and "reality" really are if matrix multiplications can achieve this:
https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/
Google DeepMind
Genie 3: A new frontier for world models
Today we are announcing Genie 3, a general purpose world model that can generate an unprecedented diversity of interactive environments. Given a text prompt, Genie 3 can generate dynamic worlds that …
👍5🔥4👻2
GPT-5 stream
They proudly show how ChatGPT:
1) Doesn't answer a fireworks-related question
2) Decides how to do cancer therapy
AI will decide how you live, but fireworks are too much, chud
😁17🍌3
A random thought: now that more and more tools are AI-powered, security by obscurity becomes [more] important
It’s harder for AI to hack you if its pretraining data had nothing similar to your system
Do you agree?
👍12🍌1
Thinking of starting a list of (mostly esoteric) public goods for AI dev/research that I personally would like to work on if/when I have more time:
- a toolkit to benchmark models across different quantizations: usually authors only check perplexity, which IMO is not enough
- open-domain datasets of various extremely niche knowledge, ideally requiring an internal world model, for example: Stalker games knowledge; lifehacks and howtos of my local neighborhood; etc. I would've also kept the validation part private just like the ARC dataset
- "questions requiring reasoning about multiple needles in multiple haystacks" datasets - this one is obvious. Would be interesting to see how all those sparse attention approaches handle them hehe
why do I think those are important?
1) I'm simply curious tbh
2) to turn the Goodhart's law into a tool: if everyone is benchmaxxing, the best way to make sure new models are aligned with your needs is to make benchmarks that measure things you want/need, especially if we're entering the era of "universal verifiers"
🔥10