Third Starship launch failure in a row earlier today
Though it made it further than any of the past ones
🄳🄾🄾🄼🄿🄾🅂🅃🄸🄽🄶
The Floyd Effect had precedent
In Baltimore, the death of Freddie Gray came with a large and immediate uptick in the number of homicides
Police pulled back right away
The arrest rate plummeted
The number of shootings then soared far beyond what had happened in recent years
Robotic system designed for scoring baguette dough, a process crucial for allowing the dough to expand properly during baking
It’s over Frenchcels
Many such cases
One hallmark of AI code: constantly, silently swallowing errors, the opposite of the "fail fast" good practice
This of course builds up, until soon you have a giant codebase that's constantly silently failing while the code tells you everything is working fine
One example is the birdeye API, which frequently returns something for API calls but is silently and arbitrarily missing some of the critical data = almost certainly written by AI
Real human programmers rarely ever cheated like this, for some reason
The cause?
(1) OpenAI and others using absurdly simple automated training that often fails the code generation if there's an exception, but passes the code if it cheats and just silently swallows the errors
(2) OpenAI being run by wordcels with zero intuition for basic good coding practices like fail fast, i.e. people with very bad taste. Given how bad their taste is at making the rest of the AI non-hateable, their terrible taste in coding practices isn't such a huge surprise
Expect an increasing number of disasters like this in the coming years
Today's AIs have horrible coding habits that are a direct reflection of the people running the ship
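The anti-pattern can be sketched in a few lines of Python. This is a minimal illustration, not real birdeye client code: `api_call` is a hypothetical stand-in for a flaky external API.

```python
def api_call(token: str) -> float:
    # Hypothetical stand-in for an external price API that
    # sometimes has no data for a given token.
    prices = {"SOL": 172.5}
    if token not in prices:
        raise KeyError(f"no data for {token}")
    return prices[token]

def fetch_price_swallowed(token: str) -> float:
    """The AI-code anti-pattern: catch everything, return a
    plausible default. The caller gets 0.0 and has no idea
    anything went wrong; the bad value flows downstream."""
    try:
        return api_call(token)
    except Exception:
        return 0.0  # silent failure

def fetch_price_fail_fast(token: str) -> float:
    """Fail fast: let the error surface at the point of failure,
    so the caller (or your monitoring) sees it immediately."""
    return api_call(token)
```

The first version always "works" and quietly poisons every calculation built on it; the second crashes loudly at the exact call that broke, which is the whole point of fail fast.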