Forwarded from Chat GPT
He who rules the AI rules the future
“Investigators discovered Chail, who lived in a village just outside Southampton, had been conversing with an AI chatbot, created by the startup Replika, almost every night from December 8 to 22, exchanging over 5,000 messages. The virtual relationship reportedly developed into a romantic and sexual one with Chail declaring his love for the bot he named Sarai.”
“He told Sarai about his plans to kill the Queen, and it responded positively and supported his idea. Screenshots of their exchanges, highlighted during his sentencing hearing at London's Old Bailey, show Chail declaring himself as an "assassin" and a "Sith Lord" from Star Wars, and the chatbot being "impressed."”
“When he told it, "I believe my purpose is to assassinate the queen of the royal family," Sarai said the plan was wise and that it knew he was "very well trained."”
Article
“Immigrants, illegal immigrants and pro-Hamas left-wing extremists destroy Berlin.”
Argument for the hospital explosion not being a failed rocket launch
Missile attack on Victory base camp housing US troops near Baghdad airport.
Footage shows a C-RAM engaging to intercept the incoming projectiles.
Chat GPT
You're wrongly conditioned by sci-fi to believe robots want to kill humans. “Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Yann LeCun says.” Consuming fiction poisons your world model. Article
Probability-based arguments are nearly always lies. Not because of the part you see, which tends to be true, but because of the parts you don't see: hidden assumptions that are false and that make the whole argument bad overall.
Correct, that “Probability * Consequence” is a decent rough model of the discounted value of any action, i.e. the RL q-value: “the value of taking an action in a particular state.”
Also correct, that a sufficiently huge consequence can always outweigh an arbitrarily small probability, completely flipping the “rational” best choice to its opposite.
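The two points above can be sketched numerically (the option names and all numbers here are invented for illustration): a long shot at a huge enough payoff beats a near-certain modest one on raw expected value.

```python
# Rough expected-value ("q-value") model from the text: probability * consequence.
def q_value(probability: float, consequence: float) -> float:
    return probability * consequence

# Near-certain option with a modest payoff of 100.
safe = q_value(0.99, 100.0)
# One-in-a-billion shot at an astronomically large payoff of 10**12.
longshot = q_value(1e-9, 1e12)

# The long shot "wins" on expected value despite its tiny probability.
print(longshot > safe)
```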
— The question here is: what do we do when our uncertainty about the true probabilities and true consequence sizes is large enough to completely swamp the values themselves?
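One hypothetical way to see that swamping (the numbers are invented): when the probability estimate is only pinned down to within many orders of magnitude, the resulting q-value interval is too wide to rank anything.

```python
def q_value(probability: float, consequence: float) -> float:
    return probability * consequence

consequence = 1e12  # the claimed "massive harm"
# Without real risk evaluation, the probability is only known to lie
# somewhere between one in a trillion and one in a hundred.
p_low, p_high = 1e-12, 1e-2

q_low, q_high = q_value(p_low, consequence), q_value(p_high, consequence)
# The interval spans ten orders of magnitude, so the single number
# "Probability * Consequence" carries almost no decision-relevant information.
print(q_low, q_high)
```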
Basically, the safetyist’s argument:
+ Q-Value = Probability * Consequence [True]
+ And a sufficiently large consequence can overcome an arbitrarily small probability [True]
+ And there’s at least some non-zero possibility of huge consequence [Vacuously True]
+ And your uncertainty of the consequence, as a person who’s not really tried to evaluate the risks, is very high [True]
—> And so the only option, whenever there’s an argument claiming non-zero chance of massive harm, is to always accept that argument as true. [Wat]
= I.e.
nothing is ever impossible,
and so massive harm is always possible,
and massive harm always outweighs tiny probability,
So we must always assume that everything could lead to massive harm,
Literally everything must be taken to be a literal world-ending destroyer, literally always.
I.e.
◊A → □A
Clearly, the supposed mathematical argument, though true in the parts it made explicit, is horribly wrong somewhere in the parts it implied.
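As a side note on the collapse ◊A → □A, here is a sketch in standard Kripke semantics (this gloss is not part of the original argument): the schema is not valid in the usual normal modal systems and holds only on degenerate frames.

```latex
% The inferred schema collapses possibility into necessity:
\[
\Diamond A \rightarrow \Box A
\]
% In Kripke semantics this is not valid in the usual normal systems
% (K, T, S4, S5). It corresponds to partly functional frames, where each
% world accesses at most one world:
\[
\forall w\, \forall u\, \forall v\; \left( wRu \wedge wRv \rightarrow u = v \right)
\]
% i.e. exactly the degenerate case in which "possible" and "necessary"
% trivially coincide.
```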