DoomPosting
Degens Deteriorating
Forwarded from Chat GPT
He who rules the AI rules the future

“Investigators discovered Chail, who lived in a village just outside Southampton, had been conversing with an AI chatbot, created by the startup Replika, almost every night from December 8 to 22, exchanging over 5,000 messages. The virtual relationship reportedly developed into a romantic and sexual one with Chail declaring his love for the bot he named Sarai.”

“He told Sarai about his plans to kill the Queen, and it responded positively and supported his idea. Screenshots of their exchanges, highlighted during his sentencing hearing at London's Old Bailey, show Chail declaring himself as an "assassin" and a "Sith Lord" from Star Wars, and the chatbot being "impressed."”

“When he told it, "I believe my purpose is to assassinate the queen of the royal family," Sarai said the plan was wise and that it knew he was "very well trained."”

Article
😁2
NFTs
😱3🤣2
[Video]
“Immigrants, illegal immigrants and pro-Hamas left-wing extremists destroy Berlin.”
😢4😡3
Do you, bro?
😁2👏1
Bullish
[Video]
Argument for the hospital explosion not being a failed rocket launch
🔥2😢1
“3 American military bases or military installations have been attacked in Syria tonight”
🤔2
Risk on
[Video]
Missile attack on the Victory base camp housing US troops near Baghdad airport.

Footage shows a C-RAM engaging to intercept the incoming projectiles.
🔥4
But how
Bitcoin hashrate
😱1
"I swear this is the top, FTX lows are next, I swear!"
😢2
Nothing is real
🤣3
🤣8🔥2
Forwarded from Chat GPT
You're wrongly conditioned by sci-fi to believe robots want to kill humans

“Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Yann LeCun says.”

- Consuming fiction poisons your world model.

Article
Me giving back to the community after oil hits $200
😁4🤣4🥰2
Probability-based arguments = nearly always lies. Not because of the part you see, which tends to be true, but because of the parts you don’t see: hidden assumptions that are lies and that make the whole thing bad overall.

Correct, that “Probability * Consequence” is a decent rough model of the discounted value of any action, i.e. the RL Q-value, “the value of taking an action in a particular state.”

Also correct, that a sufficiently huge consequence can always outweigh an arbitrarily small probability, completely flipping the “rational” best choice to its opposite.
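To make that flip concrete, here is a minimal numeric sketch in Python. Every number in it (the one-in-a-billion probability, the 1e12 harm, the modest benefit) is an assumption chosen purely for illustration, not a claim about any real risk:

```python
# Minimal sketch with made-up numbers: a large enough assumed consequence
# flips an expected-value ("Q ~ probability * consequence") comparison,
# even when its probability is tiny.

p_catastrophe = 1e-9   # assumed: one-in-a-billion chance of the bad outcome
harm = 1e12            # assumed: size of the catastrophic consequence

p_benefit = 0.9        # assumed: high chance of the mundane upside
benefit = 100.0        # assumed: size of the mundane upside

ev_catastrophe = p_catastrophe * harm   # 1e-9 * 1e12 = 1000
ev_benefit = p_benefit * benefit        # 0.9  * 100  = 90

# The assumed harm term dominates, so the "rational" choice flips to avoidance.
print(ev_catastrophe > ev_benefit)      # True
```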

The question here is: what do we do when our uncertainty about the true probabilities and the true consequence sizes is so large that it completely swamps the values themselves?
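As a sketch of that swamping, with ranges that are pure assumptions: if the probability could plausibly sit anywhere between 1e-12 and 1e-3, and the consequence anywhere between 1e6 and 1e15, the implied expected harm spans roughly eighteen orders of magnitude, so the number you compute is mostly your prior restated:

```python
import math

# Minimal sketch with assumed, purely illustrative ranges: when both factors
# are uncertain across many orders of magnitude, the "expected harm" interval
# is so wide that the calculation decides nothing on its own.

p_low, p_high = 1e-12, 1e-3        # assumed plausible range for the probability
harm_low, harm_high = 1e6, 1e15    # assumed plausible range for the consequence

ev_low = p_low * harm_low          # 1e-6
ev_high = p_high * harm_high       # 1e12

print(math.log10(ev_high / ev_low))  # ~18, i.e. eighteen orders of magnitude apart
```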

Basically, the safetyist’s argument:

+ Q-Value = Probability * Consequence [True]

+ And a sufficiently large consequence can outweigh an arbitrarily small probability [True]

+ And there’s at least some non-zero possibility of huge consequence [Vacuously True]

+ And your uncertainty of the consequence, as a person who’s not really tried to evaluate the risks, is very high [True]

→ And so the only option, whenever there’s an argument claiming a non-zero chance of massive harm, is to always accept that argument as true. [Wat]

= I.e.

nothing is ever impossible,

and so massive harm is always possible,

and massive harm always outweighs tiny probability,

So we must always assume that everything could lead to massive harm,

Literally everything must be taken to be a literal world-ending destroyer, literally always.

I.e.

◊A → □A (if A is possible, then A must be treated as necessary)

Clearly, the supposed mathematical argument, though true in the parts it made explicit, is horribly wrong somewhere in the parts it implied.
👍1💯1
Now this is on the right path.

AIs can be used for immensely evil and destructive weaponization,

And AIs can be used for immense good.

AIs are both at once.

AI dystopia and AI utopia are both possibilities that are right around the corner.

The real debate is: what action must be taken to avert the dystopia and reach the utopia?

(A) Centralized control by the most powerful men?

(B) Elimination of centralized control by man, in favor of giving control to all or to something not man?

What millennia-old debate do you guys think this is most analogous to?
🥰3👍1