DoomPosting – Telegram
DoomPosting
7.75K subscribers
76.8K photos
22.3K videos
6 files
71.6K links
Degens Deteriorating
Missile attack on Victory base camp housing US troops near Baghdad airport.

Footage shows C-RAM engaging to intercept incoming projectiles.
🔥4
But how
Bitcoin hashrate
😱1
"I swear this is the top, FTX lows are next, I swear!"
😢2
Nothing is real
🤣3
🤣8🔥2
Forwarded from Chat GPT
You're wrongly conditioned by sci-fi to believe robots want to kill humans

“Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Yann LeCun says.”

- Consuming fiction poisons your world model.

Article
Me giving back to the community after oil hits $200
😁4🤣4🥰2
Probability-based arguments = Nearly always lies. Not because of the part you see, which tends to be true, but because of the parts you don't see: hidden assumptions that are lies, which make the whole thing bad overall.

Correct, that "Probability * Consequence" is a decent rough model of the discounted value of any action, i.e. the RL Q-value: "the value of taking an action in a particular state."

Also correct, that a sufficiently huge consequence can always outweigh an arbitrarily small probability, completely flipping the "rational" best choice to the opposite.

The question here is: what do we do when our estimates of the true probabilities and true consequence sizes are so uncertain that the uncertainty completely swamps the values themselves?

Basically, the safetyist's argument:

+ Q-Value = Probability * Consequence [True]

+ And a sufficiently large consequence can overcome an arbitrarily small probability [True]

+ And there’s at least some non-zero possibility of huge consequence [Vacuously True]

+ And your uncertainty of the consequence, as a person who’s not really tried to evaluate the risks, is very high [True]

—> And so the only option, whenever there’s an argument claiming non-zero chance of massive harm, is to always accept that argument as true. [Wat]

= I.e.

nothing is ever impossible,

and so massive harm is always possible,

and massive harm always outweighs tiny probability,

So we must always assume that everything could lead to massive harm,

Literally everything must be taken to be a literal world-ending destroyer, literally always.

I.e.

◊A → □A

Clearly, the supposed mathematical argument, though true in the parts it made explicit, is horribly wrong somewhere in the parts it implied.
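The flip described above can be sketched in a few lines. This is a toy illustration, not anyone's actual risk model: all the probabilities and payoffs below are made-up numbers chosen to show how an arbitrarily tiny probability of an astronomically bad outcome dominates the "Q = Probability * Consequence" calculation.

```python
# Toy sketch of the expected-value ("Q-value") flip from the argument above.
# Every number here is an illustrative assumption, not a real estimate.

def q_value(probability: float, consequence: float) -> float:
    """Rough model from the post: Q = Probability * Consequence."""
    return probability * consequence

# A mundane choice between two actions.
act = q_value(0.9, 10.0)      # likely, modest payoff -> 9.0
wait = q_value(0.5, 15.0)     # -> 7.5
assert act > wait             # acting looks "rational"

# Now attach a tiny probability of a massive harm to acting.
doom = q_value(1e-12, -1e15)  # near-impossible, astronomically bad -> -1000.0
assert act + doom < wait      # the huge consequence flips the choice

# The catch: when our uncertainty about these inputs is larger than the
# inputs themselves, the flip is driven entirely by made-up numbers, and
# "possible" gets silently treated as "certain" (the ◊A → □A move).
```

Any sufficiently large negative consequence produces the same flip, no matter how small the probability, which is exactly why the hidden input estimates carry all the weight.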
👍1💯1
Now this is on the right path.

AIs can be used for immensely evil and destructive weaponization,

And AIs can be used for immense good.

AIs are both at once.

AI dystopia and AI utopia are both possibilities that are right around the corner.

The real debate is: what action must be taken to avert the dystopia and reach the utopia?

(A) Centralized control by the most powerful men?

(B) Elimination of centralized control by man, in favor of giving control to all or to something not man?

What millennia old debate do you guys think this debate is most analogous to?
🥰3👍1
🤮1
💯4😁2
“Me ever since I started buying NFTs”
🤣1
“Hamas published a video showing Judith and Natalie Raanan, who were held captive by Hamas, being handed over to the Red Cross in the Gaza Strip.”
John Soyjak, creator of all the most popular jaks
🔥1
Consensus perception does not make reality