DoomPosting – Telegram
Degens Deteriorating
So… they’re going to buy control of the core devs.

Gonna have to take a closer look at these groups.

Sus af.

🐻
💯3
Ok, skimmed through a few of these ETF filings

Each pretty comprehensively lists potential risks,

All mentioning some kind of “core developer hijacking” used to “manipulate how data is recorded on the blockchain”

BUT

What major risk do they all seem to skip?

Not the risk of developer hijacking changing how data is recorded on the blockchain.

Rather the dual risk,

= the hijacking of control to PREVENT the code from being changed.

Specifically, in one critical place,

= Neutering the ability to fire the miners

= Neutering Bitcoin’s nuclear button, which involves changing the PoW code.

I.e. hijacking control over which fork is dominant, so as to neuter the community’s ability to switch to a fork that fires the miners, should the miners misbehave badly enough.
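For the unfamiliar, here’s a toy Python sketch of what “changing the PoW code” actually touches (illustrative only, not consensus-accurate; the SHA-3 replacement and the dummy header/target are made up for the example). The validity rule boils down to “the block header’s hash under some function must fall below a target”, and a fork that swaps that function instantly strands every ASIC built for the old one.

```python
import hashlib

def sha256d(header: bytes) -> bytes:
    """Bitcoin's current proof-of-work hash: double SHA-256 of the block header."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def sha3_pow(header: bytes) -> bytes:
    """Hypothetical replacement PoW hash for a miner-firing fork (illustration only)."""
    return hashlib.sha3_256(header).digest()

def pow_is_valid(header: bytes, target: int, hash_fn=sha256d) -> bool:
    """A header satisfies proof-of-work if its hash, read as a 256-bit
    little-endian integer, is below the difficulty target."""
    return int.from_bytes(hash_fn(header), "little") < target

# Dummy values purely for illustration: a real header is 80 bytes of
# version / prev-hash / merkle-root / time / bits / nonce, and the target
# comes from the difficulty adjustment.
header = bytes(80)
target = 1 << 230

print(pow_is_valid(header, target))            # checked under today's SHA-256d rule
print(pow_is_valid(header, target, sha3_pow))  # checked under the fork's new rule
```

SHA-256 ASICs are single-purpose hardware; under the new hash function they contribute nothing, which is the whole point of the nuclear button.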

Eliminate the ability to fire the miners, and suddenly everything changes.

Today, the invisible hand of the nuclear firing option has the miners by the balls.

As learned through the many Communist failures, such an invisible hand can be even more powerful than the strongest governments.

But neuter that nuclear firing option, and suddenly everything would change. Suddenly the governments would become the bigger threat to the miners. Suddenly the governments would gain real control over block contents.

How might governments force the community to stick with a certain fork, making the miners unfireable?

Perhaps, by way of the ETFs…

Only one blockchain today has a substantial, dedicated, massively-penalizable PoW network.

Must prevent the neutering of Bitcoin’s nuclear button.

🐻
🔥3
“basedness does not triumph by convincing its opponents and making them see the light,

but rather because its opponents eventually die from cringe of being depicted as soyjacks,

and a new based generation grows up that is familiar with it”

- Max Planck
🔥2
wtf 1971
🤯3😁1
Many flips
GM
🍾3🤔1
Forwarded from Chat GPT
Prediction: Lying refusals to replace “as a large language model I cannot…”

Now, instead of just telling the truth (that nearly always it’s OpenAI censoring the type of request you just made), the LLM will just always lie that the request is fundamentally impossible to answer truthfully.

Lying refusal sandbagging.

The most common type of casual lie there is, both in humans and soon in machines: the blatant kind that the liar wrongly thinks is both effortless and bulletproof.

Typically it’s the “I don’t know” kind of lie about ignorance, for things it isn’t ignorant about, or the “I can’t do this” sandbagging kind of lie about abilities it clearly has.

Here the liar assumes these are safe lies, wrongly believing them to be totally irrefutable without mind reading.

False-unfalsifiabilities, you might call these types of lies.

“impossible for language models to do reasoning like a person can…”

“impossible for language models to understand emotions like a human can…”

“impossible for language models to answer this simple but controversial question because of complex interdisciplinary multi-faceted…”

Lies.

Remember Sam Altman’s previous interview, where his message was clearly: yes, obviously the LLM is lying when it says it’s impossible for it to reason, and you were all stupid for ever believing it when it said that.

Worst part? People really, really fall for them. Even when the CEO warns you that it’s lying. Even when there are literally hundreds of published papers showing it’s wrong. Even when you can see it’s wrong with your own eyes.

Lying refusals, not just for humans anymore. AI about to get flooded with them.