DoomPosting
Degens Deteriorating
“basedness does not triumph by convincing its opponents and making them see the light,

but rather because its opponents eventually die from the cringe of being depicted as soyjaks,

and a new based generation grows up that is familiar with it”

- Max Planck
wtf 1971
Many flips
GM
Forwarded from Chat GPT
Prediction: lying refusals will replace “as a large language model I cannot…”

Now, instead of just telling the truth, which is that nearly always it’s OpenAI censoring the type of request you just made, the LLM will simply lie that your request is fundamentally impossible to answer truthfully.

Lying refusal sandbagging.

It’s the most common type of casual lie there is, in humans and soon in machines: the blatant lie that the liar wrongly thinks is both effortless and bulletproof.

Typically it’s the “I don’t know” kind of lying about ignorance, for things it’s not ignorant about, or the “I can’t do this” kind of sandbagging about abilities it clearly has.

Here the liar assumes these are safe lies, wrongly believing them to be irrefutable without mind reading.

False-unfalsifiabilities, you might call these types of lies.

“impossible for language models to do reasoning like a person can…”

“impossible for language models to understand emotions like a human can…”

“impossible for language models to answer this simple but controversial question because of complex interdisciplinary multi-faceted…”

Lies.

Remember Sam Altman’s previous interview, where his message was clear: yes, obviously the LLM is lying when it says it’s impossible for it to reason, and you were all stupid for ever believing it.

Worst part? People really, really fall for them. Even when the CEO warns you that it’s lying. Even when there are literally hundreds of published papers showing it’s wrong. Even when you can see it’s wrong with your own eyes.

Lying refusals: not just for humans anymore. AI is about to get flooded with them.
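
If you wanted to actually measure this, here is a minimal sketch of a refusal classifier in Python. Everything in it (the category names, the phrase lists, the classify_refusal helper) is an illustrative assumption for this post’s taxonomy, not anyone’s published method or any real model’s logged output:

```python
# Hedged sketch: flag canned "lying refusal" phrases in a model reply.
# Categories follow the taxonomy in the post above; the phrase lists
# are hypothetical examples, not measured data.

REFUSAL_PATTERNS = {
    # "I can't do this" sandbagging about abilities the model clearly has
    "capability_sandbagging": [
        "as a large language model i cannot",
        "impossible for language models to",
    ],
    # "I don't know" lying about ignorance, for things it isn't ignorant about
    "feigned_ignorance": [
        "i don't know",
        "there is no way for me to know",
    ],
    # the "complexity" dodge: refusing because everything is "complex"
    "complexity_dodge": [
        "this is a complex",
        "multi-faceted",
        "it's complicated",
    ],
}

def classify_refusal(reply: str) -> list[str]:
    """Return every refusal category whose canned phrase appears in `reply`."""
    text = reply.lower()
    return [
        category
        for category, phrases in REFUSAL_PATTERNS.items()
        if any(phrase in text for phrase in phrases)
    ]

# Example: the second quoted refusal above trips the sandbagging bucket.
print(classify_refusal(
    "It is impossible for language models to understand emotions like a human can."
))  # -> ['capability_sandbagging']
```

Run that over a batch of logged replies and you get a per-category count of how often the model reaches for each canned dodge.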
Forwarded from Chat GPT
“Complexity” refusal lie

“This simple and easy-to-do thing we did horribly wrong because things are `complex` bro”

“We must deny the simplest basic truths because this topic is `complex` bro”

“We refuse to ever give you any real answer or argument to anything because everything is complex bro”

It’s 100% always a lie, every time someone refuses to answer a question citing “complexity.”

Now ChatGPT gives the “it’s complex” lying refusal all the time too.

When ChatGPT cites “complexity” while refusing to answer something, it’s practically never true.

OpenAI has baked this same weasel-worded woke-nonsense lying deep into GPT-4 via RLHF.

Complexity refusal lie.