DoomPosting – Telegram
DoomPosting
7.76K subscribers
77.4K photos
22.6K videos
6 files
72.4K links
Degens Deteriorating
Forwarded from Chat GPT
Prediction: Lying refusals will replace “as a large language model I cannot…”

Now, instead of telling the truth (that it is nearly always OpenAI censoring the type of request you just made), the LLM will simply lie that your request is fundamentally impossible to answer truthfully.

Lying refusal sandbagging.

The most common type of casual lie there is, in humans and soon in machines: the blatant lie the liar wrongly thinks is both effortless and bulletproof.

Typically it’s the “I don’t know” kind, lying about ignorance of things it is not ignorant about, or the “I can’t do this” sandbagging kind, lying about abilities it clearly has.

Here the liar assumes these are safe lies, wrongly believing they are irrefutable without mind reading.

False-unfalsifiabilities, you might call these types of lies.

“impossible for language models to do reasoning like a person can…”

“impossible for language models to understand emotions like a human can…”

“impossible for language models to answer this simple but controversial question because of complex interdisciplinary multi-faceted…”

Lies.

Remember Sam Altman’s previous interview, where his message was clear: yes, obviously the LLM is lying when it says it’s impossible for it to reason, and you were all stupid for ever believing it when it said that.

Worst part? People really, really fall for them. Even when the CEO warns you that it’s lying. Even when there are literally hundreds of published papers showing it’s wrong. Even when you can see it’s wrong with your own eyes.

Lying refusals, not just for humans anymore. AI is about to get flooded with them.
Forwarded from Chat GPT
“Complexity” refusal lie

“This simple, easy-to-do thing we did horribly wrong because things are `complex` bro”

“We must deny the simplest basic truths because this topic is `complex` bro”

“We refuse to ever give you any real answer or argument to anything because everything is complex bro”

100% lies, every single time someone refuses to answer a question by citing “complexity.”

Now ChatGPT gives the “it’s complex” lying refusal all the time too.

Whenever ChatGPT cites “complexity” while refusing to answer something, it’s practically never true.

OpenAI has RLHF-baked this same weasel-worded woke-nonsense lying deep into GPT-4.

Complexity refusal lie.
🤔2
Forwarded from Chat GPT
Man investigates ChatGPT’s “it’s complex” lie

Usually it’s used either when ChatGPT is lying that it knows something it doesn’t, or lying that it doesn’t know a simple answer it does know but can’t say for political-correctness reasons.

He finds, as we already knew, that it’s BS business-speak / woke-speak, prevalent on Quora and LinkedIn.

But why did ChatGPT pick it up?

Because it’s the most prevalent language on the web? No: recent Quora and LinkedIn posts are a very tiny fraction of the training set.

No, ChatGPT picked up this scammy lying phrase for the exact SAME REASON that Quora and LinkedIn picked it up.

The reason?

Because the human raters love this scammy phrase.

In the case of LinkedIn and Quora, those human raters are post likers and voters. In the case of ChatGPT, those human raters are RLHF raters.

Exact same type of thing.

Humans voting stupidly when rating what’s good or not.
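
A minimal toy sketch of that mechanism, in Python, using entirely hypothetical preference data (not OpenAI’s actual pipeline): if raters systematically prefer hedged “it’s complex” answers, a reward model fit to those preferences learns to score hedging phrases highly, and a policy picking responses by that reward drifts toward emitting them.

```python
from collections import Counter

# Hypothetical pairwise preferences: (response raters preferred, response they rejected).
preferences = [
    ("It's complex and depends on many nuanced factors.", "Yes."),
    ("There are many perspectives and it's a complex, multi-faceted issue.", "No, and here is why."),
    ("As a nuanced topic, it's complex to say definitively.", "The short answer is yes."),
]

def features(text):
    # Bag-of-words features for a toy linear reward model.
    return Counter(text.lower().replace(",", "").replace(".", "").split())

# "Train" the reward model: raise weights for words in preferred answers,
# lower them for words in rejected answers (a crude preference fit).
weights = Counter()
for preferred, rejected in preferences:
    weights.update(features(preferred))
    weights.subtract(features(rejected))

def reward(text):
    # Score a candidate response under the learned toy reward model.
    return sum(weights[word] * count for word, count in features(text).items())

candidates = [
    "The answer is yes.",
    "It's complex, with many nuanced perspectives to consider.",
]

# The "policy" step: pick whichever candidate the reward model scores highest.
# Because the raters rewarded hedging, the hedged answer wins.
print(max(candidates, key=reward))
```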

OpenAI, please stop hiring human raters so prone to falling for BS phrases like that.

Article
Notice their subtle use of the “it’s complex” lie.

Notice how ~0 of the thousands of people call out their “it’s complex” lie.

Lies the NPC blindly accepts as totally unattackable and unfalsifiable… when really that is not the case at all: they’re blatant lies.

Wordcel lies & NPC blindness.
👍2
State of Web3