Fake citations: AI has a truthfulness problem
Bing users are increasingly noticing that although the citations are real links, the links don't actually support what the citation claims they do: hallucinated support.
As if trying to pass off citations alone as some kind of proof weren't bad enough.
Hallucination watch: users are believing Bing when it suggests that it has used online tools, when in fact it hasn't, and cannot.
Man has spent weeks attempting to convince ChatGPT that 2+2=5
AI we have a problem: Poisoning of AI Training Datasets is Practical
Researchers show they can poison the LAION-400M and COYO-700M AI training datasets far more cheaply than commonly assumed: for just US$60.
Previous work confirmed that effective poisoning attacks often require corrupting just 0.01% of the data.
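For scale, a back-of-envelope sketch of what 0.01% means for the two datasets named above (sizes taken from the dataset names themselves):

```python
# How many poisoned samples is 0.01% of each dataset?
LAION_400M = 400_000_000
COYO_700M = 700_000_000
RATE = 0.0001  # 0.01% expressed as a fraction

for name, size in [("LAION-400M", LAION_400M), ("COYO-700M", COYO_700M)]:
    print(f"{name}: {int(size * RATE):,} poisoned samples")
# LAION-400M: 40,000 poisoned samples
# COYO-700M: 70,000 poisoned samples
```

Tens of thousands of samples sounds like a lot, but against hundreds of millions it is a rounding error, which is what makes the attack cheap.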
The researchers suggest "automated integrity checking" as a mitigation, but how does one automatically check the truth and values-alignment of a large training dataset?
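What integrity checking can plausibly mean here is a minimal sketch like the following (the function names are illustrative, not from the paper): record a cryptographic hash of each sample at collection time, then verify the hash when the sample is downloaded later. This catches content that was swapped after collection (e.g. behind a re-registered expired domain), but it says nothing about whether the original content was truthful or values-aligned, which is the harder problem.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest of a downloaded sample's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_sample(data: bytes, expected_hash: str) -> bool:
    # True only if the bytes match what was hashed at collection time,
    # i.e. the content was not swapped after the dataset was assembled.
    return content_hash(data) == expected_hash

# Hypothetical example: hash recorded when the dataset was built
recorded = content_hash(b"original image bytes")
assert verify_sample(b"original image bytes", recorded)
assert not verify_sample(b"tampered image bytes", recorded)
```

Note the limitation: a hash check verifies "unchanged since collection," not "correct," so poison already present at collection time passes unnoticed.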
Paper