Interesting interview with Demis Hassabis:
https://time.com/6246119/demis-hassabis-deepmind-interview/
> "In the wrong hands, a 2021 DeepMind research paper says, language-generation tools like ChatGPT and its predecessor GPT-3 could turbocharge the spread of disinformation, facilitate government censorship or surveillance, and perpetuate harmful stereotypes under the guise of objectivity."
To be precise, OpenAI (and many others) said it in 2018:
https://openai.com/blog/preparing-for-malicious-uses-of-ai/
https://arxiv.org/abs/1802.07228
> "DeepMind is also considering releasing its own chatbot, called Sparrow, for a “private beta” some time in 2023. (The delay is in order for DeepMind to work on reinforcement learning-based features that ChatGPT lacks, like citing its sources."
The race continues...
And this is especially interesting:
> “We’re getting into an era where we have to start thinking about the freeloaders, or people who are reading but not contributing to that information base,” he says. “And that includes nation states as well.” He declines to name which states he means—“it’s pretty obvious, who you might think”—but he suggests that the AI industry’s culture of publishing its findings openly may soon need to end.
In my comment about the old OpenAI et al. paper on the Malicious Use of AI, I missed a fresh and very on-topic one:
https://openai.com/blog/forecasting-misuse/