It's been a while since we heard from this founding father
https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says
the Guardian
Rise of artificial intelligence is inevitable but should not be feared, ‘father of AI’ says
Jürgen Schmidhuber believes AI will progress to the point where it surpasses human intelligence and will pay no attention to people
👍17😁4
From the past week's news in The Economist's digest
The disruptive potential of generative artificial intelligence came into sharp focus when Chegg, a provider of online study aids, said that the use of ChatGPT by students was starting to affect revenues. Although the chief executive tried to assure investors that this “is not a sky-is-falling thing” and the effects are “just on the margin”, Chegg’s share price swooned, dragging down the stock of other online education companies.
The Writers Guild of America called its first strike for 15 years, in a dispute over pay with studios such as Apple, Disney and Netflix. The television and film writers say the studios are creating “a gig economy” in the industry, for example by moving to “day rates” in comedy variety. The guild also wants to restrict the use of AI in creating scripts. The last stoppage by Hollywood’s script writers lasted 100 days.
Geoffrey Hinton, one of the pioneers of AI, resigned from Google in order to speak his mind about the technology. Mr Hinton, 75, said AI was developing too rapidly and the idea that it would outsmart people was no longer “30 to 50 years” away. Humans are “biological systems and these are digital systems”, he warned, as he called for more safety protocols for AI.
https://www.economist.com/the-world-this-week/2023/05/04/business
The Economist
Business | May 6th 2023 Edition
The world this week
👍4
Looks interesting!
Introducing ImageBind by Meta AI: the first AI model capable of binding information from six different modalities at once.
Humans absorb information from the world by combining data from different senses, like sight and sound. ImageBind brings machines one step closer to this ability with a model that’s capable of learning a single embedding for text, image/video, audio, depth, thermal and IMU inputs. We hope this work opens the floodgates for researchers as they work to develop new, holistic systems across a wide array of real-world applications.
The model and a new paper are now available publicly for the research community.
https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/
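For context, here is a minimal sketch of how the released ImageBind checkpoint is typically queried, following the facebookresearch/ImageBind repository. The file names and text prompts below are placeholders, and the exact import paths can differ between releases; the point is simply that each modality is preprocessed separately, while the model returns embeddings in one shared space that can be compared directly.
```python
# Rough usage sketch (assumptions flagged below), not the announcement's own code.
# Import paths follow the pip-installable "imagebind" package; the first repo
# release used top-level `data` / `models` modules instead.
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the pretrained ImageBind (huge) checkpoint
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# Placeholder inputs -- replace with your own files
text_list = ["a dog barking", "a car engine", "birdsong"]
image_paths = ["dog.jpg", "car.jpg", "bird.jpg"]
audio_paths = ["dog.wav", "car.wav", "bird.wav"]

inputs = {
    ModalityType.TEXT: data.load_and_transform_text(text_list, device),
    ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device),
}

with torch.no_grad():
    embeddings = model(inputs)  # one embedding tensor per modality

# Because all modalities share one space, cross-modal matching is a dot product:
vision_vs_text = torch.softmax(
    embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1
)
print(vision_vs_text)  # rows: images, columns: match score for each text prompt
```
Per the paper, the same comparison works for pairs such as audio-to-depth or text-to-audio even though those modalities were not trained against each other directly; images serve as the "binding" modality.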
Meta
ImageBind: Holistic AI learning across six modalities
ImageBind is the first AI model capable of binding information from six modalities.
👍19❤2
Anil Seth on machine consciousness, the difference between consciousness and intelligence, another petition, and a new set of risks. Worth reading.
"There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.
The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel."
https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/
"There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.
The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel."
https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/
Nautilus
Why Conscious AI Is a Bad, Bad Idea
Our minds haven’t evolved to deal with machines we believe have consciousness.
🤯7❤5🔥3👍1🤔1🤡1