gonzo-обзоры ML статей
Authors:
Grisha Sapunov, formerly head of development at Yandex News, now CTO of Intento. Areas of interest: AI/ML/DL, bioinformatics.
Lyosha Tikhonov, formerly an analyst at Yandex, author of Автопоэт ("Autopoet"), Нейронная Оборона ("Neural Defense")... Areas of interest: discrete domains, NLP, RL.
Interesting thoughts about AI vs. AGI in a recent podcast with David Deutsch:

Naval Ravikant: Related to that, we’ve touched upon AGI [artificial general intelligence] here and there. You have said AGI is absolutely possible, and that Turing settled that issue. Now some people are saying it’s almost inevitable and we have to worry about things like AGI alignment. I’m wondering if you have thoughts on both, is self-improving, runaway AGI here? And is this something that we need to align with our beliefs, whatever those are?

In fact, I don’t think we, as humans, can even agree upon alignment, but suppose we could. How would we align AGI?

David Deutsch: Yeah, and we no longer even try to align humans in the way that people want to align AGIs, namely by physically crippling their thinking. Yes, I don’t think we’re anywhere near it yet, I’d love to be wrong about that, but I don’t think we’re anywhere near it. And I think that AI, although it’s a wonderful technology, and I think it’s going to go a lot further than it is now, has nothing to do with AGI. It’s a completely different technology, and it is in many ways the opposite of AGI. And the way I always explain this is that with an AGI, or a person, an artificial person, their thinking is unpredictable; we’re expecting them to produce ideas that nobody predicted they would produce, and which are good explanations. That’s what people can do. And I don’t mean necessarily write physics papers or whatever, we do this thing in our everyday lives all the time.

You can’t live an ordinary human life without creating new good explanations. An AGI would be needed to build a robot that can live in the world as a human, that’s Turing’s idea, with what is mistakenly called the Turing test. Now, why an AI is the opposite of an AGI is that an AGI, as I said, can do anything, whereas an AI can only do the narrow thing that it’s supposed to do: a better chatbot is one that replies in good English, replies to the question you ask, can look things up for you, doesn’t say anything politically incorrect. The better the AI is, the more constrained its output is. You may not be able to actually say what the result of all your constraints must be; it’s not constrained in the sense that you prescribe what it is going to say, but you prescribe the rule, or rules, that what it is going to say must follow.

So if it’s a chess-playing machine, a chess-playing program, then the idea is that it must win the game. And making a better one of these means amputating more of the possibilities of what it would otherwise do, namely lose, or in the case of chatbots, say the wrong thing or not answer your question or contradict itself or whatever. So the art of making a good AI is to limit its possibilities tremendously. You limit them a trillion-fold compared with what they could be. There are a trillion ways of being wrong for every way of being right; the same is true of chess-playing programs. Whereas the perfect AGI, as it were, would be one where you can show, by looking at the program, mathematically, that there is no output it couldn’t produce, including no output at all. So an AGI, like a person, might refuse to answer; it should have that right under the First Amendment.

So you can’t have a behavioral test for an AGI because the AGI may not cooperate. It may be right not to cooperate because it may be very right to suspect what you are going to do to it. So you see that this is not only a different kind of program, it’s going to require a different kind of programming because there is no such thing as the specification. We know sort of philosophically what we want the AGI to be, a bit like parents know philosophically that they want their children to be happy, but they don’t want — if they’re doing the right thing, they don’t want to say, “Well, my child will never say X, will never utter these words,” like you do for an AI. You will recognize what it means to be happy once they’ve done it.

(there are many other interesting things as well)
And one more thing that’s hard not to post (though also hard to read, because of the paywall)

Google has joined forces with DeepMind and is working on the Gemini project to catch up with OpenAI and GPT-4. Along the way, Jacob Devlin (remember BERT?) has left for OpenAI. There is also a lot of noise about Bard having been trained on ChatGPT outputs, which is supposedly not allowed under the ToS.

https://www.theinformation.com/articles/alphabets-google-and-deepmind-pause-grudges-join-forces-to-chase-openai
And while we’re at it, let’s save the links to the petitions too:

- Petition 1 (against, or for, depending on your point of view): https://futureoflife.org/open-letter/pause-giant-ai-experiments/

- Petition 2 (for, or against):
https://laion.ai/blog/petition/
I was busy on April 1st, but figured it was time to start preparing a talk on NLP in 2024. So I’ve started. Join the #AInews2024 flash mob:

https://gonzoml.substack.com/p/the-guardian-2024
The Stanford 2023 AI Index Report has been published!

The section on machine translation is based on Intento data as usual :)

https://aiindex.stanford.edu/report/