gonzo-обзоры ML статей
Authors:
Grisha Sapunov, formerly head of development at Yandex.News, now CTO of Intento. Interests: AI/ML/DL, bioinformatics.
Lesha Tikhonov, formerly an analyst at Yandex, author of Autopoet, Neural Defense... Interests: discrete domains, NLP, RL.
When you've generated a book with ChatGPT...
The engine powering Grok is Grok-1, our frontier LLM, which we developed over the last four months. Grok-1 has gone through many iterations over this span of time.

After announcing xAI, we trained a prototype LLM (Grok-0) with 33 billion parameters. This early model approaches LLaMA 2 (70B) capabilities on standard LM benchmarks but uses only half of its training resources. In the last two months, we have made significant improvements in reasoning and coding capabilities leading up to Grok-1, a state-of-the-art language model that is significantly more powerful, achieving 63.2% on the HumanEval coding task and 73% on MMLU.

...

At the frontier of deep learning research, reliable infrastructure must be built with the same care as datasets and learning algorithms. To create Grok, we built a custom training and inference stack based on Kubernetes, Rust, and JAX.

https://x.ai
In case you didn't have time to watch the keynote (https://www.youtube.com/live/U9mJuUkhUzk?si=9_KjNVsS3x7vxCdP) or read other summaries, here's my very brief one.

# GPT-4 Turbo
## 1 Context length
- up to 128k tokens (~300 pages of a standard book)

## 2 More control
- valid JSON mode for output (see the sketch after this list)
- multiple function calling + better instruction following
- reproducible outputs via the seed parameter
- logprobs in the API soon
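
A minimal sketch of what JSON mode and the seed parameter look like in the OpenAI Python SDK (v1.x); the model name, prompts, and JSON keys are illustrative assumptions, not from the keynote:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview
    response_format={"type": "json_object"},  # valid-JSON output mode
    seed=42,                                  # best-effort reproducible sampling
    messages=[
        {"role": "system",
         "content": "Reply in JSON with keys 'title' and 'tags'."},
        {"role": "user",
         "content": "Summarize: GPT-4 Turbo supports a 128k context."},
    ],
)
print(response.choices[0].message.content)
```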

## 3 Better world knowledge
- bringing retrieval to the platform
- knowledge cutoff moved from Sep 2021 to Apr 2023

## 4 New modalities
- DALL·E 3, GPT-4 Turbo with vision, and TTS in the API (see the TTS sketch after this list)
- safeguards against misuse
- 6 preset voices
- open-source Whisper v3 coming to the API soon
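
A hedged sketch of the new TTS endpoint via the same SDK; the voice, input text, and output file name are illustrative:

```python
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",               # new text-to-speech model
    voice="alloy",               # one of the six preset voices
    input="Hello from DevDay!",  # arbitrary example text
)
speech.stream_to_file("hello.mp3")  # write the audio to disk
```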

## 5 Customization
- fine-tuning for gpt-3.5-16k (sketch below)
- experimental access program for GPT-4 fine-tuning
- custom models for new domains, with tools to adjust different training stages
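
A rough sketch of starting a fine-tune of the 16k GPT-3.5 model through the SDK; the training file and suffix are hypothetical, and the model id follows the DevDay announcement:

```python
from openai import OpenAI

client = OpenAI()

# Upload JSONL training data (chat-format examples).
training_file = client.files.create(
    file=open("train.jsonl", "rb"),  # hypothetical dataset
    purpose="fine-tune",
)

# Kick off the fine-tuning job against the 16k-context model.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo-1106",      # 16k model per the announcement
    training_file=training_file.id,
    suffix="my-domain",              # hypothetical label for the result
)
print(job.id, job.status)
```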

## 6 Higher rate limits
- 2x tokens per minute
- can request further increases in settings

## 7 Lower pricing
GPT-4 Turbo
- 3x less for input tokens (1c per 1000 tokens)
- 2x less for completion tokens (3c per 1000)
- 2.75x less overall for most devs (see the arithmetic check after this section)
- starting today
- also a lot faster

GPT-3.5 Turbo 16k
- 0.1c/0.2c (3x/2x less, cheaper than the previous 4k model)

old fine-tuned GPT-3.5 Turbo 4k
- 1.2c/1.6c
new fine-tuned GPT-3.5 Turbo 16k
- 0.3c/0.6c (4x/2.7x less)
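
A quick arithmetic check of the "2.75x less overall" claim, assuming the old GPT-4 (8k) prices of 3c input / 6c output per 1k tokens and a prompt-heavy input:output ratio of about 9:1 (my assumption about "most devs", not a number from the keynote):

```python
old_in, old_out = 0.03, 0.06  # $ per 1k tokens, GPT-4 8k
new_in, new_out = 0.01, 0.03  # $ per 1k tokens, GPT-4 Turbo

ratio_in, ratio_out = 9, 1    # hypothetical prompt-heavy workload

old_cost = ratio_in * old_in + ratio_out * old_out  # 0.33
new_cost = ratio_in * new_in + ratio_out * new_out  # 0.12
print(f"{old_cost / new_cost:.2f}x cheaper")        # -> 2.75x cheaper
```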

# Building on the platform
- Copyright Shield for Enterprise and the API
- OpenAI will defend customers and pay costs incurred
- reminder: OpenAI does not train on data from the API or ChatGPT Enterprise

# ChatGPT news
- now uses GPT-4 Turbo by default
- can browse the web
- no more model picker

# Agents
- gradual iterative deployment
- GPTs: tailored versions of ChatGPT (instructions, expanded knowledge, actions)
- data is shared only with permission
- built with natural language in the GPT Builder
- can upload documents
- can be published publicly, kept private, shared by link, or created for a company in ChatGPT Enterprise
- GPT Store launching later this month
- revenue sharing will be there
- the same concept comes to the API as the Assistants API

# Assistants API (beta today)
- persistent threads with long conversation history (threads and messages manage state)
- retrieval: can read PDF files (RAG)
- code interpreter can generate and run Python code
- function calling
- threads can be inspected in the console (a compact usage sketch follows)
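
A compact sketch of the beta flow in the Python SDK, under the assumption that an assistant is created once and then run against a persistent thread; the name, instructions, and question are illustrative:

```python
import time

from openai import OpenAI

client = OpenAI()

# 1. Create an assistant with retrieval and code interpreter enabled.
assistant = client.beta.assistants.create(
    name="Paper helper",  # hypothetical assistant
    instructions="Answer questions about the uploaded documents.",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# 2. Threads hold the conversation state server-side.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What do the documents say about training cost?",
)

# 3. A run executes the assistant on the thread.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 4. Read the reply back from the thread (newest message first).
for msg in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(msg.role, ":", msg.content[0].text.value)
```
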
Interesting news.

https://www.hpcwire.com/2023/11/13/training-of-1-trillion-parameter-scientific-ai-begins/

What's interesting is not so much that a 1T-parameter model is being trained (if it's MoE, there have been bigger ones), but that it's not being done on Nvidia hardware. Could this finally be real competition?

"Argonne National Laboratory (ANL) is creating a generative AI model called AuroraGPT and is pouring a giant mass of scientific information into creating the brain.

The model is being trained on its Aurora supercomputer, which delivers more than half an exaflop of performance at ANL. The system has Intel’s Ponte Vecchio GPUs, which provide the main computing power."

...

"Brkic said its Ponte Vecchio GPUs outperformed Nvidia’s A100 GPUs in another Argonne supercomputer called Theta, which has a peak performance of 11.7 petaflops."
Image and text generation have been solid and mainstream for a while now, but music and video have lagged behind. Now DeepMind has taken on music:

https://deepmind.google/discover/blog/transforming-the-future-of-music-creation/
Fresh rumors: OpenAI has started working on GPT-5

https://twitter.com/rowancheung/status/1724079608054812684?t=3Fs3ELPj6JKQH6pcYSHZuw&s=19
Well, how about that!

"Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."

https://openai.com/blog/openai-announces-leadership-transition