The integration and deployment of large language model (LLM)-based intelligent agents have been fraught with challenges that compromise their efficiency and efficacy. Among these issues are sub-optimal scheduling and resource allocation of agent requests over the LLM, difficulties in maintaining context during interactions between agent and LLM, and the complexities inherent in integrating heterogeneous agents with different capabilities and specializations. The rapid increase in agent quantity and complexity further exacerbates these issues, often leading to bottlenecks and sub-optimal utilization of resources. Motivated by these challenges, this paper presents AIOS, an LLM agent operating system, which embeds large language models into operating systems (OS). Specifically, AIOS is designed to optimize resource allocation, facilitate context switching across agents, enable concurrent execution of agents, provide tool services for agents, and maintain access control for agents. We present the architecture of such an operating system, outline the core challenges it aims to resolve, and provide the basic design and implementation of AIOS. Our experiments on concurrent execution of multiple agents demonstrate the reliability and efficiency of the AIOS modules. Through this, we aim not only to improve the performance and efficiency of LLM agents but also to pioneer better development and deployment of the AIOS ecosystem in the future. The project is open-source at this https URL.
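To make the scheduling and context ideas in the abstract concrete, here is a minimal sketch of a FIFO scheduler serving several agents' requests over one shared LLM. It is an illustration only: the names (AgentRequest, FIFOScheduler, llm_call) and the stubbed LLM backend are hypothetical and are not the AIOS API.

```python
# Hypothetical illustration of FIFO scheduling of agent requests over a shared LLM.
# AgentRequest, FIFOScheduler, and llm_call are illustrative names, not the AIOS API.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class AgentRequest:
    agent_id: str
    prompt: str
    context: dict = field(default_factory=dict)  # saved/restored on a context switch


def llm_call(prompt: str) -> str:
    # Stand-in for a real LLM backend; returns a canned response here.
    return f"response to: {prompt}"


class FIFOScheduler:
    """Serve agent requests one at a time so concurrent agents share one LLM."""

    def __init__(self) -> None:
        self.queue: deque[AgentRequest] = deque()

    def submit(self, request: AgentRequest) -> None:
        self.queue.append(request)

    def run(self) -> list[tuple[str, str]]:
        results = []
        while self.queue:
            req = self.queue.popleft()               # pick the next agent request
            req.context["last_prompt"] = req.prompt  # persist minimal per-agent context
            results.append((req.agent_id, llm_call(req.prompt)))
        return results


if __name__ == "__main__":
    sched = FIFOScheduler()
    sched.submit(AgentRequest("travel_agent", "Plan a 3-day trip"))
    sched.submit(AgentRequest("math_agent", "Solve 2x + 3 = 7"))
    for agent_id, reply in sched.run():
        print(agent_id, "->", reply)
```

A real scheduler would also handle priorities, preemption, and tool or access-control checks; the FIFO queue here only shows where those decisions would live.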
More important than being right or wrong is warmth of heart. Everyone is fighting their own battle, and knowing this, we should be kind to one another. Being a warm-hearted person matters far more than being right. When we learn to treat ourselves and others with compassion, genuine respect for ourselves and for others follows.
Continuous Learning_Startup & Investment
Idea #1: LLMs as Agents - LLMs have the potential to be powerful agents, defined as (1) choosing a sequence of actions to take, through reasoning/planning or hard-coded chains, and (2) executing that sequence of actions - @AndrewYNg and @hwchase17: some…
What's next for AI agentic workflows ft. Andrew Ng of AI Fund
Andrew Ng, founder of DeepLearning.AI and AI Fund, speaks at Sequoia Capital's AI Ascent about what's next for AI agentic workflows and their potential to significantly propel AI advancements—perhaps even surpassing the impact of the forthcoming generation…
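As a toy version of the "(1) choose a sequence of actions, (2) execute them" definition in the post above, here is a minimal sketch in which a hard-coded action chain stands in for LLM-driven planning; the tool registry and function names are hypothetical.

```python
# Hypothetical sketch of an agent as (1) choose a sequence of actions, (2) execute them.
from typing import Callable

# A tiny "tool" registry; real agents would expose search, code execution, etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "summarize": lambda text: text[:60] + "...",
    "uppercase": lambda text: text.upper(),
}


def plan(task: str) -> list[str]:
    # Hard-coded chain standing in for LLM-driven reasoning/planning.
    return ["summarize", "uppercase"]


def execute(task: str, actions: list[str]) -> str:
    result = task
    for action in actions:
        result = TOOLS[action](result)  # run each chosen action in order
    return result


if __name__ == "__main__":
    task = "Large language models can act as agents by planning and executing actions."
    print(execute(task, plan(task)))
```

Swapping the hard-coded plan for a model-generated one is what turns this fixed chain into the agentic workflow the video discusses.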
Excited to release DBRX, a 132-billion-parameter mixture-of-experts language model with 36 billion active parameters.
It’s not only a super capable model, but it also has many nice properties at inference time because of its MoE architecture. Long-context (up to 32K tokens), large-batch, and other compute-bound workloads will especially benefit from the sparsity win over similarly sized dense models. Instead of going through all the parameters in the model, only the active parameters need to be passed through: a FLOPs win that allows for high model quality without compromising on inference speed.
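A rough back-of-the-envelope reading of that sparsity win, assuming per-token forward compute scales with active parameters at roughly 2 FLOPs per parameter; the "similarly sized dense model" here is a hypothetical 132B dense baseline, not a real released model.

```python
# Rough illustration of the MoE sparsity win: per-token compute scales with
# active parameters, not total parameters. Figures are approximations.

TOTAL_PARAMS = 132e9   # DBRX total parameters (mixture of experts)
ACTIVE_PARAMS = 36e9   # parameters actually used per token
FLOPS_PER_PARAM = 2    # common rule of thumb for a forward pass


def forward_flops_per_token(params: float) -> float:
    return FLOPS_PER_PARAM * params


dense_cost = forward_flops_per_token(TOTAL_PARAMS)   # hypothetical equally sized dense model
moe_cost = forward_flops_per_token(ACTIVE_PARAMS)    # only active experts run per token

print(f"Dense ~{dense_cost:.2e} FLOPs/token, MoE ~{moe_cost:.2e} FLOPs/token")
print(f"Sparsity win: ~{dense_cost / moe_cost:.1f}x fewer FLOPs per token")
```

Under these assumptions the MoE model does roughly a quarter of the per-token compute of an equally sized dense model, which is why compute-bound workloads benefit most.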
Investing heavily in people is key to fostering the entrepreneurship that shapes our future.
Meta Pursues AI Talent With Quick Offers, Emails From Zuckerberg
Company has made job offers without interviewing candidates and relaxed its longstanding practice of not increasing compensation for employees threatening to leave.
To better compete for artificial intelligence researchers, Meta Platforms is making unconventional moves, including extending job offers to candidates without interviewing them and relaxing a longstanding practice of not increasing compensation for employees threatening to leave.
In a sign of how seriously the social media company is taking the competition for AI talent, CEO Mark Zuckerberg has personally written to researchers at Google’s DeepMind unit to recruit them, according to two people who viewed the emails. In some notes, Zuckerberg emphasized the importance of AI to Meta and said he hopes the recipient and the company will work together, one of those people said.
Meta’s intense efforts to recruit and retain employees come as it ramps up investment in AI and after several researchers who developed its large language models left for rivals, including DeepMind, OpenAI and French startup Mistral, two of whose founders came from Meta.
Zuckerberg’s interventions have helped with recruiting. In announcing his move to Meta as a principal Llama engineer in the generative AI group last week, former DeepMind researcher Michal Valko gave “massive thanks for a very personal involvement” to Meta’s senior AI leaders—and “Mark,” referring to Zuckerberg. Valko declined to comment. Zuckerberg typically isn’t involved in hiring individual contributors, a classification for most research scientists and engineers, a former employee said. Meta declined to comment.
Earlier this week, someone asked me about how poker has informed my view of business risk. In short, profoundly.
Poker is a fundamentally defensive game when played at an elite level. A defensive game doesn’t mean you can’t generate huge profits. In fact, poker can yield enormous profits but the way it happens is unintuitive to most.
Maximum profits in poker, and other defensive games for that matter, occur when your error rate is less than your opponent’s error rate.
So their errors - your errors = your profit. If you minimize your errors, you maximize your potential profit.
This simple formula forces you to learn that a lot of the time, the biggest enemy of your success is you. By managing yourself in a predictable, reliable way, you give your opponent time to beat themselves. This is true in poker, but it is even more true in business.
As an example, suppose you have an R&D budget and you’re trying to build a product. Once you have some initial product market fit, the most important thing to do is to allocate your remaining resources in a thoughtful way.
You should have many small bets that extend the product area. If any one of these fails, it won’t be life-threatening, and you will have learned something that reduces your future error rate. These small bets can then ladder into a few medium-sized bets, which ultimately lead to a few large bets. In such a process, you’ve not only taken many bets of various sizes, you’ve also done this over a long stretch of time. In that same time, a less organized competitor will eventually do something wrong, stupid, or both.
Said differently, you’ve de-risked your error rate in a thoughtful methodical way and have evidence that things are working while giving your competitor enough time to flail and eventually fail.
In so many companies that I’ve invested in and companies that I’ve worked for, I’ve seen enormous bets being made too early, and mostly out of ego. These bets are rarely rooted in data and most have eventually been rolled back.
The second thing to understand in poker is that when you make many small bets, you can play more hands - and some of these can lead to huge pots. Some of the biggest pots I’ve won have been with 2-2 and 8-6 suited, while some of the biggest pots I’ve lost have been with A-A!
In business, as in poker, you have to make unconventional bets if you want to win huge pots. And the no-brainer bets are rarely big winners and can sometimes come back and sting you. So as an investor, by keeping my bets small I keep my errors small while giving myself a chance to win big by doubling and tripling down at the right time.
Today we are excited to announce the next chapter of Bezi, a fundamental shift in designing for 3D apps and games.
Introducing Bezi AI ✨
Designers can now ideate at the speed of thought with an infinite asset library.
With Bezi AI, you can:
✨ Generate 3D assets within seconds using text prompts.
🛠️ Simply drag and drop assets into the Bezi editor.
👥 Collaborate in real-time on a 3D canvas.
Created by designers for designers, Bezi empowers you and your team to create faster, together.
Get started today 👉 bezi.com/ai