I think this is mostly right.
- LLMs created a whole new layer of abstraction and profession.
- I've so far called this role "Prompt Engineer" but agree it is misleading. It's not just prompting alone, there's a lot of glue code/infra around it. Maybe "AI Engineer" is ~usable, though it takes something a bit too specific and makes it a bit too broad.
- ML people train algorithms/networks, usually from scratch, usually at lower capability.
- LLM training is becoming sufficiently different from ML because of its systems-heavy workloads, and it is splitting off into a new kind of role focused on very large-scale training of transformers on supercomputers.
- In numbers, there's probably going to be significantly more AI Engineers than there are ML engineers / LLM engineers.
- One can be quite successful in this role without ever training anything.
- I don't fully follow the Software 1.0/2.0 framing. Software 3.0 (imo ~prompting LLMs) is amusing because prompts are human-designed "code", but in English, and interpreted by an LLM (itself now a Software 2.0 artifact). AI Engineers simultaneously program in all 3 paradigms. It's a bit 😵💫
https://twitter.com/karpathy/status/1674873002314563584
[How can we build a 10x better product with AI? _ GitHub Copilot]
Is anyone here using Copilot? GitHub Copilot is an AI coding assistant developed by GitHub and OpenAI. Most people around me who have tried Copilot keep using it and seem quite satisfied.
I think Copilot's biggest strength is that it understands the context of the codebase I'm working in and recommends relevant suggestions in real time. How can features like this be built? Does the AI just figure everything out by itself?
I recently read Copilot Internals by Parth Thakkar, and I'm sharing what I learned plus a spoonful of my own thoughts.
There are roughly three secret sauces behind how Copilot understands the context of my code and answers in real time.
Secret sauce 1: Prompt engineering
- When the client sends a prompt (i.e., as you write code), it sends the relevant context to the AI model (Codex): a prefix (file path and snippets from related code/files), a suffix (context about where the generated code will be inserted), and PromptElementRanges (basic information that helps the prompt work well). A rough sketch of this assembly follows below.
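To make the prefix/suffix idea concrete, here is a hedged TypeScript sketch of how a Copilot-style client might assemble a prompt from ranked context elements. The element names, priorities, and the rough 2048-token budget are my assumptions for illustration, not Copilot's actual internals.

```typescript
// Hypothetical sketch of how a Copilot-style client might assemble a prompt.
// Element kinds, priorities, and the ~2048-token budget are illustrative assumptions.

interface PromptElement {
  kind: "pathMarker" | "similarFileSnippet" | "beforeCursor" | "afterCursor";
  text: string;
  priority: number; // higher-priority elements survive truncation first
}

const TOKEN_BUDGET = 2048;
const approxTokens = (s: string) => Math.ceil(s.length / 4); // crude token estimate

function buildPrompt(elements: PromptElement[]): { prefix: string; suffix: string } {
  // Keep the highest-priority elements until the budget runs out.
  const byPriority = [...elements].sort((a, b) => b.priority - a.priority);
  const kept = new Set<PromptElement>();
  let used = 0;
  for (const el of byPriority) {
    const cost = approxTokens(el.text);
    if (used + cost > TOKEN_BUDGET) continue;
    kept.add(el);
    used += cost;
  }
  // Prefix = everything that precedes the cursor, in original document order;
  // suffix = the code after the spot where the generated code will be inserted.
  const ordered = elements.filter((el) => kept.has(el));
  const prefix = ordered.filter((el) => el.kind !== "afterCursor").map((el) => el.text).join("\n");
  const suffix = ordered.filter((el) => el.kind === "afterCursor").map((el) => el.text).join("\n");
  return { prefix, suffix };
}

// Example: file path marker, a snippet from a similar file, the code above the cursor,
// and the code after the insertion point.
const { prefix, suffix } = buildPrompt([
  { kind: "pathMarker", text: "// Path: src/utils/date.ts", priority: 3 },
  { kind: "similarFileSnippet", text: "// Compare this snippet from src/utils/time.ts:", priority: 1 },
  { kind: "beforeCursor", text: "export function daysBetween(a: Date, b: Date) {", priority: 4 },
  { kind: "afterCursor", text: "}", priority: 2 },
]);
console.log({ prefix, suffix });
```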
Secret sauce 2: Model invocation
- Copilot calls the AI model through two channels: inline/ghost text and the Copilot panel.
- GitHub Copilot's inline/ghost-text interface uses a debouncing mechanism to speed up suggestions, reduce repeated model calls, adapt suggestions to the user's input, and cope with fast typing; a sketch of the debouncing idea follows below. The Copilot panel, in contrast, requests more samples and ranks the solutions using logprobs. Both interfaces run checks to avoid showing unhelpful completions.
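A small sketch of the debouncing idea for the inline/ghost-text channel: wait for a brief pause in typing before calling the model, and drop responses that later keystrokes have made stale. The 75 ms delay and the fake model are placeholders, not Copilot's real values.

```typescript
// Minimal debouncing sketch: only the latest request after a typing pause reaches the model.

type CompletionFn = (prompt: string) => Promise<string>;

function debounceCompletions(getCompletion: CompletionFn, delayMs = 75) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let cancelPrevious: (() => void) | undefined;

  return (prompt: string): Promise<string | null> =>
    new Promise((resolve) => {
      if (timer) clearTimeout(timer); // a new keystroke cancels the pending call
      cancelPrevious?.();             // resolve the superseded request with null
      cancelPrevious = () => resolve(null);
      timer = setTimeout(async () => {
        cancelPrevious = undefined;
        resolve(await getCompletion(prompt));
      }, delayMs);
    });
}

// Usage: `fakeModel` stands in for the real completion endpoint.
const fakeModel: CompletionFn = async (p) => `/* completion for: ${p.slice(-20)} */`;
const requestSuggestion = debounceCompletions(fakeModel);

requestSuggestion("function add(a").then((s) => console.log("stale:", s)); // cancelled -> null
requestSuggestion("function add(a, b) {").then((s) => console.log("ghost text:", s));
```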
Secret sauce 3: Telemetry
- GitHub Copilot learns from user interactions through telemetry and uses them to improve the product. This includes whether suggestions are accepted or rejected, whether accepted suggestions persist in the code, and code snippets captured within 30 seconds of a suggestion being accepted; users can opt out of this data collection for privacy. A sketch of the persistence check follows below.
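And a hedged sketch of the persistence part of that telemetry: some time after a suggestion is accepted, check whether most of it is still present in the document and report the result. The `readCurrentDocument` hook, the 50% overlap threshold, and the line-matching heuristic are illustrative assumptions, not Copilot's actual logic.

```typescript
// Hypothetical "does the accepted suggestion stick around?" check.

function stillPresent(accepted: string, document: string, minOverlap = 0.5): boolean {
  // Crude proxy: what fraction of the accepted lines still appear verbatim?
  const lines = accepted.split("\n").map((l) => l.trim()).filter(Boolean);
  if (lines.length === 0) return false;
  const surviving = lines.filter((l) => document.includes(l)).length;
  return surviving / lines.length >= minOverlap;
}

function scheduleAcceptanceCheck(
  acceptedSnippet: string,
  readCurrentDocument: () => string,
  report: (event: { retained: boolean }) => void,
  delayMs = 30_000, // roughly the 30-second window mentioned above
) {
  setTimeout(() => {
    report({ retained: stillPresent(acceptedSnippet, readCurrentDocument()) });
  }, delayMs);
}

// Usage with stand-ins for the editor buffer and the telemetry sink:
const buffer = "export function daysBetween(a: Date, b: Date) {\n  return Math.abs(+a - +b) / 86_400_000;\n}";
scheduleAcceptanceCheck(
  "return Math.abs(+a - +b) / 86_400_000;",
  () => buffer,
  (event) => console.log("telemetry:", event),
  1_000, // shortened for the demo
);
```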
What can founders who want to use AI to build 10x better products for customers learn from this?
- The model by itself is not enough to deliver value to customers. Understanding the model well, and having the engineering capability and fast iteration to apply it to the customer's problem, is what matters.
- Nobody can yet claim to know how to build great products with LLMs or AI. So there is an opportunity for startups that understand AI well and combine AI models with solid engineering to solve customer problems.
If you want more detail, see the link below. https://bit.ly/copilotinternal
If you have strong engineering experience and knowledge, or a deep understanding of AI models, and you are interested in creating 10x better value for customers, feel free to DM me or email minseok.kim0129@gmail.com and tell me which problems you have solved and which you want to solve next 🙏
Using today's AI services feels like the early days of the PC, the internet, and mobile: nobody knows yet which services will be valuable to users, so anything is worth trying. It's like how the early mobile era had the concept of LBS (Location Based Service), while today GPS is simply a basic capability of most mobile services.
In the end, I believe the teams that discover users' unchanging needs and keep building 10x better services by making good use of rapidly changing technology will build great products and great companies.
KIM MINSEOK's Notion on Notion
How does Copilot work?
A write-up studying an article that reverse-engineers GitHub Copilot, with translated/paraphrased notes and additional research.
Forwarded from 전종현의 인사이트
"Snowflake announced a new container service and a partnership with Nvidia to make it easier to build generative AI applications making use of all that data and running them on Nvidia GPUs."
https://techcrunch.com/2023/06/27/snowflake-nvidia-partnership-could-make-it-easier-to-build-generative-ai-applications/?utm_source=bensbites&utm_medium=newsletter&utm_campaign=ai-partnerships-acquisitions-and-funding&guccounter=1
TechCrunch
Snowflake-Nvidia partnership could make it easier to build generative AI applications
Snowflake has always been about storing large amounts of unstructured data in the cloud. With two recent acquisitions, Neeva and Streamlit, it will make it easier to search and build applications on top of the data. Today, the company announced a new container…
https://youtu.be/ajkAbLe-0Uk
Major Takeaways:
Product Differentiation: Perplexity AI focuses on providing accurate and trustworthy search results with citations, thereby positioning itself as a superior alternative to AI models like ChatGPT and Bard in terms of search accuracy. They differentiate themselves further by leveraging reasoning engines in combination with a well-ranked index of relevant content to generate quick and accurate answers.
Technology Utilization and Development: Perplexity AI's strategy relies on utilizing well-established AI models such as ChatGPT and Bard, but also developing their own models to address specific aspects of their product. This allows them to create a competitive and unique search experience. Moreover, the company orchestrates various components in their backend to ensure they work together efficiently and reliably.
Business Model and Advertising: The company considers advertising within a chat interface, which could provide relevant and targeted ads based on user profiles and queries, as a promising potential business model. The need for transparency and ethical advertising practices is emphasized.
AI Integration: The future vision for Perplexity AI involves the seamless integration of language models into everyday devices, which will enable natural conversations and immediate responses. The speaker acknowledges the existing limitations but expresses confidence in the continual advancements of the technology.
Data Quality and Training: The quality of training data is highlighted as a key factor in achieving higher levels of reasoning and intelligence in AI models. This is seen as a factor contributing to the lead of OpenAI in the AI market.
Open-source vs. Closed Models: The speaker discusses the implications of open-source models and closed models like Google and OpenAI, noting that the progress in the field depends on algorithmic efficiencies and talented researchers. The dynamics of this will be influenced by whether organizations continue to publish their techniques or opt to stay closed.
Lessons for AI Startup Founders:
Differentiation is Key: In a competitive field, providing a unique value proposition is crucial. This might involve creating more accurate or trustworthy results, or delivering them in a more efficient manner.
Leverage and Develop Technology: While it's beneficial to leverage established AI models, developing your own models to address specific aspects of your product can create a competitive edge.
Backend Efficiency: The success of your startup doesn't only rely on the end product but also how well the backend processes and components are orchestrated.
Ethical Business Practices: In implementing advertising or other monetization methods, maintaining transparency and ethical practices is essential to avoid the risk of alienating users.
Quality of Training Data: As an AI startup, the quality of your training data is paramount. Efforts should be made to curate high-quality data to achieve superior models.
Open Source vs. Closed Debate: The choice between operating with open-source models or closed ones can have implications on your company's future. Founders should consider the pros and cons of each, taking into account factors such as collaboration, progress speed, and knowledge sharing.
YouTube
No Priors Ep. 9 | With Perplexity AI’s Aravind Srinivas and Denis Yarats
With advances in machine learning, the way we search for information online will never be the same.
This week on the No Priors podcast, we dive into a startup that aims to be the most trustworthy place to search for information online. Perplexity.ai is a…
Based on the available data, the usage of ChatGPT in the selected countries is as follows:
1. United States: The United States accounts for 15.32% of the total audience using ChatGPT.
2. India: India accounts for 6.32% of the total audience using ChatGPT.
3. Japan: Japan accounts for 3.97% of the total audience using ChatGPT.
4. Canada: Canada accounts for 2.74% of the total audience using ChatGPT.
5. Other countries: The rest of the world accounts for 68.36% of visits to ChatGPT's website.
💁♂️ How to Play Long Term Games:
Systems > Goals
Discipline > Motivation
Trust > Distrust
Principles > Tactics
Writing > Reading
Vulnerability > Confidence
North Stars > Low Hanging Fruit
Trends > News
Habits > Sprints
Questions > Answers
Problems > Solutions
People > Projects
I think AI will change many things, from how games are made to their UI/UX.
AI models have evolved at a tremendous pace over the past few years. Based on the recent history of AI models and the research topics expected next, let's imagine the games of the future.
As Stable Diffusion models improve rapidly, all kinds of experiments are happening around game art. What would a good process for using AI in planning and producing game art look like?
If these two questions make you curious, please fill out the Google Form below 🙂
https://forms.gle/RFJjwqELL9juekP66
Google Docs
AGI Town in Seoul, Session 4 (Fri, June 23): presentation sign-up
I don't have to check Hacker News on a daily basis anymore! Thanks for the service!
https://share.snipd.com/show/a7f48397-d9ed-458a-9bda-51b504acddee
Snipd
Hacker News Recap
A podcast that recaps some of the top posts on Hacker News every day. This is a third-party project, independent from HN and YC. Text and audio generated using…
What era do we live in?
A wide range of AI tasks that used to take 5 years and a research team to accomplish in 2013, now just require API docs and a spare afternoon in 2023.
Not a single PhD in sight. When it comes to shipping AI products, you want engineers, not researchers.
Microsoft, Google, Meta, and the large Foundation Model labs have cornered scarce research talent to essentially deliver “AI Research as a Service” APIs. You can’t hire them, but you can rent them — if you have software engineers on the other end who know how to work with them. There are ~5000 LLM researchers in the world, but ~50m software engineers. Supply constraints dictate that an “in-between” class of AI Engineers will rise to meet demand.
Fire, ready, aim. Instead of requiring data scientists/ML engineers to do a laborious data collection exercise before training a single domain-specific model that is then put into production, a product manager/software engineer can prompt an LLM and build/validate a product idea before getting specific data to finetune.
Let’s say there are 100-1000x more of the latter than the former, and the “fire, ready, aim” workflow of prompted LLM prototypes lets you move 10-100x faster than traditional ML. So AI Engineers will be able to validate AI products say 1,000-10,000x cheaper. It’s Waterfall vs Agile, all over again. AI is Agile.
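For concreteness, here is a sketch (mine, not from the quoted essay) of what such a prompted-LLM prototype can look like: a single zero-shot classification call stands in for a model that would otherwise require a data collection and training cycle first. The label set, prompt wording, and model name are assumptions; OPENAI_API_KEY must be set in the environment.

```typescript
// Zero-shot "prototype model": one prompted API call instead of a trained classifier.

async function classifySupportTicket(ticket: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      temperature: 0,
      messages: [
        {
          role: "system",
          content:
            "Classify the support ticket into exactly one label: billing, bug, feature_request, other. Reply with the label only.",
        },
        { role: "user", content: ticket },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim();
}

// Ship this behind a feature flag, watch how users react, and only then decide whether
// the accumulated tickets and labels justify fine-tuning a cheaper dedicated model.
classifySupportTicket("I was charged twice this month").then(console.log);
```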
When something new appears, there is a period when no one can be an expert yet. Only the people who are interested pick it up, play with it, and talk to each other. Eventually the field matures and that window closes, once the barrier to entry has become much higher.
You are not too old to pivot into AI.
https://www.latent.space/p/not-old
www.latent.space
You Are Not Too Old (To Pivot Into AI)
Everything important in AI happened in the last 5 years and you can catch up
AI x Design: https://www.figma.com/blog/ai-the-next-chapter-in-design/
Does anyone know people with a design career who have both the skills and the interest? haha
Even with just five or so people, I think we could have a lot of interesting conversations!
Figma
AI: The Next Chapter in Design | Figma Blog
AI is more than a product, it’s a platform that will change how and what we design—and who gets involved.
We need to understand the function calling feature OpenAI recently announced (a quick sketch follows below).
https://www.latent.space/p/function-agents#details
www.latent.space
Emergency Pod: OpenAI's new Functions API, up to 75% Price Drop, 4x Context Length (w/ Simon Willison, Riley Goodside, Roie Schwaber…
Listen now | Leading AI Engineers from Scale, Microsoft, Pinecone, Huggingface and more convene to discuss the June 2023 OpenAI updates and the emerging Code x LLM paradigms. Plus: Recursive Function Agents!
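As mentioned above, a minimal sketch of the June 2023 function calling flow, hitting the chat completions endpoint directly with fetch so no particular SDK version is assumed. The `get_weather` function and its JSON schema are made up for illustration; OPENAI_API_KEY must be set.

```typescript
// Function calling sketch: declare a schema, let the model decide whether to call it.

const functions = [
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string", description: "City name" } },
      required: ["city"],
    },
  },
];

async function ask(question: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo-0613",
      messages: [{ role: "user", content: question }],
      functions,             // the model may answer with a structured function_call
      function_call: "auto", // let the model decide whether to call a function
    }),
  });
  const data = await res.json();
  const message = data.choices[0].message;

  if (message.function_call) {
    // Arguments arrive as a JSON string that the model filled in from the schema.
    const args = JSON.parse(message.function_call.arguments);
    console.log(`model wants ${message.function_call.name}(${JSON.stringify(args)})`);
    // Next step: run the real function, append its result as a `function` role message,
    // and call the API again so the model can produce the final answer.
  } else {
    console.log(message.content);
  }
}

ask("What's the weather like in Seoul right now?");
```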
Wow, he is making real money with two AI services: over $100K MRR combined.
📸 http://PhotoAI.com $62K MRR
🖼 http://InteriorAI.com $52K MRR
Photo AI
AI Video Generator & Image Generator by Photo AI
Generate photorealistic images and videos of people with AI. Take stunning photos of people with the first AI Photographer! Generate photo and video content for your social media with AI. Save time and money and do an AI photo shoot from your laptop or phone…
This GitHub repo is the best-organized collection of AI-related newsletters and podcasts I've found. Eureka!!!
https://github.com/swyxio/ai-notes/blob/main/Resources/Good%20AI%20Podcasts%20and%20Newsletters.md
GitHub
ai-notes/Resources/Good AI Podcasts and Newsletters.md at main · swyxio/ai-notes
notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references und...
A fun GitHub repo I found today. There's so much interesting stuff in it 🤣
The author runs an AI blog and hosts a podcast: https://latent.space/
AI notes: https://github.com/swyxio/ai-notes/tree/main
- Use cases
- Reading lists for beginners, intermediate, and advanced readers
- Communities
- People
- Reality & Demotivations
- Legal, Ethics, and Privacy
- Alignment, Safety
Good AI Podcasts and Newsletters: https://github.com/swyxio/ai-notes/blob/main/Resources/Good%20AI%20Podcasts%20and%20Newsletters.md
www.latent.space
Latent.Space | Substack
The AI Engineer newsletter + Top 10 US Tech podcast. Exploring AI UX, Agents, Devtools, Infra, Open Source Models. See https://latent.space/about for highlights from Chris Lattner, Andrej Karpathy, George Hotz, Simon Willison, Soumith Chintala et al! Click…