<Selection of Investment Videos with Korean Translation>
1. Charlie Munger | Some advice that helps in living your life
- https://youtu.be/efgIt8MxZ4c
2. Sequoia Capital chairman Michael Moritz, insights on change and trends
- https://youtu.be/OwkaChNCVWE
3. Aswath Damodaran: How to value a company
- https://youtu.be/PI5YyNFPDZA
4. Vinod Khosla interview | Founder of Sun Microsystems and Khosla Ventures
- https://youtu.be/Hzfrdc221nc
5. Nassim Taleb: Take no one's advice; listen to and follow your inner voice
- https://youtu.be/DFyRT_M5f4k
6. Investing that beats the stock market | Peter Lynch
- https://youtu.be/ogNL5buu8JA
7. How Warren Buffett looks at the essence of things
- https://youtu.be/l39su7Ty9ek
8. Peter Thiel | "The Tech Curse"
- https://youtu.be/wzkZi6ROFy0
9. Warren Buffett and Charlie Munger | Life advice they leave us in 2023
- https://youtu.be/gSSzeMPpU3c
10. Google, Facebook, Twitter, PayPal: 'I invested based purely on the people'
- https://youtu.be/wDPZuH3Q_qw
YouTube
Charlie Munger | Some advice that helps in living your life
Charlie Munger, one of the people I personally most want to emulate... this is his 2007 commencement speech at USC Law School. His words can be hard to follow in places: he mixes in many examples and doesn't explain things by cleanly dividing topics or flow. But if you listen to each of his words and reflect on their meaning, I think they hold a great deal. Watching the video and thinking through his examples, you come to feel these are not simple remarks to hear once and move past.…
Forwarded from BZCF | 비즈까페
BZCF Telegram: https://news.1rj.ru/str/bzcftel
This is part of an email that Uniqlo chairman Tadashi Yanai wrote to employees in 2005. I keep it as a memo and reread it periodically.
2005 was when Chairman Yanai returned to Uniqlo. By then Uniqlo had become a moderately large company, and a 'good enough' attitude had spread among its people. 'Moderately' large means exactly that: it was not the global company Uniqlo is today, just 'moderately' big.
At that point he declared a 'second founding' and called for completely remaking what already existed. Twenty years on, it has become the first Asian company in the fashion industry to go truly global, one of a kind. Founding something from nothing is hard, but swapping out an already established culture must have been even harder. Rereading his book, this passage struck me, so I copied it out.
https://m.blog.naver.com/bizucafe/223128943351
NAVER
A company or an individual that does not grow is dead
At the start of 2005, I presented the policy of a 'second founding' to all of you. Last year, at the start of the year, I said...
theinformation.com/articles/why-youtube-could-give-google-an-edge-in-ai
Summarize this content and derive key insights as an AI-related startup founder and investor, via Perplexity's Copilot feature.
As an AI startup founder and investor, here are some key insights from the provided search results:
#1 Google's ownership of YouTube could give it an edge in AI. YouTube is the single biggest and richest source of imagery, audio, and text transcripts on the internet, and Google's researchers have been using it to develop its next large language model, Gemini. OpenAI has also secretly used data from YouTube to train some of its AI models.
#2 Open-source AI is gaining on Google and ChatGPT. Meta Platforms gave academics access to sophisticated machine-learning models that can understand conversational language, which they turned into open-source software that powered free alternatives to ChatGPT and other proprietary AI software. Free AI models are now "reasonably close" in performance to proprietary models from Google and ChatGPT creator OpenAI, and most software developers will eventually opt to use the free ones.
#3 Google and DeepMind are working together to develop software to compete with OpenAI. Known internally as Gemini, the joint effort began in recent weeks, after Google stumbled with Bard, its first attempt to compete with OpenAI's chatbot.
#4 Generative AI startups need their own data. Google, Amazon, Meta Platforms, and Microsoft are jostling to develop products that use generative AI. Startups such as Descript, which uses generative artificial intelligence to shortcut the laborious video-editing process, are trying to immunize themselves from looming big-tech competition.
#5 The European antitrust case against Google might or might not matter. The Europeans have concluded, in a preliminary way, that Google favors its own "online display advertising technology services to the detriment of competing providers". This is no small matter, as Google controls the tools most commonly used both by websites selling ad space and by marketers buying ad space across the internet. However, it remains to be seen whether this case will have a significant impact on Google's AI-related activities.
#6 Google is preparing to teach foreign languages through Google Search. The project, referred to internally as Tivoli, grew out of its Google Research unit and is likely to be rolled out later this year.
#7 The creator economy is facing both challenges and opportunities with AI. Dmitry Shapiro and Sean Thielen, the co-founders of a startup that developed Koji, a tool that offers link-in-bio landing pages to house creators' digital storefronts, are planning to focus on a new AI product they're developing called YouAI. However, venture capitalists such as Ben Mathews are predicting a wave of failures among creator-economy startups as they run out of cash.
The Information
Why YouTube Could Give Google an Edge in AI
Google last month upgraded its Bard chatbot with a new machine-learning model that can better understand conversational language and compete with OpenAI’s ChatGPT. As Google develops a sequel to that model, it may hold a trump card: YouTube. The video site…
Continuous Learning_Startup & Investment
https://youtu.be/rYVPDQfRcL0
AMD has revealed the MI300X chip, which has an industry-leading 192 GB of memory capacity and 5.2 TB per second of memory bandwidth, and is designed for generative AI. By reducing the number of GPUs required, it shortens development time, accelerates customers' time to market, lowers overall development costs, and makes deployment easier. The MI300A is currently sampling, while the MI300X and the eight-GPU Instinct platform will begin sampling in Q3, with production expected in Q4 of this year.
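As a rough illustration of why that 192 GB capacity translates into fewer GPUs (a back-of-the-envelope sketch of my own, not AMD's figures): model weights at 16-bit precision take about 2 bytes per parameter, so memory capacity alone bounds how many accelerators a model needs. The model sizes below are assumed examples.

```python
# Back-of-the-envelope: how many accelerators are needed just to hold
# a model's weights? (Weights only; activations and KV cache add more
# in practice, so treat these as lower bounds.)
import math

GPU_MEMORY_GB = 192        # MI300X capacity, per AMD's announcement
BYTES_PER_PARAM = 2        # FP16/BF16 weights: 2 bytes per parameter

def min_gpus(params_billions: float) -> int:
    """Minimum accelerators whose combined memory fits the weights."""
    weights_gb = params_billions * BYTES_PER_PARAM  # 2 GB per billion params
    return math.ceil(weights_gb / GPU_MEMORY_GB)

for size in (13, 70, 175):  # illustrative model sizes, in billions of parameters
    print(f"{size}B params ~ {size * BYTES_PER_PARAM} GB of weights -> "
          f"{min_gpus(size)} x 192 GB accelerator(s)")
```

On this arithmetic, a 70B-parameter model's FP16 weights (~140 GB) fit on a single 192 GB part, where an 80 GB accelerator would need at least two; that is the sense in which larger memory "reduces the number of GPUs required".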
Continuous Learning_Startup & Investment
https://youtu.be/ltQ9pbFukUo
Chris Lattner and Lex Fridman discuss the potential of large language models (LLMs) in programming, including their ability to predict and generate code. While LLMs can automate the mechanical aspects of coding, they are not a replacement for programmers but a helpful complementary tool. The discussion also covers the potential for LLMs to improve productivity and to help people learn different programming languages. Lattner notes that LLMs can be used for documentation and inspiration, but that building reliable systems at scale should rest on algebraic reasoning and on composing different nets to implement code, rather than on expensive LLMs.
Whenever I use an AI product that offers a great user experience, I ask myself: "How the F#🤬 did they do it?"
And if the product is open source, I dive deep into the code, the logic, and of course, the almighty prompt.
Today, I was exploring Bamboo AI, a tool that analyzes data using natural language processing.
Being the curious geek that I am, I dove deep into the prompt to see how it was engineered and to see what types of techniques I could pick up to make my own prompts better.
And in this tweet, I will do my best to reverse engineer the prompt into its building blocks.
Feel free to bookmark this tweet for later reference. I've broken down the prompt into simple pieces for you to replicate if you want.
PS: A (slightly reduced) snippet of the prompt is attached in the images for reference.
Here's what I found:
1. Role
The prompt starts by declaring a clearly defined role for the AI. Most prompts do this, as it has become standard best practice.
2. Goal
A clearly defined goal, on top of the role, lets the AI act in accordance with it. Nothing extraordinary about this prompt so far.
The problem is that most people stop crafting their prompts here, and then wonder why their outputs are useless and random more often than not.
3. Clearly defined input
The prompt clearly states what input the AI should expect to receive.
This part of the prompt is often overlooked, but I've found it greatly reduces the randomness of the output.
4. Clearly defined output
Similarly, clearly going over the expected output in minute detail will help steer the model in the exact direction that you want.
This will allow you to pinpoint exactly what it should do, and on top of that, will reduce the need for revisions.
Again, most people never even get to this point and then wonder why the AI never gets them right.
Of course it doesn't; it won't get you right if you haven't told it what to do.
5. Revisions
The prompt clearly states that revisions are to be expected and that the output probably won't be the final one.
Once again, it clearly states what can happen during actual use of the tool.
6. Input example
On top of clearly defining what input to expect, the prompt also shows an input example.
"Show, don't just tell" is a good principle to keep in mind when prompting.
This will greatly reduce the randomness of the model and make for more accurate outputs.
7. Output example
Showing examples of the output is equally important.
This will allow the model to pick up on the input -> output pattern and make its answers way more relevant, contextual and useful.
See a pattern here?
Clearly state what the AI should do and what to expect.
Don't leave it to chance if you want your outputs to be reliable and useful.
And now that we have reverse-engineered how this prompt works, you will hopefully have ideas on how to improve your own prompts.
I sure did.
https://twitter.com/Luc_AI_Insights/status/1668792631806050304?s=20
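To make the seven building blocks concrete, here is a minimal sketch of a prompt assembled from them. All of the wording, the `build_prompt` helper, and the data-analysis scenario are hypothetical stand-ins for illustration, not Bamboo AI's actual prompt.

```python
# A minimal sketch of the seven-part prompt structure described above.
# All wording is hypothetical -- it illustrates the pattern, not
# Bamboo AI's real prompt.

PROMPT_TEMPLATE = """\
# 1. Role
You are a data-analysis assistant that writes pandas code.

# 2. Goal
Answer the user's question about their dataset.

# 3. Clearly defined input
You will receive (a) the dataset's column names and
(b) a natural-language question about the data.

# 4. Clearly defined output
Respond with a single runnable Python snippet and no prose.

# 5. Revisions
If your code raises an error, you will be shown the traceback and
asked to revise, so your first answer need not be final.

# 6. Input example
Columns: ['region', 'revenue']; Question: "Total revenue by region?"

# 7. Output example
df.groupby('region')['revenue'].sum()

Columns: {columns}
Question: {question}
"""

def build_prompt(columns: list[str], question: str) -> str:
    """Fill the template's input slots with the actual task."""
    return PROMPT_TEMPLATE.format(columns=columns, question=question)

# Usage: assemble the prompt for a hypothetical dataset.
print(build_prompt(["product", "units"], "Which product sold the most?"))
```

The point of the structure is that each section removes a degree of freedom the model would otherwise fill in at random, which is exactly the "clearly state what to do and what to expect" pattern the tweet describes.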
I'm really inspired by ambitious projects that were built fast:
> The Eiffel Tower (2 yrs)
> Disneyland (366 d)
> Empire State Building (410 d)
> JavaScript (10 d)
> iPod (290 d)
> Amazon Prime (42 d)
> Git (17 d)
> Xerox Alto (120 d)
blog post by @patrickc: https://patrickcollison.com/fast
https://twitter.com/pwang_szn/status/1668921295457894401?s=20