https://www.linkedin.com/posts/andrewchen_the-real-story-of-how-facebook-almost-acquired-activity-7076984976591753218-J8l7?utm_source=share&utm_medium=member_desktop
Excellent essay from Noam Bardin:
“The real story of how Facebook almost acquired Waze, but we ended up with Google”
https://lnkd.in/g76G8u-G
Lots of great learnings, summarized by chatGPT 😎
1. The co-founders of Waze established a valuation framework before entertaining acquisition offers. They decided to reject offers less than $750M and accept offers above $1B, but would consider proposals in the $750M-$1B range depending on the acquirer.
2. Waze approached potential strategic partners to help accelerate user acquisition, including Microsoft, Amazon, and Facebook, leading to potential acquisition discussions.
3. The founders established relationships with potential acquirers' product teams well in advance, providing a critical foundation for the acquisition process.
4. Initial negotiations with Google ended in a $450M offer which was rejected based on the pre-established valuation framework. This prompted backlash from the board but the founders remained firm.
5. Facebook offered to acquire Waze for $1B swiftly after being informed of a competing offer. Despite initial enthusiasm, the due diligence process revealed gaps and tension between the Waze and Facebook teams, leading to the deal falling through.
6. Following the leak of the Facebook deal, Google presented an unsolicited term sheet of $1.15B. Despite accusations of information leakage, Waze's fiduciary duty led them to consider the offer, leading to a fallout with Facebook.
7. With no counteroffer from Facebook, Waze accepted Google's offer and closed the transaction in eight days.
8. In hindsight, despite the potential financial benefits of a Facebook deal, the Waze co-founder believed Google was the right choice due to cultural fit, their commitment to Waze's independence, and Facebook's subsequent controversies.
9. The lessons learned included: building relationships with potential acquirers early, having a clear valuation framework, recognizing partnership discussions as catalysts for acquisition, understanding the personal nature of acquisitions, and being aware of the divergence in interests between founders and investors during an acquisition.
10. The final key lesson was understanding the power of negotiation, having a red line, and being willing to walk away to secure a better deal.
I like his point of view and learned a lot from the leader of one of the largest travel platforms.
https://youtu.be/aZ-BjJZxNoA
In this section, Airbnb CEO Brian Chesky discusses his company's approach to AI and how they plan to use it for personalization. Chesky explains that there are several large language models, or base models, which he compares to highways. On top of these base models, companies can build more personalized and tuned models based on their own customer data. Chesky's vision for Airbnb's use of AI involves building robust customer profiles to personalize travel recommendations and becoming the ultimate AI concierge for travelers. He explains that this will require designing unique AI interfaces beyond just text inputs and combining art and science to understand human psychology. In the short-term, Chesky plans to increase productivity by making his engineers 30% more efficient.
Airbnb CEO Brian Chesky discusses the importance of using productivity tools, such as Copilot and ChatGPT, to maximize productivity and efficiency. Chesky also delves into the need for unique interfaces that are custom-designed to meet the specific needs of each task. He adds that AI will be critical for personalizing the customer experience and improving the matching process in the future, allowing for authentic and unique experiences for each individual customer. However, Chesky acknowledges that there is also a risk of machines becoming so advanced that they become difficult to discern from humans, and that identity authentication will be a critical factor going forward.
Airbnb CEO Brian Chesky discusses the importance of brand authenticity and of building a robust personal profile through verifying customers’ identities. He also expresses excitement about the possibilities of AI matching users with delightful experiences, even things they didn’t know would make them happy. Chesky believes that AI will disrupt traditional business models, but also create millions of new startups as it becomes more accessible. He argues that trying to ban AI is like trying to ban electricity, and encourages people to view AI as a tool to be embraced rather than a threat to be feared.
Finally, he highlights the benefits of AI as a creative tool and the importance of approaching it that way. He discusses how AI helps him discover first principles in really interesting ideas, but also notes that we cannot yet know which new jobs will be created, because they have not been created yet. They also touch on how marketplaces like Airbnb, Etsy, and Uber allow people to create new careers for themselves, and how this will be incredible for society.
YouTube
Airbnb CEO Brian Chesky on early rejection, customer focus & AI’s future in hospitality | E1735
(0:00) Airbnb’s Brian Chesky joins Jason
(1:29) Brian’s experience with early rejection
(14:48) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist
(16:05) Airbnb’s company structure and focusing on product first
(25:26)…
Continuous Learning_Startup & Investment
Hello! AGI Town in Seoul is hosting a meetup this coming Friday at 6:30 PM near Yeoksam Station on the topic of "Applying AI in the Game/Entertainment Industry." 🏄‍♂️ If you've been wondering how AI is being applied in the game industry, or if you're an AI researcher/developer who has thought about or is keenly interested in applications in gaming, come join the meetup and discuss it with us! This meetup will cover the topics below. 🌟 조영조 from HYBE IM will…
Is there any team that could sponsor this Friday's offline meetup (sandwich/coffee costs)?
If your company or VC can offer a small sponsorship, please DM us to inquire. Sponsor logos will appear at the bottom of the slides used at the event, and we'll acknowledge the sponsorship before and after the event. If you're interested, please message @MatthewMinseokKim!
The LOVO (https://lovo.ai/) team will be sponsoring this meetup ❤️
Sponsorships are always welcome ❤️
The LOVO team is actively hiring data scientists and MLOps engineers; see their careers page for details 🤗
https://orbisailovo.notion.site/LOVO-db490c88a5384f778e913c614b7f6530
LOVO AI
LOVO: Free AI Voice Generator & Text to Speech
Award-winning AI Voice Generator and text to speech software with 500+ voices in 100 languages. Realistic AI Voices with Online Video Editor. Clone your own voice.
Forwarded from [충간지의 글로벌 의료기기/디지털 헬스 연구소]
We really are living in a historic moment.
https://www.linkedin.com/posts/jihyun-maria-lee-9a1270b5_ozempic-activity-7058080402103562241-n42b?utm_source=share&utm_medium=member_ios
Linkedin
Jihyun Maria Lee on LinkedIn: #ozempic
[The shakeup in global pharma market-cap rankings, ft. the magic drug]
The global pharma market-cap rankings seem to be approaching a historic moment. I suspect the moment when Johnson & Johnson, the long-immovable #1 by market cap, falls to #3 could come soon.
The chart below shows the past 10 years of the world's top-5 pharma companies by market cap, as of yesterday…
Forwarded from 전종현의 인사이트
An article from The Information reporting that OpenAI is preparing a marketplace where fine-tuned models can be bought and sold.
This could be far more powerful than plugins.
https://www.theinformation.com/articles/openai-considers-creating-an-app-store-for-ai-software?rc=jfxtml
The Information
OpenAI Considers Creating an App Store for AI Software
OpenAI—an early mover in releasing chatbots powered by large-language models—is contemplating another initiative to extend its influence in the world of artificial intelligence. The company is considering launching a marketplace in which customers could sell…
Are we at the beginning of a new era of small models? Here is our newest LLM trained fully in my team at Microsoft Research:
*phi-1 achieves 51% on HumanEval w. only 1.3B parameters & 7B tokens training dataset*
Any other >50% HumanEval model is >1000x bigger (e.g., WizardCoder from last week is 10x in model size and 100x in dataset size).
How did we achieve this? It can be summarized in 5 words:
*Textbooks Are All You Need*
https://lnkd.in/gFUJaafT
Can small, custom LLMs do the job? Another controversial, amazing paper, this time from MSFT Research. What's the secret? Textbook-quality data.
They describe phi-1, a new large language model specifically for Python coding that has only 1.3B parameters, was trained on only 7B tokens, and is claimed to achieve nearly SOTA accuracy on the HumanEval benchmark. They also claim that it "displays surprising emergent properties" after it is finetuned:
"We hypothesize that such high-quality data dramatically improves the learning efficiency of language models for code as they provide clear, self-contained, instructive, and balanced examples of coding concepts and skills"
Notice that while phi-1 does seem to perform well in evaluations, it is still a research model. It has trouble with variations in its prompts, and does not deal well with longer prompts. It's not going to compete with StarCoder or ChatGPT, so don't expect to make a new Flask app with it.
I could not find the model, so I can't evaluate it myself; if anyone knows how to get it, please post in the comments.
It seems that, like the Falcon models, having great data lets you do great things.
"Textbooks Are All You Need:" https://lnkd.in/g8YdiWMP
This time, Ralph Clark and I planned for a get together with our better halves, and got a chance to reminisce old times, and catch up on family and friends. Lots of wine too.
My relationship with this friend began when he handled finance for the first company Altos invested in back in 1996... He then continued as CFO/CEO of two more companies we invested in. He later moved to Seattle as CEO of a public company (one we didn't invest in), so we hadn't seen each other in a long time, but we bumped into each other on the street last week and ended up having dinner together.
We talked over a lot of old stories... the one that resonated most:
"I'm so glad I didn't have a half-baked success at a young age. People who mistakenly believe they succeeded because they were brilliant, and go through life expecting the same success again, look pitiful. The ones who figure they just got lucky and keep working diligently are doing well."
(For reference, the first company he did with us went public and reached a trillion-won valuation, then collapsed when the bubble burst... the second company sold at a price that didn't even return the investment... and the third sold at a good price. Even when a company fails (even at a loss), if you build trust in each other, the relationship continues like this.)
A small ball launched by open source - SAM.
Meta seems to keep reaping rewards from open source lately; beyond LLaMA, SAM is also spreading explosively.
Professor 김진성 introduced the paper "Segment Anything Model (SAM) for Radiation Oncology," and while looking into it I was genuinely surprised: since Meta announced SAM on April 5, the GitHub repo has already passed 35k stars, and the arXiv papers keep piling up.
Even medical image segmentation alone accounts for quite a few of them, SAM survey papers keep pouring out, and there are many GitHub repos curating lists. It seems fair to say the ecosystem has already solidified to some extent.
These are just the links I found in about 20 minutes. The power of open source is really something... #SAM
Awesome Segment Anything
https://github.com/Hedlen/awesome-segment-anything
Segment Anything Model (SAM) for Medical Image Segmentation.
https://github.com/YichiZhang98/SAM4MIS
Segment Anything Model (SAM) for Radiation Oncology
https://arxiv.org/abs/2306.11730
Segment Anything
https://arxiv.org/abs/2304.02643
Segment Anything Model for Medical Image Analysis: an Experimental Study
https://arxiv.org/abs/2304.10517
Segment Anything in Medical Images
https://arxiv.org/abs/2304.12306
SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More
https://arxiv.org/abs/2304.09148
SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model
https://arxiv.org/abs/2304.05396
When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation
https://arxiv.org/abs/2304.08506
Segment Anything Model for Medical Images?
https://arxiv.org/abs/2304.14660
SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM
https://arxiv.org/abs/2304.05622
SAM on Medical Images: A Comprehensive Study on Three Prompt Modes
https://arxiv.org/abs/2305.00035
Computer-Vision Benchmark Segment-Anything Model (SAM) in Medical Images: Accuracy in 12 Datasets
https://arxiv.org/abs/2304.09324
Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation
https://arxiv.org/abs/2304.12620
Zero-shot performance of the Segment Anything Model (SAM) in 2D medical imaging: A comprehensive evaluation and practical guidelines
https://arxiv.org/abs/2305.00109
Personalize Segment Anything Model with One Shot
https://arxiv.org/abs/2305.03048
How Segment Anything Model (SAM) Boost Medical Image Segmentation?
https://arxiv.org/abs/2305.03678
Customized Segment Anything Model for Medical Image Segmentation
https://arxiv.org/abs/2304.13785
Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation
https://arxiv.org/abs/2305.05803
Segment Anything Model (SAM) Meets Glass: Mirror and Transparent Objects Cannot Be Easily Detected
https://arxiv.org/abs/2305.00278
Segment Anything in High Quality
https://arxiv.org/abs/2306.01567
Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging
https://arxiv.org/abs/2304.04155
SAM3D: Zero-Shot 3D Object Detection via Segment Anything Model
https://arxiv.org/abs/2306.02245
DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image Segmentation
https://arxiv.org/abs/2306.00499
A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering
https://arxiv.org/abs/2306.06211
A Comprehensive Survey on Segment Anything Model for Vision and Beyond
https://arxiv.org/abs/2305.08196
GitHub
GitHub - Hedlen/awesome-segment-anything: Tracking and collecting papers/projects/others related to Segment Anything.
Tracking and collecting papers/projects/others related to Segment Anything. - Hedlen/awesome-segment-anything
"Textbooks Are All You Need" is making the rounds:
twitter.com/SebastienBubec…
reminding me of my earlier tweet :). TinyStories is also an inspiring read:
twitter.com/EldanRonen/sta…
We'll probably see a lot more creative "scaling down" work: prioritizing data quality and diversity over quantity, a lot more synthetic data generation, and small but highly capable expert models.
LoRA (Low-Rank Adaptation of Large Language Models), a parameter-efficient fine-tuning technique, has attracted a great deal of attention for its simple structure, strong results, and flexible extensibility. It was even mentioned in the leaked internal Google memo 'We Have No Moat, and Neither Does OpenAI'.
LoRA originated with language models, but since it can be applied anywhere matrix (and, naturally, higher-dimensional tensor) learning exists, it has also begun to be applied to diffusion models for image generation. (Simo Ryu was the first to graft it onto Stable Diffusion.)
Given how simple LoRA's structure is, I suspected variants would spring up, and sure enough, techniques like LoCon and LoHa are now popular in the image-generation scene. LoCon simply extends LoRA to convolutional layers, whereas LoHa is worth a closer look.
LoHa (LoRA with Hadamard Product Representation) was brought into the Stable Diffusion web UI ecosystem by Kohaku-Blueleaf, and the name is actually unofficial. The implementation is based on the 2021 POSTECH paper 'FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning' (https://arxiv.org/abs/2108.06098).
To summarize the paper's core idea briefly: instead of decomposing a weight update into a product of two low-rank matrices (the existing LoRA approach), it decomposes it into the Hadamard product of two pairs of low-rank matrix products. Why is this better? If the rank of a matrix reconstructed with LoRA is (at most) R, then with the same number of parameters the LoHa construction can achieve a rank of up to R squared. In other words, you get higher expressive power at the same model size, albeit with a little extra computation.
Many in the image-generation scene say LoHa is more effective than LoRA for style training, though this still hasn't been academically validated. The fun part is that this FedPara technique, now called LoHa, was originally created in the context of federated learning, unrelated to generative AI: it was designed to reduce per-device communication (parameter count) while losing as little model expressiveness as possible.
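The rank argument above can be checked numerically. Below is a minimal NumPy sketch, not the actual LoHa implementation; the matrix size n and factor rank r are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 16, 3  # n x n weight update; each factor pair has rank r

# Two pairs of low-rank factors, each pair costing 2*n*r parameters
B1, A1 = rng.standard_normal((n, r)), rng.standard_normal((r, n))
B2, A2 = rng.standard_normal((n, r)), rng.standard_normal((r, n))

# LoRA-style update: a single low-rank product, rank <= r
lora_update = B1 @ A1

# LoHa/FedPara-style update: Hadamard (elementwise) product of two
# low-rank products, rank <= r**2
loha_update = (B1 @ A1) * (B2 @ A2)

print(np.linalg.matrix_rank(lora_update))  # bounded by r
print(np.linalg.matrix_rank(loha_update))  # generically reaches r**2
```

At a matched parameter budget the comparison is LoRA with rank-2r factors (4nr parameters, update rank at most 2r) against LoHa built from two rank-r pairs (also 4nr parameters, update rank up to r²), which is the quadratic gain the FedPara paper argues for.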
Hello! A new post finally went up on the Finda tech blog, so I'm sharing it.
When structuring a company, you can organize around functions or around products;
Finda uses the latter, a product-centric organization.
Everyone takes ownership of the product (service) and works continuously as one team to satisfy the customers of the service they're responsible for.
To improve the service and make decisions quickly, each product team is cross-functional (PO, designer, BE/FE developers). We follow an "Empirical Process," retrospecting on and improving our own work quickly and regularly.
Each product organization flexibly adapts the Scrum framework used by agile organizations. One product team in particular has a very well-structured way of working; this post on "how we work" was written by Hyeong Rae Kim, a senior developer on that team. Recommended.
Thanks always to CEO 최성호 for reminding us of the "Product Principle."
https://medium.com/finda-tech/%EC%9A%B0%EB%A6%AC%EC%9D%98-%EA%B0%9C%EB%B0%9C%EB%AC%B8%ED%99%94%EB%8A%94-%EC%9D%B4%EB%A0%87%EA%B2%8C-%EC%84%B1%EC%9E%A5%ED%95%A9%EB%8B%88%EB%8B%A4-8f57b06ca549
Medium
This is how our engineering culture grows
Hello, I'm Hyeong Rae Kim, a backend developer on the Asset/Credit Management PT in FINDA's Cash Growth PG.
https://youtu.be/QWvrCuuFsjg?list=PLlrxD0HtieHjolPmqWVyk446uLMPWo4oP
Nvidia is also preparing infrastructure that lets companies process their own data to train large models, or to fine-tune existing large models. Beyond big tech like AWS, Google, and MS, startup players like MosaicML are surely eyeing this market too, so it's worth a closer look 😄
YouTube
Build Customize and Deploy LLMs At-Scale on Azure with NVIDIA NeMo | DISFP08
Enterprises need to customize LLMs or build one from scratch to take full advantage of Generative AI. This technical session covers how NVIDIA NeMo allows enterprises to build their custom language/image models or customize an existing foundation model and…
Similar approach from MS https://youtu.be/2meEvuWAyXs
YouTube
Build and maintain your company Copilot with Azure ML and GPT-4 | BRK211H
Large AI models and AI-embedded applications like ChatGPT are transforming the way we live and work, made possible by the confluence of advancements in big data, algorithms, and powerful AI supercomputers. Harnessing these technologies for real-world applications…