In 2008, Feng Ji was leading "Asura" at Tencent, which was, coincidentally, another Monkey King game.
The game eventually lost its way as the focus shifted from fun to revenue.
Feng Ji believed that games should be “fun first.”
In 2014, Feng Ji left with 7 colleagues to form Game Science.
They started with mobile games to pay the bills.
Their first two games ended up as complete flops.
Feng Ji held onto his dream of creating a single-player masterpiece. He was just waiting for the right moment.
By 2016, Steam's data showed 1/3 of its active users were from China.
Development skills had caught up, and Chinese gamers craved high-quality experiences.
It was clear the market was ready.
Feng Ji said, "We're willing to burn ourselves, but we're not moths flying to the fire."
League of Legends saved Feng.
He forged a close bond with Daniel Wu. They became late-night gaming buddies.
In 2017, Wu bet big on Feng Ji's vision, buying 20% of Game Science for $8.5M.
Despite losses on earlier projects, Wu kept faith.
A true ride-or-die partner.
Feng Ji hated the mobile game model of in-app purchases and endless monetization.
He believed it killed the true essence of gaming.
This led to the company splitting into two teams: one kept making mobile games, while Feng Ji moved on to pursue his single-player dream.
On February 25, 2018, Game Science took the plunge into AAA development.
The team went all in. They quit jobs, sold properties, and went 4-5 years without income.
Wu provided additional funding and contributed a "large chunk" of the $70M budget.
Black Myth: Wukong was born.
Feng Ji’s vision was clear: create a global game rooted in traditional Chinese culture.
- The team read the novel “Journey to the West” 100+ times.
- They visited countless cultural sites.
- They created 1.2 billion models for the Monkey King’s armor.
Authenticity was everything.
Creating China’s first AAA game was no joke:
The team grew from 7 to 30 but struggled to find the talent it needed.
They faced challenges adopting new technologies such as Unreal Engine.
Wu and Feng called themselves “two drowning rats.”
They were on the brink of failure.
August 2020 was the turning point.
A 13-minute gameplay trailer went viral.
2M views on YouTube, 25M on Bilibili.
• 10,000 job applications
• The team grew to 140 employees
• Tencent bought a 5% stake
What was meant to be a recruitment video became a global sensation.
"When you are at the peak of confidence, you are also staring at the valley of foolishness."
https://x.com/WillieChou/status/1832780019187228843
Forwarded from SNEW스뉴
😂 An old story that might encourage startup founders who get shot down by VCs day after day.
After Steve Jobs founded Apple in 1976, someone laid out in a detailed chart whom he met while raising his initial funding, how he got rejected, and who introduced him to whom until the money finally came through.
In it, one big-name VC after another turns down the deal for trivial reasons, names that surely spent a lifetime kicking themselves.
Even Steve Jobs was dismissed and rejected more than once or twice early on, so may this be a night when you console yourself that you're doing fine by comparison... ^^
And note: the first investment only happened because someone who declined to invest introduced another investor instead. So don't despair when you get rejected, and keep good relations even with the people who rejected you. As the Korean saying goes, "Check a fire again even if it looks out!"
1. Tom Perkins and Eugene Kleiner of Kleiner Perkins
Legends of the venture capital industry, yet they refused even to meet Jobs. They failed to recognize his vision.
2. Bill Draper
Draper judged Jobs and Wozniak to be arrogant and declined to invest, a decision that focused on personal character rather than the product's potential.
3. Pitch Johnson
He declined because he couldn't grasp the concept of a home computer. He reportedly asked, "To store recipes?"
4. Stan Veit
He turned down the chance to buy 10% of Apple for $10,000, because Jobs's appearance made him distrust him.
5. Nolan Bushnell
Bushnell, the founder of Atari, passed on the chance to buy a third of Apple for $50,000, but he introduced Jobs to Don Valentine.
6. Don Valentine
Valentine, the founder of Sequoia Capital, didn't invest himself, but he introduced Jobs to Mike Markkula.
7. Mike Markkula
Markkula invested $91,000 for 26% of Apple, becoming its first angel investor. He also persuaded Regis McKenna to take on Apple's marketing.
8. Regis McKenna
He took part in creating Apple's iconic logo.
9. Hank Smith
Smith of Venrock invested $300,000 for 10% of Apple.
This process means more than simple fundraising. Jobs's success shows the power of dogged persistence and networking. Through countless rejections he never gave up and kept knocking on doors, and in the end he met the people who understood his vision.
Not everyone has to understand your idea. What matters is to keep pushing, keep expanding your network, and find the right people who can share your vision.
https://www.facebook.com/share/p/yYhPJkn3gJofbDSh/?
https://openai.com/index/learning-to-reason-with-llms/
We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.
In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. We also evaluated o1 on GPQA diamond, a difficult intelligence benchmark which tests for expertise in chemistry, physics and biology. In order to compare models to humans, we recruited experts with PhDs to answer GPQA-diamond questions. We found that o1 surpassed the performance of those human experts, becoming the first model to do so on this benchmark. These results do not imply that o1 is more capable than a PhD in all respects — only that the model is more proficient in solving some problems that a PhD would be expected to solve.
Chain of Thought
Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem. Through reinforcement learning, o1 learns to hone its chain of thought and refine the strategies it uses. It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working. This process dramatically improves the model’s ability to reason. To illustrate this leap forward, we showcase the chain of thought from o1-preview on several difficult problems below.
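The self-correction loop this paragraph describes in prose (propose an answer, check it, switch strategy when the check fails) can be sketched as a toy program. This is only an analogy, not OpenAI's actual training method: the integer-square-root task and both strategies here are invented for illustration.

```python
# Toy analogy of "recognize mistakes and try a different approach":
# a solver proposes an answer, verifies it, and falls back to a more
# systematic strategy when verification fails.

def check(candidate, n):
    # Verifier: is `candidate` the integer square root of n?
    return candidate * candidate <= n < (candidate + 1) * (candidate + 1)

def quick_guess(n):
    # Strategy 1: a crude first guess that is often wrong.
    return n // 2

def binary_search(n):
    # Strategy 2: a systematic fallback that always succeeds.
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def solve(n):
    # Propose, verify, revise: the loop structure from the post,
    # with hand-written strategies standing in for learned ones.
    for strategy in (quick_guess, binary_search):
        candidate = strategy(n)
        if check(candidate, n):
            return candidate
    raise ValueError("no strategy succeeded")

print(solve(10))  # quick_guess proposes 5, fails the check; binary_search returns 3
```

The point is the control flow, not the arithmetic: per the post, o1 learns this propose-verify-revise behavior inside its chain of thought rather than having it hard-coded.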
Fine Tuning, Inference, and Orchestration. o1 represents a shift from ever-scaling pre-training to fine-tuning, inference, and orchestration, similar to how single web servers evolved into complex architectures. OpenAI's blog notes, “The constraints on scaling this approach differ substantially from those of LLM pretraining,” reinforcing this change.
STEM Strong. There's definite progress on the STEM front, as Kevin Scott notes. I asked about complex thermodynamics and carbon nanotube processes; the results reminded me of grading homework, where steps matter as much as answers. Coding improvements were notable, with 4o better at debugging and development. Previously, Google DeepMind had no peer in hard sciences, but OpenAI may now be entering their lane.
Results vs. Reasoning. There's much discussion about chain-of-thought to enhance reasoning. We asked questions requiring multiple reasoning steps. What does it mean when the result is correct but the reasoning is wrong? In the examples below, it got the right answer but the wrong reasoning, miscounting or involving cookies (though I appreciate a fresh cookie). Our CTO, Vibhu Mittal, noted this resembles System 1 and System 2 thinking. In humans, we'd expect System 2 to override System 1, but here it seems the opposite.
Latency is Thinking. AI response delays will now be seen as the AI thinking hard. Increased latency for GPT's advancement seems unavoidable due to the required multiple steps and non-parallelized orchestration. I appreciate clever UX touches like the phone vibrating as different processing aspects progress.
Language as UX. There's room to grow in language, important as AI becomes the interface to computation, knowledge, culture, and action. As Ethan Mollick notes, o1 is “not a better writer than GPT-4o”. It feels more like search results than dialog and collaboration. Perhaps that's intentional?
In developing our own GPT4-class model for enterprise AI transformation, we've valued enhancing inference and orchestration layers while prioritizing dialog and collaborative intelligence. I appreciate how GPTo1 expands AI towards STEM and look forward to exploring more. Meanwhile, to celebrate, we've sent a Strawberry Pi to OpenAI headquarters and hope they enjoy it!
Forwarded from 전종현의 인사이트
There's a team building AI bio models similar to Google's, and looking them up, they turn out to have raised $30M from Dimension, Thrive Capital, OpenAI, Conviction, and others. The founder is Joshua Meier, formerly of OpenAI.
As an aside, I recently received the book '알파폴드: AI 신약개발 혁신' (AlphaFold: Innovation in AI Drug Discovery) as a gift, and its contents look substantial. 바이오스펙테이터 (BioSpectator)'s books seem to be of really high quality.
https://www.chaidiscovery.com/blog/introducing-chai-1
Chai Discovery
Building frontier artificial intelligence to predict and reprogram the interactions between biochemical molecules.
To advance beyond the capabilities of today's models, we need spatially intelligent AI that can model the world and reason about objects, places, and interactions in 3D space and time.
We aim to lift AI models from the 2D plane of pixels to full 3D worlds - both virtual and real - endowing them with spatial intelligence as rich as our own. Human spatial intelligence evolved over millennia; but in this time of extraordinary progress, we see the opportunity to imbue AI with this ability in the near term.
World Labs was founded by visionary AI pioneer Fei-Fei Li along with Justin Johnson, Christoph Lassner, and Ben Mildenhall; each a world-renowned technologist in computer vision and graphics. We are bringing together the most formidable slate of pixel talent ever assembled - from AI research to systems engineering to product design - creating a tight feedback loop between our spatially intelligent foundation models and products that will empower our users.
Sam Zell
The legacy my parents left me was intelligence, curiosity, drive, resilience, and self-determination. They instilled in me a dedication to learning and how to apply it in real life, a willingness to challenge convention, knowing how to leave when others stay, and an understanding of how to recognize and prepare for risk.