Continuous Learning_Startup & Investment – Telegram
We journey together through the captivating realms of entrepreneurship, investment, life, and technology. This is my chronicle of exploration, where I capture and share the lessons that shape our world. Join us and let's never stop learning!
As social media matured, bringing the algorithmization of taste, and the COVID era arrived and loosened people's relationships, consumption shifted from synchronizing with the mainstream to synchronizing within subcultures. A world where you see only what you want to see.

https://blog.naver.com/emp930204/223393465471 #소비재 #ㅇㅈ
VoyagerX is an AI startup, yet I think AI itself is a serious threat to us. It is a serious matter for the company and for every member of it.
LG Electronics was good at making phones but failed to respond to smartphones; Samsung Electronics was good at making semiconductors but is struggling because it failed to respond properly to AI semiconductors.
VoyagerX is an AI startup, but if we fail to respond properly to AI, we will be in serious trouble. The same goes for every AI startup, for every software company, even for Google and Apple.
Companies outside AI even more so. Gaming and shopping, obviously, but also manufacturing like semiconductors, automobiles, and shipbuilding; fashion, film, music... healthcare, defense, education. In short, *every* industry will change.
And if that is true of companies and organizations in every industry, it won't stop at companies. Workers in every occupation will be enormously affected as well. "Enormous" hardly captures it.
The companies, organizations, and individuals who at least know this are the fortunate ones, because they can at least take the time to prepare.
The AI revolution has not even begun, and a phrase like "industrial revolution" cannot contain it.
Elon's email ordering that, simultaneously with the launch of the one-month free FSD trial, an FSD demo be made a hard requirement at every vehicle delivery. I see this as an inflection point 🤔

#TSLA #테슬라
Neuralink's patient plays chess with mind 🧠 

Neuralink, Elon Musk's venture into brain-computer interfaces (BCI), showcased a landmark achievement by enabling Noland Arbaugh, a paralyzed 29-year-old, to play chess and Civilization using only his thoughts. The live demonstration of Arbaugh's interaction with the BCI highlighted the device's capability to interpret brain signals into movements, marking a significant stride towards enhancing the quality of life for individuals with disabilities. Despite prior scrutiny, Neuralink's progress, alongside its competitors, underscores the evolving landscape of BCI technology, awaiting FDA approval to potentially transform the lives of those living with paralysis.
Idea #1: LLMs as Agents
- LLMs have the potential to be powerful agents, defined as (1) choosing a sequence of actions to take (through reasoning/planning or hard-coded chains) and (2) executing that sequence of actions
- @AndrewYNg and @hwchase17: some of the capabilities of agents are fairly well understood and robust (tool use, reflection, chaining together actions in @LangChainAI) and are already entering production applications, while other capabilities are more emergent (planning/reasoning, multiple agents, memory)
- Examples: @zapier or @glean for actions, @cognition_labs for reasoning
- @karpathy made an elegant prediction that self-contained agents spinning up & down is roughly where we are headed in the manifestation of AGI into the different nooks and crannies of the economy.
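The two-part definition in the first bullet can be sketched as a minimal loop. Everything here (`call_llm`, the `TOOLS` registry, the `FINISH:` convention) is a hypothetical stand-in, not the API of any specific framework:

```python
# Minimal agent-loop sketch: the model repeatedly (1) chooses an action
# and (2) executes it, until it decides to stop.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns an action string."""
    return "FINISH: done"

TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "echo": lambda text: text,
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        action = call_llm("\n".join(history))   # (1) choose an action
        if action.startswith("FINISH:"):
            return action.removeprefix("FINISH:").strip()
        name, _, arg = action.partition(" ")
        observation = TOOLS[name](arg)          # (2) execute it, observe result
        history.append(f"Action: {action}\nObservation: {observation}")
    return "gave up"
```

A real implementation would stream the history back to the model each turn; the loop structure is the point.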

Idea #2: Planning & Reasoning
- Planning & reasoning was a major emphasis at our event and a close cousin of the "agents" topic; it is the most critical and most actively researched sub-topic of agents.
- If you make the comparison to AlphaGo and other AI gameplay, Step 1 (pre-training/imitation learning) only takes you so far, while Step 2 (reinforcement learning, Monte Carlo tree search, value function iteration) is what actually made those AIs superhuman.
- A similar analogy holds for LLMs, where we have only done pre-training but haven’t added the inference-time compute to reach superhuman level performance.
- These insights are not solely lessons from gameplay; they are sweeping lessons from 70 years of AI research. The two methods that scale arbitrarily and generally are search and learning (see Richard Sutton's The Bitter Lesson). Search remains under-explored relative to its importance.
Exciting times for AI research...
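The Step 1 vs Step 2 contrast can be made concrete with a deliberately tiny toy, under the assumption that the "learned policy" is a greedy heuristic and "inference-time search" is plain breadth-first search. The game: reach a target number from 0 using +1 or *2 moves in as few steps as possible:

```python
from collections import deque

def greedy(target: int) -> int:
    """Imitation-style policy: double when it doesn't overshoot, else +1."""
    x, steps = 0, 0
    while x != target:
        x = x * 2 if 0 < x and x * 2 <= target else x + 1
        steps += 1
    return steps

def search(target: int) -> int:
    """Inference-time search: exhaustive BFS finds a shortest move sequence."""
    seen, frontier = {0}, deque([(0, 0)])
    while frontier:
        x, steps = frontier.popleft()
        if x == target:
            return steps
        for nxt in (x + 1, x * 2):
            if nxt <= target and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + 1))
    return -1

print(greedy(10), search(10))  # greedy takes 6 moves, search finds 5
```

The policy alone is decent; spending extra compute at inference time strictly improves it, which is the analogy being drawn for LLMs.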

Idea #3: Practical AI Use in Production
- Smaller/cheaper/but still "pretty smart" models were a consistent theme at our event. The most salient customer example was @ServiceNow, which is focused on cost and on holding gross margins, and has been running 10x-smaller models on A100s.
- In addition, we discussed speed/latency, expanding context windows/RAG, AI safety, interpretability, and the rise of the CIO as the key buyer of AI that makes enterprises more efficient internally.

Idea #4: What to Expect from the Foundation Model Companies
- Bigger smarter models
- Less big, less expensive, pretty smart models
- More developer platform capabilities
- Different focus areas: @MistralAI on developer and open-source, @AnthropicAI on the enterprise

Idea #5: Implications for AI Startups!
- The model layer is changing rapidly and getting better, faster, cheaper
- Smart: focus on building applications that will get better as the models get smarter
- Not smart: don't spend time patching holes in the current models; those holes will disappear as the models themselves improve, so this is not a durable place to build a company

Assume that the foundation models will get smarter and cheaper, and that all the little nuances (latency, etc.) will smooth over... what great applications can you build?

I feel lucky to be part of such an exceptional AI community (an AI "coral reef," as @karpathy would say), and grateful for how information-dense the exchange of ideas was.

https://x.com/sonyatweetybird/status/1772356006481420447?s=46&t=h5Byg6Wosg8MJb4pbPSDow
The integration and deployment of large language model (LLM)-based intelligent agents have been fraught with challenges that compromise their efficiency and efficacy. Among these issues are the sub-optimal scheduling and resource allocation of agent requests over the LLM, the difficulty of maintaining context during interactions between agent and LLM, and the complexities inherent in integrating heterogeneous agents with different capabilities and specializations. The rapid increase in agent quantity and complexity further exacerbates these issues, often leading to bottlenecks and sub-optimal utilization of resources. Inspired by these challenges, this paper presents AIOS, an LLM agent operating system, which embeds large language models into the operating system (OS). Specifically, AIOS is designed to optimize resource allocation, facilitate context switching across agents, enable concurrent execution of agents, provide tool services for agents, and maintain access control for agents. We present the architecture of such an operating system, outline the core challenges it aims to resolve, and provide the basic design and implementation of AIOS. Our experiments on concurrent execution of multiple agents demonstrate the reliability and efficiency of the AIOS modules. Through this, we aim not only to improve the performance and efficiency of LLM agents, but also to pioneer a better ecosystem for developing and deploying AIOS in the future. The project is open-source at this https URL.
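One way to picture the scheduling and context-switching ideas the abstract describes is a round-robin scheduler that time-slices agent requests over a single shared LLM, saving each agent's context between turns. All names here (`AgentContext`, `llm_step`, `schedule`) are illustrative, not the actual AIOS design:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    name: str
    remaining_steps: int
    history: list[str] = field(default_factory=list)  # saved across switches

def llm_step(ctx: AgentContext) -> str:
    """Placeholder for one LLM inference step on behalf of this agent."""
    return f"{ctx.name}: step {len(ctx.history) + 1}"

def schedule(agents: list[AgentContext]) -> list[str]:
    log, ready = [], deque(agents)
    while ready:
        ctx = ready.popleft()        # context switch in
        out = llm_step(ctx)          # run one time slice on the shared LLM
        ctx.history.append(out)      # save context before switching out
        log.append(out)
        ctx.remaining_steps -= 1
        if ctx.remaining_steps > 0:
            ready.append(ctx)        # re-queue unfinished agents
    return log

log = schedule([AgentContext("A", 2), AgentContext("B", 1)])
# interleaved execution: A, B, A
```

Time-slicing is what lets many agents make progress concurrently instead of one agent monopolizing the model.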
More important than being right or wrong is warmth of heart. Everyone is fighting their own battle, and knowing this, we should be kind to one another. Becoming a warm-hearted person matters far more than being right. True respect for yourself and for others comes when you can treat both yourself and others with compassion.
That is one of the works I would like to do someday.
Make American dream again!
Excited to release DBRX, a 132-billion-parameter mixture-of-experts language model with 36 billion active parameters.

It's not only a super capable model; it also has many nice properties at inference time because of its MoE architecture. Long context (up to 32K tokens), large batch sizes, and other compute-bound workloads will especially benefit from the sparsity win over similarly sized dense models. Instead of passing through all the parameters in the model, only the active parameters need to be used: a FLOPs win that allows high model quality without compromising inference speed.
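A back-of-the-envelope version of that sparsity win, using the parameter counts from the announcement and the standard approximation of ~2 FLOPs per active parameter per generated token:

```python
# DBRX: 132B total parameters, 36B active per token. Per-token compute
# scales with the *active* count, roughly like a 36B dense model.
total_params = 132e9
active_params = 36e9

flops_per_token_moe = 2 * active_params    # only active experts run
flops_per_token_dense = 2 * total_params   # a dense 132B model runs everything

speedup = flops_per_token_dense / flops_per_token_moe
print(f"~{speedup:.1f}x fewer FLOPs per token than a dense 132B model")
```

So the model decodes with roughly the compute cost of a 36B dense model while drawing on 132B parameters of capacity.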