[paper][meta][large concept models]
“The current established technology of LLMs is to process input and generate output at the token level. This is in sharp contrast to humans who operate at multiple levels of abstraction, well beyond single words, to analyze information and to generate creative content. In this paper, we present an attempt at an architecture which operates on an explicit higher-level semantic representation, which we name a “concept”. Concepts are language- and modality-agnostic and represent a higher level idea or action in a flow”
https://scontent-ams4-1.xx.fbcdn.net/v/t39.2365-6/470149925_936340665123313_5359535905316748287_n.pdf?_nc_cat=103&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=_Kelt2jn-pkQ7kNvgFRhMPH&_nc_zt=14&_nc_ht=scontent-ams4-1.xx&_nc_gid=AsqaBwO9TftbIqsIF6KCPA3&oh=00_AYAAj36Wbvgp4TU0V0JPoyHGs-_FesxPYaEwDvdGZcbtNw&oe=676768D2
github: https://github.com/facebookresearch/large_concept_model
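As a rough sketch of the idea (my illustration, not the repo's code): sentences are first encoded into fixed-size "concept" embeddings (the paper uses the SONAR encoder), and a transformer is trained to predict the next concept embedding instead of the next token. Sizes and the plain MSE objective below are placeholder choices for brevity; the paper also explores diffusion-based and quantized variants.

```python
# Conceptual sketch of a Large Concept Model pipeline (not the official code).
# Input `concepts` is assumed to be a batch of precomputed sentence embeddings
# (e.g. from a SONAR-like encoder), shape (batch, seq, dim).
import torch
import torch.nn as nn

class ConceptPredictor(nn.Module):
    """Autoregressive transformer over sentence-level "concept" embeddings."""
    def __init__(self, dim=1024, layers=8, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, dim)   # regresses the next concept embedding

    def forward(self, concepts):
        seq = concepts.size(1)
        # causal mask: each concept only attends to previous concepts
        mask = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
        hidden = self.backbone(concepts, mask=mask)
        return self.head(hidden)

def training_step(model, concepts):
    # predict embedding t+1 from embeddings <= t (simple MSE stand-in objective)
    pred = model(concepts[:, :-1])
    return nn.functional.mse_loss(pred, concepts[:, 1:])
```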
[ai][genesis]
An amazing step forward in generative models. You can find multiple videos at the link:
https://genesis-embodied-ai.github.io/
👍1
[ai][llm]
Large scale multimodal agents society
“In this paper, taking e-commerce scenarios as an example, we present LMAgent, a very large-scale and multimodal agents society based on multimodal LLMs. In LMAgent, besides freely chatting with friends, the agents can autonomously browse, purchase, and review products, even perform live streaming e-commerce. To simulate this complex system, we introduce a self-consistency prompting mechanism to augment agents’ multimodal capabilities, resulting in significantly improved decision-making performance over the existing multi-agent system. Moreover, we propose a fast memory mechanism combined with the small-world model to enhance system efficiency, which supports more than 10,000 agent simulations in a society. Experiments on agents’ behavior show that these agents achieve comparable performance to humans in behavioral indicators.”
https://arxiv.org/pdf/2412.09237
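One concrete way to read the "small-world model" part of the abstract: agents are connected through a Watts-Strogatz small-world graph, so each agent only interacts with its neighbours and per-step cost stays manageable even at 10,000+ agents. A tiny hedged sketch of that structure (my illustration with networkx, not the paper's code; parameter values are assumptions):

```python
# Hedged sketch: small-world social graph for a large agent society.
import random
import networkx as nx

NUM_AGENTS = 10_000   # scale mentioned in the abstract
NEIGHBORS = 10        # each agent initially linked to its 10 nearest neighbours (assumed)
REWIRE_P = 0.1        # probability of rewiring an edge to a random agent (assumed)

# Watts-Strogatz graph: high clustering plus short average path length.
society = nx.watts_strogatz_graph(NUM_AGENTS, NEIGHBORS, REWIRE_P, seed=42)

def simulation_step(agent_id):
    """One toy step: an agent picks a few friends to chat with or observe."""
    friends = list(society.neighbors(agent_id))
    return random.sample(friends, k=min(3, len(friends)))

print(simulation_step(0))   # e.g. three neighbour ids of agent 0
```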
🌟 Happy New Year, everyone! 🌟
This year has been incredible, and I’m so grateful for each of you—our channel grew to 500+ amazing members! 🎉 Thank you for your support, engagement, and encouragement along the way.
I hope the articles, videos, and books shared here added value to your journey. Let’s make this new year even more inspiring and impactful together. Here’s to learning, growing, and achieving great things in 2025!
Cheers to a fantastic year ahead! 🥂✨Love you all 💛
❤15🎉4
[neural networks]
A great video on building a neural network from scratch
https://www.youtube.com/watch?v=w8yWXqWQYmU
YouTube
Building a neural network FROM SCRATCH (no Tensorflow/Pytorch, just numpy & math)
Kaggle notebook with all the code: https://www.kaggle.com/wwsalmon/simple-mnist-nn-from-scratch-numpy-no-tf-keras
Blog article with more/clearer math explanation: https://www.samsonzhang.com/2020/11/24/understanding-the-math-behind-neural-networks-by-building…
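For quick reference, a compressed numpy-only sketch in the same spirit as the video: a two-layer MLP for 784-dimensional inputs and 10 classes. Shapes, hyperparameters, and the column-major layout are my own choices, not a copy of the Kaggle notebook.

```python
# Two-layer neural net from scratch with numpy (X: (784, m), Y_onehot: (10, m)).
import numpy as np

def init_params(hidden=10):
    W1 = np.random.randn(hidden, 784) * 0.01
    b1 = np.zeros((hidden, 1))
    W2 = np.random.randn(10, hidden) * 0.01
    b2 = np.zeros((10, 1))
    return W1, b1, W2, b2

def relu(z):
    return np.maximum(z, 0)

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def forward(X, W1, b1, W2, b2):
    Z1 = W1 @ X + b1
    A1 = relu(Z1)
    Z2 = W2 @ A1 + b2
    return Z1, A1, softmax(Z2)

def backward(X, Y_onehot, Z1, A1, A2, W2):
    m = X.shape[1]
    dZ2 = A2 - Y_onehot                   # gradient of softmax + cross-entropy
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.mean(axis=1, keepdims=True)
    dZ1 = (W2.T @ dZ2) * (Z1 > 0)         # ReLU derivative
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.mean(axis=1, keepdims=True)
    return dW1, db1, dW2, db2

def train(X, Y_onehot, lr=0.1, steps=500):
    W1, b1, W2, b2 = init_params()
    for _ in range(steps):
        Z1, A1, A2 = forward(X, W1, b1, W2, b2)
        dW1, db1, dW2, db2 = backward(X, Y_onehot, Z1, A1, A2, W2)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2
```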
👍1
[ai][agents]
https://huyenchip.com//2025/01/07/agents.html
https://huyenchip.com//2025/01/07/agents.html
Chip Huyen
Agents
Intelligent agents are considered by many to be the ultimate goal of AI. The classic book by Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Prentice Hall, 1995), defines the field of AI research as “the study and design of rational…
[ai][deepseek][paper]
https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf
Model Summary
Architecture: Innovative Load Balancing Strategy and Training Objective
On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance. It can also be used for speculative decoding for inference acceleration.
Pre-Training: Towards Ultimate Training Efficiency
We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model.
Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap. This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
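As I read the report, the auxiliary-loss-free strategy keeps a per-expert bias that only influences which experts are selected in top-k routing, not the gating weights, and nudges that bias up for under-loaded experts and down for over-loaded ones. A rough numpy sketch of that mechanism (my paraphrase; the names, the sigmoid affinities, and the update speed GAMMA are assumptions, not the official implementation):

```python
# Hedged sketch of auxiliary-loss-free MoE load balancing (paraphrased idea only).
import numpy as np

NUM_EXPERTS, TOP_K, GAMMA = 8, 2, 0.001   # GAMMA = bias update speed (assumed)
bias = np.zeros(NUM_EXPERTS)

def route(affinity):
    """affinity: (tokens, experts) scores in [0, 1]. Returns (expert ids, gate weights)."""
    biased = affinity + bias                              # bias affects selection only
    idx = np.argsort(-biased, axis=1)[:, :TOP_K]          # top-k experts per token
    gates = np.take_along_axis(affinity, idx, axis=1)     # weights use the raw scores
    gates = gates / gates.sum(axis=1, keepdims=True)
    return idx, gates

def update_bias(idx):
    """After a batch: raise bias of under-used experts, lower it for over-used ones."""
    global bias
    load = np.bincount(idx.ravel(), minlength=NUM_EXPERTS)
    bias += GAMMA * np.sign(load.mean() - load)

tokens = 1 / (1 + np.exp(-np.random.randn(16, NUM_EXPERTS)))  # toy sigmoid affinities
experts, weights = route(tokens)
update_bias(experts)
```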
[llm] A super descriptive video from Andrej Karpathy
https://www.youtube.com/watch?v=7xTGNNLPyMI
YouTube
Deep Dive into LLMs like ChatGPT
This is a general audience deep dive into the Large Language Model (LLM) AI technology that powers ChatGPT and related products. It covers the full training stack of how the models are developed, along with mental models of how to think about their "psychology"…
[cs][memory allocator][from scratch]
https://arjunsreedharan.org/post/148675821737/memory-allocators-101-write-a-simple-memory
Tumblr
Memory Allocators 101 - Write a simple memory allocator
Code related to this article: github.com/arjun024/memalloc
This article is about writing a simple memory allocator in C.
We will implement malloc(), calloc(), realloc() and free().
This is a beginner...
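The article's implementation is in C on top of sbrk(); as a language-neutral illustration of the same first-fit free-list idea, here is a tiny Python simulation (a conceptual sketch only, not a real allocator):

```python
# Toy first-fit allocator simulation: blocks carry a size and a free flag,
# freed blocks are reused before the "heap" grows (the sbrk analogue).
heap = []   # list of block headers: {"size": int, "free": bool}

def malloc(size):
    """Return the index of a block with at least `size` bytes."""
    for i, block in enumerate(heap):
        if block["free"] and block["size"] >= size:   # first fit
            block["free"] = False
            return i
    heap.append({"size": size, "free": False})        # extend the heap
    return len(heap) - 1

def free(i):
    """Mark block i free so a later malloc() can reuse it."""
    heap[i]["free"] = True

a = malloc(100)
b = malloc(40)
free(a)
c = malloc(80)    # reuses the freed 100-byte block instead of growing the heap
assert c == a
```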
👍1