[leadership][it’s not ai]
It’s great to see how leaders articulate their approach to getting things done, and how they carry themselves when a product has to succeed. Some of it you can read between the lines. A 3-minute read, and it’s worth it.
https://www.notion.so/blog/5-principles-for-effective-ai-leadership-without-deep-expertise
Notion
5 principles for effective AI leadership without deep expertise
In leadership roles, especially technical-leadership roles, there are few subjects you will be asked about more often than AI. But what if, like me until recently, you have lots of technical experience but have yet to dive meaningfully into AI development?
[data structures][paper]
Cache-Oblivious Algorithms and Data Structures
https://erikdemaine.org/papers/BRICS2002/paper.pdf
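To make the idea concrete, here’s a minimal sketch (mine, not from the paper) of the classic cache-oblivious matrix transpose: recursing on the larger dimension yields blocks that fit in cache at every level of the memory hierarchy, with no cache parameters in the code.
```python
# Cache-oblivious matrix transpose: divide and conquer on the larger
# dimension. No block size is tuned to the cache; the recursion itself
# produces cache-friendly subproblems at every scale. The `threshold`
# base case is a practical cutoff (the textbook version recurses to 1x1).
import numpy as np

def co_transpose(src, dst, threshold=16):
    n, m = src.shape
    if n <= threshold and m <= threshold:
        dst[:, :] = src.T                      # small block: transpose directly
    elif n >= m:
        half = n // 2                          # split the rows of src
        co_transpose(src[:half, :], dst[:, :half], threshold)
        co_transpose(src[half:, :], dst[:, half:], threshold)
    else:
        half = m // 2                          # split the columns of src
        co_transpose(src[:, :half], dst[:half, :], threshold)
        co_transpose(src[:, half:], dst[half:, :], threshold)

a = np.arange(12.0).reshape(3, 4)
b = np.empty((4, 3))
co_transpose(a, b)
assert (b == a.T).all()
```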
[genAI][clone your c-lvl]
The promise of human behavioral simulation—general-purpose computational agents that replicate human behavior across domains—could enable broad applications in policymaking and social science. We present a novel agent architecture that simulates the attitudes and behaviors of 1,052 real individuals—applying large language models to qualitative interviews about their lives, then measuring how well these agents replicate the attitudes and behaviors of the individuals that they represent. The generative agents replicate participants' responses on the General Social Survey 85% as accurately as participants replicate their own answers two weeks later, and perform comparably in predicting personality traits and outcomes in experimental replications.
https://arxiv.org/pdf/2411.10109
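The core loop, as I read the abstract, is easy to sketch: condition an LLM on a participant’s full interview transcript, then ask it to answer each survey item in that person’s voice. A hypothetical sketch, where `query_llm` stands in for any chat-completion client; none of this is the authors’ code:
```python
# Hypothetical sketch of the paper's core idea: an "agent" is just an LLM
# conditioned on one participant's qualitative interview. `query_llm` is a
# placeholder for any chat-completion client, not a real API.
def simulate_response(interview_transcript: str, survey_question: str,
                      query_llm) -> str:
    prompt = (
        "Below is an interview with a study participant.\n\n"
        f"{interview_transcript}\n\n"
        "Answer the following survey question exactly as this person would, "
        "replying with only the chosen option.\n\n"
        f"Question: {survey_question}"
    )
    return query_llm(prompt)
```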
[video][motivation]
While this channel is supposed to be purely technical, I also read, listen to, and watch resources about life and choices. Sometimes we don’t get what we thought we would and get stressed out, even though we put so much into making things better.
I’d like to share one video that’s worth watching (or just listening to):
https://youtu.be/3iMc8uF46C0?si=suiCgH4lwmyRA60A
YouTube
I'm 40. If You're In Your 20's or 30's, Watch This
👍4❤1
[ai][paper]
“Enabling effective collaboration among LLMs is a crucial step toward developing autonomous systems capable of solving complex problems. Although LLMs are typically used as single-model generators, where humans critique and refine their outputs, the potential for jointly trained collaborative models remains largely unexplored. Despite promising results in multi-agent communication and debate settings, little progress has been made in training models to work together on tasks. In this paper, we present a first step towards ‘Multi-agent LLM training’ (MALT) on reasoning problems. Our approach employs a sequential multi-agent setup with heterogeneous LLMs assigned specialized roles: a generator, verifier, and refinement model iteratively solving problems. We propose a trajectory-expansion-based synthetic data generation process and a credit assignment strategy driven by joint outcome-based rewards. This enables our post-training setup to utilize both positive and negative trajectories to autonomously improve each model’s specialized capabilities as part of a joint sequential system. We evaluate our approach on MATH, GSM8k, and CSQA, where MALT using Llama 3.1 8B models achieves relative improvements of 14.14%, 7.12%, and 9.40% respectively over the same baseline model. This demonstrates an early advance in multi-agent cooperative capabilities for performance on mathematical and common sense reasoning questions. More generally, our work provides a concrete direction for research around multi-agent LLM training approaches.”
https://arxiv.org/abs/2412.01928
arXiv.org
MALT: Improving Reasoning with Multi-Agent LLM Training
Large Language Models (LLMs) often produce answers with a single chain-of-thought, which restricts their ability to explore reasoning paths or self-correct flawed outputs in complex tasks. In this...
👍1
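The inference side is easy to picture. A rough sketch of the sequential generator → verifier → refiner setup as I read the abstract (the role functions are placeholders, not the authors’ implementation):
```python
# Sketch of MALT's sequential three-role pipeline (my reading of the
# abstract): a generator drafts an answer, a verifier critiques it, and a
# refiner revises it. Each role is a separately specialized LLM; training
# then credit-assigns joint outcome-based rewards back through this chain.
def malt_answer(question, generator, verifier, refiner):
    draft = generator(question)                # propose a solution
    critique = verifier(question, draft)       # check and criticize it
    return refiner(question, draft, critique)  # revise given the critique
```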
[book][Algorithms for Modern Hardware]
This is an upcoming high-performance computing book titled “Algorithms for Modern Hardware” by Sergey Slotin.
Its intended audience is everyone from performance engineers and practical algorithm researchers to undergraduate computer science students who have just finished an advanced algorithms course and want to learn more practical ways to speed up a program.
All book materials are hosted on GitHub, with code in a separate repository.
https://en.algorithmica.org/
🔥4
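In the spirit of the book, here’s a tiny self-contained experiment (mine, not from the book, which uses C++): summing a large NumPy matrix row by row reads memory contiguously and is typically several times faster than the strided column-by-column traversal, even though both do the same arithmetic.
```python
# Traversal order vs. cache behaviour: NumPy arrays are row-major (C order)
# by default, so a[i, :] is contiguous in memory while a[:, j] is strided.
import time
import numpy as np

a = np.random.rand(4096, 4096)

t0 = time.perf_counter()
row_sum = sum(a[i, :].sum() for i in range(a.shape[0]))  # contiguous reads
t1 = time.perf_counter()
col_sum = sum(a[:, j].sum() for j in range(a.shape[1]))  # strided reads
t2 = time.perf_counter()

print(f"row-major: {t1 - t0:.3f}s, column-major: {t2 - t1:.3f}s")
```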
[system programming][baseline]
What every systems programmer should know about concurrency, in 12 pages:
https://assets.bitbashing.io/papers/concurrency-primer.pdf
🔥3
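The primer itself is about C/C++ atomics and memory ordering, but the lost-update race it starts from can be sketched in any language. A Python illustration (mine; note that CPython’s GIL masks, but does not eliminate, this kind of race):
```python
# A read-modify-write like `counter += 1` is three steps (load, add, store);
# two threads interleaving those steps lose increments. A lock restores
# atomicity. Whether the unsafe version actually loses counts depends on
# the CPython version and thread-switching behaviour.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # not atomic: load, add, store

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # serializes the read-modify-write
            counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("total:", counter)      # may be < 400000 when increments are lost
```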
[paper][meta][large concept models]
“The current established technology of LLMs is to process input and generate output at the token level. This is in sharp contrast to humans who operate at multiple levels of abstraction, well beyond single words, to analyze information and to generate creative content. In this paper, we present an attempt at an architecture which operates on an explicit higher-level semantic representation, which we name a “concept”. Concepts are language- and modality-agnostic and represent a higher level idea or action in a flow”
https://scontent-ams4-1.xx.fbcdn.net/v/t39.2365-6/470149925_936340665123313_5359535905316748287_n.pdf?_nc_cat=103&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=_Kelt2jn-pkQ7kNvgFRhMPH&_nc_zt=14&_nc_ht=scontent-ams4-1.xx&_nc_gid=AsqaBwO9TftbIqsIF6KCPA3&oh=00_AYAAj36Wbvgp4TU0V0JPoyHGs-_FesxPYaEwDvdGZcbtNw&oe=676768D2
github: https://github.com/facebookresearch/large_concept_model
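The generation loop is worth sketching to make the abstract concrete. This is my paraphrase of the idea; the paper builds on SONAR sentence embeddings, and the function names below are placeholders, not the released API:
```python
# Concept-level generation as I read the abstract: map sentences into a
# language- and modality-agnostic embedding space, autoregress in that
# space, then decode the predicted embedding back to text. `encode`,
# `predict_next`, and `decode` are hypothetical placeholders.
def generate_next_sentence(context_sentences, encode, predict_next, decode):
    concepts = [encode(s) for s in context_sentences]  # sentence -> vector
    next_concept = predict_next(concepts)              # autoregress over concepts
    return decode(next_concept)                        # vector -> sentence
```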
[ai][genesis]
An amazing step forward for generative models. You can find multiple demo videos at the link:
https://genesis-embodied-ai.github.io/
👍1
[ai][llm]
Large-scale multimodal agent society
“In this paper, taking e-commerce scenarios as an example, we present LMAgent, a very large-scale and multimodal agents society based on multimodal LLMs. In LMAgent, besides freely chatting with friends, the agents can autonomously browse, purchase, and review products, even perform live streaming e-commerce. To simulate this complex system, we introduce a self-consistency prompting mechanism to augment agents’ multimodal capabilities, resulting in significantly improved decision-making performance over the existing multi-agent system. Moreover, we propose a fast memory mechanism combined with the small-world model to enhance system efficiency, which supports more than 10,000 agent simulations in a society. Experiments on agents’ behavior show that these agents achieve comparable performance to humans in behavioral indicators.”
https://arxiv.org/pdf/2412.09237
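The “self-consistency prompting” ingredient has a simple generic form: sample several candidate decisions and keep the majority answer. A minimal sketch (the paper’s multimodal variant is more elaborate; `query_llm` is a placeholder):
```python
# Generic self-consistency: sample k independent answers and return the
# most common one. `query_llm` stands in for any sampling LLM client.
from collections import Counter

def self_consistent_decision(prompt, query_llm, k=5):
    samples = [query_llm(prompt) for _ in range(k)]
    return Counter(samples).most_common(1)[0][0]
```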
🌟 Happy New Year, everyone! 🌟
This year has been incredible, and I’m so grateful for each of you—our channel grew to 500+ amazing members! 🎉 Thank you for your support, engagement, and encouragement along the way.
I hope the articles, videos, and books shared here added value to your journey. Let’s make this new year even more inspiring and impactful together. Here’s to learning, growing, and achieving great things in 2025!
Cheers to a fantastic year ahead! 🥂✨Love you all 💛
❤15🎉4
[neural networks]
A great video on building a neural network from scratch:
https://www.youtube.com/watch?v=w8yWXqWQYmU
YouTube
Building a neural network FROM SCRATCH (no Tensorflow/Pytorch, just numpy & math)
Kaggle notebook with all the code: https://www.kaggle.com/wwsalmon/simple-mnist-nn-from-scratch-numpy-no-tf-keras
Blog article with more/clearer math explanation: https://www.samsonzhang.com/2020/11/24/understanding-the-math-behind-neural-networks-by-building…
👍1
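For reference, the network in the video boils down to roughly this forward/backward pass. A condensed plain-NumPy sketch (dimensions and variable names approximate the video’s notebook, not copied from it):
```python
# Two-layer MNIST classifier: 784 -> 10 (ReLU) -> 10 (softmax).
# X is (784, m), Y_onehot is (10, m) for a batch of m examples.
import numpy as np

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))   # stabilized exponentials
    return e / e.sum(axis=0, keepdims=True)

def forward(W1, b1, W2, b2, X):
    Z1 = W1 @ X + b1
    A1 = relu(Z1)
    Z2 = W2 @ A1 + b2
    return Z1, A1, softmax(Z2)

def backward(Z1, A1, A2, W2, X, Y_onehot):
    m = X.shape[1]
    dZ2 = A2 - Y_onehot                    # softmax + cross-entropy gradient
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (Z1 > 0)          # ReLU derivative
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m
    return dW1, db1, dW2, db2
```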