Machine-Learning Kronecker Coefficients
The Kronecker coefficients are the decomposition multiplicities of the tensor product of two irreducible representations of the symmetric group. There is no known combinatorial description of the Kronecker coefficients, and it is an NP-hard problem to decide whether a given Kronecker coefficient is zero or not.
In this paper, the author shows that standard machine-learning algorithms such as neural networks, CNNs, and gradient-boosted decision trees can be trained to predict with high accuracy whether a given Kronecker coefficient is zero or not.
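For tiny cases the coefficients can be computed directly: g(λ, μ, ν) is the inner product of three irreducible characters, so for n = 3 it can be read off the character table of S₃. A minimal sketch (the irrep names and function name are mine, not the paper's):

```python
# Character table of S_3: columns are the conjugacy classes
# (identity, transpositions, 3-cycles) with class sizes 1, 3, 2; |S_3| = 6.
CLASS_SIZES = [1, 3, 2]
GROUP_ORDER = 6
CHI = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}

def kron(lam, mu, nu):
    """Kronecker coefficient g(lam, mu, nu) =
    (1/|G|) * sum over classes of |class| * chi_lam * chi_mu * chi_nu
    (all characters of S_n are real, so no conjugation is needed)."""
    total = sum(size * CHI[lam][c] * CHI[mu][c] * CHI[nu][c]
                for c, size in enumerate(CLASS_SIZES))
    return total // GROUP_ORDER
```

For example, `kron("standard", "standard", "standard")` gives 1, reflecting that the standard representation appears once in its own tensor square. The hardness result in the paper is about the general case, where no such direct computation scales.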
Scalable and Adaptive Log-based Anomaly Detection with Expert in the Loop
The authors present SeaLog, a scalable and adaptive log-based anomaly detection framework designed to meet the practical requirements of accuracy, lightweight design, and adaptiveness in cloud systems. SeaLog uses a trie-based detection agent for lightweight, adaptive anomaly detection in a streaming manner. It also incorporates expert feedback, including from LLMs acting as experts, to continuously improve accuracy. Experiments on two public datasets and an industrial dataset from CloudX show that SeaLog is effective, achieving F1 scores between 0.908 and 0.990.
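The abstract does not spell out the agent's internals, but the trie idea can be sketched: learn token paths from normal log messages and flag any message that falls off the trie. A hypothetical illustration only, not SeaLog's actual detection agent:

```python
class LogTrie:
    """Toy trie over log-message tokens: paths seen during training are
    treated as normal; a message that leaves the trie is flagged.
    (Illustrative sketch; SeaLog's agent also handles streaming updates
    and expert feedback.)"""

    def __init__(self):
        self.root = {}

    def learn(self, message: str) -> None:
        """Insert the message's token sequence as a path in the trie."""
        node = self.root
        for token in message.split():
            node = node.setdefault(token, {})

    def is_anomalous(self, message: str) -> bool:
        """A message is suspicious if any token steps off a known path."""
        node = self.root
        for token in message.split():
            if token not in node:
                return True
            node = node[token]
        return False
```

Lookups are linear in message length and independent of the training-set size, which matches the "lightweight, streaming" requirement the paper emphasizes.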
Analysis of ChatGPT on Source Code
The paper explores the use of LLMs, and ChatGPT in particular, in programming, source code analysis, and code generation. While these models can save time and produce highly accurate results, they are not yet advanced enough to replace human programmers entirely. The paper investigates potential applications of LLMs and ChatGPT in areas such as
- code creation,
- code documentation,
- bug detection,
- refactoring, and
- more.
ICAART 2024
16th International Conference on Agents and Artificial Intelligence
February 24 - 26, 2024
Rome, Italy
Upcoming Submission Deadlines
Regular Paper Submission: October 9, 2023
Position Paper Submission: November 17, 2023
Doctoral Consortium Paper Submission: January 1, 2024
Understanding DeepMind's Sorting Algorithm
AlphaDev is an artificial intelligence system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.
NeurIPS 2023 Competition Track Program
Special Topics in Machine Learning
- NeurIPS 2023 Machine Unlearning Competition
- Privacy Preserving Federated Learning Document VQA
- Causal Structure Learning from Event Sequences and Prior Knowledge
- Practical Vector Search Challenge 2023
Natural Language Processing and LLMs
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day
- TDC 2023 (LLM Edition): The Trojan Detection Challenge
Multi-Agent Learning
- The NeurIPS 2023 Neural MMO Challenge: Multi-Task Reinforcement Learning and Curriculum Generation
- Lux AI Challenge Season 2 NeurIPS Edition
- Melting Pot Contest
Forwarded from Consciousnesses
Daniel Dennett — Counterfeit People
In this interview, Dr. Tim Scarfe speaks with renowned philosopher Daniel Dennett about the potential dangers of AI and the concept of "Counterfeit People." Dennett raises concerns about AI being used to create artificial colleagues, and argues that preventing counterfeit AI individuals is crucial for societal trust and security.
- Intro
- Main show kick off
- Counterfeit People
- Reversibility
- Reontologisation
- Realism
- Adversarial LLMs are out to get us
- Exploring mental trajectories and Chomsky
- Gilbert Ryle and Ghost in machine and competition in academia
- 2 Black boxes thought experiment / intentional stance
- Chinese room
- Singularitarianism
- Emergence of consciousness and semanticity
LLM Powered Autonomous Agents
Building agents with an LLM as the core controller is a cool concept. Several proof-of-concept demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potential of LLMs extends beyond generating well-written copy, stories, essays and programs; an LLM can be framed as a powerful general problem solver.
- Agent System Overview
- Component One: Planning
Task Decomposition
Self-Reflection
- Component Two: Memory
Types of Memory
Maximum Inner Product Search (MIPS)
- Component Three: Tool Use
- Case Studies
Scientific Discovery Agent
Generative Agents Simulation
Proof-of-Concept Examples
- Challenges
- Citation
- References
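The memory component's retrieval step, Maximum Inner Product Search (MIPS), can be sketched as exact brute-force scoring. Real agents would use an approximate index (the post discusses ANN methods) to stay fast at scale; this illustrates only what the search computes:

```python
import numpy as np

def mips(query: np.ndarray, memory: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k memory vectors with the largest inner
    product with the query. Exact and O(n) per query; ANN libraries
    approximate the same result sub-linearly."""
    scores = memory @ query           # inner product with every stored vector
    return np.argsort(scores)[::-1][:k]  # top-k by descending score
```

For example, with `memory = [[1, 0], [0, 1], [0.5, 0.5]]` and `query = [1, 0.2]`, the top-2 results are indices 0 and 2.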
LongNet: Scaling Transformers to 1,000,000,000 Tokens
The authors introduce LongNet, a Transformer variant that scales sequence length to more than one billion tokens without sacrificing performance on shorter sequences. They propose dilated attention, which expands the attentive field exponentially as the distance grows.
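The dilated-attention idea can be sketched at the level of index selection: split the sequence into segments and keep only every r-th token within each segment, so attention within a segment of width w costs O((w/r)²) instead of O(w²). A simplified sketch of one sparsification pattern, not LongNet's full scheme (which mixes several segment/dilation pairs):

```python
import numpy as np

def dilated_indices(seq_len: int, segment: int, dilation: int):
    """For each segment of the sequence, keep every `dilation`-th token.
    Attention is then computed only among the kept tokens of a segment,
    shrinking each attention matrix from segment^2 to (segment/dilation)^2."""
    kept = []
    for start in range(0, seq_len, segment):
        block = np.arange(start, min(start + segment, seq_len))
        kept.append(block[::dilation])
    return kept
```

Using larger segments with larger dilations for more distant context is what makes the attentive field grow exponentially with distance while keeping total cost near-linear.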
Why AI Matters And How To Deal With The Coming Change w/ Emad Mostaque
Emad Mostaque (Stability AI):
- in five years there will be no more programmers
- by the end of next year, ChatGPT will run on mobile devices, without an internet connection
- AI decentralization is a key element; the goal of Stability AI is to enable everyone to have a personalized AI system that reflects their own narratives and unique perspectives