ICAART 2024
16th International Conference on Agents and Artificial Intelligence
February 24 - 26, 2024
Rome, Italy
Upcoming Submission Deadlines
Regular Paper Submission: October 9, 2023
Position Paper Submission: November 17, 2023
Doctoral Consortium Paper Submission: January 1, 2024
Understanding DeepMind's Sorting Algorithm
AlphaDev is an artificial intelligence system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.
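AlphaDev's headline results were faster fixed-length sorting routines (sort3, sort4, sort5) discovered at the x86 assembly level and merged into LLVM's libc++. As a rough illustration of the kind of branch-structured compare-swap network it optimizes (this is a plain sketch, not AlphaDev's actual output):

```python
def sort3(a, b, c):
    """Sort three values with a fixed sequence of compare-swaps.
    AlphaDev searched over assembly instruction sequences to shave
    instructions off exactly this kind of small, fixed-size routine."""
    if a > b:
        a, b = b, a   # now a <= b
    if b > c:
        b, c = c, b   # now c is the maximum
    if a > b:
        a, b = b, a   # restore a <= b
    return a, b, c
```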
NeurIPS 2023 Competition Track Program
Special Topics in Machine Learning:
- NeurIPS 2023 Machine Unlearning Competition
- Privacy Preserving Federated Learning Document VQA
- Causal Structure Learning from Event Sequences and Prior Knowledge
- Practical Vector Search Challenge 2023
Natural Language Processing and LLMs
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day
- TDC 2023 (LLM Edition): The Trojan Detection Challenge
Multi-Agent Learning
- The NeurIPS 2023 Neural MMO Challenge: Multi-Task Reinforcement Learning and Curriculum Generation
- Lux AI Challenge Season 2 NeurIPS Edition
- Melting Pot Contest
Daniel Dennett — Counterfeit People
In this interview, Dr. Tim Scarfe speaks with renowned philosopher Daniel Dennett about the potential dangers of AI and the concept of "Counterfeit People." Dennett raises concerns about AI being used to create artificial colleagues, and argues that preventing counterfeit AI individuals is crucial for societal trust and security.
- Intro
- Main show kick off
- Counterfeit People
- Reversibility
- Reontologisation
- Realism
- Adversarial LLMs are out to get us
- Exploring mental trajectories and Chomsky
- Gilbert Ryle and Ghost in machine and competition in academia
- 2 Black boxes thought experiment / intentional stance
- Chinese room
- Singularitarianism
- Emergence of consciousness and semanticity
LLM Powered Autonomous Agents
Building agents with an LLM as the core controller is a compelling concept. Several proof-of-concept demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potential of LLMs extends beyond generating well-written copy, stories, essays and programs; they can be framed as powerful general problem solvers.
- Agent System Overview
- Component One: Planning
Task Decomposition
Self-Reflection
- Component Two: Memory
Types of Memory
Maximum Inner Product Search (MIPS)
- Component Three: Tool Use
- Case Studies
Scientific Discovery Agent
Generative Agents Simulation
Proof-of-Concept Examples
- Challenges
- Citation
- References
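The "Maximum Inner Product Search (MIPS)" item under the memory component is the retrieval step: stored memories are embedded as vectors, and recall means finding the stored vectors with the largest inner product against a query embedding. A minimal brute-force sketch (real agents swap this scan for an approximate nearest-neighbor library such as FAISS, ScaNN or HNSW):

```python
def mips(query, memory, k=2):
    """Brute-force maximum inner product search: score every stored
    memory vector against the query and return the top-k indices,
    best match first."""
    scores = [sum(q * m for q, m in zip(query, vec)) for vec in memory]
    return sorted(range(len(memory)), key=lambda i: -scores[i])[:k]
```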
LongNet: Scaling Transformers to 1,000,000,000 Tokens
The authors introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens without sacrificing performance on shorter sequences. They propose dilated attention, which expands the attentive field exponentially as the distance grows.
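The core idea of dilated attention is that the sequence is split into segments and, within each segment, a token attends only to every r-th position, with segment length and dilation rate growing geometrically across parallel attention branches. A toy sketch of which key positions token 0 attends to in each branch (the segment lengths and dilation rates here are illustrative, not the paper's settings):

```python
def dilated_attention_positions(seq_len, segment_lengths=(4, 8, 16), dilations=(1, 2, 4)):
    """For each (segment length w, dilation rate r) branch, a token attends
    to positions inside its segment spaced r apart. Because w and r grow
    geometrically, the union of branches covers an exponentially expanding
    attentive field while each branch stays sparse (near-linear cost)."""
    fields = []
    for w, r in zip(segment_lengths, dilations):
        w = min(w, seq_len)
        fields.append(list(range(0, w, r)))
    return fields
```

Each branch attends to the same number of keys (4 here), but the largest branch reaches 4x further than the smallest.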
Why AI Matters And How To Deal With The Coming Change w/ Emad Mostaque
Emad Mostaque (Stability AI):
- in five years there will be no more programmers
- by the end of next year, ChatGPT-level models will run on mobile devices without an internet connection
- AI decentralization is a key element; the goal of Stability AI is to enable everyone to have a personalized AI system that reflects their own narratives and unique perspectives
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Starting from the RLHF setup, the authors use the closed-form optimal solution of the KL-constrained reward-maximization objective. Applying the Bradley-Terry preference model, they derive the DPO objective, which contains no explicit reward model. This sidesteps reinforcement learning entirely, while the reward model remains implicitly present.
The resulting algorithm is stable, performant, and computationally lightweight, eliminating the need for fitting a reward model, sampling from the LM during fine-tuning, or performing significant hyperparameter tuning. Fine-tuning with DPO exceeds RLHF’s ability to control sentiment of generations and improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train.
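Concretely, the per-pair DPO loss is -log sigmoid(beta * margin), where the margin is the policy-vs-reference log-probability gap on the preferred completion minus the same gap on the rejected one. A minimal sketch of the loss for a single preference pair (log-probabilities are assumed to come from the policy and frozen reference model):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))).
    The implicit reward of a completion is beta times its
    policy-to-reference log-ratio, so no separate reward model is fit."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy has not yet moved off the reference, the margin is zero and the loss equals log 2; widening the margin on the preferred side drives the loss toward zero.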
PdfGptIndexer
PdfGptIndexer is a tool for indexing and searching PDF text using OpenAI's GPT-2 model and FAISS. It operates in several stages:
1. It first processes a specified folder of PDF documents, extracting the text and splitting it into manageable chunks using a GPT-2 tokenizer from the Transformers library.
2. Each text chunk is then embedded using the OpenAI GPT-2 model through the LangChain library.
3. These embeddings are stored in a FAISS index, providing a compact and efficient storage method.
4. Finally, a query interface allows you to retrieve relevant information from the indexed data by asking questions. The application fetches and displays the most relevant text chunk.
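The four stages can be sketched end-to-end in a few lines. In this sketch a character-level splitter stands in for the GPT-2 tokenizer, a bag-of-words count stands in for real embeddings, and a brute-force inner-product scan stands in for FAISS; function names and the fixed vocabulary are illustrative, not the tool's API:

```python
def chunk(text, size=50):
    """Stage 1 stand-in: split extracted PDF text into fixed-size chunks
    (the real tool splits on GPT-2 tokens, not characters)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk_text, vocab):
    """Stage 2 stand-in: bag-of-words counts over a fixed vocabulary,
    in place of model embeddings obtained through LangChain."""
    words = chunk_text.lower().split()
    return [words.count(w) for w in vocab]

def query(question, chunks, index, vocab):
    """Stage 4: embed the question and return the chunk whose stored
    embedding has the highest inner product with it (the scan FAISS
    performs efficiently at scale)."""
    q = embed(question, vocab)
    scores = [sum(a * b for a, b in zip(q, v)) for v in index]
    return chunks[max(range(len(chunks)), key=lambda i: scores[i])]
```

Stage 3 is simply building the index, e.g. `index = [embed(c, vocab) for c in chunks]`; FAISS would hold these vectors in a compact similarity index instead of a Python list.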
Automatic Static Bug Detection for Machine Learning Libraries: Are We There Yet?
The authors address the question of the practical effectiveness and usefulness of static bug detectors for machine learning libraries. They analyze five popular and widely used static bug detectors, namely Flawfinder, RATS, Cppcheck, Facebook Infer, and the Clang static analyzer, on a curated dataset of 410 known bugs gathered from four popular machine learning libraries: Mlpack, MXNet, PyTorch, and TensorFlow. The study shows that the static bug detectors find only a negligible fraction of all bugs, 6 of 410 (about 1.5%). The study also reveals several findings that can serve as practical guidelines for improving static bug detection for ML libraries.