ml4se
Machine Learning for Software Engineering
ICAART 2024
16th International Conference on Agents and Artificial Intelligence
February 24 - 26, 2024
Rome, Italy

Upcoming Submission Deadlines
Regular Paper Submission: October 9, 2023
Position Paper Submission: November 17, 2023
Doctoral Consortium Paper Submission: January 1, 2024
Understanding DeepMind's Sorting Algorithm

AlphaDev is an artificial intelligence system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.
Forwarded from Consciousnesses
Daniel Dennett — Counterfeit People

In this interview, Dr. Tim Scarfe speaks with renowned philosopher Daniel Dennett about the potential dangers of AI and the concept of "Counterfeit People." Dennett raises concerns about AI being used to create artificial colleagues, and argues that preventing counterfeit AI individuals is crucial for societal trust and security.

- Intro
- Main show kick off
- Counterfeit People
- Reversibility
- Reontologisation
- Realism
- Adversarial LLMs are out to get us
- Exploring mental trajectories and Chomsky
- Gilbert Ryle, the ghost in the machine, and competition in academia
- Two black boxes thought experiment / intentional stance
- Chinese room
- Singularitarianism
- Emergence of consciousness and semanticity
LLM Powered Autonomous Agents

Building agents with an LLM as the core controller is a compelling concept. Several proof-of-concept demos, such as AutoGPT, GPT-Engineer, and BabyAGI, serve as inspiring examples. The potential of LLMs extends beyond generating well-written copy, stories, essays, and programs; an LLM can be framed as a powerful general problem solver.

- Agent System Overview
- Component One: Planning
Task Decomposition
Self-Reflection
- Component Two: Memory
Types of Memory
Maximum Inner Product Search (MIPS)
- Component Three: Tool Use
- Case Studies
Scientific Discovery Agent
Generative Agents Simulation
Proof-of-Concept Examples
- Challenges
- Citation
- References
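The Maximum Inner Product Search step in the memory component can be sketched in a few lines of NumPy (a toy stand-in with hypothetical names; real agent stacks typically use approximate-nearest-neighbor libraries such as FAISS or ScaNN):

```python
import numpy as np

def mips(query: np.ndarray, memory: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k memory vectors with the largest inner product."""
    scores = memory @ query          # inner product of query with every row
    return np.argsort(-scores)[:k]   # top-k indices by descending score

# toy memory of 5 embeddings in 4 dimensions
mem = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [0.9, 0.1, 0, 0],
                [0, 0, 0, 1]], dtype=float)
q = np.array([1.0, 0.0, 0.0, 0.0])
print(mips(q, mem, k=2))  # → [0 3]
```

An agent's memory module would store embeddings of past observations as the rows of `memory` and retrieve the most relevant ones at each step this way.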
LongNet: Scaling Transformers to 1,000,000,000 Tokens

The authors introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens without sacrificing performance on shorter sequences. They propose dilated attention, which expands the attentive field exponentially as the distance grows.
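The index-selection idea behind dilated attention can be illustrated with a toy sketch (not the paper's implementation): within each segment, only every r-th position is kept, and in LongNet the dilation r grows with segment size, so the attentive field widens geometrically while compute stays near-linear.

```python
def dilated_indices(seq_len: int, segment: int, dilation: int) -> list[list[int]]:
    """Toy sketch: split the sequence into segments and keep every
    `dilation`-th position inside each one, sparsifying attention."""
    out = []
    for start in range(0, seq_len, segment):
        seg = list(range(start, min(start + segment, seq_len)))
        out.append(seg[::dilation])
    return out

print(dilated_indices(8, 4, 2))  # → [[0, 2], [4, 6]]
```

Mixing several (segment, dilation) pairs, with larger dilations for larger segments, recovers both fine-grained local and sparse long-range attention.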
Why AI Matters And How To Deal With The Coming Change w/ Emad Mostaque

Emad Mostaque (Stability AI):
- in five years there will be no more programmers
- by the end of next year, ChatGPT will run on mobile devices without an internet connection
- AI decentralization is a key element; the goal of Stability AI is to enable everyone to have a personalized AI system that reflects their own narratives and unique perspectives
Direct Preference Optimization: Your Language Model is Secretly a Reward Model

Building on the RLHF setup, the authors use the closed-form optimal solution of the KL-constrained reward-maximization objective. Applying the Bradley–Terry model, they obtain the DPO objective, which no longer contains the reward model explicitly. This avoids reinforcement learning entirely, while the reward model remains implicitly present.

The resulting algorithm is stable, performant, and computationally lightweight, eliminating the need for fitting a reward model, sampling from the LM during fine-tuning, or performing significant hyperparameter tuning. Fine-tuning with DPO exceeds RLHF’s ability to control sentiment of generations and improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train.
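Concretely, the DPO loss from the paper, with preferred completion $y_w$, dispreferred completion $y_l$, reference policy $\pi_{\mathrm{ref}}$, and temperature parameter $\beta$, is a simple binary cross-entropy on the implicit reward differences:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]$$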
PdfGptIndexer

PdfGptIndexer is a tool for indexing and searching PDF text using OpenAI's GPT-2 model and FAISS. It operates in several stages:
1. It first processes a specified folder of PDF documents, extracting the text and splitting it into manageable chunks using a GPT-2 tokenizer from the Transformers library.
2. Each text chunk is then embedded using the OpenAI GPT-2 model through the LangChain library.
3. These embeddings are stored in a FAISS index, providing a compact and efficient storage method.
4. Finally, a query interface allows you to retrieve relevant information from the indexed data by asking questions. The application fetches and displays the most relevant text chunk.
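The four stages can be sketched end to end (a hypothetical stand-in, not the tool's code: a deterministic hash-seeded vector plays the role of a GPT-2 embedding, and a plain NumPy inner-product search plays the role of the FAISS index):

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 16) -> np.ndarray:
    """Placeholder for the GPT-2 embedding step (stage 2)."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)        # unit-norm, so dot product = cosine

def chunk(text: str, size: int = 5) -> list[str]:
    """Stage 1: split extracted text into small chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

text = "FAISS stores dense embeddings and retrieves the most similar chunks quickly"
chunks = chunk(text)
index = np.stack([embed(c) for c in chunks])   # stage 3: build the index

def query(q: str, k: int = 1) -> list[str]:
    """Stage 4: return the k most similar chunks for a question."""
    scores = index @ embed(q)
    return [chunks[i] for i in np.argsort(-scores)[:k]]

print(query("similar chunks"))
```

The real tool swaps the placeholder embedder for GPT-2 via the Transformers and LangChain libraries, and the NumPy search for a FAISS index, but the data flow is the same.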
Automatic Static Bug Detection for Machine Learning Libraries: Are We There Yet?

The authors address the question of how practically effective and useful static bug detectors are for machine learning libraries. They analyze five popular and widely used static bug detectors, namely Flawfinder, RATS, Cppcheck, Facebook Infer, and the Clang static analyzer, on a curated dataset of software bugs gathered from four popular machine learning libraries (Mlpack, MXNet, PyTorch, and TensorFlow) totaling 410 known bugs. The study shows that the static bug detectors find a negligible share of the bugs: 6 out of 410 (about 1.5%). It also reveals several findings that can serve as practical guidelines for improving static bug detection for ML libraries.
Using Commandline To Process CSV files

- to print the first column of a CSV file: awk -F, '{print $1}' file.csv
- to print the first and third columns of a CSV file: awk -F, '{print $1 "," $3}' file.csv
- to print only the lines of a CSV file that contain a specific string: grep "string" file.csv
- to sort a CSV file based on the values in the second column: sort -t, -k2 file.csv
- to remove the first row of a CSV file (the header row): tail -n +2 file.csv
- to remove duplicates from a CSV file based on the values in the first column: awk -F, '!seen[$1]++' file.csv
- to calculate the sum of the values in the third column of a CSV file: awk -F, '{sum+=$3} END {print sum}' file.csv
- to convert a CSV file to a JSON array (the first jq turns each line into an object, the second slurps the objects into an array): jq -R 'split(",") | {name: .[0], age: .[1]}' file.csv | jq -s .
- to convert a CSV file to a SQL INSERT statement: awk -F, '{printf "INSERT INTO table VALUES (\"%s\", \"%s\", \"%s\");\n", $1, $2, $3}' file.csv
Optimising the Software Development Process with Artificial Intelligence

Contents
- 1 Introduction
Part I Planning and Analysis
- 2 Artificial Intelligence in Software Project Management
- 3 Requirements Engineering
- 4 Leveraging Artificial Intelligence for Model-based Software Analysis and Design
Part II Development and Deployment
- 5 Statistical Models and Machine Learning to Advance Code Completion: Are We There Yet?
- 6 Cloud Development and Deployment
Part III Testing and Maintenance
- 7 Automated Support for Unit Test Generation
- 8 Artificial Intelligence Techniques in System Testing
- 9 Intelligent Software Maintenance
Part IV AI Techniques from Scratch
- 10 Metaheuristics in a Nutshell
- 11 Foundations of Machine Learning for Software Engineering
Self-consistency for open-ended generations

Although individual generations sampled from large-scale pre-trained language models are often of high quality, sampling multiple times can yield certain generations of substantially higher quality than the model's average output.

Recently, for the special case of problems with a fixed answer, a simple approach called self-consistency was proposed for selecting the best answer from multiple generations (Wang et al., 2022). The authors sample multiple generations from the LLM, extract the predicted answer from each, and select the answer with the most votes. However, this self-consistency approach is not applicable to open-ended prompts that lack a fixed answer.
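For the fixed-answer case, the voting procedure is simple enough to sketch (the `extract` function is hypothetical; real implementations parse the final answer out of each chain-of-thought generation):

```python
from collections import Counter

def self_consistency(generations: list[str], extract) -> str:
    """Majority vote over answers extracted from multiple sampled generations."""
    answers = [extract(g) for g in generations]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

# toy example: treat the final token of each generation as its "answer"
gens = ["... so the answer is 42", "therefore 41", "hence the answer is 42"]
print(self_consistency(gens, lambda g: g.split()[-1]))  # → 42
```

The open-ended setting has no discrete answer to vote over, which is exactly the gap the generalized framework targets.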

In this paper, the authors introduce a generalized framework for self-consistency that extends its applicability beyond problems with fixed answers.