ml4se
Machine Learning for Software Engineering
Recommending Root-Cause and Mitigation Steps for Cloud Incidents using Large Language Models

In this work, the authors conduct the first large-scale study of how effectively LLMs can help engineers root-cause and mitigate production incidents. A human evaluation with actual incident owners shows the efficacy and future potential of using artificial intelligence to resolve cloud incidents.
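As a rough illustration of the setup (not the authors' exact pipeline), an incident's title and summary can be handed to an LLM to draft root-cause and mitigation suggestions; the client, model name, and prompt wording below are our assumptions:

```python
# Illustrative sketch only: the paper fine-tunes and evaluates several LLMs;
# the client usage, model name, and prompt below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

incident = {
    "title": "Elevated 5xx rate on checkout service",
    "summary": "Error rate rose from 0.1% to 8% after the 14:05 deploy.",
}

prompt = (
    "You are an experienced on-call engineer.\n"
    f"Incident title: {incident['title']}\n"
    f"Incident summary: {incident['summary']}\n"
    "Suggest (1) the most likely root cause and (2) concrete mitigation steps."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model, not the one used in the paper
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```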
Code Execution with Pre-trained Language Models

Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pretrained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, the authors aim to teach pretrained models the real-world code execution process. They propose CodeExecutor, a Transformer-based model that learns to execute arbitrary programs and predict their execution traces.
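A minimal sketch of the task framing (trace prediction as sequence-to-sequence generation); the checkpoint name is a placeholder, not CodeExecutor's actual release, and the trace format is defined in the paper:

```python
# Sketch: execution-trace prediction framed as seq2seq generation.
# The checkpoint id below is a hypothetical placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "your-org/trace-prediction-model"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

code = "x = 3\nfor i in range(2):\n    x += i\nprint(x)"
inputs = tokenizer(code, return_tensors="pt")
trace_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(trace_ids[0], skip_special_tokens=True))
```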
Searching by Code: a New SearchBySnippet Dataset and SnippeR Retrieval Model for Searching by Code Snippets

The authors argue that using a code snippet (and possibly an associated traceback) as a query and looking for answers with bugfixing instructions and code samples is a natural use case that is not covered by existing approaches. The paper presents a new SearchBySnippet dataset implementing the search-by-code use case based on StackOverflow data; it turns out that in this setting, existing architectures fall short of the simplest BM25 baseline even after fine-tuning.
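For reference, a BM25 baseline of the kind the paper compares against takes only a few lines with the rank_bm25 package; the toy corpus, query, and whitespace tokenization here are our assumptions:

```python
# Toy BM25 retrieval over StackOverflow-style posts, in the spirit of the
# paper's baseline; corpus, query, and tokenization are assumptions.
from rank_bm25 import BM25Okapi

posts = [
    "IndexError: list index out of range when iterating with range(len(x)+1)",
    "TypeError: unsupported operand type(s) for +: 'int' and 'str'",
    "How to fix UnicodeDecodeError when reading a file in Python",
]
bm25 = BM25Okapi([p.lower().split() for p in posts])

query = "for i in range(len(items) + 1): print(items[i])  # IndexError"
scores = bm25.get_scores(query.lower().split())
print(max(zip(scores, posts)))  # best-matching post
```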
CCT-Code: Cross-Consistency Training for Multilingual Clone Detection and Code Search

Understanding semantic similarity is an important aspect of language processing. The authors present a new method, CCT-LM, that improves this ability via a novel cross-consistency training (CCT) pretraining approach, and demonstrate its viability on clone detection and code search. The proposed CCT-LM model outperforms strong baselines on all presented tasks, showing that CCT pretraining gives a language model a better understanding of semantic similarity.
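For context, a common way to score clone pairs with any code encoder is cosine similarity over pooled embeddings; this is a generic baseline sketch with UniXcoder, not the authors' CCT-LM:

```python
# Generic embedding-similarity baseline for clone detection; this is NOT
# CCT-LM, and mean pooling over microsoft/unixcoder-base is our choice.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
model = AutoModel.from_pretrained("microsoft/unixcoder-base")

def embed(code: str) -> torch.Tensor:
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq, dim)
    return hidden.mean(dim=1).squeeze(0)            # mean-pool to one vector

a = embed("def add(a, b):\n    return a + b")
b = embed("def sum_two(x, y):\n    return x + y")
print(torch.cosine_similarity(a, b, dim=0).item())  # high for clones
```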
CodeT5+: Open Code LLMs for Code Understanding and Generation

Salesforce AI Research proposes CodeT5+, a family of encoder-decoder LLMs for code in which component modules can be flexibly combined to suit a wide range of downstream code tasks. Such flexibility is enabled by a mixture of pretraining objectives. These objectives cover span denoising, contrastive learning, text-code matching, and causal LM pretraining tasks, on both unimodal and bimodal multilingual code corpora.

The authors observe state-of-the-art performance on a range of code-related tasks, such as code generation and completion, math programming, and text-to-code retrieval. In particular, the instruction-tuned CodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 on the HumanEval benchmark.

Released models:
- CodeT5+ 220M and 770M
- CodeT5+ 220M-py and 770M-py, further tuned on a Python subset
- CodeT5+ 2B, 6B, and 16B
- InstructCodeT5+ 16B
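A quick-start sketch for the smaller checkpoints via Hugging Face transformers; the checkpoint id follows the release naming, but verify it against the repo:

```python
# Minimal generation sketch with a small CodeT5+ checkpoint; the checkpoint
# id is assumed from the release naming -- verify against the GitHub repo.
from transformers import AutoTokenizer, T5ForConditionalGeneration

checkpoint = "Salesforce/codet5p-220m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```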

[GitHub]
LLMs and Text-to-SQL task

* LLMs and SQL — writing prompts for the Text-to-SQL task
* Evaluating the Text-to-SQL Capabilities of Large Language Models — assumes that some queries from the target domain are available
* A Generic Prompt for an LLM that enables NL-to-SQL across Domains and Compositions — a fully cross-domain setting
* How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings — zero-shot, single-domain, and cross-domain Text-to-SQL settings
* Divide and Prompt: Chain of Thought Prompting for Text-to-SQL — a new paradigm for prompting Text-to-SQL tasks that first divides the task into subtasks and then approaches each subtask with chain-of-thought (CoT) prompting; a minimal prompt sketch follows this list
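To make the setup concrete, here is a minimal zero-shot Text-to-SQL prompt; the schema, question, and wording are toy assumptions, not taken from any of the papers above:

```python
# Illustrative zero-shot Text-to-SQL prompt; schema, question, and wording
# are toy assumptions, not drawn from the papers linked above.
schema = """CREATE TABLE employees (
    id INTEGER PRIMARY KEY,
    name TEXT,
    department TEXT,
    salary REAL
);"""

question = "What is the average salary per department?"

prompt = (
    f"Given the database schema:\n{schema}\n\n"
    f"Write a SQLite query that answers: {question}\n"
    "Return only the SQL."
)
print(prompt)  # send this to the LLM of your choice
```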
MIT: Generative AI for Constructive Communication Evaluation and New Research Methods

Advances in large language models, recently popularized by ChatGPT, represent a remarkable leap forward in language processing by machines.
* What does this mean for us, how can we make the most of these advancements, and what are the risks?
* What research opportunities have opened up?
* What kinds of evaluation are called for?

[Schedule]
Code Alpaca: An Instruction-following LLaMA Model trained on code generation instructions

The project aims to build and share an instruction-following LLaMA model for code generation. The repository contains the data and the code for fine-tuning the model.

- instruction-following data
- demo
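The instruction data follows the Alpaca format of instruction/input/output records; the sample record below is ours, but the field names and prompt template match the Alpaca recipe:

```python
# Alpaca-style instruction record and prompt template; the example record is
# ours, but the instruction/input/output fields match the released data.
record = {
    "instruction": "Write a Python function that reverses a string.",
    "input": "",
    "output": "def reverse_string(s):\n    return s[::-1]",
}

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

print(PROMPT_TEMPLATE.format(instruction=record["instruction"]))
```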
ICSE 2024

Important dates:
- Fri 2 Jun 2023, Research Track First Cycle: Acceptance Notification
- Mon 10 Jul 2023, Research Track First Cycle: Revision Due
- Tue 1 Aug 2023, Research Track Second Cycle: Submissions Deadline
- Thu 17 Aug 2023, Workshops: Workshop Proposal Submissions Deadline
- Thu 24 Aug 2023, Research Track First Cycle: Final Decisions
- Thu 14 Sep 2023, Workshops: Workshop Proposal Acceptance Notification
- Thu 14 Sep 2023, New Ideas and Emerging Results: Submission Deadline
- Fri 15 Sep 2023, Research Track First Cycle: Camera-ready Submission
Microsoft AI Plugin Ecosystem

Microsoft is adopting the same open plugin standard that OpenAI introduced for ChatGPT, enabling interoperability across ChatGPT and the breadth of Microsoft’s copilot offerings. That means developers can now use one platform to build plugins that work across both business and consumer surfaces, including ChatGPT, Bing, Dynamics 365 Copilot, Microsoft 365 Copilot and Windows Copilot. Microsoft also announced it is bringing Bing to ChatGPT as the default search experience.
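Concretely, a plugin under this standard is described by a manifest served at /.well-known/ai-plugin.json that points to an OpenAPI spec; the manifest shape is shown below as a Python dict for illustration, with placeholder values:

```python
# Shape of an ai-plugin.json manifest under the OpenAI plugin standard,
# written as a Python dict for illustration; all values are placeholders.
manifest = {
    "schema_version": "v1",
    "name_for_human": "TODO Plugin",
    "name_for_model": "todo",
    "description_for_human": "Manage your TODO list.",
    "description_for_model": "Plugin for managing a user's TODO list.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # placeholder spec URL
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}
```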
PERFOGRAPH: A Numerical Aware Program Graph Representation for Performance Optimization and Program Analysis

The remarkable growth and significant success of machine learning have expanded its applications into programming languages and program analysis. However, a key challenge in adopting the latest machine learning methods is the representation of programming languages, which directly impacts the ability of machine learning methods to reason about programs.

To overcome the limitations and challenges of current program representations, the authors propose a novel graph-based program representation called PERFOGRAPH.

The experimental results demonstrate that PERFOGRAPH outperforms existing representations and sets new state-of-the-art results by reducing the error rate by 7.4% (AMD dataset) and 10% (NVIDIA dataset) in the well-known Device Mapping challenge.
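The exact node and edge schema is defined in the paper; as a generic illustration of a numbers-aware program graph (not PERFOGRAPH itself), one could attach numeric features to constant nodes, e.g. with networkx:

```python
# Generic numbers-aware program graph sketch with networkx; the node/edge
# schema here is illustrative, NOT the actual PERFOGRAPH representation.
import networkx as nx

g = nx.DiGraph()
# Statement nodes for: x = 1024; y = x * 8
g.add_node("x=1024", kind="stmt")
g.add_node("y=x*8", kind="stmt")
# Constant nodes carry the value as a feature; digit-level features are one
# way to represent large constants without blowing up the vocabulary.
g.add_node("const:1024", kind="const", value=1024, digits=[1, 0, 2, 4])
g.add_node("const:8", kind="const", value=8, digits=[8])
g.add_edge("const:1024", "x=1024", kind="operand")
g.add_edge("x=1024", "y=x*8", kind="data-flow")
g.add_edge("const:8", "y=x*8", kind="operand")

print(g.number_of_nodes(), g.number_of_edges())
```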
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM (Salesforce)

The authors present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence. CodeTF is designed with a unified interface that enables rapid access and development across different types of models, datasets, and tasks. The library supports a collection of pretrained Code LLMs and popular code benchmarks, including a standardized interface to train and serve code LLMs efficiently, and data features such as language-specific parsers and utility functions for extracting code attributes.
AI for Low-Code for AI

LowCoder is the first low-code tool for developing AI pipelines that supports both a visual programming interface (LowCoder_VP) and an AI-powered natural language interface (LowCoder_NL). The authors use the tool to gather some of the first insights into whether and how the two modalities (visual drag-and-drop and natural language instructions) help programmers, via a user study: 20 developers with varying levels of AI expertise implement four ML pipelines using LowCoder, with the LowCoder_NL component replaced by a simple keyword search in half of the tasks.

LowCoder helped developers compose (85% of tasks) and iterate on (72.5% of tasks) AI pipelines. Furthermore, LowCoder_NL helped users discover previously unknown operators in 75% of tasks, compared to just 22.5% overall (12.5% in the NL condition and 32.5% in the keyword condition) when using web search.

[LowCoder Artifacts]