Master’s Thesis Presentation • Software Engineering • Large Language Models for Build System Maintenance: An Empirical Study of CodeGen’s Next-Line Prediction

Friday, January 10, 2025 10:00 am - 11:00 am EST (GMT -05:00)

Please note: This master’s thesis presentation will take place in DC 2314 and online.

Akinbowale Akin-Taylor, Master’s candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Shane McIntosh, Mei Nagappan

Build systems play a crucial role in software development and are responsible for compiling source code into executable programs. Despite their importance, build systems often receive limited attention because their impact is not directly visible to end users. This oversight can lead to inadequate maintenance, frequent build failures, and disruptions that require additional resources. Recognising and addressing the maintenance needs of build systems is essential to preventing costly disruptions and ensuring efficient software production.

In this thesis, I explore whether applying a Large Language Model (LLM) can reduce the burden of maintaining build systems. I aim to determine whether the prior content in a build specification provides sufficient context for an LLM to accurately generate subsequent lines. I conduct an empirical study on CodeGen, a state-of-the-art LLM, using a dataset of 13,343 Maven build files: the Expert dataset of 9,426 build files from the Apache Software Foundation (ASF) for fine-tuning, and the Generalised dataset of 3,917 build files from GitHub for testing. I observe that (i) fine-tuning on a small portion of the data (i.e., 11% of the fine-tuning dataset) provides the largest performance improvement, at 13.93%; and (ii) when applied to the Generalised dataset, the fine-tuned model retains 83.86% of its performance, indicating that it is not overfitted. Upon further investigation, I classify build-code content into functional and metadata subgroups based on enclosing tags. The fine-tuned model performs substantially better at suggesting functional build-code than metadata build-code. These findings highlight the potential of LLMs like CodeGen to ease the maintenance burden of build systems, particularly for functional content, while also exposing their limitations in suggesting the metadata components of build code. Future research should focus on approaches that improve the accuracy and effectiveness of metadata generation.
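As one illustration of the subgroup idea, the sketch below splits a Maven pom.xml's top-level elements into functional and metadata groups by enclosing tag. The tag lists are hypothetical assumptions for illustration only; the thesis's actual taxonomy is not reproduced here.

```python
# Hypothetical illustration of grouping Maven build-file content into
# "functional" and "metadata" subgroups by enclosing tag. The tag sets
# below are assumptions, not the thesis's actual classification.
import xml.etree.ElementTree as ET

FUNCTIONAL_TAGS = {"build", "dependencies", "plugins", "profiles", "properties"}
METADATA_TAGS = {"name", "description", "url", "licenses", "developers", "scm"}

def classify_top_level(pom_source: str) -> dict:
    """Group a pom.xml's top-level elements by assumed subgroup."""
    root = ET.fromstring(pom_source)
    groups = {"functional": [], "metadata": [], "other": []}
    for child in root:
        tag = child.tag.split("}")[-1]  # drop any XML namespace prefix
        if tag in FUNCTIONAL_TAGS:
            groups["functional"].append(tag)
        elif tag in METADATA_TAGS:
            groups["metadata"].append(tag)
        else:
            groups["other"].append(tag)
    return groups

pom = """<project>
  <name>demo</name>
  <description>An example project</description>
  <dependencies>
    <dependency><groupId>junit</groupId><artifactId>junit</artifactId></dependency>
  </dependencies>
  <build><plugins/></build>
</project>"""

print(classify_top_level(pom))
# {'functional': ['dependencies', 'build'], 'metadata': ['name', 'description'], 'other': []}
```

A real study would also need to handle nested tags and elements whose subgroup depends on context, which this top-level-only sketch ignores.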


To attend this master’s thesis presentation in person, please go to DC 2314. You can also attend virtually on Zoom.