Please note: This master’s thesis presentation will take place in DC 3317 and online.
Godsfavour Kio, Master’s candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Mei Nagappan
The rise of large language models (LLMs) has sparked significant interest in their application to software engineering tasks. However, as new and more capable LLMs emerge, existing evaluation benchmarks (such as HumanEval and MBPP) are no longer sufficient for gauging their potential. While benchmarks like SWE-Bench and SWE-Bench-Java provide a foundation for evaluating these models on real-world challenges, their public availability exposes them to data-contamination risks, compromising their reliability for assessing generalization.
To address these limitations, we introduce SWE-Bench-Secret, a private dataset carefully curated to evaluate AI agents on software engineering tasks spanning multiple years, including some that originate after the models’ training-data cutoff. Derived from three popular GitHub repositories, it comprises 457 task instances designed to mirror SWE-Bench’s structure while maintaining strict data secrecy. Evaluations on a lightweight subset, SWE-Secret-Lite, reveal significant performance gaps between public and private datasets, highlighting the increased difficulty models face on tasks that extend beyond familiar patterns found in publicly available data.
Additionally, we provide a secure mechanism that allows researchers to submit their agents for evaluation without exposing the dataset.
Our findings emphasize the need for improved logical reasoning and adaptability in AI agents, particularly when confronted with tasks that lie outside well-known public training data distributions. By introducing a contamination-free evaluation framework and a novel secret benchmark, this work strengthens the foundation for advancing benchmarking methodologies and promoting the development of more versatile, context-aware AI agents.
To attend this master’s thesis presentation in person, please go to DC 3317. You can also attend virtually on Zoom.