Master’s Thesis Presentation • Software Engineering • Classifying Code as Human Authored or GPT-4 Generated

Tuesday, April 30, 2024 — 3:00 PM to 4:00 PM EDT

Please note: This master’s thesis presentation will take place in DC 2310 and online.

Joy Idialu, Master’s candidate
David R. Cheriton School of Computer Science

Supervisors: Professors Mei Nagappan and Jo Atlee

Artificial intelligence (AI) assistants such as GitHub Copilot and ChatGPT, built on large language models like GPT-4, are revolutionizing how programming tasks are performed, raising questions about whether submitted code was authored by a human or by a generative AI model. Such questions are of particular interest to educators, who worry that these tools enable a new form of academic dishonesty in which students submit AI-generated code as their own work. Our research explores the viability of using code stylometry and machine learning to distinguish between GPT-4 generated and human-authored code, and attempts to explain the classifier's predictions.
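
As a rough illustration of what such a pipeline can look like (a minimal sketch only, not the thesis's actual feature set or model), the snippet below computes a few simple stylometric features per source file and trains an off-the-shelf classifier. The features, the random-forest model, and the toy `solutions` corpus are all assumptions for illustration.

```python
# Minimal sketch of a code-stylometry classifier. The features and model
# here are illustrative assumptions, not the thesis's actual pipeline.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

def stylometric_features(source: str) -> list[float]:
    """Turn one source file into a small numeric feature vector."""
    lines = source.splitlines() or [""]
    return [
        len(lines),                                           # program length
        sum(1 for l in lines if not l.strip()) / len(lines),  # empty-line ratio
        sum(len(l) for l in lines) / len(lines),              # mean line length
        sum(1 for l in lines if l.lstrip().startswith("#")) / len(lines),  # comment density
    ]

# `solutions` would be a labeled corpus of (source_code, label) pairs,
# label 1 = GPT-4 generated, 0 = human-authored. Toy placeholder here:
solutions = [("print('hi')\n\n\n", 1), ("x = 1\nprint(x)\n", 0)] * 20

X = [stylometric_features(src) for src, _ in solutions]
y = [label for _, label in solutions]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("F1:     ", f1_score(y_test, clf.predict(X_test)))
print("AUC-ROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```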

Our study comprises two analyses, each based on a different dataset: one sourced from CodeChef and the other from an introductory programming course. Both datasets pair human-authored solutions with AI-authored solutions generated by GPT-4. The human-authored solutions were all written before 2021, ensuring they could not have been contaminated with contributions from an AI coding assistant. The first analysis establishes the potential of our approach; the second extends it to actual programming assignments.

In our first analysis, our classifier outperforms the baselines, achieving an F1-score and AUC-ROC of 0.91. Even a variant of our classifier that excludes gameable features (features susceptible to manipulation, e.g., empty lines and whitespace) maintains good performance, achieving an F1-score and AUC-ROC of 0.89. We also evaluated the classifier across the difficulty levels of the programming problems and found little to no difference: the F1-score and AUC-ROC held at 0.89 for both easy and medium problems, dropping slightly to 0.87 for harder problems. These results highlight the promise of our approach regardless of the complexity of the programming task.
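
To make "gameable" concrete: a variant classifier could simply drop layout features that a student can change by reformatting their code, keeping only features that are harder to manipulate. The feature names below are hypothetical, chosen only to illustrate the idea.

```python
# Sketch: drop layout features a student could easily game by reformatting.
# Feature names are assumptions for illustration, not the thesis's features.
GAMEABLE = {"empty_line_ratio", "whitespace_ratio", "mean_line_length"}

def drop_gameable(feature_dict: dict[str, float]) -> dict[str, float]:
    """Keep only features that are hard to game by reformatting code."""
    return {name: value for name, value in feature_dict.items()
            if name not in GAMEABLE}

sample = {"empty_line_ratio": 0.12, "whitespace_ratio": 0.31,
          "comment_density": 0.05, "identifier_entropy": 3.7}
print(drop_gameable(sample))  # {'comment_density': 0.05, 'identifier_entropy': 3.7}
```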

In our second analysis, our classifier, trained and evaluated on programming assignments, achieved an F1-score of 0.69 and an AUC-ROC of 0.73. A follow-up evaluation applied this classifier to assignments submitted in 2023, after the release of Copilot and ChatGPT; it identified 13 of 54 submissions as GPT-4 generated, with an accuracy rate of 73%. We believe educators should recognize and proactively address this emerging trend within academic settings.
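
In deployment terms, screening a new batch of submissions amounts to batch prediction with the trained model. The sketch below reuses the hypothetical `clf` and `stylometric_features` from the first snippet on an assumed list of 2023 submissions; none of these names come from the thesis itself.

```python
# Sketch: screening unlabeled submissions with the trained classifier from
# the first snippet (`clf`, `stylometric_features`). `submissions_2023` is
# an assumed list of raw source strings standing in for real assignments.
submissions_2023 = ["def f(n):\n    return n * 2\n", "print('ok')\n\n"]
flags = clf.predict([stylometric_features(src) for src in submissions_2023])
print(f"Flagged {int(flags.sum())} of {len(submissions_2023)} submissions as GPT-4 generated")
```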


To attend this master’s thesis presentation in person, please go to DC 2310. You can also attend virtually using Zoom at https://uwaterloo.zoom.us/j/96841203491.

Location
DC - William G. Davis Computer Research Centre
Hybrid: DC 2310 | Online master’s thesis presentation
200 University Avenue West
Waterloo, ON N2L 3G1
Canada