Please note: This PhD seminar will take place in DC 2310.
Murray Dunne, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Sebastian Fischmeister
With the rise of AI coding assistants comes a commensurate rise in AI-generated code in modern embedded systems. Developers looking to save time may turn to network message parsing as a convenient application for Large Language Model (LLM)-generated code, since well-specified, public network standard documents are readily available as prompt material. In our previous work, we showed that LLM-generated code exhibits many characteristic weaknesses from a fuzzing perspective. We now extend this work with a static analysis perspective on the weaknesses of LLM-generated networking code. We discuss these weaknesses specifically as they concern memory management. Finally, we explore the effects of different prompting approaches on LLM-generated code.