Bushra Aloraini, PhD candidate
David R. Cheriton School of Computer Science
Software bugs significantly reduce system quality and greatly increase cost. Indeed, it is estimated that software bugs cost the worldwide economy US$1.1 trillion in 2016. Some bugs are security vulnerabilities: exploitable code errors that can be manipulated to compromise the intended operation of the software in a malicious way. Static Application Security Testing (SAST) tools help find security vulnerabilities in source code. However, these tools are known to produce a large number of false positive warnings.
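
To make the setting concrete, here is a small, hypothetical C++ fragment of the kind a SAST tool typically flags: an unbounded copy into a fixed-size buffer (a classic buffer overflow). The function names are invented for illustration and are not drawn from the studied projects.

    // Hypothetical example of code a SAST tool would flag:
    // copying untrusted input into a fixed-size buffer (buffer overflow).
    #include <cstring>

    void store_username(const char* input) {
        char buffer[16];
        std::strcpy(buffer, input);  // flagged: no bounds check; input longer
                                     // than 15 chars overflows the stack buffer
    }

    // A safer variant that bounds the copy and guarantees termination:
    void store_username_safe(const char* input) {
        char buffer[16];
        std::strncpy(buffer, input, sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0';
    }

Whether the first function is a true positive depends on whether input can actually carry attacker-controlled data, which is exactly the judgment that drives false positive rates.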
In this talk, we will discuss our study of SAST-produced warnings and their evolution over time, which measures the false and true positive rates of such tools. We examine 116 large and popular C++ projects using six state-of-the-art open-source and commercial SAST tools that detect security vulnerabilities, and we compare the different types of warnings the tools report. To track what happens to a warning over time, we use a novel framework that traces source code lines across commits.
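
The framework itself is not reproduced here, but the core idea behind line tracing can be sketched. The fragment below is a simplified illustration under our own assumptions, not the study's implementation: it maps a line number in a file to its position after one commit by replaying the commit's diff hunks, where the Hunk type and trace_line function are names invented for this sketch.

    // Rough sketch (not the study's framework): map a line number in the
    // old revision of a file to its position after one commit.
    #include <optional>
    #include <vector>

    // One diff hunk: old_count lines starting at old_start were replaced by
    // new_count lines starting at new_start (1-based). For a pure insertion
    // (old_count == 0), old_start names the first old line after the
    // insertion point.
    struct Hunk {
        int old_start, old_count;
        int new_start, new_count;
    };

    // Returns the line's new position, or std::nullopt if the commit
    // deleted or rewrote it (the trace for that warning ends there).
    std::optional<int> trace_line(int line, const std::vector<Hunk>& hunks) {
        int offset = 0;
        for (const Hunk& h : hunks) {
            if (line < h.old_start)
                break;  // hunks are ordered; later hunks cannot affect this line
            if (line < h.old_start + h.old_count)
                return std::nullopt;  // line falls inside a changed region
            offset += h.new_count - h.old_count;
        }
        return line + offset;
    }

Applying such a mapping commit by commit (plus handling for renamed files) yields the lifetime of each warning's source line across the project history.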
Our findings show that the pattern of warnings is stable over time for all studied SAST tools, and that most tools focus on input validation and representation, API abuse, and code quality warnings. These warning types appear in real-world projects in comparable patterns, although input validation and representation warnings are detected significantly more often than the others. In addition, we find that API abuse warnings are generally fixed faster than other warning types.