Please note: This master’s thesis presentation will take place online.
Mehran Meidani, Master’s candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Shane McIntosh
Dependency graphs are at the heart of software analytics tasks such as change impact analysis, test selection, and maintenance analysis. Despite their importance, current approaches to extracting and analyzing dependency graphs overlook configuration settings and code-adjacent artifacts in large software systems. These shortcomings directly affect the results of the aforementioned analytics tasks. Indeed, changing a software application with many build-time configuration settings may introduce unexpected side effects. For example, a change intended to be specific to a platform (e.g., Windows) or product configuration (e.g., community edition) might impact other platforms or configurations. Conversely, a change intended to apply to a set of platforms or configurations may be unintentionally limited to a subset of them. Beyond build-time configuration settings, software projects require a broad range of expertise to develop. For example, producing a video game requires engineers (e.g., software developers) and artists (e.g., 3D designers) to iterate on the same project simultaneously. In such projects, a change to the work products of one team can impact the work of other teams. As a result, analytics tasks should consider both intra- and inter-dependencies among the artifacts produced by different teams. For instance, the quality assurance team’s focus for a change that is local to one team differs from its focus for a change that impacts other teams.
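The platform-leakage scenario described above can be sketched as a toy impact analysis over a dependency graph whose edges carry build-time configuration guards. This is an illustrative sketch under assumed data, not DiPiDi itself; all file, deliverable, and configuration names are hypothetical.

```python
# Illustrative sketch (not DiPiDi): a dependency graph whose edges are
# guarded by build-time configuration conditions. A guard of None means
# the edge holds in every configuration.

def impacted_deliverables(edges, changed_file, config):
    """Return artifacts reachable from changed_file under `config`.

    edges: list of (source, target, guard) triples, where guard is a
    predicate over the configuration dict, or None (unconditional).
    """
    impacted, frontier = set(), [changed_file]
    while frontier:
        node = frontier.pop()
        for src, dst, guard in edges:
            if src == node and (guard is None or guard(config)):
                if dst not in impacted:
                    impacted.add(dst)
                    frontier.append(dst)
    return impacted

# A file the developer believes is Windows-only; one edge is
# unconditional, so the change also leaks into a shared library.
edges = [
    ("registry_utils.c", "app_windows.exe",
     lambda cfg: cfg["platform"] == "windows"),
    ("registry_utils.c", "common_lib.a", None),  # unconditional!
    ("common_lib.a", "app_linux",
     lambda cfg: cfg["platform"] == "linux"),
]

print(sorted(impacted_deliverables(
    edges, "registry_utils.c", {"platform": "windows"})))
# -> ['app_windows.exe', 'common_lib.a']
print(sorted(impacted_deliverables(
    edges, "registry_utils.c", {"platform": "linux"})))
# -> ['app_linux', 'common_lib.a']
```

Note how the same change impacts a different set of deliverables under each configuration; an analysis that ignores the guards would report the union of both sets and overstate (or understate) the change’s exposure.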
Indeed, understanding the exposure of changes is an important risk mitigation step in change-based development approaches. In this thesis, we first present DiPiDi, a new approach that assesses the exposure of source code changes under different build-time configuration settings by statically analyzing build specifications. To evaluate our approach, we produce a prototype implementation of DiPiDi for the CMake build system. We measure the effectiveness and efficiency of developers performing five tasks in which they must identify the deliverable(s) and the conditions under which a source code change will propagate. We assign participants to three groups: without explicit tool support, supported by existing impact analysis tools, and supported by DiPiDi. While our study does not have the statistical power to make generalized quantitative claims, we manually analyze the full distribution of our study’s results and show that DiPiDi provides a net benefit for its users. Through our experimental evaluation, we show that DiPiDi is associated with a 36 percentage point improvement in F1-score on average when identifying impacted deliverables, and an average reduction of 0.62 units of distance when ranking impacted patches. Furthermore, DiPiDi reduces task time by 42% on average for our participants compared to a competing impact analysis approach. DiPiDi’s improvements to both effectiveness and efficiency are especially pronounced in complex programs with many compile-time configurations.
Next, to extract and analyze cross-disciplinary dependencies, we propose a multidisciplinary dependency graph. We instantiate our idea by developing tools that extract dependencies and construct the graph at a multinational video game organization with more than 18,000 employees. Our analysis of historical data from a recently launched video game project demonstrates that 41% of the studied source code changes impact other teams’ artifacts, highlighting the importance of analyzing inter-artifact dependencies. We also observe that 66% of the studied changes do not modify the graph, suggesting that prior graph versions often remain accurate for analytics tasks (e.g., impact analysis); however, rapid incremental approaches are needed to update the graph and ensure its usefulness for all types of changes. The enhanced dependency graph presented in this thesis can be leveraged to develop a new generation of risk assessment, build failure prediction, and code review prioritization tools.
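The team-local versus cross-team distinction can be illustrated with a toy dependency graph whose artifacts are tagged with the discipline that owns them. This sketch is purely illustrative and is not the tooling built in the thesis; all artifact names, team names, and the graph shape are hypothetical.

```python
# Illustrative sketch (hypothetical data): classify a change as
# team-local or cross-team using a dependency graph whose nodes are
# tagged with the owning discipline.

deps = {  # artifact -> artifacts that directly depend on it
    "character.fbx": ["character_loader.cpp"],   # 3D asset
    "character_loader.cpp": ["game.exe"],
    "ai_tuning.json": ["ai_system.cpp"],          # design data
    "ai_system.cpp": ["game.exe"],
}
owner = {
    "character.fbx": "art",
    "character_loader.cpp": "engineering",
    "ai_tuning.json": "design",
    "ai_system.cpp": "engineering",
    "game.exe": "engineering",
}

def downstream(artifact):
    """All artifacts transitively depending on `artifact`."""
    seen, frontier = set(), [artifact]
    while frontier:
        for nxt in deps.get(frontier.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def is_cross_team(changed):
    """True if the change reaches artifacts owned by another team."""
    teams = {owner[a] for a in downstream(changed)}
    return bool(teams - {owner[changed]})

print(is_cross_team("character.fbx"))   # True: art change reaches engineering
print(is_cross_team("ai_system.cpp"))   # False: stays within engineering
```

In a real setting, such a classification could route cross-team changes to broader review or quality assurance, while team-local changes follow a lighter-weight process.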