PhD candidate Hung Pham has received an ACM SIGSOFT Distinguished Paper Award at ASE 2020, the 35th IEEE/ACM International Conference on Automated Software Engineering. He is co-supervised by Lin Tan, a professor at Purdue University and an adjunct professor in the Department of Electrical and Computer Engineering at the University of Waterloo, and by Yaoliang Yu, a professor at the Cheriton School of Computer Science.
Hung’s paper, titled “Problems and opportunities in training deep-learning software systems: an analysis of variance,” is one of the first studies to examine variance in deep-learning models and the extent to which computer science researchers and computing professionals are aware of this variance.
Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from raw input data. Deep-learning training algorithms are not deterministic, which means that identical training runs — runs that use the same training data, algorithm, and network — can produce different models with different accuracies and training times.
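This nondeterminism is easy to see even without a deep network. The minimal sketch below (not from the paper; the tiny logistic-regression "network" and synthetic dataset are illustrative assumptions) trains several times on the same fixed data with the same algorithm, varying only implementation-level randomness — weight initialization and example shuffling — and reports the accuracy spread across the "identical" runs.

```python
import numpy as np

def train_once(seed):
    """One 'identical' training run: same data, same algorithm, same model.

    Only implementation-level randomness differs between runs: the random
    weight initialization and the order examples are visited in."""
    # Fixed synthetic dataset, identical for every run (seed 0 for the data).
    data_rng = np.random.default_rng(0)
    X = data_rng.normal(size=(200, 5))
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = (X @ true_w + data_rng.normal(scale=2.0, size=200) > 0).astype(float)

    run_rng = np.random.default_rng(seed)
    w = run_rng.normal(scale=0.5, size=5)   # nondeterministic initialization
    idx = np.arange(200)
    for _ in range(5):
        run_rng.shuffle(idx)                # nondeterministic visit order
        for i in idx:                       # plain SGD on logistic loss
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w -= 0.1 * (p - y[i]) * X[i]
    preds = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
    return (preds == y).mean()              # training accuracy of this run

accs = [train_once(seed) for seed in range(5)]
print("accuracies:", accs)
print("spread across identical runs:", max(accs) - min(accs))
```

Real deep-learning training has many more such sources of variance (GPU kernel scheduling, parallel reductions, data-augmentation randomness), which is why the paper's measured differences are far larger than this toy setup suggests.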
To explore variability in deep-learning models, Hung and his coauthors conducted experiments on three datasets with six popular networks. The results revealed large accuracy differences among identical training runs. Even after excluding weak models, the researchers found that the accuracy difference was still almost 11%. In addition, factors related to implementation alone caused accuracy differences across identical training runs of up to 2.9%, per-class accuracy differences of up to 52.4%, and differences in training time to convergence of up to 145.3%.
Hung and his coauthors then surveyed 901 participants to determine the extent to which computer scientists and practitioners are aware of this variance.
“The results of the survey found that about four out of every five participants were unaware of or unsure about any implementation-level variance,” said Professor Yaoliang Yu. “Hung looked a bit deeper into this by also conducting a literature survey, which found that only around one in five papers in recent top software engineering, artificial intelligence, and systems conferences used multiple identical training runs to quantify the variance of their deep-learning approaches.”
“Our award-winning paper raises awareness of deep-learning variance, which poses opportunities as well as challenges for researchers and practitioners,” added Professor Lin Tan. “We hope it will also direct researchers to tackle challenging tasks such as creating deterministic deep-learning implementations to facilitate debugging as well as improving the reproducibility of deep-learning systems and results.”
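One common first step toward the deterministic implementations Professor Tan describes is pinning every source of pseudo-randomness a training script draws from. The sketch below is a generic illustration, not the paper's method: it seeds Python's and NumPy's generators so two runs of the same randomized computation become bit-identical. (Real frameworks need more — GPU kernels, for example, often require separate deterministic-execution modes.)

```python
import random
import numpy as np

def seed_everything(seed):
    """Pin the pseudo-random sources a plain Python/NumPy script uses.

    This alone is not enough for a full deep-learning stack, where GPU
    kernels and parallel reductions add their own nondeterminism."""
    random.seed(seed)
    np.random.seed(seed)

def noisy_computation():
    # Stand-in for a training step that consumes randomness
    # from both the stdlib and NumPy generators.
    return np.random.normal(size=3) + random.random()

seed_everything(42)
first = noisy_computation()
seed_everything(42)
second = noisy_computation()
assert np.allclose(first, second)  # with seeds pinned, the runs match
```

Pinning seeds like this also aids debugging, since a failing run can be replayed exactly.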
Along with Hung Pham and his co-supervisors Professors Tan and Yu, the study coauthors are Thibaud Lutellier, a PhD candidate in the Department of Electrical and Computer Engineering at the University of Waterloo; Shangshu Qian, Jiannan Wang, and Jonathan Rosenthal, all PhD students at Purdue University; and Nachiappan Nagappan, partner researcher at Microsoft Research.
Hung will present the team's work at ASE 2020, which is being held virtually this year from September 21 to 25. ASE is the premier research forum for automated software engineering. Each year, the international conference brings together researchers and practitioners from academia and industry to discuss foundations, techniques, and tools for automating the analysis, design, implementation, testing, and maintenance of large software systems.
Citation
Viet Hung Pham, Shangshu Qian, Jiannan Wang, Thibaud Lutellier, Jonathan Rosenthal, Lin Tan, Yaoliang Yu, Nachiappan Nagappan. Problems and opportunities in training deep-learning software systems: an analysis of variance. ASE 2020 Research Papers.