Hung Pham receives an ACM SIGSOFT Distinguished Paper Award at ASE 2020

Thursday, August 27, 2020

PhD candidate Hung Pham has received an ACM SIGSOFT Distinguished Paper Award at ASE 2020, the 35th IEEE/ACM International Conference on Automated Software Engineering. He is co-supervised by Professor Lin Tan, of the Department of Computer Science at Purdue University and an adjunct professor in the Department of Electrical and Computer Engineering at the University of Waterloo, and by Cheriton School of Computer Science Professor Yaoliang Yu.

Hung’s paper, titled “Problems and opportunities in training deep-learning software systems: an analysis of variance,” is one of the first studies to examine variance in deep-learning models and the extent to which computer science researchers and computing professionals are aware of this variance.

Photo, L to R: Cheriton School of Computer Science PhD candidate Hung Pham; Professor Lin Tan, Department of Computer Science at Purdue University and adjunct professor in the Department of Electrical and Computer Engineering at the University of Waterloo; Cheriton School of Computer Science Professor Yaoliang Yu

Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from raw input data. Deep-learning training algorithms are not deterministic. This means that identical training runs — runs that use the same training data, algorithm, and network — will produce different models with different levels of accuracy and training time.
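To make the nondeterminism concrete, here is a minimal PyTorch sketch (an illustration only, not the experimental setup used in the study): the data, architecture, and hyperparameters are identical on every call, yet each run ends with a different accuracy because the weight initialization is left random.

```python
import torch
import torch.nn as nn

def train_once() -> float:
    # Same data, architecture, and hyperparameters on every call.
    torch.manual_seed(0)                      # fix the synthetic dataset
    X = torch.randn(512, 20)
    y = (X.sum(dim=1) > 0).float()
    torch.seed()                              # re-randomize weight initialization
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(1), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        preds = model(X).squeeze(1) > 0
        return (preds == (y > 0.5)).float().mean().item()

# Identical configuration, yet the final accuracy differs from run to run.
print([round(train_once(), 3) for _ in range(5)])
```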

To explore variability in deep-learning models, Hung and his coauthors conducted experiments on three datasets with six popular networks. The results revealed large accuracy differences among identical training runs; even after excluding weak models, the accuracy difference was still almost 11%. In addition, factors related to implementation alone caused accuracy differences of up to 2.9% across identical training runs, per-class accuracy differences of up to 52.4%, and differences in training time to convergence of up to 145.3%.
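For readers unfamiliar with these measures, the sketch below shows one plausible way such differences could be summarized once several identical runs have finished. The per-run numbers and the exact formulas are illustrative assumptions, not the paper's data or definitions.

```python
import numpy as np

# Illustrative per-run records -- NOT the paper's data.
runs = [
    {"acc": 0.912, "per_class": [0.95, 0.88, 0.90], "epochs_to_converge": 42},
    {"acc": 0.897, "per_class": [0.93, 0.82, 0.94], "epochs_to_converge": 55},
    {"acc": 0.905, "per_class": [0.96, 0.85, 0.86], "epochs_to_converge": 48},
]

accs = np.array([r["acc"] for r in runs])
per_class = np.array([r["per_class"] for r in runs])   # shape: runs x classes
epochs = np.array([r["epochs_to_converge"] for r in runs], dtype=float)

# Overall accuracy difference across identical runs.
print("accuracy difference:", accs.max() - accs.min())
# Largest per-class accuracy difference across runs.
print("max per-class difference:", (per_class.max(axis=0) - per_class.min(axis=0)).max())
# Relative difference in training time (epochs) to convergence.
print("time-to-convergence difference (%):", 100 * (epochs.max() - epochs.min()) / epochs.min())
```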

Hung and his coauthors then surveyed 901 participants to determine the extent to which computer scientists and practitioners are aware of this variance.

“The results of the survey found that about four out of every five participants were unaware of or unsure about any implementation-level variance,” said Professor Yaoliang Yu. “Hung looked a bit deeper into this by also conducting a literature survey, which found that only around one in five papers in recent top software engineering, artificial intelligence, and systems conferences has used multiple identical training runs to quantify the variance of their deep-learning approaches.”

“Our award-winning paper raises awareness of deep-learning variance, which poses opportunities as well as challenges for researchers and practitioners,” added Professor Lin Tan. “We hope it will also direct researchers to tackle challenging tasks such as creating deterministic deep-learning implementations to facilitate debugging as well as improving the reproducibility of deep-learning systems and results.”
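As a rough illustration of what a more deterministic implementation involves in practice, the sketch below gathers the standard reproducibility settings that PyTorch exposes. PyTorch is used here only as an example framework, and even with all of these settings some sources of variance may remain, which is part of why the authors frame determinism as a challenging open task.

```python
import os
import random

import numpy as np
import torch

def make_run_reproducible(seed: int = 0) -> None:
    """Fix the usual sources of randomness in a PyTorch training run.

    This reduces, but does not necessarily eliminate, run-to-run variance;
    call it before any CUDA work so the environment variable takes effect.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

    # Request deterministic kernels; ops without a deterministic
    # implementation will raise an error instead of silently varying.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```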

Along with Hung Pham and his co-supervisors, Professors Tan and Yu, the study's coauthors are Thibaud Lutellier, a PhD candidate in the Department of Electrical and Computer Engineering at the University of Waterloo; Shangshu Qian, Jiannan Wang, and Jonathan Rosenthal, all PhD students at Purdue University; and Nachiappan Nagappan, partner researcher at Microsoft Research.

Hung will present the team's work at ASE 2020, which is being held virtually this year from September 21 to 25. ASE is the premier research forum for automated software engineering. Each year, the international conference brings together researchers and practitioners across academia and industry to discuss foundations, techniques, and tools for automating the analysis, design, implementation, testing, and maintenance of large software systems.


Citation
Viet Hung Pham, Shangshu Qian, Jiannan Wang, Thibaud Lutellier, Jonathan Rosenthal, Lin Tan, Yaoliang Yu, Nachiappan Nagappan. Problems and opportunities in training deep-learning software systems: an analysis of variance. ASE 2020 Research Papers.
