Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and Beyond

Daniel M. Berry

Cheriton School of Computer Science
University of Waterloo
Waterloo, ON, Canada

Abstract:

Natural language (NL) processing has been used since the 1980s to construct tools for performing NL requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms to implement these NL RE tools.

Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without being adapted to the requirements of the RE context in which the tool is used.
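For reference, the two measures named in the title have the standard IR definitions below, stated here with generic set names chosen only for illustration: for a tool that returns a set of answers, of which some subset is actually correct,

\[
\mathit{precision} = \frac{|\mathit{returned} \cap \mathit{correct}|}{|\mathit{returned}|},
\qquad
\mathit{recall} = \frac{|\mathit{returned} \cap \mathit{correct}|}{|\mathit{correct}|}.
\]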

This talk discusses the problem and considers the evaluation of tools for several NL RE tasks in several contexts. It aims to help the RE field begin to evaluate each of its tools consistently, according to the requirements of the tool's task.

I benefited from discussions, and a panel, with Jane Cleland-Huang, Alessio Ferrari, Walid Maalej, John Mylopoulos, and Didar Zowghi.