Tutors of courses like CS 115 and CS 135 likely do not need to worry about the details of I/O, as those courses work with pure functions in Scheme rather than input and output side effects. However, languages like C rely almost entirely on I/O for testing programs.
To write I/O tests, you need to know which file is used for input, which file the output is compared against, and how that comparison is performed.
In addition to the options specified in options.ss files in /u/csXXX/marking/assignment/suite/in, BitterSuite can also read any other files that may be necessary to complete testing.
One special file that all languages understand is input. When a file with this name appears in the hierarchy, it is redirected as standard input for all tests at the same level or below.
Typically, this file simply provides the input for a particular test or set of tests. However, it can also be pressed into service to simplify testing tasks that do not involve any I/O directly, such as SchemeModuleCreateDataFromInput.
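For example, an input file placed partway down the test hierarchy applies to every test beneath it. The layout below is purely illustrative (question and test numbers will vary by assignment):

    /u/csXXX/marking/assignment/suite/in/
        options.ss
        1/
            input          <- standard input for every test under 1/
            1/
                options.ss
            2/
                options.ss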
BitterSuite supports two paradigms for output files.
This feature is at least partly implemented in the version of BitterSuite in the development repository as of the Fall 2010 term (NB: once this version is the default, this sentence should be removed). In addition to specifying input files in the directory hierarchy, it is also acceptable to specify output files (note to developers: expected probably would have been a better name, or perhaps the verbose expected_output?).
BitterSuite will compare the output generated by student tests against files in the answers directory, /u/csXXX/marking/assignment/suite/answers/.
While output files could be placed there manually, doing so requires understanding BitterSuite's naming convention (which, while not complicated, is a detail you shouldn't have to be concerned with); instead, they are typically generated automatically from a sample solution by RSTA.
To generate model output, place the sample solution in /u/csXXX/marking/assignment/solution/ and run rsta assignment suite.
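Concretely, the steps look something like the following; the location of the sample solution is a placeholder, and the rsta invocation is the one given above:

    # place the sample solution where BitterSuite expects it
    cp /path/to/model-solution/* /u/csXXX/marking/assignment/solution/

    # run the test suite against it; this populates suite/answers/
    rsta assignment suite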
The answers directory should then be generated automatically by running your BitterSuite testing hierarchy against the model solution.
BitterSuite provides a single default program for output comparison. It compares the output of the test against the model output using diff -ibB -q on the two files, assigning a mark of 100% if that command considers the files identical, and a mark of 0% otherwise along with a message pointing to the output comparison. The assumption is that this should work well for the majority of I/O tests.
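For reference, -i makes the comparison case-insensitive, -b ignores changes in the amount of whitespace, -B ignores blank lines, and -q only reports whether the files differ. The same check can be reproduced by hand on any pair of files (the file names below are placeholders); diff exits with status 0 when it considers the files identical under those options:

    diff -ibB -q student_output model_output && echo "full marks" || echo "no marks"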
This behaviour can be overridden with another program by creating a file (let's assume it's called altdiff), placing it in the directory /u/csXXX/marking/assignment/suite/provided, and applying it to tests with the option (diff altdiff).
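For example, the options.ss for the affected test (or group of tests) would gain the option alongside whatever options it already specifies; everything here other than the (diff altdiff) line is illustrative:

    (desc "echoes the input back in sorted order")   ; illustrative existing option
    (value 10)                                       ; illustrative existing option
    (diff altdiff)                                   ; use provided/altdiff for output comparison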
This command will receive two command-line parameters, each of which is the path to one of the files to compare. Conceptually, the next step is simple: compare these two files according to some criteria to determine the grade to assign.
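As a sketch only, the script below shows the two-argument calling convention, using an order-insensitive, case-insensitive comparison as its (hypothetical) criterion. It signals its result through its exit status in the style of diff; how a real comparator must report a grade back to BitterSuite is governed by the requirements discussed below, so treat this purely as an illustration of the interface:

    #!/bin/sh
    # altdiff -- hypothetical custom comparator (sketch only).
    # BitterSuite passes two arguments, each the path to one of the
    # files to compare; the order is not relied on here, so the two
    # files are treated symmetrically.
    file1="$1"
    file2="$2"

    normalize() {
        # lower-case the text, drop blank lines, sort the remaining lines
        tr 'A-Z' 'a-z' < "$1" | grep -v '^[[:space:]]*$' | sort
    }

    if [ "$(normalize "$file1")" = "$(normalize "$file2")" ]; then
        exit 0    # equivalent under this criterion
    else
        exit 1    # different
    fi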
Unfortunately, the details of this alternate command are currently not quite as straightforward as might be expected. There are two major requirements:
It should be possible to simplify this in two ways (see BitterSuiteImprovePit):
TODO: Actually put examples here.