This is a checklist to go through after finishing the test setups and before running the tests on the students' submissions.
1. Use the latest versions of all scripts; marking/provided_library always has the latest versions.
2. Apart from the number of tests, everything in test.pt and test.0 should be exactly the same: the same disallowed.rkt file(s), the same recursion/local restrictions, and so on. Generally you should copy the latest version of test.pt into test.0 and then add the tests to it. If anything else about the setup differs, there should be a very good reason for it. Diffing all the setup files (options.rkt, disallowed.rkt, etc.) in the two setups, for example by running "diff -r test.pt test.0", is a good way to make sure there are no issues.
3. options.rkt in each test suite should be set up correctly.
4. Stepper Questions are set up properly; refer to StepperQuestions.
5. Disallowed functions/restrictions are set up correctly in the questions. A good way to test your setup is to include, in every file, a call to a function that is allowed on this assignment but was disallowed on the previous one; the test result should not complain about anything (see the sketch after this list).
6. After collecting the students' stepper questions (if there are any), it is important to run the tests on a few students' submissions.
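For item 5, a probe file might look like the following sketch, written in one of the teaching languages with the usual DrRacket header omitted. Here foldr stands in for whatever function was disallowed on the previous assignment but is allowed on this one (a hypothetical choice):

    ;; probe for disallowed.rkt: foldr was disallowed on the previous
    ;; assignment but is allowed on this one, so nothing should be flagged.

    ;; sum-list: (listof Num) -> Num
    (define (sum-list lon)
      (foldr + 0 lon))

    (check-expect (sum-list (list 1 2 3)) 6)

If the test result complains about foldr here, the disallowed.rkt file(s) were not updated for this assignment.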
When setting up test collection and running the tests, also go through the following:
1. The language level at the beginning of [file]-tc.rkt should be the same as the one indicated in the assignment (see the header sketch after this list).
2. The collector at the beginning of [file]-tc.rkt should be the one the assignment is looking for.
3. file-list.txt should contain all the files students need to submit (a sample listing appears after this list).
4. test-results (a directory next to test-cases) should be empty before running tests on students' submissions; "ls -A test-results" should print nothing.
5. make-tc in [file]-tc.rkt should have no typos.
6. Pretending to be a student and checking that you can understand all the messages about missing tests (in make-tc) is a good sanity check.
7. The [function-name/valid?] function you made to collect tests should cause no run-time errors, no matter what data types appear in check-expect/check-within/check-error (see the sketch after this list).
8. All the helper functions you used for the test-case setup should work as you intended.
9. Running the tests on your own submission, which has no missing tests, is important; the result should not report any missing tests.
10. Running the tests on some other ISAs' submissions, and then on a large batch of students' submissions, is a good approach; there should be no surprising errors.
11. Take a look at a few students' test results and solutions to see whether everything works as intended.
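For item 1 (the language level), DrRacket records the language level in a metadata header at the top of the file, so it can be checked at a glance. For example, a [file]-tc.rkt saved in Intermediate Student starts roughly like this (the modname here is hypothetical and the trailing settings are elided):

    ;; The first three lines of this file were inserted by DrRacket. They record metadata
    ;; about the language level of this file in a form that our tools can easily process.
    #reader(lib "htdp-intermediate-reader.ss" "lang")((modname a01-tc) ...)

The reader name is what to look at: htdp-beginner-reader for Beginning Student, htdp-intermediate-reader for Intermediate Student, and so on.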
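For item 3, file-list.txt is simply one expected filename per line. A hypothetical listing for an assignment with two files:

    a01q1.rkt
    a01q2.rkt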
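For item 7 (the [function-name/valid?] function), the usual failure mode is applying a selector or arithmetic to data of an unexpected type. Making every check total, by guarding each step behind a predicate in a short-circuiting and, means the function answers #false instead of erroring, whatever a student puts inside a check-expect. A minimal sketch, where candidate-valid? and the particular shape it checks are hypothetical:

    ;; candidate-valid?: Any -> Boolean
    ;; produces #true iff tc is a two-element list whose first element is a
    ;; number. Each selector (length, first) is reached only after a predicate
    ;; has confirmed the shape, so no input can cause a run-time error.
    (define (candidate-valid? tc)
      (and (list? tc)
           (= (length tc) 2)
           (number? (first tc))))

    (check-expect (candidate-valid? (list 1 "a")) #true)
    (check-expect (candidate-valid? "junk") #false)
    (check-expect (candidate-valid? empty) #false)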