This is a checklist to go through after finishing test setups and before running tests on every student's submission.

Check Autotesting Setups

1. The latest versions of all scripts should be used, and marking/provided_library should always contain the latest versions.

2. Except for the number of tests, everything in the two setups (including test.0) should be exactly the same: the disallowed.rkt file(s), recursion/local restrictions, etc. Generally, you should copy the latest version into test.0 and then add tests to it. If anything in the setup differs, there should be a very good reason for it. Running "diff" on all the files (options.rkt, disallowed.rkt, etc.) in the two setups is a good way to make sure there are no issues.

3. options.rkt in each test suite

  • The current language level should match the language level stated in the assignment.
  • The loadcode files and file names should be correct.
  • disallowed.rkt in the modules for each question is set up properly. If you include other files in the "modules" part, make sure disallowed.rkt is the last entry in the list (do not place the same disallowed.rkt in the middle).
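As a hypothetical illustration (the exact key names in options.rkt depend on your autotesting version, so treat this shape as an assumption), a modules list for a question might look like the fragment below. The important point is only the ordering: disallowed.rkt is the last entry.

```racket
;; Hypothetical options.rkt fragment -- the key name "modules" and the
;; file names are illustrative only. The point being checked:
;; disallowed.rkt must come LAST in the list, never in the middle.
(modules "helper.rkt" "provided.rkt" "disallowed.rkt")
```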
4. File/folder permissions should allow reading and executing.

5. Stepper Questions are set up properly - refer to StepperQuestions

6. Disallowed Functions/Restrictions in the Questions

  • disallowed.rkt should contain functions that cannot be used under the current teaching language.
  • "false?" and "member" may not always appear in the permitted-function list on the assignment instruction page, but they are usually acceptable for assignments. It is best to ask the instructors whether to include them when they are not in the permitted list. (whitelist->blacklist should be the latest version, which automatically includes member if member? is included.)
  • If lambda is allowed, λ (the symbol form of lambda) should be allowed as well.
  • A good check is to run your test setup on a submission that, in every file, calls a function allowed in this assignment but disallowed in the previous one. The test results should not complain about anything.

  • Another good check is to run your test setup on a submission that uses a function banned in the question/assignment; the test should flag it.
  • Checking lambda/local/recursion in your submission would be a good idea if there are restrictions on them.
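The λ/lambda point above can be sanity-checked with a pair of equivalent definitions (a minimal sketch; the function name is made up). If lambda is permitted, a submission using the λ form must not be flagged either, and both definitions behave identically:

```racket
#lang racket
;; Both forms define the same one-argument function. If "lambda" passes
;; the disallowed-function check, the λ symbol form must pass as well.
(define add1-a (lambda (x) (+ x 1)))
(define add1-b (λ (x) (+ x 1)))

(displayln (= (add1-a 4) (add1-b 4))) ; prints #t
```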
7. For questions that involve including other files with (require ...)
  • Write tests based on your judgment. For example, if some files require another file, check both a submission that includes the required file and one that omits it.
8. Review all the correctness tests
  • All the tests must follow any restrictions stated in the question, such as the order of the values in a list, the ordering of strings (e.g. increasing lexicographic order), positive numbers only, etc.
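As an illustration of the restriction check (the function name and question are hypothetical): if a question requires the result to be sorted in increasing lexicographic order, the expected values in your correctness tests must themselves obey that ordering.

```racket
#lang racket
;; Hypothetical question: produce the given names sorted in increasing
;; lexicographic order. string<? gives exactly that ordering.
(define (sort-names names) (sort names string<?))

;; The expected list in the test must itself respect the restriction:
(equal? (sort-names (list "bob" "alice" "carol"))
        (list "alice" "bob" "carol")) ; => #t
```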
9. Running tests on some ISAs' and instructors' solutions would be a good idea. The test result should not surprise you.

10. Running tests on some students' submissions after collecting students' stepper questions (if they exist) is important.

  • Students should receive marks for stepper questions (if they actually completed these questions).

Check Testcases Setups

1. The language level at the beginning of [file]-tc.rkt should be the same as the one indicated in the assignment.

2. The collector at the beginning of [file]-tc.rkt should be the one the assignment is looking for.

3. file-list.txt should contain all the files students need to submit.

4. test-results (a directory next to test-cases) should be empty before running tests on students' submissions.

5. make-tc in [file]-tc.rkt should have no typos.

6. Pretend to be a student and check whether you can understand all the messages about missing tests (in make-tc).

7. The [function-name/valid?] function you wrote to collect tests should cause no run-time errors, no matter what data types appear in a check-expect/check-within/check-error.

  • e.g., don't forget to check (list? (second fcn-app)) before (ormap/andmap ... (second fcn-app)).
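A minimal sketch of that guard, assuming (this shape is an assumption, not the actual collector) that fcn-app is a parsed check-expect form such as '(check-expect (f 1 2) 3): check list? first, so an atomic second element can never reach ormap/andmap and crash the collector.

```racket
#lang racket
;; Hypothetical validity check for a collected test. fcn-app is assumed
;; to be a quoted check-expect form, e.g. '(check-expect (f 1 2) 3).
(define (valid-test? fcn-app)
  (and (list? fcn-app)
       (= (length fcn-app) 3)
       (list? (second fcn-app))              ; guard FIRST ...
       (andmap (lambda (x) (not (void? x)))  ; ... so mapping is now safe
               (second fcn-app))))

(valid-test? '(check-expect (f 1 2) 3))  ; => #t
(valid-test? '(check-expect 42 42))      ; => #f, with no run-time error
```

Without the list? guard, the andmap call would raise a contract error on the second example, since 42 is not a list.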

8. All the helper functions you used for the test cases setup should work as you intended.

9. Running tests on your own submission that has no missing tests is important. The result should not complain about any missing test.

10. Running tests on some other ISAs' submissions and on a large batch of student submissions would be a good approach. There should be no surprising errors.

11. You can take a look at a few of the students' test results and solutions, and see if everything works as intended.

-- Xiyu Chen - 2021-12-14


Topic revision: r3 - 2021-12-15 - XiyuChen