Support for extra hint texts (feedback, correction hints)
The updated version of the inclass-exam now supports extra hint texts for lecturers and students. This extra information is displayed after the question and is used either as feedback for the student (e.g., in the exam review, "Einsicht") or as information for the grader (evaluation guidelines, especially useful when multiple graders are involved). The kind of information the question author can provide is controlled via the setup. The providable feedback is controlled via the feedback level:
Feedback level:
- The feedback level can be
  * "full": two input fields, for feedback on correct and on incorrect answers
  * "single": a single input field for feedback
  * "none": no input field for feedback
- The feedback level is specified in the forms used in the pull-down menus for creating "New" test items.
- The feedback level is applicable to all test item types (SC, MC, Text, ShortText, Ordering, Upload, Composite, Pool).
- The predefined setup uses "single" for all item types, except for "Pool" questions (since the feedback is taken from the replacement question drawn from the pool). Nothing has changed here.
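The mapping from feedback level to input fields can be sketched as follows. This is an illustrative sketch only, not the actual form code; the function and field names are assumptions.

```python
# Sketch (illustrative, not the real implementation): how a form renderer
# might map the configured feedback level to feedback input fields.
def feedback_fields(level: str) -> list[str]:
    """Return the feedback input fields for a given feedback level."""
    if level == "full":
        # separate feedback texts for correct and for incorrect answers
        return ["feedback_correct", "feedback_incorrect"]
    if level == "single":
        return ["feedback"]   # one generic feedback field
    if level == "none":
        return []             # no feedback input at all
    raise ValueError(f"unknown feedback level: {level!r}")

print(feedback_fields("full"))   # ['feedback_correct', 'feedback_incorrect']
```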
Correction notes:
- Optionally, correction notes can be specified. Since the general feedback was not shown to the students in the past, it was sometimes abused for correction hints for the lecturer. Therefore, there is now a separate field.
- Correction notes can be added to all item types (SC, MC, Text, ShortText, Ordering, Upload, Composite, Pool) and are turned on by default (except for Pool).
Visibility of general feedback and correction notes:
- General feedback is shown (when provided) when
  * the full exam protocol is shown,
  * filtering a single question,
  * filtering a single submission (view with revision selection),
  * students perform the exam review (Einsicht).
- Correction notes are shown (when provided) when
  * the full exam protocol is shown,
  * filtering a single question,
  * filtering a single submission (view with revision selection).
- When general feedback is not provided, the exam protocol etc. looks as before.
- When correction notes are not provided, the exam protocol etc. looks as before.
- Negative feedback is displayed
  * when the percentage is known, and
  * the percentage is below 50%.
- When the negative feedback is displayed, the positive feedback is not displayed, and vice versa.
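The display rule above can be sketched as a small helper. This is a minimal sketch of the rule only; the function name and signature are assumptions, not the real API.

```python
# Sketch of the feedback display rule: negative feedback is shown only
# when the achieved percentage is known and below 50%; otherwise the
# positive feedback is shown. Hypothetical helper for illustration.
def feedback_to_show(percentage, positive, negative):
    if percentage is not None and percentage < 50:
        return negative
    return positive

print(feedback_to_show(30, "well done", "see chapter 3"))    # see chapter 3
print(feedback_to_show(None, "well done", "see chapter 3"))  # well done
```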
For composite questions, the system supports hint texts for the full composite question and additional ones for every single part.
Support for pool questions in the test-item family
Features:
- Select random questions from some folder (currently siblings, i.e., folders of the same package instance).
- One exam can have multiple pool questions, potentially drawing from different pools.
- Pools can be links to other folders (which are not siblings).
- The current folder can also be used as a pool folder. In this case, other not-yet-used items can be selected as replacement items, provided they match the specified filter characteristics.
- Question filtering
  * Filter by item type: select all or a subset of the item types (MC, SC, short text, reordering, composite, ...); only items of these types are used as replacement items.
  * Filter by minutes and/or points: only items with matching points/minutes are used as replacement items.
  * Filter by language: only items in the specified language are used as replacement items.
  * Filter by item name pattern: only items whose name matches the pattern are used as replacement items. When (short) names are used systematically, one can, e.g., include the date in the name and select only items from one year ago via "*2020*", or from some chapter via "*ch01". Certainly, it is also possible to use different item folders for this purpose.
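Conceptually, the filters above combine conjunctively: a pool item qualifies as a replacement candidate only if it passes every specified filter. A minimal sketch, with field and parameter names that are assumptions rather than the real data model:

```python
from fnmatch import fnmatch  # shell-style pattern matching, e.g. "*2020*"

# Illustrative sketch of the filter characteristics; all names are
# assumptions, not the actual implementation.
def matching_items(items, types=None, minutes=None, points=None,
                   language=None, name_pattern=None):
    """Yield pool items that satisfy all specified filters."""
    for item in items:
        if types is not None and item["type"] not in types:
            continue
        if minutes is not None and item["minutes"] != minutes:
            continue
        if points is not None and item["points"] != points:
            continue
        if language is not None and item["language"] != language:
            continue
        if name_pattern is not None and not fnmatch(item["name"], name_pattern):
            continue
        yield item

pool = [
    {"name": "exam-2020-q1", "type": "mc", "minutes": 5, "points": 5, "language": "en"},
    {"name": "exam-2021-q1", "type": "sc", "minutes": 5, "points": 5, "language": "en"},
]
print([i["name"] for i in matching_items(pool, name_pattern="*2020*")])
# ['exam-2020-q1']
```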
Potential further steps:
- Currently, exams containing pool questions are treated as auto-correctable (which implies an automated exam review, Einsicht) when only strictly closed question types (MC, SC, Ordering) are selected from the question pools. Depending on the detailed settings, other item types could be auto-correctable as well (via correct-when), but this requires a deeper analysis of every question, which is not performed so far.
- Categorized items: technically, the infrastructure is mostly in place to allow filtering by categories as well. This would allow one to select, e.g., from technical questions, case examples, knowledge-transfer questions, research-oriented questions, ..., which are orthogonal to the filtering currently available. Every lecturer can define their own categories depending on their needs, we could provide university-wide category trees, etc. Of course, one could also define separate pools for these purposes, but categorizing is probably more convenient and more flexible.
- Performance: when question pools become large (500+ questions) and the cohorts as well (500+ students), the current version might require more tuning. The only time-critical phase is the exam start, where the random question placeholders have to be resolved for every student. The approach from this weekend uses basic caching, but maybe this has to be extended.
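The exam-start step described above can be sketched as follows: each pool folder is read once and cached, then one replacement item is drawn per placeholder per student. This is a rough sketch of the idea only, under the assumption that caching the pool contents is the main saving; all names are illustrative and not the real implementation.

```python
import random

# Sketch: resolve random-question placeholders for every student at exam
# start, reading each pool folder only once. All names are illustrative.
_pool_cache = {}

def pool_items(folder_id, load_folder):
    """Return (and cache) the candidate items of a pool folder."""
    if folder_id not in _pool_cache:
        _pool_cache[folder_id] = load_folder(folder_id)
    return _pool_cache[folder_id]

def resolve_placeholders(students, placeholders, load_folder):
    """Assign one random replacement item per placeholder per student,
    avoiding duplicates within one student's exam."""
    assignment = {}
    for student in students:
        used = set()
        for ph in placeholders:
            candidates = [i for i in pool_items(ph["folder"], load_folder)
                          if i not in used]
            pick = random.choice(candidates)
            used.add(pick)
            assignment[(student, ph["name"])] = pick
    return assignment
```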
- Protecting selected questions: question pools are more detached from an exam than the single exercises a lecturer might have in mind. One should not allow deleting questions or question pools while these are in use. Deletion should probably be a move to a trash can; this is actually an issue for all exams, but it becomes more important with pool questions.
- Handling of potential duplicates: when items are pulled from a question pool, a replacement item is selected by making sure that this item does not already occur in the question selection of the exam. Therefore, one can safely draw two questions from the same question pool without the risk that a student gets the same question twice.
This duplicate checking might require some fine-tuning:
  * The system checks for duplicates in an exam via POOL/NAME.
  * If a lecturer uses two pool questions in an exam pointing to the same pool, the system makes sure that the same question is not used twice.
  * However, if a teacher adds a question q1 to pool1 and the same question to pool2, these two instances have different item IDs and are regarded as two different questions. One could check by NAME only (without POOL), but then certain questions might not be accepted although they are different, just because they have the same name.
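The POOL/NAME check above amounts to identifying items by the pair (pool, name). A minimal sketch of this idea; the helper name and data shapes are assumptions for illustration only:

```python
# Sketch of the duplicate check: items are identified by the (pool, name)
# pair, so the same name in two different pools counts as two different
# questions. Names are illustrative, not the real implementation.
def pick_replacement(candidates, already_selected):
    """Return the first candidate not yet used in the exam.

    candidates      : list of (pool, name) pairs from the question pool
    already_selected: set of (pool, name) pairs already placed in the exam
    """
    for key in candidates:
        if key not in already_selected:
            return key
    return None  # pool exhausted: no duplicate-free replacement available

selected = {("pool1", "q1")}
print(pick_replacement([("pool1", "q1"), ("pool1", "q2")], selected))
# ('pool1', 'q2')
# Note: ("pool2", "q1") would also be accepted, since the pool differs.
```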
- Statistics: so far, I have not provided any special answer statistics for treating pool questions.