Support for pool questions in the test-item family
Features: - select random questions from some folder (Currently siblings, i.e., folders of the same package instance)
- One exam can have multiple pool questions, potentially from different pools
- Pools can be links to other folders (which are not siblings)
- The current folder can be used as a pool folder as well. In this case, other not-yet-used items can be selected as replacement items, provided they match the specified filter characteristics
- Question filtering (see the sketch after this list):
  * Filter by item type: allow selecting from all or some item types (mc, sc, short text, reordering, composite, ...); use only items of these types as replacement items
  * Filter by minutes and/or points: use only items with matching points/minutes as replacement items
  * Filter by language: use only items in a certain language as replacement items
  * Filter by item name pattern: use only items matching a name pattern as replacement items. When (short) names are used systematically, one can, e.g., encode the date in the name and select only items from a certain year via "*2020*", or from some chapter via "*ch01". Of course, it is also possible to use different item folders for this purpose
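
To make the filter semantics concrete, here is a minimal sketch, assuming a simple dict-based data model (illustrative names, not the actual xowf code); every filter key is optional:

    proc pool_item_matches {item filter} {
        # $item and $filter are Tcl dicts; a missing filter key
        # imposes no restriction
        if {[dict exists $filter types]
            && [dict get $item type] ni [dict get $filter types]} { return 0 }
        if {[dict exists $filter minutes]
            && [dict get $item minutes] != [dict get $filter minutes]} { return 0 }
        if {[dict exists $filter points]
            && [dict get $item points] != [dict get $filter points]} { return 0 }
        if {[dict exists $filter language]
            && [dict get $item language] ne [dict get $filter language]} { return 0 }
        if {[dict exists $filter pattern]
            && ![string match [dict get $filter pattern] [dict get $item name]]} { return 0 }
        return 1
    }

    # Draw a random replacement among the matching candidates, e.g.
    # with filter {types {mc sc} pattern *2020*}
    proc pick_random_replacement {candidates filter} {
        set matching {}
        foreach item $candidates {
            if {[pool_item_matches $item $filter]} { lappend matching $item }
        }
        if {[llength $matching] == 0} { return "" }
        return [lindex $matching [expr {int(rand() * [llength $matching])}]]
    }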
Potential further steps:
- Currently, exams containing pool questions are treated as auto-correctable (which implies automated exam review (Einsicht)) when only questions of strictly closed types (MC, SC, Ordering) are selected from the question pools (see the first sketch after this list). Depending on the detailed settings, other item types could be possible as well (via correct-when), but this requires a deeper analysis of every question, which is so far not performed.
- Categorized items: technically, the infrastructure is mostly in place to allow filtering by categories as well. This would allow one to select, e.g., from technical questions, case examples, knowledge-transfer questions, research-oriented questions, ..., which are orthogonal to the filtering currently available. Every lecturer can define their own categories depending on their needs, we could provide university-wide category trees, etc. Of course, one could also define separate pools for these purposes, but categorizing is probably more convenient and more flexible.
- Performance: when question pools become large (500+ questions) and the cohorts as well (500+ students), the current version might require more tuning. The only critical time is the exam start, where the random question placeholders have to be resolved for every student. The approach from this weekend uses basic caching (see the caching sketch after this list), but maybe this has to be extended.
- Protecting selected questions: question pools are more detached from an exam than the single exercises a lecturer might have in mind. It should not be possible to delete questions/question pools while these are in use. Probably, deletion should be a move to a trash can; this is actually an issue for all exams, but it becomes more important with pool questions.
- Handling of potential duplicates: when items are pulled from a question pool, a replacement item is selected by making sure that this item does not already occur in the current selection. Therefore, one can safely draw two questions from the same question pool without fearing that a student gets the same question twice.
This duplicate checking might require some fine-tuning (see the duplicate-check sketch after this list):
  * The system checks for duplicates in an exam via POOL/NAME.
  * If a lecturer uses two pool questions in an exam pointing to the same pool, the system makes sure that the same question is not used twice.
  * However, if a teacher adds a question q1 to pool1 and the same question to pool2, these two instances have different item IDs and are regarded as two different questions. One could check only for NAME (without POOL), but then certain questions might be rejected although they are different, just because they have the same name.
- Statistics: so far, I have not provided any special answer statistics for treating pool questions.
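
The auto-correctability condition above can be sketched as follows (illustrative names, not the actual xowf code): an exam with pool questions can only be auto-corrected when every item type that can be drawn is strictly closed.

    proc exam_auto_correctable {selectableItemTypes} {
        # strictly closed item types, gradable without human review
        set closedTypes {mc sc ordering}
        foreach type $selectableItemTypes {
            if {$type ni $closedTypes} { return 0 }
        }
        return 1
    }
    # exam_auto_correctable {mc sc}         -> 1
    # exam_auto_correctable {mc shorttext}  -> 0

For the performance point, a sketch of the caching idea, assuming a NaviServer environment; filtered_pool_items is a hypothetical stand-in for the expensive pool query:

    proc cached_pool_candidates {pool_id filterSpec} {
        # one shared cache entry per pool/filter combination, reused
        # for all students starting the exam
        set key $pool_id-[ns_sha1 $filterSpec]
        if {![nsv_exists pool_candidates $key]} {
            # filtered_pool_items (hypothetical) performs the query
            nsv_set pool_candidates $key \
                [filtered_pool_items $pool_id $filterSpec]
        }
        return [nsv_get pool_candidates $key]
    }

The duplicate check can be sketched like this (again with illustrative names): a drawn question is identified by its POOL/NAME combination, so the same name in two different pools counts as two different items.

    proc duplicate_key {q} {
        return [dict get $q pool]/[dict get $q name]
    }
    proc pick_replacement {candidates selected} {
        # pick the first candidate whose POOL/NAME key is unused
        set seen [lmap q $selected {duplicate_key $q}]
        foreach q $candidates {
            if {[duplicate_key $q] ni $seen} { return $q }
        }
        return ""
    }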
Show form-field statistics in the exam summary for autograded exams:
- Provide statistics for autograded exams and show these (currently for MC/SC) in the exam summary (see the statistics sketch below). The statistics are generated when the exam protocol is rendered and are persisted in the workflow instance.
- Added switch "-generic" to answer_form_field_objs to obtain the question with all alternatives (not constrained to "show_max") and without shuffling (see the sketch below). This is needed in cases where, e.g., statistics should be provided for all alternatives shown to all students.
- Question_manager->question_info: separate computation from HTML rendering
- Added a statistics handler for the WorkflowPage class to collect details from the form fields.
- New private function "tdom_render" (might be useful, but not used in the committed version).
- Split out "spec_to_dict" from "fc_to_dict", since it is also useful on its own for working on single form-constraint specs (see the sketch below).
- Made methods add_to_fc and replace_in_fc available to the full AssessmentInterface.
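
A sketch of the per-alternative aggregation behind such statistics (hypothetical data model, not the persisted format):

    proc answer_statistics {submissions} {
        # $submissions: one entry per student, each being the list of
        # selected alternative identifiers of an MC/SC question
        set counts {}
        foreach answer $submissions {
            foreach alternative $answer {
                dict incr counts $alternative
            }
        }
        return $counts
    }
    # answer_statistics {{a c} {a} {b c}}  -> a 2 c 2 b 1

The effect of the "-generic" switch can be sketched as follows (an illustrative simplification, not the actual answer_form_field_objs code):

    proc alternatives_for {alternatives generic show_max} {
        if {$generic} {
            # -generic: all alternatives, unshuffled
            return $alternatives
        }
        # default: per-student shuffling (elided in this sketch) and
        # at most show_max alternatives
        return [lrange $alternatives 0 [expr {$show_max - 1}]]
    }

What a spec_to_dict-style helper provides can be sketched like this (the actual implementation may differ): a single form-constraint spec is turned into a dict of its components.

    proc spec_to_dict_sketch {spec} {
        set result {}
        foreach part [split $spec ,] {
            if {[regexp {^([^=]+)=(.*)$} $part -> key value]} {
                dict set result $key $value
            } else {
                dict set result $part ""
            }
        }
        return $result
    }
    # spec_to_dict_sketch "text,points=2"  -> text {} points 2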