Index: openacs-4/packages/assessment/www/doc/as_items.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/as_items.html,v diff -u -N -r1.4 -r1.5 --- openacs-4/packages/assessment/www/doc/as_items.html 28 Jul 2004 10:35:57 -0000 1.4 +++ openacs-4/packages/assessment/www/doc/as_items.html 29 Jul 2004 09:35:10 -0000 1.5 @@ -3,7 +3,7 @@ - AS_Items + As_Items @@ -178,7 +178,7 @@
  • cr::name - Identifier
  • -
  • as_item_default: The content of this +
  • default_value: The content of this field will be prefilled in the response of the user taking the survey
• feedback_text: The person correcting the answers will see the contents of this box as the correct answer for @@ -407,7 +407,7 @@ "small","medium","large". Up to the developer how this translates.
  • item_answer_alignment - the orientation between the -"question part" of the Item (the item_text/item_subtext) and the +"question part" of the Item (the title/subtext) and the "answer part" -- the native Item widget (eg the textbox) or the 1..n choices. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically be Internationalization?) and include: @@ -455,7 +455,7 @@ Templating widgets here.
  • item_answer_alignment - the orientation between the -"question part" of the Item (the item_text/item_subtext) and the +"question part" of the Item (the title/subtext) and the "answer part" -- the native Item widget (eg the textbox) or the 1..n choices. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically be Internationalization?) and include: @@ -493,7 +493,7 @@ by order of entry (sort_order field).
  • item_answer_alignment - the orientation between the -"question part" of the Item (the item_text/item_subtext) and the +"question part" of the Item (the title/subtext) and the "answer part" -- the native Item widget (eg the textbox) or the 1..n choices. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically be Internationalization?) and include: @@ -518,7 +518,7 @@ by order of entry (sort_order field).
  • item_answer_alignment - the orientation between the -"question part" of the Item (the item_text/item_subtext) and the +"question part" of the Item (the title/subtext) and the "answer part" -- the native Item widget (eg the textbox) or the 1..n choices. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically be Internationalization?) and include: @@ -543,7 +543,7 @@ it allow to select multiple values ?
  • item_answer_alignment - the orientation between the -"question part" of the Item (the item_text/item_subtext) and the +"question part" of the Item (the title/subtext) and the "answer part" -- the native Item widget (eg the textbox) or the 1..n choices. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically be Internationalization?) and include: @@ -565,7 +565,7 @@
  • item_answer_alignment - the orientation between the -"question part" of the Item (the item_text/item_subtext) and the +"question part" of the Item (the title/subtext) and the "answer part" -- the native Item widget (eg the textbox) or the 1..n choices. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically be Internationalization?) and include: @@ -596,7 +596,7 @@
  • item_answer_alignment - the orientation between the -"question part" of the Item (the item_text/item_subtext) and the +"question part" of the Item (the title/subtext) and the "answer part" -- the native Item widget (eg the textbox) or the 1..n choices. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically be Internationalization?) and include: Index: openacs-4/packages/assessment/www/doc/data_collection.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/data_collection.html,v diff -u -N -r1.2 -r1.3 --- openacs-4/packages/assessment/www/doc/data_collection.html 28 Jul 2004 10:35:57 -0000 1.2 +++ openacs-4/packages/assessment/www/doc/data_collection.html 29 Jul 2004 09:35:11 -0000 1.3 @@ -107,7 +107,8 @@
  • ip_address - IP Address of the entry
  • percent_score - Current percentage of the subject achieved so -far
    +far
  • +
  • consent_timestamp - Time when the consent has been given.
  • @@ -138,13 +139,15 @@
  • item_id
  • choice_id_answer - references as_item_choices
  • boolean_answer
  • -
  • clob_answer
  • numeric_answer
  • integer_answer
  • -
  • varchar_answer
  • -
  • text_answer
  • +
  • text_answer
  • timestamp_answer
  • -
  • content_answer - references cr_revisions
  • +
  • content_answer - references cr_revisions
  • +
  • signed_data - This field stores the signed entered data, see +below for explanation
  • +
  • percent_score
    +
  • Possible Extension: item_status - Status of the answer. This might be "unanswered, delayed, answered, final". This can be put together with

    -

    -
    +

Data model graphic -

    + style="width: 797px; height: 565px;">

    @@ -38,73 +35,138 @@ where all versions potentially are equally-important siblings. In the case of the Assessment package, it seems likely that in some applications, users would indeed want to designate a single "live" -version, while in many others, they wouldn't. However, a given revision -can be chosen many easy ways other than looking at cr_items, while -being forced to create and maintain appropriate state in cr_items when -an application doesn't want it would be a major complication. Thus, -using the cr_revisions part of the CR alone seems to be the most useful -approach here. This decision pertains to all entities using the CR, but -it is particularly important with Assessments since they are the key to -all the rest of the entity hierarchies. -

    +version, while in many others, they wouldn't. 

    Attributes of Assessments will include those previously included -in Surveys plus some others: -

    -

    +in Surveys plus some others:

      -
    • assessment_id +
    • assessment_id
    • +
    • cr:name - a curt name appropriate for urls
    • -
    • name - a formal title to use in page layouts etc -
    • -
    • short_name - a curt name appropriate for urls -
    • -
    • author -
    • -
    • definition - text that can appear in introductory web pages -
    • +
    • cr:title - a formal title to use in page layouts etc
    • +
    • creator_id - Who is the "main" author and creator of this +assessment
    • +
    • cr:description - text that can appear in introductory web +pages
    • instructions - text that explains any specific steps the -subject needs to follow -
    • -
    • scaled_p - whether some kind of scoring algorithm is defined -(ie "grading" or other schemes) -
    • -
    • mode - whether this is a standalone assessment (like current +subject needs to follow
    • +
    • mode - whether this is a standalone assessment (like current surveys), or if it provides an "assessment service" to another OpenACS -app, or a "web service" via SOAP etc +app, or a "web service" via SOAP etc
    • +
    • editable_p - whether the response to the assessment is +editable once an item has been responded to by the user.
    • +
• anonymous_p - This +shows whether the creator of the assessment will have +the possibility to see the personal details of the respondee or not. In +particular this will exclude the user_id from the CSV files. It shall +still be possible to see the users who have not finished the survey +though.
    • +
    • secure_access_p - The +assessment can only be taken if a secure connection (https) is used.
    • +
• reuse_responses_p - +If +yes, the system will look for previous responses to +the questions and prefill the last answer the respondee has given in +the assessment form of the respondee
    • +
    • show_item_name_p - If +yes, the respondee will see the name of the item in +addition to the item itself when taking the survey.
    • +
    • entry_page - The customizable entry page that will be +displayed before the first response. 
    • +
• exit_page - Customizable exit / thank you page that will be +displayed once the assessment has been completed.
    • +
    • consent_page -
    • -
    • validated_p - whether this is a formal "instrument" like an -eponymous test (eg "Seattle Angina Questionnaire" or "MMPI" etc; this -means that alterations to this Assessment are not allowed since changes -would invalidate the Assessment -
    • -
    • enabled_p - released for actual use? -
    • -
    • editable_p - can anyone alter it further? -
    • -
    • template - references as_templates - ?not entirely clear -how/why we want to use this -
    • +
• return_url - URL the respondee will be redirected to +after finishing the assessment. The respondee should be redirected there directly if no +thank-you page is there. Otherwise the return_url should be set in the +thank-you page context, so we can have a "continue" URL.
    • +
• start_time - At what time shall the assessment become +available to the users (remark: it will only become available to the +users who have at least the "respond" privilege).
    • +
• end_time - At what time the assessment becomes unavailable. +This is a hard deadline; any response given after this time will be +discarded.
    • +
    • number_tries - Number of times a respondee can answer the +assessment
    • +
    • wait_between_tries - Number of minutes a respondee has to +wait before he can retake the assessment.
    • +
• time_for_response - How many minutes the respondee has to +finish the assessment (taken from the start_time in as_sessions).
    • +
    • show_feedback - Which feedback_text stored with the item_type +shall be displayed to the respondee (All, none, correct, incorrect). +Correct and Incorrect will only show the feedback_text if the response +was correct or incorrect.
    • +
    • section_navigation +- How shall the navigation happen
      +
    • +
        +
      • default +path - Order given by the relationship between assessment and section +(the order value in cr_rels, if this is used).
        +
      • +
      • randomized +- Sections will be displayed randomly
      • +
• rule-based +branching - Sections will be displayed according to inter-item checks. +This should be the default.
      • +
    -

    -Permissions / Scope: Control of reuse previously was through a +
    +

  • +
• Style Options (as_assessment_styles): Each assessment has a +special style associated with it. As styles can be reused (e.g. within +a department), they are covered in as_assessment_styles:
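The timing attributes above (start_time, end_time, number_tries, wait_between_tries) jointly determine whether a respondee may open a new response session. A minimal sketch of the eligibility logic they imply, in Python; the dataclass and function are illustrative assumptions, not actual package code:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class AssessmentTiming:
    # Names mirror the as_assessments attributes described above.
    start_time: datetime        # when the assessment becomes available
    end_time: datetime          # hard deadline; later responses are discarded
    number_tries: int           # how often a respondee may answer
    wait_between_tries: int     # minutes to wait before a retake

def may_start_session(a: AssessmentTiming, now: datetime,
                      prior_tries: int,
                      last_try: Optional[datetime]) -> bool:
    """Illustrative check whether a new response session may begin."""
    if not (a.start_time <= now <= a.end_time):
        return False    # outside the availability window
    if prior_tries >= a.number_tries:
        return False    # all tries used up
    if last_try is not None:
        if now - last_try < timedelta(minutes=a.wait_between_tries):
            return False    # mandatory pause not yet over
    return True
```

time_for_response would then bound the session once started (elapsed time measured from the start_time in as_sessions).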
  • + + Index: openacs-4/packages/assessment/www/doc/index.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/index.html,v diff -u -N -r1.2 -r1.3 --- openacs-4/packages/assessment/www/doc/index.html 3 Jul 2004 22:38:06 -0000 1.2 +++ openacs-4/packages/assessment/www/doc/index.html 29 Jul 2004 09:35:11 -0000 1.3 @@ -68,8 +68,7 @@ packages. In particular, incorporating Workflow and a new data collection package would be key to creation of new vertical-application tools like dotWRK. Such integration would also be immensely useful for -a clinical trials management toolkit. -

    +a clinical trials management toolkit.

    @@ -112,28 +111,20 @@ Surveys included several important enhancements to the data model: -

    -However, Surveys has some important limitations: -

    +

    However, Surveys has some important limitations:

    Still, this package adopts some naming conventions consistent with the IMS spec and definitely represents the closest effort to a "complex -survey" done to date. -

    +survey" done to date.

  • "Complex Survey". This is the descendant of @@ -144,8 +135,7 @@ incorporates a number of the features of Surveys. We discuss it in greater detail here. -

    -

    +

  • Questionnaire. This is a 3.2.5 module developed at The Epimetrics Group in order to @@ -192,19 +182,29 @@

Data model

The data model is described in detail in the design descriptions.
    -

    User Interface

    +

    User Interface

    The UI for Assessment divides into a number of primary functional -areas, as diagrammed below. These include: +areas, with the entry page located here. +It is split up into multiple sections:
      -
    • the "Home" area (for lack of a better term). These are the main -index pages for the user and admin sections
    • -
    • Assessment Authoring: all the pages involved in creating, -editing, and deleting the Assessments themselves; these are all admin -pages
    • -
    • Assessment Delivery: all the pages involved in +
    • Assessment +Authoring: all the pages involved in creating, +editing, and deleting the Assessments themselves
    • +
    • Section Authoring: +all the pages involved in creating, +editing, and deleting the Sections themselves. Includes the page to +browse for items to include in sections
    • +
• Item Authoring and +Catalogue: all the pages involving item creation and the item +catalogue.
    • +
    • Assessment Delivery: +all the pages involved in deploying a given Assessment to users for completion, processing those results, etc; these are user pages
    • -
    • Assessment Review: all the pages involved in select +
• Section on Tests: +Currently still kept separate; some notes on additional user interface for +tests. These shall be integrated with the rest of the pages.
    • +
• Assessment Review: all the pages involved in selecting data extracts and displaying them in whatever formats indicated; this includes "grading" of an Assessment -- a special case of data review; these are admin pages, though there also needs to be some access to @@ -215,15 +215,13 @@ other "policies" of an Assessment. This area needs to interact with the next one in some fashion, though exactly how this occurs needs to be further thought through, depending on where the Site Management -mechanisms reside.
    • -
    • Site Management: pages involved in setting up who -does Assessments. These are admin pages and actually fall outside the -Assessment package per se. How dotLRN wants to interact with Assessment -is probably going to be different from how a Clinical Trials Management -CTM system would. But we include this in our diagram as a placeholder.
    • +mechanisms reside.
    -More information can be found at the Page Flow -page.
    +
+The  Page Flow +page is diagrammed below and gives a rough, somewhat outdated +overview, but is still good for getting an impression.
    +

    Authors

    The specifications for the assessment system have been written by Stan Kaufman and Malte Sussdorff with help from numerous people within and Index: openacs-4/packages/assessment/www/doc/page_flow.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/page_flow.html,v diff -u -N -r1.1 -r1.2 --- openacs-4/packages/assessment/www/doc/page_flow.html 13 Jun 2004 23:20:44 -0000 1.1 +++ openacs-4/packages/assessment/www/doc/page_flow.html 29 Jul 2004 09:35:11 -0000 1.2 @@ -14,15 +14,12 @@ package will use these OpenACS standards:

      -
    • "trail of breadcrumb" navigational links -
    • +
    • "trail of breadcrumb" navigational links
    • context-aware (via user identity => permissions) menu options (whether those "menus" are literally menus or some other -interface widget like toolbars) -
    • +interface widget like toolbars)
    • in-place, within-form user feedback (eg error messages about a -form field directly next to that field, not in an "error page") -
    • +form field directly next to that field, not in an "error page")

    Furthermore, the set of necessary pages for Assessment are not all that dissimilar to the set required by any other OpenACS package. We @@ -46,44 +43,59 @@

    • the "Home" area (for lack of a better term). These are the main -index pages for the user and admin sections -
    • -
    • Assessment Authoring: all the pages involved in creating, +index pages for the user and admin sections
    • +
    • Assessment +Authoring: all the pages involved in creating, editing, and deleting the Assessments themselves; these are all admin -pages -
    • +pages
    • Assessment Delivery: all the pages involved in deploying a given Assessment to users for completion, processing those -results, etc; these are user pages -
    • +results, etc; these are user pages
    • Assessment Review: all the pages involved in select data extracts and displaying them in whatever formats indicated; this includes "grading" of an Assessment -- a special case of data review; these are admin pages, though there also needs to be some access to data displays for general users as well (eg for anonymous surveys etc). Also, this is where mechanisms that return information to "client" -packages that embed an Assessment would run. -
    • +packages that embed an Assessment would run.
    • Session Management: pages that set up the timing and other "policies" of an Assessment. This area needs to interact with the next one in some fashion, though exactly how this occurs needs to be further thought through, depending on where the Site Management -mechanisms reside. -
    • +mechanisms reside.
    • Site Management: pages involved in setting up who does Assessments. These are admin pages and actually fall outside the Assessment package per se. How dotLRN wants to interact with Assessment is probably going to be different from how a Clinical Trials Management -CTM system would. But we include this in our diagram as a placeholder. -
    • +CTM system would. But we include this in our diagram as a placeholder.
    -

    -So this is how we currently anticipate this would all interrelate:

    +In addition to the page flow we have two types of portlets for .LRN:
    +
      +
    • Portlet for the respondee with all assessments that have to be +answered and their deadlines.
    • +
• Portlet for staff with all assessments that have to be reviewed, +with the review deadline and the number of responses still to look at.
    • +
    +
    More Ideas:
    +
    +
      +
    • Possibility to browse assessments and +sections by category. +
    • +

    • +
    • + +
    +

  • +
  • +

    +So this is how we currently anticipate this would all +interrelate:

data model -

    + style="width: 950px; height: 1058px;">

Index: openacs-4/packages/assessment/www/doc/images/assessment-datafocus.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-datafocus.jpg,v diff -u -N -r1.2 -r1.3 Binary files differ Index: openacs-4/packages/assessment/www/doc/user_interface/assessment_creation.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/assessment_creation.html,v diff -u -N --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/user_interface/assessment_creation.html 29 Jul 2004 09:35:12 -0000 1.1 @@ -0,0 +1,296 @@ + + + + Assessment Creation + + + + + +When creating an assessment the administrator has a couple of fields to +determine the look and feel of the assessment along with the option to +view the responses. This is a list of attributes the administrator can +edit when creating an assessment. The grouping is based on the UI and +not on the data model. So you should follow this grouping in the +UI:
      +
• Title: Title of the assessment +
    • +
• Anonymous Assessment: boolean (yes/no). This shows whether the +creator of the assessment will have the possibility to see the personal +details of the respondee or not. In particular this will exclude the +user_id from the CSV files. It shall still be possible to see the users +who have not finished the survey though. +
    • +
    • Secure access only: boolean (yes/no). The assessment can only be +taken if a secure connection (https) is used. +
    • +
    • Presentation Options: These options allow the respondee to select +between different presentation styles. At least one of the checkboxes +mentioned below has to be selected. +
        +
      • All questions at once +
      • +
• One question per page. If you have selected that respondees may not +edit their responses, it will not be possible for them to go back and +choose another answer to that question. +
      • +
      • Sectioned +
      • +
      +
    • +
• Reuse responses: boolean (yes/no). If yes, the system will look +for previous responses to the questions and prefill the last answer +the respondee has given in the assessment form of the respondee. It +is debatable whether this function should be per assessment and/or per +question +
    • +
    • Navigation of sections: select (default +path, randomized, rule-based branching, maybe looping in the future).
    • +
    • Show question titles: boolean +(yes/no). If yes, the respondee will see the title of the question in +addition to the question itself when taking the survey.
    • +
• Consent Pages: richtext. An +assessment author should be able optionally to specify some consent +statement that a user must agree to in order to proceed with the +assessment. The datamodel needs to store the user's positive +response with a timestamp (in as_sessions). This isn't relevant in +educational testing, but it is an important feature to include for +other settings, notably medical and financial ones.
      +
    • + +
    • Progress bar: select. (no progress bar, different styles). What +kind of progress bar shall be displayed to the respondee while taking +the assessment. +
    • +
    • Styles +
        +
• Custom header / footer: richtext. Custom header and footer +that will be displayed to the respondee when answering an assessment. +Possibility to include system variables (e.g. first name). +
      • +
• Select presentation style. Style (form_template) that will be +used for this assessment +
      • +
      • Upload new: file. Possibility to upload a new style +
      • +
      • Edit (brings up a page with the possibility to edit the +selected style) +
      • +
      • Customizable Entry page: richtext. The page that will be +displayed before the first response. +
      • +
      • Customizable buttons for Submit, Save, continue, cancel (e.g. +using the style?) +
      • +
      • Customizable thank you page: richtext. +
      • +
• Return_URL: text. URL the respondee will be redirected to +after finishing the assessment. The respondee should be redirected there directly if no +thank-you page is there. Otherwise the return_url should be set in the +thank-you page context, so we can have a "continue" URL. +
      • +
      +
    • +
    • Times +
        +
• Availability: 2 date widgets (from and to). This will set the +time the survey will become visible for the respondees. It is +overridden by the parameter enabled (if an assessment is not enabled, it +will never be visible, regardless of date). +
      • +
• How often can an assessment be taken: Number of times a survey +can be taken by a respondee. +
      • +
• How long a user has to pause: Number of hours a respondee has +to wait before he can take the assessment again. +
      • +
      • Answer_time: integer: Time in minutes a respondee has to +answer a survey. +
      • +
      +
    • +
    • Show comments to the user +
        +
      • No comments: the user will not see the comments stored with +the questions at all +
      • +
      • All comments: the user will see all the comments associated +with his answers to the questions +
      • +
• Only wrong comments: the user will only see the comments to +questions which he answered incorrectly. +
      • +
      +
    • +
    • Permissions +
        +
• Grant explicit permissions: Link to a separate page that will +allow the creator to grant and revoke permission for this survey. +Permissions are (take_survey, administer_survey) +
      • +
• Grant permission on status in curriculum. Needs to be exactly +defined. Otherwise we will write a small page that allows the admin to +select exams and a minimum point number the student has to have +achieved in that exam. +
      • +
• Bulk upload: file. Upload a CSV file with email addresses to +allow access to the assessment. Add users to the system if not already +part of it. Notify users via email that they should take the +assessment.
      • +
• Password: short_text. Password that has to be typed in before +the respondee gets access to the assessment. This should be done by +creating a registered filter that returns a 401 to popup an HTTP auth +box. Look in oacs_dav::authenticate for an example of how to check the +username/password
        +
      • +
• IP Netmask. short_text. Netmask that will be matched against +the IP address of the respondee. If it does not match, the user will not +be given access. Again this should be handled by the creation of a +registered filter on the URL where the assessment resides (for the +respondee that is, meaning the entry URL for responding to the +assessment).
        +
      • +
      +
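The IP Netmask restriction above reduces to a straightforward network-membership test. A sketch using Python's stdlib ipaddress module; the registered-filter wiring itself is OpenACS/Tcl territory and is omitted here, so this only illustrates the matching logic:

```python
import ipaddress

def ip_allowed(client_ip: str, netmask: str) -> bool:
    """Return True if client_ip falls inside the configured network.

    netmask is given in CIDR form, e.g. "192.168.0.0/24"."""
    try:
        return ipaddress.ip_address(client_ip) in ipaddress.ip_network(netmask)
    except ValueError:
        return False    # malformed address or network: deny access
```

A registered filter would call a check like this on each request to the assessment's entry URL and refuse access on a False result.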
    • +
    • Notifications +
        +
      • Notifications will be done using the notification system of +OpenACS.
        +
      • +
• For all notifications the use of system variables should be allowed.
      • +
          +
        • System_name
        • +
        • User_name
        • +
        • user_id
        • +
        • ... (free for the developer to think about what is useful)
          +
        • +
        +
      • Links to spam the following group of people (information can +be taken out of as_sessions):
      • +
          +
        • All respondees having access to the assessment
        • +
        • All respondees that have not started the assessment
        • +
        • All respondees with unfinished assessments
          +
        • +
        • All respondees with finished assessment
        • +
        +
• Notification message: richtext. This will allow the creator +to supply a message that will be sent to the respondee. Possible +messages:
        +
          +
        • To invite the respondee +
        • +
• To remind them to fill out the survey +
        • +
• To thank them for participating +
        • +
        +
      • +
      • Possible Messages for the staff +
          +
• Inform the staff about responses to be looked at +
        • +
        • Remind the staff about responses +
        • +
        +
      • +
      • Reminder period for notification messages. +
      • +
      +
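Substituting system variables such as System_name, User_name and user_id into a notification message, as described above, can be sketched with Python's string.Template. The $-placeholder syntax and the helper are illustrative assumptions; the real implementation would use OpenACS's own substitution mechanism:

```python
from string import Template

def expand_message(body: str, **variables) -> str:
    """Replace $-placeholders in a notification message.

    safe_substitute leaves unknown placeholders untouched, so a
    typo in one variable does not break the whole notification."""
    return Template(body).safe_substitute(**variables)

# Example: an invitation message using the system variables listed above.
invitation = expand_message(
    "Dear $User_name, please take the assessment on $System_name.",
    User_name="Jane Doe",
    System_name="dotLRN",
)
```

The same expansion would apply to reminder and thank-you messages, as well as the staff notifications.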
    • +
• Responses +
        +
      • View responses per User (resulting in a page with all +responses with checkboxes in front for deletion and a check/uncheck all +link) +
      • +
      • View responses per Question +
      • +
      • View responses by Filter / Groups / Values (e.g. search for +questions with a negative distractor) +
      • +
      • Grant access to responses (using the permission system):
        +
      • +
          +
        • Closed - Only the owner of the assessment can see the +responses
          +
        • +
• Admin - Only admins of the assessment can see the responses
          +
        • +
        • Respondees - Only respondees can see the responses
          +
        • +
        • Registered_Users - Only registered users can see the +responses
        • +
        • Public - Everyone can see the responses
          +
        • +
        • grant permission to special parties
          +
        • +
        +
      • Import / Export +
          +
• Import/export style: WebCT, CSV, Blackboard, IMS +
        • +
        • Import Filename: file, select file that shall be imported +
        • +
        • Import and export button +
        • +
        +
      • +
      +
    • +
    • Statistics +
        +
      • Number of completed assessments +
      • +
      • Number of unfinished assessments +
      • +
      • Average score (only with scoring module) +
      • +
      +
    • +
    • Survey Import / Export +
        +
      • Type: (select box): CSV, WebCT, SCORM, Blackboard, IMS +
      • +
      • File: file (file for import) +
      • +
      • Download file name: short_text. Filename for the download of +the export. +
      • +
      +
    • +
    • Delete assessment with / without responses
    • +
    • Assign category to the assessment
      +
    • +
    • Link to a mapping and browsing page to link sections to this +assessment (or to create new sections).
    • +
    • (Optional) For each +section in the assessment display: +
        +
      • Section name +
      • +
      • Link to section page +
      • +
      • Reorder section buttons. +
      • +
      +
    • +
    • Instant survey preview (needs to be defined how exactly this is +going to happen) +
    • +
    • One additional option that should be +included is a consent form; an +assessment author should be able optionally to specify some consent +statement that a user must agree to in order to proceed with the +assessment. The datamodel needs to store the user's response whether it +is positive or negative, along with a timestamp. This isn't relevant in +educational testing, but it is an important feature to include for +other settings, notably medical and financial ones.
    • +
    +
    +
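The Anonymous Assessment option above requires that user_id be excluded from the CSV files. A minimal sketch of such an export filter in Python; the column names other than user_id are invented for illustration:

```python
import csv
import io

def export_responses(rows, anonymous_p: bool) -> str:
    """Write response rows (list of dicts) as CSV.

    When anonymous_p is set, the user_id column is dropped from the
    header and extrasaction="ignore" silently discards the values."""
    if not rows:
        return ""
    fields = [f for f in rows[0] if not (anonymous_p and f == "user_id")]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```
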
    + + Index: openacs-4/packages/assessment/www/doc/user_interface/index.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/index.html,v diff -u -N --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/user_interface/index.html 29 Jul 2004 09:35:12 -0000 1.1 @@ -0,0 +1,159 @@ + + + + Appendix A: RFC for Assessment Specs + + + + + +

    Introduction

+In recent times the survey system has expanded beyond its initial +scope of providing a quick and easy solution to conduct surveys. Due to +its flexibility it has already expanded into the areas of storing user +information and providing a feedback tool for quality assurance. +

On the other hand, the need has arisen for dotLRN to provide an +assessment solution that allows (automated) tests as well, with the +possibility to score the test results and store them in an easily +retrievable way.

    +

Last but not least, a new demand has arisen for the possibility to +give and store ratings on objects within the system as part of a +knowledge management solution.

    +

The documents on these pages will provide a solution that is flexible +enough to meet the above needs but still focused enough to meet special +client demands. +

    +

    Assessments

+The current survey system will form the basis for a new assessment +package and will consist of various areas: +

    Question Catalogue

+The question catalogue stores all the questions of the assessment +package. It is a pool in which all the questions from all assessments are +stored. This makes the questions reusable, +allows for statistics across surveys, and prevents the respondee from +having to fill out a question he has already answered. Furthermore, +special administrators are given the possibility to add questions that +do not store the results within the scope of the assessment package but +in other database tables (e.g. the name of the user) or trigger some +other events (e.g. send email to the email address filled out by the +respondee). A detailed description can be found here. +

    Assessment creation

+An assessment is either a survey or a test. The functionality for both +is nearly identical, though a test needs additional work to allow for +automated grading. A detailed description of the options given to the +creator of an assessment can be found here. +

Each assessment consists of various sections, which allow the
+assessment to be split up (so it will be displayed to the respondee on
+multiple pages) and make branching possible depending on the
+respondee's previous answers. Questions are always added to the
+question database first, then added to a specific section and thus made
+available to the assessment. A detailed description of the sections can
+be found here.
+

    +

    Tests

+Tests are a special kind of assessment in that they allow for automatic
+processing of the answers and storage of the result in the grading
+system. They have a couple of additional settings as well as the
+possibility to get an overview of the evaluation (what the respondees
+answered and how they scored in total). A description for this can be
+found here.
+

The backend for the test processing, which enables the automatic
+tests, is described in a separate document,
+as it will be parsed while the respondee answers the test, not
+manually. In addition, that document describes how the grades are
+calculated (automatically or manually) for each question. The result is
+stored in the grading package.

    +

    Scoring/Grading

+The grading package will be designed first of all to allow the storing
+of test results. In addition, it will provide functionality to other
+packages to allow rating of their contents (one example of this would
+be Lars' rating package, which could serve as a basis for this). In
+general it should provide a very flexible way of adding scores to the
+system, either automatically (as described above) or manually (e.g.
+this student did a good oral exam).
+

In addition to the possibility to enter scores/ratings, the grading
+package allows for automatic aggregation of scores. This holds
+especially true for tests and classes. A test result will depend on the
+(aggregated) results of all the answers. A class result will depend on
+the results of all the tests a respondee took, in addition to any
+manual grades the professor assigns. Providing a clean UI for this is
+going to be the challenge.
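The aggregation mentioned above (a test result from its answers, a class result from its tests plus manual grades) boils down to a weighted mean. A minimal sketch in Python follows; the helper name and signature are hypothetical, not part of the package:

```python
def aggregate_scores(scores, weights=None):
    """Aggregate per-answer (or per-test) scores into a total result.

    scores  -- list of numeric scores, e.g. points per question
    weights -- optional per-score weights; defaults to equal weighting
    """
    if weights is None:
        weights = [1] * len(scores)
    total_weight = sum(weights)
    if total_weight == 0:
        return 0.0  # nothing to aggregate yet
    return sum(s * w for s, w in zip(scores, weights)) / total_weight

# A test scored 100, 50 and 0 on three answers, the first counting double:
# aggregate_scores([100, 50, 0], weights=[2, 1, 1]) -> 62.5
```

The same function covers both levels: per-question points roll up to a test result, and test results (plus manual grades) roll up to a class result.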

    +

Furthermore, the grading package offers to translate scores (which are
+stored as integer values) into a grade (e.g. the American A-F scheme or
+the German 1-6). This is where it gets the name from, I'd say ;).
+Grading schemes are flexible and can be created on the fly. This allows
+us to support any grading scheme across the world's universities. In
+addition, in the area of knowledge management, grades could be
+converted into incentive points that can be used to reward employees
+for work that received good ratings.
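A grading scheme of the kind sketched above (integer scores translated into a grade on display) can be modelled as an ordered cutoff table. The cutoffs below are purely illustrative assumptions, not the package's actual data model:

```python
# Illustrative cutoff tables: (lower cutoff, grade), highest first.
AMERICAN = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
GERMAN   = [(92, "1"), (81, "2"), (67, "3"), (50, "4"), (30, "5"), (0, "6")]

def score_to_grade(score, scheme):
    """Return the grade whose lower cutoff the score meets or exceeds."""
    for cutoff, grade in scheme:
        if score >= cutoff:
            return grade
    return scheme[-1][1]  # fall through to the lowest grade

# score_to_grade(85, AMERICAN) -> "B"; score_to_grade(85, GERMAN) -> "2"
```

Because a scheme is just data, new schemes can indeed be created on the fly by storing another cutoff table.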

    +

Last but not least, maybe embedded with the workflow system, is the
+possibility to execute actions based on the grade. An example would be
+adding a student to the advanced class if his grade or score reaches a
+certain level. Alternatively, this looks like a good task for the
+curriculum module.
+

    +

    User Experience

+So far we have only talked about the administrator's point of view. A
+respondee will be directed to an assessment from various possible entry
+points. Depending on the settings for the assessment, the respondee
+will be presented with the assessment he is allowed to answer. Though a
+lot of it is redundant, a special page
+has been created to describe this. For the implementation, though,
+there might be additional requirements depending on the various
+administrator settings.
+

    Use Cases

    +The most obvious use case would be a class in a school or university, +that offers automated tests to the students and wants to have them +graded automatically. The description of the assessment system has been +written mainly with this in mind. +

Additionally, you can use the assessment system to collect user
+information. When signing up to a site, the user could be asked to fill
+out an assessment where some of the questions are stored in the
+acs_users table, some in other tables and the rest in the assessment
+package. This way a quick overview of the user can be given
+(aggregating user information in a flexible way). The best illustration
+is to treat the /pvt/home page as a collection of assessment data and
+"change basic information" as one assessment among many.

    +

With a little bit of tweaking and the possibility to add instant
+gratification, aka aggregated result display, it could subsume the poll
+package and make it redundant.

    +

Last but not least, with the ability to display questions to the user
+in a multi-dimensional way, the assessment system is useful for quality
+assurance (how important is this feature / how well do you think we
+implemented it). And, as you might have guessed, for anything the
+current survey module has been used for as well (e.g. plain and simple
+surveys).
+

    +

The grading system on its own would be useful for the OpenACS
+community, as it would allow the handing out of "zorkmints" along with
+any benefits the collection of mints gives to the users. As mentioned
+earlier, this is also very important in a knowledge management
+environment, where you want to give rated feedback to users.
+

    +
    +

    Question Catalogue +
    +
    + Assessment Creation +
    +
    + Sections +
    +
    + Tests +
    +
    + Test Processing +
    +
    + User Experience +
    +
    +
    +

    +
    +
    + + Index: openacs-4/packages/assessment/www/doc/user_interface/item_creation.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/item_creation.html,v diff -u -N --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/user_interface/item_creation.html 29 Jul 2004 09:35:12 -0000 1.1 @@ -0,0 +1,436 @@ + + + + Question Catalogue + + + + +Question +Catalogue: Summary: +

The question catalogue is a central part of the assessment system.
+It deals with storing the various questions that can be used in a
+survey. You are able to add/edit/delete a question of a certain type
+in a certain scope. Furthermore, it allows you to search and browse for
+questions to include in your assessment, as well as import and export
+multiple questions using various formats. This concept is new to survey
+0.1d and changes the design of the survey module considerably. No
+mockups available.

    +

    Spec:
    +All questions have some common ground. +

    +
      +
    • Questions must be scopeable, scope should be: +
        +
      • site-wide for questions that are useful for the whole site +
      • +
      • package-wide, which is useful for dotLRN communities, so you +can have questions which are just useful for your class +
      • +
      • survey-wide, so you can add questions, that are only useful +for one survey. This is the default if you add a question using the +normal survey interface +
      • +
      +
    • +
• Questions should be able to be assigned to multiple categories
+in multiple hierarchies (e.g. categorize by symptoms or by classes).
+These categories must be editable site-wide.
    • +
    • Questions will have a title, so they can easily be found again. +
    • +
    • Questions have to be versionable. This allows a survey to use the +older version of a question, if the question is changed. +
    • +
+For each of the various question types there will be a separate input
+form instead of the currently used method. A user selects a question
+type to add and is then redirected to that question type's add form.
      +
    • True for all Question types
      +The following fields are true for every question type: +
        +
      • Category +
      • +
      • Title +
      • +
      • Scope +
      • +
• Question: A richtext HTML widget (look at bug-tracker for an
+example of this widget)
      • +
• Graphic: A file uploaded from the local hard disk to the
+content repository
      • +
      • Use old answers: boolean (yes/no). If yes, the latest +response of the user to this question will be given as a default value +
      • +
      • Data validation steps are fairly complex because we need two +layers of data validation checks: +

        +
          +
        • Intra-item checks: the user input { exactly +matches | falls within narrow "target" bounds | falls within broader +"acceptable" bounds with explanation} +
        • +
        • Inter-item checks: if { a user input for item a +is A, item b is B, ... item n is N } then { user input for item z is Z +} +
        • +
        +

+Both levels involve stringing together multiple binary comparisons (eg
+0 < input < 3 means checking that 0 < input and input
+< 3), so we need to express a grammar consisting of

        +

        +
          +
        • comparison1 conjunction comparison2 conjunction +... comparison n +
        • +
        • appropriate grouping to define precedence order (or +simply agree to evaluate left to right) +
        • +
        +
      • +
      +
    • +
    • Open Question
+Besides categories and title (which are the same for all questions),
+open questions have the following entry fields:
        +
      • Size of the reply box: Radio buttons (small/medium/large) +
      • +
      • Prefilled Answer Box: richtext widget. The content of this +field will be prefilled in the response of the user taking the survey +
      • +
      • Correct Answer Box: richtext widget. The person correcting +the answers will see the contents of this box as correct answer for +comparison with the user response. +
      • +
      • [NTH]: Button to add predefined comments next to the correct +answer box. This would be shown to the manual corrector to quickly +choose from when manually scoring the answer. +
      • +
      +
    • +
    • Calculation:
      +This type of question will not be supported. But we should make sure we +can take care of that type while importing the data. Therefore we have +to know the values. And while we are at it, we can as well just +generate the input form :-). +
        +
      • Formula: string +
      • +
      • Units +
      • +
      • Value (in %): integer +
      • +
      • Required (boolean) +
      • +
      • Ignore Space (boolean) +
      • +
      • Ignore spell checking (boolean) +
      • +
      • General Feedback: richtext +
      • +
      +
    • +
    • Short Answer Question:
      +
        +
      • Number of Answerboxes: Integer Selectbox. This will control +how many answer boxes the respondee will see. +
      • +
• Upper/Lowercase: Radio boolean. This controls whether we treat
+the response case-sensitively when comparing it to the correct answers.
      • +
• The question creator has the option to define multiple possible
+correct answers that will be matched against the response of the user
+in various ways. For each possible answer the following fields are
+given:
          +
        • Answer: short_text. This contains the answer string that +will be matched against the response +
        • +
• Value in %: short integer: How many percentage points a
+match will be awarded.
        • +
        • Size: Integer Select: size of the input box (small, +medium, large)
        • +
        • Compare by: Select (equal, contains, regexp). This +defines how the comparison between the answer string and the response +shall happen.
        • +
• Allow in answerbox: (multiple select box with "All" and
+the numbers from 1 to x, where x is the number of answerboxes from
+above. For sure this only works with JS enabled :)). Defines the
+answerboxes the user can fill out that shall be matched with this
+answer.
        • +
        +
      • +
      +
    • +
    • Matching Question:
+Matching questions are useful for matching items on the left with
+pull-down menus on the right-hand side of a survey. The number of items
+on the left is identical to the number of items on the right-hand side.
        +
      • Settings: +
          +
        • Distribution of points: boolean (all or nothing / +increasing). All or nothing will either give 100%, if all correct +answers are given, or 0% else. Increasing will give (number of correct +matches / number of total matches) *100% points. +
        • +
        • Allow negative: boolean (yes/no). This will allow a +negative percentage as well (as the total result).
        • +
        +
      • +
      • A couple of match entries will be presented below the +settings. Each one will consist of: +
          +
        • Match item: This is the item which will be displayed on +the left side of the question presented to the respondee. +
        • +
        • Matched item: This is the correct item from the select +box on the right side. For each match item on the left there will be a +select box on the right with ALL the matched items (when taking the +survey, that is...) +
        • +
        +
      • +
      • In addition to submit, there is another button to allow +further answers to be filled in. Typed in values shall be remembered +and 4 more answerboxes be shown. +
      • +
      +
    • +
    • File upload question: +A file upload question will allow the respondent to upload a file. No +additional settings but the usual for every question. +
    • +
    • Multiple Choice question: +Multiple Choice questions allow the respondee to choose from multiple +alternatives with the possibility to answer more than one at a time. +
        +
      • Settings: +
          +
        • Allow Multiple: boolean (yes/no). This will determine if +the respondee has the option to choose multiple possible answers for +his response. +
        • +
        • Select Box: boolean (yes/no). Will display a select box +or radio/checkbox otherwise. +
        • +
        • Distribution of points: boolean (all or nothing / +increasing). All or nothing will either give 100%, if all correct +answers are given, or 0% else. Increasing will give (number of correct +matches / number of total matches) *100% points. +
        • +
        • Allow negative: boolean (yes/no). This will allow a +negative percentage as well (as the total result).
        • +
        +For each (possible) answer we have a couple of fields: +
          +
• Correct answer: boolean, radio with graphic (red x, green
+check) (yes/no). This marks whether the current answer is a correct one.
        • +
        • Answer: Richtext widget. +
        • +
        • Value: percentage value this answer gives to the +respondee +
        • +
• Reply: richtext widget. This is a reply the student can
+see at the end of the test, giving him some more information on the
+question he chose.
        • +
        +
      • +
      • In addition to submit, there is another button to allow +further answers to be filled in. Typed in values shall be remembered +and 4 more answerboxes be shown. +
      • +
• Additionally there is a button "copy", which copies the
+contents of this question to a new question after you give it a new
+title.
      • +
      • [FE]: Possibility to randomly choose from the options. This +would add a couple of fields: +
          +
        • To each answer: Fixed position: Select Box, Choose the +mandatory position when displaying the option (e.g. "none of the +above"). +
        • +
        • Number of correct answers: integer, defining how many +correct options have to be displayed. Check if enough correct answers +have been defined. +
        • +
        • Number of answers: integer, defining how many options +shall be displayed in total (correct and incorrect). Check if enough +answers are available. +
        • +
        • Display of options: Numerical, alphabetic, randomized or +by order of entry. +
        • +
        +
      • +
      +
    • +
    • [FE]: Rank question.
      +Rank questions ask for the answers to be ranked. +
        +
      • Rank Type: Boolean (alphabetic, numeric). Shall the rank be +from a to z or from 1 to n. +
      • +
      • Only unique rank: Boolean (yes/no). Shall the ranking only +allow unique ranks (like 1,2,3,5,6 instead of 1,2,2,4,5) +
      • +
• Straight order: Boolean (yes/no). Shall the rank
+be in a straight order, or is it allowed to skip values (1,2,3 vs.
+1,3,4)?
      • +
      • For each answer we ask the following questions: +
          +
        • Answer: Richtext widget. +
        • +
        • Rank: correct rank +
        • +
        +
      • +
      • In addition to submit, there is another button to allow +further answers to be filled in. Typed in values shall be remembered +and 4 more answerboxes be shown. +
      • +
      +
    • +
    • [FE]: Matrix table (blocked questions)
+A matrix table allows multiple questions with the same answers to be
+displayed in one block. At the moment this is done in the section setup
+(if all questions in a section have the same answers they are shown in
+a matrix). One could think about making this a special question type on
+its own.
    • +
    +Only site wide admins will get to see the following question types: +
      +
    • Database question:
      +The answer to this question will be stored in the database. The +question has the following additional fields: +
        +
• Table Name: short_string. This is the name of the table that
+is being used for storing the responses.
      • +
      • Column: short_string. This is the column of the table that is +used for storing the responses. +
      • +
      • Key Column: short_string. This is the column of the table +that matches the user_id of the respondee.
      • +
      +
    • +
    +Concerning permissions here is the current thinking: +
      +
    • A question can be changed only by the creator or any person that +the creator authorizes. To keep it simple for the moment, a person that +is authorized by the creator has the same rights as the creator +himself. +
    • +
• If a question is changed, all survey administrators whose surveys
+use the question are notified of the change and given the opportunity
+to upgrade to the new version or stick with their revision of the
+question.
    • +
    • If an upgrade happens we have to make sure that the survey gets +reevaluated. Unsure about the exact procedure here. +
    • +
    +There needs to be an option to search the question catalogue: +
      +
    • Search term: short_text. What shall be searched for +
    • +
    • Search type: select {exact, all, either}. Search for the exact +term, for all terms or for one of the terms given. +
    • +
• Search in: Checkboxes (Title, Question, Answer, Category, Type).
+Search for the term(s) in the Title, Question, Answer, Category and
+question type (multiple, short_answer ...). Obviously, only the areas
+whose checkboxes are set will be searched.
    • +
    • Browse by category (link). Link to allow browsing for a question +in the category tree. +
    • +
    • The result should show the question title, the type of the +question, a checkbox for inclusion in a survey. The following actions +are possible: +
        +
      • Include the marked questions to the current section, if +section_id was delivered with the search. +
      • +
      • Delete selected questions +
      • +
      • Change scope of selected questions +
      • +
      • Export questions in CSV, Blackboard, WebCT or IMS format. +
      • +
      +
    • +
    +Operations on questions: +
      +
    • View. View the question in more detail (all settings along with a +preview of the question) +
    • +
    • Edit. Edit the current question. On submit: +
        +
      • Store a new version of the question. +
      • +
      • Mail all current survey administrators using this question +about the update.
      • +
      • Include a link which allows the administrators to update +their survey to the latest revision of the question. +
      • +
      • Don't relink the survey to the latest revision if not +explicitly asked for by the survey administrator. +
      • +
      +
    • +
    • Copy. Copy the current question and allow for a new title. The +edit screen should be presented with an empty Title field. +
    • +
• Delete. Delete a question. On the confirmation page show all the
+assessments that currently use this question.
    • +
    +For the future we'd like to see a more sophisticated way to include +images in questions. Currently this can be done using HTML linking, but +a media database would be considerably more helpful and could be reused +for the CMS as well. +
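The two point-distribution modes used by the matching and multiple-choice question types above ("all or nothing" vs. "increasing", with the optional negative total) could be computed as in this sketch; the function name, signature and the treatment of wrong selections are illustrative assumptions, not the package's implementation:

```python
def score_matches(correct_matches, total_matches, mode="increasing",
                  wrong_matches=0, allow_negative=False):
    """Percentage score for a matching or multiple-choice response.

    mode "all_or_nothing": 100% only when every match is correct, else 0%.
    mode "increasing": (correct / total) * 100, optionally reduced by
    wrong selections; clamped at 0 unless allow_negative is set.
    """
    if mode == "all_or_nothing":
        return 100.0 if correct_matches == total_matches else 0.0
    score = (correct_matches - wrong_matches) / total_matches * 100.0
    if not allow_negative:
        score = max(score, 0.0)
    return score

# 3 of 4 correct matches in "increasing" mode:
# score_matches(3, 4) -> 75.0
```

With `allow_negative=True`, a respondee who picks mostly wrong options can end up below zero, matching the "Allow negative" setting described for both question types.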

    +

    Calculation and Database Questions

+I'm not clear from your description what these are. If by Calculation
+questions you mean questions that produce some calculated result from
+the user's raw response, then IMHO this is an important type of
+question to support now and not defer. This is the main type of
+question we use in quality-of-life measures (see demo here).
+These are questions scored by the Likert scale algorithm. If there are
+five potential responses (1,2,3,4, and 5) for a question, and the user
+chose "1" then the "score" is calculated as 0; if "5" then 100; if "3"
+then 50, and so on -- a mapping from raw responses to a 0-100 scale. Is
+this what you mean by a "calculation" question?
+
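For reference, the Likert scoring Stan describes is a linear rescale of a raw response 1..n onto a 0-100 scale; a minimal sketch with an illustrative function name:

```python
def likert_score(response, n_options=5):
    """Map a raw Likert response (1..n_options) linearly onto 0-100."""
    if not 1 <= response <= n_options:
        raise ValueError("response out of range")
    return (response - 1) / (n_options - 1) * 100.0

# likert_score(1) -> 0.0, likert_score(3) -> 50.0, likert_score(5) -> 100.0
```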

    By Database questions, do you mean free text input (via textboxes or +textareas) questions for which there is a "correct" answer that needs +to be stored during question creation? Then when the teacher is +reviewing the student's response, she can inspect the student's +response against the stored answer and determine what degree of +correctness to assign the response? +

    +

    -- Stan Kaufman +on November 09, 2003 06:29 PM (view +details)

    +

    + +
    + + Index: openacs-4/packages/assessment/www/doc/user_interface/main.css =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/main.css,v diff -u -N --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/user_interface/main.css 29 Jul 2004 09:35:12 -0000 1.1 @@ -0,0 +1,225 @@ +body { + background-color: white; + font-family: sans-serif; + margin-top: 3px; + +} + + +a { + color: #336699; +} + +a:visited { + color: purple; +} + +#description { + +} + +.more { + float: right; + margin-right: 8px; +} + +.forum { + margin-top: 0px; + margin-bottom:7px; +} + +.title { + margin-bottom:2px; + margin-top: 5px; + font-weight: bold; +} + +.page-title { +} + +.context { +} + +.item { + padding-bottom: 0px; + margin-bottom: 8px; + margin-top: 0px; + +} + +div.header { + margin-bottom: 15px; +} + +span.chosen { + color: #ccc; +} + +span.navigation td { + font-weight: bold; + color: white; +} + +span.navigation a { + font-weight: bold; + color: white; +} + +span.navigation a:visited { + color: white; +} + +span.statistics { + font-size: 70%; +} + +span.search input { + font-size: 100%; +} + +span.user-status { + font-size: 85%; +} + +div.box { + margin-bottom: 15px; +} + +span.box-title a { + color: white; + font-weight: bold; + padding-left: 2px; + padding-right: 2px; + padding-bottom: 5px; +} + +span.box-title a:visited { + color: white; +} + +div.box-content { + font-size: 85%; + padding-left: 8px; + padding-right: 8px; + padding-top: 5px; +} +div.box-full-content td { + color: white; + font-size: 85%; + padding-left: 2px; + padding-right: 2px; +} + +div.box-full-content a { + color:white; + font-size: 75%; +} + +div.box-full-content a:visited { + color:white; + font-size: 75%; +} + +div.left-panel { + float: left; + width: 20%; +} + +div.login td { + +} + +div.news { + +} + +div.postings { + +} + +div.postings span.forum { + +} + +div.postings 
span.post { + text-indent: 10px; +} + +div.download { + margin-bottom: 15px; +} + +div.right-panel { + float:left; + width: 20%; +} + +div.features { + +} + +div.main-content { + float: left; + width: 50%; + margin-left: 3%; + margin-right: 3%; +} + +div.footer table { + clear: both; + font-size: 70%; + margin-top: 15px; +} + + +span.etp-link { + float: right; +} + + +/* + Community Page +*/ + +div.container-left { + float: left; + width: 39%; +} + +div.container-right { + float: right; + width: 59%; +} + +div.forums { + clear: both; +} + +div.jobs { + clear: both; +} + +div.sites { + clear: both; +} + +div.companies { + clear: both; +} + +div.hosting { + clear: both; +} + +div.forum-post { + margin-left: 30px; + margin-right: 30px; +} + + +dl.faq dt { margin-bottom: 5px; + font-weight: bold; + background-color: #eee; + padding: 2px; } + +dl.faq dd { margin-bottom: 20px; } \ No newline at end of file Index: openacs-4/packages/assessment/www/doc/user_interface/section_creation.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/section_creation.html,v diff -u -N --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/user_interface/section_creation.html 29 Jul 2004 09:35:12 -0000 1.1 @@ -0,0 +1,162 @@ + + + + Sections + + + + + +Section page. This page is for editing information about a section and +adding questions to it. It contains a couple of subpages. +
      +
    • Title: Title of the section +
    • +
    • Display of the Pre Display Checks (with an edit and a remove +link). +
    • +
    • Add new Pre Display Check.
    • +
• Add new Post Display Check. (See the sequencing
+documentation for this).
      +
    • +
    • Use one or all conditions: boolean. Is it mandatory that all +conditions have been met or is one condition enough (for not displaying +this section)
      +
    • +
• Add question(s) from question database: Link to the search page,
+which allows searching for questions that can be added to this section
+(multiple add possible).
    • +
    • Add question: Link to the question catalogue entry form with a +return_url that adds the question from the catalogue to this section +and return to the section page. +
    • +
    • Sort questions: Select (random, manual, alphabetic, numerical) +
    • +
    • Number of questions: Number of questions that will be displayed +in this section. Only useful if we randomize. +
    • +
• Custom Template: link. Opens a page with the current layout of
+the section in a textarea to be edited
    • +
• At the bottom, display all questions that have been added to this
+section with the following items
        +
      • Change order of questions (arrow navigation) +
      • +
      • Title of the question +
      • +
      • Link to edit question properties with regards to this section +
          +
        • Points: integer. Number of Points this question is worth +in the section +
        • +
        • Mandatory: boolean (yes/no). Is this question mandatory +in this section. It will be displayed in any case, regardless of +randomizing. +
        • +
• Fixed Position: select (1,2..., bottom). Position at which the
+question has to be displayed, regardless of randomizing.
        • +
        +
      • +
      +
    • +
    +The branch conditions page allows the conditions to be added under +which this section will be called (branch conditions). You have two +general options +
      +
    • Branch by question. This kind of branch depends on previous +answers. A table of all multiple choice / boolean questions will be +given to the creator along with their possible answers. +
        +
• Each question has a checkbox to determine whether this question
+shall be included in this branch condition, and a radio button
+determining whether all answers or just one have to be given (e.g. if
+we have multiple correct answers, we might want to branch into this
+section only if all answers have been selected by the respondee, or if
+just one has).
      • +
• The answers have checkboxes, with the correct answers checked
+by default for multiple choice questions. All other questions will only
+be displayed if they assign a percentage value to the answer. In this
+case a textfield is given with the possibility to give a range (10-100)
+or separate percentages (10, 100, 200).
      • +
      • The display of this section depends on whether the valid +answers have been given to all or just one of the questions that have +been checked (as you might have guessed, we need a radio button for +this below the table). +
      • +
      +Questions that will be displayed depend on the position of the section. +Only questions that could have been answered in the assessment before +this section is displayed will be shown. +
    • +
    • Branch by result. Instead of relying on one or multiple answers +we check for a result in a previous section. This can only work in a +test environment (so don't display this option if we are not dealing +with a test). +
        +
      • Section: select. This will display a list of all previous +sections. The selected section will be used for the computation. +
      • +
• Calculation: select (median, distractor, absolute number of
+points). What shall be computed to determine whether the user is
+allowed to see this section.
      • +
      • From / To value: integer. Two fields to display the valid +range for which this section will be displayed to the user. +
      • +
      +
    • +
• It is imaginable that a combination of both methods makes sense,
+so we should take this into account when creating the UI.
    • +
    +
    +
    +
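The branch-by-result option described above reduces to computing one statistic over a previous section's points and testing it against the From/To range. A sketch under that reading (the "distractor" calculation is omitted here, since its definition is not spelled out; the function name is illustrative):

```python
from statistics import median

def branch_applies(section_points, calculation, low, high):
    """Decide whether a branch-by-result condition shows this section.

    section_points -- per-question points from the referenced section
    calculation    -- "median" or "absolute" (sum of all points)
    low, high      -- inclusive From/To range entered by the creator
    """
    if calculation == "median":
        value = median(section_points)
    elif calculation == "absolute":
        value = sum(section_points)
    else:
        raise ValueError("unsupported calculation: %s" % calculation)
    return low <= value <= high

# branch_applies([10, 20, 30], "absolute", 50, 100) -> True (sum is 60)
```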

    Order of creation: sections and questions

    +

    Requiring the questions to be written first +before the sections are created by selecting and "inserting" the +questions into the section is reasonable. But it would also be very +useful to be able to import entire sections from other assessments into +a new assessment. In clinical trials, there are sets of questions that +show up in many forms, and it shouldn't be necessary to recreate these +sets every time a new assessment is being designed.

    +

    I think that the data model should provide catalogues of all levels +of the hierarchical assessment structures to support, say, inserting +one entire assessment into the middle of another. This seems like a +generalization of the mechanism you suggest here.

    +

    -- Stan Kaufman +on November 18, 2003 05:19 PM (view +details)

    +

    Branch conditions

    +It seems to me that there are two types of branch conditions: +

    + +

    +The datamodel and UI must thus support creating and processing a) these +criteria; and b) the paths. This seems more or less to be what you've +said, I think. +

    +

    -- Stan Kaufman +on November 18, 2003 05:32 PM (view +details)

    +

    +
    + + Index: openacs-4/packages/assessment/www/doc/user_interface/tests.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/tests.html,v diff -u -N --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/user_interface/tests.html 29 Jul 2004 09:35:12 -0000 1.1 @@ -0,0 +1,278 @@ + + + + Tests + + + + + +A test is a special kind of accessment that allows the respondee's +answers to be rated immediatly. Unless otherwise stated, all pages +described are admin viewable only. +
      +
    • Settings +
        +
• Assessment: a selectbox containing all the assessments in the
+same subnode as the test package, so the administrator knows for which
+survey he will create a test.
      • +
      • Valid Results: select (first, last, highest, median). +Describes which points to choose if a user has the option to take a +test multiple times. Median is the median of all tries. +
      • +
      • Publication of Results: radio. Publicate results once test is +submitted, once test has been evaluated by a TA, never. This is always +with regards to the respondee. Admins can view the results all the +time. +
      • +
      • What to show in publication: checkboxes. +
          +
        • Question. The question that has been asked +
        • +
        • Answer. The answer the respondee has given +
        • +
        • Points. The number of Points the respondee has gotten for +the answer +
        • +
        • Evaluation comment. The evaluation comment given by the +evaluator +
        • +
        • Correct Answer. The correct answer of the question +
        • +
        • Feedback. The feedback that is stored with the given +answer +
        • +
        • Feeback for question. The feedback that is stored with +the question +
        • +
        • Total result. The total result of the test (at the +buttoms) +
        • +
        +
      • +
      • Submit each answer seperatly: boolean (yes/no). Does the user +have to submit each answer seperatly. +
      • +
      • Answer changeable: boolean (yes/no). Can the user change a +submitted answer. +
      • +
      • Finish button: boolean (yes/no). Allow the respondee to +finish the test early, despite not having answered all the questions. +The respondee will not have the option to go back and continue the test +afterwards (like handing out your written test to the TA in an on site +exam). +
      • +
      • Allow answers to be seen: boolean (yes/no). This will +disallow the respondee to revisit his answers. +
      • +
      +
    • +
    • Evaluation overview. This is a page with a table of all respondees with
        • Smart display (to limit the number of respondees per page).
        • Name. Name of the respondee, maybe with email address.
        • Test result (with the maximum number of points in the header). Number of points the respondee has achieved in this test.
        • All tries with
            • Points. Number of points for this try (out of the scoring system).
            • Time. Time taken for a try (yes, we will have to store the time needed for a try).
            • Status. Status of the try (not finished, finished, auto-graded, manually graded).
            • Link to evaluate a single response (human grading in test-processing.html).
            • The try that is used for scoring the respondee is displayed with a green background. If we take the median of all tries, mark all of them green.
        • Furthermore, links to details about the test, reports and summary are given.
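    The "Valid Results" setting above decides which try counts for a respondee in the evaluation table. A minimal sketch of that selection logic in Python (illustrative only — the package itself is written in Tcl, and the function name is hypothetical):

```python
import statistics

def valid_result(scores, policy):
    """Pick the score that counts, per the 'Valid Results' setting.
    `scores` is the list of try scores in chronological order;
    `policy` is one of: first, last, highest, median."""
    if not scores:
        return None
    if policy == "first":
        return scores[0]
    if policy == "last":
        return scores[-1]
    if policy == "highest":
        return max(scores)
    if policy == "median":
        return statistics.median(scores)
    raise ValueError("unknown policy: %s" % policy)
```

    Note that under the median policy the counting score need not equal any single try, which is why the table marks all tries green in that case.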
    +
    Test processing happens in a multiple-stage process.
    1. The system evaluates the test as well as it can.
    2. The results of this auto-grading are displayed in the evaluation table for the admin (see test specifications).
    3. The test result is stored in the scoring system.
    4. Staff can manually human-grade the test. This is mandatory for open questions for the test to be completely graded.
    5. The result of the human grading overwrites the auto-grading in the scoring system.
    Autograding differs between the types of questions the test has. For future versions it should be possible to easily add other types of information that will be autograded. All autograding follows this scheme:
    1. The answer is taken from the respondee's response.
    2. It is compared with the correct answer.
    3. A percentage value is returned.
    4. The percentage value is multiplied with the points for the question in the section (see the assessment document for more info).
    5. The result is stored together with a link to the response for the particular question in the scoring system.
    6. Once all questions are finished, the result for the test is computed and stored with a link to the response in the scoring system.
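    The comparison-to-percentage-to-points scheme above can be sketched as follows. This is an illustrative Python sketch, not the package's actual (Tcl) code; the function names and the `grader` callback are hypothetical:

```python
def autograde_question(response_answer, correct_answer, max_points, grader):
    """Steps 1-4 of the scheme: take the answer from the response,
    compare it via the grader callback (which returns a percentage),
    then scale by the question's points in the section."""
    percentage = grader(response_answer, correct_answer)
    return percentage / 100.0 * max_points

def autograde_test(question_results):
    """Step 6: compute the total test result from per-question results."""
    return sum(question_results)

# a trivial grader: full credit for an exact match, else nothing
exact = lambda a, b: 100 if a == b else 0
```

    Keeping the grader as a pluggable callback is what makes it "easy to add other types of information that will be autograded" later.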
    Autograding is different for each type of question.
    • Multiple Choice
        • All or nothing. Here we check whether all correct answers have been chosen by the respondee and none of the incorrect ones. If this is the case, the respondee gets 100%, otherwise nothing.
        • Cumulative. Each answer has a certain percentage associated with it. This can also be negative. For each option the user chooses he will get the according percentage. If negative points are allowed, the user can get a negative percentage. In any case, a user can never get more than 100% or less than -100%.
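    Both multiple-choice modes, including the clamp to [-100%, 100%], can be sketched like this (an illustrative Python sketch; the real package is Tcl and these function names are invented):

```python
def mc_all_or_nothing(chosen, correct):
    """100% iff exactly the correct options were chosen, else 0%."""
    return 100 if set(chosen) == set(correct) else 0

def mc_cumulative(chosen, option_percentages, allow_negative=True):
    """Sum the per-option percentages of the chosen options.
    The total is clamped to [-100, 100], or [0, 100] when negative
    results are not allowed."""
    total = sum(option_percentages[o] for o in chosen)
    lower = -100 if allow_negative else 0
    return max(lower, min(100, total))
```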
    • Matching question
        • All or nothing: the user gets 100% if all matches are correct, 0% otherwise.
        • Equally weighted: each match is worth 100/{number of matches} percent. Each correct match will give the according percentage, and the end result is the sum of all correct matches.
        • Allow negative: if we have equally weighted matches, each correct match adds the match percentage (see above) to the total, and each wrong match subtracts the match percentage from the total.
        • Obviously it is only possible to get up to 100% and not less than -100%.
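    The three matching-question modes can be sketched in one function (illustrative Python only; the mappings and function name are hypothetical, with left-side items mapped to the right-side items the respondee picked):

```python
def match_score(responded, correct, mode="equally_weighted"):
    """Score a matching question. `responded` and `correct` map
    left-side items to right-side items; `mode` is one of
    all_or_nothing, equally_weighted, allow_negative."""
    if mode == "all_or_nothing":
        return 100.0 if responded == correct else 0.0
    per_match = 100.0 / len(correct)   # each match's weight
    total = 0.0
    for left, right in responded.items():
        if correct.get(left) == right:
            total += per_match         # correct match adds its weight
        elif mode == "allow_negative":
            total -= per_match         # wrong match subtracts it
    return max(-100.0, min(100.0, total))
```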
    • Short answer question
        1. For each answerbox the possible answers are selected.
        2. The response is matched with each of the possible answers:
            • Equals: only award the percentage if the strings match exactly (case sensitivity depends on the setting for the question).
            • Contains: if the answer contains exactly the string, points are granted. If you want to give percentages for multiple words, add another answer to the answerbox (so instead of having one answerbox containing "rugby soccer football", have three, one for each word).
            • Regexp: a regular expression is run on the answer. If the result is 1, grant the percentage.
        3. The sum of all answerbox percentages is granted to the response. If allow negative is true, even a negative percentage can be the result.
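    A per-answerbox matcher covering the three comparison kinds might look like this. This is an illustrative Python sketch under an assumed first-match-wins rule per answerbox (the spec above does not pin down what happens when several possible answers match); names are hypothetical:

```python
import re

def answerbox_percentage(response, possible_answers, case_sensitive=False):
    """`possible_answers` is a list of (kind, pattern, percentage)
    tuples, kind being 'equals', 'contains' or 'regexp'. Returns the
    percentage of the first matching possible answer, else 0."""
    text = response if case_sensitive else response.lower()
    for kind, pattern, pct in possible_answers:
        pat = pattern if case_sensitive else pattern.lower()
        if kind == "equals" and text == pat:
            return pct
        if kind == "contains" and pat in text:
            return pct
        if kind == "regexp" and re.search(pat, text):
            return pct
    return 0
```

    The test total for the question would then be the sum of `answerbox_percentage` over all answerboxes, as step 3 above describes.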
    Human grading will display all the questions and answers of a response, along with the possibility to re-evaluate the points and give comments. The header will display the following information:
    • Title of the test
    • Name of the respondee
    • Number of the try / total number of tries
    • Status of the try (finished, unfinished, autograded, human-graded (by whom))
    • Start and end time for this try
    • Time needed for the try
    • Total number of points for this test: integer. Prefilled with the current value for the response.
    • Comment: richtext. Comment for the number of points given. Prefilled with the current version of the comment.
    For each question the following will be displayed:
    • Question title.
    • Maximum number of points for this question.
    • Question.
    • New points: integer. Prefilled with the current value for the response. This gives staff the possibility to award a different number of points for whatever reason.
    • Comment: richtext. Comment for the number of points given. Prefilled with the current version of the comment.
    • Answer. The answer depends on the question type.
        • Multiple Choice: the answer is made up of all the options, with a correct/wrong statement (in case we have an all-or-nothing type of multiple choice) or a percentage in front of them (depending on the response), and a small marker that shows which option the respondee picked. Correct/wrong depends on whether the respondee answered correctly for this option (if he picked an option that he should not have picked, this option will get a "wrong" in front).
        • Matching question: the item on the left side and the picked item are displayed in a connecting manner. A correct/wrong statement will be added depending on whether the displayed (and responded) match is correct.
        • Open Question: the answer is displayed as written by the user. Furthermore, the correct answer is displayed as well. This should allow the TA to easily come to a conclusion concerning the number of points.
        • Short Answer: for each answerbox the response will be displayed along with the percentage it got and all the correct answers for this answerbox (with percentage). Might be interesting to display regexps here :-).
    +
Index: openacs-4/packages/assessment/www/doc/user_interface/user_experience.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/user_experience.html,v
diff -u -N
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/user_interface/user_experience.html	29 Jul 2004 09:35:12 -0000	1.1
@@ -0,0 +1,118 @@

User Experience

User experience describes the various steps the USER sees and what he can do when taking an assessment. When answering a section, a couple of things happen:
      +
    • A permission check will be made to determine whether the user is allowed to take the assessment.
        • Is the user allowed to take the survey at all (acs_permission check)?
        • Are all conditions for taking the assessment fulfilled?
        • Is there still a try left for the user?
    • The start time of the response will be logged.
    • The first section will be delivered to the user for answering.
    Depending on the settings, the display of the assessment will vary:
    • If all answers have to be submitted separately, a submit button will be shown next to each answer. If the user hits the submit button next to the question, the answer will be stored in the response.
    • Else the normal section view will be displayed with a submit button at the end of the section.
    • If the user can change the results of a question, the submitted question will be displayed with an edit button instead of the submit button. Otherwise only the answer will be displayed. In either case the answer is displayed as text on the page, not within a form element.
    • If the respondee cannot see his old answers, don't display them once submitted. Make sure the back button does not work (e.g. using POST data?). Not sure how much sense it makes to display an edit button :).
    • If there is a time limit in which the respondee has to answer the assessment, display a bar with the time left.
    • If we have to show a progress bar, show it and renew it after each submit (so also for each question).
    • Display a "finish test" button at the end of the page to "hand the test to the TA", if this is allowed.
    • Allow for cancellation of the test with a cancel button. The result will not be stored, but the test will be marked as taken.
    • If immediate answer validation (aka ad_form check) for a question is true, check whether the answer is valid; if not, notify the user and do not store the result.
    The processing has to take some additional notes into consideration:
    • Branching does not always depend on an answer but may also depend on the result within a section (branch by distractor, median).
    • Questions within a section can be randomly displayed. Take into account that not all questions have to be displayed, and that some of the questions might be mandatory, and even mandatory in position.
    • When displaying random questions, the randomizing element has to be the same for each response_id (the user shall not have the option to see different questions just by hitting reload).
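    The reload-stable randomization in the last point can be had by seeding the random number generator with the response_id. A minimal Python sketch of the idea (illustrative only; the package is Tcl and the function name is hypothetical):

```python
import random

def display_order(response_id, question_ids):
    """Return a random-but-stable question order for one response.
    Seeding the RNG with the response_id means every reload of the
    same response produces the same order, while different responses
    still get different orders."""
    rng = random.Random(response_id)   # stable seed per response
    order = list(question_ids)
    rng.shuffle(order)
    return order
```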
    Once the assessment has been finished:
    • Display an optional electronic signature file upload with an "I certify this test and state it is mine" checkbox. This will be stored in addition to the test.
    • Notifications shall be sent to the admin, staff and respondee.
    • If we shall display the results to the respondee immediately after finishing the assessment, show them to him / her. Display the comments along with them, depending on the settings.
    • If we have a special score, show this result to the user (e.g. if 90% means "you are a dream husband", display this along with the 90%).
    • Display a link with the possibility to show all the questions and answers for printout.
    • Store the end time with the response.
    An administrator can take the survey in various modes, which he can select before the first section is displayed.
    • Normal mode: the administrator is treated as a normal respondee; the response will be stored in the system.
    • Test mode: the administrator sees the survey as a normal respondee; the response will not be stored in the system.
    • Optional: display correct answers when taking the assessment.