Index: openacs-4/packages/assessment/www/doc/as_items.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/as_items.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/as_items.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,287 @@
+
+
+
+
+ AS_Items
+
+
+
+Overview
+
+The Item and Section catalogues are central parts of the assessment
+system. These repositories support reuse of Assessment components by
+storing the various Items (or questions, if you like) and groups of
+Items (ie Sections) that can be used in an assessment. You can
+add, edit and delete an Item of a certain type within a certain scope.
+Furthermore the catalogues allow you to search and browse for questions
+for inclusion in your assessment, as well as import and export multiple
+questions using various formats.
+
+In this description we will only discuss the design
+implications for Items. Green colored tables have to be
+internationalized.
+
+Each Item is of a specific Item Type like "Multiple Choice
+Question" or "Free Text". Each Item Type
+adds additional Attributes to the Item, thereby making it pretty
+flexible. Additionally each Item has a related display type
+storing information on how to display this Item. This way we can create
+an adp snippet which we can include to display a certain Item (the
+snippet is stored de-normalized in the as_items table and updated on
+every change to the Item or the Item Type).
+
+Categorization and internationalization will make it into
+OpenACS 5.2; therefore we are not dealing with them in Assessment
+separately but use the (to be) built-in functionality of OpenACS 5.2.
+
+Additionally we have support functionality for an Item. This includes
+the help functionality. To give Assessment authors flexibility in
+adapting Item defaults, help messages, etc for use in different
+Assessments, we abstract out a number of attributes from as_items into
+mapping tables where "override" values for these attributes can
+optionally be set by authors. If they choose not to set overrides, then
+the values originally created in the Item apply.
+
+Separately we will deal with Checks on Items. These will allow
+us to make checks on the input (is the value given by the user actually
+a valid value?), branches (if we display this item, which responses
+have to have been given) and post-input checks (how many points does
+this answer give).
+
+Here is the graphical schema for the Item-related subsystems,
+including the Item Display subsystem described here.
+
+
+
+
+
+
+
+Specific Entities: Core Functions
+
+Here are the components of the Item model in Assessment:
+
+
+
+ - Items (as_items) are the "questions" that constitute the
+atomic focus of the Assessment package. Each item is of a certain type
+that can give the item additional attributes, making it really
+flexible. The following attributes are common to all item types.
+
+
+ - item_id
+
+ - name - some phrase used in admin UIs
+
+ - item_text - the primary "label" attached to an Item's
+display
+
+ - item_subtext - a secondary label, needed for many kinds of
+questions
+
+ - field_code - a short label for use in data output header
+rows, etc
+
+ - definition - some descriptive text
+
+ - enabled_p - whether Item is released for actual use
+
+ - required_p - whether Item must be answered (default value,
+can be overridden)
+
+ - item_default - optional field that sets what the Item will
+display when first output (eg text in a textbox; eg the defaults that
+ad_dateentrywidget expects: "" for "no date", "0" for "today", or else
+some specific date set by the author; see
+this example)
+
+ - data_type - This is the expected data_type of the answer.
+Previously called "abstract_data_type", but we omit the superfluous
+"abstract" term; selected from the data types supported in the RDBMS:
+
+
+ - integer
+
+ - numeric
+
+ - exponential - stored in the db as a varchar; of form
+9.999e99
+
+ - varchar
+
+ - text
+
+ - date
+
+ - boolean (char(1) 't' 'f' in Oracle)
+
+ - timestamp (should work for all coarser granularities
+like date etc)
+
+ - content_type -- a derived type: something in the CR
+(instead of a blob type since we use the CR for such things now)
+
+
+ This value was previously stored with each
+as_item_type. For retrieval purposes it makes more sense, though, to
+store it with the item itself, as this saves us from following each
+relationship to the as_item_type objects when we want to retrieve the
+answer.
+
+
+ - max_time_to_complete - optional max number of seconds to
+perform Item
+
+ - adp_chunk - a denormalization to cache the generated
+"widget"
+for the Item (NB: when any change is made to an as_item_choice related
+to an as_item, this will have to be updated!)
+
+
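+To make the attribute list above concrete, here is a minimal DDL
+sketch of as_items (PostgreSQL; the column names follow the list, but
+the types and constraints are illustrative assumptions, not the final
+datamodel):
+
+    create table as_items (
+        item_id              integer
+                             -- would reference cr_revisions(revision_id)
+                             -- if Items live in the CR as discussed
+                             constraint as_items_pk primary key,
+        name                 varchar(255),
+        item_text            text,
+        item_subtext         text,
+        field_code           varchar(100),
+        definition           text,
+        enabled_p            char(1) default 'f'
+                             constraint as_items_enabled_p_ck
+                             check (enabled_p in ('t','f')),
+        required_p           char(1) default 'f',
+        item_default         text,
+        data_type            varchar(20),
+        max_time_to_complete integer,
+        adp_chunk            text        -- denormalized display cache
+    );
+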
+
+ Permissions / Scope: Items need a clearly defined scope, in
+which they can be reused. Instead of defining a special scope variable
+we will use the acs permission system to grant access rights to an
+item.
+
+
+ - Read: An assessment author (who is granted this permission)
+can reuse this item in one of his sections. (NB: Usually the original
+author has admin privileges.)
+
+ - Write: Author can reuse and change this item.
+
+ - Admin: Author can reuse, change and give permission on this
+item
+
+
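+In PostgreSQL these grants reduce to calls to the standard OpenACS
+permissions API; a sketch (the ids are placeholders):
+
+    -- let another author reuse, or additionally change, this Item
+    select acs_permission__grant_permission(:item_id, :author_id, 'read');
+    select acs_permission__grant_permission(:item_id, :author_id, 'write');
+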
+
+ - Item Types (as_item_types)
+define types of items like "Open Question", "Calculation" and others.
+The item type will also define in what format the answer should be
+stored. For each item type a cr_item_type will be generated. Each
+object of this type is linked to the primary object of the item (see
+above) using relationships. This has the benefit that we split the core
+attributes of an item from the type-specific ones and the display ones
+(see down below). Using cr_item_types allows us to create and
+reuse standard items (e.g. for the Likert scale), by linking different
+questions with the answer possibilities (and the same attributes) to
+one as_item_type object. If we have objects that are linked this way,
+we can generate the matrix for them.
+ Alternatively we could make the as_item_type object a child of the
+as_item and therefore avoid storing all the relations. In that case
+we'd lose the ability described before, but it might be a considerable
+performance gain.
+
+ Common attributes for *all* item_types:
+
+
+ - item_type_id
+
+ - item_type_name - name like "Question Type"
+
+ - item_type_description
+
+
+
+A list of all item types and their attributes can be found in the requirements
+section.
+
+
+
+ - Item Choices (as_item_choices)
+contain additional information for all multiple choice item_types.
+Obvious examples are radiobutton and checkbox Items, but pop-up_date,
+typed_date and image_map Items also are constructed via Item Choices.
+Each choice is a child of an as_item_type object. Note the difference:
+a choice does not belong to an item, but to the instance of the
+item_type! This way we can reuse multiple choice answers more easily.
+It is debatable if we should allow n:m relationships between choices
+and item_types (thereby allowing the same choice to be reused). In my
+opinion this is not necessary, therefore we have the parent_id.
+Following the Lars Skinny Table approach of conflating all the
+different potential data types into one table, we provide columns to
+hold values of the different types and another field to determine which
+of them is used. Item Choices have these attributes:
+
+
+ - choice_id
+
+ - parent_id (belonging to an as_item_type object).
+
+ - name
+
+ - choice_text - what is displayed in the choice's "label"
+
+ - data_type - which of the value columns has the information
+this Choice conveys
+
+ - numeric_value - we can stuff both integers and real numbers
+here - this is where "points" could be stored for each Choice
+
+ - text_value
+ - boolean_value
+
+ - content_value - references an item in the CR -- for an
+image, audio file, or video file
+
+ - shareable_p - whether Choice is shareable; defaults to 't'
+since this is the whole intent of this "repository" approach, but
+authors should have the option to prevent reuse
+
+ - feedback_text - where optionally some preset feedback can be
+specified by the author
+
+
+ NB: In earlier versions (surveys/questionnaire), each Choice
+definition carried with it any range-checking or text-filtering
+criteria; these are now abstracted to the Item-Checks and Inter-Item
+Checks.
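+
+As a sketch of how the skinny-table discriminator works at retrieval
+time (PostgreSQL; the data_type values are assumed for illustration):
+
+    select case data_type
+             when 'numeric' then cast(numeric_value as varchar)
+             when 'boolean' then cast(boolean_value as varchar)
+             when 'content' then cast(content_value as varchar)
+             else text_value
+           end as choice_value
+      from as_item_choices
+     where choice_id = :choice_id;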
+
+
+
+
+Help System
+The help system should allow a small "?" to appear
+next to the title of an object that has a help text associated with it.
+Help texts are to be displayed in the nice bar that Lars created for
+OpenACS in the header. Each object can have multiple help texts
+associated with it (which will be displayed in sort order with each hit
+of the "?"), and we can reuse the help texts, making this an n:m
+relationship (using cr_rels). E.g. you might want to have a default
+help text for certain cr_item_types; that's why I was thinking about reuse...
+Relationship attributes:
+
+
+ - item_id
+
+ - message_id - references as_messages
+
+ - sort_order (in which order do the messages appear)
+
+
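+As a sketch, the mapping plus ordered retrieval might look like this
+(hypothetical table name; in practice the relation would be built on
+cr_rels as noted above):
+
+    create table as_item_message_map (
+        item_id    integer not null,
+        message_id integer not null,   -- references as_messages
+        sort_order integer not null,
+        constraint as_item_msg_map_pk primary key (item_id, message_id)
+    );
+
+    -- all help texts for one item, in display order
+    select m.message
+      from as_messages m, as_item_message_map map
+     where map.item_id = :item_id
+       and m.message_id = map.message_id
+     order by map.sort_order;
+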
+
+Messages (as_messages) abstracts out help messages (and other
+types of messages) for use in this package. Attributes include:
+
+
+ - message_id
+
+ - message
+
+ - locale (Actually I hope the i18n system Joel proposed makes this
+obsolete).
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/data-modell.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/Attic/data-modell.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/data-modell.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,111 @@
+
+
+
+
+ Assessment Data Model Overview
+
+
+Overview
+
+At its core, the Assessment package defines a hierarchical container
+model of a "survey", "questionnaire" or "form". This approach not only
+follows the precedent of existing work; it also makes excellent sense
+and no one has come up with a better idea.
+
+
+
+ - One Assessment consists of
+
+ - One or more Sections which each consist of
+
+ - One or more Items which have
+ - Zero or more Choices
+
+
+We choose the terms Assessment-Sections-Items-Choices over
+Surveys-Sections-Questions-Choices partly to reduce naming clashes
+during the transition from the Survey/Questionnaire packages, but mostly
+because these terms are more general and thus suit the broader
+applicability intended for this package.
+
+As is the custom in the OpenACS framework, all RDBMS tables in
+the package will be prefixed with "as_" to prevent further
+naming clashes. Judicious use of namespaces will also be made in
+keeping with current OpenACS best practice.
+
+Several of the Metadata entities have direct counterparts in
+the Data-related partition of the data model. Some standards (notably
+CDISC) rigorously name all metadata entities with a "_def" suffix and
+all data entities with a "_data" suffix -- thus "as_item_def" and
+"as_item_data" tables in our case. We think this is overkill since
+there are far more metadata entities than data entities and in only a
+few cases do distinctions between the two become important. In those
+cases, we will add the "_data" suffix to data entities to make this
+difference clear.
+
+A final general point (that we revisit for specific entities
+below): the Assessment package data model exercises the Content
+Repository (CR) in the OpenACS framework heavily. In fact, this use of
+the CR for most important entities represents one of the main advances
+of this package compared to the earlier versions. The decision to use
+the CR is partly driven by the universal need for versioning and reuse
+within the functional requirements, and partly by the fact that the CR
+has become "the Right Way" to build OpenACS systems. Note that one
+implication of this is that we can't use a couple column names in our
+derived tables because of naming clashes with columns in cr_items and
+cr_revisions: title and description. Furthermore we can handle versioning and internationalization through
+the CR.
+
+Synopsis of The Data Model
+
+Here's a detailed summary view of the entities in the Assessment
+package. Note that in addition to the partitioning of the entities
+between Metadata Elements and Collected Data Elements, we identify the
+various subsystems in the package that perform basic functions.
+
+We discuss the following in detail through the subsequent
+pages, and we use a sort of "bird's eye view" of this global graphic to
+keep the schema for each subsystem in perspective while homing in on
+the relevant detail. Here's a brief introduction to each of these
+sections:
+
+
+ - core - items entities (purple)
+define the structure and semantics of Items, the atomic units of the
+Assessment package
+
+ - core - grouping entities (dark
+blue) define constructs that group Items into Sections and Assessments
+ - sequencing entities
+(yellow-orange) handle data validation steps and conditional navigation
+derived from user responses
+
+ - scoring ("grading") entities
+(yellow-green) define how raw user responses are to be processed into
+calculated numeric values for a given Assessment
+
+ - display entities (light blue)
+define constructs that
+handle how Items are output into the actual html forms returned to
+users for completion -- including page layout and internationalization
+characteristics
+
+ - scheduling entities define
+mechanisms for package administrators to set up when, who and how often
+users should perform an Assessment
+
+ - session data collection
+entities (bright green) define
+entities that store information about user data collection events --
+notably session status and activities that change that status as the
+user uses the system
+
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/data_collection.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/data_collection.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/data_collection.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,360 @@
+
+
+
+
+ Data Collection
+
+
+Overview
+
+The schema for the entities that actually collect, store and retrieve
+Assessment data parallels the hierarchical structure of the Metadata Data Model. In the antecedent
+"complex survey" and "questionnaire" systems, this schema was a simple
+two-level structure:
+
+
+
+ - survey_responses which capture information about which
+survey was completed, by whom, when, etc
+
+ - survey_question_responses which capture the actual user
+data in a "long skinny table" mechanism
+
+
+
+This suffices for one-shot surveys but doesn't support the fine
+granularity of user-action tracking, "save&resume" capabilities,
+and other requirements identified for the enhanced Assessment package.
+Consequently, we use a more extended hierarchy:
+
+
+
+ - Assessment Session which captures information about
+which Assessment, which Subject, when, etc
+
+ - Assessment Data which holds information about the status
+of the entire Assessment (rolled up from the constituent parts)
+
+ - Section Data which holds information about the status of
+each Section
+
+ - Item Data which holds the actual data extracted from the
+Assessment's html forms; this is the "long skinny table"
+
+
+To support user modification of submitted data (of which
+"store&resume" is a special case), we base all these entities in
+the CR. In fact, we use both cr_items and cr_revisions in our schema,
+since for any given user's Assessment submission, there indeed is a
+"final" or "live" version. (In contrast, recall that for any Assessment
+itself, different authors may be using different versions of the
+Assessment. While this situation may be unusual, the fact that it must
+be supported means that the semantics of cr_items don't fit the
+Assessment itself. They do fit the semantics of a given user's
+Assessment "session" however.)
+
+Note that since all these entities derive from the CR, they are also all
+acs_objects and thus automagically have the standard creation_user,
+creation_date etc attributes. We don't mention them separately here.
+
+Also, while this doesn't impact the datamodel structure per se,
+we add an important innovation to Assessment that wasn't used in
+"complex survey" or questionnaire. When a user initiates an Assessment
+Session, an entire set of Assessment objects is created (literally,
+rows are inserted in all the relevant tables as defined by the
+structure of the Assessment). Then when the user submits a form with
+one or more Items "completed", all database actions from there on
+consist of updates in the CR, not insertions. (In contrast, the systems
+to date all wait to insert into "survey_question_responses", for
+example, until the user submits the html form.) The big advantage of
+this is that determining the status of any given Item, Section or the
+entire Assessment is now trivial. We don't have to see whether an Item
+Data row for this particular Assessment Session is already there and
+then insert it or else update it; we know that it's there and we just
+update it. More importantly, all of our reporting UIs that show
+Assessment admins the current status of users' progress through the
+Assessment are straightforward.
+
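+A sketch of that flow in plain SQL (ignoring the CR revisioning
+machinery for simplicity; the status values are placeholders, and the
+insert actually runs once per Item defined for the Assessment):
+
+    -- at session initiation: one row per Item, all still unanswered
+    insert into as_item_data
+        (item_data_id, session_id, item_id, item_status)
+    values
+        (:item_data_id, :session_id, :item_id, 'initialized');
+
+    -- on each form submission: updates only, never conditional inserts
+    update as_item_data
+       set text_answer = :response,
+           item_status = 'answered'
+     where session_id = :session_id
+       and item_id    = :item_id;
+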
+We distinguish here between "subjects", whose
+information is the primary source of the Assessment's responses, and
+"users", who are real OpenACS users able to log into the system.
+Subjects may be completing the Assessment themselves or may have
+completed some paper form that is being transcribed by staff people who
+are users. We thus account for both the "real" and one or more "proxy"
+respondents via this mechanism.
+
+Note that we assume that there is only one "real"
+respondent. Only one student can take a test for a grade. Even if
+multiple clinical staff enter data about a patient, all those values
+still pertain to that single patient.
+
+One final note: we denormalize several attributes in these entities
+--
+event_id, subject_id and staff_id. The reason for putting these foreign
+keys in each row of the "data" is to produce a "star topology" of fact
+tables and dimension tables. This will facilitate data retrieval and
+analysis. (Are there other dimension keys that we should include
+besides these?)
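+
+The payoff of this denormalization is that a hypothetical analysis
+query can constrain the fact table directly on the dimension keys,
+with no walk through the session hierarchy:
+
+    select d.item_id, d.numeric_answer
+      from as_item_data d
+     where d.subject_id = :subject_id
+       and d.event_id   = :event_id;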
+
+
+Synopsis of the Data-Collection Data Model
+
+Here's the schema for this subsystem:
+
+
+
+
+
+
+
+
+Specific Entities
+This section addresses the attributes of the most important entities
+in the data-collection data model -- principally the various
+design issues and choices we've made. We omit here literal SQL snippets
+since that's what the web interface to CVS is for. ;-)
+
+
+
+ - Assessment Sessions (as_sessions) are the top of the
+data-collection entity hierarchy. They provide the central definition
+of a given subject's performance of an Assessment. Attributes include:
+
+
+ - session_id
+
+ - assessment_id (note that this is actually a revision_id)
+
+ - subject_id - references a Subjects entity that we don't
+define in this package; presumably a table derived from Persons since
+we need to be able to deploy Assessments to individuals who aren't
+OpenACS users
+
+ - staff_id - references Users if someone is doing the
+Assessment as a proxy for the real subject
+
+ - event_id
+
+ - target_datetime - when the subject should do the Assessment
+
+ - creation_datetime - when the subject initiated the
+Assessment
+
+ - first_mod_datetime - when the subject first sent something
+back in
+
+ - last_mod_datetime - the most recent submission
+
+ - completed_datetime - when the final submission produced a
+complete Assessment
+
+ - signature_id - references optional digital signature in
+as_signatures
+
+ - session_status - tracks the FSM state
+
+ - ip_address
+
+
+
+
+
+ - Assessment Data (as_assessment_data) captures
+information about the Assessment (NB this may overlap with
+as_sessions). Attributes include:
+
+
+ - assessment_data_id
+
+ - session_id
+
+ - subject_id
+
+ - staff_id
+
+ - event_id
+
+ - assessment_id
+
+ - signature_id
+
+ - assessment_status
+
+
+
+
+
+ - Assessment Section Data (as_section_data) tracks the
+state of each Section in the Assessment. Attributes include:
+
+
+ - section_data_id
+
+ - session_id
+
+ - event_id
+
+ - subject_id
+
+ - staff_id
+
+ - section_id
+
+ - signature_id
+
+ - section_status
+
+
+
+
+
+ - Assessment Item Data (as_item_data) is the heart
+of the data collection piece. This is the "long skinny table" where all
+the primary data go -- everything other than "scale" data ie calculated
+scoring results derived from these primary responses from subjects.
+Attributes include:
+
+
+ - item_data_id
+
+ - session_id
+
+ - event_id
+
+ - subject_id
+
+ - staff_id
+
+ - item_id
+
+ - signature_id
+
+ - item_status
+
+ - is_unknown_p - defaults to "f" - important to clearly
+distinguish an Item value that is unanswered from a value that means
+"We've looked for this answer and it doesn't exist" or "I don't know
+the answer to this". Put another way, if none of the other "value"
+attributes in this table have values, did the subject just decline to
+answer it? Or is the "answer" actually this: "there is no answer". This
+attribute toggles that clearly when set to "t".
+
+ - choice_id_answer - references as_item_choices
+
+ - boolean_answer
+
+ - clob_answer
+
+ - numeric_answer
+
+ - integer_answer
+
+ - varchar_answer
+
+ - text_answer
+
+ - timestamp_answer
+
+ - content_answer - references cr_revisions
+
+ - attachment_answer - may be redundant with content_answer (?)
+
+ - attachment_file_type
+
+ - attachment_file_extension
+
+
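+A condensed DDL sketch of this skinny table (illustrative types; the
+remaining dimension keys and the attachment columns from the list
+above are elided):
+
+    create table as_item_data (
+        item_data_id     integer constraint as_item_data_pk primary key,
+        session_id       integer not null,  -- plus event_id, subject_id, ...
+        item_id          integer not null,
+        item_status      varchar(50),
+        is_unknown_p     char(1) default 'f',
+        choice_id_answer integer,           -- references as_item_choices
+        boolean_answer   char(1),
+        numeric_answer   numeric,
+        integer_answer   integer,
+        varchar_answer   varchar(4000),
+        text_answer      text,
+        timestamp_answer timestamp,
+        content_answer   integer            -- references cr_revisions
+    );
+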
+
+
+
+ - Assessment Scale Data (as_scale_data) captures derived
+data calculated from subjects' raw responses. Attributes include:
+
+
+ - scale_data_id
+
+ - session_id
+
+ - event_id
+
+ - subject_id
+
+ - staff_id
+
+ - scale_id
+
+ - assessment_id
+
+ - signature_id
+
+ - scale_score_status
+
+ - is_unknown_p
+
+ - numeric_value
+
+ - varchar_value - optional nominal or ordinal value
+
+
+
+
+
+ - Assessment Annotations (as_annotations) provides a
+flexible way to handle the variety of ways in which we need to
+"mark up" an Assessment. Subjects may modify a response they've already
+made and need to provide a reason for making that change. Teachers may
+want to attach a reply to a student's answer to a specific Item or make
+a global comment about the entire Assessment. This mechanism provides
+that flexibility, and is designed like the old general_comments system.
+An argument could be made to use a generic OpenACS Comments system but
+such doesn't exist now, and probably our uses here are specific enough
+that it makes sense to have our own. Attributes include:
+
+
+ - annotation_id
+
+ - on_what_table
+
+ - on_what_id
+
+ - title
+
+ - text
+
+ - content_id - references cr_revisions
+
+ - signature_id
+
+
+
+
+
+ - Assessment Signatures (as_signatures): abstracts the
+digital signatures mechanism from the data tables themselves.
+Attributes include:
+
+
+ - signature_id
+
+ - subject_id
+
+ - staff_id
+
+ - on_what_table
+
+ - on_what_id
+
+ - reason
+
+ - signature - a hash of the primary datum encrypted with the
+user's passcode
+
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/display_types.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/display_types.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/display_types.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,193 @@
+
+
+
+
+ As_Item Display Types
+
+
+
+Overview
+Displaying items to users poses a couple of challenges. First of all, the
+display of a single item can be different for each item_type (and even
+within a type). Second, the display of items within a section
+can be different from assessment to assessment. Last but not least, the
+whole assessment might be displayed differently depending on attributes
+and the type of assessment we are talking about.
+Note: please refer to the discussion of Items here.
+That discussion complements the discussion here, and the data model
+graphic pertaining to the Item Display Types system is available there
+also.
+
+
+Item Display Types
+Each item has an item_display_type object associated with it that
+defines how to display the item. Each item_display_type has a couple of
+attributes that can be passed to the formbuilder for the creation of
+the widget. Each widget has at least one item_display_type associated
+with it. In the long run I think this system has the potential to
+become a part of OpenACS itself (storing additional display information
+for each acs_object), but we are not there yet :). Obviously we are
+talking cr_item_types here as well.
+Each item_display_type has a couple of attributes in common.
+
+
+ - item_display_type_id
+
+ - item_type_name - name like "Select box, aligned right"
+
+ - presentation_type - the type of "widget" displayed when the
+Item is output in html. There are many types we should support beyond
+the stock html types.
+
+ - item_answer_alignment - the orientation between the
+"question part" of the Item (the item_text/item_subtext) and the
+"answer part" -- the native Item widget (eg the textbox) or the 1..n
+choices. Alternatives accommodate L->R and R->L alphabets (or is
+this handled automagically by Internationalization?) and include:
+
+ - beside_left - the "answers" are left of the "question"
+
+ - beside_right - the "answers" are right of the "question"
+
+ - below - the "answers" are below the "question"
+
+ - above - the "answers" are above the "question"
+
+
+
+ - html_display_options - field to specify other stuff like
+textarea dimensions ("rows=10 cols=50" eg)
+
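+A DDL sketch of these common columns (illustrative; the
+per-presentation_type attributes discussed next would be added through
+the CR type mechanism rather than as fixed columns):
+
+    create table as_item_display_types (
+        item_display_type_id  integer
+                              constraint as_item_disp_types_pk primary key,
+        item_type_name        varchar(255),
+        presentation_type     varchar(50),  -- 'textbox', 'radiobutton', ...
+        item_answer_alignment varchar(20)
+                              constraint as_item_disp_align_ck
+                              check (item_answer_alignment in
+                                     ('beside_left', 'beside_right',
+                                      'below', 'above')),
+        html_display_options  varchar(1000) -- e.g. 'rows=10 cols=50'
+    );
+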
+
+Depending on the presentation_type, additional
+attributes (presentation_type attributes)
+come into play (they are added as attributes to the CR item type) (mark:
+this is not feature complete. It really is up to the coder to decide
+what attributes each widget should have; below are only
+*suggestions*). Additionally we're not mentioning all HTML
+possibilities associated with each type (e.g. a textarea has width and
+height...).
+
+
+ - textbox - single-line typed entry
+
+ - abs_size - An abstraction of the real size value in
+"small","medium","large". Up to the developer how this translates.
+
+
+
+ - text area - multiple-line typed entry
+
+ - abs_size - An abstraction of the real size value in
+"small","medium","large". Up to the developer how this translates.
+
+
+
+ - radiobutton - single-choice multiple-option
+
+ - choice_orientation - the pattern by which 2..n Item Choices
+are
+laid out when displayed. Note that this isn't a purely stylistic issue
+better left to the .adp templates or css; the patterns have semantic
+implications that the Assessment author appropriately should control
+here. Note also that Items with no Choices (eg a simple textbox Item)
+has no choice_orientation, but handles the location of that textbox
+relative to the Item label by the item_alignment option (discussed
+below).
+
+ - horizontal - all Choices are in one line
+
+ - vertical - all Choices are in one column
+
+ - matrix_col-row - Choices are laid out in a matrix, filling
+first col then row
+
+ - matrix_row-col - Choices are laid out in a matrix, filling
+first row then col
+
+
+
+ - Button type - type of button to use
+
+
+
+ - checkbox - multiple-choice multiple-option
+
+ - choice_orientation (see above)
+
+
+
+ - select - single-choice multiple-option displayed in "popup menu"
+
+ - multiple-choice-other: Consider, for instance, a combo box
+that consists of a radiobutton plus a textbox -- used for instance when
+you need to check "other" and then fill in what that "other" datum is.
+In effect this is a single Item but it has two different forms: a
+radiobutton and a textbox.
+
+ - other_size: size of the other text field.
+
+ - other_label: label (instead of "other").
+
+ - display_type: What display type should be used for the
+multiple-choice part?
+
+
+
+ - pop-up_date - a widget with month-day-year select elements
+that resets the day element based on year and month (ie include Feb 29
+during leap years -- via Javascript) and tests for valid dates
+
+ - typed_date - similar to pop-up_date but month-day-year
+elements are textboxes for all-keyboard entry; needs no resetting
+scripts but does need date validity check
+
+ - image_map - requires a linked image; the image map coordinates
+are handled as Item Choices
+
+ - file_upload - present a File box (browse button, file_name
+textbox, and submit button together) so user can upload a file
+
+ - many more
+
+
+
+In addition, there are some potential presentation_types that actually
+seem to be better modeled as a Section of separate Items:
+
+
+
+ - ranking - a set of alternatives each need to be assigned an
+exclusive rank ("Indicate the order of US Presidents from bad to
+worse"). Is this one Item with multiple Item Choices? Actually, not,
+since each alternative has a value that must be separately stored (the
+tester would want to know that the testee ranked GWB last, for
+instance).
+
+ - ...
+
+
+Section display
+A section can be seen as a form with all the
+items within this section making up the form. Depending on the type of
+assessment we are talking about, the section can be displayed in
+various ways (examples):
+
+ - Normal survey view, with a description on top.
+
+ - Test view, which has certain restrictions on the display (e.g.
+not allowed to use the back button)
+
+ - Portlet view, only displaying the items of the section with a
+submit button
+
+
+Additionally each section has certain parameters that
+determine the look and feel of the section itself. Luckily it is not
+necessary to have differing attributes for the sections; therefore all
+these display attributes can be found with the section
+and assessment specification.
+
+
Index: openacs-4/packages/assessment/www/doc/grouping.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/grouping.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/grouping.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,326 @@
+
+
+
+
+ Assessment
+
+
+Here is a graphical overview of the subsystem in the Assessment
+package
+that organizes Items into Sections and whole Assessments:
+
+
+
+
+
+
+
+
+Review of Specific Entities
+
+
+
+ - Assessments (as_assessments) are the highest-level
+container in the hierarchical structure. They define the key by which
+all other entities are assembled into meaningful order during
+processing, retrieval and display of Assessment information.
+
+ The primary key assessment_id is a revision_id inherited from
+cr_revisions. Note, the CR provides two main types of entities --
+cr_items and cr_revisions. The latter are where sequential versions of
+the former go, while cr_items is where the "current" version of an
+entity can be stored, where unchanging elements of an entity are kept,
+or where data can be cached. This is particularly useful if the system
+needs a single "live" version, but it isn't appropriate in situations
+where all versions potentially are equally-important siblings. In the
+case of the Assessment package, it seems likely that in some
+applications, users would indeed want to designate a single "live"
+version, while in many others, they wouldn't. However, a given revision
+can be chosen in many easy ways other than looking at cr_items, while
+being forced to create and maintain appropriate state in cr_items when
+an application doesn't want it would be a major complication. Thus,
+using the cr_revisions part of the CR alone seems to be the most useful
+approach here. This decision pertains to all entities using the CR, but
+it is particularly important with Assessments since they are the key to
+all the rest of the entity hierarchies.
+
+ Attributes of Assessments will include those previously included
+in Surveys plus some others:
+
+
+
+ - assessment_id
+
+ - name - a formal title to use in page layouts etc
+
+ - short_name - a curt name appropriate for urls
+
+ - author
+
+ - definition - text that can appear in introductory web pages
+
+ - instructions - text that explains any specific steps the
+subject needs to follow
+
+ - scaled_p - whether some kind of scoring algorithm is defined
+(ie "grading" or other schemes)
+
+ - mode - whether this is a standalone assessment (like current
+surveys), or if it provides an "assessment service" to another OpenACS
+app, or a "web service" via SOAP etc
+
+ - validated_p - whether this is a formal "instrument" like an
+eponymous test (eg "Seattle Angina Questionnaire" or "MMPI" etc); this
+means that alterations to this Assessment are not allowed since changes
+would invalidate the Assessment
+
+ - enabled_p - released for actual use?
+
+ - editable_p - can anyone alter it further?
+
+ - template - references as_templates - it's not entirely clear
+how/why we want to use this
+
+
+
+Permissions / Scope: Control of reuse previously was through a
+shareable_p boolean. As with Items and Sections, we instead will use
+the acs permission system:
+
+
+ - Read: An assessment author (who is granted this permission)
+can reuse this assessment. (NB: Usually the original author has admin
+privileges.)
+
+ - Write: Author can reuse and change this assessment.
+
+ - Admin: Author can reuse, change and give permission on this
+assessment
+
+
+
+
+
+ - Sections (as_sections) represent a logically grouped
+set of Items that always need to be present or absent together in the
+Assessment. Sections thus divide at logical branch points. These branch
+points are configured during Assessment creation to determine movement
+among Sections based on one of various mechanisms: pre-set criteria
+specified by the Assessment author, or criteria based on user-submitted
+data up to the point of branching. Note that Items within a single
+Section may be presented one-by-one in different pages; pagination is
+thus related but not equivalent to Section definitions and in fact is
+an attribute of a Section. Well, more accurately, of a Section Display
+Type (see below). Attributes of Sections themselves include:
+
+
+ - section_id
+
+ - section_display_type_id - references
+as_section_display_types
+
+ - name - used for page display
+
+ - definition - text used for identification and selection in
+admin pages, not for end-user pages
+
+ - instructions - text displayed on user pages
+
+ - enabled_p - good to go?
+
+ - required_p - probably not as useful as per-Item required_p
+but maybe worth having here; what should it mean, though? All Items in
+a required section need to be required? At least one? Maybe this isn't
+really useful.
+
+ - content_value - references cr_revisions: for an image, audio
+file or video file
+
+ - numeric_value - optional "number of points" for section
+
+ - feedback_text - optional preset text to show user
+
+ - max_time_to_complete - optional max number of seconds to
+perform Section
+
+
+
+Permissions / Scope: Control of reuse previously was through a
+shareable_p boolean. As with Items and Assessments, we instead will use
+the acs permission system:
+
+
+ - Read: A section author (who is granted this permission) can
+reuse this section. (NB: Usually the original author has admin
+privileges.)
+
+ - Write: Author can reuse and change this section.
+
+ - Admin: Author can reuse, change and give permission on this
+section
+
+
+
+
+
+ - Section Display Types (as_section_display_types)
+define types of display for groups of Items. Examples are a
+"compound question" such as "What is your height" where the response
+needs to include a textbox for "feet" and one for "inches". Other
+examples are "grids" of radiobutton multiple-choice Items in which each
+row is another Item and each column is a shared radiobutton, with the
+labels for the radiobutton options only displayed at the top of the
+grid (see the SAQ for an
+illustration of this).
+
+ This entity is directly analogous in purpose and design to
+as_item_display_types.
+
+
+
+ - section_display_type_id
+
+ - section_type_name - name like "Vertical Column" or
+"Depth-first Grid" or "Combo Box"
+
+ - pagination_style - all-items; one-item-per-page; variable
+(get item groups from mapping table)
+
+ - branched_p - whether this Section defines a branch point (so
+that the navigation procs should look for the next step) or whether
+this Section simply transitions to the next Section in the sort_order
+(it may be better not to use this denormalization and instead always
+look into the Sequencing mechanism for navigation -- we're still fuzzy
+on this)
+
+ - item_orientation - the pattern by which 2..n Items are laid
+out when displayed. Note that this isn't a purely stylistic issue
+better left to the .adp templates or css; the patterns have semantic
+implications that the Assessment author appropriately should control
+here.
+
+
+ - horizontal - all Items are in one line
+
+ - vertical - all Items are in one column
+
+ - matrix_col-row - Items are laid out in a matrix, filling
+first col then row
+
+ - matrix_row-col - Items are laid out in a matrix, filling
+first row then col
+
+
+
+
+
+ - item_labels_as_headers_p - whether to display labels of
+the Items; if not, a "grid of radiobuttons" gets displayed. See
+discussion of Items and Item Choices here.
+There are contexts where a Section of Items all share the same Choices
+and should be laid out with the Items' item_subtexts as row headers and
+the radiobuttons (or checkboxes) only -- without their labels --
+displayed in a grid (see this
+example).
+
+
+ - presentation_type - may actually be superfluous...gotta
+think more about this, but there's at least one example:
+
+
+ - ranking - a set of alternatives each need to be assigned
+an
+exclusive rank ("Indicate the order of US Presidents from bad to
+worse"). Is this one Item with multiple Item Choices? Actually, not,
+since each alternative has a value that must be separately stored (the
+tester would want to know that the testee ranked GWB last, for
+instance).
+
+ - what others?
+
+
+
+
+
+ - item_alignment - the orientation between the "section
+description part" of the Section (if any) and the group of Items.
+Alternatives accommodate L->R and R->L alphabets (or is this
+handled automagically by Internationalization?) and include:
+
+
+ - beside_left - the Items are left of the "heading"
+
+ - beside_right - the Items are right of the "heading"
+
+ - below - the Items are below the "heading"
+
+ - above - the Items are above the "heading"
+
+
+
+
+
+ - display_options - field to specify other stuff like the grid
+dimensions ("rows=10 cols=50" eg)
+
+
+
+
+
+
+ - Item Section map (as_item_section_map) maps 1..n Items to a
+Section, caches display code, and contains optional overrides for
+Section and Item attributes:
+
+ - item_id
+
+ - section_id
+
+ - enabled_p
+
+ - required_p - whether Item must be answered
+
+ - item_default
+
+ - content_value - references CR
+
+ - numeric_value - where optionally the "points" for the Item
+can be stored
+
+ - feedback_text
+
+ - max_time_to_complete
+
+ - adp_chunk - display code
+
+ - sort_order
+
+
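+The override semantics described above (and on the Items page) boil
+down to a coalesce at retrieval time; a sketch:
+
+    -- per-Section values win where set, else the Item's own defaults apply
+    select i.item_id,
+           coalesce(m.required_p,    i.required_p)    as required_p,
+           coalesce(m.item_default,  i.item_default)  as item_default,
+           coalesce(m.feedback_text, i.feedback_text) as feedback_text
+      from as_items i, as_item_section_map m
+     where m.section_id = :section_id
+       and i.item_id    = m.item_id
+     order by m.sort_order;
+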
+
+
+
+ - Section Assessment Map (as_assessment_section_map) basically is
+a standard map, though we can override a few Section attributes here if
+desired:
+
+
+ - assessment_id
+
+ - section_id
+
+ - feedback_text
+
+ - max_time_to_complete
+
+ - sort_order
+
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/index.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/index.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/index.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,233 @@
+
+
+
+
+ Assessment Overview
+
+
+Introduction
+The Assessment Package unites the work and needs of various members
+of the OpenACS community for data collection functionality within the
+OpenACS framework. We're using the term "Assessment" instead of
+"Survey" or "Questionnaire" (or "Case Report Form" aka CRF, the term
+used in clinical trials) because it is a term used by IMS and because
+it connotes the more generic nature of the data collection system we're
+focusing on.
+There has been considerable recent interest in expanding the
+capabilities of generic data collection packages within OpenACS.
+Identified applications include:
+
+
+ - Educational settings. The dotLRN project has updated the
+Simple-Survey package to the Survey
+package now in the current distribution. A number of groups in the
+OpenACS community are interested in adding capabilities defined in the IMS Global Learning
+Consortium's specs for Question
+and Test Interoperability and Simple Sequencing.
+
+
+ - Clinical research settings. The
+Epimetrics Group
+has created an enhanced version of the Simple-Survey package that adds
+a variety of scoring and scheduling tools for use in health-related
+quality-of-life assessments. This Questionnaire package has not
+been ported to OpenACS 4.x yet, however, and it also lacks a wide
+variety of other features that are necessary for use in formal clinical
+trial data collection applications, certainly for those that intend to
+create data sets acceptable for new drug applications to the US Food
+and Drug Administration and equivalent European regulatory agencies.
+
+ Of note, there are large and well-funded vendors of clinical
+trials data management systems: Phase Forward,
+Outcome Sciences, and PHT Corporation,
+among others. A standards body called CDISC
+(Clinical Data Interchange Standards Consortium) formed a few years ago
+and is developing data models for clinical trials data derived from
+schema contributed primarily by Phase Forward and PHT. These vendors
+provide "electronic data capture" (EDC) services at considerable cost
+-- an 18-month study of 2500 patients including about 500 data elements
+costs nearly $500,000. There is clearly interest and opportunity to
+craft systems that bring such costs "in house" for organizations doing
+clinical research.
+
+ -
+
+ Data collection services for other OpenACS packages. Most other
+OpenACS packages invoke some form of data collection from users. While
+developments such as ad_form
+and the templating system in OpenACS 4.x ease the construction of data
+collection forms, it may be possible to expose a focused data
+collection package via acs_service_contract mechanisms to other
+packages. In particular, incorporating Workflow and a new data
+collection package would be key to creation of new vertical-application
+tools like dotWRK. Such integration would also be immensely useful for
+a clinical trials management toolkit.
+
+
+
+
+Historical Considerations (Work Done So Far)
+
+Several OpenACS efforts form the context for any future work. These
+include:
+
+
+
+ - Survey. This package (largely written/revised by Dave Bauer) doesn't currently have any
+documentation in the documentation section of the OpenACS.org site,
+but it is in any current OpenACS installation at /doc/survey/. Dave has
+added internationalization capabilities (in the version of Survey in
+CVS HEAD) and cleaned up the administrative UIs very nicely. This
+package was thoroughly debugged prior to the 4.6.1 release. It supports
+simple one-section surveys, though the data model has as-yet
+unimplemented provisions for multiple sections within a survey.
+
+
+ - Exam. This package (written by Ernie Ghiglione and Malte Sussdorff) is currently an Oracle-only tool
+with capabilities not much different from Survey.
+
+
+ - Surveys. This package was written a while ago by
+Buddy Dennis, and the source code package has dropped from view.
+However, we've posted it here. Presumably this package has been further
+developed, since it appears to be in use at the iQ&A
+site, though current source doesn't appear to be available there.
+Surveys included several important enhancements to the data model:
+
+ - Conditional branching within a survey (though how well
+worked out this is remains unclear)
+
+ - "Folder" based repositories of questions and sections
+
+
+
+However, Surveys has some important limitations:
+
+
+ - Surveys are "published" as static HTML files which are
+served out to users when they complete the survey
+
+ - The package doesn't use a templating system
+
+ - Oracle-only
+
+
+ Still, this package adopts some naming conventions consistent
+with
+the IMS spec and definitely represents the closest effort to a "complex
+survey" done to date.
+
+
+
+ - "Complex Survey". This is the descendant of
+"Survey" and Buddy's "Surveys" written by Malte Sussdorf. It currently
+is in the /contrib branch of the OpenACS 5 distro and represents the
+currently most advanced package for OpenACS 5+. If you want to start
+looking at surveys in OpenACS right now, this is the package to get. It
+incorporates a number of the features of Surveys. We discuss it in
+greater detail here.
+
+
+
+ - Questionnaire. This is a 3.2.5 module developed at The Epimetrics Group in order to
+support complex scoring of a particular type of clinical measure. (You
+can see a demo of this here,
+and if you register at the site and join the Bay Area OpenACS Users
+Group, you can play with the intuitive administrative pages for
+creating and editing questionnaires, defining scoring mechanisms,
+setting up user scheduling and reminder features, and configuring
+results reporting/graphing capabilities.) This module runs within
+OpenACS 3.2.5, though, and will need a substantial rewrite to work
+within the new 5.x infrastructure.
+
+
+ - Simple-survey. This package remains in the OpenACS distribution
+but it is now obsolete, supplanted by Survey.
+
+
+
+Competitive Analysis
+The number of competing products in this area is *huge*. Starting with
+the usual suspects, Blackboard and WebCT, you can go on to clinical trial
+software like Oracle Clinical or specialised survey systems. When
+writing the specifications we tried to incorporate as many ideas as
+possible from the various systems we looked at and to use that
+experience. A detailed analysis would be too much for the moment.
+Functional Requirements
+An overview of the functional requirements can be found here. Reading
+it first is highly encouraged, as it contains the use cases along with
+a global overview of the functionality contained within assessment.
+Additional requirements can be found in the specific pages for the
+user interface.
+Design Tradeoffs
+The assessment system has been designed with great flexibility and
+reuse of existing functionality in mind. This might result in greater
+complexity for simple uses (e.g. a plain poll system on its own will
+perform better than running a poll through Assessment), but it
+provides the chance to maintain one code base for all these separate
+modules.
+API
+The API will be defined during the development phase.
+Data model
+The data model is described in detail in the design descriptions.
+User Interface
+The UI for Assessment divides into a number of primary functional
+areas, as diagrammed below. These include:
+
+ - the "Home" area (for lack of a better term). These are the main
+index pages for the user and admin sections
+ - Assessment Authoring: all the pages involved in creating,
+editing, and deleting the Assessments themselves; these are all admin
+pages
+ - Assessment Delivery: all the pages involved in
+deploying a given Assessment to users for completion, processing those
+results, etc; these are user pages
+ - Assessment Review: all the pages involved in selecting
+data extracts and displaying them in whatever formats are indicated; this
+includes "grading" of an Assessment -- a special case of data review;
+these are admin pages, though there also needs to be some access to
+data displays for general users as well (eg for anonymous surveys etc).
+Also, this is where mechanisms that return information to "client"
+packages that embed an Assessment would run.
+ - Session Management: pages that set up the timing and
+other "policies" of an Assessment. This area needs to interact with the
+next one in some fashion, though exactly how this occurs needs to be
+further thought through, depending on where the Site Management
+mechanisms reside.
+ - Site Management: pages involved in setting up who
+does Assessments. These are admin pages and actually fall outside the
+Assessment package per se. How dotLRN wants to interact with Assessment
+is probably going to be different from how a Clinical Trials Management
+(CTM) system would. But we include this in our diagram as a placeholder.
+
+More information can be found at the Page Flow
+page.
+Authors
+The specifications for the assessment system have been written by Stan
+Kaufmann and Malte Sussdorff with help from numerous people within and
+outside the OpenACS community.
+
+
+
Index: openacs-4/packages/assessment/www/doc/item_types.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/item_types.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/item_types.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,369 @@
+
+
+
+
+ AS_item Types
+
+
+Overview
+This is a list of the item types and their attributes that we want to support.
+At a later stage we are going to add the checks for each item_type to
+this page as well. This does not mean we are going to create all of
+them in the first shot. The attributes are *ONLY* those which are not
+already part of as_items and therefore should be dealt with in
+as_item_type_attributes (see Item Data Model
+for reference).
+
+Specific Item Types
+
+
+
+ - Open Question
+
+ Open questions are text input questions for free text. For obvious
+reasons they cannot be auto-corrected. The difference between an "Open
+Question" and a "Short Answer" Item is that Open Questions accept
+alphanumeric data from a user and only undergo manual "grading" by an
+admin user through comparison with "correct" values configured during
+Assessment authoring. Open Questions can either be short (textbox) or
+long (text area) elements in the html form. Here are several
+configuration options the authoring environment will support (in
+addition to many others, such as alignment/orientation, required/not
+required, etc etc):
+
+
+
+ - Size of the reply box: Radio buttons to set size of textbox:
+small/medium/large; or text area
+
+ - Prefilled Answer Box: richtext widget. The content of this
+field will be prefilled in the response of the user taking the survey
+-> stored as item_default
+
+ - Correct Answer Box: richtext widget. The person correcting
+the answers will see the contents of this box as correct answer for
+comparison with the user response. -> stored as feedback_text
+
+ - [NTH nice to have?]: Button to add predefined comments next
+to the correct answer box. This would be shown to the manual corrector
+to quickly choose from when manually scoring the answer. What
+kind of comments would these be? Should they be categorized entries in
+the "message" system that admin users would populate over time, that
+would be stuck into the authoring UI dynamically during Assessment
+creation?
+
+
+
+
+
+ - Short Answer Item:
+
+ Short Answer Items allow the user to give a short answer in an answer
+box, which can be automatically corrected. The question author can define
+what values are valid in each answer box and use various compare
+functions to compare the output. The creation of a short answer
+question will trigger entries into the as_item check tables. It also
+enables us to have Fill-in-the-blank
+items. In addition to supporting automated validation/grading, this
+item type differs from "Open Questions" in that only textboxes are
+supported -- meaning short answers, no text area essays.
+
+
+ - Number of Answerboxes: Integer Selectbox. This will control
+how many answer boxes the respondent will see. I
+don't agree here; there needs to be one answer box per Item; if a
+"question" involves more than one answer box, it is actually a group
+of Items that needs to be implemented as a Section of Items. So this
+selector needs to be in the section-edit UI, not item-edit.
+
+ - Upper/Lowercase: Radio boolean. This controls whether we
+treat the response as case sensitive when comparing it to the correct
+answers.
+
+ - The question author has the option to define multiple possible
+correct answers that will be matched with the response of the user in
+various ways. For each of the possible answers the following fields are
+given (among others):
+
+ - Answer: short_text. This contains the answer string that
+will be matched against the response
+
+ - Data type: integer vs real number
+
+ - Value in %: short integer: how many percentage points a
+match will be awarded.
+
+ - Size: Integer Select: size of the input box (small,
+medium, large)
+ - Compare
+by: Select (equal, contains, regexp). This defines how the comparison
+between the answer string and the response shall happen.
+ - Optional Lower bound value; lower bound comparator
+
+ - Optional upper bound value; upper bound comparator
+
+ - Allow in answerbox: (multiple select box with "All" and
+the
+numbers from 1 to x where x is the number of answerboxes from above.
+For sure this only works with JS enabled :)). Defines the answerboxes
+the user can fill out that shall be matched with this answer. I don't
+follow this exactly, but I think the comments about Sections of Items
+above apply here also.
+
+
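+A sketch of how the three comparators (equal, contains, regexp) might
+translate to SQL when scoring a response (PostgreSQL regexp operator;
+the answers table and its columns are hypothetical names for the
+as_item check data):
+
+    select a.value_percent
+      from as_item_answers a
+     where a.item_id = :item_id
+       and case a.compare_by
+             when 'equal'    then :response = a.answer
+             when 'contains' then position(a.answer in :response) > 0
+             when 'regexp'   then :response ~ a.answer
+           end;
+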
+
+
+
+
+
+ - Matching Item:
+
+ Matching questions are useful for matching some items on the left with
+pull-down menus on the right hand side of a survey. The number of the
+items is identical to the number of items on the right hand side. This
+also appears to be a Section of Items; each Item consists of a single
+"phrase" against which it is to be associated with one of a set of
+potential choices (displayed via a select widget; could be radiobutton
+though too). If there are several such matchings (three phrases
+<-> three items in the popup select) then this is a Section with
+three Items. The UI for this needs to be in section-edit, not item-edit.
+
+
+ - Settings:
+
+ - Distribution of points: boolean (all or nothing /
+increasing). All
+or nothing will give either 100%, if all correct answers are given, or
+else 0%. Increasing will give (number of correct matches / number of
+total matches) * 100% points.
+
+ - Allow negative: boolean (yes/no). This will allow a
+negative percentage as well (as the total result).
+
+
+ - A couple of match entries will be presented below the
+settings. Each one will consist of:
+
+ - Match item: This is the item which will be displayed on
+the left side of the question presented to the respondent.
+
+ - Matched item: This is the correct item from the select
+box on
+the right side. For each match item on the left there will be a select
+box on the right with ALL the matched items (when taking the survey,
+that is...)
+
+
+
+ - In addition to submit, there is another button to allow
+further answers to be filled in. Typed-in values shall be remembered
+and 4 more answerboxes be shown.
+
+
+
+
+
+ - File upload item:
+
+ A file upload question will allow the respondent to upload a
+file. No additional attributes beyond the usual ones for every question.
+
+
+
+ - Multiple Choice items:
+
+ Multiple Choice questions
+allow the respondent to choose from multiple alternatives with the
+possibility to answer more than one at a time. This will also deal with
+True/False and Multiple response items.
+
+
+ - Settings:
+
+ - Allow Multiple: boolean (yes/no). This will determine if
+the
+respondent has the option to choose multiple possible answers for his
+response.
+
+ - Select Box: boolean (yes/no). Will display a select box
+or radio/checkbox otherwise.
+
+ - Distribution of points: boolean (all or nothing /
+increasing).
+All or nothing will give either 100%, if all correct answers are given,
+or else 0%. Increasing will give (number of correct matches / number of
+total matches) * 100% points.
+
+ - Allow negative: boolean (yes/no). This will allow a
+negative percentage as well (as the total result).
+
+For each (possible) answer we have a couple of fields
+(as_item_choices):
+
+ - Correct answer: boolean, radio with graphic (red x, green
+y) (yes/no). This marks if the current answer is a correct one.
+ - Answer: Richtext widget. Need option to associate
+both/either a numeric value and a text value to each choice.
+
+ - Value: percentage value this answer gives to the
+respondee -- this is different from the "answer"
+
+ - Reply: richtext widget. This is a reply the student can
+see at
+the end of the test giving him some more information on the question he
+choose.
+
+ - CR_Item: cr_item. If we have an image it will be shown
+instead of the answer text. If we have a sound item, we will generate
+audio includes.
+
+
+
+ - In addition to Submit, there is a second button that lets the
+user fill in further answers: typed-in values are remembered and 4
+more answerboxes are shown.
+
+ - [Additional Feature]: Possibility to randomly choose from the
+options. This would add a couple of fields:
+
+ - To each answer -- Fixed position: select box choosing a
+mandatory position at which the option is displayed (e.g. "none of
+the above" always last).
+
+ - Number of correct answers: integer defining how many correct
+options have to be displayed. Check that enough correct answers have
+been defined.
+
+ - Number of answers: integer defining how many options shall be
+displayed in total (correct and incorrect). Check that enough answers
+are available.
+
+ - Display of options: numerical, alphabetic, randomized or by
+order of entry.
+
+ - All radio-button Items must have a "clear" button that unsets
+all the radio buttons for the Item.
+(For that matter, every Section and every Assessment also must have
+"clear" buttons. Fairly trivial with Javascript.)
+
+
+
+
+Note that one special type of "multiple choice" question consists of
+choices that are created by a database select. For instance, a
+question like "Indicate your state" will have a select widget that
+displays all state names obtained from the states table in OpenACS,
+as in the sketch below.
+
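+A minimal sketch of such a dynamically generated choice list; the
+us_states table ships with OpenACS reference data, though the exact
+column names here should be treated as assumptions:
+
+    -- build the choice list for "Indicate your state" at display time
+    select state_name, abbrev
+      from us_states
+     order by state_name;
+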
+
+ - Rank question:
+
Rank questions ask for the answers to be ranked. This
+appears to me to be a special case of the "matching question" in which
+the select options are ordinal values, not arbitrary strings.
+
+
+ - Rank Type: boolean (alphabetic, numeric). Shall the rank run
+from a to z or from 1 to n?
+
+ - Only unique rank: boolean (yes/no). Shall the ranking only
+allow unique ranks (like 1,2,3,5,6 instead of 1,2,2,4,5)? See the
+sketch after this list.
+
+ - Straight order: boolean (alphabetic, numeric). Shall the rank
+be in strict consecutive order, or is it allowed to skip values
+(1,2,3 vs. 1,3,4)?
+
+ - For each answer we ask the following questions:
+
+ - Answer: Richtext widget.
+
+ - Rank: correct rank
+
+
+
+ - In addition to Submit, there is a second button that lets the
+user fill in further answers: typed-in values are remembered and 4
+more answerboxes are shown.
+
+
+
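+A minimal sketch of the "only unique rank" check; it assumes,
+hypothetically, that each submitted rank lands in one as_item_data
+row with a numeric "rank" column:
+
+    -- 't' if the respondent assigned no rank twice in this session
+    select count(*) = count(distinct rank) as unique_p
+      from as_item_data
+     where session_id = :session_id
+       and item_id = :item_id;
+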
+
+
+ - Matrix table (blocked questions):
+
+The idea here is a "question" consisting of a group of questions. We
+include it here because, to many users, this does appear to be a
+"single" question. However, it is more appropriately recognized as a
+"section", because it is a group of questions, a response to each of
+which will need to be separately stored by the system. Further, this
+is in fact a display option for the section that could reasonably be
+used for any Item Type. For instance, there are situations where an
+Assessment author may want to group a set of selects, or
+radiobuttons, or small textboxes, etc.
+
+
+
+ - Composite matrix-based multiple response item:
+
+Same as the matrix table, but different choices are displayed in each
+column.
+
+
+
+ - Composite multiple choice with Fill-in-Blank item:
+
+A Multiple Choice question with an additional short_text input field,
+usually used for the "Other (please specify)" case.
+
+
+
+ - Calculation:
+
+This type of question will not be supported. But we should make sure
+we can take care of it while importing data from WebCT; therefore we
+have to know its attribute values. At a later stage we will add more
+information on this.
+
+
+ - Formula: string
+
+ - Units
+
+ - Value (in %): integer
+
+ - Required (boolean)
+
+ - Ignore Space (boolean)
+
+ - Ignore spell checking (boolean)
+
+ - General Feedback: richtext
+
+
+
+
+Only site wide admins will get to see the following question types:
+
+ - Database question:
+
+The answer to this question will be stored in the database. The
+concept here is to support bidirectional interchange of data between
+Assessment package tables and other package tables. Thus, while
+assembling an Assessment to send to a user, data may be pulled from
+some other table (eg users) to populate the Assessment form. And
+similarly, when the user submits the Assessment form, response data
+will be stored not only in Assessment entities (eg as_item_data) but
+also back in the other table (eg users); see the sketch below. The
+question has the following additional fields:
+
+
+ - Table Name: short_string. The name of the table that is being
+used for storing the responses.
+
+ - Column: short_string. The column of that table used for
+storing the responses.
+
+ - Key Column: short_string. The column of the table that
+matches the user_id of the respondent.
+
+
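+A minimal sketch of the write-back such a question implies, using the
+three fields above; the persons table stands in here as an example
+target:
+
+    -- Table Name = 'persons', Column = 'first_names',
+    -- Key Column = 'person_id'
+    update persons
+       set first_names = :response_value
+     where person_id = :user_id;
+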
+
+
+
Index: openacs-4/packages/assessment/www/doc/page_flow.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/page_flow.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/page_flow.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,89 @@
+
+
+
+
+ Page Flow
+
+
+Overview
+
+Through the OpenACS templating system, the UI look&feel will be
+modifiable by specific sites, so we needn't address page layout and
+graphical design issues here. Other than to mention that the Assessment
+package will use these OpenACS standards:
+
+
+ - "trail of breadcrumb" navigational links
+
+ - context-aware (via user identity => permissions) menu
+options (whether those "menus" are literally menus or some other
+interface widget like toolbars)
+
+ - in-place, within-form user feedback (eg error messages about a
+form field directly next to that field, not in an "error page")
+
+
+Furthermore, the set of necessary pages for Assessment are not all
+that dissimilar to the set required by any other OpenACS package. We
+need to be able to create, edit and delete all the constituent entities
+in the Package. The boundary between the pages belonging specifically
+to Assessment and those belonging to "calling" packages (eg dotLRN,
+clinical trials packages, financial management packages, etc etc) will
+necessarily be somewhat blurred.
+
+
+Proposed Page Flow
+Nevertheless, here is a proposed set of pages along with very brief
+descriptions of what happens in each. This organization is derived
+mostly from the existing Questionnaire module, which can be examined
+in the Bay Area OpenACS Users Group (add yourself to the group and
+have a look).
+
+The UI for Assessment divides into a number of primary functional
+areas, as diagrammed below. These include:
+
+
+
+ - the "Home" area (for lack of a better term). These are the main
+index pages for the user and admin sections
+
+ - Assessment Authoring: all the pages involved in creating,
+editing, and deleting the Assessments themselves; these are all admin
+pages
+
+ - Assessment Delivery: all the pages involved in
+deploying a given Assessment to users for completion, processing those
+results, etc; these are user pages
+
+ - Assessment Review: all the pages involved in selecting
+data extracts and displaying them in whatever formats are indicated;
+this includes "grading" of an Assessment -- a special case of data
+review. These are admin pages, though there also needs to be some
+access to data displays for general users as well (eg for anonymous
+surveys). Also, this is where mechanisms that return information to
+"client" packages that embed an Assessment would run.
+
+ - Session Management: pages that set up the timing and
+other "policies" of an Assessment. This area needs to interact with the
+next one in some fashion, though exactly how this occurs needs to be
+further thought through, depending on where the Site Management
+mechanisms reside.
+
+ - Site Management: pages involved in setting up who
+does Assessments. These are admin pages and actually fall outside the
+Assessment package per se. How dotLRN wants to interact with
+Assessment is probably going to be different from how a Clinical
+Trials Management (CTM) system would. But we include this in our
+diagram as a placeholder.
+
+
+
+So this is how we currently anticipate this would all interrelate:
+
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/policies.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/policies.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/policies.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,125 @@
+
+
+
+
+ Policies and Events
+
+
+
+
+Policies and Events
+
+
+
+
+ - Assessment-Policies (as_assessment_policies) abstract out
+from Assessments a variety of attributes that describe deployment
+particulars. This allows multiple users of an Assessment to define,
+eg, their own beginning and ending dates (a schema sketch follows
+this list):
+
+
+ - policy_id
+
+ - policy_name
+
+ - start_date
+
+ - end_date
+
+ - anonymous_p - whether anonymous subjects are allowed
+
+ - repetition_interval_granularity - minutes, hours, days
+
+ - repetition_interval - an integer that (along with the
+granularity) defines the minimum interval between sequential
+assessments by a subject; an interval of zero means that only a single
+time through is allowed
+
+ - editable_p - whether user can alter submitted responses
+
+ - max_edits_allowed - optional max number of times subject can
+change responses
+
+ - max_time_to_complete - optional max number of seconds to
+perform Assessment
+
+ - interruptable_p - whether user can "save&resume" session
+
+ - data_entry_mode - (presumes that the necessary UI output
+procs are implemented in the APIs) to produce different deployment
+formats: standard web page, handheld gizmo, kiosk "one question at a
+time", AVR over phone, etc etc
+
+ - consent_required_p - whether subjects must give formal
+consent before doing Assessment
+
+ - consent - optional text to which the subject needs to agree
+before doing the Assessment (this may be more appropriate to abstract
+to Assessment-Events)
+
+ - logo - optional graphic that can appear on each page
+
+ - electronic_signature_p - whether subject must check
+"attestation box" and provide password to "sign"
+
+ - digital_signature_p - whether in addition to the electronic
+signature, the response must be hashed and encrypted
+
+ - shareable_p - whether Policy is shareable; defaults to 't'
+since this is the whole intent of this "repository" approach, but
+authors should have the option to prevent reuse
+
+ - feedback_text - where optionally some preset feedback can be
+specified by the author
+
+ - double_entry_p - do two staff need to enter data before it's
+accepted?
+
+ - require_annotations_with_rev_p - is an annotation required
+if a user modifies a submitted response?
+
+
+
+
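+A minimal DDL sketch of this table; only a few of the attributes
+above are shown, and the column types are assumptions rather than
+settled design:
+
+    create table as_assessment_policies (
+        policy_id             integer
+                              constraint as_policies_pk primary key,
+        policy_name           varchar(100),
+        start_date            timestamptz,
+        end_date              timestamptz,
+        anonymous_p           char(1) default 'f',
+        repetition_interval   integer,
+        editable_p            char(1) default 't',
+        max_time_to_complete  integer  -- seconds
+    );
+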
+
+ - Assessment Events (as_assessment_events) define a
+planned, scheduled or intended "data collection event". They abstract
+out from Assessment Policies the details that define specific
+instances of an Assessment's deployment (a schema sketch follows this
+list). Attributes include:
+
+
+ - event_id
+
+ - name
+
+ - description
+
+ - instructions
+
+ - target_days_post_enroll - an interval after the "enrollment"
+date which could be the time a subject is enrolled in a trial or the
+beginning of a term
+
+ - optimal_days_pre - along with the next attribute, defines a
+range of dates when the Assessment should be performed (if zero, then
+the date must be exact)
+
+ - optimal_days_post
+
+ - required_days_pre - as above, only the range within which
+the Assessment must be performed
+
+ - required_days_post
+
+
+
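+A matching minimal sketch for the events table; again the types and
+the representation of intervals (days as integers) are assumptions
+for illustration:
+
+    create table as_assessment_events (
+        event_id                 integer
+                                 constraint as_events_pk primary key,
+        name                     varchar(100),
+        description              varchar(4000),
+        target_days_post_enroll  integer,
+        optimal_days_pre         integer,
+        optimal_days_post        integer,
+        required_days_pre        integer,
+        required_days_post       integer
+    );
+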
+
+
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/requirements.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/requirements.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/requirements.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,666 @@
+
+
+
+
+ Assessment functional requirements
+
+
+Introduction
+The assessment module provides OpenACS with capabilities to conduct
+surveys, tests and dynamic information gathering in general, as can be
+seen in the use cases.
+Vision Statement
+The motivation behind the Assessment package is to extend the
+functionality of the existing Survey package in both depth and breadth:
+
+
+ - more question formats, user response filtering and processing,
+versioning, import/export capabilities for standards-based exchange
+with non-OpenACS systems, etc.
+
+ - mechanisms to embed Assessment capabilities within other
+OpenACS packages and to assemble larger systems of packages within
+which Assessment is one component (eg dotLRN, clinical trials
+management systems, etc)
+
+
+The current Survey package is a very capable piece of engineering
+that provides stand-alone data collection functions. It is
+subsite-aware and has been integrated to some extent with portlets. It
+also is just being integrated into user registration processes. These
+efforts point the path down which the Assessment package intends to
+proceed to its logical conclusion.
+
+Development efforts for Assessment thus involve two tracks:
+
+
+
+ - refinement and extension of the data model and UIs from Survey
+(and its sibling forks) to support a variety of expanded user
+requirements
+
+ - incorporation of hooks (of various sorts, such as Service
+Contracts) to integrate Assessment with OpenACS subsystems: Content
+Repository, Workflow, Notifications, Internationalization, etc
+
+
+The measure of success of the Assessment package is the ease with
+which it can rapidly be deployed into some high-profile
+implementations, notably dotLRN and a clinical trials management system
+under development.
+
+Use Cases
+The assessment module in its simplest form is a dynamic information
+gathering tool. This can be clearly seen in the first group of use
+cases, which deal with surveys (one form of assessment, eg for
+quality assurance or clinical trials). An extension of this
+information gathering is the possibility to conduct an evaluation of
+the information given, as we show in the second group of use cases
+(testing scenarios). Last but not least, the assessment tool should
+be able to provide its information gathering features to other
+packages within the OpenACS framework as well.
+It is very important to note that not all parameters and features
+mentioned in these use cases should be displayed to the user at all
+times. Depending on the use case, a good guess with predetermined
+parameters should be made for the user (eg no need to let the user
+fill out correct answers to questions if the question is not used in
+a test). Some use cases like elections require special parameters not
+necessary anywhere else (like the counting system).
+
+Survey scenario
+The survey scenarios are the
+basic use cases for the use of the assessment system.
+Simple survey
+An editor wants to conduct surveys on his site. For this purpose he
+creates questions, which are stored in a question catalogue. From
+this catalogue the editor chooses the questions he wants to use in
+his current survey, along with the style in which the survey should
+be presented to the user. Once satisfied he can make the survey
+public or test it first. Once the survey is public, subjects (users)
+of the site can take it by filling out the generated form with all
+the questions the author added.
+Quality Assurance
+A company wants to get feedback from users about its product. It
+creates a survey which offers branching (to prevent users from
+filling out unnecessary data, eg if you answered that you have never
+been to Europe, the question "Have you seen Rome?" should not show
+up) and multi-dimensional Likert scales (to ask for the quality and
+importance of a part of the product in conjunction).
+Professional data entry
+A clinic wants to conduct a trial. For this, research assistants are
+asked to interview the patients and store the answers in the
+assessment on behalf of the client. To meet FDA requirements it is
+mandatory to prove exactly who created any datum, when, whether it is
+a correct value, and whether anyone has looked at it or edited it and
+when, along with other audit trails. As mistakes might happen, it is
+important that the system runs checks on the plausibility and
+validity of the entered data (an area code should be five digits; if
+the age of the patient is below 10, no need to ask for credit card
+information, ...).
+University survey
+A Professor wants to create a test by searching through the question
+database and selecting old questions. He searches the database for a
+specific keyword or browses by category. The system presents him with
+all questions that have the keyword and/or category. The Professor is
+able to preview every question and may then decide which questions he
+will transfer into the survey.
+Internal Evaluation
+An institution wants to survey students to compare the quality of
+specific courses, teachers, or other factors affecting the quality of
+their education and level of happiness.
+It should be possible for the person who takes the survey to submit
+it anonymously and only be able to take it once.
+It should also be possible to show the results of a survey to a group
+of users (eg a specific department being evaluated). The results
+should be displayable in a way that gives a department a ranking
+compared with other departments.
+
+Reuse of questions
+The author of a multiple choice question decides that the provided
+answers are not good for differentiating the knowledge of the
+subjects, and changes some of them. All editors using this question
+should be informed and asked whether they want to use the changed
+version or the original one. If the decision is made to switch, it
+has to be guaranteed that a distinction between subjects that
+answered the original and the new version is kept. In addition the
+editor should be able to inform all subjects that have already taken
+the question that it has changed (and that they might (have to)
+re-answer).
+Multiple languages
+The quality assurance team of the company mentioned above realizes
+that the majority of its user base are not native English speakers.
+This is why they want to add translations of the questions to broaden
+the response base. For consistency, the assessment may only be shown
+to the subject if all questions used have been translated.
+Furthermore it is necessary to store the language used along with the
+response (as a translation might not be as good as the original).
+The poll
+An editor wants to conduct a poll on the site with immediate
+publication of the result, to get a feeling for how users like the
+new design of the website. The result can be displayed in an
+includelet (see below for details) on any page the editor wants.
+The election
+The OpenACS community wants to conduct a new
+election on the OCT. On creation the names of the contestants have to
+be available along with a list of all users allowed to vote. Depending
+on the election system, the users have one or multiple votes (ranked or
+not), which are calculated in a certain way. Once the election is over
+the result is published.
+Collective Meeting planning
+The sailing club needs to find a meeting time that allows all
+skippers to attend. Given a number of predefined choices, each
+skipper can give his/her preference for the time slots. The slot with
+the highest approval wins, is automatically entered into the calendar
+of all skippers, and a notification is sent out.
+Testing scenario
+Especially in the university environment it
+is important to be able to conduct tests. These help the students to
+prepare for exams but also allow Professors to conduct exams. In
+addition to the data collection done in a survey scenario testing adds
+checks and instant evaluation to assessment.
+Proctored Exam
+A Professor wants to have a proctored test in a computer room. He
+wants to create the test using questions that he has added and others
+already in the database. The only people allowed to take the test are
+the people who have actually shown up in the room (eg restricting the
+exam to a specific IP subnet, and/or an exam password, given to the
+students in the room at the time of the test, that grants them access
+to the exam). Additional security measures include:
+
+ - Students have to submit the survey signed with their PGP key
+(which has been verified by the university) at the end.
+ - Students have to print out their test and sign every page to
+make sure the answers in the system are identical to the ones the
+student has given.
+ - In a purely multiple choice environment, the Test might be
+printed out on a sheet of paper for each user along with a return sheet
+which needs the answers to be ticked off. A scanner system scans this
+return sheet and stores the data for the student in the system.
+
+The Mistake
+A Professor has created a test from the question pool and has
+administered the exam to a group of students. The test has already
+been taken by some of his students. He discovers that the answer to
+one of the questions is not correct. He modifies the test and should
+be given the option to change the results of exams that have already
+been completed, and the option to notify students who have taken the
+test and received a grade that their results have changed.
+Discriminatory power
+A Professor has created a test which is taken by all of his students.
+The test results should be matched with the individual question
+results to compute the discriminatory power and the reliability of
+the questions used in the test. The results should be stored in the
+question database and be accessible by every other professor who has
+the privileges to access this professor's database.
+[A question improves the reliability of a test if it differentiates
+in the context of the test, which happens when it has discriminatory
+power. A question has discriminatory power if it splits good from bad
+students in the same way that the test as a whole does. The
+discriminatory power tells the professor whether the question matches
+the test. Example: a hard question with a high mean value should be
+answered correctly by good students more often than by bad students.
+If the question is answered equally often by good and bad students,
+the discriminatory power tells the professor that the question is
+more a matter of guessing than of knowing.]
+
+The vocabulary test
+A student wants to learn a new language.
+While attending the class, he enters the vocabulary for each section
+into the assessment system. If he wants to check his learned knowledge
+he takes the vocabulary test which will show him randomized words to be
+translated. Each word will have a ranking stating how probable it is
+for the word to show up in the test. With each correct answer the
+ranking goes down, with each wrong answer it goes up. Once a section
+has been finished and all words have been translated correctly, the
+student may proceed to the next section. Possible types of questions:
+
+ - Free text translation of a word
+ - Free text translation of a sentence
+ - Multiple choice test
+ - Fill in the blanks
+
+To determine the correct answer it is possible to do a
+char-by-char compare and highlight the wrong parts vs. just displaying
+the wrong and correct answer (at the end of the test or once the answer
+is given).
+The quiz
+To pep up your website you offer a quiz, which allows users to answer
+some (multiple choice) questions and get the result immediately as a
+percentage score in a table comparing that score to other users'.
+Users should be able to answer only a part of the possible questions
+each time. If the user is in the top 2%, offer him the contact
+address of "Mensa"; other percentages should give encouraging text.
+Scoring
+The computer science department has a final exam for its students.
+The exam consists of 3 sections. The exam is passed if the student
+achieves at least 50% total score. In addition the student has to
+achieve at least 40% in each of the sections. The first section is
+deemed more important, therefore it gets a weight of 40%; the other
+two sections count only 30% each towards the total score. Each
+section consists of multiple questions that have different weights
+(in percent) for the total score of the section. The sum of the
+weights has to be 100%, otherwise the author of the section gets a
+warning. Some of the questions are multiple choice questions that get
+different percentages for each answer. As the computer science
+department wants to discourage students from giving wrong answers,
+some wrong answers have a negative percentage (thereby reducing the
+total score in the section). A worked example follows below.
+
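+As a worked example of this scheme (with made-up section scores): a
+student scoring 50%, 60% and 40% in the three sections gets 0.4*50 +
+0.3*60 + 0.3*40 = 20 + 18 + 12 = 50% and passes, since the total
+reaches 50% and each section is at or above the 40% floor. With
+scores of 50%, 70% and 35%, the weighted total is 0.4*50 + 0.3*70 +
+0.3*35 = 51.5%, above the 50% threshold, yet the exam is still failed
+because the third section falls below 40%.
+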
+Reuse in other packages
+The information gathering capabilities of the assessment system should
+be able to be reused by other packages.
+User profiling
+In order to join a class at the university the
+student has to fill out some questions. The answers can be viewed by
+the administrator but also by other students (pending the choice of the
+user). This latter functionality should not be part of assessment
+itself, but of a different module, making use of assessment. The GPI
+user-register is a good example for this.
+Includes
+Using a CMS the editor wants to include the poll on
+the first page on the top right corner. The result should be shown on a
+separate page or be included in the CMS as well.
+Information gathering for developers
+A developer needs
+functionality for gathering dynamic information easily. For this he
+should be able to easily include an assessment instead of using ad_form
+directly in his code. This gives the administrator of the site the
+option to change the questions at a later stage (take the questions in
+the user sign-up process as an example).
+Database questions
+Some answers to questions should be stored
+directly in database tables of OpenACS in addition to the assessment
+system. This is e.g. useful if your questions ask for first_names and
+last_name. When answering the question, the user should see the value
+currently stored in the database as a default.
+Action driven questions
+The company conducting the QA wants to get more participants for its
+survey by recommendation. For this, each respondent is asked at the
+end of the survey whether he would recommend this survey to other
+users (with the option to give the email addresses of these users).
+The answer will be processed and an email sent out to all given
+addresses, inviting them to take the survey.
+User Types
+There are several types of administrative users and end-users for
+the
+Assessment package which drive the functional requirements. Here is a
+brief synopsis of their responsibilities in this package.
+
+
+
+Package-level Administrator
+Assigns permissions to other users for administrative roles.
+
+Editor
+Has permissions to create, edit, delete and organize in repositories
+Assessments, Sections and Items. This includes defining Item formats,
+configuring data validation and data integrity checks, configuring
+scoring mechanisms, defining sequencing/navigation parameters, etc.
+
+Editors could thus be teachers in schools, principal
+investigators or biostatisticians in clinical trials, creative
+designers in advertising firms -- or OpenACS developers incorporating a
+bit of data collection machinery into another package.
+
+
+Scheduler
+Has permissions to assign, schedule or otherwise map a given
+Assessment or set of Assessments to a specific set of subjects,
+students or other data entry personnel. These actions potentially will
+involve interfacing with other Workflow management tools (e.g. an
+"Enrollment" package that would handle creation of new Parties (aka
+clinical trial subjects) in the database.
+
+Schedulers could also be teachers, curriculum designers, site
+coordinators in clinical trials, etc.
+
+
+Analyst
+
+Has permissions to search, sort, review and download data collected via
+Assessments.
+
+Analysts could be teachers, principals, principal investigators,
+biostatisticians, auditors, etc.
+
+
+Subject
+Has permissions to complete an Assessment providing her own
+responses or information. This would be a Student, for instance,
+completing a test in an educational setting, or a Patient completing a
+health-related quality-of-life instrument to track her health status.
+Subjects need appropriate UIs depending on Item formats and
+technological prowess of the Subject -- kiosk "one-question-at-a-time"
+formats, for example. May or may not get immediate feedback about data
+submitted.
+
+Subjects could be students, consumers, or patients.
+
+
+Data Entry Staff
+Has permissions to create, edit and delete data for or about the
+"real" Subject. Needs UIs to speed the actions of this trained
+individual and support "save and resume" operations. Data entry
+procedures used by Staff must capture the identity of both the "real"
+subject and the Staff person entering the data -- for audit trails and
+other data security and authentication functions. Data entry staff need
+robust data validation and integrity checks with optional, immediate
+data verification steps and electronic signatures at final submission.
+(Many of the tight-sphinctered requirements for FDA submissions center
+around mechanisms encountered here: to prove exactly who created any
+datum, when, whether it is a correct value, whether anyone has looked
+at it or edited it and when, etc etc...)
+
+Staff could be site coordinators in clinical trials, insurance
+adjustors, accountants, tax preparation staff, etc.
+
+System / Application Overview
+
+Editing of Assessments
+
+
+
+
+
+Scheduling of Assessments
+
+
+
+ - Create, edit, clone and delete Assessment Schedules. Schedulers
+will define:
+
+ - Start and End Dates for an Assessment
+ - Number of times a Subject can perform the Assessment (1-n)
+ - Interval between Assessment completion if Subject can
+perform it more than once
+ - Whether anonymous Subjects are allowed
+ - Text of email to Subjects to Invite, Remind and Thank them
+for performing Assessment
+ - Text of email to Staff to Instruct, Remind and Thank them
+for performing Assessment on a Subject
+
+
+
+ - Provide these additional functions:
+
+ - Support optional "electronic signatures" consisting simply
+of an
+additional password field on the form along with an "I attest this is
+my response" checkbox that the user completes on submission (rejected
+without the correct password) -- ie authentication only.
+ - Support optional "digital signatures" consisting of a hash
+of the user's submitted data, encrypted along with the user's password
+-- ie authentication + nonrepudiation.
+ - Perform daily scheduled procedures to look for Subjects and
+Staff who need to be Invited/Instructed or Reminded to participate.
+ - Incorporate procedures to send Thanks notifications upon
+completion of Assessment
+ - Provide UIs for Subjects and for Staff to show the status of
+the Assessments they're scheduled to perform -- eg a table that shows
+expected dates, actual completion dates, etc.
+
+
+
+
+
+Analysis of Assessments
+
+
+
+ - Provide UIs to:
+
+ - Define time-based, sortable searches of Assessment data
+(both
+primary/raw data and calculated Scored data) for tabular and (if
+appropriate) graphical display
+ - Define time-based, sortable searches of Assessment data for
+conversion into configurable file formats for download
+ - Define specific searches for display of data quality
+(incomplete assessments, audit trails of changed data values, etc)
+
+
+
+
+
+Performance of Assessments
+
+
+
+ - Provide mechanisms to:
+
+ - Handle user Login (for non-anonymous studies)
+ - Determine and display correct UI for type of user (eg kiosk
+format for patients; keyboard-centric UI for data entry Staff)
+ - Deliver Section forms to user
+ - Perform data validation and data integrity checks on form
+submission, and return any errors flagged within form
+ - Display confirmation page showing submitted data (if
+appropriate) along with "Edit this again" or "Yes, Save Data" buttons
+ - Display additional "electronic signature" field for password
+and "I certify these data" checkbox if indicated for Assessment
+ - Process sequence navigation rules based on submitted data
+and deliver next Section or terminate event as indicated
+ - Track elapsed time user spends on Assessment tasks --
+answering a given question, a section of questions, or the entire
+Assessment -- and do something with this (we're not
+entirely sure yet what this should be -- merely record the elapsed time
+for subsequent analysis, reject over-time submissions, or even forcibly
+refresh a laggard user's page to "grab the Assessment back")
+ - Insert appropriate audit records for each data submission,
+if indicated for Assessment
+ - Handle indicated email notifications at end of Assessment
+(to Subject, Staff, Scheduler, or Editor)
+
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/sequencing.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/sequencing.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/sequencing.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,275 @@
+
+
+
+
+ Assessment Item Checks
+
+
+Sequencing
+
+Along with Data Validation and Versioning, probably the most vexing
+problem confronting the Assessment package is how to handle conditional
+navigation through an Assessment guided by user input. Simple branching
+has already been accomplished in the "complex survey" package via hinge
+points defined by responses to single items. But what if
+branching/skipping needs to depend on combinations of user responses to
+multiple items? And how does this relate to management of data
+validation steps? If branching/skipping depends not merely on what
+combination of "correct" or "in range" data the user submits, but also
+on combinations of "incorrect" or "out of range" data, how the heck do
+we do this?
+
+One basic conceptual question is whether Data Validation is a
+distinct process from Navigation Control or not. Initially we thought
+it was, and that there should be a datamodel and set of procedures
+for checking user input, the output of which would pipe to a separate
+navigation datamodel and set of procedures for determining the user's
+next action. This separation is made (along with quite a few other
+distinctions/complexities) in the IMS "simple sequencing" model
+diagrammed below. But to jump the gun a bit, we think it actually
+makes sense to combine these two processes into a common
+"post-submission user input processing" step we'll refer to here as
+Sequencing. (Note: we reviewed several alternatives in the archived
+prior discussions.)
+
+So here's the current approach. First, we think that the QTI
+components
+nicely capture the essential pieces needed for both Data Validation and
+Navigation Control (the combination of which we're referring to as
+Sequencing). But though not explicitly part of the QTI schema,
+implicitly there is (or should be) another component:
+
+
+
+ - a destination that defines the next
+item/section/form to be presented to the user based on the evaluation
+of the first four elements. It appears to us that this could include
+the optional Data Validation step, in that certain rule evaluation
+results may produce a "no move" destination requiring the user to
+remain at the current item and perform some additional action (change
+the result or provide an additional comment/justification)
+
+
+Next we note that there are two scopes over which Sequencing needs to
+be handled:
+
+
+ - intra-item: checks pertaining to user responses to a single item
+
+ - inter-item : checks pertaining to user responses to more than
+one item; checks among multiple items will be built up pairwise
+
+
+So how might we implement this in our datamodel? Consider the
+"sequencing" subsystem of the Assessment package:
+
+
+
+
+
+Here is how this might work:
+
+
+ - Each intra-item "rule" (eg "age < 90") is a row in the
+as_item_checks table, which has columns for a "comparator" (EQ, NE,
+LT, LE, GT, GE, IN), a "conjunction" ("and", "or", "not"), and
+columns for the target value to be compared (the "content_value" is a
+version_id in cr_revisions for images etc).
+
+Thus to say that a user's response must be greater than or equal to 0
+and less than 1 would involve inserting two rows into as_item_checks
+(in abbreviated pseudo-sql):
+
+ - insert into as_item_checks
+(comparator,numeric_value,conjunction) values (GE,0,'and')
+ - insert into as_item_checks
+(comparator,numeric_value,conjunction) values (LT,1,'and')
+
+ Then when a user submits a response to this item, the as_item_checks
+table would be queried as part of the "get_assessment_info" proc to
+get these parameters, which would then be passed to some procedure
+that checks the user's response by converting the "GE", say, to an
+actual numeric comparison in some switch structure (unless there's a
+cleverer way to do this via uplevel'ing, upvar'ing or exec'ing; one
+SQL-side alternative is sketched after this list).
+ As long as these criteria aren't grouped (other than the default
+"single group" implicit in such a statement), the as_check_groups
+table isn't needed. However, if you want to say a user's response
+must be greater than or equal to 0 and less than 1 OR greater than
+10, then you'd insert a third row into as_item_checks and two rows
+into as_check_groups:
+
+ - insert into as_check_groups (conjunction) values ('or')
+then use this new check_group_id = 234 (eg) to insert into the
+as_item_checks rows:
+ - insert into as_item_checks
+(comparator,numeric_value,conjunction,check_group_id) values
+(GE,0,'and',234)
+ - insert into as_item_checks
+(comparator,numeric_value,conjunction,check_group_id) values
+(LT,1,'and',234)
+
+
+ - insert into as_check_groups (conjunction) values ('or')
+then use this new check_group_id = 235 (eg) to insert into the
+as_item_checks row:
+ - insert into as_item_checks
+(comparator,numeric_value,conjunction,check_group_id) values
+(GT,10,'and',235)
+
+ If the grouping were more complex, then the parent_group_id field
+would be used to define the hierarchical grouping.
+
+
+ - Each inter-item "rule" (eg "age < 90" or "gender =
+male") is a row in as_inter_item_checks, which has columns for each
+of the two items to be compared in this rule, similar to their use in
+as_item_checks; each rule is a row in this table. Each row thus
+supports a pairwise check, so a test involving three items would
+involve three rows:
+
+
+ - insert into as_inter_item_checks (item1_flds,item2_flds)
+values (item1_vals,item2_vals)
+ - insert into as_inter_item_checks (item1_flds,item3_flds)
+values (item1_vals,item3_vals)
+ - insert into as_inter_item_checks (item2_flds,item3_flds)
+values (item2_vals,item3_vals)
+
+ Obviously, this schema grows quickly, since n(n-1)/2 rows are
+required for pairwise checks among n items, but I can't see needing
+more than several such checks for any real case; in fact I've only
+encountered the need for two items to be checked against each other
+in real applications. However, if there's a more clever way to do
+this without falling into the Bottomless Combinatorial Pit, I'm keen
+to hear it. ;-)
+ Groups and ordering of these inter-item checks would be handled by
+adding rows to as_check_groups as before.
+
+
+ - Navigation information is moved out to the
+as_check_navigation table, each row of which defines by
+form/section/item ids where the user is to be taken based on the
+evaluation of the top-level group (ie parent_group_id is null) for
+that item or inter-item check group. This table stores what is to
+happen (ie where the user goes) depending on whether the
+item/inter-item checks evaluate to "success" (everything is fine, so
+proceed), "warning" (something isn't exactly right but isn't
+flagrantly wrong; with an explanation we'll take that value), or
+"error" (nope, that's right out; the data must be resubmitted).
+
+
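+Here is the SQL-side alternative to the Tcl switch mentioned above: a
+minimal sketch that evaluates all ungrouped (implicitly and'ed)
+checks for one item in a single query. The column names follow the
+schema above, but treat the details as assumptions:
+
+    -- 't' when the response passes every check defined for the item
+    -- (the IN comparator is omitted for brevity)
+    select sum(case
+                 when comparator = 'EQ' and :response =  numeric_value then 0
+                 when comparator = 'NE' and :response <> numeric_value then 0
+                 when comparator = 'LT' and :response <  numeric_value then 0
+                 when comparator = 'LE' and :response <= numeric_value then 0
+                 when comparator = 'GT' and :response >  numeric_value then 0
+                 when comparator = 'GE' and :response >= numeric_value then 0
+                 else 1
+               end) = 0 as passed_p
+      from as_item_checks
+     where item_id = :item_id;
+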
+
+Specific Entities
+
+ - Item-checks (as_item_checks) define 1..n ordered
+evaluations of a user's response to a single Item. These can occur
+either via client-side Javascript when the user moves focus from the
+Item, or server-side once the entire html form comes back.
+
+The goal is to have a flexible, expressive grammar for these checks
+to support arbitrary types of checks, including both input validation
+("Is the user's number within bounds?"; "Is that a properly formatted
+phone number?") and grading ("How many points out of the Item's total
+should this response get?"). The implementation details remain
+somewhat uncertain, but the general approach seems sound: define
+groups of checks, run through the ordered hierarchy calling a tcl/sql
+callback proc that performs a comparison operation, then evaluate the
+results and implement some consequence (navigation, etc).
+ Item Checks thus will have these attributes:
+
+
+ - item_check_id
+ - check_location - client-side or server-side
+ - comparator_type
+ - comparator_id - references as_comparators where we abstract
+the actual comparators
+ - check_group_id - references as_check_group
+ - javascript_function - name of function that gets called when
+focus moves
+ - user_message - optional text to return to user
+ - navigation_id - references as_check_navigation
+
+
+
+ - Inter-item-checks (as_inter_item_checks) are
+similar to Item-Checks but operate pairwise over multiple Items. These
+are server-side checks though conceivably it may be possible to spin
+out Javascript that could perform these client-side; this will
+definitely be tricky though. Attributes include:
+
+
+ - inter_item_check_id
+ - item1_id
+ - item2_id
+ - comparator1_id
+ - comparator2_id
+ - check_group_id - references as_check_group
+ - user_message - optional text to return to user
+ - navigation_id - references as_check_navigation
+
+
+
+ - Check Groups (as_check_groups) are the grouping/associative
+mechanisms by which Item Checks and Inter-Item Checks operate.
+
+
+ - group_id
+ - parent_group_id
+ - sort_order
+
+
+
+ - Comparators (as_comparators) are where we abstract
+the comparisons in order to extensibly support additional types of
+comparisons (image-image comparisons, etc). Since Tcl is weakly typed
+but SQL is not, we have to use a skinny-table approach here, too.
+Attributes include:
+
+
+ - comparator_id
+ - data_type
+ - numeric_ref_value
+ - text_ref_value
+ - boolean_ref_value
+ - content_ref_value
+ - blob_ref_value
+ - item_null_p - this is "t" if the check "Is this Item value
+null?" is supposed to evaluate to True. Without this attribute, the
+only way to express this meaning is to leave all the other
+*_ref_values null and test each time that they all are null. Instead,
+this gives us a single, positive check. Why do we want this? There
+are lots of inter-item checks of this sort: "If Item(gender) = 'male'
+then Item(bra size) is null". The comparator attached to Item(bra
+size) would have item_null_p = "t". (Well, we would hope that this is
+the case. ;-)
+
+
+
+ - Check Navigation (as_check_navigation) abstracts
+out where the user will be directed after a check or group of checks
+is completed. We need to handle three outputs from the check
+functions: success, warning, failure. And we need to be able to send
+the user to the next assessment, section or item (see the lookup
+sketch after this list). Attributes include:
+
+
+ - navigation_id
+ - success_next_assess_id
+ - success_next_section_id
+ - success_next_item_id
+ - warning_next_assess_id
+ - warning_next_section_id
+ - warning_next_item_id
+ - failure_next_assess_id
+ - failure_next_section_id
+ - failure_next_item_id
+
+
+
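+A minimal sketch of the navigation lookup this table implies, with
+:check_result being the 'success' / 'warning' / 'failure' outcome
+computed by the checks; only the item-level columns are shown, and
+the exact dispatch is an assumption:
+
+    select case :check_result
+             when 'success' then success_next_item_id
+             when 'warning' then warning_next_item_id
+             else                failure_next_item_id
+           end as next_item_id
+      from as_check_navigation
+     where navigation_id = :navigation_id;
+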
+
+
Index: openacs-4/packages/assessment/www/doc/versioning.html
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/versioning.html,v
diff -u
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ openacs-4/packages/assessment/www/doc/versioning.html 13 Jun 2004 23:20:44 -0000 1.1
@@ -0,0 +1,223 @@
+
+
+
+
+ Versioning
+
+
+Overview
+
+This topic requires special mention because it is centrally important
+to Assessment and one of the most radical departures from the current
+packages (in which "surveys" or "questionnaires" are all one-shot
+affairs that at best can be cloned but not readily modified in a
+controlled fashion).
+
+During its lifetime, an Assessment may undergo revisions in the midst
+of data collection. These revisions may be minor (a change of label
+on an Item, or addition of a new Choice to an Item) or major
+(addition or deletion of an entire Section). Obviously, in most
+applications such changes are undesirable and people want to avoid
+them. But the reality is that such changes are inevitable, and so the
+Assessment package must accommodate them. Clinical trial protocols
+change; teachers alter their exams from term to term. And still,
+there is a crucial need to be able to assemble and interpret data
+collected across all these changes.
+
+Another type of "revision" occurs when a component (an Item
+Choice, Item, Section, or the entire Assessment) needs to be translated
+into another language. Even if the semantics of the component are
+identical (and they should be or you need a better translator ;-), the
+Assessment package needs to handle this situation correctly: an admin
+user needs to be able to "assign" the right language version to a set
+of subjects, and the returned user data need to be assembled into
+trans-language data sets.
+
+Note that two orthogonal constructs are in play here:
+
+
+
+ - Many-many relationships: a given Section may be reused in
+many different Assessments (eg if it contains commonly-needed Items
+such as questions about demographic details)
+
+ - Multiple versions: that same Section may exist in
+different versions in those different Assessments (eg if different
+Assessment authors add or subtract an Item, change wording of an Item's
+label, etc). This includes different translations of semantically
+identical text.
+
+
+Approach
+The Content Repository (CR) in OpenACS is designed to handle these
+complex design issues, though it is still undergoing refinements and
+how best to use it is also still being discovered. So the ideas here
+are still somewhat exploratory.
+
+For each of the package components that need to be versioned
+(certainly the core components as_assessments, as_sections, as_items,
+and as_item_choices; but also other components like as_policies), we
+extend the basic CR entities cr_items and cr_revisions. Thus we
+actually have, for instance, two tables for Items:
+
+
+
+ - as_items (a cr_item) for whatever "immutable" attributes there
+are
+
+ - as_items_revs (a cr_revision) for all mutable attributes
+including translations
+
+
+
+This pattern of dual tables is used for all components that need to
+behave this way (a schema sketch follows below). When an admin user
+creates a new Item, a row is inserted into both the as_items and the
+as_items_revs tables. Then when the same admin user (or another admin
+user) changes something about the Item, a new as_items_revs row is
+inserted.
+
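+A minimal sketch of the dual-table pattern against the Content
+Repository; the key columns reference the standard cr_items and
+cr_revisions tables, while the non-key columns are illustrative only:
+
+    create table as_items (
+        item_id    integer
+                   constraint as_items_item_id_fk
+                   references cr_items(item_id)
+                   constraint as_items_pk primary key,
+        -- "immutable" attributes live here
+        field_code varchar(50)
+    );
+
+    create table as_items_revs (
+        rev_id     integer
+                   constraint as_items_revs_rev_id_fk
+                   references cr_revisions(revision_id)
+                   constraint as_items_revs_pk primary key,
+        -- mutable, versioned attributes (including translations)
+        item_text  varchar(1000),
+        required_p char(1) default 'f'
+    );
+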
+Now here is where things become tricky, though. Any time a component
+is changed, there is a simultaneous implicit change to the entire
+hierarchy. Data collected after this change will be collected with a
+semantically different instrument. Whether the difference is large or
+small is immaterial; it is different, and Assessment must handle
+this. And the CR doesn't do this for us automagically.
+
+So what the package must do is version both the individual
+entities and also all the relationships over which we join when we're
+assembling the entire Assessment (whether to send out to a requesting
+user, to stuff the database when the form comes back, or to pull
+collected data into a report).
+This doesn't involve merely creating triggers to insert new mapping
+table rows that point to the new components. We also need to insert new
+revisions for all components higher up the hierarchy than the component
+we've just revised. Thus:
+
+
+
+ - If we change the text displayed with a Section, then we need to
+insert a new as_section_revs and a new as_section_assessment_map row.
+But we also need to insert a new as_assessment_revs as well, since if
+the Section is different, so is the Assessment. However, we don't need
+to insert any new as_item_revs for Items in the Section, though we do
+need to insert new as_section_item_map rows.
+
+
+ - If we change the text of an Item Choice, then we need to insert
+new stuff all the way up the hierarchy.
+
+
+
+Another key issue, discussed in this
+thread,
+involves the semantics of versioning. How big of a modification in some
+Assessment package entity needs to happen before that entity is now a
+"new item" instead of a "new version of an existing item"? If a typo in
+a single Item Choice is corrected, one can reasonably assume that is
+merely a new version. But if an Item of multiple choice options is
+given a new choice, is this Item now a new one?
+
+One possible way this could be defined would derive from the
+hierarchy model in the CR: cr_items -- but not cr_revisions -- can
+contain other entities; the parent_id column is only in cr_items. Thus
+if we want to add a fifth as_item_choice to an as_item (while
+preserving the state of the as_item that only had four
+as_item_choices), we need to insert a new as_item and not merely a new
+as_item_rev for the existing as_item.
+
+On the other hand, if we manage the many-many hierarchies of
+Assessment package entities in our own mapping tables outside of the CR
+mechanism, then we can handle this differently. At this point, we're
+not sure what is the best approach. Please post comments!
+
+A final point concerns the mapping tables. The OpenACS
+framework provides a variety of special-purpose mapping tables that are
+all proper acs_objects (member_rels, composition_rels, acs_rels, and
+the CR's own cr_rels). These provide additional control over
+permissioning but fundamentally are mapping tables. Whether to use them
+or just simple two-key tables will depend on the need for permission
+management in the given relationship. Presumably for most of the
+relations over which joins occur (ie that aren't exposed to outside
+procs etc), the simple kind will be superior since they are far lighter
+weight constructs.
+
+
+Specific Versionable Entities
+Within each subsystem of the Assessment package, the following
+entities will inherit from the CR. We list them here now, and once
+we've confirmed this selection, we'll move the information out to each
+of the subsystems' pages.
+
+
+
+ - Core - Items:
+
+
+ - Items: as_items; as_items_revs
+
+ - Item Choices: as_item_choices; as_item_choices_revs
+
+ - Localized Items: as_item_localized; as_item_localized_revs
+Note:
+we're not yet entirely sure what we gain by this when Items themselves
+are versioned; we haven't yet settled on whether different translations
+of the same Items should be different versions or not.
+
+ - Messages: as_messages; as_messages_revs
+
+
+
+
+
+ - Core - Grouping:
+
+
+ - Assessments: as_assessments; as_assessments_revs
+
+ - Sections: as_sections; as_sections_revs
+
+
+
+
+
+ - Scheduling:
+
+
+ - Assessment Events: as_assessment_events;
+as_assessment_events_revs
+
+ - Assessment Policies: as_assessment_policies;
+as_assessment_policies_revs
+
+
+
+
+
+ - Core - Collected Data:
+
+
+ - Item Data: as_item_data; as_item_data_revs
+
+ - Scale Data: as_scale_data; as_scale_data_revs
+
+
+
+
+
+ - Session Data:
+
+
+ - Sessions: as_sessions; as_sessions_revs
+
+ - Assessment Data: as_assessment_data; as_assessment_data_revs
+
+ - Section Data: as_section_data; as_section_data_revs
+
+
+
+
+
+
Index: openacs-4/packages/assessment/www/doc/images/assessment-datafocus.jpg
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-datafocus.jpg,v
diff -u
Binary files differ
Index: openacs-4/packages/assessment/www/doc/images/assessment-groupingfocus.jpg
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-groupingfocus.jpg,v
diff -u
Binary files differ
Index: openacs-4/packages/assessment/www/doc/images/assessment-itemfocus.jpg
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-itemfocus.jpg,v
diff -u
Binary files differ
Index: openacs-4/packages/assessment/www/doc/images/assessment-page-flow.jpg
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-page-flow.jpg,v
diff -u
Binary files differ
Index: openacs-4/packages/assessment/www/doc/images/assessment-schedfocus.jpg
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-schedfocus.jpg,v
diff -u
Binary files differ
Index: openacs-4/packages/assessment/www/doc/images/assessment-sequencefocus.jpg
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-sequencefocus.jpg,v
diff -u
Binary files differ
Index: openacs-4/packages/assessment/www/doc/images/assessment.jpg
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment.jpg,v
diff -u
Binary files differ