Index: openacs-4/packages/assessment/www/doc/as_items.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/as_items.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/as_items.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,287 @@ + + + + + AS_Items + + + +

Overview

+

+The Item and Section catalogues are central parts of the assessment system. These repositories support reuse of Assessment components by storing the various Items (or questions, if you like) and groups of Items (i.e. Sections) that can be used in an assessment. You can add, edit and delete an item of a certain type within a certain scope. Furthermore, the catalogues let you search and browse for questions to include in your assessment, as well as import and export multiple questions in various formats.

+

Here we discuss only the design implications for Items. Green colored tables have to be internationalized.

+

Each Item has a specific Item Type such as "Multiple Choice Question" or "Free Text". Each Item Type adds additional Attributes to the Item, which makes the model quite flexible. Additionally, each item has a related display type storing information on how to display the item. This way we can create an adp snippet which we can include to display a certain item (the snippet is stored de-normalized in the as_items table and updated on every change to the item or the item_type).
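The de-normalized snippet described above is essentially a cache that must be refreshed whenever the item or its type changes. A minimal sketch of that pattern, with illustrative names (the real package stores the snippet in the as_items table, not a dict):

```python
# Hypothetical sketch of the de-normalized display-snippet cache: the
# rendered fragment is stored alongside the item and regenerated on every
# change. All names here are illustrative, not the package's actual API.

def render_snippet(item):
    """Build the adp-like display fragment for one item."""
    return '<label>%s</label><input name="item_%s">' % (
        item["title"], item["item_id"])

def update_item(item, **changes):
    """Apply changes, then refresh the cached snippet (the de-normalized
    copy kept next to the item in the real data model)."""
    item.update(changes)
    item["display_snippet"] = render_snippet(item)
    return item

item = update_item({"item_id": 1, "title": "Your age?"})
# item["display_snippet"] now reflects the current title
```

The point of the cache is that displaying an assessment never has to re-render items; the cost is paid once, at edit time.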

+

Categorization and internationalization will make it into OpenACS 5.2. We therefore do not deal with them separately in Assessment but use the (to be) built-in functionality of OpenACS 5.2.

+

+Additionally, we have support functionality for an item, which includes the help functionality. To give Assessment authors flexibility in adapting Item defaults, help messages, etc. for use in different Assessments, we abstract a number of attributes out of as_items into mapping tables where "override" values for these attributes can optionally be set by authors. If authors choose not to set overrides, the values originally created in the Item apply.
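The override lookup described above amounts to a simple fallback rule. A minimal sketch, with illustrative attribute names (the real values live in as_items and the mapping tables):

```python
# Sketch of the per-assessment override fallback: the mapping-table value
# wins if the author set one, otherwise the Item's original value applies.
# Attribute names are illustrative.

def effective_value(item_defaults, overrides, attr):
    """Return the per-assessment override if set, else the Item default."""
    return overrides.get(attr, item_defaults[attr])

item_defaults = {"help_text": "Enter your answer.", "required": False}
overrides = {"help_text": "Please answer in English."}   # set per assessment

effective_value(item_defaults, overrides, "help_text")  # → "Please answer in English."
effective_value(item_defaults, overrides, "required")   # → False (Item default)
```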

+

Separately, we will deal with Checks on Items. These allow us to run checks on the input (is the value given by the user actually valid?), on branches (if we display this item, which responses must already have been given?) and after input (how many points does this answer give?).
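The three kinds of Checks can be pictured as three small predicates. This is only an illustrative sketch under an assumed representation; the package's actual Check API may differ:

```python
# Illustrative sketch of the three kinds of Checks on Items. The predicate
# representation and all names are assumptions, not the package's API.

def input_check(value):
    """Input check: is the value given by the user actually valid?
    Example: a plausible age."""
    return value.isdigit() and 0 < int(value) < 130

def branch_check(responses):
    """Branch check: display this item only if the required prior
    response has been given."""
    return responses.get("visited_europe") == "yes"

def score_check(value, correct, points):
    """Post-input check: how many points does this answer give?"""
    return points if value == correct else 0

input_check("42")                          # → True
branch_check({"visited_europe": "no"})     # → False (follow-up item is skipped)
score_check("b", correct="b", points=5)    # → 5
```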

+

Here is the graphical schema for the Item-related subsystems, +including the Item Display subsystem described here.
+

+

+
+

Data model graphic

+
+

+

+

Specific Entities: Core Functions

+

+Here are the components of the Item model in Assessment: +

+

+ +

Help System

+The help system should make a small "?" appear next to the title of any object that has a help text associated with it. Help texts are displayed in the nice bar that Lars created for the OpenACS header. Each object can have multiple help texts associated with it (displayed in sort order with each hit on the "?"), and help texts can be reused, making this an n:m relationship (using cr_rels). E.g. you might want a default help text for certain cr_item_types; that is why reuse is worth supporting.
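The n:m relationship with a sort-order attribute can be sketched as a plain mapping table (cr_rels plays this role in the real data model; all names below are illustrative):

```python
# Minimal sketch of the n:m help-text mapping: one object can carry
# several messages (shown in sort order), and one message can be reused
# by several objects. Names are illustrative.

help_map = [
    # (object_id, message_id, sort_order)
    (101, 1, 2),
    (101, 2, 1),
    (102, 1, 1),   # message 1 reused by a second object
]
messages = {1: "Pick one option.", 2: "This question is required."}

def help_texts(object_id):
    """Return the object's help texts in sort order."""
    rels = sorted((s, m) for o, m, s in help_map if o == object_id)
    return [messages[m] for _, m in rels]

help_texts(101)  # → ["This question is required.", "Pick one option."]
```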

Relationship attributes: +

+ +

+The messages table (as_messages) abstracts out help messages (and other types of messages) for use in this package. Attributes include:

+ +
+ + Index: openacs-4/packages/assessment/www/doc/data-modell.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/Attic/data-modell.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/data-modell.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,111 @@ + + + + + Assessment Data Modell Overview + + +

Overview

+

+At its core, the Assessment package defines a hierarchical container model of a "survey", "questionnaire" or "form". This approach follows the precedent of existing work, and no better alternative has emerged.

+

+ +

We choose the terms Assessment-Sections-Items-Choices over Surveys-Sections-Questions-Choices partly to reduce naming clashes during the transition from the Survey/Questionnaire packages, but mostly because these terms are more general and thus suit the broader applicability intended for this package.

+

As is the custom in the OpenACS framework, all RDBMS tables in the package will be prefixed with "as_" to prevent naming clashes. Judicious use of namespaces will also be made in keeping with current OpenACS best practice.

+

Several of the Metadata entities have direct counterparts in +the Data-related partition of the data model. Some standards (notably +CDISC) rigorously name all metadata entities with a "_def" suffix and +all data entities with a "_data" suffix -- thus "as_item_def" and +"as_item_data" tables in our case. We think this is overkill since +there are far more metadata entities than data entities and in only a +few cases do distinctions between the two become important. In those +cases, we will add the "_data" suffix to data entities to make this +difference clear. +

+

A final general point (that we revisit for specific entities below): the Assessment package data model makes heavy use of the Content Repository (CR) in the OpenACS framework. In fact, this use of the CR for most important entities represents one of the main advances of this package compared to the earlier versions. The decision to use the CR is partly driven by the universal need for versioning and reuse within the functional requirements, and partly by the fact that the CR has become "the Right Way" to build OpenACS systems. Note that one implication of this is that we can't use a couple of column names in our derived tables because of naming clashes with columns in cr_items and cr_revisions: title and description. Furthermore, we can handle versioning and internationalization through the CR.

+

+

Synopsis of The Data Model

+

+Here's a detailed summary view of the entities in the Assessment +package. Note that in addition to the partitioning of the entities +between Metadata Elements and Collected Data Elements, we identify the +various subsystems in the package that perform basic functions. +

+We discuss each of these in detail in the subsequent pages, using a "bird's eye view" of this global graphic to keep the schema for each subsystem in perspective while homing in on the relevant detail. Here's a brief introduction to each of these sections
+

+ +
+

Data Model Graphic

+
+ + Index: openacs-4/packages/assessment/www/doc/data_collection.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/data_collection.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/data_collection.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,360 @@ + + + + + Data Collection + + +

Overview

+

+The schema for the entities that actually collect, store and retrieve Assessment data parallels the hierarchical structure of the Metadata Data Model. In the antecedent "complex survey" and "questionnaire" systems, this schema was a simple two-level structure:

+

+ +

+This suffices for one-shot surveys but doesn't support the fine +granularity of user-action tracking, "save&resume" capabilities, +and other requirements identified for the enhanced Assessment package. +Consequently, we use a more extended hierarchy: +

+

+ +

To support user modification of submitted data (of which +"store&resume" is a special case), we base all these entities in +the CR. In fact, we use both cr_items and cr_revisions in our schema, +since for any given user's Assessment submission, there indeed is a +"final" or "live" version. (In contrast, recall that for any Assessment +itself, different authors may be using different versions of the +Assessment. While this situation may be unusual, the fact that it must +be supported means that the semantics of cr_items don't fit the +Assessment itself. They do fit the semantics of a given user's +Assessment "session" however.) +

+

Since all these entities derive from the CR, they are also all acs_objects and thus automagically have the standard creation_user, creation_date etc. attributes. We don't mention them separately here.

+

Also, while this doesn't impact the datamodel structure per se, we add an important innovation to Assessment that wasn't used in "complex survey" or questionnaire. When a user initiates an Assessment Session, an entire set of Assessment objects is created (literally, rows are inserted in all the relevant tables as defined by the structure of the Assessment). Then, when the user submits a form with one or more Items "completed", all database actions from there on consist of updates in the CR, not insertions. (In contrast, the systems to date all wait to insert into "survey_question_responses", for example, until the user submits the html form.) The big advantage of this is that determining the status of any given Item, Section or the entire Assessment is now trivial. We don't have to check whether an Item Data row for this particular Assessment Session is already there and then insert or update it; we know that it's there and we just update it. More importantly, all of our reporting UIs that show Assessment admins the current status of users' progress through the Assessment are straightforward.
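The insert-up-front, update-thereafter pattern can be sketched against a toy table (sqlite3 here for illustration only; the real schema lives in the CR and uses different table and column names):

```python
# Sketch of the session pattern: all Item Data rows are inserted when the
# session starts; submitting answers only ever updates. Progress reporting
# then becomes a trivial aggregate. Table/column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE item_data (
    session_id INTEGER, item_id INTEGER,
    value TEXT, completed INTEGER DEFAULT 0)""")

def start_session(session_id, item_ids):
    # One row per Item, inserted up front.
    conn.executemany(
        "INSERT INTO item_data (session_id, item_id) VALUES (?, ?)",
        [(session_id, i) for i in item_ids])

def submit(session_id, item_id, value):
    # All later database actions are updates, never inserts.
    conn.execute(
        "UPDATE item_data SET value = ?, completed = 1 "
        "WHERE session_id = ? AND item_id = ?",
        (value, session_id, item_id))

def progress(session_id):
    # Status of the whole session without any existence checks.
    return conn.execute(
        "SELECT SUM(completed), COUNT(*) FROM item_data "
        "WHERE session_id = ?", (session_id,)).fetchone()

start_session(1, [10, 11, 12])
submit(1, 10, "42")
# progress(1) → (1, 3): one of three items completed
```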

+

We distinguish here between "subjects", the people whose information is the primary source of the Assessment's responses, and "users", real OpenACS users who can log into the system. Subjects may be completing the Assessment themselves or may have completed some paper form that is being transcribed by staff people who are users. We thus account for both the "real" respondent and one or more "proxy" respondents via this mechanism.

+

Note that we assume that there is only one "real" +respondent. Only one student can take a test for a grade. Even if +multiple clinical staff enter data about a patient, all those values +still pertain to that single patient.  +

+

One final note: we denormalize several attributes in these entities -- event_id, subject_id and staff_id. The reason for putting these foreign keys in each row of the "data" tables is to produce a "star topology" of fact tables and dimension tables. This will facilitate data retrieval and analysis. (Are there other dimension keys that we should include besides these?)
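The payoff of those denormalized keys is that reporting queries can group the fact table directly by any dimension, without joining back through the session hierarchy. A sketch (sqlite3 and all names here are illustrative, not the package's schema):

```python
# Star-topology sketch: event_id / subject_id / staff_id are carried on
# every data row, so aggregation by any dimension is a single GROUP BY.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE item_data (
    item_id INTEGER, value REAL,
    event_id INTEGER, subject_id INTEGER, staff_id INTEGER)""")
conn.executemany(
    "INSERT INTO item_data VALUES (?, ?, ?, ?, ?)",
    [(1, 120.0, 7, 100, 5),
     (1, 130.0, 7, 101, 5),
     (1, 125.0, 8, 100, 6)])

# Average value per event, straight off the fact table -- no joins.
rows = conn.execute(
    "SELECT event_id, AVG(value) FROM item_data "
    "GROUP BY event_id ORDER BY event_id").fetchall()
# → [(7, 125.0), (8, 125.0)]
```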

+

+

Synopsis of the Data-Collection Data Model

+

+Here's the schema for this subsystem:
+

+

+
+

Data Model +

+
+

+

+

Specific Entities

+

This section addresses the attributes of the most important entities in the data-collection data model -- principally the various design issues and choices we've made. We omit literal SQL snippets here, since that's what the web interface to CVS is for. ;-)

+

+ + + Index: openacs-4/packages/assessment/www/doc/display_types.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/display_types.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/display_types.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,193 @@ + + + + + As_Item Display Types + + + +

Overview

+Displaying items to users poses a couple of challenges. First, the display of a single item can differ for each item_type (and even within a type). Second, the display of items within a section can differ from assessment to assessment. Last but not least, the whole assessment might be displayed differently depending on its attributes and the type of assessment we are talking about.

Note: please refer to the discussion of Items here. +That discussion complements the discussion here, and the data model +graphic pertaining to the Item Display Types system is available there +also. +

+

+

Item Display Types

+Each item has an item_display_type object associated with it that defines how to display the item. Each item_display_type has a couple of attributes that can be passed to the formbuilder for the creation of the widget. Each widget has at least one item_display_type associated with it. In the long run this system has the potential to become a part of OpenACS itself (storing additional display information for each acs_object), but we are not there yet :). Obviously we are talking about cr_item_types here as well.

All item_display_types have a couple of attributes in common.

+ +

+Depending on the presentation_type, additional attributes (presentation_type attributes) come into play (they are added as attributes to the CR item type). (Mark: this is not feature complete. It really is up to the coder to decide what attributes each widget should have; the ones below are only *suggestions*.) Additionally, we are not mentioning all HTML possibilities associated with each type (e.g. a textarea has width and height).

+ +

+In addition, there are some potential presentation_types that actually +seem to be better modeled as a Section of separate Items: +

+

+ +

Section display

+A section can be seen as a form, with all the items within the section making up the form. Depending on the type of assessment we are talking about, the section can be displayed in various ways (examples): Additionally, each section has certain parameters that determine the look and feel of the section itself. Luckily it is not necessary to have differing attributes for the sections, so all these display attributes can be found with the section and assessment specification
+ + Index: openacs-4/packages/assessment/www/doc/grouping.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/grouping.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/grouping.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,326 @@ + + + + + Assessment + + +

Here is a graphical overview of the subsystem in the Assessment +package +that organizes Items into Sections and whole Assessments:
+

+

+
+

Data model graphic

+
+

+

+

Review of Specific Entities

+

+

+ + + Index: openacs-4/packages/assessment/www/doc/index.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/index.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/index.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,233 @@ + + + + + Assessment Overview + + +

Introduction

+

The Assessment Package unites the work and needs of various members +of the OpenACS community for data collection functionality within the +OpenACS framework. We're using the term "Assessment" instead of +"Survey" or "Questionnaire" (or "Case Report Form" aka CRF, the term +used in clinical trials) because it is a term used by IMS and because +it connotes the more generic nature of the data collection system we're +focusing on.

+

There has been considerable recent interest in expanding the +capabilities of generic data collection packages within OpenACS. +Identified applications include: +

+ +

Historical Considerations (Work Done So Far)

+

+Several OpenACS efforts form the context for any future work. These +include: +

+

+ +

Competitive Analysis

+The number of competing products in this area is *huge*. Starting with the usual suspects, Blackboard and WebCT, you can go on to clinical trial software like Oracle Clinical or specialised survey systems. When writing the specifications we tried to incorporate as many ideas as possible from the various systems we looked at and to use that experience. A detailed analysis would be too much for the moment.
+

Functional Requirements

+An overview of the functional requirements can be found here. Reading it first is highly encouraged, as it contains the use cases along with a global overview of the functionality contained within Assessment. Additional requirements can be found in the specific pages for the user interface.
+

Design Tradeoffs

+The assessment system has been designed with great flexibility and reuse of existing functionality in mind. This might result in greater complexity for simple uses (e.g. a plain poll system on its own will perform better than running a poll through Assessment), but it provides the chance to maintain one code base for all these separate modules.
+

API

+The API will be defined during the development phase.
+

Data model

+The data model is described in detail in the design descriptions.
+

User Interface

+The UI for Assessment divides into a number of primary functional +areas, as diagrammed below. These include: + +More information can be found at the Page Flow +page.
+

Authors

+The specifications for the assessment system have been written by Stan +Kaufmann and Malte Sussdorff with help from numerous people within and +outside the OpenACS community.
+
+ + Index: openacs-4/packages/assessment/www/doc/item_types.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/item_types.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/item_types.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,369 @@ + + + + + AS_item Types + + +

Overview

+This is a list of the item types and their attributes that we want to support. At a later stage we will add the checks for each item_type to this page as well. This does not mean we are going to create all of them in the first iteration. The attributes listed are *ONLY* those which are not already part of as_items and therefore should be dealt with in as_item_type_attributes (see the Item Data Model for reference).

+

Specific Item Types

+

+

+ +Only site wide admins will get to see the following question types: + + + Index: openacs-4/packages/assessment/www/doc/page_flow.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/page_flow.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/page_flow.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,89 @@ + + + + + Page Flow + + +

Overview

+

+Through the OpenACS templating system, the UI look&feel will be modifiable by specific sites, so we needn't address page layout and graphical design issues here, other than to mention that the Assessment package will use these OpenACS standards:

+ +

Furthermore, the set of necessary pages for Assessment are not all +that dissimilar to the set required by any other OpenACS package. We +need to be able to create, edit and delete all the constituent entities +in the Package. The boundary between the pages belonging specifically +to Assessment and those belonging to "calling" packages (eg dotLRN, +clinical trials packages, financial management packages, etc etc) will +necessarily be somewhat blurred. +

+

+

Proposed Page Flow

+

Nevertheless, here is a proposed set of pages along with very brief descriptions of what happens in each. This organization is mostly derived from the existing Questionnaire module, which can be examined in the "Bay Area OpenACS Users Group" (add yourself to the group and have a look).

+

The UI for Assessment divides into a number of primary functional +areas, as diagrammed below. These include: +

+

+ +

+So this is how we currently anticipate this would all interrelate:

+
+

data model +

+
+ + Index: openacs-4/packages/assessment/www/doc/policies.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/policies.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/policies.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,125 @@ + + + + + Policies and Events + + +

+

+

Policies and Events
+

+

+

+ + + Index: openacs-4/packages/assessment/www/doc/requirements.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/requirements.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/requirements.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,666 @@ + + + + + Assessment functional requirements + + +

Introduction

+The assessment module provides OpenACS with capabilities to conduct +surveys, tests and dynamic information gathering in general, as can be +seen in the use cases.
+

Vision Statement

+The motivation behind the Assessment package is to extend the +functionality of the existing Survey package in both depth and breadth: +

+ +

The current Survey package is a very capable piece of engineering that provides stand-alone data collection functions. It is subsite-aware and has been integrated to some extent with portlets. It is also just now being integrated into user registration processes. These efforts point down the path which the Assessment package intends to follow to its logical conclusion.

+

Development efforts for Assessment thus involve two tracks: +

+

+ +

The measure of success of the Assessment package is the ease with +which it can rapidly be deployed into some high-profile +implementations, notably dotLRN and a clinical trials management system +under development. +

+

Use Cases

+The assessment module in its simplest form is a dynamic information gathering tool. This can be clearly seen in the first group of use cases, which deal with surveys (one form of assessment, e.g. for quality assurance or clinical trials). An extension of this information gathering is the possibility to conduct an evaluation of the information given, as we show in the second group of use cases (testing scenarios). Last but not least, the assessment tool should be able to provide its information gathering features to other packages within the OpenACS framework as well.

It is very important to note that not all parameters and features mentioned in these use cases should be displayed to the user at all times. Depending on the use case, a good guess with predetermined parameters should be made for the user (e.g. there is no need to let the user fill out correct answers to questions if the question is not used in a test). Some use cases like elections require special parameters not necessary anywhere else (like the counting system).

+

Survey scenario

+The survey scenarios are the basic use cases of the assessment system.
+

Simple survey

+An editor wants to conduct surveys on his site. For this purpose he creates questions which are stored in a question catalogue. From this catalogue, the editor chooses the questions he wants to use in his current survey along with the style in which the survey should be presented to the user. Once satisfied, he can make the survey public or test it first. Once the survey is public, subjects (users) of the site can take the survey by filling out the generated form with all the questions the author added to the survey.
+

Quality Assurance

+A company wants to get feedback from users about its product. It creates a survey which offers branching (to prevent users from filling out unnecessary data; e.g. if you answered that you have never been to Europe, the question "Have you seen Rome?" should not show up) and multi-dimensional Likert scales (to ask for the quality and importance of a part of the product in conjunction).
+

Professional data entry

+A clinic wants to conduct a trial. For this, research assistants are asked to interview the patients and store the answers in the assessment on behalf of the client. To meet FDA requirements it is mandatory to prove exactly who created any datum, when, whether it is a correct value, and whether anyone has looked at it or edited it and when, along with other audit trails. As mistakes might happen, it is important that the system runs checks on the plausibility and validity of the entered data (an area code should be five digits; if the age of the patient is below 10, there is no need to ask for credit card information, ...).

University survey

+A Professor wants to create a test by searching through the question database and selecting old questions. He searches the database for a specific keyword or browses by category. The system presents him all questions which match the keyword and/or category. The Professor is able to preview every question and may then decide which questions he will transfer into the survey.

Internal Evaluation

+An institution wants to survey students to compare the quality of specific courses, teachers, or other factors affecting the quality of their education and level of happiness. It should be possible for the person who takes the survey to submit it anonymously and to take the survey only once.

It should also be possible to show the results of a survey to a group of users (e.g. a specific department being evaluated). It should be possible to display the results in a way that gives a department a ranking compared with other departments.

+

Reuse of questions

+The author of a multiple choice question decides that the provided answers are not good at differentiating the knowledge of the subjects and changes some of them. All editors using this question should be informed and asked whether they want to use the changed version or the original one. If the decision is made to switch, it has to be guaranteed that a distinction is kept between subjects that answered the original version and those that answered the new one. In addition, the editor should be able to inform all subjects that have already answered the question that it has changed (and that they might (have to) re-answer).

Multiple languages

+The quality assurance team of the company mentioned above realizes that the majority of its user base are not native English speakers. They therefore want to add translations of the questions to broaden the response base. For consistency, the assessment may only be shown to a subject if all questions used have been translated. Furthermore, it is necessary to store the language used along with the response (as a translation might not be as good as the original).

The poll

+An editor wants to conduct a poll on the site with immediate publication of the result, to get a feeling for how users like the new design of the website. The result can be displayed in an includelet (see below for details) on any page the editor wants.

The election

+The OpenACS community wants to conduct a new +election on the OCT. On creation the names of the contestants have to +be available along with a list of all users allowed to vote. Depending +on the election system, the users have one or multiple votes (ranked or +not), which are calculated in a certain way. Once the election is over +the result is published. +

Collective Meeting Planning

+The sailing club needs to find a meeting time at which all skippers can attend. Given a number of predefined choices, each skipper gives his/her preference for the time slots. The slot with the highest approval wins and is automatically entered into the calendar of all skippers, and a notification is sent out.

Testing scenario

+Especially in the university environment it is important to be able to conduct tests. These help students prepare for exams but also allow Professors to conduct exams. In addition to the data collection done in a survey scenario, testing adds checks and instant evaluation to Assessment.

Proctored Exam

+A Professor wants to hold a proctored test in a computer room. He wants to create the test using questions that he has added and that are already in the database. The only people allowed to take the test are the people who have actually shown up in the room (e.g. by restricting the exam to a specific IP subnet and/or by an exam password which he gives the students in the room at the time of the test and which gives them access to the exam). Additional security measures include:

The Mistake

+A Professor has created a test from the question pool and has administered the exam to a group of students. The test has already been taken by some of his students. He discovers that the answer to one of the questions is not correct. He modifies the test and should be given the option to change the results of exams that have already been completed, and the option to notify students who have taken the test and received a grade that their results have changed.

Discriminatory power

+A Professor has created a test which is taken by all of his students. The test results should be matched with the individual results to compute the discriminatory power and the reliability of the questions used in the test. The results should be stored in the question database and be accessible by every other professor who has the privileges to access this professor's database.

[A question improves the reliability of a test if it differentiates within the context of the test. This is the case if it has discriminatory power. A question has discriminatory power if it splits good from bad students in the same way the test as a whole does. The discriminatory power tells the professor whether the question matches the test. Example: a hard question with a high mean value should be answered correctly by good students more often than by bad students. If the question is answered correctly equally often by good and bad students, the discriminatory power tells the professor that the question invites guessing rather than knowledge.]

+

The vocabulary test

+A student wants to learn a new language. +While attending the class, he enters the vocabulary for each section +into the assessment system. If he wants to check his learned knowledge +he takes the vocabulary test which will show him randomized words to be +translated. Each word will have a ranking stating how probable it is +for the word to show up in the test. With each correct answer the +ranking goes down, with each wrong answer it goes up. Once a section +has been finished and all words have been translated correctly, the +student may proceed to the next section. Possible types of questions: + +To determine the correct answer it is possible to do a +char-by-char compare and highlight the wrong parts vs. just displaying +the wrong and correct answer (at the end of the test or once the answer +is given). +
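The ranking rule in this use case can be sketched as follows. The clamped range and the "done when the ranking hits the floor" condition are our assumptions for illustration; the text only specifies that correct answers lower the ranking and wrong answers raise it:

```python
# Sketch of the vocabulary ranking rule. Step size, bounds, and the
# completion condition are illustrative assumptions.

def update_ranking(ranking, correct, step=1, lo=0, hi=10):
    """Correct answers lower the probability that the word shows up
    again; wrong answers raise it. Clamped to [lo, hi]."""
    new = ranking - step if correct else ranking + step
    return max(lo, min(hi, new))

def section_finished(rankings, lo=0):
    """Assumed rule: the student may proceed once every word in the
    section has been translated correctly often enough to reach the floor."""
    return all(r == lo for r in rankings.values())

rankings = {"Haus": 1, "Boot": 0}
rankings["Haus"] = update_ranking(rankings["Haus"], correct=True)
section_finished(rankings)   # → True
```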

The quiz

+To pep up your website you offer a quiz, which allows users to answer some (multiple choice) questions and immediately get the result as a percentage score in a table comparing that score to other users'. Users should be able to answer only a part of the possible questions each time. If the user is in the top 2%, offer him the contact address of "Mensa"; other percentages should yield encouraging text.
+

Scoring

+The computer science department has a final exam for its students. The exam consists of 3 sections. The exam is passed if the student achieves at least a 50% total score. In addition, the student has to achieve at least 40% in each of the sections. The first section is deemed more important, therefore it gets a weight of 40%; the other two sections each contribute only 30% towards the total score. Each section consists of multiple questions that have different weights (in percent) for the total score of the section. The sum of the weights has to be 100%, otherwise the author of the section gets a warning. Some of the questions are multiple choice questions that award different percentages for each answer. As the computer science department wants to discourage students from giving wrong answers, some wrong answers have a negative percentage (thereby reducing the total score in the section).
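The scoring rules above can be worked through in a short sketch. The thresholds (50% total, 40% per section) and the 40/30/30 weights come from the use case; the function names and data layout are illustrative:

```python
# Worked sketch of the weighted scoring rules from this use case.

def section_score(answers):
    """answers: (question_weight_pct, achieved_pct) pairs. The question
    weights must sum to 100%, otherwise the author gets a warning."""
    if sum(w for w, _ in answers) != 100:
        raise ValueError("question weights must sum to 100%")
    return sum(w * pct / 100.0 for w, pct in answers)

def exam_passed(sections):
    """sections: (section_weight_pct, section_score_pct) pairs.
    Pass = weighted total >= 50% and every section >= 40%."""
    total = sum(w * s / 100.0 for w, s in sections)
    return total >= 50 and all(s >= 40 for _, s in sections)

# A wrong answer with a negative percentage reduces the section score:
section_score([(50, 100), (30, -20), (20, 50)])   # → 54.0
# Section weights 40/30/30 as in the use case:
exam_passed([(40, 60), (30, 50), (30, 45)])       # → True  (total 52.5)
exam_passed([(40, 80), (30, 35), (30, 80)])       # → False (a section is below 40%)
```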
+
+

Reuse in other packages

+The information gathering capabilities of the assessment system should be reusable by other packages.

User profiling

+In order to join a class at the university the student has to answer some questions. The answers can be viewed by the administrator, but also by other students (depending on the choice of the user). This latter functionality should not be part of Assessment itself, but of a different module making use of Assessment. The GPI user-register is a good example of this.

Includes

+Using a CMS, the editor wants to include the poll in the top right corner of the first page. The result should be shown on a separate page or be included in the CMS as well.

Information gathering for developers

+A developer needs functionality for gathering dynamic information easily. For this he should be able to include an assessment instead of using ad_form directly in his code. This gives the administrator of the site the option to change the questions at a later stage (take the questions in the user sign-up process as an example).

Database questions

+Some answers to questions should be stored directly in database tables of OpenACS in addition to the assessment system. This is useful, e.g., if your questions ask for first_names and last_name. When answering the question, the user should see the value currently stored in the database as a default.

Action driven questions

+The company conducting the QA wants to attract more participants to its survey through recommendations. For this, each respondent is asked at the end of the survey whether he would recommend this survey to other users (with the option to give the email addresses of these users). The answer is processed and an email is sent out to all given addresses inviting the recipients to take the survey.

User Types

+

There are several types of administrative users and end-users for the Assessment package, and these drive the functional requirements. Here is a brief synopsis of their responsibilities in this package.

+

+

+

Package-level Administrator

+Assigns permissions to other users for administrative roles. +

+

Editor

+

Has permissions to create, edit, delete and organize Assessments, Sections and Items in repositories. This includes defining Item formats, configuring data validation and data integrity checks, configuring scoring mechanisms, defining sequencing/navigation parameters, etc.

+

Editors could thus be teachers in schools, principal +investigators or biostatisticians in clinical trials, creative +designers in advertising firms -- or OpenACS developers incorporating a +bit of data collection machinery into another package. +

+

+

Scheduler

+

Has permissions to assign, schedule or otherwise map a given Assessment or set of Assessments to a specific set of subjects, students or other data entry personnel. These actions potentially will involve interfacing with other Workflow management tools (e.g. an "Enrollment" package that would handle creation of new Parties (aka clinical trial subjects) in the database).

+

Schedulers could also be teachers, curriculum designers, site +coordinators in clinical trials, etc. +

+

+

Analyst

+

+Has permissions to search, sort, review and download data collected via +Assessments. +

+

Analysts could be teachers, principals, principal investigators, +biostatisticians, auditors, etc. +

+

+

Subject

+

Has permissions to complete an Assessment providing her own +responses or information. This would be a Student, for instance, +completing a test in an educational setting, or a Patient completing a +health-related quality-of-life instrument to track her health status. +Subjects need appropriate UIs depending on Item formats and +technological prowess of the Subject -- kiosk "one-question-at-a-time" +formats, for example. May or may not get immediate feedback about data +submitted. +

+

Subjects could be students, consumers, or patients. +

+

+

Data Entry Staff

+

Has permissions to create, edit and delete data for or about the "real" Subject. Needs UIs to speed the actions of this trained individual and support "save and resume" operations. Data entry procedures used by Staff must capture the identity of both the "real" subject and the Staff person entering the data -- for audit trails and other data security and authentication functions. Data entry staff need robust data validation and integrity checks with optional, immediate data verification steps and electronic signatures at final submission. (Many of the tight-sphinctered requirements for FDA submissions center around mechanisms encountered here: to prove exactly who created any datum, when, whether it is a correct value, and whether anyone has looked at it or edited it and when, etc.)

+

Staff could be site coordinators in clinical trials, insurance +adjustors, accountants, tax preparation staff, etc. +

+

System / Application Overview
+

+

Editing of Assessments

+

+

+ +

+

+

Scheduling of Assessments

+

+

+ +

+

+

Analysis of Assessments

+

+

+ +

+

+

Performance of Assessments

+

+

+ +

+ + Index: openacs-4/packages/assessment/www/doc/sequencing.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/sequencing.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/sequencing.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,275 @@ + + + + + Assessment Item Checks + + +

Sequencing

+

+Along with Data Validation and Versioning, probably the most vexing +problem confronting the Assessment package is how to handle conditional +navigation through an Assessment guided by user input. Simple branching +has already been accomplished in the "complex survey" package via hinge +points defined by responses to single items. But what if +branching/skipping needs to depend on combinations of user responses to +multiple items? And how does this relate to management of data +validation steps? If branching/skipping depends not merely on what +combination of "correct" or "in range" data the user submits, but also +on combinations of "incorrect" or "out of range" data, how the heck do +we do this? +

+

One basic conceptual question is whether Data Validation is a distinct process from Navigation Control or not. Initially we thought it was, and that there should be a datamodel and set of procedures for checking user input, the output of which would pipe to a separate navigation datamodel and set of procedures for determining the user's next action. This separation is made (along with quite a few other distinctions/complexities) in the IMS "simple sequencing" model diagrammed below. But to jump the gun a bit, we think that it actually makes sense to combine these two processes into a common "post-submission user input processing" step we'll refer to here as Sequencing. (Note: we reviewed several alternatives in the archived prior discussions here.)
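Branching on combinations of responses to multiple items (rather than on a single hinge item, as in the "complex survey" package) could be sketched as an ordered rule list, where each rule pairs a predicate over all submitted responses with a target section. This is an illustrative sketch only; the rule representation, item ids, and section names are hypothetical, not the package's datamodel:

```python
def next_section(responses: dict, rules: list, default: str = "end") -> str:
    """responses maps item ids to submitted values; rules is a list of
    (predicate, target_section) pairs evaluated in order -- the first
    matching rule determines where the user goes next."""
    for predicate, target in rules:
        if predicate(responses):
            return target
    return default

# Example rules: the first branches on a COMBINATION of two items,
# which single-item hinge points cannot express.
rules = [
    (lambda r: r.get("smoker") == "yes" and r.get("age", 0) >= 40,
     "cardio_risk_section"),
    (lambda r: r.get("smoker") == "yes", "smoking_section"),
]
```

The same mechanism covers validation-driven branching: a predicate can just as well test whether a response is "out of range" and route the user to a correction page.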

+

So here's the current approach. First, we think that the QTI +components +nicely capture the essential pieces needed for both Data Validation and +Navigation Control (the combination of which we're referring to as +Sequencing). But though not explicitly part of the QTI schema, +implicitly there is (or should be) another component: +

+

+ +

+Next we note that there are two scopes over which Sequencing needs to +be handled:

+

+ +

+So how might we implement this in our datamodel? Consider the +"sequencing" subsystem of the Assessment package:
+

+

+
+

Data Model Graphic

+
+Here is how this might work: +

+ +

+

+

Specific Entities

+ + + Index: openacs-4/packages/assessment/www/doc/versioning.html =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/versioning.html,v diff -u --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ openacs-4/packages/assessment/www/doc/versioning.html 13 Jun 2004 23:20:44 -0000 1.1 @@ -0,0 +1,223 @@ + + + + + Versioning + + +

Overview

+

+This topic requires special mention because it is centrally important +to Assessment and one of the most radical departures from the current +packages (in which "surveys" or "questionnaires" are all one-shot +affairs that at best can be cloned but not readily modified in a +controlled fashion). +

+

During its lifetime, an Assessment may undergo revisions in the midst of data collection. These revisions may be minor (change of a label on an Item or addition of a new Choice to an Item) or major (addition or deletion of an entire Section). Obviously in most applications such changes are undesirable and people want to avoid them. But the reality is that such changes are inevitable, and so the Assessment package must accommodate them. Clinical trial protocols change; teachers alter their exams from term to term. And still, there is a crucial need to be able to assemble and interpret data collected across all these changes.

+

Another type of "revision" occurs when a component (an Item +Choice, Item, Section, or the entire Assessment) needs to be translated +into another language. Even if the semantics of the component are +identical (and they should be or you need a better translator ;-), the +Assessment package needs to handle this situation correctly: an admin +user needs to be able to "assign" the right language version to a set +of subjects, and the returned user data need to be assembled into +trans-language data sets. +

+

Note that two orthogonal constructs are in play here: +

+

+ +

Approach

+

The Content Repository (CR) in OpenACS is designed to handle these +complex design issues, though it is still undergoing refinements and +how best to use it is also still being discovered. So the ideas here +are still somewhat exploratory. +

+

For each of the package components that need to be versioned +(certainly the core components as_assessments, as_sections, as_items, +and as_item_choices; but also other components like as_policies), we +extend the basic CR entities cr_items and cr_revisions. Thus we +actually have, for instance, two tables for Items: +

+

+ +

+This pattern of dual tables is used for all components that need to behave this way. When an admin user creates a new Item, a row is inserted into both the as_items and the as_items_revs tables. When the same admin user (or another admin user) later changes something about the Item, a new as_items_revs row is inserted.
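The dual-table pattern can be illustrated with an in-memory sketch, with dicts standing in for the as_items and as_items_revs tables. The column names (live_revision, title) are simplified assumptions, not the actual schema:

```python
import itertools

_ids = itertools.count(1)
as_items = {}       # item_id -> {"live_revision": rev_id}
as_items_revs = {}  # rev_id  -> {"item_id": ..., "title": ...}

def create_item(title: str) -> int:
    """Creating an Item inserts a row into BOTH tables."""
    item_id, rev_id = next(_ids), next(_ids)
    as_items_revs[rev_id] = {"item_id": item_id, "title": title}
    as_items[item_id] = {"live_revision": rev_id}
    return item_id

def edit_item(item_id: int, title: str) -> int:
    """Editing inserts only a new revision row; the stable item row
    is merely repointed at the new live revision."""
    rev_id = next(_ids)
    as_items_revs[rev_id] = {"item_id": item_id, "title": title}
    as_items[item_id]["live_revision"] = rev_id
    return rev_id
```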

+

Now here is where things become tricky, though. Any time a component is changed, there is a simultaneous implicit change to the entire hierarchy. Data collected after this change will be collected with a semantically different instrument. Whether the difference is large or small is immaterial; the instrument is different, and Assessment must handle this. And the CR doesn't do this for us automagically.

+

So what the package must do is version both the individual +entities and also all the relationships over which we join when we're +assembling the entire Assessment (whether to send out to a requesting +user, to stuff the database when the form comes back, or to pull +collected data into a report).

+

This involves more than merely creating triggers to insert new mapping table rows that point to the new components. We also need to insert new revisions for all components higher up the hierarchy than the component we've just revised. Thus:

+

+ +
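The upward-cascading revision just described can be sketched as a walk up the containment hierarchy. The parent map and ids below are purely illustrative, not the package's tables:

```python
# Maps each component to its container in the Assessment hierarchy
# (choice -> item -> section -> assessment); ids are hypothetical.
parent_of = {"choice_5": "item_2",
             "item_2": "section_1",
             "section_1": "assessment_1"}

def revise(component: str, revisions: dict) -> dict:
    """Revising a component implicitly revises every ancestor, since
    the assembled Assessment is now semantically different; bump a
    revision counter for the component and all its containers."""
    node = component
    while node is not None:
        revisions[node] = revisions.get(node, 0) + 1
        node = parent_of.get(node)
    return revisions
```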

+Another key issue, discussed in this thread, involves the semantics of versioning. How big a modification to an Assessment package entity must happen before that entity is a "new item" rather than a "new version of an existing item"? If a typo in a single Item Choice is corrected, one can reasonably assume that is merely a new version. But if an Item with multiple choice options is given a new choice, is this Item now a new one?

+

One possible way this could be defined would derive from the +hierarchy model in the CR: cr_items -- but not cr_revisions -- can +contain other entities; the parent_id column is only in cr_items. Thus +if we want to add a fifth as_item_choice to an as_item (while +preserving the state of the as_item that only had four +as_item_choices), we need to insert a new as_item and not merely a new +as_item_rev for the existing as_item. +

+

On the other hand, if we manage the many-many hierarchies of +Assessment package entities in our own mapping tables outside of the CR +mechanism, then we can handle this differently. At this point, we're +not sure what is the best approach. Please post comments! +

+

A final point concerns the mapping tables. The OpenACS framework provides a variety of special-purpose mapping tables that are all proper acs_objects (member_rels, composition_rels, acs_rels, and the CR's own cr_rels). These provide additional control over permissioning but fundamentally are mapping tables. Whether to use them or just simple two-key tables will depend on the need for permission management in the given relationship. Presumably, for most of the relations over which joins occur (i.e. that aren't exposed to outside procs etc.), the simple kind will be superior since they are far lighter-weight constructs.

+

+

Specific Versionable Entities

+

Within each subsystem of the Assessment package, the following +entities will inherit from the CR. We list them here now, and once +we've confirmed this selection, we'll move the information out to each +of the subsystems' pages. +

+

+ + + Index: openacs-4/packages/assessment/www/doc/images/assessment-datafocus.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-datafocus.jpg,v diff -u Binary files differ Index: openacs-4/packages/assessment/www/doc/images/assessment-groupingfocus.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-groupingfocus.jpg,v diff -u Binary files differ Index: openacs-4/packages/assessment/www/doc/images/assessment-itemfocus.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-itemfocus.jpg,v diff -u Binary files differ Index: openacs-4/packages/assessment/www/doc/images/assessment-page-flow.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-page-flow.jpg,v diff -u Binary files differ Index: openacs-4/packages/assessment/www/doc/images/assessment-schedfocus.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-schedfocus.jpg,v diff -u Binary files differ Index: openacs-4/packages/assessment/www/doc/images/assessment-sequencefocus.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment-sequencefocus.jpg,v diff -u Binary files differ Index: openacs-4/packages/assessment/www/doc/images/assessment.jpg =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/images/assessment.jpg,v diff -u Binary files differ