Index: openacs-4/packages/assessment/www/doc/requirements.adp
===================================================================
RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/requirements.adp,v
diff -u -r1.1.2.2 -r1.1.2.3
--- openacs-4/packages/assessment/www/doc/requirements.adp 25 Aug 2015 18:02:19 -0000 1.1.2.2
+++ openacs-4/packages/assessment/www/doc/requirements.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3
@@ -40,14 +40,14 @@
system under development.
Use Cases
-The assessment module in it's simplest form is a dynamic
+The assessment module in its simplest form is a dynamic
information gathering tool. This can be clearly seen in the first
group of use cases, which deal with surveys (one form of
assessment, e.g. for quality assurance or clinical trials). An
extension of this information gathering is the possibility to conduct
an evaluation of the information given, as we show in the second
group of use cases (testing scenarios). Last but not least, the
-assessment tool should be able to provide it's information
+assessment tool should be able to provide its information
gathering features to other packages within the OpenACS framework
as well.
It is very important to note that not all parameters and
@@ -74,12 +74,13 @@
with all the questions the author added to the survey.
Quality Assurance
-A company wants to get feedback from users about it's product. It
-creates a survey which offers branching (to prevent users from
+A company wants to get feedback from users about its product.
+It creates a survey which offers branching (to prevent users from
filling out unnecessary data, e.g. if you answered you have never
-been to Europe the question "Have you seen Rome" should not show
-up) and multi-dimensional likert scales (To ask for the quality and
-importance of a part of the product in conjunction).
+been to Europe, the question "Have you seen Rome?" should
+not show up) and multi-dimensional Likert scales (to ask for the
+quality and importance of a part of the product in
+conjunction).
Professional data entry
A clinic wants to conduct a trial. For this research assistants are
@@ -126,9 +127,9 @@
Multiple languages
The quality assurance team of the company mentioned above realizes
-that the majority of it's user base is not native English speakers.
-This is why they want to add additional translations to the
-questions to broaden the response base. For consistency, the
+that the majority of its user base are not native English
+speakers. This is why they want to add translations to
+the questions to broaden the response base. For consistency, the
assessment may only be shown to the subject if all questions used
have been translated. Furthermore, it is necessary to store the
language used along with the response (as a translation might not
@@ -237,8 +238,8 @@
immediately as a percentage score in a table comparing that score
to other users. Users should be able to answer only a part of the
possible questions each time. If the user is in the top 2%, offer
-him the contact address of "Mensa", other percentages should give
-encouraging text.
+him the contact address of "Mensa"; other percentages
+should give encouraging text.
Scoring
The computer science department has a final exam for the students.
@@ -249,7 +250,7 @@
two sections only 30% towards the total score. Each section
consists of multiple questions that have a different weight (in
percent) for the total score of the section. The sum of the weights
-has to be 100%, otherwise the author of the section get's a
+has to be 100%; otherwise the author of the section gets a
warning. Some of the questions are multiple choice questions, that
get different percentages for each answer. As the computer science
department wants to discourage students from giving wrong answers,
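
The weighted scheme this use case describes (section weights toward the total, per-question weights within a section that must sum to 100%) reduces to two small functions. A minimal sketch with invented names; the package's actual scoring code is not shown in this document:

```python
# Hypothetical sketch of the weighted exam scoring described above.
# Per-section question weights (in percent) must sum to 100%, otherwise
# the section author gets a warning -- modeled here as a ValueError.

def section_score(question_weights, question_scores):
    """Weighted score of one section; scores are fractions in [0, 1]."""
    if sum(question_weights) != 100:
        raise ValueError("question weights must sum to 100% -- warn the author")
    return sum(w / 100 * s for w, s in zip(question_weights, question_scores))

def total_score(section_weights, section_scores):
    """Combine section scores using their weights toward the total."""
    return sum(w / 100 * s for w, s in zip(section_weights, section_scores))

# Example: the hardest section counts 40%, the other two 30% each.
exam = total_score((40, 30, 30), (0.5, 1.0, 0.8))  # -> 0.74
```

Negative per-answer percentages for wrong multiple-choice answers would plug into the same structure as question scores below zero.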
@@ -291,8 +292,8 @@
Action driven questions
The company conducting the QA wants to get more participants to
-it's survey by recommendation. For this each respondee is asked at
-the end of the survey if he would recommend this survey to other
+its survey by recommendation. For this, each respondent is asked
+at the end of the survey if he would recommend this survey to other
users (with the option to give the email addresses of these users).
The answer will be processed and an email sent out to all given
emails inviting them to take the survey.
@@ -320,8 +321,8 @@
Assessment or set of Assessments to a specific set of subjects,
students or other data entry personnel. These actions potentially
will involve interfacing with other Workflow management tools (e.g.
-an "Enrollment" package that would handle creation of new Parties
-(aka clinical trial subjects) in the database.
+an "Enrollment" package that would handle creation of new
+Parties (aka clinical trial subjects) in the database).
Schedulers could also be teachers, curriculum designers, site
coordinators in clinical trials, etc.
Analyst
@@ -336,35 +337,35 @@
completing a health-related quality-of-life instrument to track her
health status. Subjects need appropriate UIs depending on Item
formats and technological prowess of the Subject -- kiosk
-"one-question-at-a-time" formats, for example. May or may not get
-immediate feedback about data submitted.
+"one-question-at-a-time" formats, for example. May or may
+not get immediate feedback about data submitted.
Subjects could be students, consumers, or patients.
Data Entry Staff
Has permissions to create, edit and delete data for or about the
-"real" Subject. Needs UIs to speed the actions of this trained
-individual and support "save and resume" operations. Data entry
-procedures used by Staff must capture the identity if both the
-"real" subject and the Staff person entering the data -- for audit
-trails and other data security and authentication functions. Data
-entry staff need robust data validation and integrity checks with
-optional, immediate data verification steps and electronic
-signatures at final submission. (Many of the tight-sphinctered
-requirements for FDA submissions center around mechanisms
-encountered here: to prove exactly who created any datum, when,
-whether it is a correct value, whether anyone has looked at it or
-edited it and when, etc etc...)
+"real" Subject. Needs UIs to speed the actions of this
+trained individual and support "save and resume"
+operations. Data entry procedures used by Staff must capture the
+identity of both the "real" subject and the Staff person
+entering the data -- for audit trails and other data security and
+authentication functions. Data entry staff need robust data
+validation and integrity checks with optional, immediate data
+verification steps and electronic signatures at final submission.
+(Many of the stringent requirements for FDA submissions
+center around mechanisms encountered here: to prove exactly who
+created any datum, when, whether it is a correct value, whether
+anyone has looked at it or edited it and when, etc etc...)
Staff could be site coordinators in clinical trials, insurance
adjustors, accountants, tax preparation staff, etc.
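
The audit-trail requirement above (capture both the "real" subject and the Staff person entering each datum, with timestamps, so it can be proven who created any value and when) can be sketched as an append-only log. Field names here are invented for illustration, not the package's data model:

```python
# Hypothetical sketch of an append-only audit trail for data entry.
# Corrections append a new record for the same Item; nothing is
# overwritten, so the full edit history survives for audit.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    item_id: int
    value: str
    subject_id: int     # the "real" subject the datum is about
    entered_by_id: int  # the Staff person who typed it in
    entered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[AuditRecord] = []  # append-only

audit_log.append(AuditRecord(item_id=7, value="120/80",
                             subject_id=42, entered_by_id=9))
# A later correction appends a second record for the same Item:
audit_log.append(AuditRecord(item_id=7, value="125/80",
                             subject_id=42, entered_by_id=9))
```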
System / Application Overview
Editing of Assessments
- Manage the structure of Assessments -- the organization of
-series of questions (called "Items") into Sections (defined
-logically in terms of branch points and literally in terms of
-"Items presented together on a page"), along with all other
-parameters that define the nature and fuction of all Assessment
-components.
- Create, edit and delete Assessments, the highest level in the
+series of questions (called "Items") into Sections
+(defined logically in terms of branch points and literally in terms
+of "Items presented together on a page"), along with all
+other parameters that define the nature and function of all
+Assessment components.
- Create, edit and delete Assessments, the highest level in the
structure hierarchy. Configure Assessment attributes:
- Assessment name, description, version notes, instructions,
@@ -395,10 +396,11 @@
Assessment, including editing of the Assessment itself, access to
collected Assessment data, and control of scheduling
procedures.
- A "clear" button to wipe all user input from an
-Assessment.
- A "printer-friendly" version of the Assessment so that it can
-be printed out for contexts in which users need to complete it on
-paper and then staff people transcribe the answers into the web
-system (yes, this actually is an important feature).
+Assessment.
- A "printer-friendly" version of the Assessment so
+that it can be printed out for contexts in which users need to
+complete it on paper and then staff people transcribe the answers
+into the web system (yes, this actually is an important
+feature).
Create, edit, clone and delete Sections -- the atomic grouping
unit for Items. Configure Section attributes:
@@ -410,46 +412,50 @@
Items.
Item data integrity checks: rules for checking for expected
relationships among data submitted from two or more Items. These
define what are consistent and acceptable responses (ie if Item A
-is "zero" then Item B must be "zero" as well for example).Navigation criteria among Items within a Section -- including
+is "zero" then Item B must be "zero" as well
+for example).
+Navigation criteria among Items within a Section -- including
default paths, randomized paths, rule-based branching paths
responding to user-submitted data, including possibly looping
paths.
Any time-based attributes (max time allowed for Section,
-minimum time allowed)A "clear" button to clear all user values in a Section.
+minimum time allowed).
+A "clear" button to clear all user values in a
+Section.
Create, edit, clone and delete Items -- the individual
"questions" themselves. Configure Item attributes:
Support of combo-box "other" choice in
+multiple-choice Items (ie, if the user selects a radiobutton or
+checkbox option of "other" then the textbox for typed
+entry gets read; if the user doesn't select that choice, then the
+textbox is ignored).
+A "clear Item" button for each Item type that
+can't be directly edited by the user.
-Create, edit, clone and delete Item Choices -- the "multiple
-choices" for radiobutton and checkbox type Items:
+Create, edit, clone and delete Item Choices -- the
+"multiple choices" for radiobutton and checkbox type
+Items:
- Choice data types: integer, numeric, text, boolean
- Choice formats: horizontal, vertical, grid
- Choice values: labels, instructions, numeric/text encoded
values
- Choice-specific feedback: configurable text/sound/image that
@@ -462,54 +468,57 @@
- Scoring Algorithms: names and arithmetic calculation formulae
to operate on submitted data when the form returns to the server.
-These include standard "percent correct -> letter grade" grading
-schemes as well as formal algorithms like Likert scoring
-(conversion of ordinal responses to 0-100 scale scores).
- Names and descriptions of Scales -- the output of Algorithm
+These include standard "percent correct -> letter
+grade" grading schemes as well as formal algorithms like
+Likert scoring (conversion of ordinal responses to 0-100 scale
+scores).
- Names and descriptions of Scales -- the output of Algorithm
calculations.
- Mapping of Items (and/or other Scales) to calculate a given
Scale Scores.
- Define data retrieval and display alternatives: tabular display
in web page tables; tab-delimited (or CSV etc) formats; graphical
-displays (when appropriate).
- Note: manual "grading by the teacher" is a special case of
-post-submission Assessment Processing in that no automated
+displays (when appropriate).
- Note: manual "grading by the teacher" is a special
+case of post-submission Assessment Processing in that no automated
processing occurs at all; rather, an admin user (the teacher)
-retrieves the subject's responses and interacts with the subject's
-data by in effect annotating it ("This answer is wrong" "You are
-half right here" etc). Such annotations could be via free text or
-via choices configured during editing of Items and Choices (as
-described above).
+retrieves the subject's responses and interacts with the
+subject's data by in effect annotating it ("This answer is
+wrong" "You are half right here" etc). Such
+annotations could be via free text or via choices configured during
+editing of Items and Choices (as described above).
Note that there are at least three semantically distinct
concepts of scoring, each of which the Assessment package should
support and which have varying levels of importance in different
contexts. Consider:
-- Questions may have a "correct" answer against which a subject's
-reponse should be compared, yielding some measure of a "score" for
-that question varying from completely "wrong" to completely
-"correct". The package should allow Editors to specify the nature
-of the scoring continuum for the question, whether it's a
-percentage scale ("Your response is 62% correct") or a nominal
-scale ("Your response is Spot-on" "Close but No Cigar" "How did you
-get into this class??")
- Raw responses to questions may be arithmetically compiled into
+
- Questions may have a "correct" answer against which a
+subject's response should be compared, yielding some measure of
+a "score" for that question varying from completely
+"wrong" to completely "correct". The package
+should allow Editors to specify the nature of the scoring continuum
+for the question, whether it's a percentage scale ("Your
+response is 62% correct") or a nominal scale ("Your
+response is Spot-on" "Close but No Cigar" "How
+did you get into this class??")
- Raw responses to questions may be arithmetically compiled into
some form of Scale, which is the real output of the Assessment.
This is the case in the health-related quality-of-life measures
-demo'd here. There is
-no "correct" answer as such for any subject's responses, but all
-responses are combined and normalized into a 0-100 scale.
- Scoring may involve summary statistics over multiple responses
-(one subjects' over time; many subjects' at a single time; etc).
-Such "scoring" output from the Assessment package pertains to
-either of the two above notions. This is particularly important in
-educational settings.
+demo'd here.
+There is no "correct" answer as such for any
+subject's responses, but all responses are combined and
+normalized into a 0-100 scale.
Scoring may involve summary statistics over multiple responses
+(one subject's over time; many subjects' at a single time;
+etc). Such "scoring" output from the Assessment package
+pertains to either of the two above notions. This is particularly
+important in educational settings.
Create, edit, clone and delete Repositories of Assessments,
Sections and Items. Configure:
-- Whether a Repository is shareable, and how/with whom.
- Whether a Repository is cloneable, and how/with whom.
- Note: this is the concept of a "Question Catalog" taken to its
-logical end -- catalogs of all the organizational components in an
-Assessment. In essence, the Assessment package is an Assessment
-Catalog. (The CR is our friend here ;-)
- Versioning is a central feature of this repository; multiple
-"live" versions of any entity should be supported, with attributes
-(name, version notes, version creation dates, version author, scope
--- eg subsite/group/etc) to make it possible to identify, track and
-select which version of any entity an Assessment editor wants to
-use.
+- Whether a Repository is shareable, and how/with whom.
- Whether a Repository is cloneable, and how/with whom.
- Note: this is the concept of a "Question Catalog"
+taken to its logical end -- catalogs of all the organizational
+components in an Assessment. In essence, the Assessment package is
+an Assessment Catalog. (The CR is our friend here ;-)
- Versioning is a central feature of this repository; multiple
+"live" versions of any entity should be supported, with
+attributes (name, version notes, version creation dates, version
+author, scope -- eg subsite/group/etc) to make it possible to
+identify, track and select which version of any entity an
+Assessment editor wants to use.
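
The two automated schemes named in this hunk, "percent correct -> letter grade" grading and Likert scoring (rescaling ordinal responses onto a 0-100 scale), reduce to small formulas. A minimal sketch with invented names and example grade cutoffs; the package's actual algorithms are configurable and not fixed here:

```python
# Hypothetical sketches of the two scoring algorithms mentioned above.

def likert_scale_score(responses, lo=1, hi=5):
    """Likert scoring: mean of ordinal responses, rescaled to 0-100."""
    mean = sum(responses) / len(responses)
    return 100 * (mean - lo) / (hi - lo)

def letter_grade(percent_correct):
    """Percent-correct -> letter grade, with example cutoffs only."""
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if percent_correct >= cutoff:
            return grade
    return "F"

likert_scale_score([1, 3, 5])  # -> 50.0 (midpoint of the 1..5 range)
letter_grade(85)               # -> "B"
```

Manual "grading by the teacher" has no algorithmic step at all, which is why the text treats it as the degenerate case of this machinery.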
@@ -525,17 +534,17 @@
Provide these additional functions:
-- Support optional "electronic signatures" consisting simply of
-an additional password field on the form along with an "I attest
-this is my response" checkbox that the user completes on submission
-(rejected without the correct password) -- ie authentication
-only.
- Support optional "digital signatures" consisting of a hash of
-the user's submitted data, encrypted along with the user's password
--- ie authentication + nonrepudiation.
- Perform daily scheduled procedures to look for Subjects and
+
- Support optional "electronic signatures" consisting
+simply of an additional password field on the form along with an
+"I attest this is my response" checkbox that the user
+completes on submission (rejected without the correct password) --
+ie authentication only.
- Support optional "digital signatures" consisting of a
+hash of the user's submitted data, encrypted along with the
+user's password -- ie authentication + nonrepudiation.
- Perform daily scheduled procedures to look for Subjects and
Staff who need to be Invited/Instructed or Reminded to
participate.
- Incorporate procedures to send Thanks notifications upon
completion of Assessment
- Provide UIs for Subjects and for Staff to show the status of
-the Assessments they're scheduled to perform -- eg a table that
+the Assessments they're scheduled to perform -- eg a table that
shows expected dates, actual completion dates, etc.
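
The "digital signature" described earlier in this hunk (a hash of the user's submitted data, keyed by something only the user knows) can be sketched with an HMAC. This is a stand-in for the hash-plus-encryption scheme the text describes, not the package's actual mechanism, and a real deployment would derive a key properly rather than use the raw password:

```python
# Illustrative sketch only: HMAC over canonicalized form data gives
# authentication plus tamper-evidence for a submission.
import hashlib
import hmac

def sign_submission(form_data: dict, password: str) -> str:
    # Canonicalize so the same answers always hash identically.
    canonical = "&".join(f"{k}={form_data[k]}" for k in sorted(form_data))
    return hmac.new(password.encode(), canonical.encode(),
                    hashlib.sha256).hexdigest()

def verify_submission(form_data: dict, password: str, signature: str) -> bool:
    return hmac.compare_digest(sign_submission(form_data, password), signature)

sig = sign_submission({"q1": "yes", "q2": "42"}, "s3cret")
verify_submission({"q1": "yes", "q2": "42"}, "s3cret", sig)  # True
verify_submission({"q1": "no", "q2": "42"}, "s3cret", sig)   # False: data changed
```

The simpler "electronic signature" in the text is just password re-entry plus an attest checkbox, so it needs no hashing at all.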
@@ -557,16 +566,17 @@
Handle user Login (for non-anonymous studies)
Determine and display correct UI for type of user (eg kiosk
format for patients; keyboard-centric UI for data entry Staff)
Deliver Section forms to user
Perform data validation and data integrity checks on form
submission, and return any errors flagged within form
Display confirmation page showing submitted data (if
-appropriate) along with "Edit this again" or "Yes, Save Data"
-buttonsDisplay additional "electronic signature" field for password
-and "I certify these data" checkbox if indicated for
-AssessmentProcess sequence navigation rules based on submitted data and
+appropriate) along with "Edit this again" or "Yes,
+Save Data" buttons
+Display additional "electronic signature" field for
+password and "I certify these data" checkbox if indicated
+for Assessment
+Process sequence navigation rules based on submitted data and
deliver next Section or terminate event as indicated
Track elapsed time user spends on Assessment tasks -- answering
a given question, a section of questions, or the entire Assessment
--- and do something with this (we're not entirely sure yet what
+-- and do something with this (we're not entirely sure yet what
this should be -- merely record the elapsed time for subsequent
analysis, reject over-time submissions, or even forcibly refresh a
-laggard user's page to "grab the Assessment back")Insert appropriate audit records for each data submission, if
+laggard user's page to "grab the Assessment
+back")
+Insert appropriate audit records for each data submission, if
indicated for Assessment
Handle indicated email notifications at end of Assessment (to
Subject, Staff, Scheduler, or Editor)