Debugging and Automated Testing

Important: This section is a work in progress.

by Joel Aufrecht

OpenACS docs are written by the named authors, and may be edited by OpenACS documentation staff.

Debugging

Developer Support. The Developer Support package adds several goodies: debug information for every page, the ability to log comments to the page instead of the error log, and fast user switching so that you can test pages as an anonymous visitor or as dummy users without logging in and out.
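For example, to send a debugging comment to the page rather than the error log, you can call the package's ds_comment proc from your page or proc (a minimal sketch; the variable is made up):

    # show a debugging comment in the Developer Support output for this request
    ds_comment "note_id is $note_id at this point"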

PostgreSQL. You can work directly with the database for debugging steps like looking at tables and testing stored procedures. Start Emacs and type M-x sql-postgres. Press Enter for the server name and use openacs-dev for the database name. You can use C-(up arrow) and C-(down arrow) for command history.
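Once connected you can inspect tables and call stored procedures directly. A short sketch of such a session (the notes table and the object id 42 are assumptions; acs_object__name is a standard OpenACS PL/pgSQL function):

    -- list tables matching a pattern
    \dt notes*
    -- look directly at a table
    select * from notes limit 5;
    -- test a stored procedure
    select acs_object__name(42);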

Hint: "Parse error near *" usually means that an .xql file wasn't recognized, because the Tcl file is choking on the *SQL* placeholder that it falls back on when the query file isn't found.
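In other words, PostgreSQL is being handed the literal placeholder instead of a real query. A hypothetical call of this kind, where the actual query text lives in a .xql file that failed to load, looks like:

    # the real query lives in the package's .xql file; this placeholder
    # is what gets sent to the database when that file isn't loaded
    set count [db_string notes_count "*SQL*"]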

Watching the server log. From Tcl you can write your own entries directly to the error log with ns_log, as shown in the sketch below.
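A minimal sketch (ns_log is the standard AOLserver logging call; the severity can be Notice, Warning, Error, and so on; the messages here are made up):

    # write your own entries to the AOLserver error log
    ns_log Notice "notes-edit: about to validate the form"
    ns_log Error "notes-edit: could not find note $note_id"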

To set up real-time monitoring of the AOLserver error log, type

less /usr/local/aolserver/log/openacs-dev-error.log

F    show new log entries in real time (like tail -f)
C-c  stop following; press F to start again
G    go to the end of the file
?    search backward
/    search forward

Manual testing

Make a list of basic tests to make sure the package works:

Test 001
  Action: Browse to the index page while not logged in and while one or more notes exist.
  Expected Result: No edit, delete, or add links should appear.

Test 002
  Action: Browse to the index page while logged in. An Edit link should appear. Click on it. Fill out the form and click Submit.
  Expected Result: The text added in the form should be visible on the index page.

Other things to test: try to delete someone else's note. Try to delete your own note. Edit your own note. Search for a note.

Write automated tests

by Simon Carstensen


It seems to me that a lot of people have been asking for guidelines on how to write automated tests. I've written several tests by now and have found the process to be extremely easy and useful. It's a joy to work with automated testing once you get the hang of it.

I just wrote a test script for the acs-service-contract package and thought I might as well post a step-by-step run-through, since some people have been asking for this. Here goes.

  1. Create the directory that will contain the test script(s):

    $ cd /web/simon/packages/acs-service-contract/tcl
    $ mkdir test
    
  2. Create the .tcl library that holds the test procs:

    $ cd test
    $ emacs acs-service-contract-procs.tcl
    
  3. Write the tests. This is obviously the big step :)

    The script should first call ad_library like any normal -procs.tcl file:

    ad_library {
        ...
    }
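
    The body of ad_library is conventionally just a doc string describing the file. A typical one might look like this (author and date are placeholders):

    ad_library {
        Automated tests for the notes package.

        @author Your Name (you@example.com)
        @creation-date 2003-01-01
    }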
    

    To create a test case you call aa_register_case test_case_name. Once you've created the test case you start writing the needed logic. Let's say you just wrote an API for adding and deleting notes in the notes package and wanted to test it. You'd probably want to write a test that first creates a note, then verifies that it was inserted, then perhaps deletes it again, and finally verifies that it is gone. (A full sketch of such a test appears near the end of this section.)

    Naturally this means you'll be adding a lot of bogus data to the database, which you're not really interested in having there. To avoid this I usually do two things. First, I always put all my test code inside a call to aa_run_with_teardown, which basically means that all the inserts, deletes, and updates will be rolled back once the test has been executed. A very useful feature. Second, instead of inserting static bogus data like set name "Simon", I generate a random string, to avoid inserting a value that's already in the database:

    set name [ad_generate_random_string]
    

    Here's how the test case looks so far:

    aa_register_case acs_sc_impl_new_from_spec {
    
       aa_run_with_teardown \
           -rollback \
           -testcode  {
              ... logic ...
           }
    }
    

    Now let's look at the actual test code, that is, the code that goes inside -testcode {}. In my case I had added a new column, pretty_name, to acs_sc_impls, which meant that I had to change the data model and the Tcl API to support it. To make sure I didn't screw up, I wrote a test that created a new service contract, then a new implementation of that contract, called acs_sc::impl::get to check that the data in the new column had been added correctly, and finally verified that the pretty_name was actually what I had tried to insert. It looked something like this:

    set spec {
         name "foo_contract"
         description "Blah blah"
         ...
    }
    
    # Create service contract
    acs_sc::contract::new_from_spec -spec $spec
    
    set spec {
         name "foo_impl"
         description "Blah blah blah"
         pretty_name "Foo Implementation"
         ...
    }
    
    # Create implementation
    set impl_id [acs_sc::impl::new_from_spec -spec $spec]
    
    # Get the values of the implementation we just created
    acs_sc::impl::get -impl_id $impl_id -array impl
    
    # Verify that the pretty_name column has the correct value
    aa_equals "did column pretty_name get set correctly?" $impl(pretty_name) "Foo Implementation"
    

    Now you might not know how acs-service-contract works, but that doesn't matter. I'm basically inserting data into the database, querying the database to check that it got inserted, and then finally, using aa_equals, comparing the result with what I inserted to verify that everything is correct.

    There are a number of other useful procs for determining whether a test case was successful or not, namely:

    aa_true "is foo nonempty?" [expr {![empty_string_p $foo]}]
    aa_false "is foo empty?" [empty_string_p $foo]
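
    Putting the pieces together, the notes test described earlier might look something like the sketch below. The procs note::add, note::get, and note::delete are hypothetical stand-ins for your package's API:

    aa_register_case notes_add_delete {

        aa_run_with_teardown \
            -rollback \
            -testcode {
                # use a random title so we don't collide with existing data
                set title [ad_generate_random_string]

                # create a note (note::add is a hypothetical API)
                set note_id [note::add -title $title]

                # verify that it was inserted
                note::get -note_id $note_id -array note
                aa_equals "title was stored correctly" $note(title) $title

                # delete it again and verify that it is gone
                note::delete -note_id $note_id
                aa_true "note is gone after deletion" \
                    [catch { note::get -note_id $note_id -array note }]
            }
    }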
    

    There are a number of other useful procs, and I encourage you to look at the few packages for which tests have already been implemented. That is perhaps the best documentation we have so far. See also the section called “Automated Testing”.
