
RHS: Tcltest is an amazing little package for writing simple unit-type tests. However, there are a lot of things that tcltest makes very hard to do, and other things that aren't hard to do but make the tests that do them hard to read. I'd like to put together a list of desired features for a new test package, in hopes of eventually writing such a beast (or of someone else coming along and writing it). The things I'm looking for are:

  • Simple tests should be simple to write, as with tcltest: if I just want to call a proc and check its return value, that should take only a small bit of code.
  • Things that aren't part of the actual testing framework, such as makeFile and makeDirectory, should live in a separate package that ships with the framework.
  • Testing the errorCode should be as easy as testing the return code.
  • Complex test cases should still be easy to read and understand.
  • Assertions should be available, and a test case should be considered failed if any of its assertions failed.
  • It should be easy to say "if this assertion fails, the test is finished; don't bother with the rest", for example:
test myComplexTest-1.1 {
    A sample of a complex test, with comments
} -setup {
    set data {a 1 b 2 c 3}
    set filename [extrapackage::makeFile $data myComplexTestFile-1.1]
    catch {unset myArray}
    catch {unset expectArray} ; array set expectArray $data
} -body {
    set code [readFileToArray $filename myArray]

    assertEquals -nofail 1 $code "The read failed, don't bother with other assertions"
    assertEquals -nofail 1 [info exists myArray] "The array did not get created"
    assertArrayEquals expectArray myArray "The array results were incorrect"
}

  • Tests without a -result flag are assumed to return with a code of 0 or 2 (TCL_OK or TCL_RETURN), and their actual result doesn't matter. The success or failure of such a test case (if it returns with code 0 or 2) depends on the assertions in the test.
  • It should be possible to create "Test Suites" that group a bunch of tests/test files together
  • It should be possible to programmatically run a single test, a whole file of tests, a test suite, or a whole directory of tests (see the sketch after this list)
  • It should be possible to retrieve the results of programmatically run tests
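
As a sketch of the last two points, with purely hypothetical command and key names (testsuite::runFile and the shape of its result dict are assumptions, not an existing API):
# Hypothetical: testsuite::runFile and its return value are assumptions
# about the proposed package, not an existing API.
set results [testsuite::runFile test/module1/myprocs.test]

puts "passed: [dict get $results passed], failed: [dict get $results failed]"
foreach {name info} [dict get $results tests] {
    puts "$name -> [dict get $info status]"
}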

I'm sure I can come up with other requirements. More importantly, though: what would other folks want in a testing package?

RHS 2004-08-20:

It occurs to me that it would be useful to have a -description element also. This would be different from the short description that is the 2nd argument to the test, in that it would be a clear explanation of what function/requirement is being proven by this test. The idea is that one could ask the test suite for a summary of all the tests, and it would print out the test names along with their longer description, which could be used as a way to document what the current requirements for the project are.

On the other hand, perhaps I should just put the provided description argument to better use. I tend to keep that argument as short and direct as possible; perhaps I should be adding more detail there. I do, however, like the idea of being able to do something like:
test myproc-2.1 {
    Throw a typed error if class is out of range
} -description {
    The 'class' parameter can have a value from 0 to 6. If
    the provided value is outside that range, throw a typed
    error.
} -body {
    foreach class {-1 7} {
        set code [catch {myproc $class} result]
        assertEquals -nofail 1 $code "Proc call did not throw an error"
        assertEquals {CALLER {INVALID PARAMETER VALUE}} $::errorCode
        assertEquals \
            "Invalid value '$class' for input class. Must be between 0 and 6, inclusive" \
            $result "Error message was incorrect"
    }
}

And then be able to automatically get a summary like

  • myproc-2.1: The 'class' parameter can have a value from 0 to 6. If the provided value is outside that range, throw a typed error.

When one gets the summary for a project's entire test suite(s), it should amount to a complete summary of all the requirements for the project.
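
A summary command along those lines might simply walk a suite and print each test's name and long description. A minimal sketch, assuming the object-command style discussed further down (the tests, name, and description subcommands are assumptions about the proposed interface, not an existing API):
proc summarize {suite} {
    # each element of [$suite tests] is assumed to be a test object command
    foreach t [$suite tests] {
        puts "  [$t name]: [$t description]"
    }
}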

RHS 2004-08-24:

Having had time to put some thought into the mechanism I'd like to use to define tests and test suites, I've run into some "issues". I'd very much like to be able to define a Test Suite, and specify what tests go into that Suite, much like one does when running tcltest now. In addition, I'd like to be able to build Suites containing other Suites, and so on.

Let's say we have a directory structure of:
 test
 test/module1
 test/module2
 test/module3

I'd like to be able to have the following files:
# File: test/suite.test
set suite [testsuite::suite]
# Add each of the module directories
# This adds the suites defined in those directories, if there is one
$suite add directory ./module1
$suite add directory ./module2
$suite add directory ./module3

# File: test/moduleX/suite.test
# Add all the .test files in the directory
# suite.test is automatically excluded unless otherwise stated
set suite [testsuite::suite]
$suite add files ./*.test

# File: test/module1/myprocs.test
testsuite::test aproc-1.1 {
    A test for aproc
} -body { ... } -result { ... }

Anyway, that's the basic layout I'd like to have.

I'd very much like suites and tests to act like object commands and return a command name that can be used to access the information about it. It would then be possible to load a test suite, get a list of its tests, and then iterate over the tests to get information about them.

My problem is that I'm not sure how to have tests that are defined be automatically added to the test suite that is loading the test file. My thought is to have a test suite register itself as the current suite before it loads any files (or runs any tests, or does anything interesting, etc.). Then, the test proc would add its command name to the currently registered suite when it's defined. However, this seems like a bit of a hack. I was hoping someone else might have run into a similar situation and be able to comment on what they did and whether they think it's better. Any helpful thoughts would be much appreciated.

I'm also considering having a suite load all its tests before it actually runs any of them. However, this means that the commonly used pattern of running general setup code at the beginning of a test file (i.e., setting up data structures that are used in the tests, etc.) no longer works. The solution would be a way for a test file to register setup and cleanup code to be run before and after the tests defined in that file. I'm not settled on this particular aspect yet, though.
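
One possible shape for that registration, purely as a sketch (testsuite::fileSetup and testsuite::fileCleanup are made-up names, not part of any existing package):
# Made-up registration commands: the blocks would be stored by whatever
# suite ends up owning this file's tests and run once before/after those
# tests, rather than executing at load time.
testsuite::fileSetup {
    # build fixtures shared by every test in this file
    set ::sharedData {a 1 b 2 c 3}
}

testsuite::fileCleanup {
    unset -nocomplain ::sharedData
}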

RHS 2004-08-29:

I'm trying to figure out what the options to the suite object should be for loading various files. I'd like to be able to express the following more easily:
# Pseudocode: getDirectoriesMatching, matchesPattern, getFilesMatching and
# loadFile are placeholders, not commands from any existing package.
foreach directory [getDirectoriesMatching $someDirPattern] {
    if { [matchesPattern $directory $someExcludeDirPattern] } {
        continue
    }
    if { [file exists [file join $directory suite.test]] } {
        # a suite.test file takes over loading for this directory
        loadFile [file join $directory suite.test]
    } else {
        foreach file [getFilesMatching $directory $someFilePattern] {
            if { [matchesPattern $file $someExcludeFilePattern] } {
                continue
            }
            loadFile [file join $directory $file]
        }
    }
}

I'd like to be able to express the above as -options to the suite, possibly. And/or I'd like to be able to do it with something like:
testsuite::suite mysuite {
    a description
} -eval {
    loadFiles ... some ... info ... that ... expresses ... what ... is ... above ...
}

I'm just not sure how to do that without making the options (to the suite or to loadFiles) particularly scary. Perhaps:
loadFiles -includeDirs $someDirPattern \
          -excludeDirs $someExcludeDirPattern \
          -includeFiles $someFilePattern \
          -excludeFiles $someExcludeFilePattern

With the same options being allowed to the suite constructor.

Of course, I'd also like to be able to say "do that, recursively", perhaps via a -depth option that says how many directories deep to go, with -1 meaning as deep as possible. My current thought for the default configuration is something like:
loadFiles -includeDirs * \
          -excludeDirs . .. \
          -includeFiles *.test \
          -excludeFiles suite.test \
          -depth -1
loadFiles -includeDirs . \
          -includeFiles *.test \
          -excludeFiles suite.test \
          -depth 0

Only, I'm still not sure how to say "if there's a suite.test file, load that instead of the other files, and don't recurse into that directory structure any deeper." /sigh It's getting ugly.

So, to relate this info to what tcltest does:

  • -includeDirs = -relateddir
  • -excludeDirs = -asidefromdir
  • -includeFiles = -file
  • -excludeFiles = -notfile

What I want is very much like what tcltest does, except for the following:

  • When recursing into a subdirectory: if there is a suite.test file, source that; otherwise, source all the *.test files (constrained by -includeFiles and -excludeFiles). See the sketch after this list.
  • The recursion into subdirectories stops in directories that contain a suite.test file. Technically, it may continue, but only as a result of that suite.test file (if it creates a suite that allows it)
  • The -depth option can be used to put a limit on how deep to recurse, with the default being forever (-1)
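
To pin those semantics down, here is a rough sketch of the walk in plain Tcl. loadFile is a placeholder as in the pseudocode above, the glob patterns stand in for -includeFiles/-excludeFiles, and the procedure is meant to be called on subdirectories (the calling directory's own suite.test is what drives it); none of this is meant as the final interface:
proc walkDir {dir includeFiles excludeFiles depth} {
    if {[file exists [file join $dir suite.test]]} {
        # a suite.test takes over this subtree; recursion stops here and
        # only continues if that suite chooses to recurse itself
        loadFile [file join $dir suite.test]
        return
    }
    foreach f [glob -nocomplain -directory $dir -types f $includeFiles] {
        if {![string match $excludeFiles [file tail $f]]} {
            loadFile $f
        }
    }
    if {$depth == 0} { return }
    foreach sub [glob -nocomplain -directory $dir -types d *] {
        walkDir $sub $includeFiles $excludeFiles [expr {$depth - 1}]
    }
}

# roughly the default configuration above, applied to each subdirectory
foreach sub [glob -nocomplain -types d *] {
    walkDir $sub *.test suite.test -1
}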

RHS 2004-09-02:

Having spent a lot of time working on the package, I'm considering changing one of the aspects of defining suites and tests.

Previously, my design had been to do things the way tcltest does them: you define tests, and they get added automatically to the "suite" of tests that are run. Technically, there's only one suite with tcltest, and the tests are run as they are defined. I want to allow multiple suites to be defined, as well as letting one specify whether tests should be run right after they are defined or only once all the tests have been loaded.

My problem is that it's not easy to set up something so that tests are automatically added to the correct suite. It is possible, but it requires a bit of magic, and I'm not a big fan of magic: it means there may be weird cases where it's very hard to make things work or to figure out what's wrong. I'd like things to be simple and easy to understand. The magic also makes it very difficult to define a Suite or Test inside a Test (e.g., in order to test the testsuite package itself).

My thought is to define tests/suites like:
## file 1:
set suite1 [suite package1 "description" ... other options]

$suite1 add [test procone-1.1 {description} ... other options]

set suite11 [suite package1.1 "description" ... other options]

$suite11 add [test proctwo-1.1 {description} ... other options]

$suite1 add $suite11

# would be the same as '$suite1 add [source $someFile]'
$suite1 addFile $someFile

# loads files in a directory based on -includeFiles and -excludeFiles
$suite1 addDirectory $someDirname

# loads directories based on -includeDirs and -excludeDirs
$suite1 addChildren

return $suite1

The main things to note here are:

  • Tests and Suites are added to a specific Suite; there's no magic needed to figure out what to add them to.
  • It is expected that a single Test or Suite is returned at the end of a test definition file. That Suite/Test is what gets added to the Suite that loaded it (or run, if it's the top level file).

Problems with this approach:

  • Issues with variable names when defining suites:
## File 1.test:
set suite [suite file1 ...]
$suite addFile "File 2.test"
$suite add [test ...]
return $suite

## File 2.test
set suite [suite file2 ...]
$suite add [test ...]
return $suite

As can be seen in the above code, the variable name $suite is used in both files. By the time the first suite goes to define the test, the variable $suite contains the second suite, not the first.

The obvious solution to this is to give every suite variable a unique name. However, that can get a bit annoying once the file structure gets deep. It would be possible to add a command to generate a unique name based on the filename, but do we really want to have to do:
## File 1.test:
set [::testsuite::getVarName]1 [suite file1 ...]
[set [::testsuite::getVarName]1] add [test ...]
...

That just seems ugly to me. Other possibilities are:

  • Source each file in a separate interp
  • Source each file in a separate process
  • Source each file in a separate namespace (not a fan of this, since a bad choice in a test definition file could break it)

Both separate interps and separate processes have their own issues with not being able to share data well. I'm still putting thought into how to best handle this whole situation.
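
Just to make the separate-interp option concrete, here is a sketch; testsuite::suite and testsuite::test are the proposed commands from above, and the data-sharing pain is noted in the comments:
# Each definition file is sourced in its own child interp, so a $suite
# variable in one file can never clobber another file's. The catch, as
# noted above, is sharing: the object commands returned by suite/test live
# in the master and would each need their own alias in the child as well.
proc source_in_child {fname} {
    set child [interp create]
    interp alias $child suite {} ::testsuite::suite
    interp alias $child test  {} ::testsuite::test
    # the file is expected to return its top-level suite or test
    set result [interp eval $child [list source $fname]]
    interp delete $child
    return $result
}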

Lars H: A fourth possibility is to source the file doing the above suite definitions locally inside a procedure. This separates the variables of one definition from those of another, although it of course also means that any auxiliary variables like $suite will cease to exist once the file has been sourced. It could look something like this:
proc source_suitedef {fname args} {
    # extra options intended to apply to all tests in this suite (see below)
    set default_options $args
    # register_suite stands for whatever the package uses to record a suite
    register_suite [source $fname]
}

where the $default_options are meant to be a mechanism for passing some extra options to all tests in a suite (I'd expect that kind of thing to be useful, but perhaps you already have some mechanism for that). Another perk of sourcing in a procedure is that the default namespace is the one of that proc, so if it is in the testsuite namespace then all ::testsuite::* commands are available without namespace qualification.

MAKR: I like the current tcltest package, and the proposed additions feel more like the notorious bells and whistles, making it far too complex. Nevertheless, if the current functionality is retained, I have nothing to complain about.

What's really missing, however, are statistics and reports for use with large applications. For example, when testing a whole application I currently overload the proc command so I can get some numbers on which procs and commands were added through package require statements and which were directly or indirectly tested. In the end I have a report of which procs are still untested and an estimate of the overall test coverage...

To my mind, this area needs more attention.
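
A rough sketch of that kind of proc wrapping (not MAKR's actual code): renaming proc and delegating to the original is a standard Tcl idiom, while the ::definedProcs list, the ::covered array, and the report command are illustrative only:
rename proc _original_proc
_original_proc proc {name arglist body} {
    # define the proc as usual, in the caller's namespace
    uplevel 1 [list _original_proc $name $arglist $body]
    # remember its fully qualified name for the coverage report
    set fqn [uplevel 1 [list namespace which -command $name]]
    lappend ::definedProcs $fqn
    # mark the proc as covered the first time it actually runs
    trace add execution $fqn enter [list apply {{p args} {
        set ::covered($p) 1
    }} $fqn]
}

# after the test run: report procs that were defined but never executed
proc coverageReport {} {
    foreach p $::definedProcs {
        if {![info exists ::covered($p)]} { puts "untested: $p" }
    }
}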

PJM: The points raised by MAKR seem pretty solid; some of the things I need to do are:

  1. Have a simplified interface (one that pretends it's not Tcl) using keywords such as Name:, Description: and Test:. This is just sugar, but it's useful if you want users/engineers to write the tests. You can then introduce Tcl to the test writers slowly, e.g. "did you realise you can generate 10 tests using a foreach statement?" (See the sketch after this list.)
  2. Provide the ability to ask a human to perform an action or verify a result in some parts of the test. These are just commands such as "please connect the Ethernet cable and check that the green light is on". It just happens that some things are difficult to automate.
  3. Generate a report/summary in a variety of pretty formats suitable for the customers. We've used HTML tables in the past to generate both summaries and detailed (very detailed, enough to kill an elephant if it fell on it) reports.
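
A minimal sketch of the keyword idea in point 1, layered over plain tcltest; DefineTest and the Name:/Description:/Test:/Expect: keywords are made up for illustration, and underneath it is just an ordinary tcltest::test call:
package require tcltest

proc DefineTest {args} {
    # the "keywords" are simply collected into an array and passed on
    array set spec $args
    tcltest::test $spec(Name:) $spec(Description:) \
        -body $spec(Test:) -result $spec(Expect:)
}

DefineTest \
    Name:        add-1.1 \
    Description: {adding two numbers gives their sum} \
    Test:        {expr {1 + 2}} \
    Expect:      3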

For what it's worth, a summary/implementation of the ideas above can be found in PTL - a pretty test language.