
Test-Driven Development, or Test-Driven Design, is a programming methodology that uses tests of program functionality to define a program's requirements, document its behaviour, and guard against regressions.

Reference

Introduction to Test Driven Development (TDD), by Scott Ambler
Test-driven development, at Wikipedia
Practical TDD and Acceptance TDD for Java Developers
RLH: A good book that, while it uses Java, explains the process very well.

Tools

Simple Test
a basic test suite intended to support TDD using namespaces

Description

In test-driven development, the tests are written prior to the program, and so inform the programmer of the program requirements. They also serve as criteria for determining the completion of the relevant part of the program.

As a program develops in complexity, it becomes more challenging to make changes without introducing unintended effects. A good set of tests can lower the barrier to further development by giving code writers some confidence that their modifications did no harm.
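
A minimal sketch of that cycle with tcltest, which ships with Tcl; the proc name double and the test values here are made up for illustration:

    package require tcltest
    namespace import ::tcltest::*

    # In practice the test below is written first and run against a missing
    # implementation, so it fails; the proc is then filled in until it passes.
    proc double {x} {
        expr {$x * 2}
    }

    test double-1.0 {doubling a number} -body {
        double 21
    } -result 42

    cleanupTests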

Discussion

LV: Test driven development is so neat - I don't know why people are not taught to program in that style more often. Mark suggests that there's a Tcl heritage of this style. It would be neat to put up a log of the way this style of development worked, using Tcl, for some relatively simple Tcl/Tk application some time...

sheila: We've been collecting thoughts on this on our own wiki where I work. The cppunit wiki has a nice collection of notes to use as a touchstone for thoughts on the concept, and for a tclunit tool.

MR: I've been developing ProjectForum using TDD. The basic approach is like all other TDD... for any new thing or bug fix in the code, write a test that fails, write code to do what you want, and iterate until the test passes. Tcl already ships with the excellent 'tcltest', which we use for running all the tests. PF has two levels of tests... lower-level unit tests that exercise all the itcl classes that do the behind-the-scenes stuff, and more functional tests that exercise it at the feature level, via the normal web interface. The latter is still done with tcltest, relying on the http package, an HTML parser, and a bunch of other cobbled-together utilities to make it easier.
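
A rough sketch of what such a feature-level test might look like with tcltest and the http package; the URL, port, and expected page content are placeholders, not ProjectForum's actual interface:

    package require tcltest
    package require http
    namespace import ::tcltest::*

    # Placeholder URL and content: drive the application through its normal
    # web interface and check the served page, rather than calling internal code.
    test frontpage-1.0 {front page is served and mentions the application name} -body {
        set tok [http::geturl http://localhost:8080/]
        set page [http::data $tok]
        http::cleanup $tok
        string match *ProjectForum* $page
    } -result 1

    cleanupTests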

LV: So, the tests in tcltest are written before the Tcl code is written to implement the features? I just figured they were all testing cases of bugs that were reported after the fixes were in.

MR: Yes, tests get written before a new feature gets put into the code.

DKF: I do a lot of the core maintenance this way, and I believe RHS works this way too. The downside is that, when you're at an early stage, it can be quite tricky to understand what tests you need to adequately cover your desired functionality.

RHS: Indeed, I try to use TDD as much as possible. There are times when I'll add a feature before implementing the test for it, but that's rare nowadays. The general idea is to add a test to cover the functionality as you need it, and program only enough to pass that test. This, combined with refactoring and forethought, can lead to some very simple, powerful code that has a full test suite behind it.

As for "Tests written to cover bugs", I do those too. I'll add a test or two each time I, or someone else, finds a bug in my code. These bugs are usually oversights on my part (like forgetting to escape certain things, etc.). The test shows the code isn't working, and when it passes, you know you've fixed it.
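
For illustration, a regression test of that kind might look like the following sketch, where htmlEscape is a hypothetical proc in the code under test, not from any particular codebase:

    package require tcltest
    namespace import ::tcltest::*

    # Written when the bug is reported: it fails until the escaping oversight
    # in htmlEscape is fixed, then guards against the bug coming back.
    test escape-2.1 {angle brackets are escaped in generated output} -body {
        htmlEscape {<b>bold</b>}
    } -result {&lt;b&gt;bold&lt;/b&gt;}

    cleanupTests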

[... agile ...]

[... http://c2.com/cgi/wiki?CodeUnitTestFirst ...]

TV remembers a practicum he did as an EE student, where the assignment was to write an assembler in PDP assembly, which I finished by preparing the whole thing on paper, going to the terminal, typing in the well-thought-through result, and running the tests, which showed no errors at all on the first try. That was good and decent. Currently I make more typos in a single sentence, when I'm not careful, than in that whole project... When the assignments are actually not really science, there shouldn't be all too much trial and error, though I like to sit down with a Tcl interpreter and do some things impromptu (like recently, when I hooked up an LCD display to the printer port), and learn about certain commands and preferred working methods that way.

For creating a quality product, that is possible, but it is not the most professional option, I'm sure.

In science, as with computer programs for scientific purposes or as part of a scientific project, the experimentation should be about the subject at hand, not the programming effort. Decent C skills (with the exception of the partly vague area of process programming) and normally accepted structured working methods should do the trick.

Open source projects are a bit different, I guess, at least in that they also serve as learning tools, and of course there is courtesy in there, not only professional pride and competition.

AMG: One very powerful trick I've used a few times is intentionally putting bugs in my code (e.g. change > to >=) then rerunning the test suite. If the test suite passes, it's inadequate, and I design new tests that fail due to the bug. What's interesting is that the new tests sometimes continue to fail after I remove the intentional bugs, signifying that I also have previously unknown bugs somewhere else. Finding and fixing those other bugs is generally pretty easy when I have a test for them. When I'm really serious about coverage, I go line-by-line through my program and inject bugs all over the place in every combination that makes sense to me, frequently rerunning the test suite. At the end of this process, the program and the test suite go together just like a lock and a key.
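
To illustrate the idea with a made-up proc: flipping > to >= in the code below is only caught if the suite includes a boundary-value test like positive-1.1.

    package require tcltest
    namespace import ::tcltest::*

    proc isPositive {x} {
        # Injecting the bug means changing > to >= here and rerunning the suite.
        expr {$x > 0}
    }

    test positive-1.0 {a positive number is recognised} -body {
        isPositive 5
    } -result 1

    # Without this boundary test, the injected >= bug would go undetected.
    test positive-1.1 {zero is not positive} -body {
        isPositive 0
    } -result 0

    cleanupTests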