A recent tweet from Pete Marks reminded me of a long-running discussion within the Agile community: what exactly is the purpose of Test-Driven Development? Before I get into that, here’s an illustration of what I think TDD is:
I’m a developer in a pair that has to implement a development task.
Step 1 Update to the latest code and run all the tests; check we have a green bar.
Step 2 After a bit of a chat about what the task is all about, write a few tests that describe what we expect to happen when we’re done.
Step 3 Run the new tests; no surprise that we have a red bar. This is the point at which we start driving development with testing.
Step 4 Write code to deliver the functionality we’re looking for. This is usually a multi-stage process as we keep running the new tests, fixing them one at a time, until we get a green bar. We may find that the tests sometimes don’t pass when we expect them to, exposing flaws in the tests or our thinking. We may also think of further useful tests as we go, so we add them to the test suite(s) asap.
Step 5 Once we have a green bar for the new tests we run all the tests. If our new code breaks any of the existing tests we fix them by either changing our code or altering the tests as appropriate.
Step 6 Once all tests are passing we grab the build token, update our code to the latest, run all the tests again (we need to make sure that the updates haven’t broken our tests and that our new code hasn’t broken the updated tests), make any necessary fixes, then we can commit.
Step 7 Once we’ve committed we revisit our code and start refactoring the ugly bits until we’re happy with the design. If this is a simple job we probably don’t update/integrate/commit until we’re done; if it’s a long job we may update/integrate/commit as we go.
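To make the cycle concrete, here’s a minimal sketch in Python’s unittest. The task, the `discount` function, and its business rule are all invented for illustration; what matters is the shape of steps 2–4:

```python
import unittest

# Step 2: the tests come first, describing what we expect to happen
# when we're done. (Task, names, and discount rule are illustrative.)
class DiscountTest(unittest.TestCase):
    def test_small_orders_pay_full_price(self):
        self.assertEqual(discount(50), 50)

    def test_large_orders_get_ten_percent_off(self):
        self.assertEqual(discount(200), 180.0)

# Step 3: running the suite now (python -m unittest) gives a red bar --
# discount() doesn't exist yet, so both tests fail.

# Step 4: just enough code to turn the bar green, one test at a time.
def discount(total):
    if total >= 100:
        return total * 0.9  # 10% off orders of 100 or more
    return total
```

Steps 5 and 6 would then mean running the whole suite, not just `DiscountTest`, before committing.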
So what’s going on here? Some people claim that TDD is about proving the correctness of the system. I struggle with this. Testing is clearly about proving correctness, but TDD is not just about testing. If it was, it wouldn’t matter when the tests are written or who writes them. Besides, most projects need forms of testing other than those written by developers during development to truly show correctness (exploratory testing, random testing, non-functional testing, and so on), and I usually have a dedicated QA team on my projects to carry out these additional tests. Certainly developer tests contribute to the overall ‘proof of correctness’ but they are rarely sufficient in themselves.
Others claim that TDD, specifically TDD Unit Testing, is actually about design. Again I struggle with this concept. Writing testable code definitely encourages a particular form of design, but then so does refactoring smelly code, adopting a team style, defining coding standards and defining an application architecture (or having it imposed on you by using some form of framework or component model). And whilst testable code often meets many of the accepted criteria of ‘good’ object-oriented design, it certainly doesn’t promote ‘good’ design at the level of metaphors, domain concepts, entities and services, or whatever the architecture of your system revolves around.
To me TDD is all about answering two questions: What are we trying to achieve? And have we achieved it yet? These are two of the most important questions in development and being able to answer them clearly is an excellent indication that you’re working efficiently towards delivering a high-quality solution.
In the illustration above:
Step 2 is where we turn the description of the task into a specification of what the outcome will be. ‘Specification’ is a dirty word these days, bringing to mind reams of formal or semi-formal documentation that still manages to have just enough ambiguity to be near-useless. But one of XP’s great insights was to put specification into a very definite box: do just enough for the task in hand at the point at which you are about to start the task. And do it with running code rather than a model. At this point any differences in each pair’s understanding of the task will come out, leading to a chat with other team members or the customer if necessary, and discussion about the detail of the solution will start.
Step 3 is a statement of intent: even though we haven’t changed the functionality of the system, it is effectively broken because it doesn’t do what we now want it to do. It’s also a check that the tests don’t unexpectedly pass or blow up in surprising ways.
During Step 4 we develop our understanding of the task and develop our design for the solution. The tests help us keep focussed on the job in hand and encourage us to keep things simple. No super-clever, highly generic solutions here … just enough code to get the tests to pass until we reach the point where we know we’ve finished the first phase of our task: a solution that works in isolation.
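As an illustration of ‘just enough code’ (the task here is invented), the simplest implementation that passes the first test can be almost embarrassingly literal, and each new test then forces only the generality it demands:

```python
# With a single test -- ordinal(1) == "1st" -- the simplest passing code
# could literally be:  def ordinal(n): return "1st"
# Adding tests one at a time grows just enough generality, and no more.

def ordinal(n):
    """Return '1st', '2nd', '3rd', '4th', ... including '11th'-'13th'."""
    if 10 <= n % 100 <= 20:          # teens are all 'th'
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"
```

No super-clever generic suffix engine: the teens branch only appeared once a test for `ordinal(11)` demanded it.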
Steps 5 and 6 are about the second phase of our task: integrating our solution into the rest of the system in a way that ensures it doesn’t break existing functionality. Again the tests help us keep focussed: do just enough to produce some working, valuable software.
Step 7 is the point where we actually move away from TDD. Now the tests are acting as a safety net to support the changes that the pair want to make to remove the nasty, scratchy, smelly bits of code that they felt uncomfortable with during step 4. For me, this is where good design really happens and it is one of the great features of Pair Programming … between them the pair will be able to see most of the improvements to be made and will be able to ensure they don’t get carried away with making unnecessary changes. Knowing that the new functionality is now in place frees the pair to spend a bit of time making the code a better place to work.
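A small sketch of that safety net in action (both functions and the data are invented): the tests don’t change during the refactoring, which is exactly what lets the pair rework the code with confidence:

```python
# Before: the "nasty, scratchy" version written under step 4 pressure.
def summarise_ugly(orders):
    t = 0
    for o in orders:
        if o["status"] == "paid":
            t = t + o["amount"]
    return t

# After: the same behaviour, cleaned up under a green bar.
def summarise(orders):
    return sum(o["amount"] for o in orders if o["status"] == "paid")

# The unchanged tests are the safety net: if the refactored version
# disagrees with the original on any case, the bar goes red.
orders = [
    {"status": "paid", "amount": 10},
    {"status": "pending", "amount": 99},
    {"status": "paid", "amount": 5},
]
assert summarise_ugly(orders) == summarise(orders) == 15
```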
So TDD is really about having a clear idea of what you’re doing, why you’re doing it, and how much you should do before you stop. Having said that, if you’re doing TDD and your motivation is greater proof of correctness or driving object-oriented design, then keep on going, because you’re usually better off with TDD than without it.