Another programming fad, obviously and fundamentally flawed, yet, as with pair programming, widely accepted by people intelligent and experienced enough to know better. How things usually work: we have a new project, described in vague terms, e.g. a user experience or a business model. We set to work writing a functional design specification that outlines the interfaces, data flow, and primitives, and once we get agreement from all principals we set out coding it. Within weeks, if not within days or even hours, of putting the design into implementation we run into exigencies that nobody had thought of. Ideas that turn out not to work; changing requirements; features that get cut; UAT feedback. Maybe we update the design to reflect these, but that often falls by the wayside. By the time the project is released for distribution we have made substantial changes to the original design, and we have developed tests ranging from unit tests of components up to user interface testing by actually operating the software itself on its target platform. By this point the software bears scant resemblance to the original design, because we found we hadn't thought of things, or that some features were useless, or that users didn't like it.

"Test driven development" is a concept from a certain sort of fanatic I've had the misfortune of working with, and it supposes that the changes we make between spec and release don't happen. The idea is that the software is ready to release once it passes all the tests that were written based on the specification. This is garbage. Every time reality wanders from the design, and it ALWAYS does, the tests have to be revisited and revised: extra work, with the Atlas-load of software development process (read: meetings) that goes with it. That's one problem. The greater problem is that developers are expected to write the tests, and that is a tremendous folly.
Because, as any serious developer (as opposed to those who just leap from one management fad to another to another to another) will tell you, we will have the same blind spots writing our tests (be it before or after the coding of the piece being tested) that we had writing the code. This is another of those fads, like pair programming and illegible formatting gimmicks, that anyone could dismiss with a few seconds of unrestrained thought, but which we are saddled with by buzzword-addled managers and compulsively obedient coworkers. What a dumb idea. Yeah, sure, write tests for pieces that are design-frozen, but to write tests for as rapidly moving a target as the design of complex software is just stupid, stupid, stupid.
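The shared-blind-spot point can be sketched in a few lines (a contrived example; the function and its inputs are invented for illustration). The author of the code never considered prices with thousands separators, and, having the same assumptions, neither did the author of the test, who is the same person. The suite passes; the blind spot survives untouched.

```python
# Toy sketch: developer and test share the same blind spot.
# parse_price and its test are hypothetical, written by the same person.

def parse_price(text: str) -> int:
    """Parse a price like '$12.34' into cents."""
    digits = text.replace("$", "").replace(".", "")
    return int(digits)  # crashes on '$1,234.00' -- never considered

def test_parse_price():
    # Same author, same assumptions: every case is a small,
    # well-formed price. No thousands separator anywhere in sight.
    assert parse_price("$12.34") == 1234
    assert parse_price("$0.99") == 99

test_parse_price()  # passes, and the suite reports all green
# ...yet parse_price("$1,234.00") raises ValueError in production.
```

Green tests here measure only that the code matches the author's assumptions, which is exactly what the tests were built from in the first place.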
There is no reason to expect that your spec-era tests will still be applicable once the design moves, and most likely they won't be. A test is only worth anything if it is a meaningful unit of verification, one that ensures the software as actually written will not break any time soon, and a suite pinned to an obsolete spec is no such thing. Test suites are not a replacement for verifying the software. A test verifies that something does not change; but the changes that come are precisely the ones you did not anticipate, so when they arrive the test tells you nothing except that the spec is out of date. You end up spending at least as long re-validating against the moving design as you would have spent verifying the software directly. There is never a point at which the design holds still long enough for the tests to stay true.