To Test or Not to Test? I Say Test.
May 26th, 2009 | Published in productivity, technology adoption, testing
The title of this blog entry is inspired by Kent Beck’s posting on the topic. There, he describes some situations in which he feels not writing a test is OK, explaining that depending on whether you’re playing the “short game” or the “long game,” your testing strategy might differ.
I believe Kent’s “short” and “long” games line up well with the Technology Adoption Lifecycle curve. Geoffrey Moore’s “Crossing the Chasm” and “Inside the Tornado” characterize how companies have to adjust their actions and approaches for developing and marketing a given product depending on what part of the curve that product currently addresses.
If your product targets the extreme left side of the curve, you can get away with less testing because customers on that part of the curve — visionaries and early adopters — are mostly concerned with your ideas and approach and are less concerned with the details of how your product operates and performs. If they’re kicking the tires and they hit a glaringly huge bug, you can just say, “Oops, we’ll have to fix that,” and that type of customer is pretty much always OK with that. But once you get to the point of attempting to cross the chasm, or if you’ve already crossed it, testing grows significantly in importance. This is because the customers you’ll be chasing there are the pragmatists, and for them, the product has to do what it’s supposed to do, though they’ll tolerate bugs here and there, especially if there are workarounds. If you make it through that part of the lifecycle and your product lives to see the downslope on the right side of the curve, your tests have to be far better still, because the conservative and skeptical customers over there really don’t like finding any defects in your product.
What this means, then, is that I believe Kent’s “short game” is short indeed, applying primarily to the portion of the Technology Adoption Lifecycle curve lying to the left of the chasm and possibly also to the point immediately to its right. But even there, testing is still very important, not so much immediately for the customer but more for yourself, for at least the following reasons:
- Testing can enhance your team’s productivity by ensuring that code coming from different parts of the team actually works together and stays that way. (As commenters on Kent’s posting point out, this isn’t such a big deal in Kent’s case because he’s working alone.)
- Testing can help you identify what functionality in the product is expected to work, which is very helpful if a potential customer is kicking the tires and wants a demo.
- Some developers are under the incredibly mistaken belief that testing isn’t their job, or that they can tell their management it can have either the functionality or the tests, but not both. I’ve heard this many times over the course of my career, and frankly, it’s pretty weak. Having testing on the agenda from the start makes it clear that you consider testing to be a regular part of every developer’s job.
- There can be a huge difference between code that’s written to be testable and code that isn’t (see the sketch after this list). If testing is delayed, the cost of refactoring down the road to make the code testable can be prohibitive.
- If you write code that other developers have to build on, testing can help you weed out code that isn’t easy for them to use (this is essentially a form of Extreme Programming’s “simplicity” value).
- Speaking of XP, testing can also help developers know when they’re finished, and can make them more courageous when it comes to fixing problems, adding enhancements, or refactoring.
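To make the testability point above concrete, here’s a minimal sketch, assuming Python and its standard unittest module; the names (quote_total, the price feed, the item prices) are hypothetical and aren’t from Kent’s posting or any real product. It contrasts a function that constructs its own dependency with one that accepts the dependency as a parameter — exactly the kind of difference that’s cheap to get right on Day One and expensive to retrofit later.

```python
import unittest
from unittest import mock

# An untestable version of this function would construct its price feed
# internally (e.g. `feed = LivePriceFeed()`), forcing every test to hit a
# real service. Passing the collaborator in keeps it trivially testable.
def quote_total(items, feed):
    """Sum the current price of each item, as reported by the given feed."""
    return sum(feed.price(item) for item in items)

class QuoteTotalTest(unittest.TestCase):
    def test_sums_prices_from_feed(self):
        # A fake feed stands in for the real dependency.
        fake_feed = mock.Mock()
        fake_feed.price.side_effect = lambda item: {"widget": 2.0, "gadget": 3.5}[item]
        self.assertEqual(quote_total(["widget", "gadget"], fake_feed), 5.5)

if __name__ == "__main__":
    unittest.main()
```

The same small test also illustrates a couple of the other bullets: it documents what the function is expected to do, and it tells the developer when that piece is finished.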
Kent isn’t saying it’s OK to skip testing; rather, he’s saying that having a clear testing strategy and plan makes it easier to adapt your testing to the different needs of the product at different times in its lifecycle. I think, then, that what I’ve written here is just a different focus on what he wrote. I agree completely with him that testing strategy can and should vary depending on where you are on the Technology Adoption Lifecycle curve, but for all the reasons mentioned above and more, I feel it’s important to stress that including testing as a key component of your efforts from Day One is critical — something I think Kent’s posting assumes but does not explicitly say.