To Test or Not to Test? I Say Test.

May 26th, 2009  |  Published in productivity, technology adoption, testing  |  5 Comments

The title of this blog entry is inspired by Kent Beck’s posting on the topic. There, he describes some situations in which he feels it’s OK not to write a test, explaining that your testing strategy might differ depending on whether you’re playing the “short game” or the “long game.”

I believe Kent’s “short” and “long” games line up well with the Technology Adoption Lifecycle curve. Geoffrey Moore’s “Crossing the Chasm” and “Inside the Tornado” characterize how companies have to adjust their actions and approaches for developing and marketing a given product depending on what part of the curve that product currently addresses.

If your product targets the extreme left side of the curve, you can get away with less testing, because customers on that part of the curve (visionaries and early adopters) are mostly concerned with your ideas and approach and are less concerned with the details of how your product operates and performs. If they’re kicking the tires and they hit a glaringly huge bug, you can just say, “Oops, we’ll have to fix that,” and that type of customer is pretty much always OK with it.

But once you get to the point of attempting to cross the chasm, or if you’ve already crossed it, testing grows significantly in importance, because the customers you’ll be chasing there are the pragmatists. For them, the product has to pretty much do what it’s supposed to do, though they’ll tolerate bugs here and there, especially if there are workarounds. If you make it through that part of the lifecycle and your product lives to see the downslope on the right side of the curve, your tests have to be far better still, because the conservative and skeptical customers over there really don’t like finding any defects in your product.

What this means, then, is that I believe Kent’s “short game” is short indeed, applying primarily to the portion of the Technology Adoption Lifecycle curve lying to the left of the chasm, and possibly also to the point immediately to its right. But even there, testing is still very important, not so much for the customer as for yourself, for at least the following reasons:

  • Testing can enhance your team’s productivity by ensuring that code coming from different parts of the team actually works together and stays that way. (As commenters on Kent’s posting point out, this isn’t such a big deal in Kent’s case because he’s working alone.)

  • Testing can help you identify what functionality in the product is expected to work, which is very helpful if a potential customer is kicking the tires and wants a demo.

  • Some developers are under the incredibly mistaken belief that testing isn’t their job, or that they can tell their management they can have either the functionality or the tests, but not both. I’ve heard this many times over the course of my career, and frankly, it’s a pretty weak position. Having testing on the agenda from the start makes it clear that you consider testing to be a regular part of every developer’s job.

  • There can be a huge difference between code that’s written to be testable and code that isn’t; see the sketch following this list. If testing is delayed, the cost of refactoring down the road to make the code testable can be prohibitive.

  • If you write code that other developers have to build on, testing can help you weed out code that isn’t easy for them to use (this is essentially a form of Extreme Programming’s “simplicity” value).

  • Speaking of XP, testing can also help developers know when they’re finished, and can make them more courageous when it comes to fixing problems, adding enhancements, or refactoring.
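
As a minimal sketch of that testability difference, consider the following Python example (the function names and the discount scenario are hypothetical, invented purely for illustration; nothing here is tied to any particular product). The first version hides a dependency on the system clock, so a test can’t control what the function sees; the second accepts the date as a parameter and becomes trivial to test:

    import unittest
    from datetime import date

    # Hard to test (hypothetical example): the hidden dependency on the real
    # system clock means a test cannot choose which month the function sees.
    #
    # def holiday_discount(price):
    #     return price * 0.9 if date.today().month == 12 else price

    # Testable: the same logic, with the dependency passed in explicitly.
    def holiday_discount(price, today):
        """Apply a 10% discount during December."""
        return price * 0.9 if today.month == 12 else price

    class HolidayDiscountTest(unittest.TestCase):
        def test_discount_applies_in_december(self):
            self.assertAlmostEqual(holiday_discount(100.0, date(2009, 12, 1)), 90.0)

        def test_no_discount_in_may(self):
            self.assertEqual(holiday_discount(100.0, date(2009, 5, 26)), 100.0)

    if __name__ == "__main__":
        unittest.main()

Tests like these also speak to the last two bullets above: they document how the code is meant to be called, and a passing suite tells the developer when a change is actually done.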

Kent isn’t saying it’s OK to skip testing; rather, he’s saying that having a clear testing strategy and plan makes it easier to adapt your testing to different needs of the product at different times in its lifecycle. I think, then, that what I’ve written here is just a different focus on what he wrote. I agree completely with him that testing strategy can and should vary depending on where you are on the Technology Adoption Lifecycle curve, but for all the reasons mentioned above and more, I feel it’s important to stress that including testing as a key component of your efforts from Day One is critical, something I think Kent’s posting assumes but does not explicitly say.

Responses

  1. Kent Beck says:

    May 26th, 2009 at 10:31 am

    Ah, the danger of a (slightly) clever title. In the blog post I describe two one-line fixes. The one I could test in a few minutes, I wrote a test for. The one that would have taken me several hours to test, when it was pretty obviously correct, I didn’t write a test for.

    Even though I am at an early stage of the product, I still write quite a few tests. The difference from my usual practice is that if the cost is high and the risk is low I don’t insist on testing everything. The question I ask, because this is such an early stage product and I need customer feedback to survive, is, “Will this test help me get more customer feedback sooner?” In a more mature product I ask, “Will this test reduce cost over time?” That’s the distinction I was trying to make.

    I was surprised because I had thought that the “correct” rule wasn’t flexible. Turns out everything we do as programmers should serve business purposes. That shouldn’t have been a surprise but it was.

  2. William Pietri says:

    May 26th, 2009 at 12:31 pm

    Sounds like I agree with both of you.

    I develop and coach almost entirely at startups, and I teach people to test from day 1. Testing supports refactoring, and it enables rapid, radical changes in product direction, both vital in startups. If the question is “test or no test”, my answer is always: test, test, test!

    Still, Kent’s absolutely right: there are cost-benefit tradeoffs involved. If somebody has a couple of years’ experience with TDD and the discipline to actually backfill tests later on, I have no problem with them leaving small things untested if they’re currently hard to test. What’s hard to test today may be easy to test in a couple of weeks.

    More accurately, I have no problem just as long as they’re working in the kind of release-early-and-often context that Kent is. If you make it very easy to report bugs (and hopefully automatically report exceptions), then your customers will quickly tell you when you’ve cut too many corners on quality. Being too cavalier for a few hours or a few days is survivable, especially, as Steve Vinoski points out, on the left edge of the adoption curve. Being sloppy for weeks or months, though, is one of the many ways to kill your startup dead.

  3. Gergely Orosz says:

    May 26th, 2009 at 4:40 pm

    Absolutely agree. I work on an open source ECMS system (http://www.sensenet.hu) that has been in development for over 2 years now. We have about 500 unit tests, and recently we had to make some breaking changes at the very bottom layer. I absolutely agree about tests making programmers more courageous: if we hadn’t been sure that the tests covered at least 95% of all use cases, we wouldn’t have been confident in making (all) the changes that were necessary. And it was immediately obvious that something was wrong when even one test failed.

    And the other way round: we receive the most bug reports on the features that aren’t tested, namely the GUI and the client side. How those kinds of tests could be automated is another story.

  4. Darach says:

    May 28th, 2009 at 5:08 am

    Hi Steve,

    I think you and Kent simply differ in your determination of ‘high cost’ and so navigate the tradeoffs differently, perhaps because you differ (greatly) in focus.

    We can’t avoid bugs. But we can do a lot to ensure that the bugs we do write are ‘higher quality bugs’. IBM’s Orthogonal Defect Classification (ODC) scheme/methodology is one good example.

    http://www.research.ibm.com/softeng/ODC/ODC.HTM

    The trick, it seems, is ensuring that project teams all agree on how to navigate the cost-benefit tradeoffs and are kept informed enough (continuously) that these tradeoffs can be tweaked and tuned over time.

    Defining an explicit test strategy on day one certainly helps focus efforts in the right direction. The maturity to follow that strategy empowers teams to meet quality demands. The wisdom to adapt/tailor it to changing needs/focus is needed to scale it over time.

    Cheers,

    Darach.

  5. Steve Loughran says:

    May 29th, 2009 at 2:17 pm

    I give a lecture to the local CS undergraduates on testing every year:
    http://people.apache.org/~stevel/slides/testing.pdf

    One problem I’ve encountered is that some people in organisations (companies, standards bodies) got into their positions before the renaissance in testing began (one for which Kent Beck and colleagues deserve the credit), and as a result they often view the time spent testing as a waste. You have to lie and say you are debugging, which is something they can relate to.