May 18th, 2011 | Published in erlang, performance, testing, web
In my previous blog entry I questioned the value of most web server benchmarking, particularly as it relates to Erlang. Typical benchmarks are misleading, inaccurate, and poorly executed. Perhaps worse, the intent of publishing them seems to be to assert that the fastest web server (at least according to the tests performed) is of course also the best web server. You'd think the flaws in that reasoning would be so obvious that nobody would fall for it, but think again: watching the delicious "erlang" tag over the past few days revealed that the benchmarks my post referred to were among the most bookmarked Erlang-related pages during that timeframe.
Not surprisingly, though, it looks like I’m not the only one bothered by poor benchmarking practices. Over on his blog, Mark Nottingham just published a brilliant set of rules for HTTP load testing. It’s quite instructive to take your favorite set of published web server benchmarks and see just how many of Mark’s rules they violate.
As I hinted last time, if you want benchmarks, you are far better off running them yourself. That way, they're much more likely to be relevant to the problems you're actually addressing, and you can run them in an environment similar to, or even the same as, the one on which you plan to deploy. You can also gear the benchmarks to much more closely resemble your applications and the loads you require them to handle. Doing the benchmarking work yourself gives you valuable hands-on experience with the servers and frameworks you're considering, allowing you to get a feel for important factors such as feature completeness and correctness, ease of development, flexibility, and ease of deployment and runtime management/monitoring, none of which can be gauged from someone else's performance numbers. Finally, by doing your own benchmarking you can help ensure the validity and usefulness of your results by following Mark's load testing rules.
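To make that concrete, here's a minimal sketch of the kind of load driver you might throw together to run your own measurements. The module name, the use of the stock httpc client, and the percentile reporting are all my own assumptions for illustration, not a recommendation of any particular tool; a real test should still follow Mark's rules, such as generating load from a separate machine, warming the server up first, watching for errors, and reporting percentiles rather than just averages.

```erlang
%% mini_bench: a hypothetical, minimal load driver for running your own
%% measurements. Each of Concurrency workers issues Requests sequential
%% GETs against Url and records per-request latency in microseconds.
-module(mini_bench).
-export([run/3]).

run(Url, Concurrency, Requests) ->
    inets:start(),   % make sure the httpc client is available
    Parent = self(),
    %% Spawn the workers; each sends its latency list back when done.
    Pids = [spawn_link(fun() -> Parent ! {self(), worker(Url, Requests, [])} end)
            || _ <- lists:seq(1, Concurrency)],
    Latencies = lists:append([receive {Pid, Ls} -> Ls end || Pid <- Pids]),
    report(lists:sort(Latencies)).

worker(_Url, 0, Acc) ->
    Acc;
worker(Url, N, Acc) ->
    T0 = os:timestamp(),
    %% Fail loudly on anything but a 200 so errors can't silently skew
    %% the numbers.
    {ok, {{_, 200, _}, _, _}} = httpc:request(get, {Url, []}, [], []),
    Elapsed = timer:now_diff(os:timestamp(), T0),
    worker(Url, N - 1, [Elapsed | Acc]).

report(Sorted) ->
    Len = length(Sorted),
    Pct = fun(P) -> lists:nth(max(1, round(P * Len)), Sorted) end,
    io:format("requests: ~p  median: ~pus  95th: ~pus  99th: ~pus~n",
              [Len, Pct(0.5), Pct(0.95), Pct(0.99)]).
```

You might run it from an Erlang shell on a dedicated load-generation host with something like mini_bench:run("http://your-test-host:8080/", 50, 200). Note that httpc itself adds client-side overhead, which is exactly the sort of thing Mark's rules tell you to watch for before trusting the results.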
May 26th, 2009 | Published in productivity, technology adoption, testing
The title of this blog entry is inspired by Kent Beck’s posting on the topic. There, he describes some situations in which he feels not writing a test is OK, explaining that depending on whether you’re in the “short game” or “long game” your testing strategy might be different.
I believe Kent’s “short” and “long” games line up well with the Technology Adoption Lifecycle curve. Geoffrey Moore’s “Crossing the Chasm” and “Inside the Tornado” characterize how companies have to adjust their actions and approaches for developing and marketing a given product depending on what part of the curve that product currently addresses.
If your product targets the extreme left side of the curve, you can get away with less testing because customers on that part of the curve (visionaries and early adopters) are mostly concerned with your ideas and approach and are less concerned with the details of how your product operates and performs. If they're kicking the tires and they hit a glaringly huge bug, you can just say, "Oops, we'll have to fix that," and that type of customer is pretty much always fine with it. But once you get to the point of attempting to cross the chasm, or if you've already crossed it, testing grows significantly in importance. This is because the customers you'll be chasing there are the pragmatists, and for them, the thing has to pretty much do what it's supposed to do, though they'll tolerate bugs here and there, especially if there are workarounds. If you make it through that part of the lifecycle and your product lives to see the downslope on the right side of the curve, your tests have to be far better still, because the conservative and skeptical customers over there really don't like finding any defects in your product.
What this means, then, is that I believe Kent’s “short game” is short indeed, applying primarily to the portion of the Technology Adoption Lifecycle curve lying to the left of the chasm and possibly also to the point immediately to its right. But even there, testing is still very important, not so much immediately for the customer but more for yourself, for at least the following reasons:
- Testing can enhance your team's productivity by ensuring that code coming from different parts of the team actually works together and stays that way. (As commenters on Kent's posting point out, this isn't such a big deal in Kent's case because he's working alone.)
- Testing can help you identify what functionality in the product is expected to work, which is very helpful if a potential customer is kicking the tires and wants a demo.
- Some developers are under the mistaken belief that testing isn't their job, or that they can tell their management to choose either the functionality or the tests, but not both. I've heard this many times over the course of my career, and frankly, it's pretty weak. Having testing on the agenda from the start makes it clear that you consider testing to be a regular part of every developer's job.
- There can be a huge difference between code that's written to be testable and code that isn't (a small sketch of what that can look like follows this list). If testing is delayed, the cost of refactoring down the road to make the code testable can be prohibitive.
- If you write code that other developers have to build on, testing can help you weed out code that isn't easy for them to use (this is essentially a form of Extreme Programming's "simplicity" value).
- Speaking of XP, testing can also help developers know when they’re finished, and can make them more courageous when it comes to fixing problems, adding enhancements, or refactoring.
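To illustrate the point above about testable versus non-testable code, here's a small hypothetical Erlang sketch; the module, the rate_service dependency, and the function names are all made up for illustration. The first version is hard-wired to an external service, so you can only exercise it end to end; the second takes that collaborator as an argument, so a plain EUnit test can drive it with a stub.

```erlang
%% Hypothetical sketch of code written to be testable. price/1 is
%% hard-wired to an external rate service, so exercising it means standing
%% up that service; price/2 takes the rate lookup as an argument, so a
%% plain EUnit test can drive it with a stub.
-module(quote).
-include_lib("eunit/include/eunit.hrl").
-export([price/1, price/2]).

%% Hard-wired version: hard to test in isolation.
price(Amount) ->
    price(Amount, fun rate_service:current_rate/0).

%% Testable version: the collaborator is injected.
price(Amount, RateFun) ->
    Amount * RateFun().

%% The test stubs out the rate service; no network needed.
price_test() ->
    ?assertEqual(25.0, price(10, fun() -> 2.5 end)).
```

Running eunit:test(quote) exercises price_test/0. The point is simply that the second form was cheap to make testable up front, whereas retrofitting that seam later, once callers depend on the hard-wired version, is where the prohibitive refactoring cost tends to come from.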
Kent isn’t saying it’s OK to skip testing; rather, he’s saying that having a clear testing strategy and plan makes it easier to adapt your testing to different needs of the product at different times in its lifecycle. I think, then, that what I’ve written here is just a different focus on what he wrote. I agree completely with him that testing strategy can and should vary depending on where you are on the Technology Adoption Lifecycle curve, but for all the reasons mentioned above and more, I feel it’s important to stress that including testing as a key component of your efforts from Day One is critical, something I think Kent’s posting assumes but does not explicitly say.