can downstream testing ever be the bottleneck?

Everywhere I go, I find managers complaining that some team or other is short of staff, and (so far) that has always turned out to be a mirage.

TOC’s 5 Focusing Steps say that adding resources to the bottleneck is the last thing one should do. Before that, a much more cost-effective step is to “exploit” the bottleneck, i.e. to ensure that bottleneck resources are employed only in adding value. So in the case where testing is the bottleneck, perhaps one should begin by ensuring that testers only work on high-quality software, because testing something that will be rejected is waste.
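To make the arithmetic behind “exploit the bottleneck” concrete, here is a minimal sketch of a two-stage dev-then-test pipeline. The weekly_throughput function and all its numbers are invented for illustration; the point is only that when some of the work reaching the bottleneck gets rejected and comes back for re-testing, the bottleneck's effective capacity is diluted even though nobody is idle.

```python
# Toy model of a dev -> test pipeline; all numbers are illustrative.
# "Exploiting" the bottleneck means spending its capacity only on work
# that adds value; re-testing rejected features consumes that capacity
# without delivering anything new.

def weekly_throughput(test_capacity, dev_output, rejection_rate):
    """Features delivered per week when testing is the bottleneck.

    Assumes each rejected feature is reworked and tested a second time
    (and then passes), so every delivered feature costs 1 + rejection_rate
    test slots on average.
    """
    effective_test_capacity = test_capacity / (1 + rejection_rate)
    return min(dev_output, effective_test_capacity)

# Testers can exercise 10 features a week; developers finish 12.
print(weekly_throughput(test_capacity=10, dev_output=12, rejection_rate=0.0))  # 10.0
print(weekly_throughput(test_capacity=10, dev_output=12, rejection_rate=0.5))  # ~6.7
```

With those made-up numbers, cutting the rejection rate from 0.5 to 0.25 lifts delivery from about 6.7 to 8 features a week, which is more than adding an eleventh tester would buy (11 / 1.5 ≈ 7.3). That is the point of exploiting the bottleneck before elevating it.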

And from the Lean Manufacturing camp, Shigeo Shingo (I think) said something along the lines of “testing to find defects is waste; testing to prevent defects is value”. Which seems to imply that waterfall-style testing after development is (almost) always waste.

Which in turn implies (to me at least) that testing in a waterfall process can never be the bottleneck. The bottleneck must be the policy that put those testers at that point in the flow. Does that sound reasonable to those of you who know a lot more about this kind of stuff than I do?

5 thoughts on “can downstream testing ever be the bottleneck?”

  1. This has an interesting implication further upstream: if the developers are the bottleneck, that means the business analysts are producing more specifications than the developers can implement. And if the business analysts are producing too many specs, either the customer is asking for too much or the analysts are extracting too much from them. Taken to its logical conclusion, this means one business analyst should determine only one feature at a time and pass it to the sole developer (who may be the business analyst too), who passes it on to the sole tester.

    But that doesn’t match reality, because it would leave software being delivered more slowly than the customer might need it. And at this edge case, there’s no way to make the upstream process more efficient. Now, ten developers CAN produce code faster than one; maybe not 10x faster, but certainly faster. Which means the bottleneck is development, and the solution is, at last, to add people to the bottleneck, until the testers become the bottleneck again.

    My conclusion: there is something unusual about post-development testing. And I believe it is this: it’s a pointless thing to do. If the developers are releasing code that does not pass QA, they are releasing code that even a casual observer could see is wrong. They should be determining their integration tests directly from customer descriptions of how the software should behave (see the sketch after these comments). Then they KNOW it’s correct; the only issue is whether what the customer thought was correct is actually what they need it to do. And if not, that’s no big deal – at least in an agile process – just make it a story for the next iteration.

    So I agree that, in a waterfall process, downstream testing should not be a bottleneck. I suspect that if it is, it is a sign that the developers do not have adequate and sufficiently frequent contact with the customer. But that is pure speculation, so I hope commentator number 3 will have an opinion on that…

    Interesting posts, by the way, Kevin! I haven’t felt inclined to comment on a blog for a long time.

  2. Pingback: downstream testing implies a policy constraint « silk and spinach

  3. Pingback: is CruiseControl waste? « silk and spinach

  4. I’ve re-read this in light of the later article, and it was just me misunderstanding you. So don’t think it was any lack of communication skills on your part; it was more my ignorance of TOC. Although the word “bottleneck” did throw me :)

    I’ve also thought about it in light of the CruiseControl post. I’ve come to the conclusion that it’s not the downstream nature of waterfall-style testing that’s the problem; it’s the delay before you hit that part of the stream. Preparing tests before coding and running them immediately after is ideal. Running them five minutes later, less so. Preparing tests five minutes after coding but running them immediately, even less so. Preparing them six months down the line based on out-of-date requirements documents, SPECTACULARLY less than ideal.

    So I think you were right all along – it must be the management policy that allows this that forms the constraint on the project. I think I finally learnt something!

    I wonder if you could use this inductive reasoning to convince a reluctant management that testing should be done as early as possible?
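As an aside on the first comment’s suggestion of deriving integration tests directly from customer descriptions, here is a minimal sketch in Python. The refund rule, the Order class and the numbers are invented for illustration and are not taken from the post or its comments; the real tests would sit against whatever the customer actually described.

```python
import unittest

# Hypothetical customer description: "If a customer returns an item within
# 30 days of purchase, they get a full refund."  The Order class below is an
# invented stand-in for whatever the real system would provide; the tests
# simply restate the customer's sentence.

class Order:
    REFUND_WINDOW_DAYS = 30

    def __init__(self, price):
        self.price = price

    def refund_due(self, days_since_purchase):
        """Full refund inside the window, nothing after it."""
        if days_since_purchase <= self.REFUND_WINDOW_DAYS:
            return self.price
        return 0

class ReturnsAcceptanceTest(unittest.TestCase):
    def test_full_refund_within_thirty_days(self):
        self.assertEqual(Order(price=50).refund_due(days_since_purchase=10), 50)

    def test_no_refund_after_thirty_days(self):
        self.assertEqual(Order(price=50).refund_due(days_since_purchase=45), 0)

if __name__ == "__main__":
    unittest.main()
```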
