downstream testing implies a policy constraint

As usual, it takes me multiple attempts to figure out what I really want to say, and how to express myself. Here’s a bit more discussion of what I believe is implied by downstream testing:

The very fact that downstream testing occurs, and is heavily consuming resources, means that management haven’t understood that such activity is waste. (If management had understood that, then they would re-organise the process and put the testing up front — prevention of defects, instead of detection.) No amount of tinkering with analysis or development will alter that management perception, and therefore the process will always be wasteful and low in quality. So the constraint to progress is management’s belief that downstream testing has value.

downstream testers: better is worse

This week I’ve been reflecting on why it is that some “agile” teams seem to really fly, while others never seem to get out of second gear. Part of the answer, at least in the teams I’ve looked at, lies in the abilities of their testers. I wrote about the phenomenon over three years ago:

The tester in Team 1 was very good at his job, whereas the tester in Team 2 wasn’t. And as a result, the developers in Team 1 produced significantly poorer code than those in Team 2!

Have you seen this effect? What did you do to harness the skills of your great testers so that they constructively support your great coders?

can downstream testing ever be the bottleneck?

Everywhere I go I find managers complaining that some team or other is short of staff; and (so far) that has always turned out to be a mirage.

TOC’s 5 Focusing Steps say that adding resources to the bottleneck is the last thing one should do. Before that, a much more cost-effective step is to “exploit” the bottleneck — i.e. to try to ensure that bottleneck resources are only employed in adding value. So in the case where testing is the bottleneck, perhaps one should begin by ensuring that testers only work on high quality software; because testing something that will be rejected is waste.
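
To see why “exploit” tends to beat “elevate”, here’s a back-of-the-envelope sketch in Python. The numbers — tester hours, test durations, rejection rates — are entirely invented for illustration; the point is only the shape of the comparison:

```python
# A back-of-the-envelope model of a testing bottleneck (all numbers invented).
# Builds that the testers reject must come round again, so every rejected build
# consumes bottleneck capacity without delivering any value.

def weekly_throughput(tester_hours, hours_per_test, rejection_rate):
    """Features signed off per week at the testing bottleneck."""
    tests_run = tester_hours / hours_per_test
    return tests_run * (1 - rejection_rate)   # only passing tests add value

baseline  = weekly_throughput(tester_hours=40, hours_per_test=2, rejection_rate=0.5)
elevated  = weekly_throughput(tester_hours=60, hours_per_test=2, rejection_rate=0.5)  # hire 50% more testers
exploited = weekly_throughput(tester_hours=40, hours_per_test=2, rejection_rate=0.1)  # raise quality upstream

print(baseline, elevated, exploited)   # 10.0 vs 15.0 vs 18.0 features per week
```

With these made-up figures, improving the quality of what reaches the testers out-performs hiring 50% more of them — and that’s before counting the developer time spent reworking the rejected builds.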

And from the Lean Manufacturing camp, Shigeo Shingo (I think) said something along the lines of “testing to find defects is waste; testing to prevent defects is value”. Which seems to imply that waterfall-style testing after development is (almost) always waste.

Which in turn implies (to me at least) that testing in a waterfall process can never be the bottleneck. The bottleneck must be the policy that put those testers at that point in the flow. Does that sound reasonable to those of you who know a lot more about this kind of stuff than I do?

a second pair of eyes

I’ve just been working with a team which has a pairing policy: every item of code must have been seen by two pairs of eyes before it can be checked in. It doesn’t work.

The effect of the policy is to replace pair programming with a “pair check-in” at the end of each development episode. So a developer will beaver away working on a feature for a day or so, getting it right, making it work, passing all the tests. And then he’ll call over to another team member to request a “pair check-in”. The other team member comes to the developer’s station and is walked through the changes in the version control tool. And then the code is checked in and the two team members part company again.

The problem here is that the process sets the two people up to be in opposition: the developer is effectively asking for approval, instead of asking for help. It’s natural for the developer to feel a sense of ownership, because he’s worked hard to get that code complete and correct. Not many people can graciously accept negative feedback after all that hard work.

It can also be hard for the reviewer – the “second pair of eyes” – to come up to speed quickly enough. The developer knows these changes intimately, but the reviewer is being asked to understand them cold. He has little chance of being effective in that situation.

So this process has all of the demerits of Inspections, with none of the advantages. The team would be more effective adopting true pair programming, I feel.

give him what he wants

This week I started on my first proper project here. I’m coming in just after the completion of the bid, which was accepted last week by the GoldOwner. My job is to set up the project and run it on behalf of the supplier organisation.

Interestingly, the said supplier organisation has quoted to run a classical waterfall project, but the GoldOwner wants it run “iteratively.” My bosses have told him that’s too risky, and that we’ll stick to falling water, thank-you-very-much. I, on the other hand, told him we’ll do it iteratively. I suspect my tenure here may be short-lived…

Update:
Here’s a conflict cloud expressing the issue I have here:

[conflict cloud diagram: waterfall]

the missing piece

This week I was co-opted to act as a temporary project manager for three weeks while someone’s on holiday. I’ve spent much of the week shadowing the PM I’m replacing, and it’s now 4:15 on Friday afternoon. I’m packing up my laptop and beginning to think of fish and chips, when I see the said PM and the technical lead in conversation a few desks away. I have much to learn, so I can’t pass up an opportunity to listen in and learn some more.

It turns out that they’re discussing Monday’s milestone, in which some parts of the project will be handed over to the live deployment team. As they check off the modules in this mini-release, it becomes apparent to them that there’s one missing! The technical lead knew, at the start of the design phase (yes, I know), that it was needed. But for some reason it had never made its way into the project manager’s plan. And the developers only worked on things that were in the plan. So it was never developed! Very honourably, the manager took the blame and set about finding a solution…

I’ve been wondering for a few hours now how such a blatant error could happen (I know it isn’t unique, and I surmise that problems of this nature probably cost this organisation millions annually). I’ve come to know these two people a little in recent days: the manager is proudly non-technical – it’s his job to schedule other people’s work, not to understand it; and the designer is proudly technical – it’s his job to design solutions, not to run projects. This corporation sees nothing wrong with that, and in fact selects people for precisely these characteristics. So the manager’s role was to create a plan from technical input he couldn’t understand; and the technician was not required to review that plan for correctness. And that’s part of the problem: the culture here is one of specialists in silos. In this case something fell between the cracks (in many ways it’s remarkable that they actually spotted it before live deployment).

But why did neither the manager nor the designer develop any interest in what the other was doing? Because another part of the reason for this failure is that each of them, indeed everyone here, is always concurrently working on two, three, even twenty projects! No-one has enough time to care about what they are producing.

Eli Goldratt would have a field day here…

late feedback

This project has been running since July. It’s now the day before the ‘development complete’ milestone for part of the system, and all that remains is to show it to the users and get them to sign off that they’ve manually tested it and they’re happy. As it turns out, this afternoon is the first time they’ve seen anything of the new software. After over three months of design and development. Guess what happens…

discontinuous integration

Today I discovered that the corporate software development process here is iterative! Which means that there’s an “iterative” design/code phase, followed by an “iterative” integration/test phase… (This is where a sense of humour becomes a survival mechanism.)

My department is involved in the integration/test phase for every project. And in my quest to find ways to begin measuring throughput, naturally I asked how long a project will typically spend in that phase. It turns out that a rare few can get done in a couple of days, most require 2-6 weeks, and at least one project took over four months to successfully integrate.

I wonder if I’ve bitten off more than I can chew…