fix everything except scope

I’ve just read a process guideline aimed at managers who are unfamiliar with iterative development. It states that each iteration must have a detailed plan, and that in creating that plan the manager will be making “constant trade-offs between scope, quality and schedule”. (The whole document seems to have been written with the intention of making iterative development look more difficult than waterfall! Is there a subtle agenda here, an attempt to steer managers back down familiar paths?)

The advice seems reasonable at first glance. After all, if we’re behind in adding the key feature for this iteration, we’ve always had the options of cutting it out, compromising on testing, or slipping the date – right? I don’t think so. Quality and schedule should stay fixed; the one thing we should always be prepared to reduce is the scope of the iteration.

discontinuous integration

Today I discovered that the corporate software development process here is iterative! Which means that there’s an “iterative” design/code phase, followed by an “iterative” integration/test phase… (This is where a sense of humour becomes a survival mechanism.)

My department is involved in the integration/test phase for every project. And in my quest to find ways to begin measuring throughput, naturally I asked how long a project typically spends in that phase. It turns out that a rare few get done in a couple of days, most take two to six weeks, and at least one project took over four months to integrate successfully.

I wonder if I’ve bitten off more than I can chew…

local optimisation

For the last couple of days I’ve been studying the results of a local process improvement exercise. The exercise was run earlier this year, and had its own business case, complete with a financial justification. The plan was to halve the number of defects that this department’s support team had to deal with, and the justification was that doing so would save around £30,000 per month in overheads. Now of course I believe that fixing defects is muda – waste – so any reduction in it gets my full support. And indeed the improvement exercise was successful, as the bug statistics for recent months show.

So the company has saved a load of money, right? Well, certainly there are fewer bugs to fix in this one department. But no-one was laid off – instead, people now simply spend less time on support and more time on other work, so all the same salaries are still being paid. This is a classic case of what TOC calls local optimisation: this department is now spending less of its own money, but as a result some other departments are probably spending more. And as I look around I find that the entire organisation – which is large – is incentivised in a similar way. Each department’s objective is to “save” costs by cross-charging its staff to other departments. But because they all ultimately work for the same company, this local accounting obscures the bigger picture. I’m convinced that end-to-end project costs are therefore significantly higher than they could be.

Could it be done differently? TOC says it can. The key measure of success in this company’s business is time to market. What if we could find a way for each department to be measured on throughput instead – and in such a way that time spent fixing defects is seen to reduce that throughput?

I guess I’ve found a mission…

proactive planning

Today, as part of my learning process, I sat in on a planning/progress meeting for one of the plethora of projects currently happening here. At one point the project manager set a quite aggressive (i.e. early) date for the first release. A few people in the room expressed nervousness that such an early date might, for example, compromise quality. “No, of course not – because we’ll be doing a bugfix release a month later,” he proudly announced. “Tremendous,” said everyone. “Good plan. Well done.”

I sat very quietly in my corner, observing. If that’s good practice, I really do have a lot to learn…