One of the teams I coach decided this week to adopt kanban-style limits on the WIP at each part of their value stream. It’s much too early to tell how beneficial this will prove; the transition, though, threw up a few points of interest:
In order to set WIP limits we had to map the company’s value stream more completely. In retrospect, the previous fuzzy understanding of the value stream had been one cause of the bottlenecks towards the customer end of the process. Kanban was the catalyst for fixing this, although the two were otherwise unrelated.
Re-mapping the value stream temporarily focussed everyone on clearing some inventory, because a couple of WIP piles were initially bigger than the WIP limits we had all agreed.
We quickly discovered that some of the existing inventory was caused by cards only having customer value in batches. Cards had been held up in huge piles at the “Beta” and “UAT” stages, because they didn’t represent stand-alone demonstrable user value.
The introduction of WIP limits helped everyone to realise that the cards had been representing engineering tasks rather than customer value. So either the WIP limits had to be multiplied by the number of engineering tasks per user story, or future cards had to change and become user stories.
The introduction of WIP limits also served to focus everyone’s attention on the whole value stream. Previously the developers had been throwing cards “over the wall” into “Beta” and “UAT” and then forgetting about them. WIP limits quickly encouraged the developers to address bottlenecks in all areas.
Some of the existing value stream stages turned out to be buffers for others; and at least two of these buffers were “discovered” during the mapping exercise. We set low WIP limits on these buffers, to help squash the “over the wall” mentality.
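The mechanics described above can be sketched in a few lines of code. This is a hypothetical illustration, not the team’s actual board: the stage names and limits are invented, and the point is simply that a full downstream stage refuses new cards, making the bottleneck visible to the people upstream.

```python
# Hypothetical sketch of per-stage WIP limits on a kanban board.
# Stage names and limits are illustrative, not the team's real values.

class KanbanBoard:
    def __init__(self, limits):
        self.limits = limits                   # stage name -> WIP limit
        self.stages = {s: [] for s in limits}  # stage name -> cards in stage

    def add(self, stage, card):
        """Pull a card into a stage, refusing if the stage is at its limit."""
        if len(self.stages[stage]) >= self.limits[stage]:
            raise RuntimeError(f"WIP limit reached in {stage!r}")
        self.stages[stage].append(card)

    def move(self, card, src, dst):
        """Move a card downstream; a blocked move leaves the card where it is."""
        self.stages[src].remove(card)
        try:
            self.add(dst, card)
        except RuntimeError:
            self.stages[src].append(card)  # card stays put: bottleneck is visible
            raise

# Low limits on buffer stages discourage "over the wall" hand-offs.
board = KanbanBoard({"dev": 3, "beta": 2, "uat": 2})
```

Setting a deliberately low limit on a buffer stage, as we did, means the refusal fires early, and the developers have to help clear the buffer before they can hand anything else over.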
No doubt WIP limits will throw up other subtle side-effects as the next month or so unfolds. Interesting times ahead…
As usual, it takes me multiple attempts to figure out what I really want to say, and how to express myself. Here’s a bit more discussion of what I believe is implied by downstream testing:
The very fact that downstream testing occurs, and is heavily consuming resources, means that management haven’t understood that such activity is waste. (If management had understood that, then they would re-organise the process and put the testing up front — prevention of defects, instead of detection.) No amount of tinkering with analysis or development will alter that management perception, and therefore the process will always be wasteful and low in quality. So the constraint to progress is management’s belief that downstream testing has value.
The New York Times recently ran an interview with Ed Reilly (American Management Association) on the distractions that can result from technology. Email, for example:
“Companies go to great lengths to set up lists of authorized approvals, meaning who can approve what size of purchase. But you will find that people who are not authorized to spend $100 on their own are authorized to send e-mails to people and waste hundreds of thousands of dollars’ worth of company time.”
For me, this is not just about the distraction caused by receiving email (although re-acquiring the flow state does definitely cost). In Ed’s comment I see the muda of working on the wrong stuff – of spending time on conversations, research, or even whole projects, that aren’t on the value stream.
This isn’t about finding the right balance between creative freedom and strict customer pull. It’s about making sure that everyone can always identify the business value of the things they spend their time on. Is this project right for our strategy and markets? Is this feature ever going to be used? I’ve spent time in three very large (>20,000 people) organisations at various times in my career, totalling sixteen years in various roles in software development. And only twice did I work on projects that were actually delivered into the hands of users. The remainder all seemed to be great ideas to someone, I’m sure. But the waste incurred was huge. This is the muda of working on the wrong stuff, and very often it begins with an email…
In Excellence Is Not Enough Bill Waddell tells the story of Rittal, a German company with a plant in the US. The company has executed a startling turnaround in three years by implementing lean thinking from top to bottom:
“What particularly impressed me is that they approached lean from the opposite direction than that taken by most companies. They started with their organizational structure and made value streams the formal way they work, instead of the old functional departments the rest of us cannot seem to get past. They scrapped the data systems, thinking that just talking to each other was better than using bad data.”
This fits very well with my experience of transitions to agile in the software development domain. It’s one thing to have a fast, effective, high-quality development team; it’s something better entirely when that team sits within a lean organisation. When the bottleneck is outside of development, it often seems as if we have a sportscar being pulled along by a donkey. And the message from Rittal is clear: as long as we retain functional divisions, the organisation is unlikely to improve much. Having an agile software development department is good; but having a software development department that is an effective part of a lean organisation is where the money is.
In The Productivity Metric James Shore adds fresh fuel to the “running tested features” debate. The article is well worth reading, and James concludes:
“There is one way to define output for a programming team that does work. And that’s to look at the impact of the team’s software on the business. You can measure revenue, return on investment, or some other number that reflects business value.”
I whole-heartedly agree with this conclusion, although in my experience there are a couple of hurdles to overcome:
First, the figures may be hard to come by or difficult to compute. This is particularly true of infrastructure software, or tooling used internally within the business. How do you compute the development team’s ROI from the impact they have on an admin clerk’s day-to-day work? There will always be the danger of monetising some intermediate measure, and thereby creating a local optimum. (If you have examples of this being solved successfully, please get in touch.)
And second, the development team may feel that the figures are too dependent on other business processes, such as sales or dispatch. Even where the software is the company’s product, the value stream is often not as short or highly tuned as one might wish; and so the developers may not wish to be measured against the whole stream’s effectiveness. In theory, rapid feature development and compelling usability ought to energise the sales team and the market to the point where demand dominates supply; in which case the value/time metric will work well. In practice, the necessary pull is too often absent. (Maybe in that case the metric is still valuable, telling us something about the whole value stream…)
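To make the value/time idea concrete, here is a minimal sketch. It assumes the hard part has already been done — that a revenue (or other business-value) figure can be attributed to each release — which, as noted above, is precisely the first hurdle. The function name and the numbers are invented for illustration.

```python
# Hypothetical sketch: business value delivered per unit time.
# Assumes each release already has a revenue figure attributed to it;
# that attribution is the difficult part discussed in the text.

def value_per_week(releases):
    """releases: list of (cumulative_weeks_elapsed, revenue) tuples."""
    total_weeks = max(weeks for weeks, _ in releases)
    total_value = sum(revenue for _, revenue in releases)
    return total_value / total_weeks

# e.g. three releases over twelve weeks
print(value_per_week([(4, 20000), (8, 15000), (12, 25000)]))  # 5000.0
```

Note that a number like this measures the whole value stream, not just development — which is exactly the second hurdle: developers may balk at being judged on sales and dispatch as well as on code.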
Bad things can happen to a project if the Product Owner doesn’t manage the story pile. And consequently bad things can happen to the software development team that is cast adrift in the doldrums.
There are many reasons why a software development team might lose productivity. One that is often overlooked – especially in discussions of extreme programming – is when the Product Owner fails to drive product from the team. Maybe the market research hasn’t been completed. Or maybe the project stakeholders can’t agree. Maybe there is no clear vision. Or maybe this is a research project with a “let’s see what turns out” approach. Whatever the cause, the effect is a Backlog with too few stories in it. Or worse, a Backlog with conflicting stories, and stakeholders who bicker at the iteration review meeting.
Textbooks and training courses tend to give developers the impression that the Backlog just appears, manna from heaven, perfect and always in just the right amounts. No-one told them what to do when that isn’t the case. Most teams will fail to notice the situation for some time, and will fall into destructive behaviour. The developers with a strong technical sense will push for the addition of ‘obvious and essential’ architectural stories to fill the gaps in the Backlog. At the same time those with a domain background will just ‘know’ what is the right direction, and will push for new stories to support their own personal hobby-horses. Those who need strong leadership will fall into despond, and the more vocal of these will cast a cloud over the team room and everyone in it.
If the story pile continues to be under-managed, the team’s worsening morale will inevitably affect productivity. (I attended an iteration review meeting recently in which the developers demonstrated no new features and the Product Owner came to the table with no new stories – and yet everyone knew there was loads to be done and very little time in which to do it! In order to fill the vacuum some of the developers began tabling their pet stories for consideration. Chaos ensued and, unnoticed, morale slipped another notch.)
Look at the problem from a slightly different perspective. All of the agile software development methods share their underlying principles with lean manufacturing. Productivity is based on short cycles and interlocking feedback loops. And the pace of development is set by the arrival rate of kanban — story cards in our case. The whole basis of agile software development relies on the Customer pulling features from the production team. And the rate of pull must be sufficient to drive the cycles and feedback loops. The supply process will disintegrate when there is insufficient demand.
As part of the preparation for my workshop, I spent a while today trying to create Value Stream Maps for some of the projects I’ve worked on. It was a revealing exercise, and I got really stuck on one point…
In some software development processes, the same group of people carry out more than one ‘phase’ – for example, on a recent project of mine the same small team did an initial design, then did all the development of that design, and then tested the results. The natural thing to do was to map this as three processes, with inventory piling up between them as features get designed or coded. But in all the books I’ve read so far, when a single operator runs a connected sequence of machines the whole sequence (flow) should be mapped as a single process. So should I draw one process or three?
On one hand, the design-code-test flow is a single process, because it has only one operator. There is no design coming down the line while testing is in progress, because the designers are manning the testing. But if I draw only one process, I can’t use the lean thinking tools to look inside it and make it better.
And on the other hand, inventory piles up between the steps, and this inventory could be reduced or removed by reducing batch sizes (ie. the number of features in each release). So I should draw three processes. But because they are operated by the same people, I now seem to lose the ability to calculate things like takt time.
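For readers unfamiliar with the term: takt time is classically the available working time divided by customer demand over the same period. A quick sketch (numbers invented) shows why the three-process map breaks the calculation — with one team operating all three steps, the same people would have to meet each step’s takt simultaneously, which the formula can’t express.

```python
# Takt time = available working time / units demanded in that period.
# Numbers are illustrative. With a single team running design, code and
# test, dividing the same working week across three "processes" leaves
# no meaningful per-process takt.

def takt_time(available_minutes, demand_units):
    """Minutes available per unit of customer demand."""
    return available_minutes / demand_units

week_minutes = 5 * 8 * 60  # one team, one working week
features_demanded = 10
print(takt_time(week_minutes, features_demanded))  # 240.0 minutes per feature
```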
I’m coming to the conclusion that takt time etc are not meaningful for software development, and so drawing three processes to explicitly show inventory will be the most useful approach. But I expect to encounter more difficulties next time, when I try to add in the fact that the same people also write the documentation…!