In these last few posts I’ve been trying to edge towards a deeper understanding of what agile coaches do. These are thought experiments. My aim is to give our clients a means to assess when and whether they are deriving value from our services. And I believe this will also give us a better basis for making successful improvements to the effectiveness of the software teams we meet.
Several times, both very recently and in the dim past, I’ve seen folks advocate running software process improvement exercises as if they were agile projects. What would that be like in practice? I imagine we would identify process changes we wish to make, creating a story card for each. There’d be a daily stand-up meeting, at which the change team discusses progress, obstacles and plans. There’d be iteration reviews and planning games, and a backlog of change stories, and the change team would chart the project’s velocity, presumably in some kind of “change points”.
What would that change project’s backlog look like? Well, if we’re in the business of implementing change to a recipe, I guess it might have a series of stories, each of which puts in place a specific brick of the whole edifice. Sounds like we know where we’re heading already. We’re in danger of turning the software team into a local optimum within the organisation as a whole. We may even shift the bottleneck away from development altogether. Then what?
But if we’re in the business of improving effectiveness there will be no backlog. There can be no backlog. Each change (possibly a small suite of changes) will have been worked out as a response to the overall organisation’s current greatest need, discovered by asking: “where’s the bottleneck?”, “where’s the current constraint on throughput?”, “what is the common root cause of our major incidents?”
And after we’ve implemented this change, what next? We must measure, observe, inspect. We must let it settle in and then measure our new levels of throughput and expense. Then ask the questions again – looking for where the bottleneck has moved to now. This is hansei-kaizen, an endless cycle of inspect-adapt-inspect-adapt, as Scrum puts it. And at each cycle it is the overall organisation’s Goal pulling each change through the system.
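The “where has the bottleneck moved to now?” question has a very simple mechanical core: in a serial value stream, the constraint is the stage with the lowest throughput. Here is a toy sketch of that (all stage names and figures are hypothetical, purely for illustration):

```python
# Illustrative weekly throughput per stage of a serial value stream.
# These names and numbers are invented, not from any real organisation.
stages = {
    "analysis": 12,
    "development": 9,
    "testing": 5,
    "deployment": 11,
}

# The constraint is the slowest stage; the whole stream's throughput
# is capped by it, so that is where the next change should be aimed.
bottleneck = min(stages, key=stages.get)
print(bottleneck)           # the stage to improve next
print(stages[bottleneck])   # the cap on whole-system throughput
```

After a change settles in, re-measure the stage throughputs and run the same question again: the bottleneck will usually have moved, and the cycle repeats.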
Now, what about the individual change “stories”, what will they look like? Will each have acceptance tests? Can we assign each a monetary value? Questions for another day…
What’s the Right Way™ to carry out an agile transformation? Indeed, is an agile transformation always the Right Thing to do?
On the one hand, an agile coach may be hired to “implement” Scrum or XP, because that’s the way the organisation has decided to go. Alternatively, one may be engaged to help improve an organisation’s productivity, say, without regard to whether the resulting system behaviours will be recognisable as this or that agile method.
In the domain of lean manufacturing, productivity improvement itself can – and indeed must – be seen as a pull activity:
“Taiichi Ohno remarked that TPS is very much like the scientific method of experimentation. When this is not kept in mind, the result is “push” style Lean (Do as you are told), rather than “pull” Lean implementation (What is the biggest problem?)”
This is a very clear answer. “Lean” is the name given after the fact to a very effective type of system that was “grown” piecemeal by kaizen – continuous gradual change. (Ohno states elsewhere that all of the key aspects of the Toyota Production System arose individually as solutions to the root cause of some specific incident.) Lean was not the goal, it is simply a name for the resulting shape of the organisation.
Similarly in software development, we must not let the “agile” banner get in the way of our goal, which is the creation of an effective software department. The body of agile knowledge is invaluable as a source of ideas and experience when we are faced with effectiveness problems to solve. But it must not be used dogmatically. “Extreme at all costs” may well be better than we were before, but without the pull from the overall system’s Goal it is unlikely to be sustainable in the long term.
What interests me is effective software development; that is, software departments that contribute to the overall organisation’s Goal of present and future profitability. Such effectiveness must be achieved gradually, by repeatedly resolving the root cause of the biggest problem. The Goal will pull changes through the system, if we let it.
In The Productivity Metric James Shore adds fresh fuel to the “running tested features” debate. The article is well worth reading, and James concludes:
“There is one way to define output for a programming team that does work. And that’s to look at the impact of the team’s software on the business. You can measure revenue, return on investment, or some other number that reflects business value.”
I whole-heartedly agree with this conclusion, although in my experience there are a couple of hurdles to overcome:
First, the figures may be hard to come by or difficult to compute. This is particularly true of infrastructure software, or tooling used internally by the business. How do you compute the development team’s ROI from the impact they have on an admin clerk’s day-to-day work? There will always be the danger of monetising some intermediate measure, and thereby creating a local optimum. (If you have examples of this being solved successfully, please get in touch.)
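To make the difficulty concrete, the arithmetic itself is trivial; the hard part is that every input below is a guess. A minimal sketch, with entirely hypothetical numbers for the admin-clerk tooling case:

```python
def roi(benefit, cost):
    """Return on investment expressed as a fraction of cost."""
    return (benefit - cost) / cost

# Hypothetical inputs: suppose an internal tool saves an admin clerk
# two hours a week. Every figure here is invented for illustration,
# and each one is exactly the kind of number that is hard to defend.
hours_saved_per_week = 2
clerk_hourly_rate = 25      # currency units per hour
weeks_per_year = 48

benefit = hours_saved_per_week * clerk_hourly_rate * weeks_per_year  # 2400
cost = 2000                 # development cost of the tool

print(round(roi(benefit, cost), 2))   # 0.2
```

The danger described above is visible in the inputs: monetising “hours saved” is already an intermediate measure, and optimising it can create exactly the local optimum the paragraph warns about.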
And second, the development team may feel that the figures are too dependent on other business processes, such as sales or dispatch. Even where the software is the company’s product, the value stream is often not as short or highly tuned as one might wish; and so the developers may not wish to be measured against the whole stream’s effectiveness. In theory, rapid feature development and compelling usability ought to energise the sales team and the market to the point where demand dominates supply; in which case the value/time metric will work well. In practice, the necessary pull is too often absent. (Maybe in that case the metric is still valuable, telling us something about the whole value stream…)
In A Kaizen Event for the Holidays Joe Ely gives us the nutshell version of (what sounds like) a very effective process improvement exercise in his plant. But his main point is much deeper:
“… our marketing team is central to achieving kaizen success. Why? By keeping a growing backlog of work for our team. […] If the business isn’t growing, kaizen won’t work.”
(Joe goes on to suggest that hugging a marketing person might be a good thing to do today. Can I pass…?)
The ‘pull’ from downstream is essential in so many ways. Without it, many upstream activities are simply guesses as to what might be needed. And when workers see their managers guessing, low morale and low productivity are going to follow. (I mused about the very same thing for software development in the product owner must pull and the product owner must pull (revisited) last Spring.)
Bad things can happen to a project if the Product Owner doesn’t manage the story pile. And consequently bad things can happen to the software development team that is cast adrift in the doldrums.
There are many reasons why a software development team might lose productivity. One that is often overlooked – especially in discussions of extreme programming – is when the Product Owner fails to pull product from the team. Maybe the market research hasn’t been completed. Or maybe the project stakeholders can’t agree. Maybe there is no clear vision. Or maybe this is a research project with a “let’s see what turns out” approach. Whatever the cause, the effect is a Backlog with too few stories in it. Or worse, a Backlog with conflicting stories, and stakeholders who bicker at the iteration review meeting.
Textbooks and training courses tend to give developers the impression that the Backlog just appears, manna from heaven, perfect and always in just the right amounts. No-one told them what to do when that isn’t the case. Most teams will fail to notice the situation for some time, and will fall into destructive behaviour. The developers with a strong technical sense will push for the addition of ‘obvious and essential’ architectural stories to fill the gaps in the Backlog. At the same time those with a domain background will just ‘know’ the right direction, and will push for new stories to support their own personal hobby-horses. Those who need strong leadership will fall into despondency, and the more vocal of these will cast a cloud over the team room and everyone in it.
If the story pile continues to be under-managed, the team’s worsening morale will inevitably affect productivity. (I attended an iteration review meeting recently in which the developers demonstrated no new features and the Product Owner came to the table with no new stories – and yet everyone knew there was loads to be done and very little time in which to do it! In order to fill the vacuum some of the developers began tabling their pet stories for consideration. Chaos ensued and, unnoticed, morale slipped another notch.)
Look at the problem from a slightly different perspective. All of the agile software development methods share their underlying principles with lean manufacturing. Productivity is based on short cycles and interlocking feedback loops. And the pace of development is set by the arrival rate of kanban – story cards in our case. The whole basis of agile software development relies on the Customer pulling features from the production team. And the rate of pull must be sufficient to drive the cycles and feedback loops. The supply process will disintegrate when there is insufficient demand.
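The arithmetic of that last point is worth making explicit: in a pull system the team can only complete work the Customer has actually pulled, so throughput is capped by the lower of demand and capacity. A toy sketch (the function and its figures are hypothetical, for illustration only):

```python
def simulate(weeks, demand_per_week, capacity_per_week):
    """Toy pull-system model: cards arrive from the Customer each week,
    and the team completes at most its capacity from the backlog."""
    backlog, done, idle_weeks = 0, 0, 0
    for _ in range(weeks):
        backlog += demand_per_week            # story cards pulled by the Customer
        completed = min(backlog, capacity_per_week)
        backlog -= completed
        done += completed
        if completed < capacity_per_week:
            idle_weeks += 1                   # capacity wasted for lack of pull
    return done, idle_weeks

# Demand (2 cards/week) below capacity (5): throughput is capped by the
# pull rate and the team idles every week - the vacuum described above.
print(simulate(10, 2, 5))   # (20, 10)

# Demand (7 cards/week) above capacity (5): the team is fully loaded.
print(simulate(10, 7, 5))   # (50, 0)
```

Of course no real team behaves this mechanically; the point of the sketch is only that when the pull rate falls below capacity, idle capacity appears, and it is that idle capacity which gets filled with pet stories and bickering.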