Evolving the kanban board

My wife and I are planning to move house. We aren’t sure where we want to move to, or indeed how much we have to spend. Naturally, though, we want to get the highest possible selling price for our current house so that we have as many options as possible. So we called in a “house doctor” to help.

After she (the house doctor, not the wife) had recovered from the initial shock of seeing how we have customised a fairly standard 4-bedroom house into a 6-bedroom eclectic disaster, she produced a report containing a list of cosmetic improvements we should make in order to attract prospective buyers. The list is long, with jobs for me, my wife, and our local handyman. We needed to make the project manageable in a way that would allow us all to contribute as and when we have the time. So I found an old whiteboard in the garage and made this:


As you can see, I drew a very rough plan of the house, including sections for upstairs, downstairs, the attic, and the outside. We then wrote a small sticky note for every improvement suggested by the house doctor (blue) and some that we had always wanted to do ourselves (yellow).

When we finish a task, we simply remove its sticky note. For example, you can see that we have already finished all of the tasks needed in the Office (priorities, right?).


And why am I telling you all this? Because this is what I recommend teams do for their software projects. When you pick up the next feature, draw an architecture diagram and populate it with sticky notes. The resulting board is a “map” showing the feature and the tasks that need to be done in order to deliver that feature (thanks to Tooky for that analogy).

  • The diagram you draw for Feature A might differ from the one you draw for Feature B, because you might be touching different parts of your estate. That’s cool. The diagram needs to be the one that’s most appropriate for the work you’re about to do.
  • The visual representation of your architecture allows more people to be engaged in discovering the tasks that need to be done to deliver the feature.
  • And it allows everyone, often including non-programmers, to see and understand the scope and impact of what is to be done.
  • Sometimes doing a task will spawn others: things we didn’t consider when we did the original feature break-down; things we’ve learned by making changes or completing spike tasks; things we or the Product Owner couldn’t envisage sooner. That’s fine — we simply add and remove sticky notes as we think of them (and look for opportunities to slice off a separate feature that we can push back onto the ideas heap). The whole thing is quite dynamic, and yet very well controlled at the same time.
  • If possible I like to include testing in the scope of the stickies, possibly adding a few explicit testing task stickies where necessary.
  • As you finish each task (whatever “finish” means for your team), either remove the task’s sticky note or mark it with a big green tick. For our house doctor board, we’ve decided that removing the stickies is best. But for software teams, I generally recommend adding big green ticks to completed tasks. This allows anyone to see how much progress you have made through the current feature, and which areas still need more work.
  • Sometimes the distribution of ticked and un-ticked stickies will suggest opportunities for splitting the feature and releasing a subset earlier than planned.
  • Hold stand-up meetings around the diagram as often as you need, and certainly whenever anything significant changes. (Some of the teams I coach have been known to hold informal stand-ups 4-5 times each day.) The architecture diagram helps facilitate and focus these discussions, and makes it much easier for everyone to contribute.
  • Note that all of the above works best when the team has a single feature in flight. A WIP limit of one. Single piece flow.
  • This approach works well when combined with the 5-day challenge.
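The board-as-map idea above can be sketched in code. This is a minimal model, assuming hypothetical area and task names, that captures the two things the board gives you: stickies grouped by architectural area, and per-area progress that can suggest a releasable subset.

```python
# A minimal sketch of a feature board as a "map": areas of the
# architecture diagram populated with task stickies. All area and
# task names below are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Sticky:
    text: str
    ticked: bool = False  # the "big green tick" when the task is finished

@dataclass
class FeatureBoard:
    areas: dict = field(default_factory=dict)  # area name -> list of stickies

    def add(self, area, text):
        self.areas.setdefault(area, []).append(Sticky(text))

    def tick(self, area, text):
        for s in self.areas[area]:
            if s.text == text:
                s.ticked = True

    def progress(self):
        # Fraction of tasks ticked per area: the distribution of ticked
        # vs un-ticked stickies can suggest an early partial release.
        return {area: sum(s.ticked for s in stickies) / len(stickies)
                for area, stickies in self.areas.items()}

board = FeatureBoard()
board.add("web UI", "add search box")
board.add("web UI", "style results page")
board.add("database", "index the titles column")
board.tick("database", "index the titles column")
print(board.progress())  # {'web UI': 0.0, 'database': 1.0}
```

Because tasks spawn other tasks, `add` can be called at any time; the board stays dynamic while `progress` keeps the overall picture visible.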

As usual with the recommendations I write in this blog, this idea is probably not a new one. But it is highly effective, and I recommend you try it.


I gave a lightning talk on this topic at the Lean Agile Manchester meetup this week. There is a video (below), although unfortunately you can’t actually see what’s on the slides. So I uploaded the slides here.



carnival of the agilists, 6-sep-07

Welcome to the latest edition of the Carnival of the Agilists, the blogroll that takes an oblique slice through current events in the agile blogosphere. This week’s theme is: visualisation.

First up is Kenji Hiranabe’s InfoQ article entitled Visualizing Agile Projects using Kanban Boards. Hiranabe describes and explores a number of different ways of visualizing the current state of a project, including the niko-niko calendar that charts the team’s moods: “a Japanese creation, showing team member’s mood for each day. Everyone puts a smiley mark onto their own calendar after the day’s work, before leaving the team room. It looks at the project from the viewpoint of member’s mental health and motivation.” (Jason Yip wants to try this with South Park avatars!) An interesting survey, even though I’m not sure about the utility of translating these boards into a software system.

In Magnetized Teams Simon Baker uses the metaphor of magnetism to explain the value of having everyone in the team focused on the same things: “In a magnetized material the magnetic dipole moments are aligned in parallel and in the same direction. […] Imagine for a moment that every magnetic dipole moment is a person in a team. In a magnetized team everyone shares the same vision and pulls in the same direction, working together to achieve the same goal”. Simon has also turned his Agile Zealot’s Handbook into a helluva ride poster.

In Towards A Unified Model of Dependency Management Jason Gorman is looking for ways to model and visualize the spread of change through a software system. Does change spread among code snippets in the same way that forest fires spread? And how can we simulate the impact of change using simple models and games? “If we visualise our code as a network of nodes and connections, we can begin to get a handle on how this ripple effect might spread through the network, and what properties of the network might help to localise the effect. More importantly, we might be able to relate these network properties to software design principles and begin to build a general theory of dependency management that could be applied at the code level all the way up to the enterprise level.”

In Continuous Integration as Quality Reflection Oren Ellenbogen reminds us all that making the state of the build visible – via a continuous integration server such as CruiseControl – can act as a spur to drive many of the steps of quality improvement in a software process: “The immediate value is priceless. the ability to SEE whether your source-code is stable enough to allow other programmers perform Get Latest and continue their work and the “fail-fast” attitude can save you a lot of time in the long run.”

And to continue with the benefits of continuous integration, Jay Flowers has been doing some systems thinking. In Loop Diagrams he presents a series of models showing the effects of delayed feedback, finishing with a seriously complex model of effects related to build frequency. Jay points out that this is just the beginning: “This diagram is by no means complete. It is meant to spark a dialogue on this subject. I would like it to be a resource for people who are evaluating possible changes to their environment or even just to understand what is going on in their existing environment.” I’ve found myself using systems models like these more and more frequently in recent months, and I believe our industry has a lot to gain from modelling and visualising the complex feedback systems inherent in our working environments; so I will be watching the development of Jay’s models with great interest…

And finally…

Deb Hartmann notes that “Agilists are more likely to gather in a pub or cafe to exchange ideas, than to write a formal paper. As a result, the number of small local gatherings held within the international Agile community is staggering” – so she’s created a public AgileEvents Calendar at upcoming.org. I’ve already added the next AgileNorth event – why not add yours now!

To suggest items for a future carnival – especially from a blog we haven’t featured before – email us at agilists.carnival@gmail.com. As ever, this and all previous editions of the Carnival are catalogued at the Agile Alliance website. Look out for the next edition of the Carnival around September 20th, hosted by Pete Behrens.

the big DIP

Several times today my online reading has brought me to Henrik Mårtensson’s Kallokain blog. Henrik has a great knack of taking concepts from lean manufacturing or the Theory of Constraints and creating worked examples to show how they play out in day-to-day software development.

Take for example what Henrik calls the DIP – design-in-process – the software development equivalent of inventory. DIP is any work that has been started, but not yet shipped to end users.

An easy, and highly visible, way to measure DIP, is to make a Project Control Board. Use a whiteboard, or a large mylar sheet on a wall. Make a table with one column for each hand-off in your development process. […] At the start of each iteration, write sticky notes for each story (use case, feature description, or whatever you use). […] Whenever a developer begins to work on a new story, the developer moves the story from the stack of sticky notes to the first column in the table. The developer is then responsible for moving the story to a new column each time the story goes into a new phase of development. […] The sticky notes in the table are your DIP. Track it closely. If you have a process problem, you will soon see how sticky notes pile up at one of your process stages.
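The scheme Henrik describes can be sketched as a tiny simulation. The column names here are hypothetical examples of hand-offs; the point is that everything not yet shipped is DIP, and counting stickies per column exposes the stage where work piles up.

```python
# A sketch of Henrik's Project Control Board: one column per hand-off,
# stories move left to right, and a pile-up in one column flags a
# process bottleneck. Column names are hypothetical examples.

columns = ["analysis", "development", "test", "done"]
board = {c: [] for c in columns}

def start(story):
    # A developer picks a story from the stack and puts it in column one.
    board[columns[0]].append(story)

def advance(story):
    # Move the story to the next column when it enters a new phase.
    i = next(i for i, c in enumerate(columns) if story in board[c])
    board[columns[i]].remove(story)
    board[columns[i + 1]].append(story)

def dip_count():
    # Everything not yet shipped is design-in-process: track it closely.
    return {c: len(board[c]) for c in columns if c != "done"}

for s in ["story-1", "story-2", "story-3"]:
    start(s)
advance("story-1")
print(dip_count())  # {'analysis': 2, 'development': 1, 'test': 0}
```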

This fits with everything I’ve written about task boards, but Henrik then goes on to show how to calculate the current value of your DIP – ie. the investment you have made in the unshipped work on the board. In his worked example, a requirements change causes 20 story points worth of DIP to be scrapped; and as each story point costs 1,000 EUR to develop, the requirements change means we must write off 20,000 EUR of development costs.
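Henrik’s worked example is simple arithmetic, and worth making explicit: the value of the DIP is the unshipped story points times the cost per point, and a write-off is the same calculation applied to the scrapped points.

```python
# Henrik's worked example as arithmetic: the value of the DIP on the
# board, and the write-off when a requirements change scraps part of it.

COST_PER_STORY_POINT = 1_000  # EUR, the figure from the worked example

def dip_value(unshipped_story_points):
    """Investment currently tied up in unshipped work."""
    return unshipped_story_points * COST_PER_STORY_POINT

def write_off(scrapped_story_points):
    """Development cost lost when a change scraps in-process work."""
    return scrapped_story_points * COST_PER_STORY_POINT

print(write_off(20))  # 20000 EUR, as in Henrik's example
```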

It is worth noting that some of the 1,000 EUR per story point is “carrying costs” for the work that was scrapped. Just as manufacturing inventory costs money to store and move around, unshipped software (not to mention uncoded designs) has to be regularly read, searched, built and tested. It was there all the time, slowing us down; we had to step around it, and we would have moved more lightly had it never been there. Looked at another way: with lower DIP inventory our cost per story point would be lower; equivalently, we could achieve higher throughput for the same expense.

(Of course, there are two obvious reactions to this write-off cost: make changes really difficult, or reduce inventories and cycle times so that DIP is always lower. Each has its sphere of applicability, I guess…)

reporting during the daily stand-up

In On Scrum and the curse of the three questions Lasse Koskela throws doubt on one of the central pillars of Scrum – the three questions that everyone on the team must answer during the daily stand-up meeting.

“I think the way we talk about “answering three questions” contributes to the difficulty of getting people away from reporting progress to the Scrum Master (who almost unanimously is the same guy who used to be the project manager before adopting Scrum). […] Apparently words like “answer” and “question” tend to be associated with ideas closer to being interrogated rather than communicating information.”

This is a really good point, and hits the nail right on the head. Whenever I visit a team I always ask to attend their daily stand-up meeting. And time after time I see the team members treating the session as time to report progress to their boss. In some cases the team members even look at that person as they speak.

That’s not what the meeting is for. The daily stand-up is the team’s primary opportunity to self-organise. The outcome should be that the team has collectively agreed what to work on today, and what impediments they need their ScrumMaster to remove along the way.

So I like Lasse’s rewording of the “three questions”, but I prefer to go further. As I said in an earlier post, focus on the task board. Each team member’s report should be directed through the board: “Yesterday Chet and I completed this [points at a card in the Completed column] and this [points at another]. Annie and I started this [points at a card in the In Progress column], and we’ll continue with that today.”

Note the use of language here: “worked on” is vague to the point of redundancy. Note also the physicality: point, move cards around, update the burndown chart. Around eighty percent of all human communication is non-verbal; we should capitalise on that fact to make daily stand-up meetings more effective.

developing a sense of urgency (revisited)

I posted a copy of developing a sense of urgency to the scrumdevelopment group on Yahoo!, and it sparked off a really great discussion. Among the wise words, Laurent Bossavit posted this:

I think the important thing you brought to the table with 2-point estimates is that an estimate is just what the probability density graph implies – it’s not one number, but a distribution.

What the team might have forgotten is that if you consistently give out conservative estimates, then 90% of the tasks should be finished way early, and therefore being mostly “on time” is very poor performance.

Much better to have a system where being “on time” is a challenging but achievable objective. No blame accrues for being “late”, and some praise is due for being merely “on time”.

Earlier, Simon Baker asked: “Could you elaborate on why you think the new estimates gave your team a sense of urgency?” To which I replied:

I suppose part of the answer lies in the fact that I didn’t tell them about the buffer ;-) Seriously. I only talked about the “luck” thing: With conservative estimates and Parkinson, we never benefit from good luck (ie. when something is easier than we had imagined it would be). But we’re always hit by the slightest hint of bad luck – and in an iteration of 20 tasks, one of them is dead certain to go badly. The team had complained – at several successive iteration retrospectives – that they got squeezed at the end of the iteration, no matter what they tried. So I talked about luck. No mention of buffers. No graphs or fancy science (actually, this was the hardest part for me, a Maths PhD ;-).
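The “luck” argument lends itself to a small Monte Carlo sketch. This is an illustration under assumed lognormal task durations, not a model of any real team: with conservative (roughly 90th-percentile) estimates plus Parkinson’s Law, a task never finishes early – it takes max(actual, estimate) – so good luck is thrown away while bad luck still bites; and across 20 tasks, some bad luck is near-certain.

```python
# A sketch of the "luck" argument. Task durations are drawn from a
# hypothetical lognormal distribution. With padded estimates and
# Parkinson's Law a task takes max(actual, estimate), so easy tasks
# never finish early; without padding, luck averages out.

import random
random.seed(1)

def iteration_length(estimate, parkinson, n_tasks=20, trials=10_000):
    total = 0.0
    for _ in range(trials):
        for _ in range(n_tasks):
            actual = random.lognormvariate(0, 0.5)  # hypothetical duration
            total += max(actual, estimate) if parkinson else actual
    return total / trials  # average iteration length

# ~90th percentile of lognormvariate(0, 0.5) is exp(0.5 * 1.2816) ~= 1.90
conservative = iteration_length(estimate=1.90, parkinson=True)
median_plan = iteration_length(estimate=1.00, parkinson=False)
print(conservative, median_plan)  # padded iterations run far longer
```

The simulation just restates Laurent’s point numerically: if estimates are conservative, finishing merely “on time” means a great deal of good luck was silently wasted.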

Another big part of the answer is that, once we got into the iteration, they saw they were finishing tasks. The team room started buzzing a couple of days into the first iteration we tried this, because tasks were moving rapidly into the ‘Completed’ column on the board. And as more cards moved, the team realised that they were in a space they’d never been to before, and the buzz increased. I guess the Big Visible Chart showing completed tasks acted as a positive feedback loop.

As I say, the whole thread makes very interesting reading…

only count finished tasks

Your team has the ritual that the burndown chart is updated ceremonially at the end of each daily stand-up meeting. In today’s stand-up someone points to a card and says “I’m halfway through doing this one.” So you’re tempted to chart the burn of half that task’s points. Don’t. Get into the habit of scoring successes only for completed tasks.

There’s at least one major reason why reporting partial progress is bad: it’s a lie. Just what is 50% of a programming task anyway? Or 80%? Or 90%? What if an unforeseen difficulty arises while developing the last 10%? In nearly twenty years of leading teams and managing projects I’ve learned to believe in only three states for a development task: “not started”, “started”, and “done”. Everything else (including “I’ll be done tomorrow”) is optimistic or dangerous or both. So I only count progress when a task is complete.
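The rule reduces to binary scoring, which a few lines make concrete. The task list here is a hypothetical example; the point is that a task burns down all of its points or none of them.

```python
# A sketch of the "only count finished tasks" rule: the burndown
# credits a task's points only when it is "done", never partially.
# The task list is a hypothetical example.

tasks = [
    {"points": 3, "state": "done"},
    {"points": 5, "state": "started"},      # "halfway through" scores nothing
    {"points": 2, "state": "not started"},
]

def remaining_points(tasks):
    # Binary scoring: a task burns down all of its points or none.
    return sum(t["points"] for t in tasks if t["state"] != "done")

print(remaining_points(tasks))  # 7
```

When the chart is updated at the end of the daily stand-up, only the first task counts; the 5-point task contributes nothing until it is actually complete.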