carnival of the agilists, 6-sep-07

Welcome to the latest edition of the Carnival of the Agilists, the blogroll that takes an oblique slice through current events in the agile blogosphere. This week’s theme is: visualization.

First up is Kenji Hiranabe’s InfoQ article entitled Visualizing Agile Projects using Kanban Boards. Hiranabe describes and explores a number of different ways of visualizing the current state of a project, including the niko-niko calendar that charts the team’s moods: “a Japanese creation, showing team member’s mood for each day. Everyone puts a smiley mark onto their own calendar after the day’s work, before leaving the team room. It looks at the project from the viewpoint of member’s mental health and motivation.” (Jason Yip wants to try this with South Park avatars!) An interesting survey, even though I’m not sure about the utility of translating these boards into a software system.

In Magnetized Teams Simon Baker uses the metaphor of magnetism to explain the value of having everyone in the team focused on the same things: “In a magnetized material the magnetic dipole moments are aligned in parallel and in the same direction. […] Imagine for a moment that every magnetic dipole moment is a person in a team. In a magnetized team everyone shares the same vision and pulls in the same direction, working together to achieve the same goal”. Simon has also turned his Agile Zealot’s Handbook into a helluva ride poster.

In Towards A Unified Model of Dependency Management Jason Gorman is looking for ways to model and visualize the spread of change through a software system. Does change spread among code snippets in the same way that forest fires spread? And how can we simulate the impact of change using simple models and games? “If we visualise our code as a network of nodes and connections, we can begin to get a handle on how this ripple effect might spread through the network, and what properties of the network might help to localise the effect. More importantly, we might be able to relate these network properties to software design principles and begin to build a general theory of dependency management that could be applied at the code level all the way up to the enterprise level.”
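The ripple effect Gorman describes can be played with in code. Below is a minimal sketch, entirely my own toy model rather than anything from his article: a dependency graph where a change propagates along each edge with some probability, much like a forest fire spreading between trees. The graph, node names, and propagation probability are all invented for illustration.

```python
# Toy model of change rippling through a dependency network.
# `dependents` maps each node to the nodes that depend on it; a change in
# a node forces a change in each dependent with probability p_propagate.
import random

def ripple(dependents, start, p_propagate=0.5, rng=None):
    """Return the set of nodes touched by a change starting at `start`."""
    rng = rng or random.Random()
    touched = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for dep in dependents.get(node, []):
            if dep not in touched and rng.random() < p_propagate:
                touched.add(dep)
                frontier.append(dep)
    return touched

# An invented dependency graph: ui, reports and api all depend on core.
graph = {
    "core": ["ui", "reports", "api"],
    "api": ["ui"],
    "reports": [],
    "ui": [],
}
print(sorted(ripple(graph, "core", p_propagate=1.0)))
# → ['api', 'core', 'reports', 'ui']
```

With `p_propagate=1.0` every reachable node is touched; lowering it localises the effect, which is exactly the property Gorman suggests good design should give us cheaply.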

In Continuous Integration as Quality Reflection Oren Ellenbogen reminds us all that making the state of the build visible – via a continuous integration server such as CruiseControl – can act as a spur to drive many of the steps of quality improvement in a software process: “The immediate value is priceless. the ability to SEE whether your source-code is stable enough to allow other programmers perform Get Latest and continue their work and the “fail-fast” attitude can save you a lot of time in the long run.”

And to continue with the benefits of continuous integration, Jay Flowers has been doing some systems thinking. In Loop Diagrams he presents a series of models showing the effects of delayed feedback, finishing with a seriously complex model of effects related to build frequency. Jay points out that this is just the beginning: “This diagram is by no means complete. It is meant to spark a dialogue on this subject. I would like it to be a resource for people who are evaluating possible changes to their environment or even just to understand what is going on in their existing environment.” I’ve found myself using systems models like these more and more frequently in recent months, and I believe our industry has a lot to gain from modelling and visualising the complex feedback systems inherent in our working environments; so I will be watching the development of Jay’s models with great interest…
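The cost of delayed feedback is easy to demonstrate numerically. Here is a minimal sketch of my own devising (not one of Jay’s diagrams): a team adjusts its effort towards a target based on a measurement that is `delay` steps stale. With instant feedback the system converges smoothly; with a lag it overshoots and oscillates.

```python
# Toy simulation of a feedback loop with delayed observation.
# All numbers (gain, target, step counts) are invented for illustration.
def simulate(delay, steps=20, gain=0.8, target=10.0):
    history = [0.0]  # the quantity being controlled, one value per step
    for t in range(1, steps):
        # React to an observation that is `delay` steps out of date.
        observed = history[max(0, t - 1 - delay)]
        correction = gain * (target - observed)
        history.append(history[-1] + correction)
    return history

prompt = simulate(delay=0)  # converges smoothly towards the target
stale = simulate(delay=5)   # overshoots wildly, then oscillates
```

The same dynamic is why a continuous integration server that reports in minutes drives different behaviour from a nightly build: shrink the delay and the loop settles instead of thrashing.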

And finally…

Deb Hartmann notes that “Agilists are more likely to gather in a pub or cafe to exchange ideas, than to write a formal paper. As a result, the number of small local gatherings held within the international Agile community is staggering” – so she’s created a public AgileEvents Calendar. I’ve already added the next AgileNorth event – why not add yours now!

To suggest items for a future carnival – especially from a blog we haven’t featured before – email us. As ever, this and all previous editions of the Carnival are catalogued at the Agile Alliance website. Look out for the next edition of the Carnival around September 20th, hosted by Pete Behrens.

systems thinking for borrowed time

Last week in the Carnival of the Agilists I linked to Emmanuel Gaillot’s article about borrowing time as a way to effect guerilla process improvement. Willem van den Ende has followed up on this by creating diagrams of effects to model the original article. Recommended reading.

designing metrics at SPA2006

The second session I attended at SPA2006 was Designing metrics for agile process improvement, run by Jason Gorman and Duncan Pierce. Their thesis is that metrics are difficult to design, and making a mistake can lead an organisation down the wrong path entirely.

To demonstrate this, they divided us into four groups and assigned us each a “goal”. My group’s goal was “more frequent releases”. We had to brainstorm metrics that might act as measures of an organisation’s progress towards that goal, and then pick one to carry forward to the remainder of the exercise. We spent some time thrashing around wondering why an organisation might have such a goal, and finally we plumped for the metric average time in days between releases.

At this point, one person in each group took his group’s metric to the next table, where that group (minus one) would attempt to “game” it. The objective of this part of the afternoon was to make the metric look right while the organisation was clearly doing something very undesirable behind the scenes. Our table was joined by Sid, bringing along a metric whose stated goal was “fewer bugs in the product” and whose measure was number of bugs per function point per release. During the next half-hour we had great fun trying to find ways to write more bugs, while still having fewer per function point. These included testing the previous release really hard to fill up the bugs database, then hardly testing future releases at all; and making it really difficult for users to report bugs. Meanwhile another group was gaming our own metric, by producing zero-change releases and by saving up changes and releasing them in rapid-fire batches.

Next, if I recall, we had to re-design our original metrics in the light of the panning they had received in the hands of the gamers. We ended up measuring the goal of “more frequent releases” using the metric days per story point between customer feature request and customer acceptance. We tried to phrase it so that it averaged out over easy and difficult features, and so that the count forced us to release the feature to an end user. This neatly side-steps all attempts at gaming (that we could discover), and is roughly equivalent to throughput – or to Kent Beck’s software in process metric.
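For concreteness, here is a sketch of how that final metric might be computed. The feature records below are invented for illustration; the workshop itself never got as far as code.

```python
# Hypothetical computation of the redesigned metric:
# days per story point between customer feature request and acceptance.
from datetime import date

# (story points, date requested, date accepted by an end user) — made-up data
features = [
    (3, date(2006, 1, 2), date(2006, 1, 14)),
    (1, date(2006, 1, 9), date(2006, 1, 12)),
    (5, date(2006, 1, 16), date(2006, 2, 5)),
]

def days_per_story_point(features):
    total_days = sum((accepted - requested).days
                     for _, requested, accepted in features)
    total_points = sum(points for points, _, _ in features)
    return total_days / total_points

print(round(days_per_story_point(features), 2))  # → 3.89
```

Because the clock only stops at customer acceptance, and the total is divided by story points, the metric averages out over easy and difficult features – which is what frustrated the gamers.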

For the workshop’s final segment, we recognised that a single metric by itself is rarely sufficient to direct process improvement. Change one thing for the better and something else may well slip out of line. Jason and Duncan’s thesis is that we need a system of metrics, each of which balances the others in some way. So as a final exercise, we began an attempt to take the metrics of all four groups and weave them into a system, showing which measures reinforced which others etc. For me this is the key observation from the whole afternoon, and I would like to have spent more time here. I had never really seen the attraction of Systems Thinking, but these models derived from the need for metrics do seem to be a good application of the technique. Food for thought indeed…