lowering the water level

In Lowering the Water Level Dan Markovitz over at Superfactory draws a sharp analogy between inventory levels in a factory and the time available to a knowledge worker:

“Imagine a value stream or a production process as a river. Reducing the inventory in the process – “lowering the water level” – exposes the “rocks” that represent all of the hidden costs and waste in production. Only by revealing those rocks can you improve the process and reduce the waste.

This metaphor works for knowledge workers, too. In this case, however, their key inventory item is time. Having too much time to do one’s work hides the waste and inefficiencies in the process”

Dan exhorts us to lower the water level by restricting the time we have available for knowledge tasks. Not only will this tackle Parkinson’s Law head-on, it will also encourage us to look for more efficient ways to carry out our regular tasks. And although I didn’t use the same metaphor as Dan, this is the motivation behind my ten-minute rules for software developers: restrict the time “allowed” between check-ins, for example, in order to discover new ways of incrementing the working system in small chunks.

scrum in manufacturing

Hal Macomber is a lean construction consultant who is currently developing a responsibility-based planning approach for a firm that designs high-tech manufacturing plants. The resulting process will be a hybrid of lean’s Last Planner System with Scrum. And Hal is managing the development of that process using Scrum. He has even hired a ScrumMaster from the world of software development to help with the iterative focus of the project.

I believe this is the kind of application for which Scrum is ideally suited. It brings a clear and simple structure to the process of developing something, and expects the application’s domain to provide the tools and techniques that will deliver value each iteration.

I’ll be watching with interest as Hal’s experiment unfolds.

the big DIP

Several times today my online reading has brought me to Henrik Mårtensson’s Kallokain blog. Henrik has a great knack of taking concepts from lean manufacturing or the Theory of Constraints and creating worked examples to show how they play out in day-to-day software development.

Take for example what Henrik calls the DIP – design-in-process – the software development equivalent of inventory. DIP is any work that has been begun, but not yet shipped to end users.

“An easy, and highly visible, way to measure DIP, is to make a Project Control Board. Use a whiteboard, or a large mylar sheet on a wall. Make a table with one column for each hand-off in your development process. […] At the start of each iteration, write sticky notes for each story (use case, feature description, or whatever you use). […] Whenever a developer begins to work on a new story, the developer moves the story from the stack of sticky notes to the first column in the table. The developer is then responsible for moving the story to a new column each time the story goes into a new phase of development. […] The sticky notes in the table are your DIP. Track it closely. If you have a process problem, you will soon see how sticky notes pile up at one of your process stages.”

This fits with everything I’ve written about task boards, but Henrik then goes on to show how to calculate the current value of your DIP – i.e. the investment you have made in the unshipped work on the board. In his worked example, a requirements change causes 20 story points worth of DIP to be scrapped; and as each story point costs 1,000 EUR to develop, the requirements change means we must write off 20,000 EUR of development costs.
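Henrik’s arithmetic is easy to sketch in code. In this sketch the stage names and story sizes are hypothetical; only the 1,000 EUR per story point comes from his worked example:

```python
COST_PER_POINT = 1_000  # EUR per story point, from Henrik's example

# The control board: unshipped stories (sized in story points)
# at each hand-off stage. Stage names and sizes are hypothetical.
board = {
    "analysis": [3, 5],
    "coding":   [2, 8, 1],
    "testing":  [13],
}

def dip_value(board):
    """Total investment tied up in unshipped work on the board."""
    points = sum(p for stage in board.values() for p in stage)
    return points * COST_PER_POINT

print(dip_value(board))  # 32 points -> 32,000 EUR

# A requirements change scrapping 20 story points of DIP
# writes off 20 * 1,000 = 20,000 EUR of development cost.
print(20 * COST_PER_POINT)
```

Counting the sticky notes column by column like this also shows where they pile up, which is exactly the process-problem signal Henrik describes.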

It is worth noting that some of the 1,000 EUR per story point is “carrying costs” for the work that was scrapped. Just as manufacturing inventory costs money to store and move around, unshipped software (not to mention uncoded designs) has to be regularly read, searched, built and tested. It was there all the time, slowing us down; we had to step around it, and we would have tripped along more lightly if it had never been there. Looked at another way: with lower DIP inventory our cost per story point would be lower; equivalently, we could achieve higher throughput for the same expense.

(Of course, there are two obvious reactions to this write-off cost: make changes really difficult, or reduce inventories and cycle times so that DIP is always lower. Each has its sphere of applicability, I guess…)

agile, agile and more agile

In Lean, Lean and More Lean Kevin Meyer bemoans the apparent trend in manufacturing companies to adopt “lean” as a fashion item:

“It may just be me, but it appears that the number of companies citing lean manufacturing in their financial reports, news releases, and general news articles has been increasing. Of course many of them really have no idea what lean is really about, or perhaps they only understand the waste reduction pillar without even knowing about respect for people. Lean is apparently becoming a requirement, or even an analogy, for success.” — Kevin Meyer

Substitute the word “agile” where Kevin uses “lean” and his whole article (indeed the whole Superfactory blog) becomes a commentary on the current state of the software industry too.

carnival of the agilists, 17-may-07

Pete Behrens has edited the latest edition of the carnival, which includes links to a nicely balanced pot-pourri of recent agile and post-agile blog posts. I’m particularly pleased that Pete has highlighted Esther Derby’s reminder of the lean principle that we must always focus on optimising the whole system, and not be distracted by creating local maxima just because it may be easy to do so.

(Subscribe now to receive notice of all future agile Carnivals.)

carnival of the agilists, 3-may-07

When it comes to agile software development my preference swings heavily towards the influence of lean. The two central principles of lean are eliminate waste and respect people – although the second of these seems to be too often forgotten in the stories we read of lean (manufacturing) transitions. The agile movement (I hesitate to use that word in the present climate) has a strong tradition of demanding respect for people, arising from the manifesto’s exhortation to value individuals and interactions over processes and tools. And as I sifted through the agile blogosphere again this week I found that tradition very much to the fore. So this week’s carnival is a single-topic issue, bringing together a smattering of your thoughts on “individuals and interactions”…

To get us in the mood, in Respect People Alan Shalloway brings together a few pithy quotes, while in Individuals and interactions over processes and tools Simon Baker attempts to unpick what that particular plank of the agile manifesto means in practice. Dave Nicolette asks Is agile’s greatest strength also its most significant risk factor? and suggests that emphasising people over process is both liberating and risky.

The openness of agility is forcing us to (re-)discover some of the deeper foundations of effective communication. One of these is trust, and in I Told You So Ed Gibbs discovers one of the side-effects of not being trusted. (Trust has also turned out to be the theme of Clarke Ching’s Rolling Rocks Downhill book, which he would like you to help him rename.) And in Attachment Employment Jack Vinson points out that knowledge management initiatives will yield poor results when the workforce doesn’t trust its employer.

Speaking of knowledge management, in Osmotic communication – keeping the whole company in touch Tom Scott discusses the idea of using IRC (Internet Relay Chat) to promote information flow throughout a company, and specifically to provide a live commentary on what’s happening to version-controlled resources. A fascinating thought experiment, and I’m very interested to hear from any group who try it.

When it comes to working together, Jeremy Miller finds many ways in which Self Organizing Teams are Superior to Command n’ Control Teams (although some of his readers appear to disagree). And Jon Miller at Gemba Panta Rei discusses how to use Skill Matrices as a key part of the knowledge management in a lean organisation.

And finally, after all that reading, something completely different. Why not try simulating variation in task estimates, as suggested by Clarke Ching? Try it a couple of times, then imagine that Heads equates to getting good luck on a task (so it finishes early) and Tails equates to getting bad luck (so the task finishes late). What does the simulation say about plans and planning?
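Clarke’s coin-flip experiment is easy to reproduce in code. Here’s a hypothetical sketch, assuming a chain of dependent tasks each planned at ten days, where heads finishes two days early and tails two days late – and where an early hand-off is never passed downstream, because the next task starts no earlier than its scheduled date anyway:

```python
import random

def simulate_chain(n_tasks=10, planned=10, swing=2, seed=None):
    """Simulate a chain of dependent tasks. A task starts at the
    later of its scheduled start and its predecessor's actual
    finish, so upstream good luck is absorbed while bad luck
    propagates. Returns (actual finish, planned finish) in days."""
    rng = random.Random(seed)
    elapsed = 0    # actual finish of the previous task
    scheduled = 0  # planned finish of the previous task
    for _ in range(n_tasks):
        scheduled += planned
        # heads: finish early; tails: finish late
        duration = planned + (-swing if rng.random() < 0.5 else swing)
        start = max(elapsed, scheduled - planned)
        elapsed = start + duration
    return elapsed, scheduled

total, plan = simulate_chain(seed=42)
print(total, plan)
```

Run it a few times: even though heads and tails are equally likely, early finishes are swallowed at each hand-off while late finishes accumulate, so the chain’s expected finish drifts past the plan – which is, I think, the point Clarke wants us to take away about plans and planning.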

If you have something that you want to see in a future carnival – especially from a blog we haven’t featured before – email us at agilists.carnival@gmail.com. All previous editions of the Carnival are referenced at the Agile Alliance website. The next carnival is due to appear around May 17, hosted by Pete Behrens.

agile set-based design

As you know, I’ve always tended toward the “lean” school of agile software development, even though sometimes I’ve found parts of the mapping difficult to envisage. One area I never understood clearly until recently is set-based or concurrent design, in which competing potential solutions to a problem are developed in parallel until it is absolutely necessary to decide among them. Why and when would it make sense to do such a thing during software development?

My initial answer to that question was “YAGNI”. By having code only for the requirements we have scheduled (and which are therefore definitely needed), we leave the product in a better state to respond to whatever comes next. If we have pursued YAGNI thoroughly, our current codebase represents the intersection of all possible decisions in the future; we might therefore be said to have developed all of these possible futures in parallel up to the present moment.

Which is probably valid, but unconvincing. Then a recent thread on one of the lean/agile lists had me thinking about the problem again. In discussing motor car design – specifically the Toyota Prius – Mary Poppendieck said:

“I think that the trick is to determine what is, in fact, easy to change later, and what will not be easy to change, and spend some time considering those things that are going to be very expensive to change later. And, of course, the trick is also to keep such things to a minimum – through the use of layers, services, etc.”

Then I understood. I’m currently just starting up a new project to develop an application to help with my business. Before writing any code and before writing any stories, I thought about what existing process(es) will be helped / enhanced / automated by this new software, and therefore how it needs to be accessed. I decided that I want to access the software from a browser, for various reasons. And I expect that decision will be relatively cheap to change later, so I made it quickly and moved on.

But then I have to decide what technology to use in implementing the thing. Should I use Java, which I know very well, or is this a chance to try Rails? Either way, this will be a very hard decision to reverse, so I need to make it carefully. I need to know that this concrete decision has been made correctly before I invest any long-term effort in the project. Consequently, I’m currently in the middle of a fairly large spike exploring what Rails has to offer. This is concurrent engineering (even though it’s proceeding serially, because there’s only one of me).

I know that “architecture” can be a dirty word in some places, but that’s what I’m doing right now: making those few decisions that will cost the most to change later. I’m exploring the options by producing product-quality software, and I will soon arrive at the point when I am forced to choose. This will be Mary’s “last responsible moment”. The decision will be made in the light of concrete evidence, and at that point either Java or Rails will be discarded in favour of the other.

So that’s now my answer to the question I posed above: Spikes are the agile equivalent of Toyota’s set-based design. Make all decisions as late as possible, and make them based on evidence from concrete experiments. In my case I do have to choose a development language & tools, and the decision will be costly to reverse. So I create spikes to help me decide, because the cost of being wrong is greater than the cost of the spike.

Bill Wake mentions both of the above interpretations in this article.

test-driven installation

Although I’ve been administering (to) a Joomla site for a while now, I recently had the opportunity to install Joomla from scratch for the first time. The process was remarkably smooth, and with a little help from friends around the web it was done in no time at all. And surely one of the keys to the simplicity of the installation was this: After untarring the distribution files, the next step is to fire up a browser and load the URL of the installation directory, which shows the first page of the Joomla installation process. The page is essentially a test report; in my case it was mostly green, with a couple of red markers that showed things I needed to fix before installation could proceed.

This is rather cool – test-driven installation – and it’s becoming more common these days (I saw the same thing when I installed MovableType for this blog). The page tells me what the software expects from its environment, and also where I currently fail to meet those expectations. Neat. In the Joomla case I had test failures telling me that some files weren’t writeable, and that I had no MySql support in PHP. I fixed those and refreshed the page – green! Onto the next page in the Joomla installation sequence, and the installation worked fine.

As an aside, this is one case of a genuinely test-driven process. The distributed package is shrink-wrapped, intended to be installed many times under varying conditions, and the tests are a kind of poka-yoke. Their aim is to ensure that the upstream process (preparing your environment) is completed correctly without passing errors on to the later step. (Contrast this with the example-driven processes we use in “test-driven development”. Here we create sample situations in order to assist – and constrain – the design of something new, during a process that essentially occurs only once.)

one foot in the past?

[I wrote this piece on May 12th 2007. I found it today in my Drafts folder, and it still feels relevant. What do you think?]

While collating this week’s Carnival of the Agilists I very much wanted to break with tradition. In the usual format we use the four values from the Agile Manifesto as the main headings in the digest. These headings (“people over process” etc) represent the rallying cry of the agile movement, and express a set of desires in adversarial terms: we prefer this compared to that. That kind of thinking was all well and good in 2001, when the great and good in Snowbird felt they had a war to win. But I grow less comfortable with that approach over time, and I feel the need for something different.

Moreover, I wanted to lift the digest’s eyes a little, away from the nitty-gritty of what colour task cards to use and onto the reasons I feel agile software development is important and necessary: delivering business value. I wanted to change the headings to the three metrics suggested by Mary Poppendieck a while ago:

  1. reduce cycle time;
  2. increase business case realisation; and
  3. increase net promoter score.

Guess what: I couldn’t do it. In particular, I couldn’t fill those first two sections. The blogosphere is full of the minutiae of task-based planning, test-driven development, customer collaboration and so on. But there’s very little being said about business drivers or measuring benefits. I have found a small number of bloggers who write about cycle time – usually from a Theory of Constraints perspective – such as Pascal van Cauwenberghe and Clarke Ching. And there are a number of good blogs in the “user experience” category – notably Kathy Sierra, the folks at 37signals and those at Planet Argon.

But compared to the hundreds of blogs about task cards (and I’ve been as guilty as any), those mentioned above are a drop in the ocean. It’s as if most of the software development community is fixated on “being” agile, instead of being fixated on delivering. So come on, prove me wrong. By all means talk about what you do and how you do it, but try also to relate that to why you do it – map everything back to those metrics (or anything similar). Let me report in my next turn at the Carnival that we’ve lifted our eyes towards our goal.

feedback, flow and friction

Brad Appleton has coined this neat epithet, which distills the essences of Agile (maximise feedback), Lean (maximise flow) and the Theory of Constraints (minimise friction). I like it a lot, and it does neatly characterise those three improvement approaches.

(If I were to be pedantic I might say that agile and lean are aimed at minimising cycle time – the period between order and delivery – while TOC is largely about reducing overproduction. But that would be another gross over-simplification, so I won’t say it. And I certainly won’t mention the difference between manufacturing and product development.)