It’s bikesheds all the way down

TL;DR: Here’s an interesting hypothesis: In software development there are no nuclear reactors, only bikesheds hiding behind other bikesheds.

Yesterday the XP Manchester group began an attempt to test-drive a sudoku solver. You can follow along with the git repository at kirschstein/xpsudoku. In about 80 minutes we didn’t get far, and there was some concern about this in our retrospective at the end of the evening. To some of those present, it seemed we spent an inordinate amount of time messing around with string formatting, and deciding whether to represent the sudoku grids as strings

var input = @"
    - 2
    2 1";

or as 2-dimensional arrays

var input = new[,]
{
    {0, 2},
    {2, 1}
};

Many in the room felt that there were more pressing issues with the code, such as a dodgy conditional and some horrible hard-coded return values. There was a feeling that we had spent our final half-hour bikeshedding.
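To illustrate the kind of smell people had in mind, here is a purely hypothetical sketch of a fake-it solver; the group’s actual code is in the kirschstein/xpsudoku repository, not here:

// Hypothetical sketch only, not the code from the repository: a typical
// early-TDD fake-it implementation, with a dodgy conditional guarding a
// hard-coded return value until further tests drive them out.
public class Solver
{
    public string Solve(string puzzle)
    {
        if (puzzle.Contains("-"))   // the dodgy conditional, keyed off the raw input
            return "1 2\n2 1";      // the hard-coded answer for a 2x2 grid
        return puzzle;              // already-solved grids pass straight through
    }
}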

I’m not going to disagree. There are always many points of view, particularly with twenty people in the room contributing to the discussion. Much of the drive to try something other than strings came from me, for which I apologise to the group. And yet had I been working on my own I would have done the same. Here’s why.

While we were working on creating new tests and getting them to pass, we could only see one test on the projector screen at any time. I lost track of what tests we had, and at one point the group spent some time discussing “new” tests, only to discover that we already had the test in question. It seemed to me that we had a lot of similarity and duplication among the tests, which in themselves were conceptually quite simple. Left to my own devices I always invest the time in cleaning those up and introducing a single test with a table of input-output cases. I want to be able to easily see which cases are present and which are missing, and tabular tests do that well. Only then would I consider the code in the solver itself, by first looking for new test cases to drive out the hard-coded values etc. (See also Steve Freeman’s wonderful talk Given When Then considered harmful, in which refactoring a suite of tests reveals that there is a missing case.)
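As a concrete sketch of the tabular style I mean, here is roughly what I would aim for using NUnit’s [TestCase] attributes; the framework choice and the Solver API are my own assumptions, not necessarily what the group was using:

using NUnit.Framework;

[TestFixture]
public class SolverTests
{
    // Each row of the table is one input-output case; because the whole
    // table is visible in one place, missing cases are easy to spot.
    [TestCase("1", "1")]
    [TestCase("- 2\n2 1", "1 2\n2 1")]
    [TestCase("1 2\n2 -", "1 2\n2 1")]
    public void Solves(string puzzle, string expected)
    {
        Assert.AreEqual(expected, new Solver().Solve(puzzle));   // Solver is the (assumed) class under test
    }
}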

So for me, clean, readable tests are a prerequisite for working code.

Later, as I was walking back to the station to get my train home, another thought struck me. Isn’t the whole of TDD based on this kind of activity? Isn’t Refactor Mercilessly one of the core practices of XP? Indeed, isn’t it a premise of evolutionary design that sophisticated solutions will arise out of merciless refactoring? Does this mean that by paying attention to the details, great designs will emerge? The Agile Manifesto hints at this with:

Continuous attention to technical excellence and good design enhances agility.

Could this mean that in TDD there are no nuclear reactors, only bikesheds that need attention? And that by keeping the simple things clean and simple, sophisticated things can always emerge? I have no idea. It’s a big topic, and one that has already been discussed at length. I suspect there’s still a lot of research to be done into TDD and evolutionary design, and many new techniques to be discovered. This is in part why I am interested in connascence. I believe it can be grown into a tool that can help steer refactoring priorities. But all of that is in the future…

When I am programming I make a conscious effort to go slowly, paying close attention to details, particularly the clarity of the tests. Colleagues have often commented that I appear to be going very slowly — bikeshedding even. Possibly true. My Asperger’s makes me methodical, and I do enjoy that kind of programming. And yet overall I still create working software in good time.

Clean Code: what is it?

Recently I helped facilitate some discussion workshops on the topic of Clean Code. Each of the discussions seemed to be predicated on a belief that readability is the most important criterion by which to assess whether code is Clean. Indeed, the groups spent a lot of time discussing ways to establish and police coding standards and suchlike. While I agree that this can be useful, I felt the discussions missed the aspects of Clean Code that I consider to be the most important.

So I thought it might be useful here to attempt to describe what I mean by the term…

Continue reading

Connascence of Value: a different approach

Recently I wrote a series of posts in which I attempted to drive a TDD episode purely from the point of view of connascence. But as I now read over the articles again, it strikes me that I made some automatic choices. I explicitly called out my design choices in some places, while elsewhere I silently used experience to choose the next step. So here, I want to take another look at the very first step I took.

Continue reading

Peculiar algorithms

Am I over-thinking things with this Checkout TDD example? Or is there a real problem here?

Based on insightful input from Pawel and Ross, it is clear to me (now) that there is Connascence of Algorithm (CoA) between the currentBalance() method and the special offer object(s), because the method doesn’t give those objects any opportunity to make final adjustments to the amount of discount they are prepared to offer.
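To make the shape of that coupling visible, here is a hypothetical sketch; only currentBalance() and the idea of a multi-buy offer come from the original posts, everything else is my own invention:

using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch, not the code from the original posts.
public class MultiBuyDiscount
{
    private int itemsSeen;
    public void Accept(int price) { itemsSeen++; }
    public int Discount() { return (itemsSeen / 3) * 10; }   // a "3 for 2" offer at an assumed unit price of 10
}

public class Checkout
{
    private readonly List<int> prices = new List<int>();
    private readonly MultiBuyDiscount offer;

    public Checkout(MultiBuyDiscount offer) { this.offer = offer; }

    public void Scan(int price)
    {
        prices.Add(price);
        offer.Accept(price);   // the offer builds up its own view of the basket
    }

    public int currentBalance()
    {
        // Checkout decides when, and how often, the discount is calculated;
        // the offer never gets a final say. Both classes must agree on that
        // algorithm, and that agreement is the Connascence of Algorithm.
        return prices.Sum() - offer.Discount();
    }
}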

However, as things stand there is no requirement demanding that. Does that still mean the connascence exists? Or is it a tree falling in the forest, with no tests around to hear it?

Continue reading

Connascence: a mis-step

I had a very interesting discussion today with Ross about my recent connascence/TDD posts. Neither of us was happy about either of the solutions to the corruption problem with the special offer object. Even with the cloning approach, it still seems that both the Checkout and the MultiBuyDiscount have to collude in avoiding the issue; if either gets it wrong, the tests will probably fail.

After a few minutes, we realised that the root of the problem arises from the MultiBuyDiscount having state, and we began to cast around for alternatives. At some point it dawned on me that the origins of the problem go right back to the first article and the first couple of refactorings I did.
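By way of illustration (again, a hypothetical sketch rather than the original code), this is the kind of statefulness that causes the trouble: if calculating the discount changes the offer’s state, then asking for the balance twice gives different answers, and the cloning workaround exists only to hide that side effect from the Checkout:

// Hypothetical illustration of the problem, not the original code.
public class MultiBuyDiscount
{
    private int itemsSeen;

    public void Accept(int price) { itemsSeen++; }

    public int Claim()
    {
        var free = itemsSeen / 3;   // a "3 for 2" offer
        itemsSeen = 0;              // calculating the discount consumes the offer's state
        return free * 10;           // assumes a fixed unit price of 10
    }
}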

Continue reading