TDD and random numbers in ruby

I’m about to TDD a Ruby class whose behaviour will involve the use of random numbers. I expect the algorithms within the class to evolve as I implement new stories, so I don’t want to design and build a testing mechanism that will be brittle when those changes occur. But before I can write the next example, I need to figure out how to control the random numbers needed by the code under test. Off the top of my head I can think of four options:

  1. One way would be to set a fixed seed just before each test and then simply let the random algorithm do its thing. But for each new behaviour I would need to guess the appropriate seed, which is likely to be time-consuming. Furthermore, the relationship between each magic seed and the specific behaviour tested is likely to be obscure, possibly requiring comments to document it for the future reader. And finally, if the algorithm later evolves in such a way as to consume random numbers in a different order or quantity, the seed may turn out to be inappropriate, leading to broken tests or, worse, tests that pass but which no longer test the planned behaviour.
  2. Alternatively I could redefine Kernel::rand — but that could potentially interfere with stuff I don’t know about elsewhere in the object space.
  3. Within the class under test I could self-encapsulate the call to Kernel::rand, and then override the encapsulating method in a subclass for the tests (sketched just after this list). But then I’m not testing the class itself.
  4. Finally, I could parameterize the class, passing to it an object that generates random numbers for it. This option appears to give me complete control, without being too brittle or trampling on anyone else in the object space.
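
To make option 3 concrete, here’s roughly the shape it would take (the class names are purely illustrative); the tests would have to exercise the subclass rather than the real class, which is exactly the problem:

class Shuffler
  def pick(items)
    items[random(items.length)]
  end

  # self-encapsulated call to Kernel::rand
  def random(ceil)
    rand(ceil)
  end
end

# the tests would exercise this subclass, not Shuffler itself
class PredictableShuffler < Shuffler
  def random(ceil)
    0
  end
end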

So I’ll go with option 4. Right now, though, I’m not sure what interface the randomizer object should provide to the calling class. Looking ahead, I expect I’ll most likely want to select a random item from an array, which means selecting a random integer in the range 0...(array.length). And for this next example all I’ll need is a couple of different randomizers that return 0 or 1 on every request, so I’ll simply pass in a proc:

obj.randomize_using { |ceil| 0 }

And if ever I need to provide a specific sequence of varying random numbers, I can do it like this:

rands = [1, 0, 2]
obj.randomize_using { |ceil| rands.shift || 0 }
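
For completeness, the receiving side of this interface can be as simple as remembering the block and calling it whenever a random index is needed. A rough sketch (the class and method names other than randomize_using are only illustrative, and it falls back to Kernel::rand when no block has been supplied):

class Chooser
  def randomize_using(&block)
    @randomizer = block
  end

  def choose_from(items)
    items[random(items.length)]
  end

  # delegate to the injected block, or to Kernel::rand by default
  def random(ceil)
    (@randomizer || lambda { |c| rand(c) }).call(ceil)
  end
end

chooser = Chooser.new
chooser.randomize_using { |ceil| 0 }
chooser.choose_from([:a, :b, :c])   #=> always :a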

Later that same day…

The class I’m developing has evolved quite a lot and split into three. And suddenly, with the most recent change, three of the tests have begun failing. A little investigation reveals that the code now consumes a random number at a point where previously it didn’t, so some of my randomizer procs supply inappropriate values. It turns out that two of the failing examples actually boil down to a single test of a piece that has since been refactored out into another class; by refactoring the tests to match, I can remove their dependency on random numbers altogether. And the last broken test is fixed by providing a randomizer that respects the ceiling passed to it (not an unreasonable request):

obj.randomize_using { |ceil| [2, ceil-1].min }

This works, and I get no more surprises during the session.

insurance for software defects

The more I think about it, the more astonished I become. Maintenance contracts for (bespoke) software: buying insurance against the possibility that the software doesn’t work.

I know the consumer electronics industry does the same, and I always baulk at the idea of spending an extra fifty quid in case the manufacturer cocked up. I wonder what percentage of purchasers buy the insurance? And I wonder what percentage of goods are sent back for repairs? Perhaps the price could be increased by 10% and all defects fixed for free. Or perhaps the manufacturer could invest a little in defect prevention.

It seems to me that software maintenance contracts are an addiction. Software houses undercut each other to win bids, and then rely on the insurance deal to claw back some profits. So no-one is incentivised to improve their performance, and in fact the set-up works directly against software quality. Perhaps it’s time now to break that addiction…

If a software house were able to offer to fix all defects for free, would that give them enough of a market advantage to pay for the investment needed to prevent those defects? Is “zero defects or we fix it for free” a viable vision? (Does any software house offer that already?) And how many software companies would have to do it before the buyer’s expectations evolved to match?

As an industry, do we now know enough to enable a few software houses to compete on the basis of quality?

carnival of the agilists, 5-jul-07

This latest edition of the Carnival focuses on what is rapidly becoming a cornerstone of agile methods: Test-Driven Development. Or Test-Driven Design if you prefer. Or Example-Driven Development. Or Behaviour-Driven Development.

First up, Jeremy Miller discusses Designing for Testability: “I have yet to see a codebase that wasn’t built with TDD that was easy to test.”

New blogger Eric Mignot gets a lot of Pleasure from introducing a developer to the joys of TDD: “He had discovered that even if you don’t use it from the beginning of your project, TDD is the most fun and efficient way to correct bugs.”

And David Laribee speaks about the suspension of disbelief to those who have never tried TDD and are afraid to make the leap of faith: “Check your disbelief at the door, stay in the cycle, and wait for the payoff. A bit further down the path, when you’ve finished the story and tests are passing, you’ll all of a sudden have a working, shippable, F5-able feature without ever having hit the debugger.”

Brian Marick laments the standard pitfall of demonstrating TDD using an existing codebase: “The usual reason it fails is that the code wasn’t written test-first, so it’s hard to do anything without instantiating eighteen gazillion objects.” So Brian has created a (draft) TDD workbook, in which you get to add features to a real application that has already been developed for you using TDD; potentially the best of both worlds.

When it comes to writing good tests (as opposed to just writing tests), many bloggers – most recently James Newkirk – have mentioned or reviewed Gerard Meszaros’ xUnit Test Patterns, which Eugene Wallingford says is really three books in one: “I have the best of two worlds: a relatively short, concise, well-written story that shows me the landscape of automated unit testing and gets me started writing tests, plus a complete reference book to which I can turn as I need”.

If you prefer behaviour-driven development (BDD) to TDD, Dan North has recently developed rbehave for Ruby: “Inspired by [rspec], I wanted to find a simple and elegant way in Ruby to describe behaviour at the application level.” Moments later Joe Ocampo ported the idea to NUnit for C# too! The BDD world is fast-moving right now, with new tools and new experience posts popping up all over; is it the “next big thing”?

And finally, please nominate someone for one of this year’s Gordon Pask awards. If you don’t know what they are, or what they mean to the software development community, read the thoughts of Laurent Bossavit, one of last year’s winners. And then nominate someone from your corner of that community.

(While putting this carnival together I was shocked to discover that many recent “TDD” blog posts involved writing code and then writing tests. As so often, it seems the buzzword has greater velocity than the practice itself…)

To suggest items for a future carnival – especially from a blog we haven’t featured before – email us at agilists.carnival@gmail.com. All previous editions of the Carnival are referenced at the Agile Alliance website. The next carnival is due to appear around July 19, hosted by Pete Behrens.