use tests as a failsafe

Yesterday I wrote about tolerating the red bar, when a few scary tests fail only occasionally. And today it strikes me that one of the contributors to the persistence of this situation is our tool-set.

As I said, we run the tests, get a red bar (sometimes), and then open up the test list to check that only the expected tests have red dots against them. If we see nothing untoward we just get on with the next task. But of course we aren’t really looking at the reason for the failure. Perhaps the tool itself is making it too easy for us. Or perhaps we interpret those dots too liberally?

So here’s the thought: What would life be like if our tools actually prevented check-ins while there are any failing tests? This would effectively “stop the line” until the problem was sorted out. And it would force us to address each problem while that part of the codebase was fresh in our minds.
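As a concrete sketch of what "stop the line" tooling could look like: a version-control pre-commit hook that runs the suite and refuses the check-in on any failure. This is only an illustration, not a specific tool recommendation — `run_checkin_gate` is a hypothetical name, and the command that actually runs your tests (`make test`, `rake test`, whatever) is a placeholder you'd fill in.

```shell
#!/bin/sh
# Sketch of a "stop the line" check-in gate. In Git this would live in
# .git/hooks/pre-commit; Subversion's pre-commit hook works similarly.
# run_checkin_gate is a hypothetical helper: "$@" is whatever command
# runs your test suite (e.g. "make test").
run_checkin_gate() {
  if "$@"; then
    echo "all green -- check-in allowed"
    return 0
  else
    echo "red bar -- check-in blocked" >&2
    return 1
  fi
}

# Demo with stand-in commands for a passing and a failing suite:
run_checkin_gate true
run_checkin_gate false || echo "blocked as expected"
```

Because the hook exits non-zero on a red bar, the commit simply doesn't happen — which is exactly the forcing function described above: the problem has to be sorted out while that part of the codebase is still fresh in your mind.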

I also suspect that peer pressure (“hey, I can’t check anything in now!”) might quickly cause us to develop a culture in which we tried to eradicate the root causes of test failures, instead of relying on CruiseControl to “deodorise” our stinky practices…

(If you’ve tried this I’d love to hear your experiences. Drop me a line.)

