interviewed at scottish ruby conf

Way back in March of this year, while I was attending the Scottish Ruby Conference, Werner Schuster interviewed me for InfoQ. The video has been online since August, and this is me finally getting around to telling you about it; watch it here. I talk about Reek, Refactoring in Ruby, and agile development in general.

This was the first time I’ve done a video interview, so I’m keen to hear your comments on how it went!

reek mailing list

Discussion about Reek seems to be popping up all over the place at the moment, so I’ve created a Google group where problems, ideas and suggestions can be shared in one place. I’ll be making release announcements there and seeking feedback from you on my ideas for Reek’s future.

So if you’re using Reek, hop over to http://groups.google.com/group/ruby-reek and let’s get the ball rolling.

discussion on TDD and code analysis tools

During the last few weeks I’ve been participating in an email discussion about the relationship between static analysis tools (such as Reek) and TDD. The discussion was instigated by Pat Eyler, and he has now organised and posted the results on his On-Ruby blog.

To help me get an initial handle on the topic, I found it extremely useful to list the main areas of discomfort I feel when using Reek in my own work. Then for each of these (there were two) I constructed conflict clouds to get a balanced view of the problem. I don’t have the clouds to hand now, but you can read the results in Pat’s article. I’ll definitely be using that technique again, because it very quickly helped me to organise my thoughts. (Note to self: throw nothing away.)

the “anchor adapter”

Warning: academic theorizing and hypothesizing follow. Oh, and half-baked pontification.

I’ve just finished refactoring Reek to make way for a major new chunk of functionality (configuration files), which I’ll release soon, once I’ve had time for some thorough testing.

The refactoring needed to accommodate the change was huge, occupying much of my free time over the course of two months. Pretty much the whole of the tool’s original architecture has been revised. Why so big and so complex? Because the original code relied heavily on constants and class methods; they helped me get the early versions written quickly, but they represented a significant barrier to long-term flexibility. I’ve been wondering why that should be; why do constants and class methods stand in the way of adaptable test-driven code?

I think the answer lies in viewing the application through the lens of Hexagonal Architecture. Let me explain…

It seems to me that constants, global variables, classes, class methods and the like all live in a space that’s “anchored” to the runtime environment, which is itself a singleton. Anything anchored to that singleton is going to hinder the independence and isolation of unit tests, and also reduce the application’s flexibility in well-known ways. So far so standard. Now, suppose we model the singleton as a notional point that is external to the application. Hexagonal Architecture tells us we must access the singleton via an Adapter — in this case, an Adapter provided by the programming language and/or runtime. I’ll refer to the singleton as the application’s Anchor, and therefore claim that it is accessed through language features in an Anchor Adapter.
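To make that concrete, here’s a minimal Ruby sketch of anchored code; the names are hypothetical, not Reek’s actual internals:

```ruby
# Hypothetical sketch: a domain object that reaches through the
# "Anchor Adapter". Both the constant and the class method are
# resolved against the runtime's single global namespace.
class SmellConfig
  MAX_STATEMENTS = 5

  def self.limit_for(smell_name)
    MAX_STATEMENTS
  end
end

class LongMethodCheck
  def examine(method_statements)
    method_statements.length > SmellConfig.limit_for('LongMethod')
  end
end

# Any unit test of LongMethodCheck implicitly tests SmellConfig too:
# the hard-wired class reference leaves nothing to substitute or stub.
```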

Now, I believe that the Domain Middle should not depend directly on Adapters. So any code that makes direct use of the Anchor Adapter must be considered part of an Adapter, outside the Domain Middle — and hence inherently outside the space where unit tests live comfortably.
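Continuing the hypothetical sketch above, pulling the anchored access out to the edge leaves the domain object depending only on a value it is given:

```ruby
# The same check with the Anchor access pushed outside the Domain Middle:
# the domain object is handed its limit; only the outer (Adapter) layer
# need ever touch the global namespace.
class LongMethodCheck
  def initialize(limit)
    @limit = limit
  end

  def examine(method_statements)
    method_statements.length > @limit
  end
end

# A unit test can now isolate the check completely:
check = LongMethodCheck.new(2)
puts check.examine([:stmt1, :stmt2, :stmt3])   # => true
```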

Which is why constants and class methods add friction to unit testing.

Or rather: This model fits nicely with my penchant for Hexagonal Architecture, and lets me justify my unease at testing in and around class methods. And probably adds nothing to our understanding of software development.

a different use for CruiseControl

Until recently I had always thought of CruiseControl as a backstop: something that would catch silly mistakes during commit, or certain classes of environmental dependency in the build, tests or deployment. Then, during December, two things happened to change that view, and now I use CruiseControl quite differently.

As you know, I’ve been developing this tool called Reek, which looks for code smells in Ruby source files and prints warnings about what it finds. Reek has been developed largely test-first, and now has a couple of hundred tests. The first thing that happened was that I began to be irritated by the huge runtime of the full test suite — 15 seconds is a long time when you’re sitting waiting for feedback, and I noticed I would sometimes dip off into the browser while waiting for green (or red). I needed the coverage, but without the waiting around.

The second thing that happened was that, during one of those interminable 15-second waits, I read about the Saff squeeze. Then I tried it, enjoyed it, and found that the tests it produced were significantly different from those I wrote “myself”.
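For anyone who hasn’t met it, the Saff squeeze (described by Kent Beck, and named for David Saff) progressively inlines the code under test into a failing test, pruning whatever still passes, until only a small, fast, failing test remains. A hypothetical before-and-after in Ruby might look like this:

```ruby
require 'test/unit'

# Hypothetical code under test, with a deliberate defect in the helper.
class Invoice
  def initialize(items)
    @items = items
  end

  def total
    @items.inject(0) { |sum, item| sum + price_of(item) }
  end

  def price_of(item)
    item[:price] * item[:quantity] + 1   # the defect: a stray "+ 1"
  end
end

class SqueezeDemo < Test::Unit::TestCase
  # Step 0: the original high-level test, which fails.
  def test_total
    assert_equal 20, Invoice.new([{:price => 10, :quantity => 2}]).total
  end

  # After squeezing: total's body has been inlined into the test and the
  # parts that pass have been pruned away, leaving a fast test that
  # points straight at the defective helper.
  def test_price_of
    item = {:price => 10, :quantity => 2}
    assert_equal 20, Invoice.new([]).price_of(item)
  end
end
```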

So now I use CruiseControl differently. I began by splitting the tests into two suites: fast and slow. The fast tests comprise over 90% of the total, and yet run in under 1 second; I use them as normal during my development cycle — no more waiting around! The slow tests consist of runs of the whole tool on known source code; there are only a dozen of them, but they take 14 seconds to run. (You might call them “unit tests” and “integration tests”, and indeed that’s what the two suites comprise right now. But the key attribute is their speed, not their scope.)
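A Rakefile sketch of that split might look something like this; the task names and directory layout are mine, not Reek’s:

```ruby
require 'rake/testtask'

namespace :test do
  # Fast suite: the bulk of the tests; run on every edit during development.
  Rake::TestTask.new(:fast) do |t|
    t.pattern = 'test/fast/**/*_test.rb'
  end

  # Slow suite: whole-tool runs against known source files.
  Rake::TestTask.new(:slow) do |t|
    t.pattern = 'test/slow/**/*_test.rb'
  end
end

# What the build server runs: both suites.
task :cruise => ['test:fast', 'test:slow']
```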

So now the slow tests only get run by CruiseControl, not by me (it does run the fast tests too, btw). I’ve delegated the 14 seconds of waiting, and now my development state of flow is much easier to maintain. And if the slow tests fail, it means I have one or more missing fast tests — so I whip out the Saff squeeze to go from one to the other. I’m using CruiseControl as a batch test runner, and the Saff squeeze as a mechanical tool to help generate a missing fast test from a failing slow test. (The process of squeezing also tends to tell me something quite interesting about my code. More on that another time.)
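If the build server is CruiseControl.rb (an assumption on my part), pointing it at a combined task is tiny; this builds on the hypothetical :cruise task from the Rakefile sketch above:

```ruby
# cruise_config.rb (hypothetical): tell CruiseControl.rb which Rake task
# constitutes the build; here, the combined fast + slow suites.
Project.configure do |project|
  project.rake_task = 'cruise'
end
```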

I’m sure this use of CruiseControl isn’t new or original, but I’m pleased with how well it works. Do let me know if you try something similar.