Martin Fowler’s article Mocks Aren’t Stubs presents a balanced comparison of two different styles of unit testing. According to Martin’s taxonomy I’m the type of chap who prefers “state-based” testing over “interaction-based” testing – that is, I prefer using stubs instead of mocks. And I increasingly encounter a downside of mocks that Martin doesn’t mention:
Suppose you’re working with legacy code. There are only a few tests, and those you already have are all large and complex, because the objects under test were never designed with decoupling in mind. You have to make a change to one or more of these objects, and you want to proceed test-first. The approach I’ve seen with increasing frequency (probably due to the ready availability of NMock) is to “mock out” a few of the closely-coupled objects. This allows the TDDer to get on with the work, but does nothing to alleviate the underlying problem of the smelly design.
Here, mocks are being misused as “smart” stubs. I would much prefer to have seen the TDDer spend a little extra time refactoring the production code so that these tests – and all future tests – became easier and quicker to write. (The many excellent techniques in Michael Feathers’ Working Effectively with Legacy Code will help here.) So when I see mock objects being used as the easy way out of a tight corner, I seek instead to refactor away those nasty design smells.
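To make the distinction concrete, here is a minimal sketch of the refactoring alternative. All names (InvoiceCalculator, TaxService and so on) are hypothetical: the point is that once the dependency is extracted into a narrow seam, a plain hand-rolled stub supports state-based testing with no mocking framework at all.

```python
class TaxService:
    """Imagined production collaborator: slow, talks to a remote rate server."""
    def rate_for(self, region):
        raise NotImplementedError("hits a live server in production")

class InvoiceCalculator:
    """After refactoring, the dependency is injected through the constructor,
    so tests can substitute any object with the same narrow interface."""
    def __init__(self, tax_service):
        self._tax = tax_service

    def total(self, net, region):
        return net * (1 + self._tax.rate_for(region))

class StubTaxService:
    """State-based test double: canned data in, observable results out.
    Nothing to 'verify' afterwards -- we just assert on the answer."""
    def __init__(self, rates):
        self._rates = rates

    def rate_for(self, region):
        return self._rates[region]

# A test is now a one-liner with no expectation-setting ceremony:
calc = InvoiceCalculator(StubTaxService({"UK": 0.25}))
assert calc.total(100.0, "UK") == 125.0
```

The refactoring costs a little up front, but every future test of InvoiceCalculator gets cheaper, which is exactly the pay-off the mock-the-tangle shortcut forgoes.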
In A thought on mocking filesystems, Brian Marick provides yet another reason to think in terms of a hexagonal architecture. Discussing the writing of mock objects for huge components such as a filesystem, he writes:
“So it seems to me that a project ought to write to the interface they wish they had (typically narrower than the real interface). They can use mocks that ape their interface, not the giant one. There will be adapters between their just-right interface and the giant interface. Those can be tested separately from the code that uses the interface.”
Mocks/stubs and adapters are natural bedfellows, and definitely help to keep code clean and testable. I once worked on a project in which the “filesystem” was a JNDI directory service. And then later that was scrapped in favour of a plain old filesystem. How I wish I had known about hexagonal architectures then, because it was a real nightmare to disentangle the JNDI dependencies from the rest of the system.
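Marick’s suggestion can be sketched in a few lines. The names here (DocumentStore and its implementations) are invented for illustration: the application codes against the narrow interface it wishes it had, tests use a stub that apes only that interface, and a separately-tested adapter bridges to the giant real API.

```python
import os

class DocumentStore:
    """The 'just-right' interface: only what this application actually needs,
    far narrower than the real filesystem (or JNDI) API."""
    def save(self, name, text): ...
    def load(self, name): ...

class InMemoryDocumentStore(DocumentStore):
    """Stub that apes the narrow interface -- used when testing the
    application code, with no real filesystem in sight."""
    def __init__(self):
        self._docs = {}

    def save(self, name, text):
        self._docs[name] = text

    def load(self, name):
        return self._docs[name]

class FilesystemDocumentStore(DocumentStore):
    """Adapter from the just-right interface to the giant real one.
    Tested separately, against a real (temporary) directory."""
    def __init__(self, root):
        self._root = root

    def save(self, name, text):
        with open(os.path.join(self._root, name), "w") as f:
            f.write(text)

    def load(self, name):
        with open(os.path.join(self._root, name)) as f:
            return f.read()

# Application-level test: fast, in-memory, no adapter involved.
store = InMemoryDocumentStore()
store.save("report.txt", "hello")
assert store.load("report.txt") == "hello"
```

On the JNDI project described above, this shape would have reduced the later upheaval to writing one new adapter: the rest of the system, coded against the narrow interface, need never have known the storage technology changed.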
The standard three- or four-layer models of application architecture seem to dictate the direction of the dependencies between the various “kinds” of object in the system: The UI depends on the application layer, because the UI “drives” what happens “below”. The application layer depends on the business objects, which do all the domain-specific things. The business objects use (and hence depend on) the persistence layer and comms layer, both of which in turn use and depend on external APIs. It is perfectly natural to read a layered model in this way – indeed, that was its purpose.
And it is this very interpretation that has made so many applications difficult or impossible to test. Each of those perceived dependencies ends up enshrined in the code structure, and by stealth the design has cast the layered model in concrete. At which point someone like me comes along and exhorts the developers to improve their test coverage. I’m told: “We tried that, but it takes too long to manage the Oracle test data we’d need to test the GUI,” or something similar. Or the business decides to switch to a new database or enterprise comms architecture, requiring huge swathes of business logic to be re-engineered.
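The hexagonal alternative is to invert that last dependency: the business layer owns a port, and the persistence layer becomes an adapter implementing it. A minimal sketch, with invented names (AccountRepository, TransferService), showing why the Oracle test-data problem then evaporates:

```python
class AccountRepository:
    """Port owned by the business layer -- the persistence layer
    depends on this, not the other way round."""
    def balance_of(self, account_id): ...
    def store_balance(self, account_id, amount): ...

class TransferService:
    """Business logic, written purely against the port."""
    def __init__(self, repo):
        self._repo = repo

    def transfer(self, src, dst, amount):
        self._repo.store_balance(src, self._repo.balance_of(src) - amount)
        self._repo.store_balance(dst, self._repo.balance_of(dst) + amount)

class InMemoryAccountRepository(AccountRepository):
    """Test adapter: no Oracle, no test-data management, runs in microseconds.
    An OracleAccountRepository adapter would live at the edge and be
    tested separately against a real schema."""
    def __init__(self, balances):
        self._balances = dict(balances)

    def balance_of(self, account_id):
        return self._balances[account_id]

    def store_balance(self, account_id, amount):
        self._balances[account_id] = amount

repo = InMemoryAccountRepository({"alice": 100, "bob": 50})
TransferService(repo).transfer("alice", "bob", 30)
assert repo.balance_of("alice") == 70
assert repo.balance_of("bob") == 80
```

Switching databases now means writing one new adapter at the boundary; the business logic, and every test of it, is untouched.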