In order to use a callback API, one or more of our application classes must implement the callback method(s), and must therefore conform to an abstraction defined by the API’s provider. So our classes must depend on the API, which means the API can’t easily be mocked or stubbed. Instead, we have to treat our callback objects as part of the adapter for that API, and test the rest of the application by mocking or stubbing them.
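A minimal sketch of the idea, with invented names (there is no particular third-party API in the post): suppose some scheduler library demands a callback object responding to `on_tick`. We keep that callback in the adapter layer, and have it forward to a narrow, domain-owned listener that our unit tests can stub.

```ruby
# Adapter-layer callback: its method name (on_tick) is dictated by the
# hypothetical third-party API, so this class depends on that API's contract.
class SchedulerCallbackAdapter
  def initialize(listener)
    @listener = listener     # domain-owned collaborator, easy to stub
  end

  def on_tick(api_event)
    @listener.tick(api_event)  # translate and forward into the domain
  end
end

# In a unit test we never touch the real API: we stub the listener and
# drive the adapter directly.
recorded = []
listener = Object.new
listener.define_singleton_method(:tick) { |event| recorded << event }

SchedulerCallbackAdapter.new(listener).on_tick(:noon)
recorded  # => [:noon]
```

The application proper sees only the `tick` interface; everything that knows about the API’s callback conventions stays inside the adapter, where the post argues it belongs.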
In fact, now I’ve written it down, it’s trivially obvious, and hardly worth saying.
Warning: academic theorizing and hypothesizing follow. Oh, and half-baked pontification.
I just finished refactoring reek to make way for a major new chunk of functionality (configuration files), which I’ll release soon, once I’ve had time for some thorough testing.
The refactoring needed to accommodate the change was huge, occupying much of my free time over the course of two months. Pretty much the whole of the tool’s original architecture has been revised. Why so big and so complex? Because the original code relied heavily on constants and class methods; they helped me get the early versions written quickly, but they represented a significant barrier to long-term flexibility. I’ve been wondering why that should be; why do constants and class methods stand in the way of adaptable test-driven code?
I think the answer lies in viewing the application through the lens of Hexagonal Architecture. Let me explain…
It seems to me that constants, global variables, classes, class methods, etc. all live in a space that’s “anchored” to the runtime environment, which is itself a singleton. Anything anchored to that singleton is going to hinder the independence and isolation of unit tests, and also reduce the application’s flexibility in well-known ways. So far so standard. Now, suppose we model the singleton as a notional point that is external to the application. Hexagonal Architecture tells us we must access the singleton via an Adapter — in this case, an Adapter provided by the programming language and/or runtime. I’ll refer to the singleton as the application’s Anchor, and therefore claim that it is accessed through language features in an Anchor Adapter.
Now, I believe that the Domain Middle should not depend directly on Adapters. So any code that makes direct use of the Anchor Adapter must therefore be considered outside of the Domain Middle, and hence part of an Adapter — and hence also inherently outside the space where unit tests live comfortably.
Which is why constants and class methods add friction to unit testing.
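To make the friction concrete, here is an invented sketch (not code from reek) of the same responsibility written both ways — first anchored to the runtime singleton via a class method and a constant, then as an injectable instance:

```ruby
# Anchored version: reachable only through the global namespace. Any
# caller of ReportPrinter.print is wired to $stdout and MAX_WIDTH, and a
# unit test can't substitute a double without monkey-patching.
class ReportPrinter
  MAX_WIDTH = 80

  def self.print(text)
    $stdout.puts(text[0, MAX_WIDTH])
  end
end

# Injectable version: the caller depends only on whatever is passed in,
# not on anything anchored to the runtime environment.
class Report
  def initialize(printer, width: 80)
    @printer = printer
    @width = width
  end

  def print(text)
    @printer.puts(text[0, @width])
  end
end

# A unit test can now hand in a simple in-memory fake:
require 'stringio'
out = StringIO.new
Report.new(out, width: 5).print('hello world')
out.string  # => "hello\n"
```

In the second version the Anchor Adapter (here, `$stdout`) is touched only at the point where a real printer is wired in, and the rest of the code sits comfortably inside the space where unit tests live.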
Or rather: This model fits nicely with my penchant for Hexagonal Architecture, and lets me justify my unease at testing in and around class methods. And probably adds nothing to our understanding of software development.
In Edge Case, Bill de hÓra points out, as does Tony Coates, that a DataClass is not always a bad idea. Both are responding to Martin Fowler’s assertion that a DataClass is usually a sign of poor design. And both use serialised objects as a counterexample. I guess I’ve never thought of serialised objects as capable of having behaviour (since they generally live in media that don’t support it); and so perhaps I don’t think of them as being sufficiently objecty to count in the discussion. So to me it’s a non-disagreement.
More interesting to my eyes is Bill’s use of the term “application edges”:
“But when it comes to working at the application edges – at the network boundary, over HTTP, between applications, intra-app messaging – well, “I have a doubt.” […] There’s a lot of system edges these days, and agreeing where the edges are is hard.”
There are indeed a lot of edges these days. And trying to think about them in the context of a layered model of the application’s architecture is likely to cause brain-ache. Which is why I keep pushing the hexagonal architecture model. It moves the application and domain objects into the centre, and surrounds them with adapters that connect to the rest of the world. In this architecture, the “edges” have a natural symmetry, and their role in the application becomes easier to visualise.
So in the context of hexagonal adapters, some of Bill and Tony’s data classes will likely be GoF Mementos, others are probably Whole Value objects, and the rest probably really are blobs of data. I would expect the Whole Value objects to have behaviour, but not the others. But the real point, for me, is that the hexagonal architecture approach makes the use of these patterns clearer…
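An illustrative sketch of the distinction (my example, not from either post): a Whole Value carries behaviour along with its data, whereas a bare data blob carries none.

```ruby
# A Whole Value: the data (amount, currency) travels with the behaviour
# that belongs to it (arithmetic, equality, the currency-mismatch rule).
class Money
  attr_reader :amount, :currency

  def initialize(amount, currency)
    @amount = amount
    @currency = currency
  end

  def +(other)
    raise ArgumentError, 'currency mismatch' unless currency == other.currency
    Money.new(amount + other.amount, currency)
  end

  def ==(other)
    amount == other.amount && currency == other.currency
  end
end

total = Money.new(3, :eur) + Money.new(4, :eur)
total == Money.new(7, :eur)  # => true
```

A Memento or a plain blob crossing an edge would hold the same two fields but none of the methods; which of the three you are looking at becomes much easier to see once you know which side of the adapter boundary the object lives on.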
During my ‘hexagonal architecture’ session at XPday Benelux, the discussion gave me some clues as to why I feel the “standard” layered architecture model is sub-optimal: I realised that I feel as if I’m looking at a picture of a pile of stuff from the side. Contrast this with a hexagonal model of the same system, in which I feel as though I’m looking down on the picture.
Why is this important? And what relationship does it have to being agile?
The answer, I believe, lies in Lakoff‘s theory that metaphor shapes much of our thinking. When I look at any architecture model I subconsciously impose a point of view on the picture, because my mind relates what I see now to previous experiences. A layered model “looks like” a pile of books or building bricks; a hexagonal model “looks like” an island on a map (another metaphor in itself!) or a table with chairs arranged around it. The choice of metaphor is made deep in my perceptual system, helping me to make sense of anything I see. And once the metaphor has been selected, my mind will then automatically supply it with a whole load of related beliefs, many learned as a baby. Among these are the effects of gravity and mass, together with related implications of downward dependency.
These associations cause me to believe that the things at the bottom of the pile are hard to move or change. Whereas in the hexagonal view I instinctively feel the system’s components are more loosely coupled – perhaps because they are associated only by proximity, and not by gravity.
So because of these deep-seated metaphorical associations, maybe we build less adaptable systems when we think of them in layers…?
Last week I spent an enjoyable day at XPdays Benelux in Rotterdam. I ran a couple of sessions, attended a couple more, and met up with friends old and new. Here are brief recollections of the highlights in my day…
The day began with every presenter offering a 1-minute sales pitch for their session. I got to do two: one for Hexagonal Architecture and one for Jidoka. A minute is longer than you’d think, and I’d guess that most of the pitches were over inside twenty seconds. I decided to be a little different, so I had everyone stand on one leg – hopefully to demonstrate that the standard layered architecture model compromises agility…
In A thought on mocking filesystems Brian Marick provides yet another reason to think in terms of a hexagonal architecture. Discussing the writing of mock objects for huge components such as a filesystem, he writes:
“So it seems to me that a project ought to write to the interface they wish they had (typically narrower than the real interface). They can use mocks that ape their interface, not the giant one. There will be adapters between their just-right interface and the giant interface. Those can be tested separately from the code that uses the interface.”
Mocks/stubs and adapters are natural bedfellows, and definitely help to keep code clean and testable. I once worked on a project in which the “filesystem” was a JNDI directory service. And then later that was scrapped in favour of a plain old filesystem. How I wish I had known about hexagonal architectures then, because it was a real nightmare to disentangle the JNDI dependencies from the rest of the system.
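Marick’s suggestion might look like this in Ruby (the names are invented for illustration): the domain code is written against the narrow interface it wishes it had, and a small adapter maps that interface onto the real, giant filesystem API.

```ruby
# Adapter: the only class that knows about the real filesystem API.
# It implements the narrow, "just-right" interface (a single #read).
class FileSourceStore
  def initialize(root)
    @root = root
  end

  def read(name)
    File.read(File.join(@root, name))
  end
end

# Domain code sees only the narrow interface...
def line_count(store, name)
  store.read(name).lines.count
end

# ...so a unit test supplies a trivial in-memory fake instead of
# mocking the giant filesystem interface.
fake = Object.new
fake.define_singleton_method(:read) { |_name| "one\ntwo\nthree\n" }
line_count(fake, 'anything.rb')  # => 3
```

The adapter itself can then be tested separately, against a real temporary directory, exactly as the quote suggests — and swapping JNDI for a plain old filesystem would have meant rewriting only that one small class.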