How I use React and Redux

Recently I wrote a couple of tweets about React/Redux that attracted some interest and simultaneously divided opinion down the middle.

I’ve since been asked a couple of times to expand beyond tweet-sized soundbites and explain myself more fully. Here goes…

I’ve been using React, in both commercial and personal projects, since the middle of 2015; I’ve been using Redux since I heard about it in early 2016. I’ve only used them for client-side development. What follows is my personal, opinionated, idiosyncratic approach to React/Redux development, forged in half a dozen attempts to pursue test-driven development at a good pace. I’m a complete amateur when it comes to client-side development, and I expect I’ve made many choices that you might find laughable. I also know that what you are about to read violates much Redux “best practice”. Nonetheless, for the most part my code works, is well tested, and allows me to work at a reasonable speed. I’ll take that.

Here’s how I do it:
  • I learned about React around 3 years ago and loved it right away. But then managing multiple stores became a real pain, so I’ve used Redux for absolutely every project in the last 2 years.
  • I found that the knowledge shared between action creators, reducers and selectors means that changing the shape of the state can be costly, especially if each of those things is unit tested. So I wrote a test framework that integrates action creators, reducers and selectors; this means I have high test coverage of everything outside components:
// specHelper.js
import deepFreeze from 'deep-freeze'
import configureStore from '../app/store/'

// Dispatch a list of actions through the full store (middleware included)
export const reduce = (actions) => {
  const store = configureStore()
  actions.forEach(action => store.dispatch(action))
  return store.getState()
}

// Run a list of actions through a single reducer, from its initial state
export const reductio = (reducer, actions) => {
  const state = reducer(undefined, {type: 'NO_SUCH_ACTION'})
  return apply(reducer, state, actions)
}

// Fold a list of actions over a reducer, deep-freezing the state at each
// step so that any accidental mutation throws
export const apply = (reducer, initialState, actions) => {
  let state = initialState
  for (const action of actions) {
    deepFreeze(state)
    state = reducer(state, action)
  }
  return state
}
// store.js
switch (process.env.NODE_ENV) {
  case 'production':
    module.exports = require('./')
    break
  case 'test':
    module.exports = require('./configureStore.test')
    break
  default:
    module.exports = require('./')
}


  • (Note the use of deep-freeze to ensure that I am not accidentally mutating state.)
  • I don’t test components. But I also don’t put logic in components.
  • I have quite a lot of “containers”, and I don’t really distinguish these from “components”.
  • I tried writing reducers that mirror the state of the server. But those rarely matched the needs of my views, and as a consequence I also had to write sophisticated selector functions mapping that state to whatever the views needed. The code in these selectors usually became complex — difficult to understand and change. So instead I now write reducers that prepare state specifically for consumption by particular components. I call such reducers “read models”, because that name helps me to remember what their responsibilities are.
  • I don’t mind if two or more read models contain the same state. In fact, I expect it. Each does what it needs to do, decoupled from the needs of the other(s). Their code changes rapidly and independently. From the server’s point of view they probably contain de-normalised state; but the server’s point of view doesn’t matter here.
  • Each read model is likely to also export a bunch of (tested) selector functions. This keeps my numerous containers simpler, and hides most of the detailed state structure from my tests.
  • So, now that I’m thinking of my reducers as being read models, it makes sense to re-orient some other vocabulary: My action creators are now called “commands”, and I call the actions they create “events”.
  • The names of my action creator functions used to consist of a random mix of past tense — eg. toggleSelected() — and imperative mood — eg. selectToggle(). But if I think of these functions as commands that may fail or be rejected, then only the imperative form makes sense. I like to use intentional naming too. So gotoSettingsPage() is better than toggleSelected() or selectToggle(), for example. This would create an event whose type is something like SETTINGS_PAGE_DISPLAYED.
  • I find the “standardised” action format(s) to be huge overkill, so I avoid that whole ecosystem.
  • I wrote a simple API middleware that “just works”. It also provides hooks (commands, in fact) allowing my tests to inject events that represent fake replies from the server. This means that my simple test framework (above) can be used to test the whole dispatch-middleware-store-selector stack. And this in turn means that I can often reorganise state without breaking tests or components.
  • I use thunks. I prefer them to sagas, which I find difficult to test.
  • I organise the app into bounded contexts, usually mapping to the “sections” or “pages” in the user’s mental model. Each of these is a source folder containing components, commands, read models, and tests. If necessary some larger contexts may have sub-contexts, indicated by sub-folders. I definitely don’t use folders for “classification” — eg. there is no reducers/ folder anywhere.
  • I don’t use constants for event types, because I don’t want my read models to be coupled to knowing which commands — or even which bounded context — created an event. In the early stages of development I frequently move things around as I’m discovering where the context boundaries lie, so minimising the number of imports saves me a lot of time. My tests will usually catch any typos.
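A minimal sketch of what this vocabulary looks like in code. All of the names here are illustrative, not taken from a real project: a command with an imperative, intention-revealing name; the past-tense event it creates; a read model shaped for one view; and a selector that hides the read model's state shape:

```javascript
// Command: imperative and intentional; it may fail or be rejected...
export const gotoSettingsPage = () => ({
  type: 'SETTINGS_PAGE_DISPLAYED'  // ...but the event it creates is past tense
})

// Read model: a reducer that prepares state for one particular view
export const settingsView = (state = {visible: false}, event) => {
  switch (event.type) {
    case 'SETTINGS_PAGE_DISPLAYED':
      return {...state, visible: true}
    default:
      return state
  }
}

// Selector: exported alongside the read model, hiding its state shape
export const isSettingsVisible = (state) => state.visible
```

A test then only touches commands and selectors, so the internal shape of the read model's state can change freely without breaking anything.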

Again, I’m not claiming this to be a “good” approach, but it works well for me. It is evolving all the time, and will likely be quite different again in 6 months’ time.

Most of the code I’ve written using these mad ideas is commercial, and thus not shareable. But if you want to see what some of my ideas look like in practice, take a look at (you can also visit to see the simulation running). This is the project that turned me off sagas, but it is also where I experimented with (nested) bounded contexts most thoroughly. Note also that I wrote this code before I had the epiphany regarding read models etc.

What do you think?

Your favourite CI tool

A couple of days ago I asked the Twitterverse to pick its favourite CI tool from four I had selected. Here are the results:

[Screenshot of the poll results, 10 September 2017]

No real surprises there, I think. There was one additional vote for CircleCI, and a bit of a side discussion about make, but otherwise completely predictable.

Does this tell us anything, other than that I had 5 minutes with nothing to do on Friday evening? Maybe it suggests that very few people have used more than one of these tools? And maybe that’s due to very little cross-over between the .Net and Java universes? Who knows. Pointless.

Branching out


You know me as a long-time agile coach and trainer — something I’ve been doing now for a good few years. During that time I have often been asked whether I could spin up a software development team to create a new product or explore an idea for one of my clients or someone they know. And I have always declined, either because I was too busy or because it didn’t fit with the direction I wanted to take at the time.

But in recent months I’ve developed a renewed longing for those far off days when I used to lead development teams and architect software solutions for a wide variety of companies in numerous sectors.

So I’ve decided to broaden my offering, and I’ve begun to take on software development and R&D projects again. I won’t stop doing the coaching and training completely, but henceforward I intend to use all of my spare capacity to grow a team doing bespoke software development and R&D projects.

My team’s USP, if it needs one, is that every project we take on is done using modern XP values, principles and practices. This also extends to the commercial aspects of our engagements, so that our clients always have weekly opportunities to re-think, re-plan, pivot, or even cancel. You know me, so you also know that this isn’t the usual Agile-with-a-capital-A bulls**t. I led my first XP team starting back in 1999, and I like to think I’ve avoided falling into the traps of commercial “Agile” nonsense in my coaching and training work. So now it’s time to put all of my XPerience to work, and to get back to doing the thing I enjoy most of all: building working software to solve people’s problems.

So if you have a software application that needs to be developed or an idea that needs investigation and elaboration, please get in touch and employ the services of the best XP team in the north west!

It’s bikesheds all the way down

TL;DR: Here’s an interesting hypothesis: In software development there are no nuclear reactors, only bikesheds hiding behind other bikesheds.


Yesterday the XP Manchester group began an attempt to test-drive a sudoku solver. You can follow along with the git repository at kirschstein/xpsudoku. In about 80 minutes we didn’t get far, and there was some concern about this in our retrospective at the end of the evening. To some of those present, it seemed we spent an inordinate amount of time messing around with string formatting, and in deciding whether to represent the sudoku grids as strings

var input = @"
    - 2
    2 1";

or as 2-dimensional arrays

var input = new[,]
    {
        {0, 2},
        {2, 1}
    };

It felt to many in the room that there were more pressing issues with the code, such as a dodgy conditional and horrible hard-coded return values. There was a feeling that we had spent our final half-hour bikeshedding.

I’m not going to disagree. There are always many points of view, particularly with twenty people in the room contributing to the discussion. Much of the drive to try something other than strings came from me, for which I apologise to the group. And yet had I been working on my own I would have done the same. Here’s why.

While we were working on creating new tests and getting them to pass, we could only see one test on the projector screen at any time. I lost track of what tests we had, and at one point the group spent some time discussing “new” tests, only to discover that we already had the test in question. It seemed to me that we had a lot of similarity and duplication among the tests, which in themselves were conceptually quite simple. Left to my own devices I always invest the time in cleaning those up and introducing a single test with a table of input-output cases. I want to be able to easily see which cases are present and which are missing, and tabular tests do that well. Only then would I consider the code in the solver itself, by first looking for new test cases to drive out the hard-coded values etc. (See also Steve Freeman’s wonderful talk Given When Then considered harmful, in which refactoring a suite of tests reveals that there is a missing case.)

So for me, clean readable tests are a pre-requisite for working code.
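To show the shape of the idea (in JavaScript here, though the session used C#, and with a hypothetical rowIsValid helper standing in for the real solver code), a single test with a table of input-output cases might look like this:

```javascript
// rowIsValid is a hypothetical example function: a row is valid when it
// contains no repeated non-zero digits (0 represents an empty cell)
const rowIsValid = (row) => {
  const digits = row.filter(d => d !== 0)
  return new Set(digits).size === digits.length
}

// One test, one table: present and missing cases are easy to see
const cases = [
  {row: [0, 0],    expected: true},   // all blanks
  {row: [1, 2],    expected: true},   // distinct digits
  {row: [2, 2],    expected: false},  // a duplicate
  {row: [0, 1, 0], expected: true}    // blanks never clash
]

cases.forEach(({row, expected}) => {
  console.assert(rowIsValid(row) === expected,
    `rowIsValid([${row}]) should be ${expected}`)
})
```

Scanning the table makes it obvious, for instance, that there is no case yet for a full row of distinct digits; that gap is much harder to spot across a dozen separate near-identical test methods.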

Later, as I was walking back to the station to get my train home, another thought struck me. Isn’t the whole of TDD based on this kind of activity? Isn’t Refactor Mercilessly one of the core practices of XP? Indeed, isn’t it a premise of evolutionary design that sophisticated solutions will arise out of merciless refactoring? Does this mean that by paying attention to the details, great designs will emerge? The Agile Manifesto hints at this with:

Continuous attention to technical excellence and good design enhances agility.

Could this mean that in TDD there are no nuclear reactors, only bikesheds that need attention? And that by keeping the simple things clean and simple, sophisticated things can always emerge? I have no idea. It’s a big topic, and one that has already been discussed at length. I suspect there’s still a lot of research to be done into TDD and evolutionary design, and many new techniques to be discovered. This is in part why I am interested in connascence. I believe it can be grown into a tool that can help steer refactoring priorities. But all of that is in the future…

When I am programming I make a conscious effort to go slowly, paying close attention to details, particularly the clarity of the tests. Colleagues have often commented that I appear to be going very slowly — bikeshedding even. Possibly true. My Asperger’s makes me methodical, and I do enjoy that kind of programming. And yet overall I still create working software in good time.

Evolving the kanban board

My wife and I are planning to move house. We aren’t sure where we want to move to, or indeed how much we have to spend. Naturally, though, we want to get the highest possible selling price for our current house in order that we have as many options as possible. So we called in a “house doctor” to help.

After she (the house doctor, not the wife) had recovered from the initial shock of seeing how we have customised a fairly standard 4-bedroom house into a 6-bedroom eclectic disaster, she produced a report containing a list of cosmetic improvements we should make in order to attract prospective buyers. The list is long, with jobs for myself, my wife, and our local handyman. We needed to make the project manageable in a way that would allow us all to contribute as and when we have the time. So I found an old whiteboard in the garage and made this:


As you can see, I drew a very rough plan of the house, including sections for upstairs, downstairs, the attic, and the outside. We then wrote a small sticky note for every improvement suggested by the house doctor (blue) and some that we had always wanted to do ourselves (yellow).

When we finish a task, we simply remove the ticket. For example, you can see that we have already finished all of the tasks needed in the Office (priorities, right?).


And why am I telling you all this? Because this is what I recommend teams do for their software projects. When you pick up the next feature, draw an architecture diagram and populate it with sticky notes. The resulting board is a “map” showing the feature and the tasks that need to be done in order to deliver that feature (thanks to Tooky for that analogy).

  • The diagram you draw for Feature A might differ from the one you draw for Feature B, because you might be touching different parts of your estate. That’s cool. The diagram needs to be the one that’s most appropriate for the work you’re about to do.
  • The visual representation of your architecture allows more people to be engaged in discovering the tasks that need to be done to deliver the feature.
  • And it allows everyone, often including non-programmers, to see and understand the scope and impact of what is to be done.
  • Sometimes doing a task will spawn others: things we didn’t consider when we did the original feature break-down; things we’ve learned by making changes or completing spike tasks; things we or the Product Owner couldn’t envisage sooner. That’s fine — we simply add and remove sticky notes as we think of them (and look for opportunities to slice off a separate feature that we can push back onto the ideas heap). The whole thing is quite dynamic, and yet very well controlled at the same time.
  • If possible I like to include testing in the scope of the stickies, possibly adding a few explicit testing task stickies where necessary.
  • As you finish each task (whatever “finish” means for your team), either remove the task’s sticky note or mark it with a big green tick. For our house doctor board, we’ve decided that removing the stickies is best. But for software teams, I generally recommend adding big green ticks to completed tasks. This allows anyone to see how much progress you have made through the current feature, and which areas still need more work.
  • Sometimes the distribution of ticked and un-ticked stickies will suggest opportunities for splitting the feature and releasing a subset earlier than planned.
  • Hold stand-up meetings around the diagram as often as you need, and certainly whenever anything significant changes. (Some of the teams I coach have been known to hold informal stand-ups 4-5 times each day.) The architecture diagram helps facilitate and focus these discussions, and makes it much easier for everyone to contribute.
  • Note that all of the above works best when the team has a single feature in flight. A WIP limit of one. Single piece flow.
  • This approach works well when combined with the 5-day challenge.

As usual with the recommendations I write in this blog, this idea is probably not a new one. But it is highly effective, and I recommend you try it.


I gave a lightning talk on this topic at the Lean Agile Manchester meetup this week. There is a video (below), although unfortunately you can’t actually see what’s on the slides. So I uploaded the slides here.



I’ll be in Scotland w/c 6th February


I’m unexpectedly in Scotland (Glasgow, Edinburgh) for the week commencing 6th Feb and I’ve got availability — in short, grab me while you can before I disappear south of the Border again!  In the time available you could book me to run a half-day or full-day training workshop on any aspect of TDD, Redux, software craftsmanship, XP, agile etc (see here for full details). Or I could simply be a fresh pair of eyes, providing an objective expert review of your agile values and practices.

If any of this sounds interesting, email me and let’s figure out if we can work together in Glasgow or Edinburgh that week.

Clean Code: what is it?

Recently I helped facilitate some discussion workshops on the topic of Clean Code. Each of the discussions seemed to be predicated on a belief that readability is the most important criterion by which to assess whether code is Clean. Indeed, the groups spent a lot of time discussing ways to establish and police coding standards and suchlike. While I agree that this can be useful, I felt the discussions missed the aspects of Clean Code that I consider to be the most important.

So I thought it might be useful here to attempt to describe what I mean by the term…


Consumer-driven development

In common with many other programmers I have been using the term “outside-in” development for a long time. I suspect I first encountered it in the writings of Steve Freeman and Nat Pryce, and I’m sure they got it from someone else. Unfortunately the term can be confusing (I find its Wikipedia page baffling), and I find that it doesn’t capture the whole essence of the way I write software these days. I have also tried using the term “programming by intention”, as advocated by Ron Jeffries. But that term seems to have a life of its own which is only tangentially related to the way Ron uses it.

The approach I want to describe is this: I begin with the code that wants to consume the outputs of whatever I’m about to develop. And then I work backwards. First I hard-code those outputs by creating new code “close to” the consumer, so that I can see that they work. Then I push the hard-coded values further down one layer at a time, until I’m done. (I am also likely to write automated tests, but only at the highest convenient levels rather than having tests for every new level of decomposition I discover. And that’s a story for another day.)

So the core of the approach I use is that I begin with a consumer and I write some code to make them happy. Then I treat that new code as the consumer for a new layer of code, and so on. Each layer is written “intentionally”, and does just enough to satisfy the layer above it (and thus all of the layers above that).

And where some layer is providing hard-coded values to its consumer, I think of that code as making simplifying assumptions. It does the job it was asked to do, but only serves a tiny fraction of its audience’s ultimate needs. These hard-coded values aren’t fakes or prototypes, they are a way of creating thin vertical slices quickly. And once I know they are correct, my next coding episode will be to bust one or more of the assumptions by driving the code down to the next layer of detail.
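As a tiny illustration (all names here are hypothetical), suppose a view wants a greeting for the current user. The first slice hard-codes the answer close to the consumer; each later coding episode pushes an assumption down one layer:

```javascript
// The consumer: a view that wants a greeting for the current user.
// First coding episode: greetingFor was simply () => 'Hello, Alice!',
// a hard-coded value created close to this consumer.
const render = (user) => `<h1>${greetingFor(user)}</h1>`

// Second episode: the hard-coding is pushed down a layer; greetingFor now
// properly satisfies its consumer, but makes its own simplifying
// assumption by delegating to a still-naive userName below it
function greetingFor (user) {
  return `Hello, ${userName(user)}!`
}

// A third episode would bust this layer's assumption in turn
// (titles, missing names, localisation...)
function userName (user) {
  return user.name
}
```

At every stage the consumer keeps working, so each layer is a thin vertical slice that can be verified before the next assumption is busted.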

I want to call this Consumer-Driven Development. It’s nothing new, but it seems to surprise teams whenever I demonstrate it.

(I have some availability in the next few months if you would like to see this in action and learn how to apply it to your code.)


Back in the day I used to say this about test-driven development:

If ever I get a surprise, it means I have a missing test.

That is, if I’m in the GREEN or REFACTOR step of the TDD cycle and my changes make something else break, I need to add a test to document something that I must have missed previously.

I don’t think that now. These days I am much more likely to say something like:

If ever I get a surprise, it means I have accidentally discovered some connascence that I was previously unaware of. I need to eliminate it, weaken it, bring the connascent code closer together, or refactor my names so that it is clearly documented.

I’ve discovered some refactoring that needs to be done, and I wouldn’t necessarily rush to add tests.
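For example (an illustrative sketch, not from a real codebase), two distant pieces of code can be connascent through a shared magic value; the surprise failure is how the connascence gets discovered, and naming the value both documents it and brings the connascent code together:

```javascript
// Before: connascence of meaning. Two distant functions silently agree
// that the string 'admin' marks a privileged user -- change one without
// the other and you get the 'surprise':
//
//   const canEdit = (user) => user.role === 'admin'
//   const auditLabel = (user) => (user.role === 'admin' ? 'ADMIN' : 'USER')

// After: the agreement is named and localised, weakening it to
// connascence of name, which editors and compilers can track
const ADMIN_ROLE = 'admin'
const canEdit = (user) => user.role === ADMIN_ROLE
const auditLabel = (user) => (user.role === ADMIN_ROLE ? 'ADMIN' : 'USER')
```

No new test was needed to make this change safe; the refactoring itself removed the hidden agreement that caused the surprise.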

Don’t forget the developers!

I visit numerous organisations that are implementing “agile transformations”. In many of them I see a familiar pattern:

The managers and business analysts are sent on courses and sent to conferences and given books to read; most of them change their job title to things like Scrum Master or Product Owner; they create their plans using “stories” written on post-it notes, and they organise their projects into Sprints. But only rarely does anyone help the developers change too.

These organisations have cargo cults. The developers and the testers have at least as much to learn, and need as much support as do the managers and business analysts.

In order to deliver a working product increment every two weeks, indeed to work at all effectively in an agile way, programmers need to learn a whole raft of new skills and modes of thought. Continuous delivery, emergent design, test-driven development, pair programming, mob programming, feature slicing, YAGNI, outside-in development, … The list is long; and most of the skills on it can seem at best counter-intuitive to those who have grown up working in the “old ways”.

Agile methods arose from the realisation that the creation of working software should be at the centre of everything, with all other activities subordinated (in the Theory of Constraints sense) to it. And yet I see so many organisations in which the agile transformation stops with the introduction of stand-ups, plans written on post-it notes, and maybe some 3-amigo training for the BAs.

If your agile transformation is focusing on the way execs measure ROI, or on how project plans are written, or even on how teams are managed, you may be missing out on the biggest throughput boost of all: supporting your developers in coping with this whole paradigm shift.

Of course you will get some improvements in throughput by slicing your plans into frequent releases and focusing on maximising value early etc. But it will never really get flying if your developers are still thinking in BDUF terms, integrating late, leaving the testing to be done by someone else later, hoarding knowledge, collecting technical debt, building systems in horizontal layers, relying on the debugger etc etc. Many developers only know how to work this way; many see the XP practices as counter-intuitive, if they’ve even heard of them.

So when you’re considering implementing an agile transformation in your organisation, please remember that it’s all about software development. Without the programming activities, you would have nothing to manage. Make sure to give the programmers enough support so that they can learn to work in a way that fits with and supports and enhances your agile transformation. Find someone who can teach them the XP practices and mentor them through the first 6 months of their adoption. Because if you don’t, the very thing that agile is about – programming – will hold back, nay derail, your agile transformation.

Update, 20 Oct 17

I used this blog post as the basis of a lightning talk at LeanAgile Manchester last night. My slides (without animations) are here.