A testing strategy

The blog post Cucumber and Full Stack Testing by @tooky sparked a very interesting Twitter conversation, during the course of which I realised I had fairly clear views on what tests to write for a web application. Assuming an intention to create (or at least work towards creating) a hexagonal architecture, here are the tests I would ideally aim to have at some point:

  • A couple of end-to-end tests that hit the UI and the database, to prove that we have at least one configuration in which those parts join up. These only need to be run rarely, say on CI and maybe after each deployment.
  • An integration test for each adapter, proving that the adapter meets its contract with the domain objects AND that it works correctly with whatever external service it is adapting. This applies to the views-and-controllers pairings too, with the service objects in the middle hexagon stubbed or mocked as appropriate. These will each need to run when the adapters or the external services change, which should be infrequent once initial development of an adapter has settled out.
  • Unit tests for each object in the middle hexagon, in which commands issued to other objects are mocked (see @sandimetz’s testing rules, for which I have no public link). And for every mocked interaction, a contract test proving that the mocked object really would respond as per the mocked interaction (see @jbrains’s Integrated tests are a scam articles); there’s a minimal sketch of this pairing just after this list. These will be extremely fast, and will be run every few seconds during the TDD cycle.
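
To make that last bullet concrete, here’s a minimal sketch of the mocked-command-plus-contract-test pairing. It uses RSpec purely as an example syntax, and the names (PlaceOrder, LoggingGateway, take_payment) are invented for illustration; the point is only that every message mocked in a unit test is backed by a shared example that any real adapter must also pass.

require 'rspec'

# a domain object in the middle hexagon (names invented for illustration)
class PlaceOrder
  def initialize(payment_gateway)
    @payment_gateway = payment_gateway
  end

  def call(amount)
    @payment_gateway.take_payment(amount)   # command issued to a collaborator
  end
end

# unit test: the command to the gateway is mocked, so this runs in microseconds
RSpec.describe PlaceOrder do
  it "tells the payment gateway to take payment" do
    gateway = double('payment gateway')
    expect(gateway).to receive(:take_payment).with(42)
    PlaceOrder.new(gateway).call(42)
  end
end

# contract test: shared examples run against anything the domain treats as a
# payment gateway -- each real adapter's integration test would include them too
RSpec.shared_examples 'a payment gateway' do
  it 'responds to take_payment' do
    expect(subject).to respond_to(:take_payment)
  end
end

# a stand-in adapter so the sketch runs on its own; a real adapter would also be
# exercised against the external service it wraps
class LoggingGateway
  def take_payment(amount)
    # a real adapter would call the external payment service here
  end
end

RSpec.describe LoggingGateway do
  subject { LoggingGateway.new }
  it_behaves_like 'a payment gateway'
end

With this arrangement, if an adapter ever stops responding to take_payment the contract test fails, even though the mocked unit test on its own would still pass.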

I’ve never yet reached this goal, but that’s what I’m striving for when I create tests. It seems perfectly adequate to me, given sufficient discipline around the creation of the contract tests. Have I missed anything? Would it give you confidence in your app?


downstream testing implies a policy constraint

As usual, it takes me multiple attempts to figure out what I really want to say, and how to express myself. Here’s a bit more discussion of what I believe is implied by downstream testing:

The very fact that downstream testing occurs, and is heavily consuming resources, means that management haven’t understood that such activity is waste. (If management had understood that, then they would re-organise the process and put the testing up front — prevention of defects, instead of detection.) No amount of tinkering with analysis or development will alter that management perception, and therefore the process will always be wasteful and low in quality. So the constraint to progress is management’s belief that downstream testing has value.

downstream testers: better is worse

This week I’ve been reflecting on why it is that some “agile” teams seem to really fly, while others never seem to get out of second gear. Part of the answer, at least in the teams I’ve looked at, lies in the abilities of their testers. I wrote about the phenomenon over three years ago:

The tester in Team 1 was very good at his job, whereas the tester in Team 2 wasn’t. And as a result, the developers in Team 1 produced significantly poorer code than those in Team 2!

Have you seen this effect? What did you do to harness the skills of your great testers so that they constructively support your great coders?

can downstream testing ever be the bottleneck?

Everywhere I go I find managers complaining that some team or other is short of staff; and (so far) that has always turned out to be a mirage.

TOC’s 5 Focusing Steps say that adding resources to the bottleneck is the last thing one should do. Before that, a much more cost-effective step is to “exploit” the bottleneck — i.e. to try to ensure that bottleneck resources are only employed in adding value. So in the case where testing is the bottleneck, perhaps one should begin by ensuring that testers only work on high-quality software; because testing something that will be rejected is waste.

And from the Lean Manufacturing camp, Shigeo Shingo (I think) said something along the lines of “testing to find defects is waste; testing to prevent defects is value”. Which seems to imply that waterfall-style testing after development is (almost) always waste.

Which in turn implies (to me at least) that testing in a waterfall process can never be the bottleneck. The bottleneck must be the policy that put those testers at that point in the flow. Does that sound reasonable to those of you who know a lot more about this kind of stuff than I do?

setup and teardown for a ruby TestSuite

My Watir tests for a particular application are grouped into three subclasses of Test::Unit::TestCase. To run them all I have a top-level test suite that looks like this:

require 'test/unit'

server.start   # 'server' (defined elsewhere, not shown) starts the app under test

require 'test/first_tester'
require 'test/second_tester'
require 'test/third_tester'

server.stop

But this doesn’t work as intended, because the server.stop line at the end of the script is executed before the test suite is constructed and run, which is obviously not what I want. The problem lies in test/unit: requiring it installs an at_exit hook, which collects every test method in the ObjectSpace and invokes the runner on the resulting suite only after the rest of the script (including server.stop) has finished.
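
The ordering is easy to see in a stand-alone sketch (class and method names invented): the script body runs to completion first, and the collected tests only run from the at_exit hook.

require 'test/unit'

class OrderingDemo < Test::Unit::TestCase
  def test_runs_at_exit
    puts '2: the collected tests run as the script exits'
  end
end

puts '1: the script body runs first'   # printed before the test above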

What I would like is to have setup and teardown methods on TestSuite; but they aren’t there. I feel sure someone somewhere must have done this already (the closest I could find was this old post on ruby-talk), but I couldn’t find one quickly so I wrote my own:

require 'test/unit/testsuite'
require 'test/unit/ui/console/testrunner'

require 'test/first_tester'
require 'test/second_tester'
require 'test/third_tester'

class TS_MyTestSuite < Test::Unit::TestSuite
  def self.suite
    result = self.new(self.name)   # name the suite after this class
    result << FirstTester.suite
    result << SecondTester.suite
    result << ThirdTester.suite
    return result
  end

  def setup
    server.start   # 'server' is created elsewhere (not shown)
  end

  def teardown
    server.stop
  end

  # run the whole suite bracketed by setup and teardown
  def run(*args)
    setup
    super
  ensure
    teardown   # stop the server even if the run blows up
  end
end

Test::Unit::UI::Console::TestRunner.run(TS_MyTestSuite)

Please let me know if this solution – or anything equivalent – is already published elsewhere, because I hate re-inventing wheels…