Recently at LMAX Exchange we’ve started using more and more of what we call integration tests, which prompted this post on how we use them and how we differentiate them from acceptance tests.
To start off, a few definitions are in order:
- we define an acceptance test as an external client that knows enough about how to drive the system under test (SUT) to bring it to the point where something can be asserted.
- integration tests, on the other hand, are used for testing the internals of the system and are geared towards validating contracts between modules of code.
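To make the second definition concrete, here is a minimal sketch (plain Java, no test framework) of the kind of test we mean: it exercises the contract between two internal modules directly, without bringing the whole system up. The `OrderValidator` and `OrderJournal` names are invented for illustration, not LMAX’s actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical internal module: records accepted orders.
class OrderJournal {
    final List<String> entries = new ArrayList<>();
    void record(String orderId) { entries.add(orderId); }
}

// Hypothetical internal module whose contract with the journal we verify:
// valid orders are journalled exactly once; invalid ones never.
class OrderValidator {
    private final OrderJournal journal;
    OrderValidator(OrderJournal journal) { this.journal = journal; }

    boolean accept(String orderId, long quantity) {
        if (quantity <= 0) return false;
        journal.record(orderId);
        return true;
    }
}

public class OrderValidatorIntegrationTest {
    public static void main(String[] args) {
        OrderJournal journal = new OrderJournal();
        OrderValidator validator = new OrderValidator(journal);

        boolean accepted = validator.accept("order-1", 10);
        boolean rejected = validator.accept("order-2", -5);

        // Assert at the module boundary, not through the external API.
        if (!accepted || rejected || journal.entries.size() != 1) {
            throw new AssertionError("validator/journal contract violated");
        }
        System.out.println("OK");
    }
}
```

The point is where the assertion happens: against internal module APIs, which is what distinguishes this from an acceptance test driving the system from the outside.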
Acceptance Tests | Integration Tests |
---|---|
used to capture the emergent behaviour that’s inherent to a story | code produces emergent behaviour as well; integration tests let us test the units that drive it |
use the business domain language | tend to leak implementation details and use the language of the codebase, since they target lower-level functional units that don’t necessarily translate into business concepts |
usually built on top of several abstraction layers and can be understood by the business users | we use abstraction layers in a similar way to our acceptance test framework to get the same efficiency in writing integration tests; business users have a limited interest in them |
feedback is slow but more comprehensive | quick feedback as the system doesn’t need to be brought up |
they will suffer from intermittency due to so many outside factors | quick to debug and run as part of every commit; it’s hard to introduce any intermittency |
have a well defined external API (most probably the same API that clients use) for interacting with the SUT | use the internal APIs of the modules under test |
have bigger costs in authoring and maintenance; there’s always a judgement call about whether a specific behaviour is worth encoding in an acceptance test | cheap enough that you can go nuts and test plenty of edge and negative cases |
can successfully be used in proving the functionality to the users through showcases; their side effects are observable and familiar to the users | the output from an integration test run will seem cryptic and uninteresting to people outside of the feature teams |
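The abstraction-layer point in the table can be sketched as a small test-side driver: it hides construction and wiring details so each integration test reads declaratively, much like an acceptance-test DSL but aimed at a module contract rather than the business domain. All names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical module under test: tracks account balances.
class AccountStore {
    private final Map<String, Long> balances = new HashMap<>();
    void credit(String account, long amount) {
        balances.merge(account, amount, Long::sum);
    }
    long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }
}

// Abstraction layer: a fluent driver that keeps individual test
// cases free of setup noise.
class AccountDriver {
    private final AccountStore store = new AccountStore();

    AccountDriver havingCredited(String account, long amount) {
        store.credit(account, amount);
        return this;
    }

    AccountDriver expectBalance(String account, long expected) {
        long actual = store.balanceOf(account);
        if (actual != expected) {
            throw new AssertionError(
                account + ": expected " + expected + ", got " + actual);
        }
        return this;
    }
}

public class AccountDriverExample {
    public static void main(String[] args) {
        new AccountDriver()
            .havingCredited("alice", 100)
            .havingCredited("alice", 50)
            .expectBalance("alice", 150);
        System.out.println("OK");
    }
}
```

The same driver can be reused across many edge-case tests, which is what makes writing lots of integration tests cheap.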
We’ve started using more and more integration tests to validate requirements. This means they’re no longer the lost cousin of unit tests but rather the new kid on the block, right up there with acceptance tests when it comes to testing in an agile environment.
If you’re using integration tests, leave a comment about how you’re using them and how your peers see them.