The Perfect Test Automation Strategy
I know, I know. You’re thinking either click-bait or impossible, and I totally get it. There is no such thing as perfect, right? Well… it depends on your definition of perfect.
A perfect score in a game of football might be defined as one team winning with a shutout (a clean sheet). A perfect shoe might be defined as one that fits just right. And a perfect circle can be defined as one drawn with a compass. So how might one define a perfect test automation strategy?
My definition is one that meets the following criteria:
- Gives the team feedback on whether the system they are developing continuously conforms to its requirements throughout its lifetime
- Reliably provides the same output every time the same automated tests are run with the same input
- Is quick and easy to change by the team over the lifetime of the system
- Runs as fast as possible using the skills and technology accessible to the team
Let’s unpack this list.
1. Gives the team feedback on whether the system they are developing continuously conforms to its requirements throughout its lifetime
Every system has requirements that are communicated as specifications. How a team chooses to produce and communicate these specifications varies. Some like to write user-stories and wireframes, others like to write detailed 200-page documents, and some prefer to have conversations and whiteboard sessions.
Also, the responsibility of writing specifications varies. Some organizations have nimble cross-functional teams that do everything under one roof, and others have one team producing specifications while another team builds them.
Whichever format and methods are chosen, inevitably, the specifications get actualized as code in a system through development & verification functions, and the underlying system eventually matches the specification to an acceptable level.
The verification function — be that manual or automated — takes two inputs: Specifications and the System Under Test (“SUT”), and outputs a binary value of “pass” or “fail” depending on whether or not the SUT adheres to said specifications.
If a team automates the verification function, they codify the specifications into an executable form. This means writing software that verifies that the SUT matches the specifications. When software is written in this way it’s technically called executable specifications but is more broadly referred to as “tests”.
A perfect test automation strategy starts early with specifications being created in a collaborative fashion between those requiring the changes and those making the changes.
Those specifications are then recorded in an executable form and are run to provide feedback to the team when they make breaking changes, or if external factors create breaking conditions.
This feedback loop allows the team to course-correct and gives them confidence when adding new features and making improvements over time.
2. Reliably provides the same output every time the same automated tests are run with the same input
If you had a calculator that gave you a different answer despite you adding the same two numbers, that calculator would be useless! Yet the same thing often happens in our industry: automated tests fail and the solution is to rerun them in the hope that the correct answer will eventually arrive. Not only is this a waste of valuable time, but it also reduces confidence in the tests and creates doubt about the value of writing tests in the first place.
For a test automation suite to be trusted, it has to be reliable.
The usual culprits for unreliable tests come down to non-deterministic factors, which typically involve the following areas:
- Race conditions due to asynchrony
- Uncontrolled test data
Asynchrony, because of the variety of I/O-related issues such as network latency, timeouts, variable-speed disk access, and so on. It’s hard to write a test that accounts for all the possible asynchronous cases.
Uncontrolled test data, because it can create a mismatch between the assertions in the tests and the state of the system under test. It’s hard to write an assertion when the underlying system data is unpredictable.
A good test automation strategy ensures that all of these non-deterministic factors are dealt with effectively. Asynchrony and uncontrolled test data can be tackled through a decoupled architecture that allows the use of stubs and fakes, which make the code paths, and therefore the tests, deterministic. Test data can be further controlled by creating it in an automated fashion through fixtures and services that evolve with the test automation codebase.
These measures not only make the tests more reliable, but also easier to maintain, which brings us to the next point.
3. Is quick and easy to change by the team over the lifetime of the system
Software that doesn’t change is software that no one uses! New requirements inevitably emerge as new use-cases are established, which ultimately means a change in the specifications, and there is usually some business need creating urgency around getting these changes made.
One of the most common types of change to requirements is actually the bug report, which entails adding code paths to cover unforeseen use-cases. A good test automation strategy embraces this fact and encourages bug reporting to be incorporated into the specification writing process. That is, bug reports should be expressed as changes to the specifications and followed by automated tests that cover these specifications and their new code paths. (See this article on the different kinds of bugs that exist)
Bear in mind that not all automated tests are equal. Some test automation approaches act like shrink-wrap that locks functionality down (snapshot testing, for example), which ends up reducing the team’s ability to make changes quickly. If it takes a lot of effort to add a test, then the team will likely skip writing tests. Humans are inherently lazy and will typically choose the path of least effort when performing tasks.
In short, you need to be sure that you can change the underlying test code easily, and this requires good engineering practices.
Your team needs the skills that allow them to write software that is easy to change. This applies to both the SUT and the test automation codebase itself. Here are what we see as the essential skills and techniques required to do so:
- Behaviour Driven Development for a collaborative approach to discovering and writing specs as a team
- Hexagonal/Onion/Clean Architecture for writing decoupled testable code
- Domain Driven Design for managing complexity and fighting the ball-of-mud outcome
- Component-Based UI Development for a modular scalable approach to the front-end of your application
A team that wants to have the perfect test automation strategy will continually strive to improve their skills and techniques and apply them to their codebase to make it super easy for future changes to happen.
4. Runs as fast as possible using the skills and technology accessible to the team
Once you have reliable executable specifications that are easy to change, you need to make sure they run as fast as possible. The longer a problem lives undetected in a system, the exponentially more expensive it becomes to deal with, and the inverse is also true: the sooner you detect it, the cheaper and quicker it is to fix.
When feedback arrives sooner, developers can course-correct quickly without waiting minutes or hours and losing the context of their thought process.
This means the team has to have the skills and know-how to omit I/O from test runs. Networking and disk access leave the CPU in a wasteful waiting state when it could instead be running tests at lightning speed. With the right decoupling skills, these I/O operations can be exchanged for in-memory processes, moving an I/O-bound problem into the CPU-bound and memory-bound realms, which are far faster on today’s laptops and cloud systems.
The team also needs access to technology that allows them to run tests faster, such as wallaby.js; tools that provide earlier feedback about the health of the code, such as linters; and parallel-capable continuous integration servers that can easily scale to run tests in a shorter amount of time. It’s in the interest of every company to provide these tools to their development teams, as the cost-benefit scenarios are a no-brainer.
In a perfect strategy, a team will continually look for opportunities to improve the speed at which they can run tests. One rule we like to use: a full unit test run locally should take no longer than 10 seconds, and a full build including all tests should take no more than 15 minutes on a continuous integration server.
The more of these techniques and technologies are applied in the strategy, the faster the team can get feedback and therefore the faster they can course-correct. This ultimately leads to the team providing higher-quality business value to customers, faster.
If all of the above points are incorporated into a test automation strategy in a disciplined and diligent fashion, and every point is performed to the 9s, then I would consider by my definition that test automation strategy to be perfect.
If you like what you read then go ahead and follow the Quality Faster Twitter account, and please check out the links below to see how my team and I can help you on your journey to test automation bliss.
Thank you for taking the time to read this and I look forward to helping you deliver higher quality software, faster.
Sam.
Let me know if you have any questions or thoughts in the comments below.