Why you really need Automated tests

Published on July 4, 2024

The state of automation isn’t consistent across the industry: some companies push for 100% coverage while others do barely any. In both cases they’re likely doing the wrong thing with their automation because of a lack of understanding of why we need it. This can stem from an AUTOMATION IS A SILVER BULLET mentality, where it’s the only type of testing we know, or from the team never having been trained in why we automate.

Let’s go through some of the reasons we need automation so that we know why, how and where to automate:

Merging Code Together

Whenever developers write and add code to the repo there’s a risk that things will break, and the more code we add the greater that risk becomes. We use automation to support the engineers by checking that existing code and service logic/behaviour is preserved and hasn’t changed.

Because a repo (project code) can be added to by lots of team members, it’s hard for one developer to know all of the intended logic and behaviour of the code. Each person adding tests for their code means we have a way of saying “MY CODE SHOULD BE DOING THIS, DON’T CHANGE THAT” without having to go through all the requirements / logic by hand.
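
For example, a single unit test can turn that statement into something the build pipeline checks on every merge. Here’s a minimal sketch in Python (run with pytest); the calculate_discount function and its discount rules are made up purely for illustration:

```python
# A made-up example: a pricing rule and the tests that pin down its intended behaviour.
# Both the function and the discount rules are hypothetical, for illustration only.

def calculate_discount(order_total: float, is_member: bool) -> float:
    """Members get 10% off orders over 100; everyone else pays full price."""
    if is_member and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total


def test_members_get_discount_over_threshold():
    # "My code should be doing this, don't change that", expressed as a test.
    assert calculate_discount(200.0, is_member=True) == 180.0


def test_non_members_pay_full_price():
    assert calculate_discount(200.0, is_member=False) == 200.0
```

If a later merge changes the discount rule, these tests fail and the pipeline flags it before the code goes anywhere.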

Fig 1. A meme of merging being messy, tests can stop this.

Additionally, an open source library you’re using may change unexpectedly (it happens), so we need a way of checking for this constantly. Running code functionality tests by hand that often would be impossible, so we need automated code logic tests.

Why automate these: If we have lots of code merges then there are lots of chances for the code to go wrong, and it would slow us down too much to check every merge manually.

Who automates these: As these are predominantly engineering tests, preserving the logic of the code, they should be written and maintained by the developers working on that code.

What these tests don’t tell us: Were the business requirements met? Is the behaviour of the service or application what we actually wanted? How is the interplay of systems and user interactions handled?

Contracting Between Teams

Products are usually written by a series of teams working independently on their own services or parts. Just like the above, this means that changes to how their part works might change how it interacts with other teams’ parts. We can use automation to make sure the interactions between parts of the system are preserved.

Fig 2. Ariel signing a contract (not the type of contract we’re testing).

Contract tests are those that we add to make sure our side of the interaction between two parts doesn’t change from how we’ve said it will work. Making sure our side of the contract is upheld means there’s less chance of the overall interaction failing.
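
As a sketch of the idea, here’s one very simple way a provider team could express its side of a contract as a test, without using a dedicated contract testing tool such as Pact. The endpoint, field names and base URL are hypothetical:

```python
# A minimal provider-side contract check: assert that the response our service
# returns still matches the shape we've agreed with the consuming team.
# The endpoint, fields and URL below are hypothetical placeholders.
import requests

AGREED_FIELDS = {"order_id": str, "status": str, "total_pence": int}


def test_order_response_keeps_agreed_shape():
    response = requests.get("http://localhost:8080/orders/123", timeout=5)
    assert response.status_code == 200

    body = response.json()
    for field, expected_type in AGREED_FIELDS.items():
        assert field in body, f"contract broken: '{field}' missing from response"
        assert isinstance(body[field], expected_type), (
            f"contract broken: '{field}' is no longer a {expected_type.__name__}"
        )
```

Dedicated contract testing tools go further by sharing these expectations between the consumer and provider teams, but even a simple check like this warns us before we break the agreement.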

Why automate these: To keep warning us if a change we’ve made will impact another team so that we don’t forget to check this & communicate it (important in siloed teams).

Who automates these: As these are predominantly engineering tests, preserving the logic of service interaction, they should be written and maintained by the developers working on that code.

What these tests don’t tell us: Did another team’s side of the interaction change? Were the business requirements met? Is the behaviour of the service or application what we actually wanted? How is the interplay of systems and user interactions handled?

Overall Behaviour

The whole is greater than the sum of its parts. Whilst testing the code and service interaction logic will tell us that the intended engineering logic has been met, we need to make sure the business needs are still being met too. We can use end-to-end automation to ensure that business requirements are not affected by changes to code and services.

If we integrate with 3rd party or SaaS (Software as a Service) products, these might change without us being told. Having tests running frequently allows us to catch whether such a change has been made and whether it impacts us (or our customers in production). This means running tests almost continually to check for these unexpected changes from outside the organisation and catch them if (and when) they happen.

As products grow in complexity and size it becomes difficult to run manual tests for everything in a timely manner. Without automating tests of behaviour we risk ignoring areas because our testing takes too long, or holding up a needed release by days in order to cover something. Automating our tests also documents them and makes them inheritable, so we don’t need “system knowledge” to get that testing done.
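
As an illustration, an end-to-end behaviour test can be written in the language of a business requirement rather than the code. This is only a sketch against an assumed staging API; the URL, endpoints and payload fields are invented for the example:

```python
# A sketch of an end-to-end behaviour test expressed in business terms:
# "a customer can place an order and then see it in their order history".
# The API base URL, endpoints and payload fields are all hypothetical.
import requests

BASE_URL = "https://staging.example.com/api"


def test_customer_can_place_and_see_an_order():
    # Place an order the way a customer's client would, through the public API.
    placed = requests.post(
        f"{BASE_URL}/orders",
        json={"customer_id": "test-customer", "items": [{"sku": "ABC-1", "qty": 1}]},
        timeout=10,
    )
    assert placed.status_code == 201
    order_id = placed.json()["order_id"]

    # The business requirement: the order shows up in the customer's history.
    history = requests.get(f"{BASE_URL}/customers/test-customer/orders", timeout=10)
    assert order_id in [order["order_id"] for order in history.json()]
```

Because the test reads as a requirement, it also acts as the documented “system knowledge” mentioned above.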

Why automate these: Showing the business each day that what it wants from its product continues to happen would take too long to do manually. Automation also reduces the need to keep system knowledge in our heads by documenting it through tests.

Who automates these: These are predominantly business tests at an enterprise level; people need to be given this as a task to do, as it may not fall within the remit of service / component testing teams.

What these tests don’t tell us: Anything where there’s uncertainty (unasked needs or unknowns in behaviour). Usability, charisma and whether people like the product. Whether we’ve asked to build the right thing in the first place.

Deployments

In addition to CI/CD and merging code together into one place, we also need to check the deployment of our product into production. Basically we want to know that all the components have been placed into the environment and that they’re all configured and talking to each other. This can be tricky because in some organisations we’re not able to just exercise behaviour in production (for example, in banking you wouldn’t want to make test transactions because you’d have to report them to the regulators).

We may also need to work around NOT GIVING PEOPLE ACCESS TO LIVE SYSTEMS for security reasons, which means we can’t test manually. Instead we can use automation to check that systems are in place and communicating with each other post deployment, using some Read Only automation checks (so that we don’t create bad test transactions in production).
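
A sketch of what such Read Only checks might look like, assuming hypothetical internal health endpoints (the service names and URLs are placeholders):

```python
# Read-only post-deployment checks: confirm each component is deployed and can
# reach its dependencies, without writing any data to production.
# Service names and health endpoint URLs below are hypothetical.
import requests

HEALTH_ENDPOINTS = {
    "orders-service": "https://orders.internal.example.com/health",
    "payments-service": "https://payments.internal.example.com/health",
    "web-frontend": "https://www.example.com/health",
}


def test_all_services_are_up_and_connected():
    for name, url in HEALTH_ENDPOINTS.items():
        response = requests.get(url, timeout=5)  # GET only: no test transactions created
        assert response.status_code == 200, f"{name} is not responding after deploy"

        # Many health endpoints also report the state of their own dependencies
        # (database, queues, downstream services) - format assumed here.
        body = response.json()
        assert body.get("status") == "ok", f"{name} reports an unhealthy dependency"
```

Because everything is a read, the checks can be run safely straight after every deployment without anyone needing direct access to production.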

Why automate these: To get around not being able to make full transactions or to have production access, we need a safe way of testing to show deployments have succeeded.

Who automates these: As these are predominantly engineering tests, showing service availability and interaction, they should be written and maintained by the developers working on that code.

What these tests don’t tell us: Whether the services and features are behaving as intended. Were the business requirements met? Is the behaviour of the service or application what we actually wanted? How is the interplay of systems and user interactions handled?

Observability (Post Launch Automation)

Once we’ve deployed to a live production environment, teams sometimes get shrunk or disbanded, but we still need to check that what we’ve delivered works for our customers. Or maybe the team starts working on new features for the product and has to maintain production AND the new features. In either situation we might not have the people hours to manually test that both are working each day. Automating checks of the health and behaviour of our live systems means we’ll be informed quickly of any failures that might impact our customers (ideally in real time).

This is an area we tend to forget about. We test to make sure we’re ready to launch, but many projects and teams forget about the need to test post launch to ensure stability.
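
As a rough illustration, a post-launch check is often just a behaviour or health probe put on a schedule with alerting attached. This sketch assumes a hypothetical health endpoint and alert webhook; in practice a monitoring tool or scheduler would usually do the looping for you:

```python
# A scheduled production probe: re-run a read-only check every few minutes and
# raise an alert as soon as it fails. The URL, interval and alert webhook are
# hypothetical placeholders, not a real monitoring setup.
import time

import requests

PROBE_URL = "https://www.example.com/api/health"
ALERT_WEBHOOK = "https://hooks.example.com/alerts"  # e.g. a chat or paging webhook


def probe_once() -> bool:
    """Return True if the live system answered its health check."""
    try:
        return requests.get(PROBE_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False


if __name__ == "__main__":
    while True:  # in reality a scheduler or monitoring platform would run this
        if not probe_once():
            requests.post(ALERT_WEBHOOK, json={"text": "Production healthcheck failed"}, timeout=5)
        time.sleep(300)  # check every five minutes
```

The value is less in the code and more in the habit: the probe keeps running after the launch party is over.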

Why automate these: We need to know as soon as possible of any failures impacting our customers.

Who automates these: The teams responsible for services should add healthchecks to them, and overall behaviour tests should be scheduled to be written as part of readiness for deployment.

What these tests don’t tell us: Anything where there’s uncertainty (unasked needs or unknowns in behaviour). Usability, charisma and whether people like the product. Whether we’ve asked to build the right thing in the first place.

Agile: Gotta Go Fast!

In many environments (Agile being one of them) we need to make time for ourselves to get things done at a fast pace. Automation is a tool we can use to make that time by removing the need to manually retest behaviour over and over. This frees us up to think about testing scope, risks, exploration, benchmarking and all the other things needed to support the quality of a product.

Fig 3. Sonic the Hedgehog, the international symbol for “going fast”.

This is very relevant in a time where testing teams are being made smaller. It’s not unusual to be the only tester in a team now, so you have to be able to find ways to make time for yourself. If you don’t, you’ll be so swamped with the day to day of basic testing and retesting (regression) that you’ll not have the time or focus to think about the big picture and really support your team.

Conclusions: Why do we need to know this?

Understanding of testing and why we test can be limited in our industry. We may know that a type of testing exists but not be told why and when we need it (beyond being told “we certainly have to have it”). Engineering managers and developers may have an “automate everything” bias from being taught at university that testing is all code based; this can skew our view so that we just have to have automation without thinking about what we need it for. This becomes the AUTOMATION IS A SILVER BULLET view, where it’s assumed that if we have automation then it’ll cover 100% of the testing we need.

Fig 4. A werewolf, the beast you can traditionally kill with a silver bullet.

Additionally, many testers focus on the HOW of automated testing (look at all the courses focusing on how to code or explaining types of automation) rather than the WHY of automation. That leaves us with a tool we can use, without knowing how it fits into an overall toolbox or being able to talk to our teams about the benefits and shortfalls of using it.

Understanding why we need automation, where, and how it’s useful is vital to being able to create a holistic strategy. It’s important not just for SDETs or automators but also for more manual-focused testers, because we all need to be able to help recommend the scope and approach for testing.