Quality Coaching Scenario: No testing happening

Published on May 12, 2025

In this post I’ll take a hypothetical testing scenario and set out how I’d think it through as a quality coach supporting the organisation. The aim is to make my reasoning visible, both to help shape other people’s thinking and to provide some material for teaching.

The Scenario: An organisation where there isn’t ANY testing happening

Imagine a hypothetical organisation that has never had a testing professional working there. Features are being created, but with limited (or no obvious) testing taking place throughout the stack.

This has led to a number of escaped defects, low confidence from the wider business (due to the high volume of support requests) and a general lack of system knowledge where requirements have been ill-defined and/or lost.

You have buy-in from leadership, who have hired you in to support the organisation, but other people working there are sceptical and may be detractors.

What would I do?

In this scenario there’s a lot of uncertainty, and with only myself as a quality professional at the organisation I would need to scale my approach to help everyone else pick up quality practices. Given that there are people who might be detractors, I’d have to consider what would work and why people might not be willing to work with testing professionals.

LISTEN AND MAKE A PLAN

The first thing that I’d do is take the time to talk to people within the organisation: engineers, business analysts, product managers, designers and managers. I’d be looking to find out what people’s experiences of working with testers are, what testing knowledge already exists and what could be the root cause of any scepticism; I’d also be looking for potential allies that I could start working with.

Key to this, I’d be looking to become a trusted advisor by listening to people’s pain points and seeing what they want to fix. By showing that I’m willing to work with people and meet them where they are (rather than just force something onto them or copy/paste a solution) I should start to build more trust and buy-in.

From this, I’d set out a plan or a roadmap for testing and socialise that with the organisation for comments, buy-in or contributions.

DEFINE WHAT GOOD (ENOUGH) MEANS

Coming to testing without a guide can be difficult to start with; people might not know what to test or what types of testing to use. Because in this scenario I’m the only testing professional, I’d look to help the engineers understand what needs testing via scripted tests (probably automation). To do this I’d help the team to define acceptance criteria and to put together a definition of ready and a definition of done. Having these things in place will help to show what testing should be done and what it should cover.
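
To make that concrete, here’s a minimal sketch of how a single acceptance criterion could map to a scripted test. The feature, names and delivery rule are entirely hypothetical; it’s just to illustrate the shape:

```python
# Hypothetical acceptance criterion: "Given a basket totalling over £50,
# when delivery is calculated, then delivery is free."

FREE_DELIVERY_THRESHOLD = 50.00
STANDARD_DELIVERY = 4.99

def delivery_charge(basket_total: float) -> float:
    """Return the delivery charge for a basket total (invented rule)."""
    return 0.00 if basket_total > FREE_DELIVERY_THRESHOLD else STANDARD_DELIVERY

# The tests state the What (free delivery over £50), not the How.
def test_basket_over_fifty_pounds_gets_free_delivery():
    assert delivery_charge(50.01) == 0.00

def test_basket_at_or_under_fifty_pounds_pays_standard_delivery():
    assert delivery_charge(50.00) == STANDARD_DELIVERY
```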

These would need to be pragmatic, setting out testing that is achievable by the team. I’d look into what testing the code base can support and advise on levels of testing appropriate to that (not all code can take unit tests, for example). I’d also probably start with functional testing only and save any non-functional testing for later, to aid with taking people on a journey.

Key to this will be getting buy-in from Product Owners to look at what good enough really means, as we’ll want any acceptance criteria to be meaningful for the product. Knowing what the organisation will accept in terms of quality will help define how deep to go with testing.

ADDING TESTS & TRAINING

This would involve coaching in the large and mentoring in the small.

I would kick off and then join three amigos sessions* to help refine user stories and ensure the acceptance criteria are testable and cover happy, negative and edge-case paths. Initially I’d run the testing side of these sessions, but I’d be looking to upskill others by showing them what I do (and explaining it) during the session.

*Also called Triforce / Power of Three / Three hats / Story shaping

Many people may not know how to add or write tests, so I’d work directly with people to add them; this could be through pairing or (if people don’t like pairing) by reviewing merges for tests. This pairing may involve using TDD based on the acceptance criteria to define tests as a part of writing features, or be a session specifically to add tests to something that’s already been built. Again, where I’m working with engineers I’d look to focus on scripted / automated tests initially.
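
As a sketch of what that TDD pairing might produce, here’s a red/green example driven straight from an acceptance criterion. The username rule and function name are invented for illustration:

```python
import pytest

# Red: written with the engineer BEFORE the feature exists, taken
# straight from a hypothetical acceptance criterion ("usernames must
# be at least three characters").
def test_rejects_usernames_shorter_than_three_characters():
    with pytest.raises(ValueError):
        validate_username("ab")

def test_accepts_a_simple_valid_username():
    assert validate_username("callum") == "callum"

# Green: the simplest implementation that makes the tests pass.
def validate_username(name: str) -> str:
    if len(name) < 3:
        raise ValueError("username must be at least 3 characters")
    return name
```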

Reinforcing the messages of the definition of done would be important, so I’d be joining stand-ups and asking “did we add tests for this ticket?” for anything moved to Done.

During all of this, I would be looking to run training sessions and workshops on bigger testing concepts to help everyone get an understanding of what testing is. I’d also create infographics to post in Slack and memorable heuristics in the form of catchphrases; “Defining the What, not the How” is one of my catchphrases for writing acceptance criteria.

LOOKING TO REDUCE ESCAPED DEFECTS

I would recommend tackling this in two ways: regression testing and exploratory testing.

I’d look to put together a retrofit of regression tests, probably acceptance tests / e2e tests as these would be the fastest to implement. To scope the regression testing needed, I would get hands on with the product and use exploratory testing to document the application’s behaviour, then I’d prioritise these by working with the Product Owner. From here I would help the team to organise a retrofit of regression tests, possibly via a small team working on it directly over the course of a sprint.
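
As an illustration, one retrofitted end-to-end regression test might look something like the sketch below (using Playwright with the pytest-playwright plugin purely as an example tool; the URL, selectors and expected text are made-up placeholders for documented behaviour):

```python
from playwright.sync_api import Page

# pytest-playwright supplies the `page` fixture.
def test_existing_user_can_log_in(page: Page):
    page.goto("https://example.test/login")
    page.fill("#email", "user@example.test")
    page.fill("#password", "correct-horse-battery-staple")
    page.click("button[type=submit]")
    # Pin down the documented (current) behaviour found via exploration:
    assert page.inner_text("h1") == "Welcome back"
```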

I would then look to add regression testing to the definition of done, covering both the automation AND manual coverage of anything not yet automated. The pain of repeated manual runs would help the team realise they need to automate those tests and move away from manual testing. As with automating acceptance criteria, I’d attend stand-ups and ask if regression testing had been run on tickets moved to Done.

For exploratory testing, I would probably run sessions when I had the capacity, initially to show the art of the possible: sharing what I’d tested and any bugs or questions that came out of my exploration, and suggesting these as things to think about whilst developing features (or when defining a ticket).

IMPROVING CONFIDENCE WITH THE BUSINESS

Working on this would require us to make everything transparent and to report on what we find (or to get the team to).

I’d advocate for internal company demos of features, inviting anyone from around the business to come and see what’s been developed and encouraging them to ask questions about it. As a part of the demo we should share edge cases and any known bugs, to let people know that we’re aware of the quality of what’s been built.

I’d work with the team to start sharing test reports of what’s been covered in testing, ideally through a quality narrative. This should aim to make the wider organisation aware of what’s been tested and invite feedback on additional potential risks to cover.

For big releases I would run bug bashes to get the wider business hands-on, bring them closer to the development of the application and use their expertise to shake out any undiscovered issues.

Key to all of this would be getting close to the other business teams: going and talking to them and maybe even sharing information about quality with them ad hoc. This will build a closer tie between the product team and the business, and make the team more trusted.

DEALING WITH DETRACTORS

There are three ways to deal with detractors: Ignore, Manage or Inspire.

Ignoring is the easiest; initially you can usually just ignore detractors and focus on the people who do want to test or do want your support. Let the culture move on around them and hope that they fall into line as time goes on.

Managing means working through the detractors’ line managers (or leadership). Get leaders to give you a mandate for being in a team by saying what you’re there to do and how you have the buy-in of the company to do it. Have 1:1s with managers and, if someone becomes a problem, raise it with them to manage.

Inspiring means you have to sell the dream of testing to people. That means running workshops and training to help people understand what testing looks like and could be. It might also mean relying on a bit of charisma to win people over and show that testing (or you as a tester) isn’t just there to raise problems and be a buzzkill.

Why this approach?

This approach focuses a lot on coaching, mentoring and getting others to pick up testing (rather than jumping in and doing it all myself). The reasoning behind this is that if you’re the only tester in an organisation then you won’t be able to do all the testing. You’ll need to find a way to infect teams with a quality and testing mindset in order to make it scalable, especially if the business won’t hire more testers.

“But Callum… engineers don’t have a testing mindset”

You’ll have to shake yourself out of that mentality in order to get testing happening. By initially focusing on scripted (automated) testing we can get engineers to test things whilst we jump in to support more exploratory things in Three Amigos or in exploratory testing sessions.

Over time we can teach the testing mindset by showing people directly what testing looks like.

“Running TDD and reviewing code merges sounds hard, or I’ve not done that before”

There are alternatives: work with engineers to walk you through their tests and show you their code, then make recommendations about gaps from that. If you need further support, you can look at approaches like Gherkin, which uses a more human-readable language to help you understand the testing that’s taking place.
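
For example, a Gherkin scenario reads almost like the acceptance criteria themselves, and a tool such as pytest-bdd (one option among several) binds each step to a small Python function. Everything below, from the feature file to the login logic, is an invented illustration:

```python
# Contents of a hypothetical login.feature file:
#
#   Feature: Login
#     Scenario: Locked account shows a helpful error
#       Given a user whose account is locked
#       When they try to log in
#       Then they see a message telling them to contact support

from pytest_bdd import scenario, given, when, then

@scenario("login.feature", "Locked account shows a helpful error")
def test_locked_account():
    pass

@given("a user whose account is locked", target_fixture="user")
def locked_user():
    return {"name": "sam", "locked": True}

@when("they try to log in", target_fixture="result")
def attempt_login(user):
    # Stand-in for the real login call under test.
    return "Account locked - please contact support" if user["locked"] else "Welcome"

@then("they see a message telling them to contact support")
def sees_support_message(result):
    assert "contact support" in result
```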

The main thing is to support the people doing the testing in types of testing that work for them.

“Shouldn’t we also do [some other type of testing]?”

Probably! Usually when you join a business that hasn’t done a lot of testing there are gaps everywhere that need to be plugged. What I’d say is: look at your context and pick the things that need doing from that, but don’t do too much all at once!

If you try to do everything all at once then you’ll burn out, and the teams will burn out too. Aim for a roadmap of sustainable change where people can learn and get good at something before anything new is added.

What LinkedIn said

I proposed this scenario on LinkedIn and here’s what people said.

I wouldn’t jump straight to implementing processes.
I’d chat with product first. Understanding what they’re actually looking for and who the customers are. This gives me an idea of what the product is meant to be. Then I’d grab a dev for a system walkthrough (bring tea/coffee, biscuits and sweets). I’m not looking for what is missing exactly, and I like to let them guide me through their understanding of what’s being built. No judgement just curiosity.

Once we know the gaps then process, automation, tooling and smashing buttons till things break are easier to work into people’s day to day.

Luke Lattimer-Rogers

I’d probably begin by outlining and defining the exact philosophy I wanted myself and future testers in the organisation to remain faithful to and also make that clear to the other parts of the business – mainly the development team. I really want new testers to get away from the idea of merely following one step after another and developing confidence to really learn a product. I’d want to try to emphasise the job as a thinking person’s craft and encourage people to have confidence in themselves to learn by themselves as much as possible.

I’d also try to create a culture of doing regular workshops not for training on tools, languages, frameworks or product demos, but actual test and quality techniques and test materials. Do people know what a session is, or an oracle, or charters and mind maps?

Owen Trutwein

Collaborate: Work with developers to ideate some form of testing into their workflow and others to understand the current pain points or blockers to testing.
Automating testing: Show people the cost savings of testing using tools or to automate repetitive tests.
Understand the current situation: Talk to team members to learn about their processes and challenges.
Start small: Introduce basic testing like exploratory testing first then maybe some unit tests
Explain the importance of testing: Share why testing is crucial for product health, quality and reliability.
Improve with measure: Track the impact of testing and make adjustments as needed.

Aj Wilson

Hopefully this is a useful coaching tool for people; please feel free to reach out to me with thoughts or ideas that you’ve had.

The scenario and recommendations in this blog are entirely hypothetical and are for coaching purposes only.