The challenges of being your team’s sole tester…

Published on May 18, 2025
Picture from #WOCINTECH

This is an article I previously wrote in 2018, using feedback from my team at Datacom. It was originally published as an online article at SmartBear, but I’ve found it useful time and again for helping testers work through similar challenges, so it deserves a home on my own page.

This is quite a long piece, as the topic is a big one in itself. I’m leaving this as the original 2018 copy, although I have added a little commentary on remote working, which has changed radically in the last few years.

Thanks to Janet Gregory for helping me with the editing of the original version…

Introduction

Until relatively recently, the chances are that if you were a tester on a project, you’d be one of a number of such people. You’d have other testers on the team to try ideas out with, share the workload, and cover for you when you’re away.

With the recent drive towards agile, we’re seeing the makeup of the team change dramatically. Projects can typically be supported by much smaller teams working permanently to evolve the product. This can often result in there only being a single tester on a team.

What are the challenges of being the sole tester on such a project? How can you work within these constraints? This has been the subject of a series of workshops within Digital Identity Solutions, and we’re happy to share what we learned …

The iron triangle

Before we get underway, it’s worth revisiting a principle from project management that underpinned many of our conversations. It’s useful for thinking about the constraints we’re working within on a project, especially in agile.

The iron triangle gives us the idea that the “quality” of a project is defined by three attributes: cost, scope, and schedule (or time).

You might have heard the adage “cost, scope, schedule … pick two”. Ideally, however, a project should have only one cast-iron attribute, what management consultant Johanna Rothman calls “your project driver” in her book Manage It!

Within any project you can really only have one attribute which is fixed. It could be “we need this done by X” (schedule) or “there is only Y budget for this” (cost). The skill of a manager is to work with this constraint and plan what can be done with the other two attributes to achieve the goal.

Within traditional test management, there are a lot of parallels for applying this same theory to test planning. Within this dynamic the attributes are:
• Scope — how much testing you’d like to achieve
• Cost (or rather, typically, resources) — having more testers allows you to execute more activity
• Schedule or timeframe — how long you have to do things

It should be obvious that if you have a large scope and a short timeframe, one solution would be to have more testers. Of course, in the real world there are limits to how far this can be pushed, and good test management revolves around knowing and pragmatically working within these constraints.

Another solution, of course, is fewer testers, but that means it takes longer to get through everything you’d like. Great for the test budget, but the bugs are found later in the cycle, so people like developers need to be paid to stay on call longer to fix them.
Finally, if you find yourself in a situation where your available people and schedule are fixed, the only thing to do is to prioritise your scope, as it’s the only thing you have control of.
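
As a back-of-the-envelope illustration of this dynamic, here is a minimal sketch in Python; the function and the numbers are hypothetical rather than figures from any of our projects.

    # Hypothetical model of the iron triangle applied to test planning:
    # fix any two attributes and the third is forced.
    def max_scope(testers, sprint_days, tests_per_day=10):
        """Rough scenario capacity, assuming each tester works
        through about tests_per_day scenarios per day."""
        return testers * sprint_days * tests_per_day

    # Two testers over a ten-day window can cover ~200 scenarios...
    print(max_scope(testers=2, sprint_days=10))  # 200
    # ...while a sole tester in the same window covers ~100. With
    # schedule and resources fixed, scope is the only lever left.
    print(max_scope(testers=1, sprint_days=10))  # 100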

Understanding this dynamic and its trade-offs is important, because it was a core part of the discussions we held, together with the ways these constraints could be handled and occasionally hacked.

Under pressure

A common initial experience for someone stepping into the role of sole tester was feeling under pressure.

This is especially true on an agile project, where the timeframe is fixed by the sprint duration, as is your testing team size (although this can be “hacked”, as we’ll discuss later).

Back in 2013, one of our projects had an annual release, which involved a two-month testing window and kept our test team of six busy.

Fast forward to 2018, and we’re now working in agile teams where we are creating deliverable code in a two-week sprint window using only two testers.

A key enabler in this was adopting a robust automated testing framework, which was easy to maintain with changes in the system under test. Such a suite did not grow overnight — and required a lot of work between testers and developers to build the right thing from a framework perspective, as well as to work through a prioritised list of useful automated tests to have in place. In working out scenarios and prioritisation, testers found themselves well-placed to lead these discussions. Over time, this suite was able to carry the functional regression load.
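
Our framework itself stays internal, but to give a flavour of the kind of prioritised automated check such a suite carries, here is a minimal sketch using pytest; the base URL, endpoint, and marker name are invented for illustration.

    # A minimal sketch, not our real framework: a prioritised
    # regression check written with pytest.
    import pytest
    import requests

    BASE_URL = "https://example.test"  # hypothetical system under test

    # Custom marker (registered in pytest.ini) flagging high-priority checks.
    @pytest.mark.smoke
    def test_login_page_is_reachable():
        response = requests.get(BASE_URL + "/login", timeout=10)
        assert response.status_code == 200

    # When the sprint clock is tight, run just the high-priority slice:
    #   pytest -m smoke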

Automated testing helped, but it didn’t eliminate the testing role; it did, however, change that role dramatically. Most manual testing effort now focused on testing new or changed functionality in depth during a sprint, along with taking increasing ownership of test scenario selection for the automated suite (as well as, shock horror, learning to code their own tests).

In teams still undergoing a level of “forming” — a term used to describe those with relatively new team members, some of whom were new to working in an agile team — it was quite common for the sole tester to initially feel like the “point of blame”. If something gets out into production, the inevitable, uncomfortable question gets asked: “why didn’t you test that?”

We shared a few of our experiences, looking for general themes. Part of the problem, which we were acutely aware of, was time: it’s not always possible to test everything you want to.

In many examples of a release where a defect had gone undetected, manual testing had still occurred. Typically, though, something was missed, or no one had imagined that a particular scenario was capable of causing an issue.

It’s worth taking a moment to think about how this was addressed in “classic” waterfall projects. A test lead would create a plan of what was to be covered in consultation with many people on the project, but especially using the requirements. From this, they would build up a series of scenarios to be covered and estimate the resources and timescale.

However, on these classic projects, this was not the end of the story. It was the tester’s job to produce the best plan they could, but it was understood that the first draft would not be perfect. This was why such emphasis was put on reviewing — firstly by peer testers, to see if enough testing heuristic variation had been employed, but also by the wider team: project managers, customers, developers.

The aim of reviews was to find gaps in the plan and address them, allowing the final scheme of testing to be as robust as possible. Input could come from developers saying, “we’re also making changes in this area”, or from customers stating an expectation that “most people will…”.

Within agile, it can be easy to forget that this level of contribution is still required. It still needs to occur; it just happens in a more informal, often verbal manner.

Within Digital Identity Solutions, there is a general consensus that the tester becomes more responsible for facilitating a discussion around testing, much closer to what some organisations call “a quality coach”.

A core tool for having these conversations is the mind map, which the group has been using with success since 2013. A mind map allows the author to show, in a one-page diagram, all the different variations and factors they’re planning to cover for a particular feature.

When done well, they’re intuitive to read and can even be posted in common areas for people to look at. Their brevity helps to get people to read them — “I haven’t had time to read that thirty-page document you’ve sent yet” is a frequent complaint in IT.
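
To give a flavour of the format, here is a mind map flattened into a text outline; the feature and its branches are invented for illustration.

    Password reset (hypothetical feature)
    ├── Happy path: request link → set new password → log in
    ├── Link handling: expired link, reused link, tampered token
    ├── Input: short password, mismatched confirmation, unicode
    ├── Email: delivery, wrong address, wording and branding
    └── Environments: Chrome, IE, Android phone, iPhone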

Even with a mind map in place, there is a natural tendency for the rest of the team to rubber stamp things. A sample conversation might go like this:

Tester: Did you have anything to add to the test mind map I sent out?
Team member: Uh … I guess it’s okay?

We all have a tendency to say something along the lines of “I guess so” about something we’ve not properly read. It’s important to still follow up with a brief conversation about what’s in your coverage. This can be done individually with each team member, but it’s often better with the whole team; just after stand-up can be a great time for this to occur.

If a member of the team notices a mistake in the approach, or items that are missing, they’re expected to provide that feedback. Likewise, if a developer makes more changes than initially anticipated, there’s an expectation that they tell the tester what else might need to be considered.

Often what you’ll read in agile literature about a “whole team approach” is essentially this: the whole team takes responsibility for giving feedback, whether it’s about how a story is defined, how a feature is being developed, or how testing is being planned.

A good indicator that a team has made this mindset shift is the use in retrospectives of “we” instead of “you”: “WE missed this, WE need to fix this”. Teams where this happens have a much more positive dynamic. It’s important that this applies not just to testing.

Other examples include a developer building exactly what was on the story card, but not what was actually wanted (“we failed to elaborate”), or a story turning out much bigger than first thought (“we failed to estimate”).

That said, agile does not mean the breakdown of individual responsibility. A core part of the tester’s role is to set clear expectations with the team about what you can do, how much effort it will take, and how you’re approaching it. But there needs to be team input to fine-tune this to deliver the best value.

Mostly, testing will revolve around changes to a product, for which the rest of your team are your first go-tos as fellow subject matter experts. Occasionally, though, you will find value in consulting a peer tester — and there is an expectation that testers who are part of the same organisation but in other teams can be approached for their advice and thoughts on a test approach. Within our company, all testers are expected to make some time in their schedule to support each other in this way. This, in many ways, echoes the “chapter” part of the Spotify model, with testing being its own chapter of specialists spread across multiple teams/squads who provide test discipline expertise.

Reaching out to other testers like this is important; it creates a sense of community and the opportunity to knowledge share across your organisation.

Waterfall into agile won’t go…

There have been some “agile-hybrid” projects with an expectation that a set number of people can perform a set volume of testing in a set time (the sprint). This can be problematic, because the tester doing the execution hasn’t been involved in setting the expectation of what volume of testing is realistic, and hence it can feel like working against an arbitrary measure not based in reality.

In such a situation, it’s like being handed an iron triangle with all three corners already filled in: “here’s your schedule, here are your resources … so you need to fit in this much scope”. When faced with that many tests to run, it obviously helps to have them prioritised so that you’re always running the most important test next. When all three attributes are fixed, what suffers is the quality: it gets squished.

On projects where test scripting was not mandated by contract, there was always a preference for exploratory testing, because it allowed the manual tester to focus their time on test execution with very little wastage. More tests could be run, which helped reduce the risk.

Advocating for testing

A commonly recited tale within “forming” teams involved occasions where a story had been handed to the tester with just an hour to go until sprint end, meaning any testing performed would be rushed. This usually reflected a bias towards seeing a story as “done” once development was finished.

It was typically these stories which, when passed into production, would be the ones with missed issues.

As a tester it’s your responsibility to contribute to a “forming” team’s evolution to “performing” by helping the rest of the team understand your role and your needs in that role. The key to doing this is advocating for testing throughout the sprint and bringing your needs to the team.

During pre-sprint elaboration, when the team talks about future work requests and tries to size them:

· If part of a story is going to be difficult to test, ask the developers to add tools that aid the testability of the feature, and factor that into the size.

· If a story is going to be onerous to test, bring it up — it’s possible that the product owner might be comfortable accepting less testing, especially for an early part of a feature that will grow over a number of sprints. Looking at our “iron triangle”, this is essentially accepting less scope to make things fit.

· Alternatively, discuss with the team how a large testing task could be shared amongst the whole team. Looking at our iron triangle, this is accepting more resources/cost for testing in order to get the scope of testing you want.

An example of a shared team approach to aiding testing came from a series of sprints for a page redesign. During the initial stories, the pages were only tested in IE, Chrome, an Android phone, and an iPhone, these being the most popular browsers and devices used with the system.

As all the pages neared completion, testing was performed across the finalised pages using a larger suite of tests. The project tester drew up a matrix of items to be tested, with members of the team helping with allocated browsers or phones to test in more depth according to the instructions given to them.
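
Such a matrix needn’t be elaborate. A sketch, with the pages and role names invented for illustration, might look like this:

    Page             IE       Chrome   Android   iPhone
    Home page        Dev A    Tester   BA        Dev B
    Search results   Tester   BA       Dev B     Dev A
    Checkout         BA       Dev B    Dev A     Tester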

During stand-up:

· It always helps to have a rough expectation of how long you’ll need to finish testing a story. As the sprint end comes closer, remind the team how long you expect to need, especially if several pieces of work are due to complete close to the deadline. The team may have help or solutions for you.

· Some teams have embraced this in their planning, mixing up the sizes of the stories they prioritise to keep a good flow of work to be tested, rather than delivering several large stories at the end. Likewise, there is a level of maturity in accepting that a story won’t get finished this sprint and bringing it to the scrum master’s and product owner’s attention.

Communication

A key skillset sole testers talked about was being effective and influential in their communication with the rest of the team. When raising concerns or problems, it was important to put together a strong argument.

This often felt daunting to testers, especially in forming teams, where other disciplines had far greater numerical representation.

The following elements were considered the core features of effective communication when making a case for a risk:

· Explain why you consider it to be a problem. Frame it in terms relevant to the business.
· Highlight some examples if need be. People are highly visual, so anything you can show will often strengthen your case.
· Outline what you can / cannot do.

As a tester it’s important to recognise you need to make the strongest case you can, but often others — whether the wider group as a consensus or the product owner themselves — will make a decision based on that risk.

As time went on in a team, this pressure relaxed, and a feeling of trust built up as the team shifted into “performing”. But it strongly highlighted how effective communication and influencing skills are becoming increasingly central to everything we do.

The power of showing

The talk about effective communication and “highlighting some examples” led on to “the power of showing”.

Persuasion is always easier when you’re close, and when you’ve worked together long enough to build trust in each other as a team. A charming story around this: one of our testers just has to gasp at something on her screen, and the rest of the team turns around, wanting to know what she has found. Teams like this only form with time and trust from all parties, moving from that “forming” stage to “performing”.

For many, seeing is believing. When you describe behaviour you’ve witnessed, it’s easy to get trapped in a conversation of “well, it shouldn’t do that”. A demonstration can simply and effectively show “well, it does”.

One example came from a distributed team with some members in different countries. A remote team lead emailed over a link to the web page that needed testing.

The tester responded that it was the wrong link. The team lead replied that it definitely was the correct one. This went back and forth for a few iterations.

Eventually the tester did a screen share with the team lead over Skype for Business, clicking the link to show that it was not valid and went nowhere. The team lead then immediately responded with the correct link the tester needed.

There is no substitute for colocation, but Skype for Business and other such screen-sharing tools are the next best thing. They are used extensively by our internal helpdesk to log and rectify issues with our machines, and so are logical tools for testers too.

[2025 edit — well, the pandemic taught us ways to work together remotely, didn’t it? I’m leaving ‘no substitute for colocation’ in, but working out how you build trust in distributed teams really is a follow-on article. Check my references for an excellent recommendation.]

When this isn’t possible, recording your session with a commentary as a video can be really useful, and as a last resort, the tester favourite of taking screenshots for an email can be deployed.

However, email is not always the best way to interact. Along with the choice of channel, it was noted how important it is to keep communication simple: you ideally want to take up as little of people’s time as possible. If you do it face-to-face, or in real time in some form, you can be certain it won’t be ignored.

A multi-page email might contain all the detail that might be required, but you can’t be sure it’ll be read or that care will be taken with its salient points. This was highlighted to me recently with a joke I sent to a colleague, which they asked me about afterwards. My colleague didn’t get the joke because the actual punchline was in the second line of the email … but he didn’t read that far. When we send an email, even one with a read receipt, how can we really be sure that it’s been read? And much deeper, how can we be sure it’s been understood?

I know, a terrible dad joke. In my defence, we were talking about the object-oriented topic of polymorphism that week in a training session on Java.

Some basics that testers have been talking about for decades also came out of the conversations, and they apply to communication: avoid being pedantic, and recognise that anything you find issue with represents someone’s hard work and effort. As such, there is always potential to upset and hurt people.

It’s been noticed that without colocation, it’s much easier for egos to be hurt in such interactions. A colocated team can build up a depth of social interaction that allows for closer, deeper team building. Typically, people in colocated teams feel they move from “forming” to “performing” much more quickly.

Absence

As the sole tester, you’re a key resource in a sprint. When you’re not there, there is no tester.

This initially created some feelings of guilt when a member of staff had to take a sick day or, even worse, planned leave!

Ultimately, responsibility for coping with your absence does not rest solely on your shoulders; some of that expectation falls on your managers and the team.

Within the team, members should have enough cross-skills that any task on the board can be picked up by more than one team member. Of particular note from testers was the close relationship between testers and business analysts, and how a good business analyst should be able to pick up a testing task, and vice versa.

Solid teams learned to focus on making sure stories were done, which meant people saying “this task needs finishing … I could do it” rather than sticking solely to the tasks of their own discipline.

The mind maps we talked about earlier were invaluable here: if they had been created, they formed a roadmap of planned testing which the rest of the team could follow, allowing work to continue through short, unexpected absences.

For longer absences, being able to get another team member to cover for you, and coaching them through the basics of what you do, also turned out to be invaluable.

Covering requires an understanding of the team’s tools and product, so unless the absence is significant, it’s often much easier to get an existing member of the team to cover than to bring a new tester onto the team.

Conclusion

Our workshops were run using a modification of the Lean Coffee model. We each wrote around five post-it notes of ideas and put them on the board, clustering common items into topics. Each of us then had five dots to vote up the topics that were important to us.

At the end of our final workshop, there was a lot of satisfaction with the stories and learning we’d shared. We didn’t talk about every topic, but we’d covered the most important ones, applying the very prioritisation we’d spoken of as being so important.

Reading through our combined wisdom, you may feel a slight sense of déjà vu. If you read enough about agile testing, you’ll come across items like “it’s not done until it’s tested”, “the WHOLE team is responsible for quality”, and “everybody helps with testing”.

These terms can be very validating to us as individuals. What we sometimes don’t appreciate in reading this material, however, is that as testers it often falls to us to educate and build trust with the larger team in order to get to this “performing team” utopia.

Achieving this is all about sharing our role and our struggles. Good communication and influencing skills are pivotal, because we need to get others on side, and yes, we may have to make a few compromises, and build an understanding of others’ roles, along the way.

In looking at what skills the tester of five years from now will need, we often find ourselves focusing on automation and technical skills, and those skills do have their place. But as our workshops highlighted, soft skills are a core part of being a tester going forward.

The sole tester is required to be an ambassador for their craft and their viewpoint. To be effective, they need the ability to influence, persuade, and teach. From a training perspective, the challenge going forward in my company is how we, as a discipline, practise and build these skills.

That will be a focus of a future session, but feel free to share your experiences in the comments.

Further reading

Manage It! Your Guide to Modern, Pragmatic Project Management by Johanna Rothman

Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory

On the topic of building trust in distributed teams, I’d really recommend Software People… Work From Home on LeanPub, which I contributed to. It captures many people’s experiences of staying productive while remote working during the pandemic.

