
Why the testing industry is the way it is
I was inspired to put virtual pen to virtual paper again by a LinkedIn post from my good mate, Paul Seaman, lamenting his experience of spending nine months looking for a new testing role in Melbourne (Australia):
During 9 months of job searching it was hard not to notice that the job market for software testers is broken. Not just a little broken, a lot broken…
…we have the job ads that ask for a million different things and tools. I was told by a recruiter that, in a market like the current one, it’s a form of filtering. We both agreed it’s a particularly poor filter. I suspect it’s more fundamental. Many companies seeking a tester do not know what they need so they resort to a “wish list”.
Paul asks how the testing industry got to be this way and that got me thinking. When you look at a system and it seems completely broken or makes no sense, it’s worth thinking about how it could make perfect sense just the way it is. The US “healthcare” system is a perfect example: it’s not broken for those who’ve architected it to be the way it is – far from it!
We know how systems become the way they are thanks to these sage words from Jerry Weinberg:
Things are the way they are because they got that way
So, what lens can we use to look at the current testing market and see it making sense? Who benefits from the way it is? Who decided it got this way?
I’m aware that many other folks in the testing community have charted the history of software testing in various different ways. What follows is my take on how historical events (and not just within testing itself) have led us to the current state – you may agree or disagree with my analysis and I invite further debate on the topic.
In my opinion, the testing industry has been shaped into its form today by the following factors (presented in somewhat chronological order). I will discuss them individually but, as will become clear, they’re intertwined and exert forces between each other as well as on the industry as a whole.
- The Agile & DevOps movements
- ISTQB certification
- Commodification
- SDETs
- The “testing is dead” narrative
- Keyword-driven recruiting
- Surveillance capitalism
(No, I haven’t forgotten about AI, I’ll come to that in my closing remarks.)
The Agile & DevOps movements
The early 2000s saw the agile movement starting to gain traction, with DevOps coming into the mix towards the end of the first decade of the new millennium. I’m covering both of these movements together as their impacts have amplified each other in many ways, I think.
Both movements talk about faster feedback loops and neither formally acknowledges testing as a specialist role. As both have become the dominant paradigms for modern software development (despite their adoption often straying from their foundational practices – yes, I’m looking at you, organizations with a “DevOps team”), it’s no surprise that testers have been devalued.
Organizations have institutionalized the utopian vision of machines rapidly & cheaply checking their software products instead of “slow & costly humans” critically evaluating them (and the conflation of human testing and “automated testing” is a consequence of the widespread organizational ignorance around testing).
Both of these movements have been very well-resourced and popular certification programmes further their financial clout, so there has been no shortage of high-profile coverage of the benefits of both Agile & DevOps in major IT and business conferences, industry publications and so on. You only need to look at the strong focus on these movements in CapGemini’s “World Quality Report” to understand their reach into the testing and quality management arenas. (I’ve critiqued these reports in previous blog posts: 2018/19, 2020/21, 2022/23, 2023/24 and 2024/25.)
It was entirely predictable that organizational decision-makers would go “all in” on these approaches, adopting them as the de facto way in which software development teams now operate across their organizations.
ISTQB certification
It’s over twenty years since the ISTQB was founded and they have issued over a million certifications in over 130 countries (according to their own data from May 2025). The lack of other software testing certification schemes created the perfect environment for the ISTQB’s offerings to flourish, and they were highly successful in marketing their certifications as the “industry standard”, especially in the 2005-2015 period (based on my own experience). Though they had no genuine authority, they created the “ISTQB as industry standard” narrative. While skilled practitioners questioned the value of these certifications, they provided an opportunity for candidate filtering that was too good to pass up and were subsequently viewed as mandatory for many testing positions for a long time.
The simplicity of obtaining the Foundation certification helped to create the illusion that testing is easy and, as such, anyone can be quickly trained to be competent. Treating testing in such simplistic terms inevitably helped it become seen as a commodity service (more on that later).
The ISTQB and its local boards actively promote the idea that they are non-profit organizations, but the accredited training providers associated with them are generally not – and are often owned or serviced by members of the boards (which would seem to be a conflict of interest). The market value of the certifications themselves along with the training courses around them is in the order of millions of dollars per year. This significant financial clout has been used to influence decision-makers especially in larger organizations, with a trickle-down effect on the industry more generally.
Commodification
With testing being seen as easy and capable of being performed by machines – via the forces of agile, DevOps, easy certifications, etc. – testing skill became conflated with deft operation of the machines or tools, rather than with the creative, intellectual evaluation and exploration of the software.
It was then an inevitable “race to the bottom” for the humans left behind. This industrial revolution of testing resulted in competition only on price, with outsourcing to low-cost locations becoming more and more common.
SDETs
The SDET (Software Development Engineer in Test) role originated in the early 2000s and was popularized by Microsoft, who made a lot of noise about the fact that they no longer had testers, only SDETs.
Like sheep, other big players quickly followed suit, including Google with their version, the Software Engineer in Test (SET). As the big names talked up this new approach, many other organizations latched onto the idea and human testers all over the world found themselves out of favour (and often out of work).
The need for engineers who could both write code and the automated tests for it arose out of the agile and DevOps movements, but the move to SDETs critically missed where human testing added value (or ignored it in the interests of speed, automation, commoditization, etc.). The terrible user experience of Windows Vista released during the height of the SDET frenzy should have been taken as a sign that removing the human elements of testing was probably a bad idea.
SDETs, in practice, were likely to be much better developers than testers and the role seems to have fallen from favour in the last decade. It’s now common to see agile teams with developers and no SDETs or testers, based on the theory that developers can do all the testing, whether that be coding automated checks or performing human testing. I again see this notion as being based on other influences rather than facts, such as the devaluing of testing skill promoted by easy certifications or the perceived need to increase the speed of delivery.
The “Testing is dead” narrative (c.2011)
At the large STARWest testing conference in 2011, James Whittaker (then at Google) announced that “testing is dead”, with testers no longer being required in a world of automated checks and automatic updates. A high-profile name from a high-profile company like Google guaranteed that the message would reach far and wide. It was music to the ears of the SDET fanboys and proof positive that human testers were a historical relic while the new, faster, better software development world marched on.
The death of (human) testing has been proclaimed so many times in my 25-odd years in the industry (for various reasons), yet human testers still exist in many software development teams. It’s almost as though the humans bring something to the table that the machines cannot, although some organizations are steadfast in their refusal to admit it.
Keyword-driven recruiting
This will probably feel alien to younger folks, but back when I first started work (and for some time afterwards), job ads were largely focused on broad capabilities like “problem-solving,” “communication” or “managerial experience”. Tools were often learned on the job and there was more on-the-job training, so it was uncommon for particular tools to be part of job ads.
With the internet boom in the 2000s, online job boards normalized searchable skill and tool keywords. Employers started to assume that general skills were not enough and applicants had to be “ready to go” with experience in the right tools for the particular job.
Over time, tools became more closely tied to workflows, so experience with them was viewed even more favourably: a new starter could “hit the ground running” in a Jira shop, for example. Companies that make such tools also push for credentialing and adoption, which filters into hiring norms.
With digital transformation in full swing, tools became even more central, and it was a perfect storm once Applicant Tracking Systems scanning for exact terms in resumes were employed en masse by recruiters – the age of “keyword-driven recruiting” was upon us.
Long laundry lists of tools rapidly became a feature of most job ads for testers and lazy recruiting practices were at least partly to blame. Smart testers learned how to manipulate the system, by using text like this as suggested by Michael Bolton:
I do not have an ISEB or ISTQB certification, and I would be pleased to explain why
But too many simply fell into the trap of focusing on toolsmithing rather than becoming excellent testers, just to feed the filtering beasts.
This keyword-based approach excludes many great candidates who are perfectly capable of picking up and learning new tools as required, but who haven’t used the exact tools to pass through the automated filtering process. It also overemphasises tools over core competencies and yet it is these more fundamental skills of the craft that are much more durable and essential to completing testing missions with credibility.
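To make the exclusion mechanism concrete, here is a deliberately simplified sketch of exact-match keyword filtering. The keyword list and resume snippets are invented for illustration, and real ATS products are more sophisticated than this, but the basic failure mode – rejecting capable candidates who describe skills rather than listing tools – is the same:

```python
# Hypothetical "required tools" list, as might appear in a job ad's laundry list.
REQUIRED_KEYWORDS = {"selenium", "jira", "cypress", "postman"}


def passes_filter(resume_text: str, required: set) -> bool:
    """Naive ATS-style check: reject any resume missing even one exact keyword."""
    words = set(resume_text.lower().split())
    return required.issubset(words)


# A strong tester who writes about testing skill, mentioning only one tool:
skilled = "exploratory testing risk analysis test strategy selenium"
# A candidate who simply stuffs the resume with every tool name:
keyword_stuffed = "selenium jira cypress postman"

print(passes_filter(skilled, REQUIRED_KEYWORDS))          # False - filtered out
print(passes_filter(keyword_stuffed, REQUIRED_KEYWORDS))  # True - passes
```

The skilled candidate is rejected for lacking three exact strings, while the keyword-stuffed resume sails through – which is why “feeding the filtering beasts” pays off and core competencies get overlooked.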
This move towards keyword-based recruiting has negatively impacted the hiring process for genuinely good testers, in my opinion.
Surveillance capitalism
“Surveillance capitalism” is a term used to describe a new economic system centered around the extraction, analysis, and commercialization of personal data. It was popularized by Shoshana Zuboff in her excellent book, The Age of Surveillance Capitalism.
One of the most obvious characteristics of surveillance capitalism is the commodification of the human experience. Human behaviour becomes a raw material: your clicks, likes, movements, conversations and even emotions are turned into products. These raw materials are the fuel used to predict and modify behaviour for the benefit of their actual customers (not their users who are merely seen as the sources of these raw materials).
The dehumanizing impact of surveillance capitalism is clear. Attempting to track, monitor, instrument, analyze, predict and modify every aspect of our world – from our virtual interactions (e.g. tracking our searches) out into the real world (e.g. tracking our movements via GPS and mapping services) and right into our very beings (e.g. wearables and facial image recognition) – has become an accepted part of modern life. In doing so, these approaches feel less alien than they should, and so removing humans from the picture in other aspects of life becomes normalized too. The move away from skilled human testers towards toolsmiths and machine operators thus seems completely natural to the current generation of software development professionals.
What about AI?
In the discussion above, I deliberately left AI out of the list of factors I think have contributed to the current state of testing. The factors I’ve identified have all played their part in my opinion, some more significantly than others. The impact of AI, though, is only just starting to hit our industry – and I fear that it will make all of these factors look very minor in comparison. I realise that I’m writing this in the middle of a huge hype cycle around AI, but the “loading up” on all things AI is important to analyze, both from the viewpoint of the testing industry and across software development & IT more generally.
The stage really has already been set for dehumanization as I’ve outlined above, so I’m not surprised that I don’t see too much resistance to the idea of “AI” replacing testers and other IT professionals. I don’t believe that skilled testers can be replaced by current AI systems, so I urge testers to navigate this time by focusing on being more human and not trying to behave more like the machines that look set to replace them. Being aware of the benefits and limitations of AI is important, as is seeing these systems as assistants or tools to help you do better or different testing, but not replacements for your humanity.
Who is the current system working well for?
Looking through the lens of testing tool vendors, the current state of the testing industry is looking good. More and more organizations are using more and more toolsets to assist with testing and agile & digital transformation projects tend to result in a move towards more tooling and less human testing. These vendors have deep pockets and can influence the testing space through their advertising, sponsorship of testing conferences and so on.
The AI vendors will also see the testing industry as being in a sweet spot for exploitation, with the stage set by years of talking about “testing is dead”, automating away the humans and surveillance capitalism’s normalizing of a dystopian world.
Recruiters seem to love keyword-based filtering, cutting the massive number of applications (for fewer and fewer pure testing roles) down to more manageable stacks to follow up.
For human testers, though, the moulding of the industry into its current form hasn’t been beneficial and, frankly, is likely to become even worse as AI hooks into more and more aspects of the development game.
So what?
The testing industry is what it is, shaped by many different forces over decades. For human testers, the time to be vocal about the value you offer is now – before it’s too late. The tidal wave of AI is heading your way and you can’t make a snorkel long enough to breathe through it – instead, head for higher ground where you can see the wave crashing in, while bringing your distinctly human skills to the table of those organizations still seeing value in what you bring.
There’s a big role to be played by those professional organizations representing testing as a craft, such as the Association for Software Testing. These kinds of voices carry weight and are less easily silenced by the lobbying and financial weight of the players looking to dehumanize our craft.
Focus on being more human, not more like the machines, and build communities of like-minded folks. The industrial revolution transformed manufacturing with its factory model and many people were (and are) content to buy factory-produced low-cost goods. But there are also plenty of other people who want a more artisan, hand-made, craftsperson experience behind their purchases. Excellent human testing is the same and there will, I believe, always be a market for the true craftspeople – go find it… or help to create it!
(Featured image on this post by Yusong He on Unsplash)