Testing Has Something To Do With Mass Extinction

Published on September 11, 2025

Okay, I’ll admit my title is a bit of click-bait. The better title would be “Testing Has Something To Do With Paleontology” but even that would not be correct since what I really would have to say is “Testing Has Something To Do With Paleontological Debates About Mass Extinctions in the Fossil Record.” Ugh. Even worse. You know what, let’s just dig in. (Pun slightly intended.)

There’s a lot of setup for this article so let me say that the point I will eventually reach is this: is testing or quality assurance headed for an extinction event? If that question interests you, bear with me on the context. I’m going to use mass extinction debates as a mirror to reflect how some people see testing today.

That being said, here’s my thesis: testing isn’t dying from external forces. Instead, testing is vulnerable precisely because it’s been too stable. Like paleontologists who couldn’t imagine sudden catastrophic change, we’ve become comfortable with gradualism while missing the signs that our very stability might be setting us up for sudden disruption.

Disagreeing on the Details

If you follow paleontology debates, you might know that what gets debated as fact or not fact can be so specific and so either-or as to make you wonder how there’s a debate in the first place. By this, I don’t mean minor disagreements over small details. Those are handled just as in any other discipline: find new evidence that settles the matter one way or the other. I mean disagreements over major-scale events. Case in point: one dispute was over the biggest mass extinction of all time.

Some paleontologists accepted that, around 250 million years ago, all of life on Earth came very close to complete annihilation. Another group argued that nothing at all had really happened. Thus, it was a case of either “On one particular Thursday, all life almost died off forever” or “It was a perfectly normal Thursday, not much to report.”

How could there be any debate about such a fundamental question predicated upon such a large distinction?

Now, granted, the fossil record of the history of life may be patchy and full of holes. Surely, however, there should be no doubt about an event that literally almost killed off all life on the planet. Either that happened or that did not happen. Right? Well, suppose you study the record and conclude, yes, a massive extinction really did occur 250 million years ago at the end of the Permian period. Then you face the puzzling question: how could some paleontologists deny it?

Note, of course, that you could easily have ended up on the other side of the issue. Maybe you felt there was no such major extinction event. Then you would have been wondering how it was even possible for some to suggest one! It’s really important to be able to think that way, essentially entertaining the counterfactual.

Here’s the key insight: these weren’t bad scientists. The extinction deniers were not seven-day creationists. They weren’t flat-earthers. They weren’t members of some fringe group that didn’t care at all about evidence. In fact, they were trained and highly experienced paleontologists. They knew their fossils. They knew how to read the geological record. And yet their reading of the record seemed to tell them that the end of the Permian passed with only the merest blip, the smallest disturbance, and, really, it was nothing to be concerned about.

Okay, so, what happened? They were trapped in uniformitarian thinking: the belief that geological and biological processes happen slowly and steadily over long periods. This commitment to gradualism blinded them to the possibility of sudden, catastrophic change. In testing, we see this too. Some professionals deny whole categories of testing (“exploratory isn’t real testing,” “developer testing is enough”) not because they’re ignorant, but because their worldview is shaped by what they’re comfortable with. Just like paleontologists read fossils differently, people can read the “testing record” differently depending on the frameworks they bring to it. And here’s the danger: when testers deny whole categories of practice, they aren’t erasing the discipline itself: it will continue to exist. What they’re eroding are the roles that protect and embody that discipline. In other words, it’s not testing that risks extinction, but the tester as a dedicated role.

A Tale of Two Extinctions

All of what I just described regarding paleontology was in fact the case from the early 1970s to the late 1990s. Starting around the 2000s, the viewpoints shifted. There were, at that time, two strongly argued catastrophic models for the end-Permian mass extinction.

Notice how we’ve gone from an extinction / no extinction binary to a discussion of what type of extinction. Distinctions, much like details, matter. This was an example of shifting the discussion. Ask yourself if, broadly, testers have managed to do that well in the industry.

One model tied the event to massive volcanic eruptions, producing thousands of cubic kilometers of lava and poisoning the atmosphere. These eruptions were sustained over half a million years or more and caused catastrophic environmental deterioration: lots of poison gas, soil stripping, and de-oxygenation. This viewpoint focused the debate on the synergistic effects of such a breakdown in Earth systems, when normal feedback processes are overwhelmed.

The other model linked the event to an external impact: a large meteorite or comet hit the Earth and caused global destruction. This destruction occurred via major environmental deterioration, with dust clouds blotting out the sun as rock strata were blasted into the atmosphere. All of this led to freezing cold and acid rain.

Note here that we have a terrestrial (from Earth) view and an extraterrestrial (not from Earth) view. One of these is a more uniformitarian view and the other is a more catastrophist view. Again, distinctions matter. This mirrors testing debates. Are the threats to QA “terrestrial” (internal: our own complacency, our over-reliance on the same arguments that have proven not to work)? Or are they “extraterrestrial” (external: AI tools, organizational shifts, cost-cutting)? Just like paleontologists argued about the origin of catastrophe, we argue whether the biggest pressures come from within the discipline or from forces outside it. But either way, the point isn’t that testing as a discipline disappears. Investigation, experimentation, and risk assessment will always exist. The real question is whether the tester role can survive these pressures, or whether the discipline will simply migrate elsewhere without that role.

And, of course, there is the related question: if the role is doomed to extinction, is that such a bad thing if the discipline continues to exist? I won’t be answering that here.

From Gradual Decline to Sudden Impact

If the Permian debate sounds contentious, it was nothing compared to the uproar over the dinosaurs. At the end of the Cretaceous, scientists again faced the same question: gradual decline, or sudden catastrophe? This was a heavily studied event in the late 1970s and here, too, there was much disagreement.

The common view was gradual decline, though dramatic proposals — a solar flare, a meteorite impact — occasionally surfaced.

These non-gradual views were all considered nonsense by the so-called “mainstream.” Why nonsense? Because the prevailing view was that Earth was subject to huge forces that moved the continents at a slow pace. But massive, sudden destruction events like impacts? No. Just no.

Then, in 1980, a group of physicists and geologists in California announced evidence that the Earth had indeed been hit by a huge impactor from space at around that time in the distant past and that the consequences of this impact had wiped out much of life, including the dinosaurs.

Even though there was quite a bit of evidence on offer, the idea was met with instant ridicule and derision by most geologists and paleontologists. Yet, as I’m sure many readers know, that idea is broadly accepted now. Something similar happens in testing. An idea once mocked, say, that developers could take over much of the testing, or that automation would redefine roles, suddenly becomes orthodoxy. Once the pendulum swings, it’s hard to question it, even when the reality is more nuanced.

But here’s the critical point: when automation or developer-led testing becomes the new orthodoxy, that doesn’t mean testing itself has gone extinct. As I said before, investigation, experimentation, and risk assessment are still essential. What may vanish are the dedicated roles that once embodied those practices. In other words, the discipline of testing persists, but the tester as a distinct role is what risks being eclipsed.

What’s interesting is that today impacts are so culturally accepted — think disaster movies — that any contrary evidence would likely be ridiculed. The tables have turned, just as in testing where once-mocked ideas (like automation or developer-led testing) are now — at least for some — beyond questioning.

Here’s the key takeaway: people shifted from believing only gradual processes mattered to accepting that sudden, catastrophic events could reshape everything. Testing faces the same shift: we can’t afford to assume slow, incremental change is the only story. Disruption is always closer than we think. (Maybe I wasn’t so crazy when I said a new narrative is needed.)

The Uniformitarian Comfort of Testing

For decades, I would argue that testing has been remarkably uniformitarian. We’ve evolved slowly, incrementally: refining processes, adding tools, tweaking methodologies. We’ve been comfortable with gradual change, and this very stability may be what leaves us vulnerable to sudden disruption.

Yet, as I’ve pointed out multiple times, it’s critical to see that disruption doesn’t mean the extinction of testing as a discipline. The practices themselves will continue to matter. What disruption threatens is the tester role: whether organizations still see value in dedicated practitioners who safeguard and advance that discipline. Stability has lulled us into thinking roles are secure, when in reality only the discipline is untouchable. The role is not. (Maybe I wasn’t so crazy when I wrote about acting like a developer.)

The Fossil Record of QA

Now, bear with me for a context shift and let me ask you this: when was the term “quality assurance” first used?

There isn’t any indication that folks like Deming, Shewhart, or Taylor used the term “quality assurance” in their writings or teachings. While they were certainly involved in the development of concepts around quality control and quality management, the term “quality assurance” as we understand it today doesn’t appear to have been part of their vocabulary in the early days of these developments.

Similarly, based on the available records, neither Philip Crosby nor Joseph Juran are widely recognized for specifically using the term “quality assurance” in their work. While both made significant contributions to quality management, their primary focus was on quality control and continuous improvement, with Crosby notably promoting the idea of “zero defects” and Juran emphasizing quality planning and management.

The shift from “quality control” (QC) to “quality assurance” (QA) likely began in the 1960s and 1970s. Okay, but when, specifically, did the term first appear? According to the Oxford English Dictionary, the earliest known use of the noun “quality assurance” is in the 1940s, with a citation from the Journal of the American Statistical Association.

You can look up usages through various sources. You can even use Google’s Ngram Viewer.
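If you want to poke at that data yourself rather than just eyeball the chart, here is a minimal sketch of pulling a phrase’s frequency over time. It leans on the Ngram Viewer’s unofficial JSON endpoint, so the URL, the corpus identifier, and the response shape are assumptions on my part and may change without notice.

```python
# Minimal sketch: pull the yearly frequency of a phrase from the Google Books
# Ngram Viewer. NOTE: this relies on an unofficial JSON endpoint (an
# assumption), so the URL, parameters, and response shape may change.
import requests

NGRAM_URL = "https://books.google.com/ngrams/json"


def ngram_timeseries(phrase, year_start=1900, year_end=2019, corpus="en-2019"):
    """Return (years, frequencies) for a phrase from the unofficial endpoint."""
    params = {
        "content": phrase,
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,   # corpus identifier is an assumption; adjust as needed
        "smoothing": 0,     # raw yearly values, no smoothing
    }
    resp = requests.get(NGRAM_URL, params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()  # expected: a list of {"ngram": ..., "timeseries": [...]}
    if not data:
        return [], []
    series = data[0]["timeseries"]
    years = list(range(year_start, year_start + len(series)))
    return years, series


if __name__ == "__main__":
    years, freqs = ngram_timeseries("quality assurance")
    first_hit = next((y for y, f in zip(years, freqs) if f > 0), None)
    print(f"Earliest year with a nonzero frequency in this corpus: {first_hit}")
```

Nothing here settles when the term was coined, of course; it just gives you a rough curve of when the phrase starts registering in the scanned-book corpus. Comparing “quality assurance” against “quality control” is the obvious next step if you want to see how the two phrases track each other.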

Yet, consider that while the formalization of “quality assurance” is often associated with the mid-twentieth century, various discoveries reveal its earlier usage. A search of historical news sources netted this:

The term appeared in The Times, a Washington, D.C. newspaper, in 1896. This early usage shows that the concept of “assuring quality” was already important to businesses in the late nineteenth century, even if the processes we use today had not yet been invented. This is like the fossil record again: the bones are there, but the ecosystem that gave them meaning looked very different. Early “quality assurance” was branding, not a discipline. Just as paleontologists reinterpret fossils with new frameworks, we reinterpret early quality assurance through the lens of later methodology.

The fact that “quality assurance” was used in a commercial context in 1896 — and there are many other examples I could point to as I trolled such news sources — suggests that the idea of providing guarantees or assurances about product quality was already present. Perhaps not surprisingly. The idea of “assuring quality” was a means of persuasion, just as defining mass extinction was a way to persuade the scientific community of a larger shift in how we see continuity, rupture, and recovery.
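For the curious, here is roughly how that kind of trawl can be reproduced today without access to a newspaper morgue. The sketch below queries the Library of Congress’s Chronicling America collection of digitized historical newspapers; the endpoint and parameter names reflect its public search API as I understand it, so treat the specifics as assumptions and check the current documentation before relying on them.

```python
# Minimal sketch: search digitized historical newspapers for a phrase.
# Uses the Library of Congress "Chronicling America" search API; the endpoint
# and parameter names are assumptions based on its public documentation.
import requests

SEARCH_URL = "https://chroniclingamerica.loc.gov/search/pages/results/"


def early_mentions(phrase, start_year=1890, end_year=1910, rows=20):
    """Return (date, newspaper title) pairs for pages matching the phrase."""
    params = {
        "andtext": phrase,              # pages containing all of these words
        "date1": str(start_year),
        "date2": str(end_year),
        "dateFilterType": "yearRange",
        "rows": rows,
        "format": "json",
    }
    resp = requests.get(SEARCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("items", [])
    # Each item describes an OCR'd newspaper page; "date" is a YYYYMMDD string.
    hits = [(item.get("date", ""), item.get("title", "")) for item in items]
    return sorted(hits)  # oldest first


if __name__ == "__main__":
    for date, title in early_mentions("quality assurance"):
        print(date, "-", title)
```

Because the matches come from OCR’d pages, expect noise: the two words may appear near each other without forming the phrase, so the hits are leads to go read, not conclusions.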

And that’s the lesson for us: labels and titles shift, sometimes dramatically. What businesses once called “quality assurance” was more marketing than method. Yet the underlying discipline of testing — critical evaluation, evidence gathering, risk consideration — remains. If we confuse the survival of a label or a role with the survival of the discipline itself, we risk misreading our own fossil record just as badly as those paleontologists who denied extinction events.

The Testing Fossil Record

Now, let’s pivot to the question of testing. Think about what is — and what is not — considered “testing.” Much like the paleontologists who denied the Permian extinction, some professionals in our industry deny the value or even the existence of certain forms of testing. Others mischaracterize testing in ways that distort its purpose, much like misreading a fossil layer. In both cases, the danger isn’t that evidence is missing. It’s that interpretation is compromised. The fossil record is patchy, but the bias is in how it’s read. Testing is the same: the record of practices is there, but people see only what fits their narrative.

Ah, but here’s where the analogy gets interesting! While many vocal testers claim we’re already in an extinction event — that testers are dying out, that QA teams are being eliminated — I believe they’re misreading the fossil record. The threat isn’t that testing is disappearing. The threat is that testing’s core identity is being fragmented, co-opted, and misunderstood.

What’s really happening is role extinction, not discipline extinction. Tester titles and QA departments may vanish, but testing itself, the investigative discipline, will persist. The tragedy is if, in losing the roles, we also let the discipline fragment into shallow practices without preserving its depth. Just as paleontologists know species disappear while life itself adapts, we have to recognize that roles may die while the discipline survives. The question is whether we safeguard that survival with clarity and intent.

Just as the paleontologists who denied the Permian extinction were trapped in uniformitarian thinking, believing that slow, gradual processes were the only forces that mattered, many in our industry have been lulled into thinking that testing’s steady evolution will continue indefinitely. We’ve become comfortable with the idea that change in our field happens gradually, predictably. But what if we’re wrong?

Yet my posts on testing in the 1980s and on testing Defender show that while contexts have shifted massively for the tester role, testing as a discipline has never gone anywhere. My post on Elden Ring testing with AI shows exactly the same thing.

The extinction-mongers on LinkedIn aren’t entirely wrong about the symptoms: they’re just misdiagnosing the disease. When they wail about testers being replaced by automation or developers “doing their own testing,” they’re pointing to real environmental pressures. But these aren’t meteors hitting the planet. These are the ecological stressors that make a population vulnerable to sudden change.

I explored this theme in my Nothing to Do with Testing post. The parallels are clear: we have people who either misunderstand what testing is or misrepresent it for the sake of convenience, trend-chasing, or institutional dogma. This occurs both externally and internally.

Even assuming someone grants my long argument here, a valid question is: does this really matter? Is it just semantics or professional defensiveness? Well, I don’t know, but let’s borrow a page from the paleontologist’s handbook again. Why study extinction events? Because they matter. They matter not just as curiosities of the past, but as insights into the future. Many believe we are living through the so-called “sixth extinction,” a period of human-driven biodiversity collapse. Whether or not that term sticks, the larger point remains: looking at the past helps us anticipate what may happen next.

The same applies to testing and QA. If we don’t understand our historical trajectory — how ideas evolved, how concepts were codified or ignored — then we will fail to understand how to adapt when disruption arrives. (Maybe I was on track when I called out the history of automated testing.) Whether the disruptor is Agile, DevOps, AI, or the next tooling trend, we’re constantly reacting instead of leading. In this, testing is not just threatened; it’s at risk of being cut out of the industry’s future lineage entirely.

The Real Extinction Threat

Thus, one of my strong beliefs is that the narrative of decline that dominates testing discussions focuses on the wrong extinction. The question isn’t whether there will be people called “testers” or departments called “QA.” The question is whether testing as a specialized discipline — with its unique approaches to investigation, experimentation, critical thinking, and risk assessment — will survive or be diluted beyond recognition.

This is actually where I part ways with both the doom-and-gloom crowd and the “testing is fine” crowd. I believe in democratizing testing and distributing quality assurance throughout teams. However, democratization doesn’t mean elimination: it means ensuring the specialized knowledge survives while becoming more accessible, even if the dedicated role of ‘tester’ doesn’t always survive with it.

A while back I talked about the evolution of testing. I also covered the distribute-and-democratize combo when I wrote about my role as a specialist.

Think of it this way: when paleontologists study mass extinctions, they’re not just counting species that vanished. They’re looking at which fundamental biological innovations were preserved and which were lost forever. Some extinctions eliminate entire approaches to solving life’s challenges. Others redistribute successful strategies into new forms.

The question for testing isn’t “Will testers exist?” but “Will the essential knowledge and approaches that make testing effective survive the transition to new organizational structures?” That should be the question people are asking. You can then adapt your role to whatever the current evolution pattern is, but without sacrificing your desire to keep testing front-and-center in our growing technocracy.

The Causes and the Culpability

When testers talk about decline, they often point to familiar culprits: universities that don’t teach testing well, hiring practices that filter for coding skills over testing skills, or leaders who don’t understand quality. These are real frustrations, but they are symptoms, not the disease. The real danger is our uniformitarian complacency: the assumption that testing will keep evolving slowly and predictably, leaving us unprepared for sudden, catastrophic change.

Education isn’t failing testing; it reflects the industry’s own confusion about testing. Universities follow industry trends. If we, as specialists in the discipline, don’t clearly articulate testing’s value, how can we expect academia to teach it? Hiring isn’t failing testers; it mirrors how organizations define the craft. If companies only see testing as automation, they will only hire for automation. Leadership isn’t failing testing; it follows the narratives that the community allows to persist. If we tolerate shallow definitions, we can’t be surprised when leaders manage to those definitions.

These aren’t meteors or volcanoes. They are ecological stressors that make role extinction more likely. Tester titles and QA departments can vanish, but, again, and I’ll keep beating this drum, that is not the extinction of testing. The discipline itself will survive. Or, rather, it might. The real risk, and the inherent danger, is that by failing to articulate and defend testing as a specialty, test practitioners allow the discipline to be fragmented and diluted as it disperses across other roles.

This is the culpability we bear as testers. We have too often fallen back on generalities like “we help surface risk,” which anyone can claim. A specialty is more than that. It can be operationalized and articulated so that others recognize it as unique, valuable, and transferable. If we fail to make that case, we shouldn’t be surprised when organizations conclude that anyone can do what we do, and quietly let the tester role die off. Roles may vanish. But the discipline must not, and it’s our responsibility to ensure that distinction is clear.

As both science and software have shown us, sometimes the massively disruptive happens. And those who survive are those who adapt: not by accident, but by awareness. The question has never been about whether change is coming to testing. The question has always been about whether we’ll be ready when it does. In other words: will we be like the paleontologists who only recognized catastrophe once it had already wiped out half the record, or will we learn to see the warning signs before our discipline becomes a fossil itself?
