
Why the Experimenter’s Mindset Outlasts Automation in Software Testing
The experimenter’s mindset beats the automation mindset
A few months ago, someone at a conference asked me whether I thought testers would still have jobs in five years.
It wasn’t a joke. You could hear the anxiety in the room, because the arrival of generative AI has reignited a very old fear in our industry: what if automation finally makes us obsolete?
I’ve lived through several versions of that question. I remember when automated UI testing was supposed to replace all manual testing. Then when continuous integration was supposed to replace test planning. Then when “shift left” was supposed to make testing disappear entirely.
Each time, the same thing happened. The people who defined their role as “executing tests” struggled. The people who defined their role as “learning about risk” adapted and thrived.
The difference wasn’t technical skill. It was mindset.
The automation mindset
For years, software testing has rewarded what I call the automation mindset: the belief that progress means doing the same thing faster and more consistently.
That’s not a bad instinct. Testing often involves repetitive work, and automation genuinely helps teams move faster. But it also encourages a narrow kind of thinking: one that values efficiency over understanding.
Automation thinking leads us to ask questions like:
- How can we reduce human involvement?
- How can we run more tests?
- How can we make this step disappear?
Those are fine questions when you already know you’re doing the right thing. But when you don’t, when you’re exploring new risks, complex systems, or emerging technologies like AI, that mindset becomes a trap.
Because automation doesn’t tell you whether you’re solving the right problem. It only tells you whether the machine ran the script correctly.
The limits of efficiency
AI makes this distinction more visible than ever.
A large language model can generate thousands of test cases or build a working Selenium script in seconds. It can analyse logs faster than any human, and even flag patterns that might indicate defects.
What it can’t do is decide which of those defects actually matter.
It can’t judge whether the cost of fixing something outweighs the risk of leaving it alone. It can’t negotiate trade-offs, or see that an edge case in the API is actually a business problem in disguise.
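To make that concrete, here is a minimal sketch of the kind of Selenium check an LLM can draft in seconds. The URL, locators, and credentials are hypothetical stand-ins, not a real application; the point is what the script can and cannot decide.

```python
# A minimal, hypothetical login check of the kind an LLM can generate quickly.
# The URL and element IDs are illustrative stand-ins, not a real system.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()

    # The script can check that the page *claims* login worked...
    assert "Dashboard" in driver.title
    # ...but nothing here can judge whether a failure is a release blocker
    # or an edge case nobody will ever hit. That call stays with a human.
finally:
    driver.quit()
```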
The irony is that AI might make testing more efficient, but unless we change how we think, it won’t make it more intelligent.
And that’s where the experimenter’s mindset comes in.
The experimenter’s mindset
The experimenter’s mindset starts with a simple shift: instead of seeing testing as a process to automate, see it as a system to learn from.
An experimenter doesn’t ask, “How can we make this faster?” They ask, “What would we learn if we tried this differently?”
They treat every test, every metric, every improvement idea as a small experiment. Some succeed, some don’t, but each one teaches something useful about the system and the people building it.
This is the mindset at the heart of my Q.E.D. framework: Question, Evidence, Develop.
- Question what problem you’re really solving.
- Gather Evidence to understand what’s happening.
- Develop your next move as an experiment, not a guarantee.
The point isn’t to be right all the time. It’s to learn quickly enough that being wrong doesn’t hurt.
How this applies to AI
When teams introduce AI tools into their workflow, the temptation is to jump straight to automation thinking: “How can we get this to do what we already do, but faster?”
But that’s rarely where the value lies.
AI can accelerate analysis, generate options, and surface insights, but only if you use it to learn, not just to execute.
A tester with an experimenter’s mindset uses AI the way a scientist uses a microscope: not to outsource their thinking, but to expand what they can observe.
They run controlled experiments. They gather data, evaluate it critically, and adapt. They use AI outputs as evidence, not as answers.
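As one small illustration of treating AI output as evidence, a tester might compare what an AI tool flags against a human-triaged sample and measure where they diverge, so the tool's judgement is itself under test. The defect IDs below are entirely hypothetical.

```python
# Hypothetical comparison of AI-flagged defects against a human-triaged sample.
# The goal is evidence about the tool's reliability, not blind acceptance.
ai_flagged = {"LOGIN-231", "PAY-045", "PAY-102", "SEARCH-017"}
human_triaged = {"PAY-045", "PAY-102", "CHECKOUT-008"}

agreed = ai_flagged & human_triaged
missed_by_ai = human_triaged - ai_flagged
extra_from_ai = ai_flagged - human_triaged

print(f"Agreement: {len(agreed)} of {len(human_triaged)} human-confirmed defects")
print(f"Missed by AI: {sorted(missed_by_ai)}")
print(f"Flagged only by AI (needs review): {sorted(extra_from_ai)}")
```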
And crucially, they share what they learn, turning individual curiosity into collective intelligence.
That’s how testers become indispensable in AI-driven teams. Not as scriptwriters, but as sense-makers.
How to start thinking like an experimenter
If your work has become dominated by test execution, bug triage, and dashboards, this shift can feel daunting. But it doesn’t require a big revolution. It just requires you to treat your work as a series of small, deliberate experiments.
Here are a few ways to start:
Reframe your assumptions.
Whenever you hear “we always do it this way,” that’s an opportunity. Ask, “What would happen if we didn’t?”
Work smaller.
If you’re unsure about a new tool or process, try it in one service for one sprint. Keep the feedback loop short enough that you can afford to fail.
Define success as learning.
If the experiment disproves your idea, that’s not wasted time. It’s a discovery. The only real failure is doing something and learning nothing.
Make learning visible.
Don’t bury results in test reports. Share them as short learning summaries: what you tried, what you saw, what it might mean.
Reward curiosity.
If your team only celebrates delivery, curiosity will wither. Recognise the people who ask good questions and challenge safe assumptions.
Experimentation as a human skill
The experimenter’s mindset is more than a way to protect your job. It’s a way to make your work matter.
AI can execute instructions with perfect precision, but it has no intuition, no judgement, no sense of proportion. It can’t tell when a metric is misleading or when a bug is more political than technical.
Those are human skills, and they’re not going away.
If the automation mindset is about replacing yourself, the experimenter’s mindset is about extending yourself.
It’s about becoming the kind of professional who helps your team adapt, no matter what new tool or technology arrives next.
Because in the end, the future of testing won’t be written by the people who automate the most.
It’ll be written by the people who keep learning the fastest.