The Triangle of Perception: Why We See Testability Differently

Published on August 31, 2025

Rethinking Testability Part 3 – A series of blog posts based on my talk Improving Quality of Work and Life Through Testability


Rethinking Testability Part 1 – Testability is about people, not just code, Part 2 – Poor Testability is Everywhere – but we don’t always see it

Japanese anime-styled picture: a triangle in the center, with a girl with long brown hair to its left, facing the triangle and in dialogue with a black-haired guy to its right.
Triangle of Perception

Same same but different

Two people can work on the exact same system and what seems to be the same problem – and yet live in completely different worlds.

I learned this many years ago, when I was working with a developer and asking him to improve our logs so we could catch subtle problems. But we saw logs very differently: for me, they were essential; for him, they were occasional – which made him question whether the time and investment needed to improve them was worth it.

As a tester, logs were really important to me. I didn’t rely on them only when something was obviously broken – I needed that observability before anything failed. It helped me spot anything weird: things that might not be visible through the UI.

For the developer, logs were something he dug into after a failure—part of troubleshooting a known issue. Logs were helpful, but only needed now and then.

We weren’t disagreeing on whether logs were useful.
But how often we needed them, how we used them, and what we used them for shaped how we saw the need for investing in better testability.
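
To make the contrast concrete, here is a minimal, hypothetical sketch in Python – the module name, event fields, and values are invented for illustration, not taken from that project. The first function only logs once something is already known to be broken; the second leaves a structured trace on every attempt, which is the kind of signal that lets you notice something odd before anything visibly fails.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("payments")


def charge_minimal(order_id, amount):
    # Troubleshooting-style logging: nothing is written unless the call
    # fails, so the logs only help once you already know something broke.
    try:
        ...  # call to the payment provider omitted in this sketch
    except Exception:
        log.error("charge failed for order %s", order_id)
        raise


def charge_observable(order_id, amount, provider="acme-pay"):
    # Observability-style logging: every attempt leaves a structured event
    # with context, so someone skimming the logs can spot something odd
    # (a slow call, an unexpected provider) before anything fails.
    event = {
        "event": "charge_attempt",
        "order_id": order_id,
        "amount": amount,
        "provider": provider,
        "duration_ms": 42,  # would normally be measured, hard-coded here
    }
    log.info(json.dumps(event))
```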


Three Factors Shaping The Perception of Testability

A black-and-white illustration with a triangle in the middle. To the left is the silhouette of a person with short hair, to the right the silhouette of a person with long hair; both face the triangle. Above the triangle is the text “view of testing”, inside it “perception of testability”, in the left corner “usage of the system”, and in the right corner “frequency of interaction”.
Perception of Testability Triangle

Over time, I started noticing a certain pattern.
It seems like different people’s perceptions of testability are shaped by three main factors:

  1. Frequency of interaction — How often do you work with the product? Daily? Occasionally? Rarely?
  2. Usage of the system — How do you interact with the product? Whether you are building it, testing it, or observing it: when you do work with it, are you going deep into the system or just skimming the surface?
  3. View of testing — Do you see testing mainly as confirming known behaviors, or as exploring the unknown?

When your answers to those questions differ, your sense of what’s “good enough” for testability will differ too.


Confirmation vs. Exploration

The same black-and-white triangle illustration as above: “view of testing” at the top, “perception of testability” inside, “usage of the system” in the left corner, and “frequency of interaction” in the right corner.
View of Testing

I’ve noticed that the third factor — how you see testing — is the one that changes the conversation the most. (Note: I am deliberately polarizing and exaggerating the two views to make the distinction clearer.)

When someone sees testing as confirming expected outcomes, they’ll judge testability by how easily they can check the known. In my experience, the symptom of this is a heavy focus on testability for automation.

But if we see testing as exploration—about learning, discovering, and questioning—then what we need from testability will be different. We need to support serendipitous exploration—being able to notice something interesting and then quickly dig deeper without friction.

Unfortunately, most organizations I’ve worked with lean heavily toward optimizing for confirmation and verification, maybe because it’s easier to measure. Exploration often gets left behind, and when that happens we risk missing the bugs that really matter. For more on this topic, see my post on Testing Beyond Requirements.
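
As a rough illustration of the difference – the pricing example and every name in it are invented, not from any real codebase – the first check below confirms a known expectation and can only answer the question it was written to ask, while the small `explain` hook is the kind of low-friction affordance that supports exploration: when a total looks odd, you can immediately see which rules fired.

```python
from dataclasses import dataclass, field


@dataclass
class PricingResult:
    total: float
    rules_applied: list = field(default_factory=list)


def price(amount: float, member: bool = False) -> PricingResult:
    # A deliberately tiny pricing function, just enough to contrast the
    # two views of testing below.
    result = PricingResult(total=amount)
    if member:
        result.total *= 0.9
        result.rules_applied.append("member_discount_10pct")
    return result


def test_member_discount():
    # Confirmation: a scripted check of a known expectation. It answers
    # exactly one question and nothing else.
    assert price(100, member=True).total == 90


def explain(result: PricingResult) -> dict:
    # Exploration support: expose intermediate state on demand, so anyone
    # who notices a surprising total can dig in without a debugger.
    return {"total": result.total, "rules_applied": result.rules_applied}


if __name__ == "__main__":
    test_member_discount()
    print(explain(price(100, member=True)))
```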


Why This Matters

When someone nods along as you talk about improving testability, it’s worth checking:
Are they picturing the same thing you are?
Or are they imagining something completely different?

That shallow agreement can be dangerous — because it hides the fact that you might be solving for entirely different problems.

Rethinking Testability Part 1 – Testability is about people, not just code, Part 2 – Poor Testability is Everywhere – but we don’t always see it