
The concept of “enough”
This weekend – by the time you read this, last weekend – I listened to my favorite woodworking podcast. The topic was “Gratitude, Appreciation and the concept of Enough”.
“Enough” is a topic that has been rumbling around in my head all the time lately. When I look at politics, careers, economics, global warming, ecosystems, billionaires, and everything along those lines, I keep asking: why do we always need more, more, more? This fundamental capitalist urge to always want more is deeply engraved into society. But that is not what I want to talk about today.
Perfection and completion are destinations at the end of a very long road, and a long stretch of that road is called “enough”. Unless you are doing a jigsaw puzzle, “enough” is likely to be found around 80-90% of perfect/complete. Sometimes it’s even less, sometimes maybe a bit more. The Pareto principle, also called the 80/20 rule, tells you that the further down the road you are, the longer the rest is going to take: 80% of the work takes 20% of the time, and the last 20% takes the other 80%. Or, as we call it in IT, the 80/80 rule: the first 80% takes 80% of the time, and the last 20% takes the other 80% – adding up to 160% of the original estimate.
Be it requirements engineering, programming, or testing, in the world of IT we regularly have to be content with “enough”. We may strive for perfection, but we don’t have the time, and sometimes not even the ability, to reach it. There are conflicting quality criteria, unknowns about the future, unclear requirements, missing feedback, and most of all, not enough time. We have to settle for “enough for now” on a regular basis.
The old tale that “testing is never done” comes from the fact that there is always something more to test. As testers especially, we have to make peace with sometimes being “done” even when it doesn’t feel like enough. The time to cover the remaining scenarios is often a multiple of what you need for solid coverage. This is why we need a risk-based approach: cover the biggest risk first, then the next, and the next, until time runs out.
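To make that concrete, here is a minimal sketch of such a risk-based selection in Python. Everything in it is made up for illustration – the TestCharter class, the 1-5 scales, the example charters, the time budget – and risk is modeled in the common way as likelihood times impact; we simply work down the sorted list until the budget is spent.

```python
from dataclasses import dataclass

@dataclass
class TestCharter:
    """A testing activity with a rough effort estimate and risk scores.

    The names and the 1-5 scales are illustrative assumptions,
    not a standard; use whatever scale your project agrees on.
    """
    name: str
    effort_hours: float
    likelihood: int  # how likely is a failure in this area? (1-5)
    impact: int      # how bad would that failure be? (1-5)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

def plan(charters: list[TestCharter], budget_hours: float) -> list[TestCharter]:
    """Cover the biggest risks first, until the time budget runs out."""
    selected = []
    for charter in sorted(charters, key=lambda c: c.risk, reverse=True):
        if charter.effort_hours <= budget_hours:
            selected.append(charter)
            budget_hours -= charter.effort_hours
    return selected

charters = [
    TestCharter("payment flow", effort_hours=8, likelihood=4, impact=5),
    TestCharter("profile page layout", effort_hours=3, likelihood=3, impact=1),
    TestCharter("data export", effort_hours=5, likelihood=2, impact=4),
]
for c in plan(charters, budget_hours=12):
    print(f"{c.name}: risk {c.risk}")
```

Whatever doesn’t fit into the budget becomes your explicit, written-down “not tested this time” list – which is exactly what makes the decisions defensible later.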
It’s tough to defend decisions about why we tested certain things and not others. Unless you do a full-fledged risk analysis up front to determine what to test first, second, and so on, it will always be a matter of trust: your project trusts you to test the most important aspects first, and to stop when you have a good feeling about the latest changes.
The more experience you gain with your product/project, and with testing in general, the easier it becomes to identify “enough”. But how do we teach “enough” to the next generation of testers? All testing courses that I’m aware of teach you how to derive all kinds of test cases with path coverage, decision coverage, state transitions, decision tables, etc. These techniques can produce a lot of test cases or experiments to cover, so you learn how to combine them to save time. But that still leaves too many. And we haven’t even spoken about testing performance, resilience, accessibility, security, you name it.
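To illustrate why combining test cases matters, here is a small sketch with made-up parameters. The full cartesian product of four parameters with three values each already yields 81 combinations; a naive greedy pairwise selection, shown below, covers every pair of values with only a fraction of them. This is just an illustration of the idea behind pairwise testing, not a production-grade tool.

```python
from itertools import combinations, product

# Hypothetical configuration parameters, purely for illustration.
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de", "fr"],
    "network": ["fast", "slow", "offline"],
}

names = list(parameters)
all_tests = [dict(zip(names, values)) for values in product(*parameters.values())]

# Every pair of (parameter, value) assignments that should appear
# together in at least one selected test case.
uncovered = {
    ((p1, v1), (p2, v2))
    for p1, p2 in combinations(names, 2)
    for v1 in parameters[p1]
    for v2 in parameters[p2]
}

def pairs_of(test):
    """All (parameter, value) pairs a single test case covers."""
    return {((p1, test[p1]), (p2, test[p2])) for p1, p2 in combinations(names, 2)}

# Greedily pick the test that covers the most still-uncovered pairs.
selected = []
while uncovered:
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    selected.append(best)
    uncovered -= pairs_of(best)

print(f"full combinations: {len(all_tests)}")   # 81
print(f"pairwise selection: {len(selected)}")   # far fewer, around a dozen
```

A dozen or so cases instead of 81 – and that’s for four tiny parameters. Now imagine the real configuration matrix of your product, and it becomes clear why even combined test cases still leave too many.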
Over time, a tester establishes a mental model of where the priorities lie and where the tricky spots might be – a certain grasp of the system at hand. This mental model goes further than what the specification can provide, and tells you more than standard test case analysis methods ever will.
Practice your systems thinking skills so that you can extend your mental model beyond the specification of the software – in case you even have one. Company goals, clients, user bases, regulations, and other teams or departments are all valuable elements of a model of the system you have to test. You can start with simple things: speak with your colleagues about what has changed and what is important to the customer or users. When you are new to the project, ask whether there were problems in that area before. Ask business people what is most important to them. In regulated environments, speak with your regulatory folks to understand what is an absolute must-have.
You will never have enough time to test everything, so any approach that helps you quickly recognize when enough is enough is useful.
And yes, you will miss issues. And this will be hard. But you are not alone: others missed the issue before you. The person writing the requirements didn’t spell it out in the specification. The developer didn’t think of it either. The code reviewer also didn’t have the case in mind. Maybe the case had simply never happened before. Appreciate what you have done, learn from the issues you missed, and don’t go too hard on yourself.
I’m still very hard on myself about every missed bug above a certain severity. I have at least gained the experience to be fine with minor ones; some bugs are simply not relevant enough to be worth covering beforehand.
Having missed a bug can also lead me to conclude that I will miss bugs like that again in the future. Some issues are just too time-intensive to mitigate.
So, if your software is not putting lives at risk, find a comfortable level of “enough”.