
Failure – if it cannot be avoided, at least prepare for it. It's inevitable.
Failure in complex systems is, like other behaviors of such systems, emergent.
Charles Perrow's Normal Accidents model is a good example.
In a recent project I realized that testing can act as a canary: early test results can indicate that things are heading in the wrong direction.
During a team meeting, the project manager asked why these defects were being raised as important, and one of the more experienced testers explained that they were a smell of something not being done properly, something that would fail down the line.
Coming across the opening quote in a book recently reminded me of that episode. It was a good example of two things: heuristics and complexity.
First, heuristics. Born out of first-hand experience, heuristics are among the better leading indicators when used properly. They offer flexible, easy-to-adapt rules that can be ported from context to context, from one project to another. There are many heuristics usable when testing software, and many more can be built as experience accumulates.
Cem Kaner defines a heuristic as a fallible method for solving a problem or making a decision. Below are a few examples of widely used testing heuristics, followed by a small sketch of one in action:
- FEW HICCUPPS
- Heuristics for Understanding Heuristics
- Heuristics and Leadership
- All Oracles are Heuristics
- Elisabeth Hendrickson’s Test Heuristics Cheat Sheet
- SFDIPOT (San Francisco Depot) – test strategy
- Steeplechase Heuristic – exploratory boundary testing
- Galumphing Heuristic – style of test execution
- Creep & Leap – for pattern investigation
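To make this concrete, here is a minimal sketch of a boundary-style heuristic encoded as a parametrized test. It is my own illustration, not taken from any of the sources above, and `validate_username` is a hypothetical function standing in for whatever is under test; the point is that checklist heuristics like Hendrickson's cheat sheet translate naturally into reusable test data.

```python
import pytest

# Hypothetical function under test: accepts usernames of 3 to 20 characters.
def validate_username(name: str) -> bool:
    return 3 <= len(name) <= 20

# Boundary heuristic: probe at, just below, and just above each limit,
# plus a classic "data type attack" value from heuristic cheat sheets.
CASES = [
    ("", False),        # empty input
    ("ab", False),      # just below the lower bound
    ("abc", True),      # at the lower bound
    ("a" * 20, True),   # at the upper bound
    ("a" * 21, False),  # just above the upper bound
    (" " * 5, True),    # whitespace-only passes the length check: a smell worth raising
]

@pytest.mark.parametrize("name,expected", CASES)
def test_username_boundaries(name, expected):
    assert validate_username(name) == expected
```

The cases are deliberately mundane; what makes this a heuristic is that the same probing pattern ports unchanged to the next input field, the next API, the next project.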
Second, complexity. Software development is often a complex activity in which elements never combined before are put together in new ways. Their interaction creates new behaviors and new contexts, with emergent properties springing from more than the sum of the components.
If I were to start on complexity, the first thing that comes to mind is Dave Snowden's Cynefin framework, and one can branch out from there, coming upon gems like this paper.
All in all, I think failure is rarely sudden; many times there are early signs, weak signals, throwing out hints of something about to happen. You know, like a bridge creaking and cracking slowly before it collapses. And I come back to Dave Snowden's work, to his point that it is preferable to have a system whose failure mode is not sudden, unwarned, and catastrophic. Such a system provides some kind of warning, weak signals that one can pick up and act on before the consequences escalate.
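As a toy illustration of picking up weak signals (my own sketch, not Snowden's), imagine tracking the per-build failure rate of a test suite and flagging a sustained upward drift long before anything breaks outright. The function and thresholds below are assumptions chosen for the example:

```python
def weak_signal(failure_rates, window=5, threshold=0.02):
    """Flag a sustained upward drift in per-build test failure rates.

    Compares the average of the most recent `window` builds against the
    average of the preceding `window` builds; a rise above `threshold`
    is treated as a weak signal worth investigating.
    """
    if len(failure_rates) < 2 * window:
        return False  # not enough history to judge a trend
    recent = sum(failure_rates[-window:]) / window
    earlier = sum(failure_rates[-2 * window:-window]) / window
    return recent - earlier > threshold

# Example: the failure rate creeping up across ten builds.
rates = [0.01, 0.01, 0.02, 0.01, 0.02, 0.03, 0.04, 0.05, 0.05, 0.06]
print(weak_signal(rates))  # True: the bridge has started to creak
```

The numbers are arbitrary; the point is that the signal shows up while acting on it is still cheap.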
And, to close, back to the title: failure is inevitable, so we should prepare for it and work with systems that make it easier to deal with. Entropy naturally increases, and much of our work is keeping it under control, so favor the systems that help with that mission.