In his book Design Paradigms,1 Henry Petroski highlights the concept of “proactive failure analysis.” He cites Christopher Alexander (the father of design patterns), who notes that
We are never capable of stating a design problem except in terms of the errors we have observed in past solutions to past problems. Even if we try to design something for an entirely new purpose that has never been conceived before, the best we can do in stating the problem is to anticipate how it might possibly go wrong by scanning mentally all the ways in which other things have gone wrong in the past.
Extending his argument, Petroski cites engineer Lev Zetlin, who notes that
Engineers should be slightly paranoiac during the design stage. They should consider and imagine that the impossible could happen. They should not be complacent and secure in the mere realization that if all the requirements of the design handbooks and manuals have been satisfied, the structure will be safe and sound.
Petroski further cites Zetlin’s observation: “I look at everything and try to imagine disaster. I am always scared. Imagination and fear are among the best engineering tools for preventing tragedy.”
Many forms of red teaming are arguably similar to engineering design. For example, IDART, the flagship Sandia red team, is (in its long form) the Information Design Assurance Red Team, and design assurance is one clear purpose a red team can serve. Systems engineers know well the principle that it is much less costly and troublesome to catch a flaw in the design phase than it is to recognize the same flaw much later in the system’s life cycle. I argue, in fact, that we can easily port Zetlin’s observation to the world of red teaming: “Imagination and fear are among the best red teaming tools for preventing tragedy.”
But does Alexander’s dictum hold in the domain of red teaming? Are we limited merely to “the ways in which other things have gone wrong in the past”? At a high level of abstraction this may be true. In other words, if we characterize all forms of failure and attack at the level of basic principles or classes, we can probably assign all failures and attacks, past and future, to one class or another. But patterns, principles, or classes are often not detailed enough to serve a decision maker’s purpose. The decision maker often aims to preempt or counter specific failures and attacks, in which case principles may serve as a useful guide, but the red teamer must still contextualize the principles.
It also seems likely to me that today’s complex world of increasingly interconnected technologies yields circumstances in which many failures and attacks are substantially new and unique. Analysts who limit themselves to “the ways in which other things have gone wrong in the past” are almost certainly going to be surprised. Why? Clever adversaries will often search for methods of attack that differ in some aspect or parameter from those used in the past. This is perhaps the key difference between traditional engineering design and red teaming: the active adversary.2