Join us 2 March for our next Red Teaming 101 Webinar.

Model-Based Red Teaming (Part I)

Few red teamers probably think their jobs are easy—fun perhaps, but not easy. In most cases red teamers face a clear assignment: test a system, plan, or idea from the perspective of an adversary. In the simplest case, the red team considers a single adversary, enumerates the adversary’s possible courses of action, and organizes or orders them based on the adversary’s assumed preferences. In other cases, the red team might consider several adversaries independently or look for instances where the goals, strategies, and preferences of the adversaries overlap. Each case is progressively more challenging.
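      The cases above can be sketched in a few lines of code. This is a toy illustration only — the adversary names, courses of action, and preference weights are all hypothetical, and real red-team analysis involves far richer judgment than scoring and sorting:

```python
# Toy sketch of the red-team cases described above.
# Adversaries, COAs, and weights are hypothetical illustrations.

# Each adversary maps candidate courses of action (COAs) to an
# assumed preference score (higher = more attractive to them).
adversaries = {
    "RED-1": {"phishing": 0.9, "insider recruitment": 0.6, "physical intrusion": 0.2},
    "RED-2": {"phishing": 0.7, "supply-chain tamper": 0.8, "physical intrusion": 0.3},
}

# Simplest case: a single adversary -- enumerate its COAs and
# order them by assumed preference.
red1_ranked = sorted(adversaries["RED-1"].items(),
                     key=lambda kv: kv[1], reverse=True)

# Harder case: several adversaries -- look for COAs that appear
# in every adversary's repertoire (overlapping interests).
overlap = set.intersection(*(set(coas) for coas in adversaries.values()))

print([coa for coa, _ in red1_ranked])  # COAs ordered by RED-1's preference
print(overlap)                          # COAs shared by all adversaries
```

      Even this toy version hints at why the problem hardens quickly: the moment the adversary is a coalition rather than a unitary actor, a single preference score per COA stops being a faithful model.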
      Still, each of these cases is relatively straightforward. Problems arise when you consider that few adversaries are unitary; most are transient aggregations of mismatched preferences and conflicting interests. Courses of action that emerge from collections of stakeholders aggregated into an organization tend not to be optimized but satisficed, and that, of course, makes things very difficult for a red team.
      To put the challenge into perspective, just imagine how difficult it is to explain the behavior of BLUE (your own side and your own culture). Major decisions typically follow disagreement, debate, compromise, and negotiation. Often such decisions are difficult to untangle after the fact, even with the full cooperation of the participating decision makers.
      You can add to this conundrum all the problems that attend misperception and deception. Not only are adversaries likely to misperceive key decision factors relative to their adversary (BLUE), but the stakeholders within the adversary organization are likely to misperceive each other! Deception only multiplies the challenge (and don’t forget that deception isn’t limited to the RED/BLUE reciprocity; it can occur within RED and within BLUE).
      If this sounds a lot like a wicked problem, you’re right; it is. But don’t despair. You might not be able to model the situation quantitatively, but you can probably sketch it (and understand it) qualitatively. Soft systems analysts have been telling us this for decades (most of us just don’t want to hear it). Among other things, they’ve warned us that trying to solve a wicked problem using “tame” tools results in a mismatch—one that can lead to misunderstandings, false confidence, and poor decisions. In Part II, we’ll talk more about the soft systems toolkit and discuss why red teamers should embrace it.