Red Teaming: Closing the Gaps

Although we pride ourselves on thinking laterally and creatively, we red teamers are still human, and as humans, we share a host of “wetware” issues with our non-red-teaming colleagues. The difference? We’re aware of the issues (or at least we should be), and we (usually) try to do something about them. Even so, the issues persist.
      And what are these issues? I don’t claim to know or understand them all, but I am aware of a few that I’ve seen and experienced over the years.
      First, as humans we like stories; we like to hear them, we like to tell them, and we like them to make sense (and perhaps more importantly, our clients like them to make sense).1 Unfortunately, we’re all unreliable narrators to some degree. It’s not that we lie, per se; we just see the world through our own eyes, and the stories we compose reflect that. Sometimes we force things or gloss over them to make a story fit our expectations, to further an argument, or to promote a desired course of action. Often we do it unknowingly, forgetting that a fully sensible world is a fiction. After all, as Tom Clancy once said, “The difference between reality and fiction? Fiction has to make sense.”2 (We might even paraphrase this as “The difference between red teaming and fiction? Red teaming has to make sense.”)
      Second, while we’re trying to make sense of the world—wrapping it up in stories—we’re doing so through our own eyes, in our own very particular heads. We each have strengths, weaknesses, and blind spots. Some of these are cultural; some are embedded in our language, others in our decision-making schemas, still others in our personalities and emotions. But isn’t that our job as red teamers, to see the world through others’ eyes? Yes, but what we actually do is more akin to putting on someone else’s glasses. Believing that we truly see through someone else’s eyes is a conceit best acknowledged and addressed.
      Third, we’re not very good at anticipating our own actions, let alone someone else’s. Philip Tetlock’s book *Expert Political Judgment* is a good summary of this very human shortcoming, which, by the way, is compounded in group situations, where decisions often emerge from a tangle of negotiation, compromise, bluster, and influence. How can we, as a red team, speak to Adversary X’s preferences and choices when we’re not even sure of our own much of the time? How can we speak to Adversary X’s next moves when ours are neither algorithmic nor predetermined? (And we should know far more about ourselves than we do about any adversary.) If we were to set up a red team to anticipate the decisions and moves of our own organization, government, or military, just how successful would this red team be? Twice as successful as an adversary-focused red team? Just as successful? Less successful? The fact that such a question is difficult to answer should tell us a great deal about the common “truth claims” of the practice.
      Fourth, we rarely go hungry. On Maslow’s hierarchy of needs, we’re usually near the top, unlike many of our potential adversaries, some of whom may be most concerned with physiological and safety needs. To what degree are we able to employ a sterile “adversary model” effectively when we get to go back to our comfortable home at the end of the day, or when, if our child is sick, we can take them to the doctor tomorrow and fill a prescription? Not every adversary has that option. The need to fight and scrape for shelter or safety changes one’s risk calculus dramatically, and that’s difficult to act out in a red teaming engagement.
      I attended a red teaming course once and asked the instructor, “How do we learn to think like the adversary?” The answer: “The method—the method does that.” Yet the method was rooted in systems engineering processes. In terms of thinking like the adversary, it did little more than perpetuate a Western, rational mindset, which is fine, I suppose, if all your adversaries are engineers. As we know, however, they’re not.
      To date, an engineering/systems analysis mindset rooted largely in normative standards of rational thought has dominated the practice of analytical red teaming. It’s a good start, but it’s not sufficient in a chaotic world of adversaries who not only think and feel differently than we do (to some degree, small or large) but also, in many cases, seek to exploit the way we think and feel. Tomorrow’s (no, today’s) red teams need to think less about the potential power of red teaming and more about the gaps between the current practice and the perpetually messy “real” world.3 The sooner we’re able to do this, the sooner we can start to close these gaps and move the practice of red teaming forward.

Postscript: For more on this issue and others, listen to Redteam.net’s podcast episode 6.

  1. I’m using “story” here in the broadest sense possible, to encompass all the individual and group narratives we construct.
  2. Others expressed this idea before Clancy. Examples include Lord Byron (“Truth is always strange; stranger than fiction.”), Mark Twain (“Truth is stranger than fiction, but it is because fiction is obliged to stick to possibilities; truth isn’t.”), and G. K. Chesterton (“Truth must necessarily be stranger than fiction, for fiction is the creation of the human mind and therefore congenial to it.”).
  3. . . . or at least move it closer to our *perception* of the “real” world.
