We’ve added a second session of the “one-time only” “Dragon and Knight” course on 16 Dec. to accommodate those who couldn’t attend the first one.

Slippery Risk: Some Thoughts on Risk from a Red Teamer’s Perspective

Why should red teamers care about risk? Risk is ubiquitous. We face it every day, even when we leave work and take off our professional hats. (Every mom and dad knows about the risks associated with letting children go to bed without first picking up their Lego.) Risk estimates drive decisions, good and bad. Red teaming can help inform inputs to a risk equation, and red teamers are basically asked to estimate risks from the perspective of an adversary or competitor.
      In this post I ask and answer five questions regarding risk from the perspective of a red teamer. If you’re looking for an academic review of risk and red teaming, this isn’t it. In fact, I don’t even fully answer most of the questions I raise.1 Why not? The risk community as a whole is still struggling with many of these issues. As I discuss below, however, simply acknowledging them is an important first step.

  1. What is risk (and who says)? This is the 21st century; surely we’ve conquered the question of risk by now. Well, yes … and no. We have formulas for risk, certainly, and it’s only natural to perceive something encapsulated in a formula as solid, measurable, and objective. How many of us, when told that risk is a product of threat, vulnerability, and consequence, for example, simply nodded our heads, assuming that a wise sage from times past derived the formula from first principles? Not so fast … dig into the risk community a bit, and you’ll quickly find that not everyone uses the same definition of risk. In fact, many definitions exist, and while most of them address the same idea (measuring the potential for loss), they do it in different ways that can lead to very different answers (the first sketch after this list shows how two common definitions can rank the same scenarios differently). The same goes for representations of risk. Risk is a number, right? Some people argue that it’s actually a curve. Most of us use simple risk matrices to represent risk, but that approach has problems, too.
  2. But haven’t insurance companies figured out how to quantify risk? Insurance is one of the great inventions of the modern age. It allows us to manage our risks. Imagine a world without insurance—a world in which you have to absorb the direct consequence of every risky event that occurs. Insurers survive by knowing the numbers, and they do so because they have data, usually lots of it.2 But what if you don’t have much data? What if the events to which you’re trying to assign a risk value happen only rarely or have never happened at all? These are questions risk theorists have been wrestling with, particularly over the last 10 or 15 years.
  3. How does risk account for the perceptual reciprocity between actors? For the most part, it doesn’t—at least the common, traditional approaches don’t. Imagine that I’m your adversary. I know your risk estimates hinge in part (largely?) on what you think I’ll do. The outcome, of course, hinges on what we both do. (This is basic game theory.) Given this perceptual “system,” I’d like you to underestimate certain risks and overestimate others. Can I influence your estimates? Absolutely. Looking at this from the opposite perspective, can you influence mine? Absolutely. What, then, does your risk estimate mean? If you can read my mind and accurately measure all the relevant decision factors (mine and yours), you can probably generate good estimates of risk. If you can’t read my mind, your risk estimates might do more harm than good, especially if I’m deceiving or misleading you … or if you communicate your estimates, intentionally or unintentionally … or if I can somehow manage to see them.
          In short, when dealing with adaptive adversaries, your (traditional) risk estimates approximate the “true” risk only when you’re confident that (a) I haven’t deceived you; (b) you’ve identified all my strategies, and I’ve identified all yours; (c) we perceive the same set of outcomes and measure them similarly; and (d) nothing has changed since you printed the estimate. The problem is that all of these factors can be very difficult to pin down with certainty. And don’t forget, you probably have more than one adversary, each of which might be trying to deceive you and each of which probably misperceives you to some degree. Again, this all ties back into game theory, a topic for another day (though the second sketch after this list makes the basic point with a toy payoff table), but take some hope in the fact that everything I’ve just said applies to your adversaries too. At the risk of extending this answer to the point of tedium, allow me to cite one of my favorite quotes from Michael Handel:

    … the reciprocal nature of all action in war means that attempts to grasp its complexities through a static, unilaterally based concept will never succeed…. a realistic approach must consider how one’s adversary interprets the war as well. Thus, perceiving the nature of a war is a reciprocal and dialectic process in which it is important to consider how one side’s perspective and actions affect the other side’s actions and reactions.3

  4. How does risk account for variance in perceptions? This builds on and complicates the issues I raised in the previous question. As in that case, I’m not prepared to answer it fully (it would take at least a book), but I do want to highlight it. (Being aware of it is half the battle.) As I suggested above, a sophisticated risk analyst acknowledges and accommodates the intertwined issues of perception, misperception, and deception. A sophisticated risk analyst also recognizes that even within groups, individuals perceive risks differently. Get 10 people in a room, ask them to estimate the risks associated with 10 different scenarios, and you’ll have 10 different sets of risk estimates. In the last question, we quickly jumped into the topic of game theory, and here we spring into the topic of risk perception. You can find entire books on the topic, but just know that risk has a strong subjective element and that people are generally quite poor at estimating and comparing risks. This in turn takes us into the topic of heuristics and biases, another area where you’ll find a large body of literature.
  5. How can I improve my understanding of risk? Regardless of how old you are or how long you live, you won’t be able to read everything about risk that’s ever been written. There’s just too much material, and we’re generating more and more every day. I suggest starting with the following books (and reading them in order). Each one brings something unique and important to the risk/red teaming table. The first book is Against the Gods by Peter Bernstein. It’s essential background reading and sets the foundation for many of the principles risk analysts apply daily. The second is a widely read and discussed book by Douglas Hubbard titled The Failure of Risk Management. Hubbard sometimes adopts a strident tone, but he raises critical issues, among them the simple but powerful principle of modeling uncertainty as part of your process of risk estimation (the final sketch after this list illustrates the idea). He also points out problems with the commonly used risk matrix. In both areas, he points you toward more resources if you want to learn more. The third is Risk by John Adams. This one will cause you to rethink much of what you read in the first two books by forcing you to consider risk as a subjective, reciprocal phenomenon.
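
      As promised in question one, here is a minimal sketch of my own (not drawn from any of the books above) that scores the same three invented scenarios under two common definitions of risk: an ordinal threat × vulnerability × consequence product and an annualized expected loss. Every scenario name and number is hypothetical.

    # A toy comparison (all numbers invented) of two risk definitions
    # applied to the same hypothetical scenarios.
    scenarios = {
        # name: (threat 1-5, vulnerability 1-5, consequence 1-5,
        #        annual probability, loss in dollars)
        "phishing":      (5, 4, 2, 0.60,    50_000),
        "insider theft": (2, 3, 5, 0.05, 2_000_000),
        "ransomware":    (4, 3, 4, 0.20,   800_000),
    }

    def ordinal_risk(t, v, c, *_):
        """Risk as the product of ordinal threat/vulnerability/consequence scores."""
        return t * v * c

    def expected_loss(_t, _v, _c, p, loss):
        """Risk as annualized expected loss: probability times impact."""
        return p * loss

    for name, vals in scenarios.items():
        print(f"{name:>14}: ordinal={ordinal_risk(*vals):>3}"
              f"  expected_loss=${expected_loss(*vals):>11,.0f}")

      Run it and the two columns rank the scenarios differently: the ordinal product puts the phishing scenario above the insider scenario, while the expected-loss view reverses them. Neither answer is “wrong”; they simply encode different definitions.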
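
      To make the reciprocity point from question three a bit more concrete, the next sketch is a toy two-player payoff table of my own construction (it is not from Handel, and every strategy name and payoff is invented): the defender’s realized loss depends on the attacker’s choice, so an attacker who can shape the defender’s belief can steer the defender toward the worse cell.

    # Defender's loss for each (defender_choice, attacker_choice) pair.
    # All strategy names and payoffs are invented for illustration.
    loss = {
        ("harden_perimeter", "phish_insiders"): 9,
        ("harden_perimeter", "exploit_edge"):   2,
        ("train_staff",      "phish_insiders"): 3,
        ("train_staff",      "exploit_edge"):   7,
    }

    def best_defense(believed_attack):
        """Pick the defense that minimizes loss given a belief about the attack."""
        return min(("harden_perimeter", "train_staff"),
                   key=lambda d: loss[(d, believed_attack)])

    # If the attacker convinces me the edge exploit is coming, I harden the perimeter...
    my_choice = best_defense("exploit_edge")                # -> "harden_perimeter"
    # ...and if they phish instead, my realized loss is the worst cell in the table.
    print(my_choice, loss[(my_choice, "phish_insiders")])   # harden_perimeter 9

      The numbers are meaningless; the structure is the point. The risk I face is a function of a choice my adversary has not yet made, and my estimate of that choice is something the adversary can work to corrupt.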
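
      Finally, to illustrate Hubbard’s principle of modeling uncertainty (and the earlier claim that risk may be better represented as a curve than as a single number), here is a small Monte Carlo sketch. It reflects my reading of the principle rather than anything taken from the book, and the distributions and parameters are invented.

    import random

    random.seed(1)
    TRIALS = 10_000

    def simulate_annual_loss():
        """One simulated year: does the event occur, and if so, how bad is it?"""
        p_event = random.uniform(0.05, 0.25)   # uncertain probability, not a point value
        if random.random() > p_event:
            return 0.0
        return random.lognormvariate(11, 1.0)  # uncertain impact in dollars

    losses = [simulate_annual_loss() for _ in range(TRIALS)]

    # Instead of one "risk number", report a loss-exceedance curve:
    # the chance that annual losses exceed each threshold.
    for threshold in (50_000, 250_000, 1_000_000):
        exceed = sum(1 for x in losses if x > threshold) / TRIALS
        print(f"P(annual loss > ${threshold:,}) ~ {exceed:.1%}")

      The output is a handful of points on a curve rather than a single score, which is exactly the kind of representation raised in question one.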

      Much of this discussion takes us back to Red Teaming Law #32: “No matter what the nature of the game, the red team’s ultimate target should always be the opponent’s mind. Everything else is just technique.” It’s easy to forget this when you start talking risk formulas and quantitative estimates of risk. Just remember that whenever you quantify risk in an adversarial or competitive context, the numbers usually represent an aspect of something that’s happening in the human mind, and that’s just plain slippery.4

  1. Maybe it’s more academic than I thought.
  2. For more on risk and insurance, see Bernstein’s Against the Gods, a title I discuss briefly in question and answer number five.
  3. Michael Handel, War, Strategy, and Intelligence (1989), pp. 94–95.
  4. We’ll be discussing these issues and more at The Watermark Institute’s next red teaming course.