
Toward a Red Teaming Taxonomy, 2.0 (2004)

This article from Sept. 2004 is one of the most requested method papers from the previous iteration of Red Team Journal. Although many of my ideas have matured in the past four years, I believe the paper remains valuable, if only as a point of departure for further discussion. I also still recommend Russo and Schoemaker’s work, from which I drew some key concepts.

Introduction

In July 2003, I posted a Red Team Journal method paper titled “Toward a Red Teaming Taxonomy.”[1] My aim was to encourage red teamers to adopt a “common set of definitions and relationships” in order to “carry lessons across fields and combine approaches in new ways.” To this end, I characterized red teaming by approach and problem of interest.[2]
      In this paper, I first define red teaming and then characterize it by purpose, approach, and decision structure. I believe this new taxonomy better meets my original aim.

Definition

Red teaming is implicit in many different activities. For example,

  • Militaries red team when they identify and game their adversaries’ courses of action;
  • Physical security professionals red team when they survey and assess vulnerabilities;
  • Computer security professionals red team when they test and penetrate client networks;
  • Detectives red team when they attempt to get inside a criminal’s mind; and
  • Corporations red team when they review and challenge their own proposals and initiatives from a competitive perspective.

Although these activities vary in purpose, scope, and process, they share a common root: in each case, the friendly side (“BLUE”) attempts to view a problem through the eyes of an adversary or competitor (“RED”). This leads to the following definition:

Red teaming involves any activity—implicit or explicit—in which one actor (“BLUE”) attempts to understand, challenge, or test a friendly system, plan, or perspective through the eyes of an adversary or competitor (“RED”).

      Purists might be tempted to limit red teaming to activities that apply explicit processes. I prefer a broad definition, if only because the field is still emerging and a narrow definition would exclude important, closely related activities and discourage crosstalk between these activities and explicit forms of red teaming. Overall, my strategy is to define red teaming broadly and then, within this broad definition, differentiate activities by purpose, approach, and decision structure.[3]

Purpose and Approach

Despite its many variations, red teaming generally serves one of four purposes: understand, anticipate, test, and train. These purposes are cumulative; each builds on the one before. Some red teams, for example, seek primarily to understand, while a red team that seeks to train must also understand, anticipate, and test. The following table describes each purpose.

Table 1: Red teaming purposes and approaches.

      As table 1 further illustrates, these four purposes correspond to two broad approaches: passive and active. A red team that undertakes a passive approach attempts to understand RED and anticipate RED’s actions, but it does not play out these actions in an operational or experimental setting. In contrast, a red team that undertakes an active approach does play out RED’s actions, typically against a “live” BLUE in an operational setting. In this sense, active red teaming tends to be more interactive than passive red teaming.
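For readers who prefer a concrete representation, the following minimal Python sketch encodes the purpose-to-approach mapping just described, along with the cumulative relationship among purposes noted earlier. The identifiers and structure are my own illustration, not part of the taxonomy itself:

```python
from enum import Enum


class Purpose(Enum):
    """The four red teaming purposes, in cumulative order."""
    UNDERSTAND = 1
    ANTICIPATE = 2
    TEST = 3
    TRAIN = 4


class Approach(Enum):
    PASSIVE = "passive"  # understand and anticipate RED without playing out its actions
    ACTIVE = "active"    # play out RED's actions, typically against a "live" BLUE


def approach_for(purpose: Purpose) -> Approach:
    """Each purpose implies one of the two broad approaches (per table 1)."""
    if purpose in (Purpose.UNDERSTAND, Purpose.ANTICIPATE):
        return Approach.PASSIVE
    return Approach.ACTIVE


def cumulative_purposes(purpose: Purpose) -> list[Purpose]:
    """A purpose builds on all those before it; a team that trains
    must also understand, anticipate, and test."""
    return [p for p in Purpose if p.value <= purpose.value]


assert approach_for(Purpose.TRAIN) is Approach.ACTIVE
assert cumulative_purposes(Purpose.TEST) == [
    Purpose.UNDERSTAND, Purpose.ANTICIPATE, Purpose.TEST]
```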

Decision Structure

Much like decision analysis in general, red teaming can vary from highly intuitive to highly ordered forms. Figure 1 illustrates this hierarchy.[4]

Figure 1: Decision structures.

As the figure indicates, most red teaming (like most decision making) is based on intuition. Too few red teaming projects adopt more complex structures; more should consider doing so. This is not to say that intuition is out of place in red teaming or that all red teaming must adopt a complex decision structure. As one would expect, the degree of structure required is largely contextual.
      Table 2 outlines some of the tradeoffs involved.[5] It suggests, for example, that more complex levels of structure require more effort but also yield higher quality and greater transparency. The challenge for red teams is to decide when intuition is best and when complex analysis is more appropriate. Red teamers should also consider combining multiple levels of decision structure in the same red teaming project.[6]

Table 2: Pros and cons of different levels of decision structure.
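Because table 2 itself is not reproduced here, the following sketch merely illustrates the shape of the tradeoff the text describes; the level names and scores are hypothetical placeholders of mine, not values from the table:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class StructureTradeoff:
    """Tradeoff profile for one level of decision structure.
    Scores (1-5) are illustrative placeholders, not values from table 2."""
    level: str
    effort: int        # cost of applying the structure
    quality: int       # expected quality of the result
    transparency: int  # ease of explaining and auditing the decision


# Hypothetical levels, ordered from intuitive to highly ordered; effort,
# quality, and transparency rise together, as the text suggests.
LEVELS = [
    StructureTradeoff("intuition", effort=1, quality=2, transparency=1),
    StructureTradeoff("simple rules", effort=2, quality=3, transparency=3),
    StructureTradeoff("formal analysis", effort=5, quality=5, transparency=5),
]


def pick_level(effort_budget: int) -> StructureTradeoff:
    """Choose the highest-quality level the budget allows,
    falling back to intuition when resources are scarce."""
    affordable = [lv for lv in LEVELS if lv.effort <= effort_budget] or LEVELS[:1]
    return max(affordable, key=lambda lv: lv.quality)
```

The fallback to intuition in `pick_level` mirrors the point above: intuition is not out of place; it is simply one end of a spectrum whose appropriate point is largely contextual.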

Summary

Table 3 combines the elements of the new taxonomy into a single matrix. It illustrates that many combinations of purpose, approach, and decision structure are possible. When planning a red teaming project, an organization should review these combinations, weigh their advantages and disadvantages, and select the most appropriate one.

Table 3: Purpose, approach, and decision structure.
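As a rough illustration of the matrix’s scope, one can enumerate the combinations programmatically. The purpose-to-approach mapping follows table 1; the structure labels beyond “intuition” are placeholders of mine, since figure 1 is not reproduced here:

```python
from itertools import product

PURPOSES = ["understand", "anticipate", "test", "train"]
APPROACH = {"understand": "passive", "anticipate": "passive",
            "test": "active", "train": "active"}  # per table 1
STRUCTURES = ["intuition", "simple rules", "formal analysis"]  # placeholder labels

# Each (purpose, structure) pair is a candidate project design; the
# approach follows from the purpose. Twelve combinations result here.
for purpose, structure in product(PURPOSES, STRUCTURES):
    print(f"{purpose:10s}  {APPROACH[purpose]:7s}  {structure}")
```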

      For most red teamers, intuition remains the structure of choice. In some cases, it has served us well; in many others it has not. Simply recognizing that alternatives exist is an important step toward more vigorous, robust, and adaptive red teaming—one more reason, in fact, to pursue, discuss, and debate a common taxonomy.
      In July 2003 I noted the following:

A red teaming taxonomy is long overdue. With it, we can further refine our methods, cross-pollinate between fields, and extract lessons we would otherwise miss. Without it, we may understand intuitively that different red teaming approaches exist, but our ability to distinguish between them (and combine them in new ways) is limited.

I believe this still holds; if anything, the need for shared definitions and a common taxonomy is increasing as red teaming receives more attention and more organizations undertake red teaming projects.

Further Reading

J. Edward Russo and Paul J. H. Schoemaker, Winning Decisions (2002).

  1. The July 2003 article is separate from this September 2004 article. I have not re-posted the 2003 article.
  2. The problem of interest, though important, is more an issue of application than of taxonomy. My attempt in the first paper to match approaches with problems of interest probably obscured as much as it revealed.
  3. For a more recent but less complete description of red teaming, see the “About” page of this site.
  4. I derived figure 1 from a similar diagram in J. Edward Russo and Paul J. H. Schoemaker’s 2002 book Winning Decisions (see p. 135).
  5. I derived table 2 as well from Russo and Schoemaker’s Winning Decisions (see p. 155).
  6. Of particular interest here is the notion of transparency. As defined by Russo and Schoemaker, “Transparency refers to how easy it is to verbalize the rule, use it as a basis for learning, and self audit, as well as to persuade or legitimate the decision to others” (Winning Decisions, p. 155). Good red teaming demands more transparency than is generally possible with strictly intuition-based structures.