“Red Team Journal still serves as the best open-source repository for helpful hints and emerging practices in the field.”
— Micah Zenko, Red Team (2015)
Pleased to Meet You, General Red

I just listened to “Uri’s rant” (episode 022 of The Red Team podcast), which followed an episode in which I joined Uri and Dan to discuss, among other things, the nature of red teaming. The gist of the new episode is “red teaming is not pentesting.” After finishing the episode, I thought a bit more about the possible differences. Here are my initial thoughts (sorry for the repetition, but in this case I think it’s necessary):

  • Some potential adversaries will likely play a long game, and most pentesting methods just don’t model the long game well. It’s problematic that the adversaries who play the long game are usually among the most worrisome. How many pentesting engagements (and pentesting engagements rebranded as red teaming) consider the long game?

  • Some potential adversaries will exploit contingencies, some of which they will encourage and some of which they will simply leverage. Again, the adversaries who do this are among the most worrisome. How many pentesting engagements (and pentesting engagements rebranded as red teaming) consider contingencies?

  • Some potential adversaries will induce misperceptions and actively employ deception. Once again, the adversaries who do this are among the most worrisome. How many pentesting engagements (and pentesting engagements rebranded as red teaming) consider misperception and deception (beyond tactical social engineering)?

  • Some potential adversaries won’t break a single thing; they’ll sneak in and wait. What they eventually do might be unrelated entirely to the vulnerability they exploited to get in. Once again, the adversaries who do this are among the most worrisome. How many pentesting engagements (and pentesting engagements rebranded as red teaming) explore the implications of these sorts of subtle, diagonal raids?

  • Finally, most pentesters I’ve encountered apply an existing toolkit to each new problem. Yes, they expand this toolkit as they learn, but the toolkit itself is generic and reusable.

With this in mind, the “April 2018 me” believes that . . .

Pentesting (or “pentesting rebranded as red teaming”) is a largely technical/tactical exercise in which the pentesters employ an existing toolkit to hunt vulnerabilities in an organization’s system or systems.

Red teaming is an exercise in which the red team attempts to undermine, leverage, or attack an organization across time, not just by breaking things but also by manipulating things and exploring the second- and third-order implications of breaking and manipulating things (as real-world adversaries are likely to do). It can also include using the target organization as a stepping stone to achieve other ends.

These are extremes, I admit, and most actual pentesting and red teaming as practiced today probably sits somewhere between the two (with pentesting closer to the former and “real” red teaming closer to the latter).

Unfortunately, "real" red teaming is more difficult than ever. Historical red teaming was rooted in the classic “Red vs. Blue” battlefield confrontation: General Blue’s staff tries to think like General Red (or like General Red’s staff) to better plan the forthcoming battle. It’s the archetypal Clausewitzian duel. Unfortunately, we no longer live in a “duelistic” world. Attackers of all stripes harass organizations from all directions (including from within). Thinking about what “red” might do or what circumstances might undermine an organization requires a sophisticated level of systems thinking, and a the typical “red teaming as pentesting" technical/tactical toolkit is insufficient. When you play “red” today, you must consider the operational codes of multiple potential adversaries.

The classic “duelistic” red teamer knew (or at least knew of) General Red. He knew how General Red thought, how General Red acted in the past, and even what sorts of stratagems General Red might employ—in short, the need to get inside General Red’s head was immediate and obvious. In most cases today, no obvious General Red exists, or so many possible General Reds exist that we fall back on hunting vulnerabilities rather than truly “thinking like red.” It’s easier and in most cases useful, but it ain’t red teaming in the full sense.

The 'Beyond Red Teaming' (BRT) Cards

The Red Teamer's Manifesto
