Red Teaming: Degrees of Influence and Control
When red teaming, it’s often useful to model and distinguish elements of the engagement by the degree of influence or control each actor exercises over them. For example, as the red team (RED), I unilaterally control some aspects of the engagement domain. I choose my goals, and I choose how to invest my time and resources. I also choose various aspects of my operational code. I generally do not, however, choose the outcomes of my strategies. Outcomes fall within an area controlled jointly by RED and the defender (BLUE). Much like a game of chess, both players’ decisions determine the game’s outcome. (The inspiration for this tripartite division between things we control unilaterally, things we influence, and things we neither control nor influence comes from a similar concept in Kees van der Heijden's excellent Scenarios: The Art of Strategic Conversation (2005), although he didn't separate RED and BLUE.)
Beyond the elements I control unilaterally and the elements I influence exist elements over which I have no control. For instance, I exercise no control over nature, at least on the scale of natural disasters such as hurricanes and earthquakes. And unless I’m a powerful or extremely clever RED, my ability to influence the global economy approaches the limit of zero.
Yet natural disasters and the state of the economy can affect BLUE’s capabilities and decisions as well as my own. As a result, I might wait for a natural disaster to strike before executing a conditional attack, and an astute BLUE might plan to implement certain defensive strategies given the same conditions. Red team leaders and clients should consider these possibilities when drafting the engagement’s scenario or context. Failure to raise such issues embeds the engagement within a system of unstated suppositions, many of which remain ambiguous and many more of which are likely to vary from individual to individual.
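The tripartite model can be made concrete with a small sketch. The sketch below is purely illustrative, not part of any real engagement tooling: the `Domain` enum, the element names, and the `uncontrolled` helper are all hypothetical, chosen to mirror the examples above (goals and resources under RED's unilateral control, outcomes under joint influence, nature and the economy under neither).

```python
from enum import Enum, auto

class Domain(Enum):
    """Who determines an element's state, from one actor's perspective."""
    CONTROL = auto()    # the actor decides unilaterally
    INFLUENCE = auto()  # determined jointly with other actors
    NEITHER = auto()    # beyond the actor's reach (e.g., nature)

# Hypothetical classification of engagement elements from RED's perspective.
RED_VIEW = {
    "red_goals": Domain.CONTROL,
    "red_resource_allocation": Domain.CONTROL,
    "engagement_outcome": Domain.INFLUENCE,  # shaped jointly with BLUE
    "natural_disasters": Domain.NEITHER,
    "global_economy": Domain.NEITHER,
}

def uncontrolled(view):
    """Elements neither controlled nor influenced -- candidates for
    explicit scenario assumptions rather than unstated suppositions."""
    return [element for element, domain in view.items()
            if domain is Domain.NEITHER]
```

Enumerating `uncontrolled(RED_VIEW)` during scenario drafting is one way to surface the suppositions that would otherwise remain unstated.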
Of course, every model reveals only so much, and most models both reveal and conceal at the same time. One area of potential fuzziness in this model is the realm of perception, misperception, and deception, via which a clever RED or BLUE might manipulate elements that would otherwise be within an opponent’s area of unilateral control. An example? Though BLUE chooses its strategies and investments, I can manipulate BLUE’s view of the world so that BLUE’s choices and investments benefit me. I’ve now penetrated what before was the exclusive domain of BLUE’s unilateral decision-making.
Another aspect of the real world this model fails to address is the effect of time. As time passes, the boundaries of the three domains may shift, sometimes under the influence of an actor’s upstream manipulations. Actors may also learn more about the system in ways that allow them to influence elements that were previously immune to influence or control. If I learn to engineer the weather, for example, the weather now falls within my domain of control. Less dramatically, I might simply learn over time how BLUE responds to certain triggers, thus placing aspects of BLUE’s behavior within my influence, if not within my control.
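The shifting boundaries can be sketched as a reclassification over time. Again, this is a hypothetical illustration: the element names and the `learn` helper are invented here to show the mechanism, nothing more.

```python
# Hypothetical sketch: domain boundaries shift as an actor learns.
# Classifications are from RED's perspective at the start of play.
view = {
    "blue_trigger_responses": "neither",  # initially beyond RED's reach
    "weather": "neither",
}

def learn(view, element, new_domain):
    """Return an updated classification after the actor gains new
    knowledge or capability -- e.g., RED observes how BLUE reacts
    to certain triggers, moving that behavior into its influence."""
    return {**view, element: new_domain}

# Over time, RED learns BLUE's responses; the weather stays out of reach.
view = learn(view, "blue_trigger_responses", "influence")
```

The point of the sketch is simply that any static classification is a snapshot; a long engagement may need to revisit it.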
In practice, red teams often run about “thinking evil” without acknowledging the context in which the real-world actors they seek to mimic operate. This is certainly true when red teams engage in adversary emulation. It also applies obliquely when a red team engages in vulnerability assessment. In both cases, unexamined contexts yield red team engagements in which the red team (1) “plays” a default scenario that reflects the red team’s assumptions and (2) emulates an adversary who looks and behaves precisely like the red team. As noted elsewhere on this site, the gaps that exist between the red team and the real-world adversary—if not addressed—can misinform the downstream analysis.
Ultimately, adversarial red teaming remains more art than science. As such, it continues to resist efforts to commoditize its methods and processes. At the same time, red teamers can generally do more to frame their engagements, especially the context in which the red team “plays” the scenario. Real-world REDs inevitably face constraints, frustrations, and unexpected setbacks beyond their control, and so do red teams; the prudent question, then, is this: how and to what degree do these constraints, frustrations, and setbacks diverge? If they differ significantly, the engagement manager or red team lead has a good deal more work to do before unleashing the red team.