One of the core themes in the “Becoming Odysseus” course is that of gaps—specifically, the importance of defining the gaps between your red team, your adversary, and the real world. If you fail to understand these gaps, you risk delivering skewed findings. In extreme cases, the result might be a red team report that does more harm than good.
Look at it this way: Imagine that you’re an engineer responsible for testing the aerodynamic performance of a new automobile design. You build a scaled-down model of the body and test it in a wind tunnel. In order to draw worthwhile conclusions from your test, you need to understand the “gaps” between your scale model and the full-scale design. A quarter-scale model, for example, might exhibit subtle or not-so-subtle performance differences when compared with the full-scale design. If you fail to account for these gaps, there’s a very good chance that your car will not perform as efficiently as expected.
If you’re a red teamer, you’re also a modeler. You typically can’t “build” a full-scale model of the adversary, so you abstract away a detail here and simplify a factor there (not to mention that no one—not even your adversary—fully understands your adversary). You’re constrained by cost, time, and knowledge to build and employ a red team that approximates your adversary. This applies equally to the context or scenario in which you embed your red team.
But do you know how your model differs from your adversary? Have you explicitly documented the gaps between your adversary’s characteristics (intent, resources, capability, time budget, awareness, knowledge, and so on) and the characteristics of your red team? Similarly, have you explicitly documented the gaps between the red team’s setting and the real world, current or forecasted? Do you understand how these inevitable gaps will affect your analysis and findings? If not, you’re working at least partially in the dark, and doing so means that you’ll misunderstand your findings, even when you write them yourself.
As you might expect, you can approach this challenge from many directions (and we do just that in the course). Here, we look at a concept borrowed from the field of knowledge management.
In business and government, it is often necessary to share knowledge across groups that lack a shared identity and base of common knowledge. This gap can impede the transfer of knowledge. To better understand the gap and thereby address it, knowledge management professionals sometimes turn to Carlile’s framework of “boundary types.”1
In this framework, Carlile posits three types of boundaries between functional groups: syntactic, semantic, and pragmatic. The syntactic boundary involves groups across which a “common syntax/language/understanding exists.”2 In these cases, the barrier is minimal and knowledge can be transferred. The semantic boundary involves “different understandings of the same knowledge” and “requires development of mutual understanding.”3 Where semantic boundaries exist, knowledge must undergo “translation.” The pragmatic boundary involves “conflicting interests which requires one party to adapt/change their knowledge.”4 When pragmatic boundaries exist, knowledge must undergo “transformation.”
While these boundary types apply most directly to cross-community knowledge sharing, we can usefully appropriate them to help us define the gap between our model and our adversary. If, for example, our adversary shares the same basic identity and knowledge framework as our red team, we can say that our red team and our adversary share a syntactic interface. If our adversary shares the same foundational interests as the red team but understands the same items of knowledge differently, we can say that our red team and our adversary exhibit a semantic gap. Finally, if our adversary holds a different set of interests and goals, we can say that our red team and our adversary exhibit a pragmatic gap. (We might also describe these gaps as degrees of asymmetry.)
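The escalating logic of the three boundary types can be read as a simple decision rule: conflicting interests dominate, then divergent understandings, and only when both are shared does a syntactic interface exist. A minimal sketch (the function and parameter names are our own illustrative shorthand, not Carlile’s):

```python
from enum import Enum


class Gap(Enum):
    SYNTACTIC = "syntactic"  # shared language and understanding: knowledge transfers
    SEMANTIC = "semantic"    # shared interests, different understandings: translate
    PRAGMATIC = "pragmatic"  # conflicting interests: knowledge must be transformed


def classify_gap(shared_interests: bool, shared_understanding: bool) -> Gap:
    """Classify the red team/adversary gap using Carlile's boundary types.

    Checks the deepest boundary first: conflicting interests imply a
    pragmatic gap regardless of how much understanding is shared.
    """
    if not shared_interests:
        return Gap.PRAGMATIC
    if not shared_understanding:
        return Gap.SEMANTIC
    return Gap.SYNTACTIC
```

For example, a red team that shares the adversary’s foundational interests but frames the same knowledge differently sits at `classify_gap(shared_interests=True, shared_understanding=False)`, a semantic gap whose knowledge requires translation.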
When you run a red team without mindfully considering the gap between the team and the adversary, you run the risk of assuming a syntactic interface when, in reality, a semantic or pragmatic gap exists. This might manifest itself in terms of misaligned perceptions, preferences, methods, or goals, any of which can undermine your approach and your findings. Alternatively, you might posit a semantic gap when, in fact, your adversary can simply employ insiders to activate what we might call a pseudo-syntactic interface. This is perhaps even more worrisome because it means you’ve probably underestimated your adversary.
To help address the unavoidable gaps between your red team and the real world, consider the following questions whenever you run a red team:
- Are you modeling a specific adversary or class of adversary? (If not, the issue of gaps is still important.)
- Who is the adversary you are modeling?
- Are you modeling more than one adversary?
- Have you explicitly defined the adversary’s (or adversaries’) intent, resources, capability, time budget, awareness, knowledge, and so on? (If not, you should.)
- What can the real adversary do that your red team cannot? (Consider this question both in terms of capability and rules of engagement.)
- Can the real adversary exploit time in ways that your red team cannot?
- How and in what ways might your real-world adversary’s goals and preferences differ from your red team’s?
- How and in what ways might perception and misperception differ between your red team and your real-world adversary?
- What might you know that your real-world adversary might not?
- What might your real-world adversary know that you might not? (This is a bit of an abstract question, but asking it can help you get at important factors that are otherwise easily overlooked.)
- How and in what ways might your adversary perceive risk differently than your red team?
- How and in what ways might your adversary perceive the value of life, time, money, and “things” differently than your red team?
And don’t stop there. No doubt you can think of additional questions that can help you further itemize the gap. We haven’t, for example, specified any questions addressing gaps between your context or scenario and the real world. (We leave that to you, at least for now.) And, as a matter of principle, don’t allow these questions to box you in. Questions can help you see new things, but they can also lead you to believe that you know more than you do. Never stop pursuing the questions you still haven’t identified, and watch for the biases you’ve embedded in your existing questions.
Finally, while you’re busy “minding the gap,” be sure to write everything down. Document each element and aspect of the gap as part of your red team’s “contract.” Taken as a whole, the gaps should also directly influence your analysis and findings. We provide templates in the “Becoming Odysseus” course to help you do this, but in some ways it’s even better if you build your own.
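One lightweight way to make that documentation concrete is a structured record per gap dimension, folded into the team’s “contract.” A minimal sketch (the class and field names, and the example entry, are illustrative assumptions, not the course templates):

```python
from dataclasses import dataclass, field


@dataclass
class GapRecord:
    """One documented gap between the red team and the real-world adversary."""
    dimension: str  # e.g. "intent", "resources", "time budget", "awareness"
    adversary: str  # what we believe the real adversary looks like here
    red_team: str   # what the red team actually has or is allowed to do
    impact: str     # how this gap should shade the analysis and findings


@dataclass
class RedTeamContract:
    """The gap inventory attached to a red team engagement."""
    adversary_name: str
    gaps: list = field(default_factory=list)

    def add_gap(self, dimension: str, adversary: str,
                red_team: str, impact: str) -> None:
        self.gaps.append(GapRecord(dimension, adversary, red_team, impact))


# A hypothetical entry showing how one gap dimension gets written down.
contract = RedTeamContract(adversary_name="Hypothetical ransomware crew")
contract.add_gap(
    dimension="time budget",
    adversary="months of patient, low-and-slow access",
    red_team="a two-week engagement window",
    impact="persistence-related findings are likely understated",
)
```

Iterating over `contract.gaps` when drafting the report keeps each finding explicitly tied to the gaps that shaped it.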
- See Carlile, P. (2002). A Pragmatic View of Knowledge and Boundaries: Boundary Objects in New Product Development. Organization Science, 13(4), 442–55 and Carlile, P. (2004). Transferring, Translating, and Transforming: An Integrative Framework for Managing Knowledge across Boundaries. Organization Science, 15(5), 555–68. [↩]
- Hislop, D. (2013). Knowledge Management in Organizations (Third ed.). Oxford, UK: Oxford University Press, p. 180. In this reference, Hislop is citing Carlile. [↩]
- Ibid., p. 180. [↩]
- Ibid., p. 180. [↩]