In my last Red Teaming 101 webinar, I shared a concept that I often discuss in my red teaming courses. It involves the question of when you should red team the system of interest, where the system is some combination of people, technology, and processes. Like many issues connected with red teaming, the short answer is “it depends,” the middling answer is “it’s a tradespace,” and the long answer, well—it’s a longer answer. During the webinar, one of the participants asked if I’d posted the concept on RTJ; I hadn’t to that point, so I assembled this post.
I start with the notion that the cost to extract a defect from a system rises nonlinearly as the system moves from concept through architecture, testing, and operations. The idea is captured in a neat little chart in the INCOSE Systems Engineering Handbook (see figure below).[1] For example, it costs relatively little to add or adjust a critical requirement up front compared to the cost of discovering the requirement once the system is in operation. The same idea was captured in the old FRAM Oil Filter TV commercials. (“You can pay me [a little] now or pay me [a lot] later.”)
I then overlay on this “cost” curve a conjectured curve that represents an overall decline in uncertainty as the system progresses across the same phases. For example, during the initial concept phase, uncertainty is high; you have a general idea of what you want the system to do, but you usually haven’t yet decided how the system will accomplish your goals. You learn more as you define and analyze the system requirements, more still as you model the system functionally, more still as you define the solution, more still as you test and evaluate the system, and more still as you put the production system into operation.
So, the question remains: “When should you red team the system of interest?”—when uncertainty is high and the cost to extract defects is low, or when uncertainty is low and the cost to extract defects is high? I should note before answering that the chart here represents a broad-brush hypothesis. Specific cases will vary. In some cases, uncertainty will jump dramatically when the system goes into testing and evaluation. In other cases, the cost to extract a defect during a later phase might involve a simple and relatively cheap mod. Regardless, let’s accept the general hypothesis for now and see how it can inform red teaming choices.
You might choose to red team at point “A,” the argument being that you can unearth faulty assumptions that underpin equally faulty requirements. Additionally, you might find a flaw in the initial concept of operations in which a potential adversary or mission parameter is overlooked or minimized. Per the chart, you arguably can save a lot of money and hassle by red teaming here.
Alternatively, you might choose to red team at point “B,” the argument being that you’re now red teaming the real thing instead of a “paper” concept or model. You now have something you can really test, and what you learn from red teaming is much more tangible. It might cost you a lot more to fix the problems you find, but, you can argue, it will cost you even more if you wait or if you let your adversary be the one to uncover the flaw.
You might also choose to red team at both points, at any other point along the way, or even continuously. Of course, the more you red team, the more you spend, so you should weigh the cost of red teaming against the likely benefit within the context—in other words, you should balance the relative level of uncertainty, the cost of extracting defects (which may or may not be the inverse of the benefit of extracting defects), and the cost of red teaming. What’s more, you should also consider a range of other factors, many of which are likely to be ambiguous and beyond your control: changes in key stakeholders’ goals and preferences, unexpected changes in adversary capabilities and intentions, unexpected events, the potential for breakthroughs, and so on—all of which argues for vigilance, even when a formal red team might not be operating.
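One crude way to picture this tradespace is as an expected-value comparison. The sketch below is purely illustrative; the function, the probabilities, and all the cost figures are hypothetical numbers I’ve chosen to mirror the shape of the two curves, not data from any real program.

```python
# Toy model of the red teaming tradespace described above.
# All numbers are hypothetical illustrations.

def expected_net_benefit(p_find, fix_cost_now, fix_cost_later, red_team_cost):
    """Expected savings from red teaming at a given phase:
    the probability of finding a defect, times the cost avoided by
    fixing it now rather than later, minus the cost of the red team."""
    return p_find * (fix_cost_later - fix_cost_now) - red_team_cost

# Point "A" (concept phase): uncertainty is high, so the red team may be
# less likely to pin down a concrete defect, but each fix is cheap
# relative to discovering the same flaw in operations.
point_a = expected_net_benefit(p_find=0.4, fix_cost_now=1,
                               fix_cost_later=100, red_team_cost=10)

# Point "B" (test/early operations): findings are tangible and likelier,
# but each fix is far more expensive.
point_b = expected_net_benefit(p_find=0.8, fix_cost_now=60,
                               fix_cost_later=100, red_team_cost=10)

print(point_a, point_b)  # compare the two phases under these assumptions
```

Shift any of the inputs, say, a cheaper late-phase mod or a jump in uncertainty at test, and the comparison can flip, which is exactly why the honest answer stays “it depends.”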
In the end, the short and middling answers still work—“it depends” and “it’s a tradespace”—which leaves the hard answers and the real work up to you. By thinking in terms of uncertainty and the cost to extract defects, however, you’re now able to consider the potential nuances of the hard answers more intelligently. And besides, all of this is what prevents us from formulating a one-size-fits-all solution to red teaming, which, in my opinion, is what keeps red teaming fun.
[1] INCOSE, Systems Engineering Handbook, Fourth Ed., p. 14.