I spoke briefly yesterday with a gentleman who runs a successful pentesting company. For the most part, I get what he does, but I don’t think he got what I do, nor did he seem inclined to ask any questions to find out exactly what that might be. (At one point, he described my version of red teaming as “sitting around thinking,” which, of course, doesn’t make money!)
The misunderstanding just might be my fault. I realized today that I need a better method of describing how successful red teaming addresses the whole system even if the red team ultimately only “attacks” a portion of it. I’ve tried before (here and here), but until I get it right, I’m going to keep trying.
The gist of the systems-oriented, model-based red teaming that I preach and practice is this:
- First, the red team lead should work with the client to model and understand the whole system. This includes the people (the stakeholders and their concerns) as well as the relevant processes and technologies; in other words, “the system” in the sense that Peter Checkland, for example, defines a system.
- Second, they should work together to determine the scope of the red teaming effort, explicitly discussing and addressing what’s in and what’s out (including the associated rules of engagement). If necessary, they should phase the effort to address portions of the larger system in parallel or in sequence.
- Only then should they “unleash the red team.” (I’ve skipped a couple of steps and concepts here, but I’m focusing on “the system” for now.) Yes, you might call this “sitting around and thinking,” but in many ways it’s just as important as, if not more important than, the actual red teaming.
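To make the first two steps concrete, here is a minimal sketch of what the pre-engagement model might capture before anyone is unleashed. The class names, fields, and the example systems are all hypothetical illustrations of the idea, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class SystemModel:
    """Step one: the whole system, in Checkland's broad sense."""
    stakeholders: dict[str, str]   # stakeholder -> primary concern
    processes: list[str]
    technologies: list[str]
    linked_systems: list[str]      # interfaces to adjacent systems

@dataclass
class EngagementScope:
    """Step two: what's in, what's out, and the rules of engagement."""
    in_scope: list[str]
    out_of_scope: list[str]
    rules_of_engagement: list[str]
    phases: list[list[str]] = field(default_factory=list)

# Illustrative example only.
model = SystemModel(
    stakeholders={"CISO": "regulatory exposure", "Ops": "uptime"},
    processes=["payment clearing"],
    technologies=["payment API", "settlement batch jobs"],
    linked_systems=["partner bank gateway"],
)

scope = EngagementScope(
    in_scope=["payment API"],
    out_of_scope=["partner bank gateway"],
    rules_of_engagement=["no denial of service", "test accounts only"],
    phases=[["payment API"], ["settlement batch jobs"]],
)

# The red team attacks only in-scope elements, but the full model,
# including what was deliberately excluded, travels with the findings.
assert set(scope.in_scope) <= set(model.technologies)
```

The point of the sketch is the ordering: the scope object is derived from the model, never written first.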
Sound easy? It’s not, which is why many pentesters and red teamers jump right to step three. Let me explain the problem with some pictures.
The easy answer, of course, is to simply address the whole system, as shown in the figure below.
Why doesn’t every red team do this? The main reason is that it usually takes more money and time. These are valid real-world constraints that only an ivory-tower red teamer can afford to ignore. As a result, it often makes good sense to focus on a portion of the system—but only after understanding what that system is, what it does, and what it links to.
The trouble begins when the client and the red team lead aim the red team at a portion of the system without first understanding the whole system. I illustrate this issue in the second figure:
I suggest that this form of blindered red teaming is more common than we care to admit. I’ve even seen cases where the client aims the red team at only a portion of the system but then asserts confidently that the lessons learned apply to the whole system, all the while never understanding the error.
Even the systems-aware red team lead and client, however, invariably face a seemingly intractable problem: where to draw the line. Modeling systems has much to do with modeling interfaces, and the systems-aware red teamer understands that nearly every system links to at least one other system, which in turn links to at least one other system, and so on. Only half tongue-in-cheek, we might argue that much like Kevin Bacon is linked to everyone else, any system is only six degrees removed from any other system. I express this challenge in the final figure.
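The six-degrees problem above can be made tangible by treating systems as nodes and interfaces as edges, then asking what lies within a given number of hops of the target. The graph and system names below are invented for illustration; the technique is an ordinary breadth-first search with a depth cutoff.

```python
from collections import deque

# Hypothetical interface map: system -> systems it links to.
links = {
    "target": ["vendor portal", "HR platform"],
    "vendor portal": ["payment processor"],
    "HR platform": ["identity provider"],
    "payment processor": ["partner bank"],
    "identity provider": [],
    "partner bank": [],
}

def systems_within(start: str, hops: int) -> set[str]:
    """Breadth-first search out to a fixed number of hops."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # boundary reached; don't expand further
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

print(systems_within("target", 2))
```

With each added hop the candidate scope grows, which is exactly the difficulty: drawing the line means choosing the hop count and the edges deliberately, not by default.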
In my experience, learning how and where to draw the line is at the heart of superior red teaming. The best red teamers strive to understand systems, interfaces, and feedback loops in a deep sense using a multilayered toolkit. Only then do they draw the boundary of the engagement and “unleash the red team.” And even if their red teams never target the whole system, they explicitly account for this fact in their analysis, findings, and recommendations to the client.
Let me add one more twist to this already difficult challenge: most adversaries don’t face the same time and money constraints the client faces. Yes, adversaries must think about time and money, but they usually don’t own the systems they target, so they don’t have to limit their focus to a portion of the client’s system. They’re usually far freer to extend their vision across systems, to find seams between systems, and to attack a given system through an upstream system. This yields what I’ve often called the Catch-22 of red teaming: somebody pays you to do it, but the scope of the client’s interest, time, and money is typically, and understandably, more limited than that of the adversaries they face.
I doubt I’ll ever speak again with the pentester from yesterday, but if I do, I’ll point him to this post and ask him whether it makes sense. Regardless, I’m going to keep at this until more red teamers and pentesters than not spend a bit more time “sitting around thinking.”