“Red Team Journal still serves as the best open-source repository for helpful hints and emerging practices in the field.”
— Micah Zenko, Red Team (2015)
Beyond the Never-ending 'Hunt and Peck'

Forget about red teaming, at least the kind we’ve been advocating for the past few years. Few enterprises outside of the mil/gov world seem to be interested in doing much beyond the typical pentest/redtest “hunt and peck.”

Let’s talk instead about whole-systems security: the practice of defending systems from threats by closing systems vulnerabilities. It’s a much bigger task than the typical pentest/redtest, and it promises a much larger payoff. What’s more, if all you do is hunt and peck, you’re going to be stuck at it for a very long time.

Let's compare pentesting/redtesting and whole-systems security analysis using a hospital as an example.

A typical pentest/redtest involves hiring a team of offensive security specialists to find vulnerabilities in the hospital's IT systems. The team most likely steps through the typical pentesting checklist—conducting recon, scanning, exploiting, escalating, covering tracks, maintaining access—you know the drill. The hospital might even allow a bit of social engineering and maybe even some physical security recon. If the hospital is smart, they hire a team that understands the specific legal and regulatory controls associated with health care as well as the vulnerabilities and risks associated with a wide variety of medical devices. In the end, they get a report that lists immediate vulnerabilities to be addressed.

Figure 1. The “hunt and peck” focus on vulnerabilities.

A whole-systems security analysis begins further upstream, within the temporal and dynamic context of the hospital-as-operational-system. This includes discussing the hospital's operations, the stakeholders involved, and their main concerns. It also includes a facilitated mapping of the hospital-as-operational-system's culture, communications, systems of interest, functions, requirements, and dependencies. Only then does the security team leader work with the hospital to define the assessment's scope and rules of engagement.

Figure 2. The whole-system security view: threats, vulnerabilities, system of interest factors, contextual factors, and uncertainty.

Depending on the nature of the assessment and the level of threat actor to be considered, the hospital and assessment team leader might draft a plan that includes a full operational red team. This red team would probably include members with cyber skills but also members with domain knowledge and expert skills in surveillance, social engineering, and physical attack modes. It might also include someone who is expert in perception, inducement, and "shaping," or the art of what the Soviets called "reflexive control"—similar to what we’ve called "con and hypercon." In other words, what is often ancillary in a pentesting engagement becomes central in an operational red team assessment.

But a whole-systems security assessment need not stop there. It might also involve a full-scope analytical phase. Here, a risk modeling team works with the operational red team to map all conceivable, relevant modes of attack, not just those capable of being tested and demonstrated by the operational red team. This allows the team as a whole to expand its field of view to include (1) attack vectors that might be risky to test or demonstrate or (2) vectors that might involve long-term adversary strategies and investments. This activity feeds a more complete assessment involving relative risk rankings of the threat vectors and the impact of possible mitigations, adversary by adversary. Key here is the fact that the team now has enough information to support higher-level decision tradeoffs.

Using the hospital example, the operational and analytical teams generate an overall map of possible attacks specific to the site and the services it offers. These include not just attacks against the hospital's IT systems but deliberate and contingent attacks involving all hospital systems and operations. Without providing specifics here, any reasonably informed security professional should be able to see how this opens many new possibilities and brings the assessment much closer to the real world.

All of this provides context in which to assess upstream and systemic issues. Importantly, the whole-systems security team can begin to pose additional questions regarding the relative risks identified during the operational and analytical phases. For example,

  • Why weren't the identified vulnerabilities found sooner?

  • Why do certain vulnerabilities recur?

  • Why are certain investment and development patterns evident?

  • How are critical security information and intelligence shared horizontally and vertically within the enterprise?

  • How do concerns, requirements, and operational pressures differ across the enterprise, and how do these affect security-related decision making?

  • How do culture and communications within the enterprise enable or limit this decision making?

  • How are these various patterns and behaviors likely to play out over time or during periods of unusually high pressure?

For the most part, pentests/redtests help enterprises identify immediate technical vulnerabilities, which—though important—typically represent only the final link in a long chain of (often nontechnical) upstream problems. Put differently, it's akin to treating symptoms rather than addressing the disease. Whole-system questions such as those listed above rarely get asked. As a result, we encounter the same symptoms again and again and again.

We believe the case for whole-systems security is straightforward, even obvious. Why aren't enterprises doing more of it?

Increasing specialization is a big part of the problem. We're operating almost exclusively in technical hunt-and-peck mode. What's more, very few security professionals are aware of what they don't know. We recently talked with one of the few true systems-oriented red teamers, who was interviewing for a senior red teaming position with a big firm. She spoke to the hiring staff at length about systems thinking, engineering, and analysis and was met with vague nods and an immediate follow-up question about her relevant certifications.

Another reason we believe whole-systems security is largely ignored is that it takes time and money. Who within the enterprise is willing to cut current activities to fund something that few other enterprises even consider? The answer, of course, is almost no one. Until we can share cases in which whole-systems security analysis has saved someone money, most enterprises will continue to hunt and peck. But that raises a harder question: it's difficult to show return on investment for even the most direct security purchases, let alone pentesting, let alone whole-systems security.

As long as the overall perceived cost of breaches and pentesting remains relatively manageable, don't expect much change. That, of course, offers a tremendous competitive advantage to the enterprise that adopts whole-systems security before anyone else. Any takers?

Want to learn more about whole-systems red teaming and security analyses? Consider taking our “Becoming Odysseus” course.

Everything Old Is New Again

‘Redtesting’: More of the Same