
Measure twice, cut once. I’m sure this timeless mantra has not only been used, but proven frustratingly true on multiple occasions for everyone reading this column. In today’s data-rich world, it’s also advice that is more broadly applicable in business than ever before. In security, however, the equation is much more complex. In many ways, security measurement needs to follow an organizational “order of operations,” where certain measurements, wherever they reside in the business, must occur before subsequent calculations. Failure to do so will yield insufficient or misleading results. And while in construction a mismeasured cut is immediately apparent, a misalignment in security may not present itself for weeks, months or even years, and the evidence may be a catastrophic failure.
When organizations evaluate offensive security testing specifically, many look to jump immediately into penetration tests or engage a Red Team. What they fail to ask is: are we built correctly to withstand not just the individual test, but the tests of time and change? Can we operationalize the results in a way that effects demonstrable, positive and continuous improvement?
This column will provide the “mathematical” principles, and even some suggested “units” of assessment, to guide organizations in gauging the readiness and maturity of their security programs to make testing of all types as impactful as possible.
The “Calculus” of Maturity
As in life, maturity is not an inherent trait. It is earned through experience, and the ability to process, analyze and apply the lessons learned. Organizationally, as you progress in experience and (hopefully) maturity, the tasks become more complex and the lessons come with greater gravity and consequence.
We’ll dive more specifically into what each level comprises, but broadly, we see the stages of maturity as follows:
- Foundational – this includes threat modeling and security reviews; attack surface visibility and management; vulnerability management and application security testing
- Advanced – this level includes network, cloud and application penetration testing
- Adversary Emulation – a demonstrably mature program is then ready for activities including Red Teaming, Purple Teaming and Tabletop Exercises
So, when is an organization mature enough to advance to the next level? To put it simply, the assessment process analyzes the state of a security program from two vantage points – the activities being done, and who is doing them.
In terms of what is being done, the assessment examines the activities performed at the current level through two lenses: coverage and cadence. For example, how much visibility does the organization have into the breadth and depth of its attack surface and applications? And how regularly is it revisiting testing results to account for changes in infrastructure, code or the threat environment?
Behind those activities is an assessment of the people doing the testing, and the processes and technologies in place to support them. Are there adequate resources to cover in-depth and regular testing? Are there clear and enforceable processes to ensure it’s happening? Are there frameworks, systems and technology in place to document and track progress, including external models such as the OWASP Software Assurance Maturity Model (SAMM)? Essentially, the assessment looks at the ability to successfully conduct testing, the ability to manage a repeatable process for benchmarking, and the infrastructure and resources in place to sustain the program and produce clear measurements and metrics on a continuous basis.
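To make those two vantage points a bit more tangible, here is a minimal Python sketch of one way a maturity rubric could be scored. The dimensions, the 0–3 scale and the gating thresholds are all illustrative assumptions of this sketch, not SAMM’s actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str   # e.g. "coverage", "cadence", "people", "process"
    score: int  # 0 (absent) to 3 (optimized), assigned by the assessor

LEVELS = ["Foundational", "Advanced", "Adversary Emulation"]

def readiness(dimensions: list) -> str:
    """Gate advancement on the weakest dimension: a program is only as
    mature as its least-developed capability (an assumption of this sketch)."""
    floor = min(d.score for d in dimensions)
    if floor >= 3:
        return LEVELS[2]
    if floor >= 2:
        return LEVELS[1]
    return LEVELS[0]

program = [
    Dimension("coverage", 2),  # most of the attack surface is inventoried
    Dimension("cadence", 2),   # testing recurs on a fixed schedule
    Dimension("people", 1),    # staffing is still ad hoc
    Dimension("process", 2),   # results are tracked and benchmarked
]
print(readiness(program))  # -> Foundational: staffing holds the program back
```

The point of the weakest-link rule is that strong coverage cannot compensate for ad hoc staffing; the program advances only when every dimension clears the bar.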
Of course, progression also comes with the aforementioned consequences. A strong security program will be increasingly complex, which will sometimes require substantial organizational and cultural changes around security practices and, inevitably, higher cost. So let’s take a quick tour of the actual activities by which maturity is assessed, and against which growth can be mapped.
“Addition by Addition”
Going briefly back to our building analogy, an enduring structure requires a strong foundation. This stage is about understanding what you know, finding out what you don’t, and prioritizing the order in which all of it will be addressed. The process endeavors to identify programmatic deficiencies, then sets out to either advance existing capabilities or construct new ones around threat assessment, attack surface management and vulnerability management.
The first category here is Threat Modeling and Architecture Security review. This involves looking both externally and internally: first, mapping the top threats and threat actors most likely to find your organization an attractive target; second, identifying the top “crown jewel” systems they would target for compromise.
Remaining at the enterprise level, the next step is to establish an internal framework and underlying program that graphs threats and risks, and provides a repeatable mechanism to track and refresh that understanding over time. This includes graphs of all enterprise systems and their associated connections and dependencies, as well as attack graphs that represent all the potential paths through your architecture that would lead an attacker to their prize. The third element is an architectural security review that discerns from the graphs which paths are most possible and probable. Installing a program that guides and tracks these three activities will also pay dividends down the line by better informing, and increasing the efficacy of, adversarial simulations.
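To illustrate the attack-graph concept, here is a minimal Python sketch assuming a toy topology; the system names and connections are hypothetical, and a real program would use dedicated graph tooling rather than a hand-built dictionary:

```python
from typing import Iterator

# Nodes are systems; edges are connections or trust relationships an
# attacker could traverse. This topology is entirely hypothetical.
GRAPH = {
    "internet":     ["vpn-gateway", "web-app"],
    "vpn-gateway":  ["corp-network"],
    "web-app":      ["app-server"],
    "corp-network": ["app-server", "hr-system"],
    "app-server":   ["customer-db"],  # the crown jewel sits behind this
    "hr-system":    [],
    "customer-db":  [],
}

def attack_paths(src: str, target: str, path=None) -> Iterator[list]:
    """Depth-first enumeration of every simple path from src to target."""
    path = (path or []) + [src]
    if src == target:
        yield path
        return
    for nxt in GRAPH.get(src, []):
        if nxt not in path:  # avoid revisiting nodes (no cycles)
            yield from attack_paths(nxt, target, path)

for p in attack_paths("internet", "customer-db"):
    print(" -> ".join(p))
# internet -> vpn-gateway -> corp-network -> app-server -> customer-db
# internet -> web-app -> app-server -> customer-db
```

The architectural review then asks of each enumerated path: how plausible is it, and which single edge, if removed, would sever the most routes to the prize?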
We all know the devil resides in the details. At this stage we begin to understand the actual vulnerability of individual assets and systems. The first step is a comprehensive inventory of the assets that exist across the organization, including internal endpoints as well as external perimeter and cloud systems. As you’d likely expect, the next step is vulnerability scanning of the full asset inventory just established. By the end of this process, an organization has created a granular understanding of its entire attack surface, ferreted out and addressed the known vulnerabilities, and established that critical foundation for ongoing awareness and protection. On to the next challenge.
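As a simple illustration of how inventory and scanning feed each other, here is a hedged Python sketch that reconciles a hypothetical asset inventory against recent scan dates to surface coverage gaps; the asset names, dates and 30-day cadence policy are all assumptions:

```python
from datetime import date, timedelta

# Hypothetical data: the full asset inventory versus the most recent
# successful scan date for each asset.
inventory = {"laptop-042", "web-app-prod", "s3-public-bucket", "vpn-gateway"}
last_scanned = {
    "laptop-042":   date(2024, 5, 1),
    "web-app-prod": date(2024, 6, 20),
    "vpn-gateway":  date(2024, 6, 28),
}

MAX_AGE = timedelta(days=30)  # assumed scanning-cadence policy
today = date(2024, 7, 1)

# Assets that appear in the inventory but have no scan record at all.
never_scanned = inventory - last_scanned.keys()
# Assets whose last scan has aged past the policy window.
stale = {a for a, d in last_scanned.items() if today - d > MAX_AGE}

print("never scanned:", never_scanned)  # {'s3-public-bucket'}
print("stale scans:  ", stale)          # {'laptop-042'}
```

The cloud bucket that never entered the scanning pipeline is exactly the kind of gap this foundational stage exists to catch.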
“Divide” and Conquer
Having largely addressed the known, it’s time to address the unknown, or at least what you are unaware of. This step goes beyond scanning and moves to build out, augment and/or sharpen advanced operational mechanisms for penetration testing of applications, networks and clouds. Much like an attacker, it follows an outside-in approach.
The first surfaces tested are the perimeter of your traditional network, as well as your cloud infrastructure. Then it’s time to dive beneath the surface. This includes testing internal network assets and systems, and then individual testing of endpoint images, and finally the “crown jewel” targets themselves.
While this may seem like a lot, much like an iceberg, it is only the surface. The rest of the berg lies not only in the applications that comprise your internal ecosystem, but in the code that makes up their DNA. As with other steps, this begins with a full inventory of the applications, followed by a prioritization based on risk of compromise and modeling of their potential impact to the business.
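One common way to do that prioritization, sketched below in Python with hypothetical applications and ratings, is a simple likelihood-times-impact score that orders the testing queue:

```python
# Illustrative risk-based prioritization of an application inventory.
# Each entry: (name, likelihood of compromise 1-5, business impact 1-5).
apps = [
    ("customer-portal", 5, 5),  # internet-facing, handles sensitive data
    ("internal-wiki",   3, 2),
    ("billing-engine",  2, 5),  # hard to reach, but crown-jewel impact
    ("marketing-site",  4, 1),
]

# Sort descending by risk = likelihood x impact; test top of the list first.
for name, likelihood, impact in sorted(apps, key=lambda a: a[1] * a[2], reverse=True):
    print(f"{name:16} risk={likelihood * impact}")
# customer-portal  risk=25
# billing-engine   risk=10
# internal-wiki    risk=6
# marketing-site   risk=4
```

A real program would draw the likelihood and impact ratings from the threat models and business-impact analysis built in the foundational stage, rather than assigning them by hand.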
Next comes what amounts to a functional deconstruction of applications to test them from multiple vantage points. Static Application Security Testing (SAST) looks for vulnerabilities in application code prior to deployment. Dynamic Application Security Testing (DAST) looks for vulnerabilities in and threats to applications already in production. Finally, Software Composition Analysis (SCA) examines the third-party libraries used in the development of applications. All three are fundamental parts of a well-run Application Security function. As stated earlier, cadence is one of the main assessment criteria, so all applications should undergo regular review and testing, particularly to understand the impact of application upgrades or changes in environment or infrastructure. We’ve covered a lot of testing; now it’s time to see if you’re ready to graduate.
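Since cadence matters as much as coverage, here is an illustrative Python sketch of one way to enforce it: a maximum age per test type, with any application change forcing a fresh round. The policy windows, application name and dates are assumptions of the sketch:

```python
from datetime import date, timedelta

# Assumed maximum age per test type before a re-run is due.
POLICY = {
    "SAST": timedelta(days=90),
    "DAST": timedelta(days=90),
    "SCA":  timedelta(days=30),  # third-party dependencies drift fastest
}

# Hypothetical record of the last passing run per (application, test type).
last_run = {
    ("customer-portal", "SAST"): date(2024, 6, 1),
    ("customer-portal", "DAST"): date(2024, 3, 1),
    ("customer-portal", "SCA"):  date(2024, 6, 25),
}

def due_tests(app: str, today: date, changed: bool = False) -> list:
    """A test is due if it aged past policy, never ran, or the app changed."""
    due = []
    for test, max_age in POLICY.items():
        ran = last_run.get((app, test))
        if changed or ran is None or today - ran > max_age:
            due.append(test)
    return due

print(due_tests("customer-portal", date(2024, 7, 1)))        # ['DAST']
print(due_tests("customer-portal", date(2024, 7, 1), True))  # all three
```

The `changed` flag captures the column’s point about upgrades: a new release or a shift in infrastructure invalidates prior results regardless of how recently they were produced.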
Force “Multiplier”
Many of us joke about the advanced math classes we took, and wonder when they’d actually come into play in daily life. Well, this highest stage of maturity can be looked at like applied mathematics. An organization at this stage has grown its program to the point of having a posture and discipline that can consume, and drive continuous improvement from, full adversarial emulation and testing at scale. In the lower stages you’re solving problems. At this stage you’re building, testing and verifying – resilience.
This stage is about testing your organizational mettle – and the networks, systems, applications, and code you’ve secured – in the face of simulated, but realistic attacks. The Tactics, Techniques and Procedures (TTPs) of actual attackers are used to find weaknesses, and determine if trained offensive security professionals can reach your crown jewels. It also tests the improvements you’ve made and your ability to detect and stop an attacker from causing damage, as well as your organizational ability to respond and recover if an attack is successful.
From an offensive and defensive perspective, Red Teams and Purple Teams are used to test technical exposure and response. A Red Team will map a threat intelligence profile to specific assets and environmental dynamics, and test them to determine which assets are vulnerable to the TTPs within the profile. Purple Teams conduct “live fire” walkthroughs that allow Red Teams and Blue Teams (internal defensive security teams) to openly play out attack scenarios and determine the extent to which the defense can rapidly detect and negate offensive attempts. Finally, from a resilience perspective, Tabletop simulations determine the strength and adaptability of organizational stakeholders (internal and external) and business processes with regard to incident response.
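As one illustration of how that mapping can be tracked, here is a hedged Python sketch comparing the TTPs in a threat-intel profile (the MITRE ATT&CK technique IDs shown are real) against what the Blue Team detected during an exercise; the profile contents and detection results are hypothetical:

```python
# TTPs drawn from a hypothetical threat-intel profile, keyed by their
# real MITRE ATT&CK technique IDs.
profile_ttps = {
    "T1190": "Exploit Public-Facing Application",
    "T1566": "Phishing",
    "T1078": "Valid Accounts",
    "T1486": "Data Encrypted for Impact",
}

# Techniques the Blue Team caught live during the purple-team exercise.
detected = {"T1566", "T1486"}

# Print a simple coverage matrix: every undetected TTP is a gap to close.
for ttp_id, name in sorted(profile_ttps.items()):
    status = "DETECTED" if ttp_id in detected else "GAP"
    print(f"{ttp_id}  {status:8}  {name}")
# T1078  GAP       Valid Accounts
# T1190  GAP       Exploit Public-Facing Application
# T1486  DETECTED  Data Encrypted for Impact
# T1566  DETECTED  Phishing
```

Re-running the same matrix after remediation is what turns a one-off exercise into the continuous, measurable improvement this stage demands.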