
Thriving organizations allocate resources effectively. With seemingly infinite cybersecurity threats and finite resources, everyone needs to know the size of each threat to set priorities and decide where to invest for maximum ROI. Elastic takes a quantified approach to cybersecurity risk management, using FAIR to break threat scenarios into (A) likelihood and (B) losses and calculate risk per year, also known as annualized loss expectancy, or in FAIR terms, simply “risk”.
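In arithmetic terms, FAIR's annualized risk is just loss event frequency multiplied by loss magnitude. A minimal sketch, with purely illustrative numbers (not Elastic's figures):

```python
# FAIR's core calculation: annualized loss expectancy ("risk")
# = loss event frequency (events/year) x loss magnitude ($/event).
# The numbers below are illustrative placeholders.

loss_event_frequency = 0.25  # expected loss events per year
loss_magnitude = 400_000     # expected loss per event, in dollars

annualized_risk = loss_event_frequency * loss_magnitude
print(f"Annualized risk: ${annualized_risk:,.0f}")  # Annualized risk: $100,000
```

In practice each input is an estimated range rather than a point value, but the structure of the calculation is the same.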
We know what an attack scenario’s annualized risk is, but what do we do about it? Renowned mathematician Richard Hamming advises, “The purpose of computing is insight, not numbers.” So, we must deconstruct this overall risk into rich, actionable analysis.
Elastic’s Risk Management team does this by mapping the attack chain(s) for each risk scenario at a high level. This exercise accomplishes two goals:
- Factoring multifaceted, opaque FAIR inputs like Loss Event Frequency (LEF) into separate, visible attack routes highlights our weaknesses so we know what to fix.
- Building documentation improves our knowledge and accuracy with every quarterly risk assessment.
Here’s an example of applying this approach to the Build Chain Compromise scenario (sensitive information withheld).
1. Lay out the infrastructure
We started with SLSA’s supply chain threats diagram and added further specificity in a diagramming application. Each piece of infrastructure has an accompanying malicious action taken against it. If multiple malicious actions are possible, break them into separate pieces so each can have a discrete probability (step 2).

In the next steps, we’ll see that you can choose your level of detail here. You might try separating distinct options only to find they are alike enough to be rolled together, but when unsure, the exercise of factoring and enumerating options is informative.
Add a layer that just contains notes so that you capture details as you go. If you do roll options up into one step, note which steps are included so new/overlooked information is easily recognized during future analyses.
2. Add statistical probabilities for each discrete step
Assign a likelihood from 0 (impossible) to 1 (inevitable) to each step.
For entry points or attack vectors, it may be helpful to think of likelihood as “successful attack(s) per year(s),” and for lateral or escalating movements to think of likelihood as “chance of success given they’ve gotten this far.” But either mental framing ultimately produces the same results.

3. Map the attack routes and calculate LEF
Recall that the likelihood of a sequence of events is the product of the probabilities of its steps, so multiply the steps along an attack route to get the overall likelihood for that path. The likelihood of any one of several events happening is the sum of their individual likelihoods, so add the path likelihoods to get the overall Loss Event Frequency for this scenario.
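The arithmetic above can be sketched directly. The path names and step probabilities here are hypothetical stand-ins, not our actual build-chain figures:

```python
# Each attack path is a sequence of step probabilities.
# Path likelihood = product of its steps; scenario LEF = sum of path likelihoods.
from math import prod

paths = {
    "compromise source repo": [0.5, 0.1, 0.2],
    "inject at build time":   [0.3, 0.05],
    "tamper with artifacts":  [0.4, 0.1, 0.1],
}

path_likelihoods = {name: prod(steps) for name, steps in paths.items()}
lef = sum(path_likelihoods.values())

# Sort so the riskiest paths surface first.
for name, p in sorted(path_likelihoods.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.4f}")
print(f"Scenario LEF: {lef:.4f}")
```

Note that the first probability in each path can be read as the “attacks per year” entry-point framing, and the rest as conditional “chance of success given they’ve gotten this far” — the product is the same either way.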

Notice the single, multifaceted, opaque Loss Event Frequency is now five separate attack paths with discrete steps and rich detail. It’s trivial to see which paths and steps are highest risk and how changes from mitigation projects will impact LEF and the overall annual risk.
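To estimate a mitigation project’s impact, lower the probability of the step it hardens and recompute. A sketch with hypothetical numbers:

```python
from math import prod

# Hypothetical path: entry, escalation, and exfiltration step probabilities.
baseline = [0.3, 0.2, 0.5]
lef_before = prod(baseline)   # ~0.03

# Suppose a mitigation halves the escalation step's chance of success.
mitigated = [0.3, 0.1, 0.5]
lef_after = prod(mitigated)   # ~0.015

reduction = 1 - lef_after / lef_before
print(f"LEF reduced by {reduction:.0%}")  # LEF reduced by 50%
```

Because annualized risk scales linearly with LEF, a 50% drop in this path’s likelihood flows straight through to the risk contributed by that path.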
Delivering effective cybersecurity risk management
Quantifying risk for a scenario is an important first step, but by itself it doesn’t provide much direction on how to most effectively reduce the risk. Additionally, it’s hard to accurately measure loss event frequency without transparency into its constituent factors.
Rather than exhaustively diagramming every piece of the technology stack, we can capture detailed analysis that:
- Highlights weaknesses and areas to improve, separating an overall risk into distinct, actionable insights.
- Improves the accuracy of our risk quantification, documents our assumptions, and produces a visual narrative for others to understand and improve.
Ultimately, this delivers a better cybersecurity risk management service.
Learn more about Elastic Security, or start a free 14-day trial of Elastic Cloud.
Want to read more about how we use FAIR? Check out this blog on assessing generative AI: A FAIR perspective on generative AI risks and frameworks.
The release and timing of any features or functionality described in this post remain at Elastic’s sole discretion. Any features or functionality not currently available may not be delivered on time or at all.