Bringing the Dartboard Closer: Increasing Accuracy in Qualitative Risk Assessments

Ethan Altmann | March 12, 2025

Why We Need a New Risk Assessment Methodology 

Over the last two years or so, I’ve had the pleasure of speaking with countless cybersecurity risk management practitioners, week after week, predominantly in mid-market to enterprise-size organizations with mature and robust GRC programs. Some are more experienced, some less; some are more ambitious with their plans, some less. One overarching conclusion, however, rings true: risk management is the Achilles’ heel of most organizations.

Rather than keeping their eyes on the prize (hint: risk management), practitioners are forced by “market conditions” (i.e. the need to pass audits) to get so deep into the weeds that they lose track of why they were implementing controls in the first place.

Risk assessments end up being performed as if they are a control activity! Come up with some risks (or take some from a library), throw some darts at the impact-likelihood dartboard, define a few distant treatment plans for known issues, and just like that, the “risk control” is implemented…and the audit-centric equilibrium is back in balance. It is possible that risks are mitigated with this approach, but the process happens blindly, almost subconsciously.

Following those many conversations with GRC practitioners, it became clear that we all share the feeling that risk management deserves its rightful spot back in the limelight. However, the current standard practice appears either too simplistic (the aforementioned impact-likelihood dartboard) or too complex, bordering on rocket science (the FAIR methodology). I decided to find a middle ground that could serve the wide community of overworked, dedicated GRC practitioners.

The solution presented below rests on the idea that qualitative risk assessments are still the only realistic approach for the majority of organizations. However, if we break down the qualification into smaller “chunks” and then more accurately analyze each chunk, then put it all back together again — even if we are still talking in “impact X likelihood” terms — the results will be much more accurate and indicative of actual risk levels.

The New Risk Assessment Methodology

Introduction to Components

To understand the methodology and its different components, let’s use the example risk of a “data breach due to ineffective authentication mechanisms.” Before we dive into the numbers, let’s make sure we share a common language when it comes to the terms I’ll be using, starting with the building blocks (or chunks, as they were referred to above) of the methodology.

  1. Risk. For this methodology, we’ll define a risk as an undesirable outcome or event. For this example, we have chosen a “data breach due to ineffective user authentication mechanisms.” Different organizations will assess risks at varying levels of granularity. “Data breach” alone is, of course, a risk (regardless of the process deficiency that caused it). “Data breach due to ineffective Okta password enforcement configurations” is also a risk, albeit a highly specific one. This methodology works no matter what level of granularity you choose.

    Example:
    “Data breach due to ineffective authentication mechanisms.”
  2. Policy. Policies are written documentation that defines the organization's high-level approach to reducing known risks. Policy dictates control objectives and even control implementation methodologies (which can, in turn, be tested).

    Example:
    IAM policy that defines the two controls that should be implemented.
  3. Control. Controls reduce the defined risk, and more can always be added; you can also continue to improve implementations to reduce residual risk to levels within your organization’s risk appetite. The “M(x)” (found in the top right corner of each control in the diagram) refers to the maturity of the control’s implementation, where x is an integer from 1 to 5 on a traditional capability maturity model.

    Example:
  • Secure password enforcement — passwords must meet complexity parameters as defined by policy, which must be technologically enforced. At present, the maturity of this implementation is a 4 out of 5, which means that the control is “managed,” but metrics are yet to be defined and work plans are yet to be implemented to continuously improve the implementation.
  • MFA enforcement — authentication to sensitive systems must enforce multi-factor authentication for all users. At present, the control maturity is also a 4 out of 5.

    For this scenario, we assume two data sources: Okta and AWS. Let’s say the organization has nearly all systems authenticated via Okta, but AWS is still authenticated directly. In this case, to validate that the two controls (passwords and MFA) are both implemented effectively, data must be collected from both Okta and AWS.  
  4. Tests. Tests validate the collected data against organizational policy, ensuring that what is written in policy is implemented in technology. Performing these tests tells you whether the control is operating effectively and the risk is being reduced to the extent possible.

    Example: Testing the data collected from Okta and AWS against organizational policy. E.g., if organizational policy dictates that all passwords must be at least 12 characters and require a combination of upper/lowercase letters, symbols, and numbers, then the “test” will validate that the technology enforces this.

Definitions

Now that we understand the different components of the methodology, it’s time to turn those building blocks into a process you can use to perform better qualitative risk assessments for your organization. To do so, we’ll need a few more definitions.

Methodology-specific terms:

  • Inherent risk analysis: Evaluation of a risk before applying any mitigating controls.
  • Residual risk analysis: Evaluation of a risk considering mitigating controls and their effectiveness.
  • Mitigating control: An action to reduce the likelihood or impact of a risk.
  • Control maturity: A 1-5 rating based on control effectiveness.
  • Theoretical impact weight: Extent (0-100%) to which a mitigating control reduces inherent impact.
  • Theoretical likelihood weight: Extent (0-100%) to which a mitigating control reduces inherent likelihood.

Maturity levels:

  • Initial (1): Control applicable, but policy/process undefined.
  • Repeatable (2): Loosely defined policy, inconsistent process.
  • Defined (3): Formal policy; process followed and audited.
  • Managed (4): Automated monitoring and reaction to exceptions.
  • Optimizing (5): Continuous improvement with automation and metrics.
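One way to represent these definitions in code is a small data structure per mitigating control. This is a sketch with names of my own choosing, not part of the methodology itself; the only assumption it encodes is that a control's efficacy is its maturity as a fraction of the maximum (maturity / 5):

```python
from dataclasses import dataclass

@dataclass
class MitigatingControl:
    """A mitigating control, per the definitions above (illustrative names)."""
    name: str
    maturity: int             # 1-5, per the capability maturity model
    impact_weight: float      # 0.0-1.0: theoretical impact weight
    likelihood_weight: float  # 0.0-1.0: theoretical likelihood weight

    @property
    def efficacy(self) -> float:
        """Fraction of the theoretical weight the control actually delivers,
        assumed here to be maturity / 5."""
        return self.maturity / 5

# The two controls from the running example.
controls = [
    MitigatingControl("Secure password enforcement", maturity=4,
                      impact_weight=0.0, likelihood_weight=0.3),
    MitigatingControl("MFA enforcement", maturity=4,
                      impact_weight=0.0, likelihood_weight=0.5),
]
print(controls[0].efficacy)  # 0.8
```

Keeping weights and maturity as separate fields mirrors the segregation-of-duties idea discussed later: control owners maintain maturity, risk practitioners maintain weights.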

The New Methodology

New Impact = Inherent Impact − (Inherent Impact − 1) × Σ (impact weight × control efficacy)

New Likelihood = Inherent Likelihood − (Inherent Likelihood − 1) × Σ (likelihood weight × control efficacy)

Here, a control’s efficacy is its maturity divided by 5, and the “− 1” terms reflect that on a 1-5 scale a level can be reduced no lower than 1.

Let’s continue our example from above and apply the formula assuming a standard 5x5 impact x likelihood matrix. 

The following assessment is performed:

Risk: Data breach due to ineffective authentication mechanisms.

Inherent risk analysis: 

  • Impact: 5
    • The data stored in systems with ineffective authentication mechanisms is highly sensitive customer data. A data breach will have catastrophic consequences to the organization.
  • Likelihood: 4
    • Ineffective authentication mechanisms are likely to be challenged by malicious actors by virtue of it being public knowledge that the organization stores sensitive customer data. 

5 (Impact) x 4 (Likelihood) = Inherent risk score: 20

Application of mitigating controls:

  • Secure password enforcement – maturity level 4
  • MFA enforcement – maturity level 4

Definition of control weighting:

  • Secure password enforcement
    • Reduces impact by 0%
    • Reduces likelihood by 30%
  • MFA enforcement
    • Reduces impact by 0%
    • Reduces likelihood by 50%

Note: In this scenario, we assume that neither of the applied controls affects the impact of the risk. If the risk event were to occur, the impact would still be catastrophic to the organization. The likelihood, however, would be reduced by a total of 80%. This means the current control package does not offer complete likelihood reduction, which is a common scenario.

Residual risk analysis using the new methodology:

  • Impact: 5
    • Unaffected by controls.
  • Likelihood: 2.08
    • In theory, the likelihood can be reduced from 4 down to 1, a range of 3 levels. Because the combined theoretical likelihood weights total only 80%, the best-case reduction is 0.8 × 3 = 2.4, i.e. down to 1.6. However, the maturity of each control’s implementation is a 4 out of 5, so each control delivers only 80% of its theoretical likelihood weight. The actual reduction is therefore 2.4 × 0.8 = 1.92, leaving a residual likelihood of 4 − 1.92 = 2.08.

Residual risk score: 10.4 (5 × 2.08)
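The arithmetic above can be sketched as a short Python function. It assumes, as stated earlier, that efficacy = maturity / 5, that reduction applies to the range above the scale floor of 1, and that per-control effective weights are summed:

```python
def residual_level(inherent: float,
                   weights_and_maturity: list[tuple[float, int]]) -> float:
    """Reduce an inherent level (1-5) by each control's effective weight.

    Each tuple is (theoretical weight 0-1, maturity 1-5); efficacy is
    assumed to be maturity / 5, and the reduction applies to the range
    above the scale floor of 1.
    """
    effective = sum(w * (m / 5) for w, m in weights_and_maturity)
    return inherent - (inherent - 1) * min(effective, 1.0)

likelihood_controls = [(0.3, 4), (0.5, 4)]  # password + MFA enforcement
impact_controls = [(0.0, 4), (0.0, 4)]      # neither control reduces impact

new_likelihood = residual_level(4, likelihood_controls)  # 4 - 3 * 0.64
new_impact = residual_level(5, impact_controls)          # unchanged: 5
print(round(new_impact, 2), round(new_likelihood, 2))                # 5.0 2.08
print("residual score:", round(new_impact * new_likelihood, 2))      # 10.4
```

The `min(effective, 1.0)` cap simply prevents an over-weighted control package from pushing a level below the scale floor.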

Minimizing Risk With More Accurate Darts

Why is this methodology effective? While we may still be throwing darts at the board, now, instead of throwing one dart at 10 meters, we are throwing 10 darts at 1 meter. Overall, our accuracy increases significantly. Furthermore, there is an opportunity here for larger organizations to introduce segregation of duties between control owners, who will define maturity levels, and risk practitioners, who will define weights and perform the assessments. 

It is my hope that this approach will lead to greater investment in effective risk management, restore GRC equilibrium, and return risk assessments to their rightful place as the basis of an impactful GRC program rather than an audit-driven afterthought.
