
Mathematical risk modelling: shamanism or cybernetics?

26.12.2024

Maxim Annenkov, Boris Zahir, Security Vision


Introduction


In an era of digitalisation and ever-increasing cyber threats, every company wants to understand how much it is investing in cybersecurity and what return that investment delivers. Cybersecurity is not free, and a disciplined approach is required to manage risk effectively and measure the return on security investment.


One key aspect of this approach is risk assessment. This is an important tool that helps to understand the relationship between the cost of defence measures and the reduction in threat exposure that can be achieved through their implementation. Risk assessment can be qualitative or quantitative, but the most important thing is to draw useful insights from the process in order to make well-informed decisions.


[Figure 1]


This approach, for example, is embedded in the Security Vision Risk Management product and allows for the assessment and informed selection of the most effective defence measures.


However, it is worth recognising that existing cyber risk assessment methods can be confusing. They don't always help decision makers see a company's real exposure to cyber risks, and they rarely provide a clear picture of which controls have the greatest impact on reducing cyber risk. Because these methods often fail to answer which resources should be focused on the most critical areas, company management may be sceptical of their effectiveness. Thus, the value of risk assessment results does not always outweigh the effort expended.


Expectations for a risk-based approach to cybersecurity


Typically, the following expectations are placed on the results of a risk assessment:

  • Systematisation and cost-effectiveness of protection measures: a risk-based approach assesses risks by likelihood and potential business impact, which helps to create remediation strategies and make informed resource-allocation decisions.

  • Cybersecurity culture: risk assessment helps to introduce a cybersecurity culture, which increases staff awareness of cyber risks and overall business engagement in cybersecurity.

  • Proactive risk mitigation: a risk-based approach reduces the likelihood and impact of cyber-attacks by identifying and mitigating potential risks before they occur.

  • Long-term savings: this approach helps make decisions about the most profitable investments by focusing on targeted and cost-effective security measures.


However, such expectations are not always met, as we end up with heat maps, traffic lights, or sets of scary million-dollar losses when using quantitative assessment.


[Figure 2]


Maps of this kind give a flat picture which can, of course, serve as a starting point for more detailed study and for predicting potential negative consequences. However, they can be quite misleading, owing to the subjective way the assessment scales are perceived.


Thus, in order to realise the above expectations, it is important to use risk modelling, which turns numerical indicators into manageable models that help in making informed decisions. It can help companies assess potential threats and vulnerabilities, predict the frequency of cyberattacks, their severity and the cost of necessary security investments.


What does risk modelling do?


When it comes to investing in cybersecurity, you want to understand what the impact of those investments will be. However, working to increase the level of security is always about dealing with uncertainty. This is where risk modelling comes in, allowing companies to better understand the impact of different attack scenarios and make more effective and efficient investments in cybersecurity measures.


By understanding the financial implications of different cyber risk scenarios, organisations can make smarter decisions about where to invest their resources to achieve maximum impact and long-term return on investment (ROI). ROI can be measured by multiplying the average cost of an incident by the total number of possible incidents that can be prevented with cybersecurity investments. This calculation quantifies the return on security investment.


It is also useful to consider other components in ROI calculations, such as regulatory compliance and indirect risk mitigation. This approach gives a more complete picture of why cybersecurity investments are justified. Ultimately, ROI for cybersecurity works in reverse: the more resources invested in detection and prevention technologies, the greater the reduction in the potential negative impact of incidents, and it is this avoided impact that constitutes the return.


There is also a formula for calculating ROI in cybersecurity known as ROSI, or Return on Security Investment, which expresses the monetary value of reducing information security risk. Ordinary ROI shows the expected return over a certain period of time, such as three or five years; in the ROSI calculation, the expected profit is replaced by the losses expected to be avoided over a year.


[Figure 3]

 

Of course, losses are not guaranteed, so the ‘income’ is expressed as an estimate of the yearly cost of what you are protecting (Annual Loss Expectancy) combined with an estimate of the effectiveness of your protection (Mitigation Factor). Whether an actual loss occurs or not, the solution still has to be paid for, so its cost must be subtracted from the ‘income’, just as in the ROI calculation. The result is then divided by the cost of the solution, yielding ROSI as a percentage that can be compared directly to ROI.
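As a rough illustration, the ROSI calculation described above can be sketched in code; all figures below are hypothetical:

```python
def rosi(ale, mitigation_factor, solution_cost):
    """Return on Security Investment as a fraction.

    ale: Annual Loss Expectancy - estimated yearly loss without the control.
    mitigation_factor: share of that loss the control is expected to prevent (0..1).
    solution_cost: yearly cost of the control.
    """
    avoided_loss = ale * mitigation_factor
    return (avoided_loss - solution_cost) / solution_cost

# Hypothetical numbers: 500k expected annual loss, the control prevents 70%
# of it and costs 120k per year.
print(f"ROSI = {rosi(500_000, 0.7, 120_000):.0%}")  # prints: ROSI = 192%
```

A positive ROSI means the control avoids more loss than it costs; comparing ROSI across candidate controls is what makes the figure useful.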


Risk modelling tools


What is known about risk modelling techniques? Ideally, of course, everything should stem from statistical data on the realisation of particular risks. But the more important a risk is to a company, the fewer times it has occurred - and more often than not, it has never occurred at all: failure of a segment of the corporate cloud, a halt in service provision for X days, an audit finding of non-compliance with regulatory requirements and subsequent financial penalties, a leak of user data, and so on. What, then, can the assessment be based on?


The key is to regularly monitor the current resilience status of your security system. For example, the results of a SIEM/UEBA stress test can show the percentage of attacks detected and the proportion of actual attacks among those detected, as well as the effectiveness of filters in blocking phishing emails. This data can be used to build a probabilistic risk model whose key element is the ratio between ‘real incidents’ and ‘false positives’. This ratio can serve as a basis for prediction, allowing analysts to assess how changes in traffic or attack characteristics might affect system security. It is particularly important for targeted risk assessments, where specific types of incidents are analysed: there it becomes possible to build a more accurate ‘risk picture’ based on a detailed understanding of the dynamics and proportions of different types of incidents. Provided, of course, that the trigger data is sufficiently representative (at least 17-20 alerts), changes in the relevance of threats to the company can then be predicted with a high degree of confidence.

In essence, the security testing data mentioned above is arithmetic over the quantitative values of key risk indicators (KRIs), which are already a recognised tool for monitoring risk levels. So, what can be done with this data? Let's work through an example.


First, based on the collected statistical samples (true incidents, false incidents, incidents from public sources), we calculate the probability of certain events, for example, missing a real incident. The actual sample looks like a sequence of zeros and ones: [0, 1, 0, 0, 0, 0, ..., 1, ...], where 0 is a missed incident (False Negative) and 1 is a detected incident (True Positive). Next, we calculate the probability of risk realisation from these inputs. Let's assume that with 30% probability our SOC misses the unpacking of a virus from a phishing email inside the network perimeter; with 20% probability we miss the execution of illegitimate code on servers processing requests to the company's customer database; with 25% probability we miss the creation of an account with admin rights over the database by an attacker; and with 5% probability an exfiltration of data from the database goes undetected. Thus we have a chain with initial access, two variants of escalation/persistence and one variant of impact. This is where Bayesian networks come to our aid: they are needed to calculate the probabilities of dependent events via Bayes' formula.
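The step of turning such a 0/1 detection log into a miss-probability estimate is simple enough to sketch directly (the sample below is made up for illustration):

```python
# 1 = incident detected (True Positive), 0 = incident missed (False Negative)
detections = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1]

# Empirical False Negative rate: share of real incidents the SOC missed
p_miss = detections.count(0) / len(detections)
p_detect = 1 - p_miss

print(f"miss probability ~ {p_miss:.2f}, detection rate ~ {p_detect:.2f}")
# prints: miss probability ~ 0.25, detection rate ~ 0.75
```

With around 20 alerts, as noted above, such an estimate starts to be representative enough to feed into the chain calculation.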

 

[Figure 4]

 

It allows us to calculate the probability of event A occurring given that event B has occurred. With this method we can calculate the probability of realising the risk of confidential data leakage via the two ways of realising the threat of direct access to the database holding this information (there is also interception via a notional user-registration form, or simply an insider leak).


A remark: here we take the probability of an attacker initiating a given event as one, because of the difficulty of obtaining accurate statistics of this kind. This yields an upper estimate of the probability, i.e. our conclusions carry a certain margin of safety.

[Figure 5]
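Under the miss probabilities assumed above (and attacker-initiation probabilities taken as one), the chain arithmetic can be sketched as follows; treating the two escalation routes as independent detection opportunities when combining them is an assumption made here for an upper estimate:

```python
# Probabilities that the SOC *misses* each step of the example chain
p_miss_initial = 0.30   # virus unpacked from a phishing email goes unnoticed
p_miss_exec    = 0.20   # illegitimate code execution on DB servers missed
p_miss_account = 0.25   # rogue admin account over the database missed
p_miss_exfil   = 0.05   # data exfiltration from the database missed

# Each escalation path considered separately
p_path_exec    = p_miss_initial * p_miss_exec    * p_miss_exfil
p_path_account = p_miss_initial * p_miss_account * p_miss_exfil

# Upper estimate if the attacker may try both escalation routes:
# the chain survives if at least one escalation step goes undetected
p_any_escalation = 1 - (1 - p_miss_exec) * (1 - p_miss_account)
p_chain_missed = p_miss_initial * p_any_escalation * p_miss_exfil

print(round(p_path_exec, 5), round(p_path_account, 5), round(p_chain_missed, 5))
```

Here the fully missed chain comes out at about 0.6% per observation period - small per path, but the combined estimate is roughly double either single path.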

 

So, secondly, we have the probability of the risk - but what damage can it cause? This is determined on the basis of an expert assessment of the consequences of the risk. Where damaged assets are involved, quantitative parameters such as downtime cost, market value, replacement cost and others are taken into account.


We will not dwell in detail on obtaining the damage value; that part is usually more or less clear. Most often the assessment yields a lower and an upper estimate of damage for the risk. But for modelling it is important to understand where the realised value will fall - closer to the minimum, to the maximum, or to the arithmetic mean? In the general case, of course, large losses are unlikely, so the so-called lognormal probability distribution suits us well. The formula, for those interested:

 

[Figure 6]

 

Yes, it is the distribution of a value (in our case, damage) whose logarithm is normally distributed. What does this give us? Look at the visualisation of the distribution and everything becomes clear.

 

[Figure 7]


In contrast to the normal distribution, where the bulk of the values lie in the middle of the range, the lognormal distribution is pulled towards the left edge, favouring small damage - which is optimal for our situation. But what if, for some risk, we want to account for a special damage distribution, shifted, for example, towards high values? Here a method used in the FAIR framework can help us: PERT, developed for the US Navy in the 1950s.
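The left-skew behaviour of the lognormal can be checked numerically; a minimal sketch using Python's standard library, with purely illustrative parameters:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Lognormal damage: exp(Normal(mu, sigma)). The values of mu and sigma here
# are purely illustrative (median damage e**12, about 163k currency units).
mu, sigma = 12.0, 1.0
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

median = statistics.median(samples)
mean = statistics.fmean(samples)

# The heavy right tail pulls the mean well above the median:
# most outcomes are small, rare outcomes are very large.
print(f"median ~ {median:,.0f}, mean ~ {mean:,.0f}")
assert mean > median
```

This mean-above-median asymmetry is exactly the ‘small damage is most likely, huge damage is rare’ shape the article describes.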

 

[Figure 8]

 

This distribution allows you to take into account not only the minimum and maximum damage, but also the value you expertly consider the most likely. If that value should carry the most weight, the coefficient 4 is placed at it, and the distribution looks as follows:

 

[Figure 9]

 

If the maximum or the minimum damage matters more, the coefficient is moved to the corresponding variable.

 

[Figures 10 and 11]

 

In fact, this method is extremely flexible, as you can decrease or increase the steepness of the hump by adjusting the numerical coefficients. You may of course ask how to construct the distribution itself, since these formulas do not specify it. The construction formula is given below, where mode denotes the most likely value:

 

[Figure 12]

What are the parameters a and b? They characterise the position of the hump. If we denote the coefficients at min, mode and max as m, k and h, then a and b are calculated by the following formulae:


[Figures 13 and 14]
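With min, mode and max denoted m, k and h as above, the PERT distribution is commonly implemented as a Beta(a, b) distribution rescaled to the interval [m, h]; the sketch below assumes the standard shape coefficient 4 and purely illustrative damage figures:

```python
import random

def pert_sample(m, k, h, rng=random, lam=4.0):
    """Draw one damage value from a PERT(min=m, mode=k, max=h) distribution.

    Implemented as Beta(a, b) rescaled to [m, h], with
    a = 1 + lam*(k - m)/(h - m) and b = 1 + lam*(h - k)/(h - m);
    lam = 4 is the conventional PERT weight at the mode.
    """
    a = 1 + lam * (k - m) / (h - m)
    b = 1 + lam * (h - k) / (h - m)
    return m + (h - m) * rng.betavariate(a, b)

random.seed(7)
# Illustrative damage range: 10k minimum, 40k most likely, 200k maximum
draws = [pert_sample(10_000, 40_000, 200_000) for _ in range(50_000)]
mean = sum(draws) / len(draws)

# The classic PERT mean is (m + 4k + h)/6, about 61,700 for these numbers
print(f"sample mean ~ {mean:,.0f}")
```

Increasing `lam` sharpens the hump around the mode; decreasing it flattens the distribution, which is the flexibility the text refers to.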


So, we have the probability of the risk and the probability distribution over damage values. What's the next step?


The answer is to build a model. It will be clear for the business if we can say: taking all probabilities into account, next year there will be roughly this many incidents with this total damage, so let's cover ourselves against that damage by spending this much money (by the way, according to the Gordon-Loeb model, this should be no more than 37% of the expected loss). The well-known Monte Carlo method is ideal for this role. Its essence is literally that over a given number of iterations a ‘coin’ is tossed each time, taking into account all the calculated probabilities; as a result, for each risk it is determined whether it was realised and, if so, with what damage. Iterations can be days (usually too fine-grained), weeks or months. Here it is important to proceed from the period for which you formed the basic event probabilities - that is, the period over which you estimated the probability of missing various incidents, as in the Bayesian-network example above. The same period should be used for the iterations.
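A minimal Monte Carlo sketch in this spirit; the risk names, probabilities and damage parameters are all hypothetical, and one iteration is taken to be one month:

```python
import random

random.seed(2024)

# Hypothetical risk register: monthly realisation probability plus
# lognormal damage parameters (mu, sigma) for each risk.
risks = [
    {"name": "phishing-led data leak", "p_month": 0.006, "mu": 12.0, "sigma": 1.0},
    {"name": "service outage",         "p_month": 0.020, "mu": 11.0, "sigma": 0.8},
]

def simulate_year(risks, rng):
    """One simulated year: toss the 'coin' for each risk in each month."""
    total = 0.0
    for _ in range(12):
        for r in risks:
            if rng.random() < r["p_month"]:          # did the risk realise?
                total += rng.lognormvariate(r["mu"], r["sigma"])  # its damage
    return total

# Many simulated years give a distribution of annual losses
losses = sorted(simulate_year(risks, random) for _ in range(10_000))

expected = sum(losses) / len(losses)
p95 = losses[int(0.95 * len(losses))]
print(f"expected annual loss ~ {expected:,.0f}; 95th percentile ~ {p95:,.0f}")
```

The expected annual loss and a tail percentile are exactly the two figures a business discussion needs: what to budget for on average, and what a bad year could look like.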


[Figure 15]

An example of Monte Carlo modelling of the consequences of risk realisation in Security Vision Risk Management. The screenshot shows how the algorithm constructed the distribution of incident frequency and potential damage for the risks on which the system holds baseline information.

 

Conclusions


Thus, with data on actual incidents, estimates of the probability of their future occurrence and of the potential damage from them, and a model of how protection measures of varying effectiveness affect incident probability, we have everything needed to calculate ROSI. By considering different configurations of protection measures, it is possible to assess which combination of solutions provides the greatest economic return. This allows the company to select protection systems that are not only effective in preventing threats but also economically justified.

 

[Figure 16]

 

By the way, the Risk Management product on the Security Vision platform provides full functionality for risk identification, assessment, modelling and treatment. One of the key advantages of this system is the ability to store and reuse the data obtained, which simplifies and accelerates the risk management process.