“Security Risk” and Probability


In the last little while I have repeatedly encountered people in safety-and-security standardisation circles who are trying to equate IEC 61508 SILs (Safety Integrity Levels) with IEC 62443 SLs (Security Levels). I saw another instance yesterday, in a paper written for AMAA 2015 by someone actively involved in international safety-and-security standardisation.

A SIL is a pure reliability measure assigned only to safety functions (SFs): technical devices (which may be software) put into the system to mitigate a risk that is judged too high. Specifically, a SIL is a numerical-range requirement on the allowable failure rate of the SF, phrased as a probability.
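For orientation, here is a minimal sketch (in Python) of the order-of-magnitude target bands that IEC 61508 attaches to each SIL. The figures are the standard's well-known bands for the two demand modes; the dictionary layout and the helper function are purely my own illustration, not a normative encoding.

```python
# Rough sketch of the IEC 61508 target failure-measure bands per SIL.
# Low-demand mode: average probability of dangerous failure on demand (PFDavg).
# High-demand/continuous mode: probability of dangerous failure per hour (PFH).
# Layout and helper names are illustrative only.

PFD_AVG_BANDS = {   # low-demand mode: PFDavg in [lower, upper)
    1: (1e-2, 1e-1),
    2: (1e-3, 1e-2),
    3: (1e-4, 1e-3),
    4: (1e-5, 1e-4),
}

PFH_BANDS = {       # high-demand/continuous mode: dangerous failures per hour
    1: (1e-6, 1e-5),
    2: (1e-7, 1e-6),
    3: (1e-8, 1e-7),
    4: (1e-9, 1e-8),
}

def sil_for_pfd_avg(pfd_avg):
    """Return the SIL whose low-demand band contains pfd_avg, or None."""
    for sil, (lower, upper) in PFD_AVG_BANDS.items():
        if lower <= pfd_avg < upper:
            return sil
    return None

print(sil_for_pfd_avg(5e-4))   # -> 3
```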

One conceptual problem with SILs is that you can’t use a frequentist interpretation of “probability”: the SF isn’t allowed to fail often enough for you to derive any frequency (one failure in the system lifetime is already too much for some requirements). So an interpretation as objective probability seems to be out. But most engineers calculate with objective probabilities anyway; engineers generally don’t learn the techniques of subjective (Bayesian) probability.
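A toy illustration of the point, with invented numbers: after many operating hours and zero observed dangerous failures, the frequentist point estimate of the failure rate is simply zero, which tells you nothing useful about whether a SIL target is met; a Bayesian conjugate update at least yields a defensible non-zero estimate. The prior parameters below are made up for the sketch.

```python
# Toy comparison: estimating a dangerous-failure rate (per hour) after
# observing k failures in t hours of operation.

def frequentist_rate(k, t):
    # Maximum-likelihood estimate: failures per hour.
    return k / t

def bayesian_rate(k, t, alpha=0.5, beta=1e6):
    # Gamma(alpha, beta) prior on the rate, Poisson likelihood:
    # posterior is Gamma(alpha + k, beta + t); return its mean.
    # alpha and beta here are invented prior beliefs, not standard values.
    return (alpha + k) / (beta + t)

t = 2e4                          # 20,000 operating hours, no failures observed
print(frequentist_rate(0, t))    # 0.0 -- "never fails", which is not credible
print(bayesian_rate(0, t))       # ~4.9e-7 per hour, driven largely by the prior
```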

An SL is a classification of how cybersecurity-hardened you want a function or a piece of kit to be, and it applies to everything in, say, your Basic Process Control System, not just to SFs (or Safety Instrumented Systems, as their realisations are called in the process industries).

So, right off, an SL is broader than a SIL. Surely it is evident that if you want to compare a SIL with an SL in any way, you are restricting yourself to SFs, since only an SF can have a SIL? And if you are considering an SF, surely you want it to be as cybersecurity-hardened as you can make it, so an SL of infinity, no matter whether it is SIL 1 or SIL 4? Isn’t that just common sense?

One can’t help but wonder whether this kind of confusion could be alleviated if universities taught budding engineers these concepts. (Let me refer again to John Knight’s paper as to why this is hard.)

Everyone in or around cybersecurity talks about “security risk”. The IEC is defining it out of existence: risk is probability combined with severity, for cybersecurity people as for safety people. And there is no objective probability that can reasonably be assigned to a security vulnerability. So now you have a phrase common to an entire industry segment which is being arbitrarily defined as an oxymoron. That is so silly.

The perfectly good phrase “security risk” means something like: chance of successful attack (CSA) with deleterious consequences (CSADC). If you have a piece of software S and nobody in the world knows of any vulnerability in it, then your CSA is zero. If someone has identified a vulnerability but no exploit yet exists, then your CSA is still zero but is likely to go higher pretty soon. Those are purely Boolean flags. If an exploit is known, then your CSA is definitely well above zero. How far above zero depends on the availability of the exploit (which includes how widely it is distributed, as well as how technically hard it is to invoke) and the opportunity for an attacker (how good are your access controls? Is your IT-related staff susceptible to social engineering?), as well as maybe other things. You might want to give Exploit Availability and Attack Opportunity grades between 0% and 100%, but they are not probabilities.
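To make the distinction concrete, here is a hypothetical sketch of such a grading scheme. Everything in it – the factor names, the Boolean flags, the combination rule – is my own invention for illustration; it is not a probability calculation and not anything defined in IEC 62443.

```python
# Hypothetical CSA grading sketch (illustrative only, not a standardised metric).
# The Boolean flags gate the grade; the two 0-100% grades are judgements,
# not probabilities, and the combination rule is arbitrary.

def csa_grade(vulnerability_known, exploit_exists,
              exploit_availability, attack_opportunity):
    """Return a CSA grade in [0, 1].

    vulnerability_known, exploit_exists: Boolean flags.
    exploit_availability, attack_opportunity: judgement grades in [0, 1].
    """
    if not vulnerability_known or not exploit_exists:
        return 0.0                      # no known exploit: CSA is zero (for now)
    # One arbitrary way of saying "well above zero, higher when both
    # availability and opportunity are high":
    return max(0.1, exploit_availability * attack_opportunity)

# Exploit is public and easy to run, access controls are mediocre:
print(csa_grade(True, True, exploit_availability=0.9, attack_opportunity=0.6))
```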

Your safety concerns aren’t with the pure CSA, but with the DC part of CSADC. (Since you could define DC however you want, this holds for the whole cybersecurity triad, not just for safety.) The evaluation of the DC necessarily comes from your Hazard Analysis (HazAn). For a given CSA, if a severe hazard is involved and the CSADC is not zero, it should intuitively be somewhat higher than if only a mild hazard were involved. Since the SF (if there is one) is there to mitigate the hazard, there is the link, if you want one, from SF to CSADC. But again, surely you want your safety function ideally to be invulnerable – SL = infinity – no matter what its safety requirement: SIL 1, SIL 2, SIL 3 or SIL 4.

Another possible connection of CSA with probability-like concepts is through uncertainty. In many or even most cases you don’t actually know what the CSA of your kit is, unless you’ve just suffered a successful attack, in which case it is 100%. And CSA can change rapidly. Today there might be no vulnerability known in your system (CSA = 0). Tomorrow there might already be an exploit – a zero-day exploit, as it’s called (CSA = pretty high).

If you want to bring uncertainty into your security risk estimate, you need to be performing Bayesian reasoning on subjective probabilities, not the normal frequentist-type calculations beloved (and hated) by engineers.
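A minimal sketch of what that kind of reasoning looks like, with all the numbers invented: a subjective prior belief that an exploitable vulnerability exists in S, updated by Bayes’ theorem on the (imperfect) evidence of a penetration test that found nothing.

```python
# Minimal Bayesian update on a subjective probability (all numbers invented).
# Belief that an exploitable vulnerability exists in S, updated on the
# evidence "a penetration test found nothing".

p_vuln = 0.30                 # prior subjective belief that a vulnerability exists
p_clean_given_vuln = 0.40     # pen test misses an existing vulnerability
p_clean_given_no_vuln = 0.95  # pen test correctly reports nothing

# Bayes' theorem: P(vuln | clean test)
p_clean = (p_clean_given_vuln * p_vuln
           + p_clean_given_no_vuln * (1 - p_vuln))
posterior = p_clean_given_vuln * p_vuln / p_clean

print(round(posterior, 3))    # ~0.153: the belief drops, but not to zero
```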

All this seems to me to be more or less common sense. Mostly I have encountered either puzzlement or resistance when I have invoked it; hence this note. I suppose I could just talk to people who “get it”, but it is not going to help industry as a whole if the smartypants like me only talk amongst themselves. That is why it took so long for static analysis to become routine in software development.

