IACS Safety and Security Intertwined: A Realistic Example


Restarting a nuclear reactor is a complex and sensitive process. The process is essentially controlled through the neutron density at any point. That density is governed by processes which are fundamentally exponential in time, and is controlled by damping the exponent in various ways. It is physically possible for the process to become uncontrolled on a timescale of milliseconds, so it is crucial that the damping factors continue to do their job throughout the startup. The startup is controlled by a “protection logic”, a series of automatic and human operational actions designed to retain control of the reaction throughout the startup process. See for example the manual of the TU Dresden training reactor at https://tu-dresden.de/ing/maschinenwesen/iet/wket/ressourcen/dateien/akr2/Lehrmaterialien/start_e.pdf?lang=en
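To make the “exponential, damped” picture concrete, here is a minimal numerical sketch in Python. It is purely illustrative and not a reactor model: neutron density is simply taken to grow as n(t) = n0·e^(kt), and the rate constant k stands in for whatever the protection logic keeps damped; the numerical values of k are invented for illustration only.

```python
import math

# Purely illustrative sketch, not a reactor model: neutron density n(t)
# growing as n0 * exp(k * t); the rate constant k stands in for whatever
# the protection logic keeps damped. The values of k below are invented.

def neutron_density(n0, k, t):
    """Neutron density after t seconds, starting from n0, with rate constant k (1/s)."""
    return n0 * math.exp(k * t)

t = 0.010  # ten milliseconds

# Damped startup: k held small, the density barely moves in 10 ms.
print(neutron_density(1.0, k=0.5, t=t))     # ~1.005

# Undamped excursion: a large k blows the density up within milliseconds.
print(neutron_density(1.0, k=1000.0, t=t))  # ~22026
```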

This process almost inevitably requires safety functions in the sense of IEC 61508, since the risk of uncontrolled startup is physically present at every startup. How many there are will depend on the individual reactor design and implementation. Let them be SF1, …, SF4. (The choice of “4” is arbitrary, but makes the arithmetic below simpler.)

A training reactor can be expected to shut down and start up many times a year. Thus SF1, …, SF4 would be classified as “high demand” functions according to IEC 61508 (“low demand” is a demand rate of less than once per year). “High demand” functions have a safety requirement (which in IEC 61508 is a reliability requirement) expressed as an acceptable rate of dangerous failure per hour of plant operation. Let us suppose that shut-down and start-up training is conducted 10 times a year. We also make the usual approximation that a year contains 10^4 hours. An acceptable risk is often taken to be one dangerous failure in a million hours of operation (see, for example, http://www.hse.gov.uk/risk/theory/r2p2.htm ), which would amount to one dangerous failure in every 100 years of operation. So let us take one dangerous failure per hundred years of operation to be the acceptable risk for this example.
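As a quick check of that arithmetic, the following sketch uses only the figures already stated (10^-6 dangerous failures per operating hour and 10^4 hours per year) and confirms the hundred-year figure:

```python
# Checking the acceptable-risk figure, using only the numbers stated above.
hours_per_year = 1e4        # the usual approximation of a year
acceptable_rate = 1e-6      # dangerous failures per operating hour

failures_per_year = acceptable_rate * hours_per_year   # 0.01
print(1 / failures_per_year)                           # 100.0 years per dangerous failure
```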

For the planned use of the plant, there will be 10 start-ups every year, and thus the safety functions SF1, …, SF4 will be invoked ten times a year; between them they may suffer at most one dangerous failure in 100 years. (Let us assume each failure of a safety function is a dangerous failure.) This induces a safety requirement of at most one dangerous failure every 1,000 restarts. We need to translate this into a SIL for the safety functions.
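The translation into a per-restart figure is simple arithmetic; the sketch below merely restates the assumptions already made (10 restarts a year, 100 years, a budget of one dangerous failure):

```python
# From the 100-year budget to a per-restart requirement.
restarts_per_year = 10
years = 100
budgeted_failures = 1       # at most one dangerous failure over the whole period

total_restarts = restarts_per_year * years             # 1000
print(budgeted_failures / total_restarts)              # 0.001, i.e. one failure per 1,000 restarts
```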

A reactor restart when components are warm is said to take some 24 hours; see, e.g., the anecdote at http://engineering.stackexchange.com/questions/7394/why-does-it-take-so-long-to-restart-a-nuclear-power-plant . We may presume the restart safety functions SF1, …, SF4 are active during this time: 2.5 × 10^1 hours. (I add an extra hour to make the arithmetic work out more neatly.) At 10 restarts a year, over 100 years that amounts to 2.5 × 10^4 hours of total operational time for four safety functions, that is, 10^5 safety-function-hours, over which period at most one dangerous failure during restart is acceptable. That imposes a requirement per safety function of at most one failure in 10^5 hours, or a requirement in terms of IEC 61508-1:2010 Table 3 of SIL 1, an “average frequency” of failure of between 10^-6 and 10^-5 per operating hour. (How one can speak meaningfully of an “average frequency” when we are talking of once every hundred years, that is, less than once in the expected lifetime of the plant, is mysterious. As is the apparent SIL 1 requirement that a failure shall occur more than once every 10^6 operating hours: not only mysterious but exceedingly odd! Many of us think the SIL requirements should be “one-sided”, not “two-sided”.)
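For the record, here is that derivation as a sketch, using the rounded 25-hour restart and the four safety functions assumed above; the band quoted in the comment is the 10^-6 to 10^-5 per hour range which the text reads off IEC 61508-1:2010 Table 3 as SIL 1:

```python
# The SIL derivation as arithmetic: 25 operating hours per restart,
# 10 restarts a year over 100 years, four safety functions active
# throughout, and a budget of one dangerous failure in total.
hours_per_restart = 25      # ~24 h, rounded up as in the text
restarts_per_year = 10
years = 100
n_safety_functions = 4

restart_hours = hours_per_restart * restarts_per_year * years   # 2.5e4 plant hours spent in restart
sf_hours = restart_hours * n_safety_functions                    # 1e5 safety-function-hours
required_rate = 1 / sf_hours                                      # 1e-5 failures per hour per function

# The text reads IEC 61508-1:2010 Table 3 as placing this in the SIL 1
# band of 1e-6 to 1e-5 dangerous failures per operating hour.
print(restart_hours, sf_hours, required_rate)                     # 25000 100000 1e-05
```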

IEC 61508:2010 requires the hazard analysis to consider hazards that arise from security threats and “reasonably foreseeable misuse”. We may presume our training reactor has some key functions, scrams and restarts, involving electronic equipment, and that this equipment is configured in such a way that physical security suffices to ensure its correct functioning. However, in many nuclear power plants, subsystems are replaced over time, and the replacement subsystems include more digital electronics than the originals, thereby becoming vulnerable to cyberintrusion (a point made, for example, in the Chatham House report at https://www.chathamhouse.org/publication/cyber-security-civil-nuclear-facilities-understanding-risks ).

According to the presumption above, there is no cybersecurity threat or “reasonably foreseeable misuse” involving cybermalicious action in the scram+restart procedures of our reactor. There is thus no action required by IEC 61508 beyond assuring the safety functions SF1, …, SF4 to SIL 1.

Concerning “reasonably foreseeable misuse”, it is certainly possible that a malfeasant with access could cause or execute an unplanned shutdown and restart. However, we could expect the usual sociophysical security mechanisms to hinder such actions after a couple of iterations: such a person would be straightforwardly identified and denied access. A wise bird might nevertheless anticipate that, at some time in the next 100 years, some of the subsystems involved in scram+restart might be replaced by subsystems which are cybervulnerable in some way. In the current state of cyberforensics, it is often difficult either to identify perpetrators or to hinder repeated attacks. We might suppose, then, that an unplanned scram could be initiated through a cyberattack. After such an attack, the reactor could be left shut down until the time came for scram+restart training, before which it would be restarted so that the trainees could scram and then restart it. Such a scenario would lead to more than ten and at most twenty restarts a year. The acceptable risk is one failure every one hundred years, but categorising the safety functions involved as SIL 1 only assures us, under this scenario of up to twenty possible restarts a year, of at most one failure every fifty years. However, we can regain the acceptable risk level by assuring the safety functions to a higher SIL. SIL 2 would obviously suffice.
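The fifty-year figure, and the claim that SIL 2 suffices, follow from the same arithmetic with the restart count doubled. A sketch, using the worst-case per-hour rates of the SIL 1 and SIL 2 bands and the assumptions above:

```python
# The twenty-restart scenario: same arithmetic as before, restart count doubled.
# The per-hour rates passed in are the worst cases of the SIL 1 and SIL 2 bands.
hours_per_restart = 25
restarts_per_year = 20      # planned restarts plus cyber-induced unplanned scrams
n_safety_functions = 4

exposure_per_year = hours_per_restart * restarts_per_year        # 500 restart hours a year

def years_per_failure(rate_per_hour):
    """Expected years between dangerous failures across all four safety functions."""
    return 1 / (rate_per_hour * exposure_per_year * n_safety_functions)

print(years_per_failure(1e-5))   # 50.0  -> SIL 1 worst case: one failure every fifty years
print(years_per_failure(1e-6))   # 500.0 -> SIL 2 worst case: well within one per hundred years
```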

We have derived a more stringent safety requirement for SF1, …, SF4. This more stringent requirement came from operational considerations, and not from any contravention of the operating characteristics of the plant. A scram is a normal operation of the plant. We assume merely that it could at some future time become possible for an unplanned scram to be initiated through unauthorised non-physical access, even though it is not possible now. And if it is not possible now, this scenario surely cannot count as “reasonably foreseeable misuse” of any sort.

It follows that safety requirements have been tightened through consideration of security, specifically cybersecurity, in a manner neither anticipated nor covered by IEC 61508 or IEC 62443. Safety and security of IACS are intertwined.

