Hazard analysis (Hazan) is one of the necessary skills of a safety-critical systems engineer. In a post to the University of York Safety-Critical mailing list entitled “software hazard analysis not useful?”, Daniel Jackson proposed (in my interpretation of what he says) that, as far as software development goes, any hazard analysis may be performed “up front”, at the beginning, and the safety requirements thereby identified carried through the software development as normal specifications would be. Nancy Leveson replied that he didn't seem to understand Hazan, and that his comments referred rather to Failure Modes and Effects Analysis. But Daniel is a quick learner (he is, after all, a Professor of Software Engineering at MIT!), so one wouldn't presume this to be the definitive last word on his proposal, as indeed it wasn't.
Readers can follow the thread, starting from Daniel’s post, using the York archive tools.
I suggested that it is hard to argue about such matters in the abstract and that an example would be helpful. Daniel duly provided one, announced on the York list with a note on his WWW site and accompanied by the worked example, checked in Alloy.
Jan Sanders proposed an Ontological Hazard Analysis (OHA) of the example, a first version of which I put on our RVS WWW site on Tuesday. After taking account of comments received (most via the York thread) and the discussion there, as well as a discussion with Jan and Bernd Sieker on Thursday afternoon (21.10), I posted a revised version this morning: Leveson's “Safer World” Interlock Example with OHA.
Daniel and we agree, thereby differing from Nancy, that formally checking one's reasoning, using simple logic, is essential to responsible systems analysis. Daniel uses Alloy; we have so far performed OHA by hand and haven't developed a checker. This was brought home to us (not that we didn't agree with it already) when Michael Jackson (not no relation) pointed out a reasoning mistake in our reconstruction of a hypothetical client interview. Daniel's point about formally checking reasoning is well taken!
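For readers who haven't seen this style of checking, here is a minimal Alloy sketch of what it looks like in practice. It is my own toy rendering of an interlock rule, not Daniel's actual model; all the names in it are invented for illustration.

    -- Toy model (mine, not Daniel's): does the hypothesized interlock
    -- rule really entail the safety requirement?
    abstract sig DoorPos {}
    one sig Open, Closed extends DoorPos {}

    abstract sig Power {}
    one sig On, Off extends Power {}

    sig State {
      door: one DoorPos,
      power: one Power
    }

    -- Interlock rule: power is On only when the door is Closed
    fact Interlock { all s: State | s.power = On implies s.door = Closed }

    -- Safety requirement: no state has the door Open with power On
    assert NoExposure { no s: State | s.door = Open and s.power = On }

    -- Ask the Analyzer to search for counterexamples in a bounded scope
    check NoExposure for 5

If the assertion failed, the Analyzer would exhibit a concrete counterexample state – exactly the kind of slip that hand reasoning lets through.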
We also discovered a formal device for ensuring the properties of an OHA even in tricky analyses, in particular the desired formal proof of completeness of the safety requirement formulated at the highest refinement level (which we call Level 0): the making of a Safety Assumption. The Safety Assumption may be used in the proofs, during refinement, that the high-level safety requirements are fulfilled; formally, it is introduced as a conjunct of the antecedent in the Refinement Safety Proof Obligation (RSPO) at each refinement level.

The Safety Assumption does not come for free. It must be discharged at some point, either by being proved (from the safety requirements formulated at some more-detailed refinement level), or by subjecting its negation (i.e., that the Safety Assumption is not fulfilled) to a risk analysis to show explicitly that the risk of its not being fulfilled is acceptable. A sort of “buy now, pay later” for Hazan – but one must be careful to ensure that the interest rate is not unacceptably high! The procedural risk is that one may make a strong Safety Assumption early on to allow the analysis to proceed smoothly, and then carry it all the way through, leaving some other poor analyst with an extremely difficult or impossible job of discharging it! That obviously won't do in polite society, so use with care!
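To make the shape of this device concrete, here is a minimal sketch in logical notation; the symbols are mine, chosen for illustration, and are not taken from the worked example. Write $\mathit{SR}_i$ for the safety requirements formulated at refinement level $i$ and $\mathit{SA}$ for the Safety Assumption. The RSPO at level $i$ then takes roughly the form

$$\mathit{SA} \wedge \mathit{SR}_{i+1} \;\Rightarrow\; \mathit{SR}_i$$

and discharging $\mathit{SA}$ means either proving $\mathit{SR}_k \Rightarrow \mathit{SA}$ for some more-detailed level $k$, or showing by risk analysis that the risk of $\neg\mathit{SA}$ is acceptable.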
Jan should be writing this, but he's off at his day job of saving our TechFak computer network from the predations of its users, so it's left to me.
I had forgotten how much joy there is to be had in working this kind of stuff in a rich discussion environment! It’s been a fun week! Thanks everyone!
PBL