Probabilistic and Possibilistic Analysis, the Precautionary Principle and EUEs


Yesterday, on a closed mailing list of which I have been a member for almost two decades, Werner U drew our attention to a study by John Mueller, a political scientist at Ohio State University, and Mark Stewart, a civil engineer at the University of Newcastle in New South Wales, of the costs and potential benefits of actions taken by the US Department of Homeland Security (DHS) to protect against supposedly possible terrorist attacks in the US. I found the article eye-opening.

Mueller and Stewart argue that, despite recommendations to do so by entities such as the US Government Accountability Office (GAO), and despite the National Research Council, which was tasked with producing a study in 2010, observing its absence, the DHS has not effectively applied any cost/benefit analysis to its anti-terrorism measures. The authors searched DHS documents, news reports and testimony, and the result was (p4) that

we have been able to find only one published reference to a numerical estimate of risk reduction after an extensive search of the agency’s reports and documents.

They claim that the expenditures on anti-terrorist measures are (p2) “one trillion [that is, 10^(12)] dollars and counting.”

They attribute what they argue is a damaging distortion of priorities to “probability neglect” and a focus on worst-case scenarios. That is, the DHS aims to protect against worst-case scenarios while ignoring the likelihood of their being realised. Mueller and Stewart point out (p 12) that the annual number of people who have died world-wide in attacks by Islamic-fundamentalist extremists since 9/11 (2001, almost ten years ago) can be estimated at between 200 and 300, which is fewer than the number of people who drown in bathtubs each year in the USA. If you are a US taxpayer, there seems to be a prima facie case for looking into this issue somewhat harder (or spending more money on your bathtub, whichever…).
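To see roughly what is at stake, here is a back-of-envelope break-even calculation using only the figures quoted above. It is a sketch of the style of cost/benefit reasoning Mueller and Stewart call for, not their actual computation, and the assumption that DHS spending prevents all such deaths is deliberately (absurdly) generous to the DHS:

```python
# Back-of-envelope break-even sketch using the figures quoted above.
# Assumption (deliberately generous): DHS spending prevents *all* such deaths.

total_spend = 1e12        # "one trillion dollars and counting"
years = 10                # roughly the period since 9/11
annual_spend = total_spend / years

deaths_per_year = 300     # upper end of Mueller and Stewart's estimate

cost_per_life_saved = annual_spend / deaths_per_year
print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")
# => Cost per life saved: $333,333,333
```

Even on that generous assumption the implied cost per life saved is hundreds of millions of dollars, roughly two orders of magnitude above the value of a statistical life (on the order of $1-10 million) that regulatory agencies conventionally use.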

Amusingly, they point out some “innovative” methods used by the DHS for assessing risk, such as adding the probability to the severity. De Moivre, who first defined risk in his seminal work De Mensura Sortis (which celebrates its 300th anniversary this year), was a good businessman and would likely have taken the opportunity to offer the DHS a tutorial 🙂
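For the record, de Moivre’s notion is the modern expectation: the risk of an uncertain loss is its probability multiplied by its magnitude. A toy calculation (the numbers here are invented purely for illustration) shows that adding the two instead does not merely get the magnitude wrong, it can reverse the ranking of threats:

```python
# Toy illustration (invented numbers): risk as expectation, p * c,
# versus the DHS-style "innovation" of p + c.
threats = {
    "A": (0.10, 20.0),    # (probability, severity): fairly likely, modest harm
    "B": (0.0001, 100.0)  # very unlikely, severe harm
}

for name, (p, c) in threats.items():
    print(f"{name}: expectation p*c = {p*c:.3f}, additive 'risk' p+c = {p+c:.4f}")

# Expectation ranks A (2.0) far above B (0.01); the additive figure
# ranks B (100.0001) above A (20.1). Addition is also dimensionally
# incoherent: a pure number plus a quantity of harm.
```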

However, Mueller and Stewart also lambast “possibilistic thinking”, or “worst-case thinking”, citing in particular Bruce Schneier’s article from exactly a year ago, Worst-Case Thinking. I am an avid reader of Bruce’s opinions, but I don’t agree with a generic denigration of “worst-case thinking”.

In correspondence with Lee Clarke, who has written a book analysing and advocating worst-case thinking as an analysis technique for systems prone to extreme unsafe events (EUEs), as I have previously called them here, I have come to understand the method Clarke advocates as a form of hazard analysis: enumerating the hazards by looking at potential outcomes and analysing the possible events which lead to them. I am a great fan of hazard analysis of this sort, as readers of our recent technical work will know. Indeed, almost all the international engineering standards with which I am familiar (I work on a standardisation committee for functional safety) advocate it as a necessary technique during the initial steps of development (and, in many standards, as one continuing throughout system development). I have also been impressed by Charles Perrow’s prediction of the exact failure sequence at Fukushima Dai-ichi in his 2007 book, as I have mentioned before, and have wondered why it takes a sociologist to perform – and publish – the kind of hazard analysis that should be performed by the plant engineers.
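As a toy illustration of the outcome-first style of analysis Clarke describes (the structure and the entries are invented for this example, not taken from any standard or from Perrow), one can think of it as building a table from worst-case outcomes back to the candidate event sequences that could produce them, before any likelihood is assigned:

```python
# Toy sketch of outcome-first hazard enumeration (entries invented for
# illustration): start from worst-case outcomes and work backwards to
# candidate event sequences. Likelihoods come later, if at all.
hazard_log = {
    "reactor core damage": [
        ["loss of off-site power", "flooded backup generators",
         "battery exhaustion", "loss of cooling"],
        ["main coolant pipe rupture", "emergency injection failure"],
    ],
    "toxic release off-site": [
        ["tank overpressure", "relief valve fails closed", "vessel rupture"],
    ],
}

for outcome, sequences in hazard_log.items():
    print(f"Worst-case outcome: {outcome}")
    for seq in sequences:
        print("  candidate sequence: " + " -> ".join(seq))
```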

But there are obvious problems with performing only worst-case hazard analyses and never attempting, at some point, to assess their likelihood, as Mueller and Stewart show us. I don’t mean to resolve this tension here, but just to draw attention to this wider, apparently controversial and important issue of the effectiveness and limits of techniques such as hazard analysis and risk analysis. One major argument for not using estimates of likelihood is that, for EUEs, they are rarely available. One major argument for using some estimate of likelihood, even a crude one, is Mueller and Stewart’s.

One particular problem does seem to be the invalid argumentation commonly used when talking about severe events. Mueller and Stewart point out the “adding probability to severity” argument. Another runs as follows. One characterises the likelihood as “low”, turns “low-likelihood, high-severity events” into a pseudo-category, and thereby enables an argument for declining to deal with obvious risks that could be mitigated and that, according to people such as Nancy Leveson and myself, should be mitigated. Leveson has noted in private conversation that she has observed exactly this in practice in one particular process industry.
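A small numerical example (the probabilities are invented for illustration) shows what the pseudo-category throws away: once everything below some threshold is labelled “low”, events whose expected losses differ by orders of magnitude become indistinguishable, and the argument for mitigating the worse one disappears along with the distinction:

```python
# Invented illustration: qualitative binning erases order-of-magnitude
# differences that even a crude numerical estimate would preserve.
def likelihood_bin(p, threshold=1e-3):
    return "low" if p < threshold else "high"

severity = 1e9  # some fixed catastrophic loss, in dollars

for p in (1e-4, 1e-7):
    print(f"p = {p:.0e}: bin = {likelihood_bin(p)}, "
          f"expected loss = ${p * severity:,.0f} per year")

# Both events land in the "low" bin, yet their expected losses
# ($100,000 vs $100 per year) differ by a factor of 1000.
```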

Robert Dorsett has observed in private that the DHS can seem to be motivated by the Precautionary Principle: if one sees a potential for harm, it is best to take concrete measures to prevent that potential from being realised. I considered the Precautionary Principle, along with other principles, in an essay I wrote some ten years ago on a practical example of everyday risk analysis by the HSE in the UK. Cass Sunstein believes the principle is what philosophers of action call a “practical paradox”, something that is impossible to implement as written. He writes on pp. 102-105 of his 2002 book Risk and Reason that

All over the world, there is increasing interest in a simple idea for the regulation of risk: In the case of doubt, follow the precautionary principle… There is some important truth in [it]. Sometimes it is much better to be safe than sorry… But there is a larger problem. The precautionary principle can provide guidance only if we blinker ourselves and look at a subset of the harms involved. In real-world controversies, a failure to regulate will run afoul of the precautionary principle because potential risks are involved. But regulation itself will cause potential risks and hence run afoul of the precautionary principle too; and the same is true for every step in between. Hence, the precautionary principle, taken for all that it is worth, is literally paralysing. It bans every imaginable step, including inaction itself.

Given the general inclination which Robert notes to invoke the Precautionary Principle at every turn, it seems to me wise to keep Sunstein’s argument ready to hand. The Precautionary Principle is a blunt instrument, and it is wise to use it more subtly. Mueller and Stewart, following Schneier, could be taken to argue that worst-case, possibilistic thinking is also a blunt instrument, and that it is thereby equally wise to take up Clarke’s and Perrow’s suggestions for using it more subtly.

