11th Bieleschweig Workshop: The Fukushima Accident and Systems Prone to EUE


Readers might like to know about the 11th Bieleschweig Workshop on System Engineering, which will take place in Bielefeld in the Senate Room of the University on 3rd-4th August, 2011. The topic will be Interacting with Extreme Risk: The Fukushima Accident. We organise the Bieleschweig Workshops.

I think that the foundations of a consensus exist amongst engineers, social scientists and other observers on how to deal as a society with the use of technologies which carry with them the possibility of extreme unsafe events (EUE), henceforth “systems prone to EUE”. The goal of the 11th Bieleschweig Workshop will be to attempt to formulate such a consensus, with special focus on the Fukushima accident as an example. The 11th Workshop will follow previous Bieleschweig formats in interspersing formal talks with lots of time for discussion, and including sessions for discussion around position papers on selected topics (“panel sessions”).

For an example of what I mean by consensus, see The Economist’s lead article from its April 23, 2011 edition, In Place of Safety Nets: Lessons from Deepwater Horizon and Fukushima, in which three rules are proposed for “mak[ing] it easier to cope with the failures of such brittle technologies”.

The first rule is “the firms involved have to accept that … disasters will happen”. This is closely related to the proposal of Lee Clarke (below) that we should use “possibilistic thinking” in assessing systems prone to EUE (see this interview with Lee).

The second rule is “to develop at least some broadly applicable technologies for repair and remediation before they are needed…. Fukushima and other nuclear plants seem oddly lacking in robotic access to places where workers cannot or should not go.” Just so! My colleagues at CITEC, who inter alia develop robots, and I heartily agree! For example, colleagues Prof. em. Dr. Holk Cruse, Dr. Axel Schneider, and Prof. Dr. Volker Dürr, of CITEC and the Department of Biological Cybernetics, have developed biomimetic systems such as the robot Tarry (in German, sorry! From left, that’s Axel and Volker in the picture), which can negotiate uneven surfaces such as rubble piles using mechatronic systems derived from stick insects. (They also have a great stick insect colony, which is a lot of fun if you like playing with critters!)

The third rule is “situational awareness is invaluable”. The Economist means that one needs adequate sensing of fundamental parameters, even in extreme failure situations, and points out the apparent lack of such at Deepwater Horizon and Fukushima. Whatever the reasons for this lack, and some are technical, I think everyone I know who works with or on safety-critical systems such as airplanes, trains, power plants and chemical plants agrees with the point: you need to assure these data somehow!

The Economist continues that “one solution to the problem of ever-growing requirements is ‘safety-case’ regulation”, which seems very similar to my proposal for requiring a continuously-maintained safety case for systems prone to EUE in my post of 27 March 2011, Fukushima, The Tsunami Hazard, and Engineering Practice.

So, are you persuaded that there might be consensus? (At least amongst a small group of English journalists and a few sociologists and system-safety experts 🙂 )

If so, and you deal professionally with safety and risk, do come and join us in Bielefeld on 3-4 August 2011! Please email me if you would like to come.

Confirmed participants are the sociologists of technology Charles Perrow (Yale), Lee Clarke (Rutgers) and John Downer (Stanford), as well as the system-safety engineers Nancy Leveson (MIT) and Martyn Thomas (Thomas Associates), and little old me (Uni Bielefeld and Causalis Limited).

Professor Perrow wrote Normal Accidents (Basic Books, 1984, revised edition Princeton University Press, 1999), in which he proposed Normal Accident Theory, introducing what many in system safety now call a System Accident. He also wrote The Next Catastrophe (Princeton University Press, 2007, revised 2011), in which the exact failure mode of the Fukushima Dai-ichi reactors was foreseen (a natural event taking out primary power, followed by “flooding” taking out secondary power), as I pointed out in my post of 14 April 2011 on memes.

Professor Clarke wrote Worst Cases (University of Chicago Press, 2005), which argued that the “probabilistic thinking” associated with risk analysis is insufficient to enable us to make wise decisions about the use of systems prone to EUE, and that considering an EUE and its consequences without attempting to assess the probability of it happening, which he calls “possibilistic thinking”, is a more appropriate tool for making socially-responsible decisions about such systems.
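To make the contrast concrete, here is a minimal toy sketch of my own (not Clarke’s method; the option names, probabilities and losses are all invented for illustration). The point is only that a probability-weighted “expected loss” can rank an option as benign while a possibilistic look at its worst case flags it as the dominant concern.

# Toy contrast between probabilistic and possibilistic screening.
# All names and numbers below are invented for illustration only.
options = {
    # option: list of (probability, loss) outcome pairs
    "design_A": [(0.10, 5.0), (1e-7, 1e6)],   # tiny chance of a catastrophic loss
    "design_B": [(0.20, 10.0)],               # frequent but bounded losses
}

def expected_loss(outcomes):
    # Probabilistic view: probability-weighted sum of losses.
    return sum(p * loss for p, loss in outcomes)

def worst_case_loss(outcomes):
    # Possibilistic view: ignore probabilities, look only at the worst outcome.
    return max(loss for _, loss in outcomes)

for name, outcomes in options.items():
    print(name, "expected:", expected_loss(outcomes),
          "worst case:", worst_case_loss(outcomes))

# Expected loss favours design_A (0.6 versus 2.0 for design_B), while the
# possibilistic view flags design_A's million-unit worst case as the concern.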

Dr. Downer is Zukerman Fellow at the Freeman Spogli Institute for International Studies at Stanford University, and works on ultra-reliability (in particular civil transport aircraft certification and engineering) and systems prone to EUE.

Professor Leveson is a founder of the discipline of software safety engineering, and is a major contributor to system safety engineering with her methods STAMP for accident and catastrophe analysis and STPA for hazard analysis. She wrote the fundamental reference book Safeware (Addison-Wesley, 1995) (also see her description) and has written a new book, Engineering a Safer World (MIT Press, to appear 2011). She consulted for the Columbia Accident Investigation Board looking into the loss of the Space Shuttle Columbia, was a member of the Baker Panel investigating the Texas City oil refinery explosion, and is an expert advisor to the Presidential Oil Spill Commission (Deepwater Horizon).

Professor Thomas is Director and Principal Consultant of Martyn Thomas Associates Limited and was founder of Praxis, now Altran Praxis, developers of the SPARK toolsuite and set of techniques for the development of demonstrably highly-reliable software. He co-wrote the US National Academies report Software for Dependable Systems: Sufficient Evidence? (National Academies Press, 2007), was most recently Chairman of the UK Royal Academy of Engineering GNSS working group, and co-wrote the Academy report Global Navigation Space Systems: Reliance and Vulnerabilities (Royal Academy of Engineering, 2011), which was released the day before the Tohoku megaquake.

My group and I are here in Bielefeld, where we developed the causal analysis method Why-Because Analysis (WBA) (see also Causalis Limited, Publications, for some more examples) and are developing the hazard analysis method Ontological Hazard Analysis (OHA) (see for example this paper or Bernd Sieker’s PhD thesis, in German). I am a Director of the system-safety consulting company Causalis Limited, whose clients include major legal firms and insurance companies in civil aviation, as well as individuals in criminal and civil cases related to accidents. I am a member of the German standardisation committee for functional safety of E/E/PE systems, DKE GK 914, as well as of the subcommittee DKE 914.0.3 “Safe Software” and the IEC “Maintenance Teams” for the international standard IEC 61508. I chair various related DKE advisory committees.

Romney Duffey, Robin Bloomfield and John Knight have indicated their intention to attend. We hope also to have participation from experts in nuclear power from Japan, Germany and other countries.

Dr. Romney Duffey is Principal Scientist of Atomic Energy of Canada Limited and coauthor, with John Saull, of Know the Risk: Learning from Errors and Accidents (Butterworth-Heinemann, 2003).

Professor Robin Bloomfield is Director of the Centre for Software Reliability at City University, London, and founder and Director of the safety consultancy Adelard, whose clients include the UK nuclear industry.

Professor John Knight leads the Dependability Research Group in the Computer Science Department at the University of Virginia, and is a Principal of Dependable Computing Incorporated, which specialises in computing applications whose failure has extreme consequences. He leads the Helix project to design and build self-regenerative software architectures resilient to attack, for use in critical infrastructures such as energy.

I take this opportunity to thank heartily our confirmed sponsors: CITEC, the Excellence Cluster for Cognitive Interaction Technology, and the Faculty of Technology, both of the University of Bielefeld; the Centre for Software Reliability at the University of Newcastle upon Tyne; and Causalis Limited.

I would like to dedicate the Workshop to Professor Harold Lewis, an amiable and entertaining correspondent of mine on safety and aviation safety for many years, and coauthor of the fundamental document known as the “Lewis Report” on the safety of US nuclear power plants, NUREG/CR-0400, also in IEEE Trans. Nuclear Science 26(5), 1979. He wrote Technological Risk (Norton, 1990, winner of the Science Writing Award 1991) and Why Flip a Coin?: The Art and Science of Good Decisions (John Wiley, 1999). Hal is unable to attend.

Folks (other than those above), please let me know by e-mail if you wish to attend. For planning purposes, please say whether you might like to present a paper or a position paper.

Peter Bernard Ladkin

