Gregory Travis has responded to my comments in the Risks Forum Digest at https://catless.ncl.ac.uk/Risks/31/22#subj23 . He includes a wealth of interesting new information. He disagrees with only one of the points I made concerning the accuracy of his article, namely the categorisation of the frequency of failure of AoA sensors. I said Travis suggests AoA sensors…
(This article is a modified version of one which appeared in the ACM Risks Forum Digest, Issue 31.21) Gregory Travis published an article in IEEE Spectrum on 2019-04-18 on the involvement of the MCAS software on Boeing 737 MAX aircraft in two recent crashes, Lion Air flight 610 and Ethiopian Airlines flight 302. The article…
Robert Schaefer just pointed me (via a mailing-list note) to a list of features of “AI safety”, via a reference in the blog of Victoria Krakovna at https://vkrakovna.wordpress.com/2018/11/01/discussion-on-the-machine-learning-approach-to-ai-safety/ . The features of “AI safety” pointed to, from http://www.foldl.me/2018/conceptual-issues-ai-safety-paradigmatic-gap/ , are: Short-term: This work involves immediately practical safety risks in deploying machine learning systems. These include data poisoning, training set…
Readers may know that for quite some time I have been working on topics in requirements engineering, in particular for safety requirements. They may recall previous posts here at https://abnormaldistribution.org/index.php/2010/11/09/formal-definition-of-the-notion-of-safety-requirement/ and https://abnormaldistribution.org/index.php/2010/11/09/the-parable-of-the-exploding-apples/ as well as the terminology engineering in OPRA at https://rvs-bi.de/publications/books/RVS-Bk-17-02/Ch03-OPRA.pdf and the derivation of demonstrably-relatively-complete safety requirements in Bernd Sieker’s doctoral thesis (in German) at https://rvs-bi.de/publications/Theses/Dissertation_Bernd_Sieker.pdf . Unfortunately, requirements analysis and engineering…
Judging risks is fraught with uncertainty. But say this to a decision theorist and he or she will likely suggest you are contradicting yourself. Decision theorists speak of a risk when all the probabilities are known; they speak of decision-making under uncertainty when some of them are not. I was recently reminded of this by reading Isaac Levi’s introduction to Daniel…
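The decision-theoretic distinction can be made concrete in a few lines. A minimal sketch (the acts, payoffs and probabilities below are invented for illustration): under risk, known probabilities let us rank acts by expected value; under uncertainty, no probabilities are available, so a cautious rule such as Wald's maximin picks the act whose worst outcome is least bad.

```python
def expected_value(payoffs, probs):
    """Decision under *risk*: the probabilities of the states of the
    world are known, so acts can be ranked by expected payoff."""
    return sum(o * p for o, p in zip(payoffs, probs))

def maximin(acts):
    """Decision under *uncertainty*: no probabilities are known, so a
    cautious rule such as maximin picks the act whose worst-case
    payoff is best."""
    return max(acts, key=min)

# Two hypothetical acts, each with three possible payoffs.
act_a = [10, 0, -5]   # high upside, bad worst case
act_b = [4, 3, 2]     # modest but safe

# Under risk, with these (assumed) probabilities, act A comes out ahead:
print(expected_value(act_a, [0.5, 0.3, 0.2]))  # 4.0
print(expected_value(act_b, [0.5, 0.3, 0.2]))  # about 3.3

# Under uncertainty, maximin prefers act B (worst case 2 beats -5):
print(maximin([act_a, act_b]))
```

The same pair of acts thus gets ranked differently depending on whether one is in the "risk" or the "uncertainty" regime, which is precisely why the decision theorist objects to the phrase "judging risks is fraught with uncertainty".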
Matthew Squair wrote in http://www.systemsafetylist.org/3785.htm I’ll stipulate that a car can drive itself when I see one successfully and safely negotiate…. Beware of the Turing Test and its successors! Some eleven years ago, I participated regularly in an aviation forum called PPRuNe https://www.pprune.org , which was started by a couple of professional pilots, for…
In October 2013, an Oklahoma civil court found that it was more likely than not that faulty control software had caused a car crash in which the car accelerated, contrary to the apparent intention of the driver, resulting in death and severe injury to the occupants. Readers may find some discussion of this at the…
Some computer security experts put the majority of extant vulnerabilities down to poor code quality; for example, Martyn Thomas in his keynote at the 2016 IET System Safety and Computer Security conference in London. This was evidently the case in the late 1990s, when some 80% of the newly-formed US CERT’s publicly-announced Internet-transmitted vulnerabilities were…
In the last little while I have repeatedly encountered people in safety-and-security standardisation circles who are trying to equate IEC 61508 SILs (Safety Integrity Levels) with IEC 62443 SLs (Security Levels). I saw another instance yesterday, in a paper written for AMAA 2015 by someone actively involved in international safety and security standardisation. A SIL is a pure reliability…
Matthew Green’s blog post on the KRACK vulnerability, entitled “Falling through the KRACKs”, makes two points which have come up on the System Safety mailing list frequently. One is that the IEEE standards business model makes it difficult for researchers to access standards, namely that they want you to pay a lot of money for them. We have had…