Further to the Gotterbarn/Miller study of software engineering ethics in the June 2009 edition of IEEE Computer, and my letter to the editors, published here on 27 June, Professors Gotterbarn and Miller have replied. Both letter and reply will appear in the August 2009 edition of IEEE Computer.
Professors Gotterbarn and Miller write:
[begin G&M citation]
Our description of the Qantas accident was overly simplistic. Dr. Ladkin and other experts we have consulted agree that a problem in the Flight Control Primary Computer (FCPC) seems to be involved, in conjunction with anomalous “spiking” in one of three Air Data Inertial Reference Units (ADIRUs). There were reports of ADIRUs spiking in different airplanes earlier, but without the diving behavior of the Qantas incident.
The issue of data integrity is complex in avionics systems. These systems include multiple techniques to deal with possible false data readings, including the possibility of human pilot overrides and algorithms that attempt to distinguish between anomalies from errant sensors and actual emergency situations. Through a complicated series of events, at least one of these algorithms yielded a “false positive” identification of a dangerous stall-inducing climb that was “corrected” when the FCPC ordered a steep dive. This occurred twice during the Qantas flight in question. Interested readers can read
the interim report from the Australian Transport Safety Bureau. At this writing, the Bureau has not issued its final report.
We contend that complex system interactions like this create ethical as well as technical challenges for all involved. This case, no matter how badly Dr. Ladkin thinks we described it, deserves further study and public discussion. Even when bugs are obscure, life-critical software decisions are ethically charged for software engineers and for the people their software affects. We hope that larger theme is clear in the article.
[end G&M citation]
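As a side note on the mitigation techniques Gotterbarn and Miller mention: one generic defence against a single errant sensor is median voting across redundant units, which masks an isolated spike without any explicit anomaly detection. The sketch below is a hypothetical illustration of that general technique only; it is not the actual FCPC algorithm, whose more elaborate handling of ADIRU data is precisely what the ATSB investigation examined.

```python
# Hypothetical illustration: median voting across three redundant
# air-data sources (e.g. angle-of-attack from three ADIRUs).
# This is NOT the actual A330 FCPC logic; it shows only the
# generic idea that a single spiking unit can be outvoted.

def voted_aoa(readings):
    """Return the median of three angle-of-attack readings (degrees)."""
    assert len(readings) == 3
    return sorted(readings)[1]

# Normal case: all three units agree closely.
print(voted_aoa([2.1, 2.0, 2.2]))   # -> 2.1

# One unit spikes to an extreme value; the median masks it.
print(voted_aoa([2.1, 50.8, 2.2]))  # -> 2.2
```

The interesting engineering (and, per Gotterbarn and Miller, ethical) questions begin where such simple schemes end: when spikes are intermittent, when fewer than three sources are trusted, or when the voting logic itself interacts with other algorithms in unanticipated ways.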
I find this a very reasonable response to the issues which came up. I agree that complex system interactions such as these pose ethical challenges, and am glad that my colleagues’ reply goes into more detail on the incident that caught my eye (and my pen). I also agree that this case, and not just this case, deserves further study and public discussion. But I think I disagree with what I take to be the implication that the “life-critical decisions” to which they refer were taken in this case by software engineers. I suspect, rather, that the decisions were taken by the avionics and aeronautical engineers who designed the kit and either did not anticipate the anomalies that manifested themselves or misjudged their significance. I would not expect that software engineers had access to the relevant information that would have enabled them to contribute much to those decisions.
More broadly, it is not clear to me what moral lesson we could draw from this lack of anticipation, for it is not clear that current hazard-analysis methods enable one to anticipate all such anomalies – indeed, it is rather clear that they don’t, which is, amongst other things, why I and my RVS and Causalis colleagues work in this area.