Computer Reliability in Legal Arguments, with Some Observations about Arguments


The cybersecurity and public policy expert Susan Landau has published an article in the Lawfare blog about Problems with Evidentiary Software in English and US courts: that is, the attitudes of those courts towards evidence generated by, or about the behaviour of, computers in the cases they consider. She uses the Post Office Horizon affair as the English example.

She refers to Paul Marshall’s masterful recent account of the PO Horizon affair, given to the University of Law.

Susan also singles out Paul’s observation that “…. writing on a bit of paper in evidence is only marks on a piece of paper until first, someone explains what it means and, second, if it is a statement of fact, someone proves the truth of that fact.” This applies just as strongly to arguments presented in assurance cases for safety-critical systems.

What both kinds of argument have in common is that they are essentially public: they are meant to be read, understood and accepted by people of varying ability, expertise and involvement with the subject matter. That almost compels them to be given some kind of textual structure which supports this aim.

Argument aids such as Goal Structuring Notation, Assurance 2.0 and Why-Because Analysis/Why-Because Graphs have been developed to improve the structure, accuracy and comprehensibility of assurance arguments and, respectively, physically-causal arguments. The only pervasive structuring aid I am aware of in legal documents is the one-point-per-paragraph convention, along with paragraph numbering, which is of course helpful.

Given the generally complex structure of arguments, there arises the question of how a simple linear structure such as a legal argument or legal judgement can faithfully represent such a structurally-complex argument as, say, an attribution of causality in an accident (accidents do get litigated). There seem to be two main ways: focussing and delegation. Focussing means that one particular aspect of an argument is singled out for attention – a particular causal link, say, which for some reason is construed to be of more interest, or more contentious, than others. Delegation means that assurance of the complex argument structure is handed to a third party, with the explicit claim being made that the delegated argument is to be taken as correct. (If a delegated claim is doubted by a party, then focussing is used to select areas of the delegated structure for specific attention/dispute, perhaps recursively.)
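To make the focussing/delegation idea concrete, here is a small sketch in Python. The argument, claim names and rendering are my own invention for illustration, not any standard legal or assurance notation: a tree-structured argument is flattened into linear text, with the contested link expanded in full, delegated material cited rather than reproduced, and everything else summarised.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Claim:
    text: str
    support: List["Claim"] = field(default_factory=list)
    delegated_to: Optional[str] = None  # e.g. an expert report taken as correct

def contains(claim: Claim, target: Claim) -> bool:
    """True if `target` occurs in the subtree rooted at `claim`."""
    return claim is target or any(contains(s, target) for s in claim.support)

def linearise(claim: Claim, focus: Claim, depth: int = 0) -> List[str]:
    """Render the argument tree as indented lines.

    Subclaims on the path to the focussed claim are expanded in full
    (focussing); delegated subtrees off that path are collapsed to a
    one-line citation (delegation); the rest is noted as not in dispute.
    """
    lines = ["  " * depth + claim.text]
    for sub in claim.support:
        if contains(sub, focus):
            lines.extend(linearise(sub, focus, depth + 1))
        elif sub.delegated_to is not None:
            lines.append("  " * (depth + 1) +
                         f"{sub.text} [taken as correct; see {sub.delegated_to}]")
        else:
            lines.append("  " * (depth + 1) + sub.text + " [not in dispute]")
    return lines

# Invented example: a causal attribution with one contested link.
contested_link = Claim("The braking-system fault caused the overrun")
argument = Claim(
    "The operator is liable for the accident",
    support=[
        Claim("The accident occurred as described",
              delegated_to="the investigation report"),
        contested_link,
        Claim("The operator was responsible for brake maintenance"),
    ],
)
print("\n".join(linearise(argument, focus=contested_link)))
```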

I suspect that the adversarial nature of legal argument (in the Western tradition) somehow renders this linear structuring with focussing and delegation generally adequate to the task, but right now I am at a loss to explain why.

Legal arguments, and law, have been the subject of translation into logical languages for many decades; see, for example, the July 1990 issue of Ratio Juris. But it has long been known that “real arguments”, arguments used by people to establish assertions in a variety of contexts, have structure more complex than that of typical formal logics (which accounts in part for GSN). The study of real arguments is generally called “informal logic” by its practitioners. Walton, Reed and Macagno speak of Argumentation Schemes (Cambridge U.P., 2008) and try to classify them (as have others before them, whom they reference). Rowe and Reed at the University of Dundee wrote a software package called Araucaria, which was intended to help people structure arguments. The last release was in 2001, as far as I know, and it only seems to be available now via the Wayback Machine (I would welcome more info on subsequent developments).

According to Robin’s Keynote at ScSS 2020, Assurance 2.0 has a rebuttal (counterevidence) facility, as well as a measure of confirmation (Robin spent some time on both, as far as I remember). Counterevidence representation has been part of informal logic for quite some time. It is well known to mathematicians and philosophers that attempting to construct counterarguments can be productive (especially when they turn out to be right!). The problem for safety engineering in general is that counterarguments to the reliability of a given piece of software are usually much more readily available than a reliability argument. That suggests that the Bloomfield-Rushby explicit inclusion of counterevidence may well have a salutary effect on the quality of the resulting reliability arguments.
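As a toy illustration of why recording counterevidence explicitly matters (this is my own sketch of the general informal-logic idea, not a description of the Assurance 2.0 mechanism, whose details I have not seen), consider a claim that stands only if its support stands and no recorded rebuttal stands:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    text: str
    support: List["Node"] = field(default_factory=list)    # evidence/subclaims
    rebuttals: List["Node"] = field(default_factory=list)  # counterevidence

def stands(node: Node) -> bool:
    """A claim stands if all its supporting claims stand and none of its
    rebuttals stand.  Leaf nodes (plain evidence) stand by default."""
    if not all(stands(s) for s in node.support):
        return False
    return not any(stands(r) for r in node.rebuttals)

# Invented example: a reliability claim defeated by explicit counterevidence.
testing = Node("10,000 test executions without failure")
field_reports = Node("Two field reports of unexplained crashes")
claim = Node("The software is sufficiently reliable",
             support=[testing],
             rebuttals=[field_reports])

print(stands(claim))  # False: the counterevidence defeats the claim as stated
```

Writing the rebuttal down forces the argument’s author either to rebut it in turn or to weaken the claim, which is the salutary effect suggested above.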

I must say I have less intuition about the suggested confirmation measure. I don’t quite see how it differs from the Odds Ratio, and they haven’t told me yet.**

** Footnote added 2021-06-28. John Rushby has replied to my query:

….there are many confirmation measures and the Odds Ratio is indeed one of them (due to I J Good).

He indicates that there is a longer technical paper in preparation.
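For background (and with the caveat that I have not seen the Assurance 2.0 definition, so this is only the standard material the Odds Ratio comes from): Bayes’ theorem in odds form multiplies the prior odds on a hypothesis H by the likelihood ratio for the evidence E, and Good’s “weight of evidence” is the logarithm of that ratio:

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood (odds) ratio}}
    \cdot \frac{P(H)}{P(\neg H)},
\qquad
W(H:E) = \log \frac{P(E \mid H)}{P(E \mid \neg H)}
```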

