Automated Road Vehicles and Risk Calculations


Judging risks is fraught with uncertainty.

But say this to a decision theorist and he or she will likely suggest you are contradicting yourself. Decision theorists speak of a risk when all the probabilities are known; they speak of decision-making under uncertainty when some of them are not. I was recently reminded of this by reading Isaac Levi’s introduction to Daniel Ellsberg’s PhD thesis, which was published almost forty years after it was written.

The case for risk calculations in a nutshell. Humans build systems with some tangible benefit to society (power stations, railways, commercial transport aircraft, buses, bicycles) which can also malfunction, either individually (a high-speed train) or collectively (the interaction of train, signalling and track), in such a way as to cause damage. Those who build them have, in my view and also in the general view and law of (western?) societies as a whole, a moral obligation to recognise such possibilities for damage and to avoid or mitigate them. Avoiding them completely is often regarded as a pipe dream, for well-known reasons, in which case the only approach is to make them rare and mild. Engineering risk is consequently construed as the combination of likelihood with severity (consequential damage), and virtually all engineering standards say you have to minimise it; some combinations are ruled out altogether via the risk-matrix approach, as used in commercial aviation and European rail certification.
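
How that combination works in a risk matrix is easy to sketch. By way of illustration only – the likelihood and severity classes and the verdicts below are invented for the example, not taken from any aviation or rail standard:

```python
# A minimal risk-matrix sketch. The likelihood and severity classes and
# the verdicts are invented for illustration; real standards (commercial
# aviation, European rail certification) define their own categories.

LIKELIHOOD = ["frequent", "probable", "occasional", "remote", "improbable"]
SEVERITY   = ["catastrophic", "hazardous", "major", "minor"]

def assess(likelihood: str, severity: str) -> str:
    """Combine a likelihood class and a severity class into a verdict."""
    li = LIKELIHOOD.index(likelihood)   # 0 = most frequent
    si = SEVERITY.index(severity)       # 0 = most severe
    score = li + si                     # crude combination, for the sketch only
    if score <= 2:
        return "unacceptable"           # ruled out altogether
    if score <= 4:
        return "tolerable only with mitigation"
    return "acceptable"

print(assess("frequent", "catastrophic"))   # unacceptable
print(assess("improbable", "minor"))        # acceptable
```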

A main difficulty is that there is usually so much uncertainty, associated in many digital-computing-based systems with almost unfathomable complexity of detail, that, for general purposes such as conformance with engineering standards, simplifying assumptions need to be made. These assumptions are often formulated in such a way that line engineers can claim they are “following the accepted rules”. Such assumptions can be questioned. On the other hand, some approaches, such as the risk matrix, can be construed in such a way that risk numbers all but disappear, in that certain mostly qualitative combinations are deemed unacceptable and others deemed acceptable, as in transport-aircraft airworthiness certification and European rail certification.

But sometimes reasoning with numbers is unavoidable. When the volcano Eyjafjallajökull erupted in 2010 and a possibly-extremely-damaging ash cloud spread itself across Northern Europe, there was an immediate question of whether, and how, one could safely fly. Some damage was unavoidable – either the economic damage from extreme restriction of air transportation (to the people dependent on it, as well as the financial consequences for the airline companies which provide it) or physical damage to aircraft and their passengers consequent upon a failed attempt to fly through the cloud.

How do you reason in such circumstances? It is unavoidable – you can’t just say “we don’t do that; we shall ignore it”. But anything you do will be dependent on simplifying assumptions, and all of those assumptions can be questioned. You still need an answer, and preferably the best one you can figure out. My treatment of that situation may be found in my book chapter An Example of Practical Risk Analysis. Amongst other things, the analysis indicates that some of the major players were perhaps not getting the best advice.

I conclude that sometimes numbers help, even if you can doubt most of them.

I think that running automated road vehicles (ARVs) on public roads is a situation in which numbers help. If we had a fatal accident a week, they’d be gone. What about a fatal accident in a few years? What about one a year? Well – here is the first point at which numbers help – surely that depends on how many miles, and where, are being racked up. So what might we want to say about how it depends?

ARVs need to be run on public roads at some point, but what is that point? Should it be now, or should it be in a year’s time? Apparently Arizona thought it should be “now” a number of months ago; but, as a consequence of the fatal accident with a pedestrian on March 18th in Tempe, it has now decided it should be “later”. On what basis? I don’t know anyone who seriously believed that ARVs would never, ever have a fatal accident. Everyone thought it would happen at some point. Is it plausible that Arizona regulators thought “let’s let them drive until someone gets killed; then we’ll remove them”? Surely not. But that is in effect what happened.

Is there a defined, thought-through process for ARV maturity playing out here? Do we really want to ignore numbers in any such process?

As Peter Bishop has recently pointed out, there are (and have been since the decision was made to allow trials on public roads) numbers as to expectation and confidence. Specifically, we can assume that fatal road accidents form a Poisson process. (Don Norman wondered in private communication if this is really so. That is a discussion which must be conducted in any such use of statistical methods, but I leave it for elsewhere.) The one-sided confidence limits for the exponential distribution (the model for time to next failure accompanying the Poisson process) are calculated following a 1934 paper of Clopper and Pearson. Alternatively, all this is in Sections A8.2.2.1 and A8.2.2.2 of Birolini’s Reliability Engineering handbook. We are going with a figure of about 1 million miles of ARV driving on public roads to date.
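
To make the arithmetic explicit (this is the standard one-sided Poisson confidence bound; the notation is mine, not Peter’s or Birolini’s): if k fatal accidents are observed in T miles of driving, the confidence that the true rate λ lies below a candidate bound λ₀ is

```latex
C(\lambda_0) \;=\; \Pr[X > k] \;=\; 1 - \sum_{i=0}^{k} \frac{(\lambda_0 T)^i}{i!}\, e^{-\lambda_0 T},
\qquad X \sim \mathrm{Poisson}(\lambda_0 T).
```

With k = 0, T = 10⁶ miles and λ₀ = 10⁻⁶ per mile, this gives C = 1 − e⁻¹ ≈ 0.63, the 63% figure quoted below.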

Peter’s conclusions were as follows.

… if we had 1 million miles and zero accidents, we could only be 63% confident [that the fatal accident] rate is better than 10⁻⁶ per mile. If we had one [fatal] accident in the next mile, the confidence drops to 26% in 10⁻⁶ per mile, but we would have 60% confidence the rate is better than 2 × 10⁻⁶ per mile. … For human driving [in the US], the 63% confidence [level] is around 10⁻⁸ per mile.

I’ve checked the numbers and they work out. The figure for human driving can be taken from the Wikipedia entry Motor vehicle fatality rate in U.S. by year, which says the rate has been between 1.08 and 1.18 fatalities per 100 million vehicle miles since 2009. (I have also checked the number using available data from other organisations and it seems to be plausible.)
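
For anyone who wants to repeat the check, here is a minimal sketch of the calculation in Python (the mileages and accident counts are the round figures used above, not precise fleet data):

```python
from math import exp, factorial

def poisson_cdf(k: int, mu: float) -> float:
    """P(X <= k) for X ~ Poisson(mu)."""
    return exp(-mu) * sum(mu**i / factorial(i) for i in range(k + 1))

def confidence_rate_below(bound: float, miles: float, accidents: int) -> float:
    """One-sided confidence that the true fatal-accident rate per mile
    is below `bound`, given `accidents` observed in `miles` of driving."""
    return 1.0 - poisson_cdf(accidents, bound * miles)

print(confidence_rate_below(1e-6, 1e6, 0))  # zero accidents in 1e6 miles: ~0.63
print(confidence_rate_below(1e-6, 1e6, 1))  # one accident in ~1e6 miles: ~0.26
print(confidence_rate_below(2e-6, 1e6, 1))  # same, against a 2e-6 bound: ~0.59
```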

Where are the public decision criteria for conducting ARV trials on public roads? Even before the Tempe accident, with a million road miles under their fan belts, we could be only just over 60% confident that the ARV fatal-accident rate was better than a figure two orders of magnitude worse than that of human drivers. No matter how ball-park this figure may be, and notwithstanding caveats about the statistical modelling, shouldn’t such observations play a role in those decision criteria?

Instead, we seem to be treated to observations such as that of the head of the car company Tesla, who was reported in today’s Guardian as having tweeted:

It’s super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in past year get almost no coverage

Tesla technology is without doubt excellent. But with three well-reported high-speed collisions under Autopilot (AP) control, would it be too much to ask the company to apply the same care to its risk analyses? Or they could just give me the total S-type miles up to the May 2016 Florida accident, then up to the March 2018 Mountain View accident, then up to the May 2018 Utah accident, and I’ll do the numbers.

