The System Architecture of Autonomous Road Vehicles

Matthew Squair wrote in:

I’ll stipulate that a car can drive itself when I see one successfully and safely negotiate….

Beware of the Turing Test and its successors!

Some eleven years ago, I participated regularly in an aviation forum called PPRuNe, started by a couple of professional pilots, for professional pilots to discuss all sorts of aviation matters anonymously (it has been commercial for over a decade now, I think). I started with it almost two decades ago, because it was then the only place in which to have a half-decent technical discussion of specific aviation accidents on-line; I quit because of the then-nascent, now-usual WWW-forum behaviour, and came back in mid-2007 for a couple of years.

A distinguished colleague of mine struck up a correspondence on the aerodynamics thread with a participant who appeared to him to combine naivety and insight. I told him “she” was a bot. The annual “Turing Test” competition was still running, but bots were already plentiful in WWW forums, fooling real humans – subject-matter experts – such as my colleague.

So the Turing Test had already been well fulfilled more than a decade ago, in that humans were regularly being fooled by bots. But there were ways to tell whether a correspondent was a bot. And you would never have asked one for engineering-scientific advice. Nobody was prepared to claim, on the basis of WWW-forum bots, that artificial intelligence had been achieved – for very good reason. The bots couldn’t do aero; they could only do conversations about aero, by exploiting features of conversation that discourse theorists had nailed down a decade or two before that.

I suspect there is a moral case for symbolic inference engines in autonomous road vehicles. Let me try to make it.

Doug Lenat at Stanford built a very simple inference-generation engine and fed it some axioms representing “real-world knowledge” (first mathematics, in the system AM for his PhD thesis in the late ’70s, then Eurisko). It generated some interesting inferences, and colleagues would recount how he would say, “look, it’s intelligent!” He was of course selecting. They would point out all the dumb things it did, and observe, “Doug, you haven’t done artificial intelligence, you’ve done artificial stupidity, and, just as with humans, that’s a lot easier.” Dr. Lenat was, and apparently still is, single-minded. He started the Cyc project some thirty-plus years ago. Cyc was an inference engine with what was intended to be a massive number of axioms, naive basic knowledge about the world. The idea was that if you encoded enough naive knowledge, represented by all those axioms, the engine would be able to figure out stuff the way humans do. The initial attempts didn’t work well, of course. It was a respectable attempt to flesh out a hypothesis about which many were sceptical, but the project had only a couple of staffers, and many suspected, including Dr. Lenat, that it would probably have needed a couple of hundred at least. That kind of research money was not available then if you weren’t a particle physicist. But Cyc appears to have those resources now. It has been running for three decades and appears to have some commercial success. I don’t know how it handles the issue of knowledge structuring, which came to the fore pretty rapidly with the ’80s attempts at “naive physics”. If you don’t get the structuring “right”, such efforts don’t work. I imagine that is the IP of Cycorp.
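The axioms-plus-inference idea can be illustrated with a toy forward-chaining engine. To be clear: the representation and rule below are my own invented example, and have nothing to do with Cyc’s actual machinery; this is just the general style of “derive new facts from axioms until nothing new follows”.

```python
# Toy forward-chaining inference: facts are (relation, a, b) triples,
# rules derive new facts from existing ones. All names here are
# hypothetical illustrations, not Cyc's representation.

def forward_chain(facts, rules):
    """Apply each rule repeatedly until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

# Some naive "real-world knowledge" as axioms.
axioms = {
    ("isa", "Tweety", "bird"),
    ("subclass", "bird", "animal"),
    ("subclass", "animal", "physical-object"),
}

def isa_inheritance(facts):
    # isa(x, c) & subclass(c, d)  =>  isa(x, d)
    return {("isa", x, d)
            for (r1, x, c) in facts if r1 == "isa"
            for (r2, c2, d) in facts if r2 == "subclass" and c2 == c}

closure = forward_chain(axioms, [isa_inheritance])
```

Run to the fixed point, the engine concludes that Tweety is an animal, and indeed a physical object. The hard part Cyc faces is not this loop, of course, but choosing and structuring the millions of axioms.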

I wonder if Cyc can “do” trolleyology? (Likely answer: nope. But might it not be worth a try?)

I think “naive physics”, or, as it came to be called, qualitative physics, was one of the most interesting intellectual exercises I have ever seen. People had thought, since Newton, “we can do physics”. Then some AI researcher said: here, I am holding this steel ball two feet above this table. On the table is a stoneware dish with water in it. What’s going to happen when I open my hand? The ball will drop on the dish, which will break, and the water will flow out. Since the table slopes a bit, the water will mostly (but not all) flow to THAT side, and flow-then-drip off the edge on to the floor, where it will pool. You know that. I know that. My eight-year-old knows that. So it is in some sense science, isn’t it? Derive it from Newton’s axioms. Of course, you can’t. It is, in some sense, a completely different subject matter from academic physics. I still think qualitative physics is a fascinating problem. I don’t know whether Cyc can do naive physics. A chapter by Ken Forbus, from a book of mine from the mid-’80s, is available on-line (it is also archived at SemanticScholar). MIT Press apparently still has the Faltings/Struss collection in print.
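To give a flavour of the style of reasoning, here is the ball-and-dish scenario as qualitative rules chained to a fixed point (what the qualitative-physics literature would call a crude “envisionment”). The propositions and rules are entirely my own invention for illustration, not any published formalism.

```python
# A minimal qualitative sketch of the dropped-ball scenario:
# propositions are qualitative facts, rules are (preconditions, consequence).
# Hypothetical illustration only -- not Forbus's or de Kleer's formalisms.

RULES = [
    ({"ball_released", "ball_above_dish"}, "ball_falls_onto_dish"),
    ({"ball_falls_onto_dish", "dish_brittle"}, "dish_breaks"),
    ({"dish_breaks", "dish_holds_water"}, "water_escapes"),
    ({"water_escapes", "table_slopes_right"}, "water_flows_mostly_right"),
    ({"water_flows_mostly_right"}, "water_drips_off_edge"),
    ({"water_drips_off_edge"}, "water_pools_on_floor"),
]

def envision(initial_state):
    """Chain qualitative rules until no further consequences follow."""
    state = set(initial_state)
    changed = True
    while changed:
        changed = False
        for preconditions, consequence in RULES:
            if preconditions <= state and consequence not in state:
                state.add(consequence)
                changed = True
    return state

outcome = envision({"ball_released", "ball_above_dish", "dish_brittle",
                    "dish_holds_water", "table_slopes_right"})
```

The predicted history – dish breaks, water escapes, flows to that side, drips, pools – falls out of the rules, with no differential equations in sight. The research problem, again, is not the chaining but finding the right qualitative vocabulary and rules: the “knowledge structuring” issue.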

It seems to me that symbolic qualitative physics would be very useful for AVs. You could have a designed ontology for road travel (I did bits of one in a study of decision problems in same-direction traffic-conflict resolution which I undertook in early 2012 – I am primarily a cyclist, and so-called “vulnerable road users” like myself tend to be interested in such things from a survival point of view; I think an ontology is doable). Along with that, a (huge) number of rules for what to do in what situations, designed carefully, discussed in conferences and public meetings, assessed by regulators, and so on. You can do all your pattern recognition and sensor fusion with NNs if you like. The action synthesis is then done in a dedicated flash-inference engine. You can have confidence in it because all the rules have been formally assessed (and can be modified if need be).
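The action-synthesis part of that architecture can be sketched as an explicit, prioritised rule base over a situation description. Every predicate, rule, and action name below is hypothetical – a real ontology and rule base would be exactly the product of the careful public design and regulatory assessment described above – but the point is that each rule is a discrete, inspectable object that an assessor can read and a regulator can require changed.

```python
# Sketch of rule-based action synthesis for an AV: each rule is an
# explicit, reviewable (name, applicability-test, action) triple.
# All predicates and actions are invented for illustration.

from dataclasses import dataclass

@dataclass
class Situation:
    obstacle_ahead: bool
    obstacle_is_vulnerable_road_user: bool
    clear_to_left: bool
    distance_m: float
    speed_mps: float

def stopping_distance_m(speed_mps, decel_mps2=6.0):
    # Crude constant-deceleration estimate: v^2 / (2a).
    return speed_mps ** 2 / (2 * decel_mps2)

RULES = [
    # Listed in priority order; the first applicable rule fires.
    ("emergency_brake",
     lambda s: s.obstacle_ahead
               and s.distance_m <= stopping_distance_m(s.speed_mps),
     "BRAKE_MAX"),
    ("yield_to_vru",
     lambda s: s.obstacle_ahead and s.obstacle_is_vulnerable_road_user,
     "SLOW_AND_YIELD"),
    ("overtake",
     lambda s: s.obstacle_ahead and s.clear_to_left,
     "CHANGE_LANE_LEFT"),
    ("cruise", lambda s: True, "MAINTAIN"),
]

def synthesise_action(situation):
    """Return (rule_name, action): the decision is auditable by name."""
    for name, applies, action in RULES:
        if applies(situation):
            return name, action
    raise RuntimeError("rule base incomplete for this situation")
```

A pedestrian wheeling a bicycle across the road 40 m ahead, say, fires the yield rule; the same pedestrian 5 m ahead at speed fires the emergency-brake rule – and in either case the log tells you, by name, exactly which rule produced the action.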

Except that this idea is apparently a non-starter. The idea with traction nowadays seems to be that if you throw enough resources into DLNNs, you can get them to do anything (except negation, of course. NNs can’t negate). You just need enough balls, stones, breakable dishes, tables, and water, and a DLNN will come up with all that. Or, in the current context, enough pedestrians wheeling bicycles across roads at night. And that is a problem.

There are plenty of hypotheses about what went “wrong” in Tempe. It could have been sensing, or sensor fusion. Or it could have been that the DLNN “inferencing” didn’t deliver when needed. I suspect that last possibility is going to be hard to figure out. If that part had been a symbolic inference engine, you would likely have had a good chance of figuring it out pretty quickly, and then of modifying the inference engine to the satisfaction of regulators. Always assuming it isn’t impenetrably complex, of course, and we do know there are cases of that elsewhere in road-vehicle automation.
