I don’t have time to write any blog posts or anything else for that matter at the moment. But it seemed to me that an e-mail I wrote today might be converted to a post. Herewith.
Steven Tockey pointed to an article in the Huffington Post about delivering tacos by robotic helicopter. Apparently there is a three-person Silicon Valley company called Tacocopter (why not Tacopter?) which wants to use small robotic helicopters to deliver tacos, ordered from your smartphone, before they get cold. Huffington Post spoke with Star Simpson, one of the founders. She is reported to have had something to do with the MIT Media Lab Personal Robots Group, but she doesn’t turn up on the list of people, including alumni, on their WWW site.
The FAA says, of course, “you can’t fly drones in civil airspace”. Ms. Simpson’s characterisation of the “obstacles” to getting started is telling:
It’s really the legal obstacles in the U.S. that seem insurmountable at this time.
To which the journalist comments:
So, there you have it: The U.S. government is single-handedly preventing you from ordering a taco and having it delivered to you by a totally sweet pilot-less helicopter. So get out your pitchforks, sign those petitions, start calling your local congressmen, and let them know: We want our tacos hurled at us by giant buzzing robotic helicopters, and we want them now.
… which does rather give me the impression that he doesn’t take it all quite as seriously as the founder.
Let’s see. Is it just “legal obstacles”? Well, as the journalist points out, there are plenty of real obstacles in the way of flying helicopters in urban areas, so one should surely also think about safety and liability insurance.
When it comes to buying liability insurance, Ms. Simpson might well find that ordinary companies won’t sell it and she has to ask Lloyd’s. The price of insurance is bound to include the near-certainty that there will be at least one accident. What is one accident likely to cost? The cost of the aircraft (almost all that crash seriously are written off), plus the cost of repairs to or replacement of any infrastructure and personal goods damaged along with it, not to speak of the cost of people possibly being hurt. So take that, add the insurer’s administrative costs, as well as a bit of profit on top. That’s just for one accident. How many might there be?

The first accident might well shut the company down, because that is what happens to very small airlines which have an accident. (Airline, you say? She will in fact be running a cargo airline. I am not sure how good an idea it is to let the FAA know in advance that you think the Federal Aviation Regulations stand in the way of your business. The FAA is likely to point out that the FARs do a very good job of assuring safety in what used to be the very risky business of flying – and they are right!) So Tacocopter would either have to self-insure, if they are rich enough, or have to interest someone in the idea who is rich enough to self-insure, and most such people are very interested in the business model, for that is how they got rich in the first place.
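To make the premium arithmetic concrete, here is a minimal sketch of expected-cost pricing. All the figures and parameter names are hypothetical placeholders of my own, not actuarial data:

```python
# A toy expected-cost premium model: expected yearly claims, grossed up
# for the insurer's administrative load and profit. All numbers are
# invented for illustration.

def annual_premium(accidents_per_year: float,
                   aircraft_cost: float,
                   property_damage: float,
                   injury_cost: float,
                   admin_load: float = 0.25,
                   profit_margin: float = 0.10) -> float:
    cost_per_accident = aircraft_cost + property_damage + injury_cost
    expected_claims = accidents_per_year * cost_per_accident
    return expected_claims * (1 + admin_load + profit_margin)

# Say one expected write-off per year of a $5,000 drone, with some
# property damage and a chance of injury claims:
print(annual_premium(1.0, 5_000, 10_000, 50_000))  # 87750.0
```

A premium of that order has to be recovered across every taco delivered, which is where the business model comes in.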
So let’s look at the business model. How many people are going to be willing to pay more than double the usual price for a taco delivered this way? (I am thinking here that the cost of a new private airplane in the US is still over 50% liability insurance for the manufacturer.) It might indeed be a nice party trick. Then again, those wealthy enough to pay double the price for food might well care a lot about the quality of that food, so turning up in a van with a cooker in the back and preparing tacos on the spot might generate far more business, and is obviously an easier way to ensure the freshness of the delivered product. As well as not requiring the FAA to change the Federal Aviation Regulations.
Ms. Simpson has surely also taken the five minutes to think about such things. It would be nice to have read her answers.
But there is a more general phenomenon here worth remarking on. There appears to me to be a blind spot amongst mobile-robot enthusiasts and researchers concerning safety, and it seems to be on display in this article too.
This issue is “close to home” in the following sense. Until the end of October I am running a research group in Interactive Safety at CITEC, the “Cognitive Interaction Technology” research lab in Bielefeld. As Germans like to express it, our interests in safety have not “resonated” in CITEC. People are building small mobile robots of various sorts, even having them run around in public areas and interacting with ordinary people. I have had many conversations about safety: what the issues are, why it is important, and what you can do about it (Hazard Analysis, Hazard Mitigation or Avoidance Measures, Residual Risk Analysis; it’s helpful to have a set of principles which help you avoid the most well-known major hazards). Indeed, I gave a keynote talk on exactly this topic at the IET System Safety conference in London in 2009, along with a paper. But I haven’t met anyone at CITEC who has read the paper or who knows what is in it (it’s only about 3000 words long, so length can’t be a factor). When I have talked to CITEC people about safety, the reaction is typically “that’s nice. How interesting, I hadn’t thought about that”, and they turn back to what they were doing before.
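For readers unfamiliar with what a hazard analysis actually produces, here is a minimal sketch of the kind of record it yields. The field names and example content are my own invention for illustration; the real thing is considerably more structured:

```python
# A toy hazard-log entry, illustrating the Hazard Analysis ->
# Mitigation -> Residual Risk chain mentioned above. Fields and
# contents are illustrative only.
from dataclasses import dataclass

@dataclass
class HazardRecord:
    hazard: str          # what can go wrong
    cause: str           # how it can arise
    severity: str        # worst credible outcome
    mitigation: str      # measure to avoid or reduce the hazard
    residual_risk: str   # what remains after the mitigation

log = [
    HazardRecord(
        hazard="mobile robot collides with a member of the public",
        cause="robot moves while a person is inside its path",
        severity="injury to a bystander",
        mitigation="define a space of motion; interlock entry to it",
        residual_risk="interlock failure or circumvention",
    ),
]
```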
I remember a long conversation I had in November 2010 at an evening reception with a CITEC researcher who was part of a team building a mobile robot which interacted with people. They were aiming to exhibit it in a local gallery as a robotic guide, alongside the human guides. I suggested to her that the insurance company would likely want assurance against accidents, that the way to provide it is Hazard Analysis etc. (as above), and that we were expert at that and could help. I said that the classic mitigation for robots with moving parts (indeed, one defined in a draft international standard then in review) was defining a space of motion and installing interlocks to prevent people entering that space when the robot was in operation. I suggested that was likely to be what would happen if one didn’t think about the issues more creatively, and that it was probably not what she wanted. Indeed not, she said, it’s not really what we’d like.
I didn’t hear from her again, despite trying e-mail contact. (Remember, this is in the very institute in which I work! That “resonance” thing.) But a few months ago there was a picture in the local newspaper of the robot doing its guidance job – in an area roped off in a corner of the gallery, well away from the exhibits and with gallery personnel on hand to ensure people stayed on the other side of the rope. The classic measure, as I predicted.
A robotic helicopter has fast-moving parts which are open to the atmosphere – if it didn’t, it wouldn’t fly. Quite apart from damage caused by physical collision with the drone body, these fast-moving rotating parts are going to be operating in environments which have not been built with that in mind. Children are very curious and like to touch things, for example, so you have to keep it away from them. If operation were in an industrial plant, interlocks would be required to prevent any people from approaching the device while it was operating. And that is with presumed-trained, professional personnel; you don’t have three-year-old kids running here and there on the factory floor. That is the current legal situation governing human interaction with such devices, and it’s not just the FAA. In the US it’s OSHA and 150 years of experience with dangerous workplaces, such as 19th-century and early-20th-century railroads. Are people just going to throw all that out so that some company can deliver tacos? I doubt it. It took decades to get that level of protection, and I suspect that a lot of it was driven ultimately by consideration of the costs of accidents. So you probably can’t let the Tacocopter land (the journalist’s idea about getting tacos thrown at you is not that far-fetched!). So what about when it has to land, for some reason – a fault, for example? Which aircraft doesn’t have those occasionally? How do you ensure it stays well away from kids during such an event? However you do it, that subsystem had better not be the faulty one!
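The interlock pattern is simple enough to sketch. Here is a minimal sketch of its fail-safe logic; the class and function names are my own invention, and of course real interlocks are implemented in certified hardware, not application code:

```python
# A toy fail-safe interlock: the machine may run only while every
# guard condition positively holds; any doubt forces the safe state.
class Interlock:
    def __init__(self):
        self.zone_clear = False   # exclusion zone confirmed empty
        self.fault = False        # any detected fault

    def may_operate(self) -> bool:
        # Permission requires positive evidence that the zone is clear
        # AND the absence of faults; "unknown" defaults to "no".
        return self.zone_clear and not self.fault

def spin_rotors():          # hypothetical actuator command
    print("rotors running")

def shut_down_safely():     # hypothetical safe-state command
    print("safe shutdown")

def control_step(interlock: Interlock):
    if interlock.may_operate():
        spin_rotors()
    else:
        shut_down_safely()
```

Note the asymmetry: the dangerous action requires positive permission, while the safe state is the default. That is exactly what the roped-off gallery corner implements, with rope and personnel instead of sensors.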
Here is an article from this week’s Economist which does say something about liability. Someone at MIT is trying to devise algorithms to interpret hand-signalled movement instructions, as used on aircraft carriers, reliably and accurately. I used to know one of the authors, Randall Davis, from conferences. He is a well-known and well-respected AI guru. The final question that occurs to the journalist is who would be prepared to trust “the fate of a multi-million-dollar drone to such a system” (it is only about 75% reliable at the moment).
But, says the Economist:
But it is a good start. If Mr Song can push the accuracy up to that displayed by a human pilot, then the task of controlling activity on deck should become a lot easier.
Another point of view is that experience shows 75% is the easy part. When you’re at 90% and you want to get to 95%, that’s when the hard work starts. And going from 95% to 99% may well take orders of magnitude more effort. For example, staying with AI, raw Circumscription is not the way to handle Blocks-World planning; this has been agreed for a couple of decades. But it does handle 90% or more of Blocks-World planning very effectively, as do many of the other methods from that era that “don’t work”.
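The arithmetic behind that observation is worth making explicit: what has to shrink is the residual error rate, and it shrinks multiplicatively while the headline “accuracy” creeps up linearly. A few lines make the point:

```python
# How the residual error rate scales with headline "accuracy".
for accuracy in (0.75, 0.90, 0.95, 0.99, 0.999):
    error = 1 - accuracy
    print(f"{accuracy:.1%} accurate -> 1 failure in {1 / error:,.0f} attempts")
# 75% accuracy tolerates 1 failure in 4 attempts; 99.9% demands
# 1 in 1,000 -- a 250-fold reduction in errors, not a one-third
# improvement in accuracy.
```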
But that’s software engineering and “Symbolic AI”. Curiously, people who work in “neural informatics” (as it’s called in Germany) and approximation techniques often seem to have a different view: when they can do 75%, or 80%, or 85%, they are “nearly there”. How can such different views of success prevail in one and the same subject, informatics?