Thursday, May 10, 2012

Nevada Issues License for Autonomous Car

I occasionally give talks on the future of robotics, and since about 2000 I've been predicting that the technology required for cars to drive themselves would be feasible around 2020 or so.  My definition of feasible here is robust enough, safe enough, and inexpensive enough that a manufacturer could sell autonomous vehicles to willing buyers.

However, I always added the following caveat: while the technology would be there, it might take decades before the public accepts autonomous vehicles.  This acceptance would include social tolerance of the whole concept of driverless cars, liability laws, and vehicle licensing.

In some sense social tolerance might be the biggest issue, as it is an important driver of the other issues.  As an example of this sort of resistance: while airplanes could already fly themselves, virtually nobody (including me) is willing to get on an airplane with no pilot, and most of us are uncomfortable (to say the least) with the concept of large, potentially explosive aircraft cruising around above our heads with no human guidance or backup.

On the other hand, cars are not aircraft, the cost of a pilot or two or three is relatively small compared to the overall cost of operating an airline, and autonomous cars seem more like robots, and lots of people think that robots are cool. Furthermore, there are several very difficult tradeoffs regarding driving that society faces and is increasingly being forced to confront.

For example, the fatal accident rate per mile driven goes way up for drivers over 75, and our society is aging rapidly.  This leaves the unfortunate choice of either limiting the mobility of many older drivers by taking away their licenses, or allowing them to continue to drive and risking that they kill themselves and others.  The autonomous car provides a third, and likely preferable, option.

Liability laws are a tough issue as well.  I wrote a humorous post about an early accident involving an autonomous research vehicle, but more seriously, if a manufacturer, with deep pockets (until bankrupted), is liable for every accident, that's a huge disincentive to produce robotic cars.  While I used to think that this would be the biggest stumbling block, now I'm not so sure.  With Toyota being scapegoated and incurring massive costs when some drivers couldn't remember which pedal was the gas and which the brake, and others committing fraud to ride on that wave, perhaps the impact, relative to the current reality, won't be as bad as I thought.

So that leaves licensing.  Governments are generally slow to respond, so I figured it would take forever for the government innovations required for licensing driverless cars.  But I was wrong.  Nevada has just issued the world's first license for an autonomous car!  And California might do the same soon!

So at this point, I'm definitely encouraged, and I think there could be some sort of licensed autonomous vehicle available for purchase sometime early next decade.

9 comments:

erp said...

I've read that when horseless carriages first started zooming around at 20 MPH, experts warned that the human body couldn't withstand that kind of speed.

That sounds silly now, and probably in the future it'll seem silly to have worried about horseless and driverless carriages zooming around without a human hand to guide them, but right now, it sounds unsettling.

BTW - I'm one of those over 75 drivers and it annoys me no end that here in Florida there's a drive (pun intended) on to limit our driving when a casual perusal of the daily paper shows that almost all accidents are caused by teenagers and drunks.

All incompetent and impaired drivers should be off the road no matter their ages.

Hey Skipper said...

In some sense social tolerance might be the biggest issue, as it is an important driver of the other issues. As an example of this sort of resistance: while airplanes could already fly themselves ...

Not really. There are a couple fundamental issues that get hidden by the seeming success of a very few examples of self-piloted vehicles.

The first is that flying an airplane is at least three orders of magnitude more difficult than driving a car.

The second is derived from the first: flying is something that can be taught, but not comprehensively described. Autonomous (as opposed to remotely piloted) airplanes will not happen because there are too many situations that cannot be programmed: things will happen that will be outside the solved problem space, which will mean failure.

Until AI is real, which, at the current rate means sometime after never, the difficulty factor will swamp attempts at autonomous flight except in simple situations where hull loss is relatively inconsequential.

erp said...

Skipper, funny you should mention AI. I hesitated to do so, not wishing to reveal totally and completely my out-of-it-ness, but a friend was deeply into AI about 20-plus years ago. She worked on the MIT program, probably at around the time Bret was there, and basically said then that it wasn't working and went on to other things.

I haven't heard or seen the term mentioned anywhere in a long time, so I guess she and you are right.

Of course, some genius is probably working on something that will change all that. Can't wait to see what it is.

Bret said...

To me, the word "intelligence" has a dash of the inexplicable and a dash of magic. That which we considered to be artificial intelligence 30 years ago is all around us now, yet we consider it oh-so-ho-hum-boring.

For example, my wife talks to her phone, asks it to schedule an appointment with the dentist for the kids on Tuesday, and the appointment gets added to the calendar. They're not called "smart phones" for nothing! We would have been awed by that 30 years ago!

It's certainly artificial but it's not inexplicable and therefore the magic is gone so it's not "intelligent". It's just massive predictive pattern matching.

But that's probably all intelligence is.

erp said...

I don't know what's with Blogger and all these double comments. Sorry.

Bret, you're right, AI was closer to 30 years ago.

As amazing as technology is today, it's nothing compared to what we were told at the time about AI. It wasn't only going to do things we already do, like update our calendars, more conveniently. It was going to be a bigger and better brain which would relieve us of the mundane and lift us into the sublime.

Very similar to what the left wants to do. We can relax and live in virtual worlds while the brights and compassionates make the real world safe for our weak atrophied minds and bodies.

O Brave New World indeed.

Hey Skipper said...

To me, the word "intelligence" has a dash of the inexplicable and a dash of magic. That which we considered to be artificial intelligence 30 years ago is all around us now, yet we consider it oh-so-ho-hum-boring.

I'd say more than a couple dashes of both, because we have absolutely no idea how human intelligence works.

AI's promises have been extraordinarily slow in coming -- our "smart" phones only appear that way because they have access to essentially everything, and because speech recognition has enabled what are essentially vocal drop-down menus: Siri does only what it is told to do, to the extent it can understand what it is told.

I suspect human intelligence, however it works, is absolutely not binary, and so long as computers are, they won't be.

Remember the Qantas A380 that suffered an uncontained engine failure? Since AI still can't outsmart a grasshopper, I suspect there is some time to go before self-piloted airplanes.

Bret said...

One of the examples given when I studied AI 30 years ago was that if a machine could beat humans at chess, it would be exhibiting AI. The best chess player in the world is now a computer, yet we don't consider it to be intelligent. It just performs gazillions of computations using a somewhat clever set of algorithms and heuristics.

But the more I do in robotics, and the more I observe and think about intelligence and computers, the more I become convinced that's all intelligence is: massive processing using simple algorithms.
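
(To make "a somewhat clever set of algorithms and heuristics" concrete, here's a rough sketch of the kind of search a chess engine uses: negamax with alpha-beta pruning. Chess itself is far too big for a blog comment, so the Python sketch below plays a hypothetical toy game instead; the Pile class and its methods are made up purely for illustration, and a real engine differs mainly in having a smarter evaluation heuristic and doing enormously more computation.)

from dataclasses import dataclass

# Toy game standing in for chess: take 1-3 stones from a pile; whoever
# takes the last stone wins. The search below is the same shape a chess
# engine uses, just over a vastly smaller game.

@dataclass(frozen=True)
class Pile:
    stones: int

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def apply(self, move):
        return Pile(self.stones - move)

    def is_terminal(self):
        return self.stones == 0

    def evaluate(self):
        # Score from the perspective of the side to move: if no stones
        # remain, the previous player just took the last one, so we lost.
        return -1 if self.is_terminal() else 0

def negamax(game, depth, alpha=float("-inf"), beta=float("inf")):
    """Best achievable score for the side to move, with alpha-beta pruning."""
    if depth == 0 or game.is_terminal():
        return game.evaluate()
    best = float("-inf")
    for move in game.legal_moves():
        score = -negamax(game.apply(move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # prune: the opponent won't allow this line anyway
            break
    return best

# A pile of 4 is a loss for the side to move (take 1-3, the opponent takes the rest).
print(negamax(Pile(4), depth=10))   # -1
print(negamax(Pile(5), depth=10))   # +1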

We're not intelligent either. We're just massively parallel computers with moderately advanced sensors who're just not intelligent enough to know that we're not intelligent.


(This comment is likely to be expanded into a post in the not too distant future.)

erp said...

I look forward to that post because we aren't "just massively parallel computers with moderately advanced sensors" and only the self aware among us know we aren't intelligent.

We do, however, have something no machine I can imagine will ever have, and that is intuition, imagination, sensitivity, and emotions, which allow us to dream of and create what hadn't existed before.

Chess playing isn't a very good example of AI because even though the number of possible moves is very large, it is calculable, and if the programmer is very clever (far different from intelligent) and the machine very fast, it can beat mere mortals, but so what? Can the computer "accidentally" drop its coffee on the board and say oops, thus ending the game? I was present at a championship bridge match when a variation of that happened, causing as near a riot as bridge players can manage.