As autonomous cars come closer, ethical issues emerge

Google unveiled the second generation of its self-driving car Tuesday night, this time taking a step toward cutting out the middleman and building the car itself.

The announcement, made on the opening night of the Code Conference hosted by Re/code editors Walt Mossberg and Kara Swisher, featured a car with no steering wheel that looks like a Volkswagen Beetle, with the added touch of a face on the front that seems to personify the car.

Google's car tops out at 25 mph at this point and seems more a proof of concept than a product. Further along is Google's effort to put the technology inside existing car models.

In a couple of videos featuring Lexus SUVs, the company demonstrated that its system now makes appropriate lane-change decisions around road construction and can detect a bicyclist's hand signal.

I preface this next observation by acknowledging that my lack of real-world driving experience means I could be totally off base here. One odd portion of the safety video showed the car stopping at a railroad crossing. The crossing arm was not down, but the car stopped anyway; the narrator explained that the car waits for other traffic to clear the tracks. The crossing was clearly wide enough for at least two cars, because you could see them passing on the left. If cars can pass you, traffic might not clear the tracks for quite a while. If the aim is to prevent a collision with a train, it would be better, to my mind, to rely on seeing, or being signaled, that the arm is down.

The cars still have a way to go before the public gets its hands on them. Google concedes that at this point the car does not handle rain or snow well. That is no easy challenge.

I covered the Intelligent Ground Vehicle Competition each June for three years at Oakland University. The international competition, partially funded by the Department of Defense and some local military contractors, had teams design and build robots that could handle a variety of tasks. Among other things, each robot had to learn and navigate its way around an obstacle course autonomously. Approaches varied, but they commonly combined cameras, lasers and GPS.

I spoke with OU's team as they prepared their entry, "Replicant," for a run on the final day of competition. They had been having one of their most successful showings so far, but they were concerned about rainy conditions. The team members explained to me that the laser guidance system they were using could reflect off raindrops, causing navigation errors.

In addition to the remaining technical challenges, there are legal and ethical concerns to be dealt with. If everyone had an autonomous car, the roads would undoubtedly be safer, since human error would be out of the equation. The reality, however, is messier.

Not everyone is going to rush out and buy one of these things the instant they are legal, and the resulting mix of autonomous and human-driven cars creates its own set of variables. Setting aside the insurance question of who is responsible in the event of a collision, there are basic philosophical questions the programmers of the car's software will have to answer. Adrianne Jeffries of The Verge wrote a post on efforts to teach robots ethical decision-making skills.

Suppose that a human driver pulls out into oncoming traffic. Our hypothetical autonomous car does not have time to stop, but it can steer in a way that controls the direction of the impact. If the Google car has one passenger and the car it is about to collide with carries five, should it sacrifice its passenger to save the family, or protect its owner at all costs?

If the programming team opted for consequentialist ethics, the right move would be to sacrifice the passenger and save the family, since that does less harm. However, if we were in control of the car contemplating our impending end, how many of us can say what we might do in that moment? I openly wonder how many people would sign a license agreement (there would have to be one) stating that your car could sacrifice you for the greater good. That's a tough provision to swallow, no matter how rational.
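To make the consequentialist rule concrete, here is a toy sketch, not any real system's logic, of how such a policy might pick among crash maneuvers: simply choose the option with the fewest expected fatalities, giving no special weight to the car's owner. The maneuver names and numbers are hypothetical, drawn only from the scenario above.

```python
# Toy consequentialist crash-decision sketch (hypothetical, for illustration).
# Each option pairs a maneuver name with its expected fatalities.

def choose_maneuver(options):
    """Return the maneuver name with the fewest expected fatalities."""
    return min(options, key=lambda opt: opt[1])[0]

# Scenario from the text: holding course hits the car carrying five;
# swerving sacrifices the lone passenger.
options = [
    ("hold_course", 5),  # collide with the oncoming car
    ("swerve", 1),       # sacrifice the single occupant
]
print(choose_maneuver(options))  # -> swerve
```

Under this rule the car always swerves, which is exactly the provision a buyer would have to agree to. A policy that protected the owner at all costs would instead weight the passenger's life above the others, and the code would pick the other branch.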

There are hurdles, but it's certainly interesting to watch this unfold.

Source: The Verge