Wanted to get the CD community’s feedback on this.
In the given situation, there are two choices: crash into car A or crash into car B.
There has to be an optimal decision, given average masses, safety ratings, casualty rates, damage rates, etc, as to whether crashing into A or B would be a net safer option.
It’s not unethical, in my opinion, for the self-driving car to make that decision correctly just because many humans would not.
The question is posed as “which to crash into”, but I think it would make more sense to think of it instead as “how strongly to avoid colliding with” each object.
Depends on how you’re defining your moral utility function, but I think it’s pretty clear that minimizing harm can’t really be construed as “unethical” by any reasonable standards.
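For what it’s worth, here’s a minimal sketch of what “minimizing harm” might look like as a utility function, reframed per the comment above as per-obstacle avoidance weights rather than a choice of target. Everything in it (the Obstacle fields, the probabilities, the numbers) is a made-up illustration, not anything a real vehicle actually ships:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obstacle:
    occupants: int      # estimated people at risk in/around the object
    injury_prob: float  # estimated probability of serious injury on impact

def expected_harm(collision_probs: dict[Obstacle, float]) -> float:
    """Expected serious injuries for one candidate maneuver."""
    return sum(p * obs.occupants * obs.injury_prob
               for obs, p in collision_probs.items())

# Two candidate maneuvers: swerve toward the sturdier car vs. the lighter one.
volvo = Obstacle(occupants=2, injury_prob=0.1)
cooper = Obstacle(occupants=1, injury_prob=0.4)

swerve_left = {volvo: 0.9, cooper: 0.05}   # mostly risks the Volvo
swerve_right = {volvo: 0.05, cooper: 0.9}  # mostly risks the Cooper

best = min([swerve_left, swerve_right], key=expected_harm)
```

The point is that “how strongly to avoid” each object falls out of the weights; nothing is ever “targeted”.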
If the car determines that it MUST collide with one of the two, then I think the important decision is not which impact will cause the most damage, but which will hurt the most people. Quantitative mechanical/monetary damage pales in comparison to human life, and determining which impact will cause more harm to people is nearly impossible.
For instance, what if the more structurally sound car (the Volvo) happens to be packed to the gills with newborns? I think the smarter decision in that case is to hit the Cooper, but the car won’t be able to know the occupants of the other vehicle in this scenario. So the result will be that it won’t attempt to choose which car to hit; it will simply do its best to miss both.
A much less interesting outcome than the article predicts, but I think a much more likely one as well.
I also wonder how “down-stream impacts” might affect the algorithm. For example, what if hitting the SUV sends it off the side of a cliff, or into oncoming traffic? What if the odds of creating a multi-car pileup are higher with one side versus the other? I think any algorithm that is designed to minimize damage and loss of life is going to be the best ethical choice we can make.
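If you wanted to model that, a hedged sketch: treat each downstream event as a (probability, harm) pair and add its expected harm to the direct impact’s. All figures below are invented for illustration:

```python
# Hypothetical extension of a harm score to "downstream impacts": each option
# can trigger secondary events (cliff, oncoming traffic, pileup) with their
# own probability and expected harm. Every number here is made up.

def harm_with_downstream(direct_harm: float,
                         downstream: list[tuple[float, float]]) -> float:
    """direct_harm: expected harm of the immediate impact.
    downstream: (probability, expected_harm) pairs for secondary events."""
    return direct_harm + sum(p * h for p, h in downstream)

# Hitting the SUV: low direct harm, but a 5% chance it goes off the cliff.
hit_suv = harm_with_downstream(0.3, [(0.05, 4.0)])   # 0.5
# Hitting the other car: higher direct harm, no secondary risk.
hit_other = harm_with_downstream(0.8, [])            # 0.8
```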
Take the Three Laws of Robotics and apply them to the cars.
Just so long as they don’t think of what’s best for humanity…
The resultant decision - an increased propensity for crashing into the Volvo - could ultimately result in increased costs to insure the Volvo, so it does become much more than just harm avoidance. Also, if I am the person who chooses to own the Volvo, and more autonomous vehicles crash into me to avoid crashing into a less robust vehicle, then MY safety is more at risk now than it was previously.
This is a very interesting ethical situation.
As someone currently working on a PhD thesis related to this very topic (control theory of autonomous passenger cars), I can say this is not a clear-cut question, nor does it reflect the real problems this area of study faces.
Instead, it is posed to create controversy and doesn’t portray the study of autonomous passenger vehicles accurately.
I assure you, there are no clear-cut algorithms being deployed into cars that deliberately target objects in the manner the article suggests, at least not in the dozen or so vehicles I have had the privilege to study, including the Google Car. Furthermore, a scenario of choosing to crash into A or B is hardly drawn from reality.
In reality there are a lot more objects to crash into, and not all of them are cars. The answer to the choice is highly dependent on how you arrive at that choice in the first place, and on where in the world you currently are. What is left out of the topic is why the vehicle determined it will crash. Is it due to a vehicle error (speed, rate of turn, loss of command/control, etc.) or to environmental factors the car cannot control (road condition, weather, unexpected object in the path, etc.)? This is important for determining whether the vehicle can even control the corrective action downstream. Are you on a highway? A local road? A strip mall full of pedestrians?

Most importantly, the choice you can employ depends on which sensors on your car you trust at that moment. Can you even trust that you detected one car as an SUV or a Volvo, versus a large tree? Can you calculate the speed and force on impact? Even if the computer has enough time to determine a choice, it is not the reaction time of the computer that matters; it is the reaction time of the mechanical car that determines whether you can even pull off the maneuver to hit one object versus the next. And if you had that much time, why isn’t there a third or fourth option?
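To make two of those points concrete, a toy sketch (thresholds and fields invented, not from any real system):

```python
# Toy sketch, with invented thresholds and fields:
# (1) don't plan around a classification you can't trust, and
# (2) it's the mechanical reaction time that decides if a maneuver exists.

CLASSIFICATION_CONFIDENCE_FLOOR = 0.9  # arbitrary placeholder

def usable_label(detection: dict) -> str:
    """detection: e.g. {'label': 'SUV', 'confidence': 0.62}"""
    if detection["confidence"] >= CLASSIFICATION_CONFIDENCE_FLOOR:
        return detection["label"]
    return "unknown_obstacle"  # could be a Volvo, could be a large tree

def maneuver_feasible(time_to_impact_s: float,
                      actuator_latency_s: float,
                      maneuver_duration_s: float) -> bool:
    # The computer may decide in milliseconds, but steering and braking
    # hardware need real time to act; without it, the "choice" is moot.
    return time_to_impact_s > actuator_latency_s + maneuver_duration_s
```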
There are too many unknowns to answer this question, and asking people to answer this inaccurate question just perpetuates fallacies and misunderstandings around this technology. The real question is how to avoid a scenario like this in the first place. It might be safer to crash into a barrier or guard rail if on the highway, or into a tree or parked car if on a local road. At a minimum, one of the ways we are trying to avoid this scenario altogether is the autonomous vehicle network. Allowing vehicles to have real-time communication with each other automatically is a huge key piece of technology that will help avoid vehicle-to-vehicle crashes. Imagine if your vehicle goes rogue: you can signal that to all the vehicles around you, and they will track your movements to get out of the way (using the same technology you used to determine that a Volvo versus some other car was around you to begin with). This is real, and the DOT just approved this type of communication in the US: http://www.nhtsa.gov/About+NHTSA/Press+Releases/2014/USDOT+to+Move+Forward+with+Vehicle-to-Vehicle+Communication+Technology+for+Light+Vehicles
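Just to illustrate the idea (this is NOT the DSRC/V2V message format from the NHTSA announcement, just a stand-in using a plain network broadcast):

```python
# Illustration only: a "my vehicle has gone rogue" alert as a local network
# broadcast. Message fields and port are invented; a real V2V system would
# use the dedicated short-range radio protocols referenced above.

import json
import socket
import time

def broadcast_rogue_alert(vehicle_id: str, lat: float, lon: float,
                          heading_deg: float, speed_mps: float,
                          port: int = 37020) -> None:
    msg = json.dumps({
        "type": "ROGUE_VEHICLE",
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "heading_deg": heading_deg,
        "speed_mps": speed_mps,
        "timestamp": time.time(),
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", port))
```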
The fact remains: last year alone, approximately 35,000 people died in the US in vehicular accidents, based on US DOT reports. That works out to roughly one person every 15 minutes in the United States dying in PREVENTABLE vehicle accidents, and roughly one person every 25 seconds worldwide. These accidents come down to inattentive driving, speeding, texting while driving, DUI, etc. Computers drive better than humans. We need this technology; support this technology, don’t fear it. Reducing deaths due to crashes is the goal; that is the purpose of this technology. Everyone I have ever had the pleasure to work with takes this part of the task very seriously.
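The per-death interval follows directly from the annual totals; a quick check (the worldwide figure is a rough estimate on the order of the WHO’s numbers):

```python
# Sanity check on the rates above, straight from the annual totals.
SECONDS_PER_YEAR = 365 * 24 * 3600          # 31,536,000

us_deaths = 35_000        # US DOT figure cited above
world_deaths = 1_250_000  # rough worldwide estimate, WHO-scale

print(SECONDS_PER_YEAR / us_deaths / 60)   # ~15.0 -> one every ~15 minutes (US)
print(SECONDS_PER_YEAR / world_deaths)     # ~25.2 -> one every ~25 seconds (world)
```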
People are very willing to get into planes that fly themselves, or trains that drive themselves, because they are used to not controlling those vehicles, yet they are afraid of a car that can do the same. Why? We need this technology!!!
-Kevin
As Kevin pointed out, the kind of decision described in this thread would probably not occur in real life. However, if it did, engineers would have some previous experience to draw on. Life-and-death decisions are made by some types of engineers, where personal safety and projected fatalities are traded off against cost, budgets, and schedules. It may not sound ethical, but I believe it does happen on a regular basis.
A good example is when a highway is being designed. A large number of parameters need to be selected, including the number of lanes, the location of entrances and exits, the length of acceleration lanes, the width of breakdown lanes, the radius of curves, etc. A lot of data has been collected on these things over the years, and the civil engineers building the highway would be able to calculate fairly precisely what effect changes such as increasing or decreasing the length of the acceleration lanes by 100 feet would have on the number of fatalities over a given period of time. It would be great to have the budget and schedule to build the safest roads possible. Unfortunately, in the real world engineers may be forced, due to cost limitations and other factors, to select less optimal implementations. I am pleased that I work in a field where this type of tradeoff is not required, but I believe that some engineers do face these types of problems.
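A hypothetical sketch of that calculation, with every number invented purely for illustration:

```python
# Hypothetical illustration of the highway-design tradeoff described above.
# All figures are made up; real designs draw on decades of crash data.

def fatalities_avoided(extra_feet: float,
                       avoided_per_100ft_per_decade: float) -> float:
    return (extra_feet / 100.0) * avoided_per_100ft_per_decade

def cost(extra_feet: float, dollars_per_foot: float) -> float:
    return extra_feet * dollars_per_foot

# Lengthening an acceleration lane by 100 ft at a made-up $500/ft:
lives = fatalities_avoided(100, 0.02)   # 0.02 fewer fatalities per decade
spend = cost(100, 500)                  # $50,000
# The engineer's grim question: is lives/spend above the project's threshold?
```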
Yeah, applying it to cars wouldn’t be the best idea. Those robots probably weighed ~150 lbs. in the movie. An uprising of a few thousand 2-ton cars wouldn’t be very fun. :yikes:
As Kevin wrote, a far more likely scenario is the car completely avoiding the crash in the first place. As a technical employee of a car company that builds cars that could drive by themselves, I can say the main focus is detecting situations far in advance of an actual collision. Systems don’t lose vigilance, and they aren’t drunk or tired or inattentive. Certainly high speeds limit your options; nonetheless, one can significantly reduce the consequences of a collision even if it is unavoidable.
Engineers deal with ethics every day. Choices affect people. I remember my engineering ethics class in university many years ago; we explored concepts like this, and the general conclusion is quite a lot like the 3 laws of robotics.
Man, that Asimov really had it goin’ on…
A human would not be able to figure out in 15 seconds whether it would be safer to crash into car A or car B. An autonomous car, however, would be able to decide whether the people in car A would have a better survival rate.
Either way, I would rather be driving my own vehicle.
We need to come up with an algorithm to find a way to crash into the minimum number of cars… get the fluid dynamics from the rack!
In my driver’s ed class, they said “always leave yourself an out”. That said, something can conspire to remove that out (e.g., going through a construction zone where the shoulder is gone).
In addition to minimizing damage to me, or to everyone, there is also: let the vehicle that caused the problem bear the cost. For instance, if another car’s tire blows out and one of your choices is to hit that car, then hit that car.
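One hedged way to fold that into the harm-minimizing score discussed earlier, with an arbitrary placeholder discount:

```python
# Illustrative only: discount the harm score for an obstacle judged to have
# caused the emergency (e.g. the car whose tire blew out). The 0.8 factor
# is an arbitrary placeholder, not a defensible moral constant.

FAULT_DISCOUNT = 0.8

def adjusted_harm(expected_harm: float, obstacle_at_fault: bool) -> float:
    # The at-fault obstacle scores slightly lower, nudging the planner
    # toward it when the options are otherwise roughly equal.
    return expected_harm * (FAULT_DISCOUNT if obstacle_at_fault else 1.0)
```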
The article argues against itself. It makes the point that having a smart car would be unfair (especially the helmet thing; what’s the correct answer there?) and says at the same time that having a random car would be undesirable. Obviously minimizing human fatalities is the priority, so what is it trying to say? That cars should make decisions based on… what? Basically it offers no solution to any problem and only seeks to stir people up.
That kind of “no solution” tone pisses me off more than pretty much anything else, regardless of situation.
On another note, for once the comments (on the article) are good comments.
People already make these types of decisions while driving.
Let’s say you’re driving at the speed limit through a residential neighborhood, and a child suddenly runs in front of your car to chase a ball. I’m sure most people would swerve to avoid the child even if it meant hitting another car or some other inanimate object, because at reasonable speeds chances are no one in another car would be seriously injured, while hitting a child would almost certainly result in serious injuries.
Man I’d hate to be the guy who has to write an insurance policy clause for an autonomous vehicle collision. Who would be at fault…the car company…the programmer…the AI…
So many things to consider.
That’s if you keep assuming people own their own cars. There are many different models for an autonomous car network.
Look at Dubai: they currently employ an autonomous vehicle network where each car operates like a taxi; you go to a location, enter your destination, and a car picks you up and takes you there.
There are many models that prove people do not need to own their own cars if the primary reason is transportation to and from places. Today, 80% of a normal vehicle’s life is spent parked, and the owner travels within a 50-mile radius of their home to go to school, work, errands, etc. Imagine how many resources we could save if we made fewer total vehicles and made them available to more people, increasing their use time. Think of an autonomous taxi service that will pick you up and take you where you want to go.
In cases like this, the service provider is responsible, similar to an airline company when one of its planes crashes.
The autonomous car affects every aspect of driving as we understand it today. It makes no sense to try to solve one-off questions while still thinking unilaterally about the topic.
Driving needs major reform. There are many other models which yield drastically different results, but have benefits and shortcomings as well.
Anytime someone talks about insurance with regard to an autonomous car, the conversation gets derailed. Autonomous cars are being made to reduce accidents, loss of life, and property damage, among other things. Those are the very reasons insurance companies exist. If those things don’t happen, or the likelihood of them occurring is reduced significantly, then consumer insurance companies could cease to exist altogether, with commercial insurance companies taking over.
The way Napster removed the need for record stores, autonomous cars could be the end of insurance for consumers… one additional benefit of autonomous cars, in addition to reduced traffic, smarter road construction, etc.
Regards,
Kevin
How does my statement reflect anything to do with personal ownership of the autonomous vehicle?