Self-driving cars are poised to take over U.S. roads and destroy American jobs… and they will also kill people, even if by accident.
Right now, their makers are in the process of convincing Congress that they can handle their own regulations – even as they continue working out the kinks.
The U.S. Senate Committee on Commerce, Science and Transportation heard testimony from Duke University roboticist Missy Cummings, who warned that fatalities and accidents are inevitable as self-driving cars attempt to integrate into a busy and complex society.
The robot car revolution hit a speed bump on Tuesday as senators and tech experts sounded stern warnings about the potentially fatal risks of self-driving cars.

“There is no question that someone is going to die in this technology,” Cummings said in her testimony. “The question is when and what can we do to minimize that.”
Automotive executives and lawmakers sniped at each other over whether universal standards were necessary for self-driving cars….
Senators Ed Markey and Richard Blumenthal, who have cosponsored legislation proposing minimum testing standards for automated vehicles, pressed the point… “The credibility of this technology is exceedingly fragile if people can’t trust standards – not necessarily for you, but for all the other actors that may come into this space at this point.”
These “standards” reflect the programming that will make sometimes fatal choices in situations that may involve innocent bystanders and no-win scenarios.
In these cases, is there a “moral” gradient that computers and people can see eye-to-eye on?
If the self-driving car is designed to avoid children at all costs, does that mean it could be programmed to kill (or sacrifice) you if and when you are caught inside a car headed for disaster, or on the opposite side of the road from the child? There are no clear answers.
The standards are already becoming morally complex. Google X’s Chris Urmson, the company’s director of self-driving cars, said the company was trying to work through some difficult problems. Where to turn – toward the child playing in the road or over the side of the overpass?
Google has come up with its own Laws of Robotics for cars: “We try to say, ‘Let’s try hardest to avoid vulnerable road users, and beyond that try hardest to avoid other vehicles, and then beyond that try to avoid things that don’t move in the world,’ and then to be transparent with the user that that’s the way it works,” Urmson said.
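The hierarchy Urmson describes could be sketched, in very simplified form, as a cost ranking over candidate maneuvers. The sketch below is purely illustrative: the class names, penalty weights, and `pick_trajectory` function are invented for this example and do not represent Google's actual planner, which is far more complex.

```python
from enum import IntEnum

class ObstacleClass(IntEnum):
    # Lower value = higher priority to avoid, per the stated hierarchy.
    VULNERABLE_ROAD_USER = 0   # pedestrians, cyclists
    VEHICLE = 1                # other cars on the road
    STATIC_OBJECT = 2          # things that don't move

# Hypothetical collision penalties reflecting the ordering above.
PENALTY = {
    ObstacleClass.VULNERABLE_ROAD_USER: 1_000_000,
    ObstacleClass.VEHICLE: 10_000,
    ObstacleClass.STATIC_OBJECT: 100,
}

def trajectory_cost(collisions):
    """Sum the penalties for every obstacle a candidate maneuver would hit."""
    return sum(PENALTY[c] for c in collisions)

def pick_trajectory(candidates):
    """Choose the candidate maneuver with the lowest total penalty."""
    return min(candidates, key=lambda c: trajectory_cost(c["collisions"]))

# Two toy options: swerve into a parked car, or continue toward a cyclist.
options = [
    {"name": "swerve", "collisions": [ObstacleClass.STATIC_OBJECT]},
    {"name": "continue", "collisions": [ObstacleClass.VULNERABLE_ROAD_USER]},
]
print(pick_trajectory(options)["name"])  # the ranking favors "swerve"
```

Even this toy version makes the moral problem visible: whoever sets those penalty numbers is deciding, in advance, who bears the risk in a no-win situation.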
But the “morality” of the decision-making structure of the computer’s processes, and the inevitability of chaos for at least some individuals is only part of the story.
Autonomous vehicles will, ironically, also be quite vulnerable to hacking – the Internet-connected devices in a car can be manipulated and used to take over the commands and data of virtually any of the newer “smart” cars on the road. The problem will only grow as self-driving cars become a larger part of everyday life.
“We know that many of the sensors on self-driving cars are not reliable in good weather, in urban canyons, or places where the map databases are out of date,” said Cummings. “We know gesture recognition is a serious problem, especially in real world settings. We know humans will get in the back seat while they think their cars are on ‘autopilot’. We know people will try to hack into these systems.”
“[W]e know that people, including bicyclists, pedestrians and other drivers, could and will attempt to game self-driving cars, in effect trying to elicit or prevent various behaviors in attempts to get ahead of the cars or simply to have fun,” she said.
Back in 2013, a couple of white-hat hackers demonstrated how vulnerable a number of newer cars are to hacking. The possibilities are downright frightening – everything from the stereo and windshield wipers to the brakes can be hacked and remotely controlled – or shut off when you need them most. Just imagine what is possible in 2016.
What happens when these self-driving cars of the future disagree with the human passenger about priorities, or about what is allowed in a critical situation – like escaping a carjacking or evading a police pursuit?
It isn’t hard to see how trusting technology on the roads is going to complicate the future and restrict our human ability to make critical decisions behind the wheel. Let’s just hope somebody programs these intelligent machines with some common sense.
Mac Slavo | March 16th, 2016 | www.SHTFplan.com
Copyright Information: Copyright SHTFplan and Mac Slavo. This content may be freely reproduced in full or in part in digital form with full attribution to the author and a link to www.shtfplan.com. Please contact us for permission to reproduce this content in other media formats.