A self-driving Chevrolet Bolt in tests on the streets of San Francisco.
Photo credit: REUTERS
August 19, 2018 06:01 CET
You’re crossing the street wrong.
That is essentially the argument some self-driving car boosters have fallen back on in the months after the first pedestrian death attributed to an autonomous vehicle and amid growing concerns that artificial intelligence capable of real-world driving is further away than many predicted just a few years ago.
In a line reminiscent of Steve Jobs’s famous defense of the iPhone 4’s flawed antenna — “Don’t hold it like that” — these technologists say the problem isn’t that self-driving cars don’t work, it’s that people act unpredictably.
“What we tell people is, ‘Please be lawful and please be considerate,’” says Andrew Ng, a well-known machine learning researcher who runs a venture fund that invests in AI-enabled companies, including self-driving startup Drive.AI. In other words: no jaywalking.
Whether self-driving cars can correctly identify and avoid pedestrians crossing streets has become a burning issue since March after an Uber self-driving car killed a woman in Arizona who was walking a bicycle across the street at night outside a designated crosswalk. The incident is still under investigation, but a preliminary report from federal safety regulators said the car’s sensors had detected the woman but its decision-making software discounted the sensor data, concluding it was likely a false positive.
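The failure mode the preliminary report describes — sensors registering an object but the decision-making software discarding the detection — reflects a common pattern in perception pipelines, where detections below a confidence threshold are dropped as likely false positives. A minimal, hypothetical sketch of that pattern (the names and threshold are illustrative, not Uber’s actual system):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "pedestrian", "bicycle", "unknown"
    confidence: float # classifier score in [0, 1]

def filter_detections(detections, threshold=0.8):
    """Keep only detections whose confidence clears the threshold.

    Anything below the threshold is discarded as a likely false
    positive -- which means a real pedestrian can be dropped if
    the classifier is unsure what it is seeing.
    """
    return [d for d in detections if d.confidence >= threshold]

detections = [
    Detection("pedestrian", 0.95),
    Detection("unknown", 0.40),  # a real object the classifier can't identify
]
kept = filter_detections(detections)
assert [d.label for d in kept] == ["pedestrian"]
```

The trade-off is visible even in this toy version: raise the threshold and the car brakes for fewer phantom objects, but is also more likely to ignore something real.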
Google affiliate Waymo has promised to launch a self-driving taxi service, starting in Phoenix, Arizona, later this year, and General Motors has pledged a rival service — using a car without steering wheel or pedals — some time in 2019. But it’s unclear if either will be capable of operating outside of designated areas or without a safety driver who can take over in an emergency.
Meanwhile, other initiatives are losing steam.
Elon Musk has shelved plans for an autonomous Tesla to drive across the U.S. Uber has axed a self-driving truck program to focus on autonomous cars. Daimler Trucks, part of Daimler AG, now says commercial driverless trucks will take at least five years. Others, including Musk, had previously predicted such vehicles would be road-ready by 2020.
With these timelines slipping, driverless proponents like Ng say there’s one surefire shortcut to getting self-driving cars on the streets sooner: persuade pedestrians to behave less erratically. If they use crosswalks, where there are contextual clues — pavement markings and stop lights — the software is more likely to identify them.
But to others the very fact that Ng is suggesting such a thing is a sign that today’s technology simply can’t deliver self-driving cars as originally envisioned.
“The AI we would really need hasn’t yet arrived,” says Gary Marcus, a New York University professor of psychology who researches both human and artificial intelligence. He says Ng is “just redefining the goalposts to make the job easier,” and that if the only way we can achieve safe self-driving cars is to completely segregate them from human drivers and pedestrians, we already had such technology: trains.
‘What just happened?’
Rodney Brooks, a well-known robotics researcher and an emeritus professor at the Massachusetts Institute of Technology, wrote in a blog post critical of Ng’s sentiments that “the great promise of self-driving cars has been that they will eliminate traffic deaths. Now [Ng] is saying that they will eliminate traffic deaths as long as all humans are trained to change their behavior? What just happened?”
Ng argues that humans have always modified their behavior in response to new technology, especially modes of transportation. “If you look at the emergence of railroads, for the most part people have learned not to stand in front of a train on the tracks,” he says. Ng also notes that drivers have learned that school buses make frequent stops and that when they do, small children may dart across the road in front of the bus, so they drive more cautiously. Self-driving cars, he says, are no different.
In fact, jaywalking only became a crime in most of the U.S. because automobile manufacturers lobbied intensively for it in the early 1920s, in large measure to head off strict speed limits and other regulation that might have impacted car sales, according to Peter Norton, a history professor at the University of Virginia who wrote a book on the topic. So there is a precedent for regulating pedestrian behavior to make way for new technology.
And while Ng may be the most prominent self-driving proponent calling for training humans, as well as vehicles, he’s not alone. “There should be proper education programs to make people familiar with these vehicles, the ways to interact with them and to use them,” says Shuchisnigdha Deb, a researcher at Mississippi State University’s Center for Advanced Vehicular Systems. The U.S. Department of Transportation has stressed the need for such consumer education in its latest guidance on autonomous vehicles.
Maya Pindeus, the co-founder and CEO of Humanising Autonomy, a London startup working on models of pedestrian behavior and gestures that self-driving car companies can use, likens such lessons to public awareness campaigns Germany and Austria instituted in the 1960s following a spate of jaywalking fatalities. Such efforts helped reduce pedestrian road fatalities in Germany from more than 6,000 deaths in 1970 to fewer than 500 in 2016, the last year for which figures are available.
The industry is understandably keen not to be seen offloading the burden onto pedestrians. Uber and Waymo both said in emailed statements that their goal is to develop self-driving cars that can handle the world as it is, without being dependent on changing human behavior.
One challenge for these and other companies is that driverless cars are such a novelty right now, pedestrians don’t always act the way they do around regular vehicles. Some people just can’t suppress the urge to test the technology’s artificial reflexes. Waymo, which is owned by Alphabet Inc., routinely encounters pedestrians who deliberately try to “prank” its cars, continually stepping in front of them, moving away and then stepping back in front of them, to impede their progress.
The assumption seems to be that driverless cars are designed to be extra cautious so the practical joke is worth the risk. “Although our systems do have super-human perception, sometimes people seem to think Newton’s laws no longer apply,” says Paul Newman, the co-founder of Oxbotica, a U.K. startup making autonomous driving software, who recalls the time a pedestrian ran up behind a self-driving car and jumped suddenly in front of it.
Over time driverless cars will become less fascinating, and people will presumably be less likely to prank them. In the meantime, the industry is debating what steps companies should take to make humans aware of the cars and their intentions.
Drive.AI, which was co-founded by Ng’s wife, Carol Reiley, has made a number of modifications to the self-driving cars it’s road testing in Frisco, Texas. They’re painted a distinctive dayglo orange, increasing the chance that people will notice them and recognize them as self-driving. Drive.AI also pioneered the use of an external LED-display screen, similar to the ones many city buses use to display their destination or route number, that can convey the car’s intentions to humans. For instance, a car stopped at a crosswalk might display the message: “Waiting for you to cross.”
Uber has taken this idea further, filing patents for a system that would include a variety of flashing external signage and holograms projected in front of the car to communicate with human drivers and pedestrians. Google has also filed patents for its own external signage. Oxbotica’s Newman says he likes the idea of such external messaging as well as distinctive sounds — much like the beeping noise large vehicles make when reversing — to help ensure safe interactions between humans and autonomous vehicles.
Deb says her research shows that people want external features and audible communication or warning sounds of some kind. But so far, besides Drive.AI, the cars these companies are using in road tests don’t include such modifications. It’s also not clear how pedestrians or other human drivers could communicate their intentions to self-driving vehicles, something Deb says may also be necessary to avoid accidents in the future.
Pindeus’s company wants those building self-driving cars to focus more on understanding the non-verbal cues and hand gestures people use to communicate. The problem with most of the computer vision systems that self-driving cars use, she says, is they simply put a boundary box around an object and apply a label — parked car, bicycle, person — without the ability to analyze anything happening inside that box.
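The bounding-box-plus-label output Pindeus describes can be pictured as a simple data structure: the detector reports where an object is and what class it belongs to, but nothing about what is happening inside the box. A hypothetical sketch (the fields and values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    x: int      # top-left corner of the box, in pixels
    y: int
    width: int
    height: int
    label: str  # "parked car", "bicycle", "person", ...

# A typical detector output for one camera frame: locations and
# class labels only. Whether the person is waving a car through,
# looking at their phone, or about to step off the curb is not
# represented anywhere in this structure.
frame_detections = [
    BoundingBox(x=120, y=80, width=60, height=150, label="person"),
    BoundingBox(x=400, y=200, width=220, height=110, label="parked car"),
]

labels = {box.label for box in frame_detections}
assert "person" in labels
```

Analyzing gesture or intent would require a second model that looks inside the "person" box — exactly the layer Pindeus argues is missing.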
Eventually, better computer vision systems and better AI may solve this problem. Over time, cities will probably remake themselves for an autonomous age with “geofencing” — a fancy term for creating separate zones and designated pickup spots for self-driving cars and taxis.
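In software terms, geofencing usually comes down to a point-in-polygon test: is the vehicle’s current position inside a zone where autonomous operation is permitted? A minimal sketch using the standard ray-casting algorithm (the zone coordinates are purely illustrative):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal
    ray from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A hypothetical rectangular downtown zone where self-driving
# operation is allowed (longitude/latitude pairs, illustrative).
zone = [(-112.10, 33.40), (-112.00, 33.40),
        (-112.00, 33.50), (-112.10, 33.50)]

assert point_in_polygon(-112.05, 33.45, zone)      # inside the zone
assert not point_in_polygon(-111.90, 33.45, zone)  # outside the zone
```

Production systems would use proper geodesic libraries rather than flat-plane math, but the idea is the same: a hard boundary on where the car is allowed to drive itself.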
In the meantime, your parents’ advice probably still applies: Don’t jaywalk and look both ways before crossing the street.