The company's flawed programming extends beyond the self-driving program and into deep-set racial bias as well, drivers claim.
The self-driving Uber that struck a woman on a bicycle this past March didn't malfunction so much as deliberately ignore the victim, who later died as a result of the collision. According to the tech news site The Information, on-board systems detected the cyclist crossing a street in Tempe, Arizona, but had been "tuned" to be less cautious about objects moving within the vicinity of the vehicle.
According to an internal report compiled in the wake of the tragedy, the company seems to have tacitly admitted that the accident was the result of corner-cutting in pursuit of a smooth self-driving experience. In its effort to compete in the self-driving car race, Uber is trying to avoid unpleasant rides, which are often caused by hair-trigger sensors responding to a variety of cues during drives.
As the report puts it: “There’s a reason Uber would tune its system to be less cautious about objects around the car: It is trying to develop a self-driving car that is comfortable to ride in. Uber’s perspective has been that a self-driving car prototype that constantly slams the brakes and comes to hard stops is also dangerous.”
The death rightly caused a public panic about the use of autonomous vehicles on our roads, and Arizona's governor immediately ordered the taxi app to pause any further tests until a solution was found, but that hasn't shaken the industry's faith in automation.
The very concept of a driving experience mediated by algorithms is coming under additional fire. New research out of Pennsylvania State University analyzed forum discussions among Uber drivers to discern their experiences of working with the company. In the current arrangement, Uber's drivers still have physical control over the vehicles they command, but their time, performance and journeys are all dictated by programming. The technological promise of Uber, or its self-driving cars, is still extremely exposed to human bias. The research examined testimonies from Uber drivers who say the hidden realities of prejudice regularly creep into how the app manages their work: "Uber and its platform converged to breach the stakes of drivers when they suspected they were the victims of bias. This was especially true when drivers belong[ing] to a minority receive low ratings for reasons that are unknown to them."
One anonymous source told researchers that they thought the app was intentionally sending black drivers to black neighborhoods. "I think as much as possible Uber tries to send us black drivers into the 'hood' […] To pick up black passengers […] This morning I was at the airport the 3rd one to go out […] when I get a ping […] I look at my phone, and see the pax is 25 min away and has a very ethnic-specific name."
The answer seems to come back to us. No technology is born in a vacuum—it's an artificial, watered-down version of our own problem-solving abilities, subject to the very flaws we perpetuate every single day. In response to March's accident, Uber hired a former head of the National Transportation Safety Board to bring some professional prowess to the driverless future we seem to be heading towards. Let's just hope they understand bias better than the app does.