Bad Robots – Uber’s “Robo-Firings” Challenged in Court by Former Drivers

Bad Robot Outcome:
If you drive for Uber, you can be terminated by an algorithm. Four former drivers who faced such dismissals have brought suit against the ride-sharing behemoth, claiming that the company’s practice of “robo-firing” violates the EU General Data Protection Regulation (GDPR).

The Story

A Birmingham, England-based Uber driver who had, over five years, cultivated a 4.96 (out of 5) star rating, achieved "Gold" status with the company, and worked tirelessly throughout the pandemic found himself unable to log into his account one morning, instantly losing 70% of his income.

The dismissal was the result of Uber’s use of an algorithm. More specifically, the company employs such technology to uncover what it considers fraudulent activity, such as using a rider and driver profile at the same time, creating duplicate accounts, accepting trips despite having no intention to complete them, claiming false fees or charges, and similar behavior.

Uber's algorithms do much more than dismiss drivers on suspicion of fraud. They also determine which drivers get jobs, how much those drivers are paid, and how driver profiles are rated for issues such as inappropriate behavior or late arrivals.

Despite being subject to this technology, and in many cases terminated by it, Uber's drivers are given none of the details or underlying information that go into the algorithmic determinations. Nor are they able to appeal the decisions the company's algorithms make.

The Fallout

In October 2020, the App Drivers & Couriers Union (ADCU) filed suit against Uber on behalf of four drivers in the UK and Portugal who were dismissed based on determinations made by the company's algorithm. The group claims that these algorithmic dismissals (of which it says it has seen well over a thousand since 2018) violated Article 22 of the GDPR.

This particular legal provision protects individuals from automated decisions that have adverse effects and are carried out without meaningful human intervention. Uber has responded to the legal challenges by asserting that “at least two specially trained members of the Uber team review all facts and circumstances in every case before reaching a conclusion to deactivate a driver.”

The ADCU claims that, despite this assertion, Uber's human intervention does not rise to the level of "meaningful" that the law requires. The group rests this claim on the fact that drivers received only automated responses, rather than contact with a human, when they tried to reach the company for details about their dismissals.

These legal challenges reach well beyond Uber and its drivers. Jeremias Adams-Prassl, a University of Oxford professor who studies the gig economy, notes that algorithms are used not only to set individuals' pay rates but, increasingly, to decide whether to hire and fire employees or contractors.

While such technology could be advantageous, for example by alleviating traditional employment problems such as gender-based wage gaps, it requires both transparency and human oversight to do so. That requirement lies at the heart of the suit against Uber.

Our view

Our team at the Ethical AI Advisory has a keen interest in this case, as it implicates two of the eight AI Ethics Guidelines whose adoption we support and encourage: (1) Transparency & Explainability; and (2) Contestability.

Starting with the first, Transparency & Explainability: it is our position that any algorithmic decision-making with the potential to affect human beings must be understood by, and explainable to, those individuals. Uber has fallen short here. The Birmingham-based driver mentioned at the beginning of this article has never been told exactly why the company dismissed him. Despite numerous attempts to reach out, he has received only automated responses and has been unable to speak with a human about the experience.

Turning to the second guideline, Contestability: Uber has again fallen short by not allowing drivers dismissed by the algorithm to appeal those decisions. Without proper recourse, it becomes nearly impossible to determine whether the dismissals are justified.

The violation of these two guidelines leaves Uber's algorithmic decision-making opaque, potentially arbitrary, and downright unfair. Given the company's vast number of drivers and its global influence, the cases against it will be important for the advancement (or not) of ethical AI.
Joy Townsend

Written by:

Andy Dalton