Bad Robots: Apple Card discriminates against women
Bad Robot Outcome: Apple Card offers women credit limits up to 20 times lower
The Story
In August 2019, Apple released a credit card in partnership with Goldman Sachs. Apple stated that the Apple Card is a “digital first,” numberless credit card “built on simplicity, transparency and privacy.” On November 8, 2019, David Heinemeier Hansson, a well-respected Danish software developer, tweeted that his wife, Jamie, had been denied a credit line increase for the Goldman Sachs-backed Apple Card even though she had a higher credit rating than he did. The tweet stated, “The Apple Card is such a fucking sexist program. My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does. No appeals work.”
The tweet went viral.
On November 10, 2019, Apple’s co-founder, Steve Wozniak, also tweeted, “I am a current Apple employee and founder of the company and the same thing happened to us (10x) despite not having any separate assets or accounts. Some say the blame is on Goldman Sachs but the way Apple is attached, they should share responsibility.”
On November 11, 2019, Goldman Sachs tweeted a response noting, “In all cases, we have not and will not, make decisions based on factors such as gender.”
But this is exactly what happened.
Our view
The data used to train the Goldman Sachs AI will already have contained gender bias. It will have been a historical banking data set, likely purchased for the purpose of training the algorithm. Clearly the data set was not rigorously assembled or properly cleaned, and the statistical tests that should have alerted both Apple and Goldman Sachs to bias and other skews were not performed. The algorithm and the customer-facing AI should then have been properly tested after set-up and before launch. It would have been prudent to have not only a technical person but also a legal person sign off on the algorithm and the AI, given that the algorithm is blatantly discriminatory.
Questions that Goldman Sachs and Apple should have asked:
Who was responsible for acquiring the training data set?
Who was responsible for cleaning and testing the data?
What statistical methods were used to ensure the AI would not discriminate? Are these best practice? (A simple check of this kind is sketched after this list.)
Who was on the team managing the algorithm, and were there diverse views and lenses that could have picked up this mistake?
If this algorithm produces biased results, what legal implications are there?
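To make the question about statistical methods concrete, here is a minimal sketch of the kind of pre-launch check that could have flagged this sort of skew. It assumes a hypothetical table of proposed credit limits with "gender" and "credit_limit" columns; the column names, synthetic data and thresholds are illustrative assumptions, not anything Apple or Goldman Sachs actually used.

```python
# Minimal sketch of a pre-launch fairness check on proposed credit limits.
# Column names ("gender", "credit_limit"), the synthetic data and the
# thresholds are illustrative assumptions only.
import pandas as pd
from scipy.stats import mannwhitneyu

# Synthetic stand-in for the model's proposed credit limits.
applicants = pd.DataFrame({
    "gender":       ["F", "M", "F", "M", "F", "M", "F", "M"],
    "credit_limit": [1000, 15000, 1200, 18000, 900, 20000, 1100, 16000],
})

# 1. Compare the typical limit offered to each group.
medians = applicants.groupby("gender")["credit_limit"].median()
ratio = medians.min() / medians.max()
print("Median limit by gender:")
print(medians)
print(f"Ratio (lowest/highest group): {ratio:.2f}")

# 2. Test whether the two distributions of limits differ significantly.
f_limits = applicants.loc[applicants["gender"] == "F", "credit_limit"]
m_limits = applicants.loc[applicants["gender"] == "M", "credit_limit"]
stat, p_value = mannwhitneyu(f_limits, m_limits, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p_value:.4f}")

# 3. Flag the model if the groups look far apart. The cut-offs here are
#    arbitrary illustrations; a real review would set them with legal and
#    compliance input, not engineering alone.
if ratio < 0.8 or p_value < 0.05:
    print("WARNING: credit limits differ markedly by gender - review before launch.")
```

The 0.8 ratio cut-off echoes the “four-fifths rule” often used as a rough screen for disparate impact; where the line is actually drawn is a legal and compliance decision as much as a technical one, which is exactly why a legal sign-off belongs alongside the technical one.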
The tragic part of this Bad Robot story is that this is Goldman Sachs and Apple, both leaders in their fields and among the largest, most powerful companies in the world. It defies belief, really. Clearly there was no Ethical AI framework or process they were working to. And well done, Steve Wozniak, for calling out his own company!