“People think it's this veneer -- that the designers are handed this box and told, ‘Make it look good!’ That's not what we think design is. It's not just what it looks like and feels like. Design is how it works.” — Steve Jobs speaking about the iPod in this 2003 article in The New York Times Magazine
Discrimination, programming AI, and Apple Card
Another week, another crisis with Apple at the centre. It’s almost as if it is part of the communications messaging grid these days. It all started with this lengthy Twitter thread from David Heinemeier Hansson, mostly channelling his wife Jamie Heinemeier Hansson. The thread is worth a read, but in a snapshot: Jamie has a higher credit score than David, and a decent income to boot, yet was granted a credit limit 1/20th of her husband’s on the Apple Card. The initial, and likely, conclusion is that Goldman Sachs (the provider of Apple Card) has a biased algorithm, despite its claim, in a tweet of all places, that it does not discriminate and does not factor gender into the decision making; but it’s OK because “your concerns are important to us”.
You might think I’ve jumped to a conclusion. This was not a one-off example though, not even close. Soon after David and Jamie went public, up popped none other than Apple co-founder Steve Wozniak, a megamillionaire and technophile, to say that his wife was offered a credit limit 1/10th the size of his. He explained to David on Twitter that he and his wife “have no separate bank or credit card accounts or any separate assets”. But I thought the algorithm didn’t take gender into account?
Wait, there’s more… After this started getting noticed, the experiments began. Twelve people purportedly signed up via their iPhones (six men and six women); while no specific figures were revealed, the results apparently showed that the men, even those with bad credit scores, got higher limits and better terms than the women. But repeat after me: the algorithm doesn’t take gender into account.
At this stage I suspect none of us are particularly surprised by this outcome; sexism is everywhere in our lives and entrenched in big tech through the errors of white men. For those with close to boiling blood right now, the errors are not necessarily intentional programming errors but the fact that so many white men refuse to let their teams and companies be truly diverse—or to accept that the training data supplied to AI models contains bias from the outset.
I’m willing to suspect at this point that Goldman Sachs’ Apple Card algorithm also cares about the colour of your skin, the university you went to, your sexual orientation, and many other factors it will discriminate against you for.
Am I writing this to try and change the world of black box tech algorithms and the world of AI? Not really, although people should check themselves, interrogate their beliefs, and strive for truly diverse working environments.
Goldman Sachs is not alone here. For example, there was the time Amazon tried to build an AI-powered human resources system to help sift through CVs and cover letters and find the perfect candidate for a job. After feeding the system lots and lots of examples (the kind that had been sorted by humans), it soon taught itself that men were better candidates, downgrading applications that included the word “women’s,” as in “women’s chess club captain,” or that came from graduates or attendees of girls’ schools and universities. The project was soon disbanded.
And can we talk about biased AI without mentioning Microsoft’s dream of creating a chatty online bot? Well, to create a chatty bot you have to feed it lots of reference data; within 16 hours of Tay going live, it turned out to be a “racist asshole”. Surprise, surprise.
These opaque choices about credit limits, rates, and who gets into a scheme and who doesn’t aren’t a new headache for the old-school banks. They are trash. It raises the question: why did Apple partner with Goldman Sachs? Or, a step back from that, why did Apple want to release a credit card at all and open itself up to this kind of trashy behaviour?
Sadly, Apple thought it could do things differently. Unsurprisingly, it was wrong; but still, to this day, you can visit the Apple Card website and see the phrase: “a new kind of credit card created by Apple. And built on the principles of simplicity, transparency, and privacy.” Laughable.
As of writing, three days into this latest crisis, there has been no comment from Apple.
Just last week I spent 1,500 words explaining why Apple does the things it does. As part of that argument, I explored Apple’s reasons for wading into many different industries at different points in its history. One of the key threads was Apple’s desire to bring a good customer experience to an area with bad customer experience, and there is no better place for that than banking. Apple Card’s mission to simplify cashback, make managing spending easier, and make paying off due balances painless is a success story for that mission; it is sadly undone by opaque and discriminatory decision making.
A McKinsey report into the growing use of artificial intelligence in hiring, healthcare, credit decision making, and other areas explores how AI can help reduce bias, but also how it can “bake in and scale bias”. With Goldman Sachs’ credit card algorithm, the evidence to hand points towards the latter. On the one hand, “AI can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used”. On the other hand, “extensive evidence suggests that AI models can embed human and societal biases and deploy them at scale. Julia Angwin and others at ProPublica have shown how COMPAS, used to predict recidivism in Broward County, Florida, incorrectly labeled African-American defendants as ‘high-risk’ at nearly twice the rate it mislabeled white defendants”.
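To make that COMPAS finding concrete, here is a minimal sketch of how such a disparity is measured: compare false positive rates, the share of people who did not reoffend but were still labelled high-risk, across groups. The group names and records below are entirely made up for illustration; this is not ProPublica’s actual analysis code or data.

```python
# A minimal, made-up sketch of the disparity check ProPublica ran on COMPAS:
# compare false positive rates (people wrongly labelled "high-risk") across groups.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- illustrative data only
records = [
    ("group_a", True, False), ("group_a", True, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)

for group, predicted_high_risk, reoffended in records:
    if not reoffended:                  # only non-reoffenders can be false positives
        negatives[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```

With this toy data, one group is wrongly flagged at twice the rate of the other, which is the shape of the disparity ProPublica reported.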
The report goes on to explain that the underlying data are often, but not always, the source of bias. In this instance, the Goldman Sachs/Apple Card algorithm, the evidence we have shows some form of bias; but biased against what? There needs to be a scale of fairness against which to understand the bias. There are many approaches to introducing fairness constraints into AI, including pre-processing, which maintains as much accuracy as possible while reducing any relationship between outcomes and protected characteristics, and post-processing techniques, which adjust the model’s predictions after it has made them, using human intervention to satisfy a fairness constraint.
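As a rough illustration of the post-processing idea, here is a minimal, hypothetical sketch: keep the model’s raw scores, but pick a per-group score threshold so that each group ends up with roughly the same approval rate. The function name, scores, and groups are illustrative assumptions, not anything Goldman Sachs or Apple actually does.

```python
# A minimal, hypothetical sketch of post-processing: keep the model's raw scores,
# but choose a per-group score threshold so approval rates come out roughly equal.
# All names and numbers here are illustrative assumptions.

def equalised_thresholds(scores_by_group, target_approval_rate=0.5):
    """Pick a score cut-off per group so each group approves roughly the same share."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores)
        cutoff_index = int(len(ranked) * (1 - target_approval_rate))
        thresholds[group] = ranked[min(cutoff_index, len(ranked) - 1)]
    return thresholds

scores_by_group = {
    "group_a": [0.20, 0.40, 0.55, 0.70, 0.90],
    "group_b": [0.10, 0.30, 0.35, 0.50, 0.80],
}

for group, threshold in equalised_thresholds(scores_by_group).items():
    approved = [s for s in scores_by_group[group] if s >= threshold]
    print(f"{group}: threshold {threshold}, approved {len(approved)} of {len(scores_by_group[group])}")
```

Whether equalising approval rates is even the right fairness constraint is itself a human, policy-level decision, which is exactly the report’s point about needing a scale of fairness in the first place.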
Apple can fix this, big tech can fix this, the world can fix this. Consider the methods discussed above for developing AI within an environment that can help correct for bias, along with awareness of when there is a high risk that AI could exacerbate bias. It’s also important to establish processes and practices for testing, and to keep humans involved in decision making.
But also, start by building truly diverse teams, and truly diverse businesses. I’m not a chief diversity and inclusion officer, but I know that a team works better, thinks smarter, and makes fewer of these mistakes when it is naturally diverse.
16-inch MacBook Pro’s Pleasing Arrival
Surprise! Well, not really. The 16-inch MacBook Pro might be the most accurately rumoured Mac release in recent years. And reading the first impressions and judging by the positive reception, Apple has done right by the pros.
As Quinn Nelson (aka SnazzyQ) said on Twitter: “I’m proud of Apple in 2019. They’ve fixed laptop thermal throttling, they’ve made their phones and laptops thicker and heavier for the sake of improved battery, they’ve given us a laptop keyboard that doesn’t suck, etc. They’ve listened. For the first time in a longgg time.”
It is crazy that in 2019 one of the shining examples of improvement coming out of Apple is the keyboard of a Mac. After thirty years of great keyboards, Apple delivered a bad, broken, and unnecessary change with the 2016 MacBook Pros, and there is a lot of blame out there as to whose fault, precisely, this may be. Currently in the lead for having his head on the block is Jony Ive, who may be responsible as the person at the end of the line for design during this period. But if you ask me, this was an engineering decision, not a design one. And you’ve got no way of proving that, and neither do I. (Let’s not forget, despite all the talk about the “de-Jony-Ive-ification” of Apple, his head is still on the executive page and he’s still the Chief Design Officer.)
Enough of my judgement; how did it do in first impressions?
Mr “de-Jony-Ive-ification” himself, John Gruber of Daring Fireball, notes:
We shouldn’t be celebrating the return of longstanding features we never should have lost in the first place. But Apple’s willingness to revisit these decisions — their explicit acknowledgment that, yes, keyboards are meant to be typed upon, not gazed upon — is, if not cause for a party, at the very least cause for a jubilant toast.
The new MacBook Pro has no massive asterisks or qualifications. It’s a great computer, period, and it feels so good to be able to say that again. (Emphasis his)
Matthew Panzarino over at TechCrunch:
“In my brief and admittedly limited testing so far, the 16” MacBook Pro ends up looking like it really delivers on the Pro premise of this kind of machine in ways that have been lacking for a while in Apple’s laptop lineup.”
iFixit’s teardown of the product revealed that the new keyboard is basically just the old keyboard. So that settles that.
Your long butterfly keyboard nightmare is over. The new Magic Keyboard in the 16-inch MacBook Pro uses switches that look and feel almost identical to much older Apple devices—so close, in fact, that you can stick desktop Magic Keyboard keycaps onto these switches.
Apple also allowed a number of senior executives to do a round of interviews with some publications.
Jason Snell and Myke Hurley of Upgrade podcast interviewed Apple’s MacBook Pro product manager, Shruti Haldea.
CNET ran an interview with Apple’s marketing chief, Phil Schiller, in which a few details are revealed about Apple’s process behind the new and old keyboard designs:
“As we started to investigate specifically what pro users most wanted, a lot of times they would say, “I want something like this Magic Keyboard, I love that keyboard.””
The interview also touched on the education market, odd considering the MacBook Pro’s unlikely place in an education setting. CNET appears to have paraphrased this line from Schiller:
“Yet Chromebooks don't do that [achieve best results]. Chromebooks have gotten to the classroom because, frankly, they're cheap testing tools for required testing. If all you want to do is test kids, well, maybe a cheap notebook will do that. But they're not going to succeed.”
The paraphrasing was enough to cause Schiller (i.e. his PR team) to issue a sort of correction via Twitter.
What I’ve been reading...
Apple’s Reach Reshapes Medical Research - The New York Times
Apple Plans Mega Bundle of Music, News, TV as Early as 2020 - Bloomberg
Steve Jobs Was Right: Smartphones and Tablets Killed the P.C. - The New York Times
Apple Watch forced Fitbit to sell itself - Above Avalon