Amazon scrapped the artificial-intelligence recruitment tool it had been working on since 2014 because the software developed a distinct dislike for women. The e-commerce giant has long championed automation, so the prospect of an engine that could take in 100 resumes and churn out the top five was tantalizing. In the apt words of one Amazon staffer on the project, “Everyone wanted this holy grail.”
They hoped to get objectivity from an AI, but it turned out that biases affect every form of intelligence.
“Artificial intelligence and machine learning and algorithms, in general, are designed by none other than us – people,” said Dipayan Ghosh, a fellow at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School. He could not be more right. Amazon’s model was trained on resumes submitted to the company over a period of 10 years. No points for guessing that they came mostly from men – the classic story of the tech industry. The machine penalized CVs that mentioned the word “women” and ranked graduates of two all-women’s colleges below other applicants.
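The mechanism is easy to reproduce in miniature. The sketch below is purely illustrative – it is not Amazon’s actual system, and the toy resumes are invented – but it shows how a naive scorer trained on historically skewed hiring data ends up penalizing words associated with the rejected pile rather than measuring merit:

```python
from collections import Counter

# Illustrative sketch only -- NOT Amazon's actual system.
# A naive resume scorer trained on historically skewed data
# (mostly male hires) learns to penalize words that appear
# mainly among rejected candidates. All data below is invented.

hired = [
    "java python leadership",
    "c++ chess club captain",
    "python systems",
]
rejected = [
    "women's chess club captain",
    "women in tech society java",
]

def word_scores(hired, rejected):
    """Laplace-smoothed ratio per word: >1 favors hiring, <1 counts against."""
    h = Counter(w for doc in hired for w in doc.split())
    r = Counter(w for doc in rejected for w in doc.split())
    return {w: (h[w] + 1) / (r[w] + 1) for w in set(h) | set(r)}

scores = word_scores(hired, rejected)

def rank(resume):
    """Average word score for a resume; unseen words are neutral (1.0)."""
    words = resume.split()
    return sum(scores.get(w, 1.0) for w in words) / len(words)

# Identical qualifications, but the resume mentioning "women's"
# scores lower -- the model has learned history, not merit.
print(rank("python chess club captain"))
print(rank("women's chess club captain"))
```

The point of the sketch is that no one wrote “penalize women” anywhere; the bias falls out of the training data alone.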
All the while we imagined an AI dystopia as the militaristic downfall of humanity, we should have known better: the biases we feed AI will come back to bite us.
It is interesting that Amazon’s Alexa, Google’s Assistant and Microsoft’s Cortana all have female voices and names. They help you keep up with the lifestyle you intend to lead. On the flip side, IBM Watson and Salesforce Einstein carry masculine names and are perceived as serious computing machines. The apple never falls far from the tree: the tech industry is notorious for its chauvinism, and guess who creates AI? If you have the latest version of Gmail, you will see the Smart Compose AI filling in words and responses for you. But Paul Lambert, a product manager for Gmail, raised a red flag when one of the researchers typed “I am meeting an investor next week” and Smart Compose suggested, “Do you want to meet him?” You can see how convenient it was for the AI to omit “her” or the gender-neutral “them.” Roles in finance and tech are predominantly male; the AI took this lopsided ratio as a fact and projected it.
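Why does autocomplete reach for “him”? At its simplest, next-word prediction just follows corpus frequency, so skewed training text yields skewed suggestions. This is a hypothetical bigram sketch, not Gmail’s actual model, and the tiny corpus is invented for illustration:

```python
from collections import Counter

# Hypothetical sketch, not Gmail's actual model: next-word prediction
# follows corpus frequency, so a skewed corpus yields skewed suggestions.
# The tiny training corpus below is invented for illustration.

corpus = (
    "do you want to meet him "
    "we should meet him "
    "let us meet him "
    "please meet her"
).split()

# Count adjacent word pairs (bigrams) in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word):
    """Suggest the word that most often follows `word` in the corpus."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get)

print(predict_next("meet"))  # the majority pronoun wins
```

With “him” following “meet” three times and “her” once, the model will always suggest “him” – a 75/25 skew in the data becomes a 100/0 skew in the output.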
This was not the first time Google’s AI got things wrong, and it is not always about gender. In 2015, Google Photos rolled out an auto-tagging feature, and a software engineer named Jacky Alcine tweeted that it had tagged his Black friend as a “Gorilla.”
Moral of the story: AI will only ever be as bias-free as we, its creators, are. And when it comes to inclusion, we are miles from a utopia.
The word inclusion may look synonymous with diversity, but it is not. As Lauren Taylor Wolfe points out, “Diversity is being invited to the party. Inclusion is being asked to dance… Companies get what they celebrate, and if they celebrate diversity, they’ll get it.” A true culture of inclusion will take time. Thasunda Duckett, chief executive of consumer banking at JPMorgan Chase, emphasizes “managing to the difference.” Take parenting: parents communicate differently with each of their kids, because kids are humans – individuals who are wired differently.
Framing policies might be the most clichéd approach to diversity, but John Hope Bryant, founder and chairman of Operation Hope, believes in a different pitch for championing them. He put forth the idea of federal policies that give credit to companies that hire women and people of color – diversity by design.
AI, when used right, can eliminate biases that humans suffer from. Case in point: Unilever North America, which joined hands with HireVue and Pymetrics for a fully digital hiring process. They ended up hiring 10% more non-white candidates than usual – their most diverse intake to date.
But the catch with AI remains. “Sometimes we have the opportunity to create more evil than good, and we need to talk about that,” as Dennis Crowley, co-founder and executive chairman of Foursquare, puts it. Companies need to start talking about the biases and ethics that will be built into AI. Before putting AI to a task, we need to get our diversity game straight. It will take time, but with this technology already on the horizon, there has to be an expressway. We may not all be pro-diversity, but we can self-critique by identifying the toxic biases we bring to selecting people. How far along is your recruitment plan for implementing AI? How do you view HR and AI in the future?