Artificial intelligence promises to make countless aspects of our lives smarter and more efficient. Unless we tackle the current lack of diversity among programmers, however, we risk ending up with a future as biased as the present.
It might surprise you to learn that the American city with the fourth-largest collection of Fortune 500 companies lies in the Deep South. What’s more, it’s a city where African Americans make up over half the population – about twice the percentage in New York.
Ever since the 1970s, Atlanta has been known as the “Black Mecca”, a moniker reinforced in recent years by an influx of African American-founded tech companies. The city is now home to over 13,000 such companies and, in the words of Atlantan entrepreneur Barry Givens, “This is the place … If there was a Wakanda, this is it.” Wakanda being the home of Marvel Comics’ superhero Black Panther.
Black tech entrepreneurs chose Atlanta after years of being turned down by white investors in America’s biggest tech hubs, like Silicon Valley and New York. Or, as one developer put it, “we go there, we feel excluded, we come back.” The appeal of Atlanta as a black-led tech hub is explained by the city’s so-called “Tech Twins”, Troy and Travis Nunnally, who reveal that in their experience “people tend to invest in people who look like you.” They realised that being in a predominantly black community was the best way to secure investment, even if it wasn’t as much as they might theoretically attract in Silicon Valley.
Atlanta is also home to some of the country’s finest public universities, like Georgia Tech and Georgia State, as well as some of the best historically black colleges, which are “churning out talented black developers and engineers” at such a rate that Atlanta’s status as a Black Mecca looks secure for years to come.
The only problem with Atlanta’s status as a diverse tech hub is just how unique it is. When we look at the rest of the tech industry, we quickly see a diversity crisis.
– A 2019 study of workers in Silicon Valley found that African American and Hispanic people combined made up between 3% and 6% of workers, despite those two groups accounting for nearly a third of the American population.
– In 2018, the Center for Investigative Reporting found that 10 large tech companies in Silicon Valley did not employ a single black woman. Three didn’t hire a single black person at all.
– As for so-called “Big Tech”, as of 2019 only 2.5% of Google’s workforce is black. Facebook is at 3.8% and Microsoft at 4%.
– 91% of Silicon Valley employees are men.
– According to Atomico’s State of European Tech 2019 report, less than 1% of founders on the continent identify as black, African or Caribbean.
When we talk about bridging the racial divide, a lot of our collective focus inevitably (and rightly) falls on fixing the mistakes of our past. And yet if all our collective focus is set on righting past wrongs, we are in danger of overlooking the ways in which our society is developing. Particularly when it comes to tech.
Take artificial intelligence. Experts widely agree that, within a few short years, our society will be fundamentally shaped by AI. AI’s global market revenue is set to grow from $5bn in 2015 to an astonishing $125bn in 2025. Most of us use AI every day without realising it. There are the obvious cases of Alexa or Siri, but every time you use Google Translate, or watch a recommended video on Netflix (which, let’s be honest, over the last three months you definitely have), you’re using AI. But does the diversity problem in tech have an impact on AI?
“In other words, AI reproduces the biases and assumptions of the humans who program its algorithms. And as we have seen, those humans are overwhelmingly white men.”
Why does it matter who makes AI?
AI is meant to be everything humans aren’t – impartial, objective, unbiased. The second most widely held belief about AI is that it will “help people make better decisions.” So, as long as we have a good AI system, surely it’s irrelevant who makes it?
There is a common saying among programmers: Bias in, bias out. In other words, AI reproduces the biases and assumptions of the humans who program its algorithms. And as we have seen, those humans are overwhelmingly white men.
There are dozens of examples of AI reproducing the innate biases of its programmers. In 2017, an ingenious study published in Science magazine applied an AI version of the “Implicit Association Test” (IAT). The IAT was originally designed to expose innate human biases, by recording the differences in people’s response times when asked to pair two concepts they find similar (e.g. “programmer” and “man”) versus those they might find dissimilar (e.g. “nurse” and “man”).
The Science researchers, however, turned their attention to the AI itself. They found that when AI subjects were asked what they thought of names, European American names were “significantly more easily associated [by the AI] with pleasant than unpleasant terms, compared with a bundle of African-American names.” The AI was learning from the web, where people often say racist things about African Americans. From the deepest and darkest of internet forums to … Fox News.
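The mechanism behind that finding can be sketched in a few lines. The vectors and word lists below are invented toy values, not real embeddings, but they show the shape of the test: a model trained on biased text places one group’s names nearer to “pleasant” words than another’s, and the gap can be measured with simple cosine similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-dimensional "embeddings" a model might learn from biased text.
emb = {
    "emily":      [0.9, 0.1, 0.0],  # European-American name (illustrative)
    "jamal":      [0.1, 0.9, 0.0],  # African-American name (illustrative)
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def association(word):
    """Positive score = the word sits closer to 'pleasant' than 'unpleasant'."""
    return cosine(emb[word], emb["pleasant"]) - cosine(emb[word], emb["unpleasant"])

# The bias absorbed from the training text shows up as a gap between the names.
print(association("emily"), association("jamal"))
```

With these toy vectors, “emily” scores positive and “jamal” negative: the model has quietly encoded an association it was never explicitly taught.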
Bias in, bias out.
“An even more controversial usage of AI appears in “Predictive Policing”, where police departments share their crime data with tech companies, who then produce algorithms to “predict” where a crime will take place.”
A highly controversial use of AI is facial recognition technology. Studies have shown that even the best systems misidentify black faces at five to ten times the rate of white ones. Amazon’s Rekognition software recently falsely identified 28 members of the US Congress as criminals. Of those, 40% were people of colour, despite people of colour making up only 20% of Congress. Despite this, facial recognition technology continues to grow in popularity, with companies such as Faception claiming to be able to predict sexual orientation from faces, or even to identify whether people are terrorists or paedophiles. (This latter company’s staff includes no black people and just one woman, according to their website.) In the words of Princeton facial perception expert Alexander Todorov: “if you feed machine learning with crap, it will spit out crap in the end.”
An even more controversial usage of AI appears in “Predictive Policing”, where police departments share their crime data with tech companies, who then produce algorithms to “predict” where a crime will take place. You can see where this is going.
Researchers at the AI Now Institute examined 13 jurisdictions in America that used predictive policing. Worryingly, this information was only accessible because these jurisdictions were under investigation for “corrupt, racially biased, or otherwise illegal policing practices.” In Chicago, police developed an algorithm to list “individuals at risk” of committing a crime. Over a third of the individuals on the list had never been arrested, and 70% of that third received a “high risk score”. Furthermore, the AI gave a risk score to 56% of black men under 30 in Chicago – the group that bore the brunt of the police’s proven unlawful practices.
Predictive policing doesn’t really predict where a crime is going to take place; it bases its predictions on where past police operations occurred. In Chicago, the data consisted of arrest records – not convictions. This means the AI can only ensure that the same historically over-policed areas remain over-policed. PredPol, the company behind one of these systems, acknowledged this seemingly fatal flaw by excluding drug-related offences, given the racialised nature of such arrests. As if that were the only racialised crime.
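The feedback loop is stark when simulated. In this toy sketch (areas, counts and rates are all invented), the “prediction” is nothing more than a tally of past arrests, so the most-arrested area keeps getting patrolled, and patrols generate the very arrests that justify the next round of patrols.

```python
# Historical arrest records (not crime rates) for two hypothetical areas.
arrests = {"area_a": 50, "area_b": 10}

for _ in range(5):
    # "Predict" crime wherever past arrests were highest, and patrol there.
    patrolled = max(arrests, key=arrests.get)
    # Extra patrols produce extra arrests, regardless of underlying crime,
    # which feeds straight back into the next prediction.
    arrests[patrolled] += 5

print(arrests)  # area_a only pulls further ahead; area_b is never revisited
```

Even if both areas had identical real crime rates, the system would never find out: it only ever looks where it has already looked.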
Predictive policing is not unique to America; it can be found across much of Europe. In its independent study of its usage in the UK, RUSI expressed “legitimate concern that the use of algorithms may replicate or amplify the disparities inherent in police-recorded data, potentially leading to discriminatory outcomes.” However, it claimed a “lack of sufficient evidence” when assessing whether such discrimination was actually occurring. Given the litany of reports of systemic racism in UK policing, this seems like wilful ignorance.
As for after an arrest, ProPublica studied how police use AI to predict the risk of reoffence, and revealed how these risk assessments “are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts … to criminal sentencing.”
The AI predicted reoffence with an accuracy of just 61%. Only 20% of those flagged as likely to commit violent crimes actually went on to do so. And the imbalance of the AI’s results against black people is startling.
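The kind of disparity ProPublica measured can be illustrated with a false-positive-rate comparison: among people who did *not* reoffend, how often was each group flagged high-risk anyway? The eight records below are invented for illustration; the real analysis covered thousands of cases.

```python
# Each record: (group, flagged_high_risk, actually_reoffended) – toy data.
records = [
    ("black", True,  False), ("black", True,  False),
    ("black", True,  True),  ("black", False, False),
    ("white", True,  True),  ("white", False, False),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in a group who were flagged high-risk anyway."""
    no_reoffence = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in no_reoffence if r[1]]
    return len(wrongly_flagged) / len(no_reoffence)

print(false_positive_rate("black"), false_positive_rate("white"))
```

In this toy sample, two of the three black non-reoffenders were wrongly flagged, against none of the white non-reoffenders – the same asymmetric error pattern, in miniature, that the real study reported.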
Bias in … you know the rest.
What’s the solution?
As with all these types of problems, the solution of simply dismantling the structures that support systemic racism … isn’t an easy one. One clear step that needs to be taken is that tech companies need to be more proactive in opening their doors to black engineers.
The benefits of this change of approach aren’t hypothetical – you can see it today. As just one example, in their list of “the Most Influential Blacks in Technology” Black Enterprise listed Ralph A. Clark, CEO of Shotspotter, an AI-powered company that works with police to detect the location of gunshots in real-time. Instead of relying on inaccurate or incomplete “dirty data”, the AI relies on a network of sensors to determine the gunfire’s location. This is an example of AI being used appropriately (indeed, expertly) in the world of law enforcement.
However, severe underrepresentation can make tech an unattractive prospect for black engineers, and it is often uncomfortable for the few who do work there. Leslie Miley, an engineering manager for Google, revealed how on more than one occasion he has been physically body-checked on his way into the office by a colleague demanding to see his badge.
One anonymous black Facebook employee claimed to be “very lonely” at the office, and said that Facebook’s ham-fisted attempts to promote diversity at team events made him feel like he was in the movie Get Out, where black people were used to manufacture a more “welcoming” environment. Former Facebook employee Mark Luckie claimed there were more “Black Lives Matter” posters in their office than there were black people.
This sentiment was echoed by Nathan Macabuag, founder of the bio-tech company Mitt Wearables and one of the 1% of black European tech founders. While Nathan has taken every aspect of his experience firmly in his stride, he does admit that the world of tech is a “whitewash”. Akin to the case of the anonymous Facebook employee, Nathan says that he was made most aware of underrepresentation at tech meet-ups. He explains how, when speakers would exalt the need for greater diversity, he’d “feel all eyes looking at me, and I’d look around the room and think ‘oh… well… this is awkward’.”
Facebook, for their part, say that there simply aren’t enough qualified minority candidates to choose from, claiming in 2016 that “appropriate representation… will depend upon more people having the opportunity to gain necessary skills through the public education system.” This, frankly, is rubbish. Quite apart from Atlanta providing a city-sized counterpoint, Black and Latinx people currently “earn nearly 20% of computer science Bachelor’s Degrees”, despite making up less than 5% of the workforce in Big Tech. In that same statement, Facebook patted themselves on the back for their new senior leadership hires being 9% Black, 5% Hispanic and 29% women. That’s 5, 13 and 21 percentage points below those groups’ shares of the general population, for those keeping count.
This disappointing attitude is parroted at other Big Tech companies. Amazon’s response to criticism of its Rekognition technology was to attempt to discredit the research behind it. And a study of 32 Big Tech companies found that just 5% of their 2017 philanthropic giving was focused on correcting the industry’s gender imbalance. Less than 0.1% was directed at removing the barriers that keep women of colour from careers in tech.
“Tech companies know that (yet) more empty promises will threaten their company’s well-being in the current climate”
But the last few weeks have presented an opportunity to make a real difference. The snail’s pace of “progress” on diversity in tech is becoming ever more obviously unacceptable, and subsequently more difficult to ignore. Tech companies know that (yet) more empty promises will threaten their well-being in the current climate.
In addition, there are numerous organisations working hard today, dedicated entirely to improving black representation in tech. Some, such as Code2040, have gathered a network of over 6,000 students, company partners and volunteers in order to provide black and Latinx tech students with their first jobs. Others, such as the Hidden Genius Project, focus more on training up young black men and getting them ready for a career in tech. And, returning to where we started, in Atlanta, digitalundivided focuses on improving the prospects of black and Latinx women entrepreneurs, and on removing the barriers set out in front of them.
It’s unsurprising that such a large proportion of black tech workers are dedicating their time to improving representation. The appetite for this is evidenced in the strength of industry networks, such as Blacks In Technology, which has 14 chapters across the US, dedicated to “increasing the representation and participation of black women and men in the technology industry.”
They know the importance of having an influence on the development of the tech our society depends upon. They know better than anyone the legitimate dangers of the current diversity crisis in tech. And you only need to look at Atlanta – at how black entrepreneurs have managed to prosper despite systemic racism in the industry – to be encouraged that the future of tech will be in better hands than it is now.