Tech Sector “Diversity Disaster” Contributing to Biased AI Systems

Dominique Adams

New research has revealed that the lack of diversity in the field of Artificial Intelligence (AI) has reached “a moment of reckoning”, and risks perpetuating historical biases and power imbalances.

Research published by the AI Now Institute has found that the AI sector, which is predominantly populated by white males, is contributing significantly to the continuation of historical bias and patterns of discrimination.

The report, the product of a year-long exploration of the issue that draws on 150 previous studies, found that AI technology is mostly being created by large tech companies, a handful of universities, and older wealthy white men who stand to benefit from such flawed systems.

Such systems, researchers say, can harm people of colour, gender minorities and other under-represented groups; the researchers attribute these flaws to a distinct lack of diversity in the tech sector. The study cites examples such as image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognise users with darker skin tones.

According to the study, only 15% of AI research staff at Facebook and 10% at Google are women, and the overall proportion of black employees at companies such as Microsoft, Google, and Facebook ranges from 2.5% to 4%. In 2015, the National Science Board revealed that women made up only 24% of the computer and information sciences workforce.

The researchers are calling the current ratio a crisis, especially as AI is being used to determine loan and insurance approvals, who gets a job interview, and who will be granted bail, as well as in predictive policing and more.

Kate Crawford, a contributor to the report, said: “The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems. The use of AI systems for the classification, detection and prediction of race and gender is in urgent need of re-evaluation.”

Expressing deep concern, the report urges action to rectify the situation, saying AI systems need to be overhauled. To demonstrate the impact of this AI bias, the study cites an incident last year in which Uber’s facial recognition system – which is made by Microsoft – failed to recognise a transgender driver’s face, causing her to be locked out of the company’s app and to miss three days of work.

Sarah Myers West, a postdoctoral researcher at AI Now and lead author of the study, said: “There is an intersection between discriminatory workforces and discriminatory technology,” adding that there is a need for “a greater level of transparency” around AI, which she says is “largely obscured by trade secrets”.

Meredith Whittaker, co-founder and co-director of AI Now and founder and lead of Google’s Open Research group, said it’s “important to look beyond technical fixes for social problems”.

Danaë Metaxa, a PhD candidate and researcher at Stanford focused on issues of internet and democracy, said: “The urgency behind this issue is increasing as AI becomes increasingly integrated into society. Essentially, the lack of diversity in AI is concentrating an increasingly large amount of power and capital in the hands of a select subset of people.”


Researchers and diversity advocates worry that the issue may not be addressed before it reaches a “tipping point”. Tess Posner, CEO of AI4All, a not-for-profit that works to increase diversity in the field of AI, said: “Every day that goes by it gets more difficult to solve the problem. Right now we are in an exciting moment where we can make a difference before we see how much more complicated it can get later.”

To solve the issue, the report cautions against fixating on the “talent pipeline” problem – that is, the makeup of who is hired – as the majority of applicants for AI roles are men, who comprise 71% of the applicant pool according to the 2018 AI Index.

Instead, the report urges companies to adopt measures such as publishing worker compensation levels publicly, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of under-represented groups at all levels.

Dominique Adams

Staff Writer, DIGIT