Balance and Neutrality in Artificial Intelligence: Why it Matters

With all the potential benefits artificial intelligence can bring, it is crucial that we ‘get it right’ now to mitigate possible negative outcomes in the future. 

From banking, education and shopping to other everyday activities, artificial intelligence (AI) is becoming an ever-increasing presence in our world. The development and deployment of AI holds enormous potential to improve billions of people’s lives around the globe.

The technology has a huge capacity to do good in areas such as government, healthcare, law, business, education and the environment. And the EU’s High-Level Expert Group on Artificial Intelligence has said that AI is key to addressing many of the grand challenges set out in the UN Sustainable Development Goals.

The ways in which AI can benefit us are seemingly limitless. From managing cities to making government more efficient and transparent, it has the power to accelerate innovation exponentially. For example, the application of AI technologies in the UK’s cash-strapped NHS could shorten waiting times, facilitate better treatments and patient outcomes, speed up illness detection and thus maximise the resources it has.

But, despite all the possible benefits AI can bring, there is a risk that without careful intervention and consideration it could instead exacerbate existing structural, economic, social and political imbalances. Without balance and neutrality, rather than levelling the playing field, ensuring greater access for all and benefiting society, AI could reinforce inequalities based on demographic variables including ethnicity, gender, location, age and educational and/or socioeconomic status.

Research by Boston University found that AI can be gender biased in the semantic connections it learns from text: the word ‘programmer’, for example, was more strongly associated with men than with women. A study by MIT’s Media Lab also revealed that facial recognition algorithms are 12% more likely to misidentify dark-skinned males than light-skinned males.
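As a rough illustration of how such semantic bias can be measured, the sketch below compares cosine similarities between an occupation word and gendered words. The vectors are toy values invented for this example; real studies use word embeddings trained on large text corpora (word2vec, for instance), so the specific numbers here carry no empirical weight.

```python
# Illustrative sketch of measuring a gendered association in word embeddings.
# The three-dimensional vectors are toy values chosen for the example; real
# analyses use embeddings learned from large text corpora.
import numpy as np

toy_vectors = {
    "he":         np.array([0.9, 0.1, 0.0]),
    "she":        np.array([0.1, 0.9, 0.0]),
    "programmer": np.array([0.8, 0.2, 0.3]),  # placed closer to "he" by design
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A word is "gender biased" in this sense if it sits systematically closer to
# one gendered word than the other.
bias = cosine(toy_vectors["programmer"], toy_vectors["he"]) - \
       cosine(toy_vectors["programmer"], toy_vectors["she"])
print(f"programmer leans male by {bias:.2f}")  # positive value = male-leaning
```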

An AI divide is emerging, driven by bias that stems from the quality of the data used to train AI systems. In 2018, Amazon shut down an AI-based recruitment system that displayed bias against women. The system failed because the pool of data it was trained on was predominantly male.
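The mechanism is easy to demonstrate. The following sketch is not Amazon’s system, only a minimal, synthetic example of how a model trained on historically skewed outcomes learns to reproduce that skew; every variable and number in it is invented for illustration.

```python
# A minimal sketch of bias inherited from skewed training data.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)      # 0 = female, 1 = male (synthetic attribute)
skill = rng.normal(size=n)          # skill is distributed equally across genders

# Historical "hired" labels that favoured male applicants regardless of skill.
hired = ((skill + 1.5 * gender + rng.normal(scale=0.5, size=n)) > 1.2).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two equally skilled candidates, differing only in the gender feature:
candidates = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # the male row scores higher
```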

A label or proxy as simple and unassuming as a postcode could introduce bias into an AI system by revealing race or socioeconomic background, according to Ivana Bartoletti, head of privacy and data ethics at Gemserv and co-founder of Women Leading in Artificial Intelligence.
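A hypothetical sketch of that proxy effect: the protected attribute below is never shown to a downstream model, yet it can be largely reconstructed from a synthetic postcode feature alone. All names and numbers are invented for illustration.

```python
# Illustration of a proxy variable: a postcode feature that largely encodes a
# protected attribute, even though the attribute itself is never used directly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 5000
# Synthetic neighbourhoods (postcodes 0-9) with very different demographic mixes.
postcode = rng.integers(0, 10, n)
protected = (rng.random(n) < postcode / 10).astype(int)  # correlated with postcode

# One-hot encode the postcode and try to recover the protected attribute from it.
X = np.eye(10)[postcode]
probe = LogisticRegression().fit(X, protected)
print(accuracy_score(protected, probe.predict(X)))  # well above the chance baseline
```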

If this divide is allowed to grow unchecked, it could have disastrous consequences for marginalised groups. To address it and mitigate the risks, more stakeholders must engage with the problem, particularly those who may be most affected by the changes AI will bring to everyday life.

To address the issue of bias, there also needs to be increased diversity within the tech sector itself; it is more than just a case of fixing algorithms. Research conducted by WIRED and Montreal-based startup Element AI revealed that only 12% of leading machine learning researchers were women.

It is key that government is prepared for the socio-economic changes AI will bring, and that appropriate ethical and legal frameworks are in place to protect those who could be adversely affected. Fostering greater dialogue between public bodies, industry, academia and the public could reduce the risk of a negative outcome.

According to professional services firm PwC, “perhaps no other emerging technology has inspired such scrutiny and discussion”. Therefore, it is crucial that society fosters an environment of transparent, open communication on this topic. In an ever-evolving marketplace, the adoption of AI is swiftly becoming a matter of survival for many organisations. Research by consultancy firm McKinsey suggested that firms which fail to adopt could risk losing out on up to 20% of cash flow. The need to have these discussions and to put the necessary ethical frameworks in place is pressing and cannot be put off for a future generation to deal with.

To join the debate and learn more about this topic make sure to attend our AI: Driving Positive Impact & People-Centric Innovation panel discussion at DIGIT Expo 2019, which will be chaired by Lynsey Campbell, Chair of Scotland Women in Technology.

She said: “As the desire and need for AI and automation continues to grow, this is the crucial time for ethics and innovation to work in harmony to develop safe and inclusive solutions for the world. Our panel will explore the challenge and provide guidance on how to act responsibly in this space.”

Panellists will include Ishbel MacPherson, Legal Director, DLA Piper; Dr Shirley Cavin, Data Science Manager, Leidos; Olivia Gambelin, Founder, Ethical Intelligence; and Jane Pitt, Program Manager, Microsoft.


