Artificial Intelligence: Ethics, Regulation and Public Perception

Ivana Bartoletti, head of privacy and data ethics at Gemserv, believes that the deployment of discriminatory AI systems may “automate inequality” rather than alleviate it. 

The ethical debate surrounding the use of artificial intelligence (AI) is becoming deeply ingrained in the public consciousness, with questions lingering over bias, discriminatory systems and regulation.

Bombarded with news stories of autonomous weapons systems reminiscent of Hollywood films, of mass unemployment and dystopian futures, are the public being adequately informed about the potential benefits of AI?

Ivana Bartoletti, head of privacy and data ethics at professional services firm Gemserv, believes the debate is being muddied by hyperbole, which risks highlighting only the negative outcomes of what is a transformational technology.

“For a lot of people, when they think of artificial intelligence, they think of Sci-Fi,” she says. “They may just think of robots and they don’t truly realise that AI is already here with us. They don’t associate it with a sat-nav or with digital advertising.

“People often do not understand that the advert they’ve seen online is algorithm-driven. The problem here is because of the way it is portrayed in the media and because of the complexity of the topic.”

Job losses due to increased automation are, Bartoletti concedes, a topic people should be concerned about and discussing. The fear of being replaced by a more efficient system is, she believes, an inherent human trait; one that isn’t exclusive to people in the 21st century.

Hyperbole

Hype and hyperbole in the media certainly have not helped the situation. It’s hardly surprising that people jump to the negative when the majority of the content they’re faced with is inherently pessimistic.

“There is a fear of losing jobs, but I think that comes second. People fear the loss of jobs because we’ve been told that some sort of AI is coming and it’s going to take over everything,” she asserts. “It’s more of a consequence than a cause, and the cause is the way that AI is portrayed in the media. For example, if one driverless car makes one mistake, we talk about it for days on end.”

People rarely discuss the positive impact of AI with the same intensity, she suggests. Maintaining objectivity is essential to informing people about something that has the potential to change the world around us, and there is an abundance of positive work underway in the AI field. Experts at South Korean company Lunit developed their INSIGHT algorithm to detect lung and breast cancer with remarkable accuracy – a 97% detection rate, to be precise.

Similarly, AI developed by DeepMind has also yielded promising results, correctly diagnosing common eye problems with an impressive 94.5% success rate during a pilot scheme. These are the areas we should be focusing on, Bartoletti remarks: rather than embracing pessimism and creeping dystopian rhetoric, we should ask what AI can do for society.

“We have this great wide awakening in society on the environment and we are at a tipping point right now, so why don’t we ask how we can use AI to improve efficiency in the energy sector, for example?” she asks.

“Why don’t we use artificial intelligence to alleviate and detect physical illness before it materialises? There is so much in terms of what we can do and address. I think this is where AI is ultimately a good thing and it’s where I’d like to see the debate go. If we decide to go down this path then that’s the best way forward.”

Maintaining that balance, however, will be key to the future use of artificial intelligence, she insists. While there are many great examples of its transformative potential, there are still lingering questions in a host of areas – ethics, of course, being a key talking point.

The term ‘algorithmic bias’ is often bandied around, Bartoletti suggests, which once again underlines the lack of appropriate information on the subject. This is a critical issue for AI and practitioners within the sector, and it is a topic that could come to define the future of the technology.

“Bias, in itself, is not a problem – we are all biased,” she says. “The problem is when bias becomes prejudice and this is the risk and why we are concerned about it – it’s the scale of it all. What is concerning is the idea that this bias can turn into stereotype and, therefore, prejudice.”

Bias stems from two main sources, Bartoletti says. The first is the use of data, much of which is historical. If one were to take a photo of, say, senior engineers at this precise moment, perhaps the majority of them would be men. An organisation training a system on that pool of data would end up with a warped understanding of the landscape: one could come to believe that engineering is an exclusively male profession.

“One of the problems we have is that whether it’s the entire historic data pool or just a sample, either way, we have potentially biased data going into systems,” she explains.

In 2018, Amazon scrapped an AI-based recruitment platform that displayed bias against women. Its computer programs, designed to review job applicants’ CVs and pinpoint the best available talent for the firm, were not highlighting female candidates because the models had been trained on a talent pool dominated by male candidates.
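To make the mechanism concrete, here is a minimal sketch in Python (using NumPy and scikit-learn, both assumed to be available). The data and feature names are invented purely for illustration and bear no relation to Amazon’s actual system; the point is only to show how a model trained on skewed historical decisions absorbs that skew.

```python
# Minimal sketch of historical bias leaking into a model (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Invented features: a genuinely job-relevant signal, plus a proxy that
# correlates with gender (e.g. a term appearing mostly on women's CVs).
skill = rng.normal(size=n)
gendered_proxy = rng.integers(0, 2, size=n)

# Historical decisions favoured candidates without the gendered proxy,
# regardless of skill - this is the biased "historic data pool".
hired = ((skill > 0) & (gendered_proxy == 0)).astype(int)

X = np.column_stack([skill, gendered_proxy])
model = LogisticRegression().fit(X, hired)

# The model assigns a large negative weight to the gendered proxy: it has
# learned the past prejudice alongside the legitimate skill signal.
print(dict(zip(["skill", "gendered_proxy"], model.coef_[0].round(2))))
```

Nothing in the training step is malicious; the model simply reproduces the pattern baked into past decisions, which is exactly the problem Bartoletti describes with biased data going into systems.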

“The second cause [of bias] proves to be slightly more problematic and this is what scientists call a ‘ground truth’, which relates to the hidden biases that are in some labels or proxies.”

Labels and proxies are a major talking point within the ethical debate. Even a variable as simple and unassuming as a postcode can introduce bias by revealing race or socioeconomic background, she explains.

“If you put that into the algorithm without understanding the consequences, it might come up with a biased output simply because the specific postcode may bring specific characteristics with it.”
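As a concrete illustration of this proxy effect, the sketch below (again in Python with scikit-learn, on invented data) shows why simply deleting a sensitive field is not a fix: a postcode that correlates with a protected attribute lets even a trivial model reconstruct that attribute almost perfectly.

```python
# Minimal sketch of proxy leakage (hypothetical data): the protected
# attribute has been "removed", but postcode still encodes it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2_000

# Invented setup: neighbourhoods are segregated, so postcode agrees with a
# protected attribute (e.g. an ethnicity or income band) 90% of the time.
protected = rng.integers(0, 2, size=n)
postcode = np.where(rng.random(n) < 0.9, protected, 1 - protected)

X_train, X_test, y_train, y_test = train_test_split(
    postcode.reshape(-1, 1), protected, random_state=0
)

# A one-feature model recovers the "removed" attribute from postcode alone.
clf = LogisticRegression().fit(X_train, y_train)
print(f"protected attribute recovered from postcode: {clf.score(X_test, y_test):.0%}")
```

Auditing input features for exactly this kind of correlation before training is one practical response to the hidden-bias problem in labels and proxies.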

Addressing the issue of bias is no easy task and Bartoletti suggests that ongoing problems in this regard raise questions over diversity in the tech sector workforce – not just within developer teams, but from the boardroom down.

“Diversity is important and organisations need to do everything they can to ensure they are diverse,” she says. “There is one major piece of the jigsaw here, though, which is that it’s not always about just fixing an algorithm. That’s the other mistake: people say ‘let’s just fix it and it will be fine’.

“You can have a perfect algorithm, developed by the most diverse workforce ever assembled, but then still use it for the wrong thing. There is an issue about what AI should be used for, and that’s why you need diversity not just in the developer team but at board level, so that boards can define what AI should be used for.”

Bartoletti is fiercely passionate about the topic of discrimination and exclusion in this field. Having co-founded Women Leading in Artificial Intelligence, she says there must be an emphasis on bringing more women into the debate – not just women who are data scientists, but women at every level within an organisation.

“AI is much more than just technology,” she says. “It’s about power. I’d like to see more people from different backgrounds getting involved in the conversation.”

Women and people of colour are at the greatest risk of being discriminated against when automated decision-making or AI software is in the equation. The Notting Hill Carnival, for example, has become something of an ideological battleground in discussions of discriminatory AI, after people of colour were wrongfully identified there by facial recognition systems.

In Bartoletti’s view, what often goes unconsidered is that when we automate decisions and let machines take the lead instead of human beings, bias puts people at real risk. The problem is also interwoven with public perception and awareness: without a clear-cut understanding of how these technologies work and how they are being used, people have no opportunity to react.

“If a person isn’t aware of what’s happened to them, then what happens is instead of alleviating inequality, you’re actually automating it,” she asserts.

Regulation of artificial intelligence will be crucial going forward. While Bartoletti outlines two potential routes that could be taken, she admits that there is no one-size-fits-all approach. Instead, regulation must be tailored both to the changing dynamics of society and to the needs of individual sectors.

Regulation at a sector level has to be a priority: the way we regulate artificial intelligence’s use in health will be very different from the way it is done in financial services, for example. This need not be cumbersome either, she insists, as there are already regulatory frameworks that can be leveraged and updated as the technology evolves.

“We need to be leveraging what we already have. We do have human rights law, we do have anti-discrimination law and we do have data privacy legislation. These are all living instruments. GDPR is there and it is an instrument that will likely need to evolve over time and with the technology,” she says.

The main hurdle will be whether or not regulators can collaborate and demonstrate they have the ability to truly regulate. A strong regulatory framework could help to reassure the public over what technologies are being deployed – and how – as well as ensure that companies operate within the letter of the law.

“The real question is whether or not we can have a regulator with real teeth, which can go into an organisation and say ‘I want to see how that thing works’ – regardless of whether that algorithm is proprietary.”

Cooperation

International collaboration and cooperation will be essential as artificial intelligence continues to be deployed in societies around the world. The European Commission has already broken ground in laying the foundations for this governmental approach, creating guidelines and templates for organisations to draw upon. Earlier this year, the EC issued a call for businesses to sign up to the scheme.

Guidelines which draw upon current legislation and take into account the views of the public are a strong start, Bartoletti believes. However, cultural and regulatory differences could prove troublesome.

“There’s a great disparity in what’s happening around the world. For example, artificial intelligence and big data have, in general, been largely unregulated and market-driven, whereas what we see in China is the complete opposite – and it’s about trying to find a middle ground between the two,” she says.

“There will need to be cooperation between all sectors of society. The problem right now is that the big organisations are the ones that are dominating the digital world and they’re also the ones dominating the debate on this. Perhaps it’s time to open this debate up a little bit.”

While Google, Amazon and other multinational corporations are leading the charge in AI development, questions must be asked over whether they can be trusted to shape the future of ethical frameworks.

Ethics boards have been one avenue through which companies such as Google and Microsoft have sought to lead the debate. However, an increasingly disruptive mood among employees at top companies is also helping to shape the vision of what AI should be used for.

Workers have spoken out in recent years and flipped the debate on its head. Google experienced a walkout and a fierce backlash over its involvement in military projects, while employees at Amazon have raised serious concerns over its Rekognition software being sold to law enforcement. This movement is, in part, driven by employees who want their technology to be used for good.

“What we are seeing with the Google walkout and at Amazon is that people want their tech to be used for good things which improve society. Where they don’t see this happening, they are more than ready to speak out,” she says.

“It’s not by chance that younger generations don’t really want to work for certain companies anymore – they feel they’re not contributing to society as much. Younger generations are much more values-based, and I think what it’s showing is that there’s a mismatch in society: digital dividends are not distributed fairly and remain in the hands of big companies.”

There is no denying that artificial intelligence will have an enormous impact on both society and a myriad of industries in the years to come. This underlines the importance of ‘getting it right’ – addressing the lingering questions and fears, and ensuring that the positives are communicated to the public.

For organisations, AI adoption will not be a case of ‘take it or leave it’; it will be integral to their survival. A recent report from management consultancy McKinsey suggested that companies that fail to adopt AI risk losing up to 20% of cash flow. With such high stakes and the potential to fall behind in a rapidly evolving marketplace, adhering to ethical frameworks while continuing to develop and deploy will be crucial.


