Artificial Intelligence Ethics Crucial, Says House of Lords

Britain can take a leading role in the future of artificial intelligence development, but how will the future look without clear-cut regulation and a code of ethics? The House of Lords weighs in…

A report from the House of Lords recommends that ethics be a key focus in the future of artificial intelligence, saying it should be developed “for the common good and benefit of humanity” and never given “autonomous power to hurt, destroy or deceive” people.

AI in the UK: Ready, Willing and Able highlights the UK’s opportunity to become a world-leading nation in the development of artificial intelligence, but insists that clear-cut ethics must be established around the development and deployment of such technology. The report claims that, as a world leader, the UK can “help shape the ethical development of artificial intelligence” on a global scale, thereby helping to set the fundamental guidelines for all future AI development.

The report also addresses other key issues surrounding the development of AI – the potential effect on workers and data privacy rights.

The issue of job losses caused by the introduction of AI and robotics has been discussed for some time, with conflicting claims and, in some cases, exaggerated figures. With regard to data privacy, concerns have been raised about the potential use of AI for data harvesting.

Just how can Britain lead the world in the development of AI?

Benefiting Mankind

Britain could take a leading role in the development of artificial intelligence in years to come. The report implies that, as a potential world leader, Britain has a moral responsibility to begin developing a basic ethical framework that other nations can follow, saying “[the UK] can help shape the ethical development of artificial intelligence and to do so on the global stage.”

The committee has recommended a series of principles that it believes AI development should follow, stating that AI should “operate on principles of intelligibility and fairness, and not be used to weaken data rights and privacy”. It also added that “the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence”, clearly establishing that AI should not be used in an aggressive capacity.

In addition to basic principles, the committee has also called for a code of ethics and urged the government to take an active role in leading the discussion, stating: “Many organisations are preparing their own ethical codes of conduct for the use of AI.”

It also said: “This work is to be commended, but it is clear there is a lack of wider awareness and coordination where the government could help.”

The government has recently allocated £9 million in funding to the Centre for Data Ethics and Innovation, setting in motion plans to establish the world’s first public advisory body on the use of this technology. Through this it hopes that cooperation between industry, government and regulatory bodies will both fuel the development of AI in industry and establish guidelines for years to come.

Public Understanding

Educating the public on artificial intelligence is a key point in the report, which says “all citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.”

The report welcomes government plans to increase the number of computer science teachers in secondary schools and urges that support structures be put in place so that teachers can retrain to teach the subject adequately. Education to prepare primary school children for later life is also considered essential: today’s young students will likely be working with and using artificial intelligence, so the need for training and understanding is critical.

AI and Big Data

According to the committee, the way in which data is currently gathered and accessed is outdated and damaging to individuals, and the introduction of AI raises additional questions of legality and ethics. The committee highlights recent failures in data privacy and notes that, in an ever-evolving digital world, citizens’ privacy is becoming increasingly important.

The report said: “we have heard considerable evidence that the ways in which data is gathered and accessed needs to change, so that innovative companies as well as academia, have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency in this rapidly evolving world.”

The use of AI in big data is revolutionary and highly effective, but that power, if abused, is dangerous, and clear-cut guidelines and regulations must be introduced to prevent foul play. This means “using established concepts, such as open data, ethics advisory boards and data protection legislation” to protect citizens’ data and “developing new frameworks and mechanisms, such as data portability and data trusts” to provide greater oversight of the issue.

Protecting Jobs

The threat posed to jobs by artificial intelligence is brought up regularly as the technology develops, and the report predicts that “AI will disrupt a wide range of jobs over the coming decades, and both blue and white collar jobs which exist today will be put at risk.” To combat the potential risk of job losses, the report recommends retraining programmes be introduced for workers affected by the introduction of AI in their respective industries.

This does not necessarily mean retraining people to work in entirely different industries. Instead, the report insists that retraining will enable workers to develop skills for the new jobs and professions that will arise from these technologies. It said:

“It will therefore be important to encourage and support workers as they move into the new jobs and professions we believe will be created as a result of new technologies, including AI.

“It is clear to us that there is a need to improve digital understanding and data literacy across society, as these are the foundations upon which knowledge about AI is built.”

An Oxford University study from 2013 predicted that up to 35% of jobs in the UK were at high risk of being automated within 20 years, which broadly matches the House of Lords’ predictions. Despite both these predictions, a recent study by the Organisation for Economic Co-operation and Development (OECD) says claims of huge job losses are exaggerated.

According to the OECD study, the figure is far lower than that predicted by Oxford University, with only 12% of jobs expected to disappear due to rapidly developing technologies.

Public Confidence

A recent report from OpenText also suggests that public fears of artificial intelligence are diminishing.

A similar survey in 2017 detailed how nearly a third of participants feared that their job would inevitably be taken by a robot or AI; in this year’s report, however, that figure has fallen to 21%. Around 60% of participants even believed that their job would never be lost to automation or other technological advances.

This points toward a growing belief that artificial intelligence and robotics will assist and improve human roles rather than outright replace them. 18% of respondents said they were “nervous” about the rise of AI, while 17% said they were “excited” by the technology’s meteoric rise.

 


