
GCHQ Using AI to Fight Child Abuse, Disinformation and Trafficking

David Paul


The intelligence and security organisation says it wants to embrace AI to combat online threats, according to a new report.

GCHQ says its analysts will use artificial intelligence (AI) to “protect Britain from threats” such as state-backed disinformation campaigns or cyber-attacks.

In a newly published paper, titled Ethics of AI: Pioneering a New National Security, GCHQ says that the tech will “be at the heart” of its mission to keep the country safe as technology is increasingly adopted.

However, the technology will primarily be used to tackle the ongoing threats of online child sex abuse, disinformation and trafficking. The report states that AI “will be a critical issue for our national security in the 21st century”.

GCHQ Director Jeremy Fleming said that using AI technology “offers great promise for society, prosperity and security,” and that its impact on GCHQ is “equally profound”.

“AI is already invaluable in many of our missions as we protect the country, its people and way of life,” Fleming said. “It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cybersecurity.

“While this unprecedented technological evolution comes with a great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.”

The paper coincides with UK Government plans to publish its Integrated Review into security, defence, development and foreign policy.

GCHQ’s paper also considers AI transparency and details how the organisation will “ensure we use AI fairly and transparently, applying existing tests of necessity and proportionality”.

This would include establishing an AI ethical code of practice, recruiting more diverse talent to help develop and govern the use of AI, protecting privacy and striving for systematic fairness.

Fleming continued: “Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”


Today’s paper follows a report released in November 2020, backed by Europol and the United Nations Interregional Crime and Justice Research Institute (UNICRI), which warned that the continued use of AI increases the danger of cyberattacks.

The report, titled Malicious Uses and Abuses of Artificial Intelligence, warned that AI “provides both a powerful tool for hackers and a vulnerable target for attacks”.

However, AI has already proven itself as a potentially potent tool to carry out tasks traditionally done by humans.

In August last year, an AI-controlled fighter jet beat a human pilot in a simulated dogfight staged by the US Defense Advanced Research Projects Agency (DARPA) as part of its Air Combat Evolution (ACE) programme.

Over the first two days, eight AI systems battled one another, with the field narrowed to four before a final winner emerged.

David Paul

Staff Writer, DIGIT
