Artificial intelligence (AI) could soon be a common feature at European airports as part of an EU trial at border checkpoints.
The project, called iBorderCtrl, will feature AI-powered lie detectors which question passengers seeking entry into the European Union.
This AI border guard will be tested across the union, including airports in Hungary, Latvia and Greece, on passengers travelling from outside the EU.
AI Border Guard
Passengers arriving at the trial airports will be asked a series of questions by a virtual avatar, with AI monitoring their facial expressions to assess whether they are lying.
As part of the process, the avatar will become increasingly sceptical and direct in its questioning, even changing the tone of its voice if it believes a passenger has lied. Passengers judged to be answering honestly will be allowed to pass through border controls, while those suspected of lying will be referred to a human officer.
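The triage logic described above can be sketched in a few lines. This is purely illustrative: the real iBorderCtrl system has not been published, and the function names, scoring scale and threshold below are invented for this example.

```python
# Hypothetical sketch of the pass/refer decision flow described in the
# article. Nothing here reflects the actual iBorderCtrl implementation.

REFER_THRESHOLD = 0.5  # invented cut-off for referral to a human officer


def screen_passenger(responses, score_response):
    """Return 'pass' or 'refer' given per-answer deception scores in [0, 1].

    `score_response` stands in for the AI model that rates each answer;
    here any callable mapping a response to a score will do.
    """
    scores = [score_response(r) for r in responses]
    suspicion = max(scores) if scores else 0.0
    return "refer" if suspicion >= REFER_THRESHOLD else "pass"


# Example with a stand-in scorer that flags one suspicious answer:
fake_scores = {"What is the purpose of your visit?": 0.8,
               "What is your name?": 0.1}
print(screen_passenger(list(fake_scores), fake_scores.get))  # prints "refer"
```

The key design point the article implies is that the system never issues a refusal itself: a high suspicion score only escalates the passenger to a human.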
The avatar will also ask basic questions, including the passenger’s name, age, date of birth and the purpose of their visit.
According to the European Commission, the project aims to streamline border checks and help border guards in “spotting illegal immigrants” – this, the Commission said, will “contribute to the prevention of crime and terrorism.”
Project coordinator George Boultadakis, of Luxembourg-based European Dynamics, explained: “We’re employing existing and proven technologies – as well as novel ones – to empower border agents to increase the accuracy and efficiency of border checks.
“iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit,” he added.
Researchers said the technology has so far been tested on only 32 people in its current form; however, the scientists involved hope to reach an 85% success rate.
Additionally, human guards will oversee the pilot scheme, and only passengers who give their consent will be questioned by the technology in this initial trial.
Technology adviser Theo Priestley believes the project has “all the hallmarks of the wrong applications of artificial intelligence” and that any pilot scheme should be treated with caution.
“To implement a system trained on a sample size of 32 is irresponsible to say the least,” he said. “Notwithstanding that natural language processing is nowhere near the success rates required to determine whether someone is trying to lie or not.”
Algorithmic bias is also a serious concern, Priestley added. The use of automated facial recognition technology in this project could present significant challenges: concerns have previously been raised over facial recognition algorithms’ higher error rates when analysing women and people of colour.
In the UK, police forces have come under intense scrutiny for their use of this technology, with false positives occurring at an alarming rate.
Priestley added: “It’s equally disturbing on two other counts; algorithmic bias is incredibly easy to introduce into a system, and the proposals also discuss the collection and storage of additional biometric data on individuals which may be unethical without express consent.”