New EU rules would ban AI systems used for mass surveillance or for ranking social behaviour, according to Bloomberg.
The proposals would prohibit the use of facial recognition for surveillance and algorithms that manipulate human behaviour, with firms that break the rules facing GDPR-level fines.
The proposals, leaked ahead of the European Commission’s official legislation, also suggest that systems which exploit information about a person or group would be banned.
Bloomberg reports that AI applications considered ‘high-risk’ would have to undergo inspections to ensure systems are “trained on unbiased data sets,” in a traceable way and with human oversight.
Additionally, biometric identification systems used in public spaces, such as facial recognition, would need to be authorised by local authorities.
Concerns have previously been raised about the increasing use of AI in the EU. In December last year, the European Union Agency for Fundamental Rights (FRA) published a report warning against the use of the technology in medicine, targeted advertising and predictive policing, as it could pose a threat to civil rights.
The FRA report suggests that the EU should “further clarify how data protection rules apply to AI,” as well as providing “more clarity” on the implications of automated decision-making and the right to human review when AI is used.
As part of the latest proposals, EU member states would be required to appoint assessment bodies to test, certify and inspect the systems. Firms that develop ‘prohibited AI services’, supply the wrong information or fail to cooperate with authorities could be fined up to 4% of their global revenue – around the same amount as a fine under GDPR rules.
“Artificial intelligence is a fast-evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities,” the report states.
“By improving prediction, optimising operations and resource allocation and personalised service delivery, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education, infrastructure management, energy, transport and logistics, public services, security, and climate change mitigation and adaptation, to name just a few.”
European policy analyst Daniel Leufer said in a post on Twitter that the proposed definitions were very open to interpretation, and that elements of the legislation contain “serious loopholes”.
There are 4 prohibitions, which are very very vague, and contain serious loopholes
For ex. A & B discuss systems that cause people to “behave, form an opinion or take a decision to their detriment.” How do we determine what is to sby’s detriment? And who assesses this?
— Daniel Leufer (@djleufer) April 14, 2021
Despite the strict rules, there are several exemptions in place, including the use of AI for safeguarding public security as well as AI systems used exclusively for military purposes.
Commenting on the new proposals, Peter van der Putten, assistant professor of AI at Leiden University and Director of Decisioning at Pegasystems, said: “There is a common opinion from everyone, including AI technology providers and companies consuming these services, that self-regulation is not sufficient and some clear rules and boundary conditions will actually make it easier for companies to invest in responsible and trustworthy AI.
“Whilst driving the acceptance of trustworthy AI might seem a lofty goal, it should be clear that the real goal is for AI applications to become truly worthy of our trust, by making the technology fair, transparent, explainable and robust.
“It is important to note that these kinds of policies will be setting broad boundary conditions only. There will still be a lot of AI in use that may technically be legal but is not empathic towards the best interests of customers, and more one-sidedly will serve the interest of companies.”
He continued: “In the long run, this will be corrected towards mutually beneficial AI, as consumers will vote with their feet if they don’t feel there is a win-win for both parties.”
“It will also be important to take any guidelines and rules for automated decisions, and do the sanity check to determine to what extent these same rules are actually applying to decisions purely made by humans today.”