In the months leading up to the riots and terror attacks in Sri Lanka, Facebook turned down technology capable of stopping millions of hate speech posts in dozens of languages, according to a Finnish AI startup.
Experts from Utopia Analytics will address the Digital, Culture, Media and Sport (DCMS) Committee’s investigation into disinformation today and outline how Facebook rejected their technology in 2018.
The Finnish firm claims its technology can quickly learn to identify hate speech in any language, as its artificial intelligence programme was originally trained on Finnish, regarded as one of the most complex languages for machines to understand.
Despite the fact that Facebook has built automated systems to check and block hate speech, the social media giant also relies on thousands of human moderators to review posts.
Facebook CEO Mark Zuckerberg has previously emphasised that automated moderation is “five to ten years away”.
In March 2018, when tensions in Sri Lanka began to rise and hate speech spread across social networks in the region, Utopia offered to show Facebook how it could use its machine learning tools to remove hate speech in multiple languages within a millisecond of it appearing.
According to Mari-Sanna Paukkeri, the startup’s chief executive: “We could build a Sinhalese [the Sri Lankan language] moderation tool in two weeks. They were not interested in [working with] us”.
A Facebook spokesman said its technology could now be used to detect more than 40 languages, adding: “This is still very much an unsolved problem for the entire industry.”
While Facebook uses AI and automatic detection tools to identify harmful posts, it has admitted that it struggles to stop hate speech posts in languages and countries in South and East Asia. In 2018, Sri Lankan politicians criticised the social media giant for not doing enough to halt the spread of hate speech online in the country. As a result, in September 2018, Facebook updated its software to include Sinhalese.
However, following the terror attacks that killed hundreds in Sri Lanka, the government blocked the social network in May, claiming the platform was still enabling hate speech to spread.
Utopia will provide evidence today to the subcommittee alleging that Facebook could have prevented some of the hate speech from spreading. Ms Paukkeri explained that the startup had been using this technology to detect hate speech in many languages “for over three years”.
A Facebook spokesman said: “It will take many more years of research and engineering to build AI systems that can understand language and context at a level that humans can. We were able to remove 4 million pieces of hate speech in the first quarter of this year – 65pc of which was detected proactively by our technology before it was reported to us.”
“We know there is a lot more work to do here, and we will continue to invest in technology as well as our growing team of 15,000 content moderators to identify and remove this content.”