
Could Artificial Intelligence Help Police Predict Hate Crimes?

Ross Kelly


Researchers analysed Twitter and crime data to determine whether a link existed between social media activity and real-world violence.

Artificial intelligence could help British police forces predict and prevent hate crimes before they happen, a new study claims.

Researchers from Cardiff University’s HateLab project have developed an artificial intelligence system which, they say, is capable of anticipating violent hate crimes based on Twitter posts and comments.

Over a period of eight months, researchers at HateLab monitored Twitter and crime data from London to establish whether a link between social media activity and real-world violence existed.

Nearly 300,000 “hateful” Twitter posts were trawled as part of the study, while some 6,500 racially and religiously aggravated crimes were extracted from police records. These figures, combined with census data, were assigned to one of 4,720 geographical areas across London, enabling researchers to pinpoint trends and flashpoints within the city.

The research determined that, as the number of “hate tweets” – those deemed antagonistic with regard to race, ethnicity or religion – made from a given location increased, so did the number of racially and religiously aggravated crimes.
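In essence, the approach described here aggregates geocoded tweet and offence counts at the area level and tests whether the two rise and fall together. The sketch below illustrates that idea in Python; the file names, column names and simple rank correlation are assumptions for illustration only, and HateLab’s published analysis relied on more sophisticated statistical modelling than this.

```python
# Minimal sketch of an area-level aggregation and correlation check.
# Input files, column names and the correlation method are hypothetical;
# the actual HateLab study used more advanced statistical models.
import pandas as pd

# Hypothetical inputs: one row per hate tweet and per recorded offence,
# each already geocoded to one of the study's geographical areas.
tweets = pd.read_csv("hate_tweets.csv")        # columns: area_id, tweet_id, ...
crimes = pd.read_csv("aggravated_crimes.csv")  # columns: area_id, crime_id, ...

# Count events per area, then join the two series on the area identifier.
tweet_counts = tweets.groupby("area_id").size().rename("hate_tweets")
crime_counts = crimes.groupby("area_id").size().rename("aggravated_crimes")
by_area = pd.concat([tweet_counts, crime_counts], axis=1).fillna(0)

# A simple rank correlation indicates whether areas with more hate tweets
# also tend to record more racially or religiously aggravated offences.
print(by_area.corr(method="spearman"))
```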

Professor Matthew Williams, director of HateLab, said the study is the first of its kind in the UK to highlight a “consistent link” between Twitter hate speech and real-world violence.

“Previous research has already established that major events can act as triggers for hate acts,” he said. “But our analysis confirms this association is present even in the absence of such events.

“The research shows that online hate victimisation is part of a wider process of harm that can begin on social media and then migrate to the physical world.”

In the long term, researchers believe an algorithm based on HateLab’s methods could help police predict and prevent spikes in crimes against minority groups by allocating additional resources to particular places at particular times.

Williams added: “Until recently, the seriousness of online hate speech has not been fully recognised. These statistics prove that activities which unfold online should not be ignored.”

Data used in the HateLab study, researchers noted, were collected between August 2013 and August 2014 – a time before many social media giants began implementing stronger hate speech policies.

Williams said that “rather than disappear, we would expect hate speech to be displaced to more underground platforms”, yet a significant undercurrent of racially or religiously motivated hate speech still permeates social media.

Twitter has sought to crack down on hate speech and harmful content on its platform. Last year the firm embarked on a sweeping project to monitor the site and help it identify and remove hate speech, while a series of purges has targeted fake accounts blamed for spreading misinformation and fake news.

