
Online Sites Given One Hour to Remove Harmful Content Under French Law

David Paul



Failure to remove content deemed inappropriate could cost billions of euros for the largest social media outlets and other online companies.

The French National Assembly has passed a law giving online companies a one-hour window to remove harmful content from their platforms or face huge fines.

The rules apply to all online sites, with fines of up to around €1.25 million ($1.1 million) per breach, penalties that would hit smaller businesses hard and could run into the billions for the largest companies.

Any content deemed an offence, including death threats, discrimination, terrorist content and child pornography, is now considered illicit under the new law.

The new legislation is aimed at protecting users online, but it has already drawn criticism, with some groups concerned that it could give the government greater control over online content censorship.

Social media firms also worry that one hour is not long enough to remove all unacceptable content, meaning they could face fines for being too slow to act, something that could have serious financial implications for smaller organisations.

A spokesman for the digital rights group La Quadrature du Net said the requirement to take down content within one hour was impractical.

“Except the big companies, nobody can afford to have a 24/7 watch to remove the content when requested”, they said.

“Hence, they will have to rely on censorship before receiving a request from the police. Since 2015, we already had such a law that allowed the police to ask for the removal of some content if they deemed it to be terrorist… this has been used multiple times in France to censor political content.

“Giving the police such power, without any control… is obviously for us an infringement on the freedom of speech.”

Social media sites have already moved to crack down on inappropriate or misleading content over the past month. The spread of Covid-19 has led many users to share posts about the virus that may not be true, potentially harming others.

Twitter confirmed in March that it would begin removing posts containing unverified or false information about the virus in a bid to combat misinformation and fake news.


In a statement online, Twitter said it planned to enforce the new rules “in close coordination with trusted partners”, including public health authorities and governments.

The post said: “As the entire world faces an unprecedented public health emergency, we want to be open about the challenges we are facing and the contingency measures we’re putting in place to serve the public conversation at this critical time.

“We are regularly working with and looking to trusted partners, including public health authorities, organisations, and governments to inform our approach.”

In 2019, the Department for Digital, Culture, Media and Sport (DCMS) proposed an independent watchdog to write a "code of practice" for tech companies to monitor 'online harms', fining sites that failed to protect their users.

At the time, critics said this could threaten freedom of speech in Britain, echoing the concerns now being raised about the new law in France.

David Paul, Staff Writer, DIGIT
