Twitter Exploring New Rules to Combat the Rise of Deepfakes
Twitter will take new steps to assess the potential impact and damage that deepfakes can cause.
Twitter plans to implement new rules to combat ‘deepfakes’ amid growing concerns over the rise of manipulated content on social media.
Last night, the company revealed it has been developing the rules aimed at tackling what it described as “synthetic and manipulated media” posted on the platform.
“We’re always updating our rules based on how online behaviours change. We’re working on a new policy to address synthetic and manipulated media on Twitter – but first, we want to hear from you,” the company said in a tweet.
Synthetic and manipulated media, or ‘deepfakes’, Twitter explained, is media that has been “significantly altered” or created in a way that changes its original meaning or purpose.
Additionally, synthetic or manipulated media can also be used to make it seem like fake events actually took place. This applies to photos, videos and audio content posted to Twitter or other forms of social media.
Twitter said it will take these new steps to assess the potential impact and damage that deepfakes can cause when shared and proliferated across the platform. Before any changes are introduced, however, a consultation period with users will run to help the firm refine its approach.
Why are we doing this?
1️⃣We need to consider how synthetic media is shared on Twitter in potentially damaging contexts.
2️⃣We want to listen and consider your perspectives in our policy development process.
3️⃣We want to be transparent about our approach and values.
— Twitter Safety (@TwitterSafety) October 21, 2019
Deepfakes have become a growing problem over the past 12 months, with a host of technology companies and social media firms moving to take action. Many have raised concerns that deepfakes could be used as a key tool in misinformation campaigns, or to spread harmful fabricated content such as manipulated pornography.
In September this year, Facebook launched the DeepFake Detection Challenge (DFDC) initiative – a partnership which sees the social media giant team up with Microsoft, MIT and a host of other academic institutions to combat the rise of deepfakes.
This week, Amazon announced it will join the DFDC amid an ongoing drive to cut down on manipulated content. The company said it will contribute up to $1 million (£772,000) in AWS credits to researchers and academics over the next two years.