Social media company Facebook is using a scaled-down, bot-populated version of its platform to simulate user behaviour, in an attempt to stop scammers and trolls exploiting new updates.
The simulation, named WW, can model hundreds to thousands of users at a time with a mix of hard-coded and machine learning-based bots. The ML bots learn through trial and error to optimise their behaviour, while the hard-coded bots are scripted with the most common vulnerable behaviours exhibited by real users.
The bots are then made to play out different scenarios, such as a scammer trying to exploit other users or a hacker trying to access someone’s private photos, to catch issues as they arise. The system will enable engineers to identify and fix problems in new updates before they’re released, pre-empting issues before they can be exploited.
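A scenario of this kind can be sketched as a small harness that runs an adversarial bot against a population of target bots and counts how many are caught out. This is a minimal illustrative example, not Facebook's actual code: the class names, the message format, and the cautious/vulnerable split are all assumptions chosen for clarity.

```python
# Hypothetical sketch of a WW-style scenario: a scammer bot messages target
# bots, and the harness counts how many fall for an unsolicited link.
# All names here are illustrative, not Facebook's actual API.

class TargetBot:
    """A simulated user; 'vulnerable' bots model risky real-user behaviour."""
    def __init__(self, cautious):
        self.cautious = cautious

    def receive(self, message):
        # A cautious bot ignores unsolicited links; a vulnerable one clicks.
        return "ignored" if self.cautious else "clicked"

class ScammerBot:
    """An adversarial bot that tries the exploit against every target."""
    def run(self, targets):
        bait = {"text": "free prize!", "link": "http://scam.example"}
        results = [t.receive(bait) for t in targets]
        return results.count("clicked")

def play_scenario(num_targets=10, vulnerable_ratio=0.3):
    # The first vulnerable_ratio share of targets behave recklessly.
    cutoff = int(num_targets * vulnerable_ratio)
    targets = [TargetBot(cautious=(i >= cutoff)) for i in range(num_targets)]
    return ScammerBot().run(targets)
```

With the default parameters, `play_scenario()` reports 3 successful scams out of 10 targets; engineers could flag an update if that count rises after a change ships to the simulation.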
The platform can also automatically recommend changes that can be made to improve the community experience.
In a paper released by Facebook, researchers explained in more detail what the WW platform is: “A Web-Enabled Simulation (WES) is a simulation of the behaviour of a community of users on a software platform.
“It uses a (typically web-enabled) software platform to simulate real-user interactions and social behaviour on the real platform infrastructure, isolated from production users.
“While a typical user interacts with Facebook through a front-end user interface, such as a profile and other website features, fake bot users can interact directly with the back-end code,” the paper said.
Unlike a traditional simulation, the WES system is built on the real-world Facebook platform rather than an entirely separate one. The bots work in the background, essentially acting as Facebook users and carrying out actions such as pressing the ‘Like’ button or clicking through pages.
“This user behaviour could be captured in a rule-based system, or could be learnt, either supervised from examples or unsupervised in a reinforcement learning setting.”
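The paper's distinction between rule-based and learnt behaviour can be sketched as two interchangeable bot policies: one hard-coded, one that improves by trial and error. This is a minimal illustrative example under assumed names (the action list, `RuleBasedBot`, `BanditBot`, and the epsilon-greedy update are not from Facebook's paper), using a simple bandit-style learner to stand in for the reinforcement learning setting.

```python
import random

# Illustrative action space for a simulated user (assumed, not Facebook's).
ACTIONS = ["like", "comment", "send_message", "click_link"]

class RuleBasedBot:
    """Hard-coded bot: follows a fixed script capturing a known behaviour."""
    def act(self, state):
        # e.g. a 'vulnerable' bot that always clicks links it is sent
        return "click_link" if state.get("has_link") else "like"

class BanditBot:
    """Learning bot: epsilon-greedy trial and error over the action space."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in ACTIONS}  # estimated reward per action
        self.count = {a: 0 for a in ACTIONS}

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)           # explore a random action
        return max(self.value, key=self.value.get)  # exploit best-known action

    def learn(self, action, reward):
        # Incremental mean update of the chosen action's estimated value.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]
```

In a WW-style run, a scammer bot of the learning kind could be rewarded whenever a target bot falls for its action, so that over many simulated interactions it discovers which behaviours the platform fails to block.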
The technology is currently only being used to test and improve how the platform handles violations of Facebook’s community guidelines. But the system could potentially be applied in other areas in the future, such as testing how updates might affect engagement and other metrics.