Social media giants Twitter and Facebook today faced questions from MPs – including Joanna Cherry, MP for Edinburgh South West – at a hearing of the Human Rights Committee at Westminster on how they handle online abuse and the safety of women.
MPs took the companies to task over the abuse hurled unchecked at politicians online, saying it undermined democratic principles.
When questioned by Cherry, Katy Minshall, head of UK Government, public policy and philanthropy for Twitter, admitted that it was “unacceptable” that the platform relied on users to flag up harmful content, adding: “I think that’s absolutely an undesirable situation.”
Harriet Harman, chair of the Human Rights Committee, said: “There’s a strong view among MPs generally that what is happening with social media is a threat to democracy.”
MPs raised concerns that Twitter's enforcement of its hateful conduct policies depended on the sex of the individual being targeted. The company admitted it was aware of the problem, and said it was reviewing how it applies its policies in order to fix its misogyny problem.
“We are acutely aware of the unique experience women have on Twitter and changes we may have to make in our policies to get that right,” said Minshall. “We are very much aware of the real issue that women experience on our platform.”
Cherry accused Twitter of taking a more relaxed approach to removing harmful and violent content when it was levelled at women.
She emphasised her point by showing a variety of misogynistic tweets targeted at high-profile women that the company had been slow to remove, or had deemed not to violate its policy on hateful conduct.
“There seem to be a number of mistakes here,” Cherry said. “And they seem to be mistakes that are failing to protect women. Do you accept that?”
Minshall responded: “There are clearly a number of steps that we want to take, we need to take, but we are in a different place to where we were even this time last year.”
Cherry countered: “There seems to be a pattern of Twitter initially ruling that extremely offensive and violent tweets directed at women in public life are acceptable and that Twitter only reviews their decision when they are pressed by other figures in public life.”
Both companies were grilled over whether they could be more proactive and remove content faster. Facebook’s UK head of public policy, Rebecca Stimson, said that the platform used a combination of human moderators and an advanced automated system to police its site.
However, she admitted that the nuance in harassment language meant that much of the content was slipping through undetected. “There are places where we’re really, really good – terrorism, child exploitation, that kind of thing – our machines are able to find and remove around 99% of that kind of content before it’s ever seen by anyone,” she said.
“Things like bullying and harassment, and some of the subjects that we’re discussing with you today, are much harder for a machine to identify accurately what that is. It might be us just having an argument about something, it might be using some robust language.
“So there, we found about two million pieces of that kind of content, but only about 15% of that was found by our machines and the rest we rely on individuals reporting to us and human reviewers because often it’s more about context and it’s more about intent and those can be nuanced decisions.”
Both Stimson and Minshall said that their respective companies were working to gradually improve their systems and were also implementing tools to better flag and block abusive content proactively, before it was even posted.