Silicon Valley

Facebook claims to have proactively found and removed 99% of terrorist-related content on the site for the past three quarters. It’s given some insight into its processes in a blog post.

Some statistics: First, it’s important to note that when it says “terrorism,” Facebook is referring only to ISIS and Al-Qaeda. On average, the firm claims it now removes terrorist content less than two minutes after it’s posted, versus the average of 14 hours it took earlier this year. Facebook took action on 9.4 million pieces of content in Q2 2018, a figure that declined to 3 million in Q3 2018, thanks to its efforts the quarter before, it said.

Detection systems: Facebook has launched a new machine-learning tool that assesses whether posts signal support for ISIS or Al-Qaeda. It produces a score indicating how likely a post is to violate the company’s counterterrorism policies; posts with higher scores are passed to human reviewers to assess, and the highest-scored cases are removed automatically. In the “rare instances” employees find the possibility of imminent harm, Facebook immediately informs law enforcement, it said.
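To make that triage flow concrete, here’s a minimal Python sketch of score-based routing of the kind described: a model scores each post, higher scores go to human review, and only the very highest are removed automatically. The thresholds, function names, and example scores are illustrative assumptions, not details from Facebook’s post.

```python
# Hypothetical sketch of score-based content triage (not Facebook's actual system).

REVIEW_THRESHOLD = 0.80       # assumed cutoff: queue for human reviewers above this score
AUTO_REMOVE_THRESHOLD = 0.98  # assumed cutoff: remove automatically above this score


def triage(score: float) -> str:
    """Route a post based on its model score."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # highest-scored cases are removed outright
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # higher-scored cases are passed to reviewers
    return "no_action"


if __name__ == "__main__":
    # Toy scores standing in for the model's output on three posts.
    for post, score in [("post A", 0.99), ("post B", 0.85), ("post C", 0.10)]:
        print(post, "->", triage(score))
```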

Release the bots: Clearly, Facebook is keen to look tough on terrorism. And as always, it is relying almost exclusively on algorithms to do that. That makes a lot of sense from its point of view, as humans could never scan that much information that quickly (and they’re expensive). But it is yet another reminder of Facebook’s role as judge, jury, and executioner on the information it lets us see.