Facebook, Microsoft, Twitter and YouTube would cooperate to fight terrorist propaganda and limit the spread of terrorist content online.
Facebook, Microsoft, Twitter and YouTube are joining forces to more quickly identify the worst terrorist propaganda and prevent it from spreading online.
The program announced would create a shared database of unique digital “fingerprints,” known as “hashes,” to help automatically identify violent terrorist images and recruitment material the companies have already removed from their services. Sharing these hashes is meant to make it faster for each firm to stop such content from spreading.
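The hash-sharing idea described above can be sketched in a few lines. This is a simplified illustration with hypothetical function names: production systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas the cryptographic hash used here for brevity matches only byte-identical files.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a digital 'fingerprint' (hash) of a piece of content."""
    return hashlib.sha256(content).hexdigest()

# The shared database: hashes of content that a participating
# service has already identified and removed.
shared_hashes: set[str] = set()

def report_removed(content: bytes) -> None:
    """One service contributes the hash of removed content to the database."""
    shared_hashes.add(fingerprint(content))

def is_known_violation(content: bytes) -> bool:
    """Another service checks a new upload against the shared hashes."""
    return fingerprint(content) in shared_hashes

# One service removes a video and shares its hash; a second service
# can then recognize the identical file on upload.
report_removed(b"example propaganda video bytes")
print(is_known_violation(b"example propaganda video bytes"))  # True
print(is_known_violation(b"different content"))               # False
```

The design choice to share only hashes, not the content itself, lets companies flag material without redistributing it and without each firm having to re-review files another has already judged.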
“There is no place for content that promotes terrorism on our hosted consumer services,” the companies said in a joint blog post. “When alerted, we take swift action against this kind of content in accordance with our respective policies.” The move comes a year after the same companies banded together to identify and remove child pornography using a similar technique, developed by the UK’s Internet Watch Foundation. That effort extends beyond social networks; Google scans every Gmail user’s account for child pornography.
The program, which is expected to begin in early 2017, aims to assuage government concerns — and derail proposed new federal legislation — over social media content that is seen as increasingly driving terrorist recruitment and radicalization, while also balancing free-speech issues.
Technical details were being worked out, but Microsoft Corp. pioneered similar technology to detect, report and remove child pornography through such a database in 2009. Unlike those images, which are plainly illegal under U.S. law, questions about whether an image or video promotes terrorism can be more subjective, depending on national laws and the rules of a particular company’s service.
Social media has become a recruiting and radicalization tool for the Islamic State group and others. Its use by terror groups and their supporters has added to the threat from “lone wolf” attacks and shortened the time from “flash to bang” — radicalization to violence — leaving little or no time for law enforcement to follow evidentiary trails before an attack.
Under the new partnership, the companies promised to share among themselves “the most extreme and egregious terrorist images and videos we have removed from our services — content most likely to violate all our respective companies’ content policies.”
“We really are going after the most obvious serious content that is shared online — that is, the kind of recruitment videos and beheading videos more likely to be against all our content policies,” said Sally Aldous, a Facebook Inc. spokeswoman.