Tech Companies Dedicated to Fighting Online Terrorism, Senate Committee Told
Tech company efforts to combat online terrorism faced skepticism at a Senate Commerce Committee hearing Wednesday, with some senators pressing for fuller explanations of technologies for identifying and removing extremist content. Chairman John Thune, R-S.D., told reporters the hearing was a “really good first step” and Facebook, YouTube and Twitter witnesses “were pretty responsive.” The committee will keep tracking the issue but doesn’t plan further action at the moment, Thune said: “We know how important these platforms are to extremist groups to recruit and radicalize folks that will commit violent acts against Americans, and we just want to make sure we’re staying on top of that issue.”
Social media platforms created a new and “stunningly effective way for nefarious actors to attack and harm our citizens,” said ranking member Bill Nelson, D-Fla. “This should be a wake-up call to all of your companies,” said Nelson, one of many members raising questions about Russia using platforms to meddle in the 2016 election and to interfere with net neutrality comments at the FCC (see 1801030051). “We have to think of this as undermining our democracy,” said Sen. Brian Schatz, D-Hawaii. The companies have made progress, but “you’re not where we need to be” to assure Americans social media platforms are protected from harmful content, Schatz said.
“We share your concern about terrorist use of social media networks,” said Monika Bickert, Facebook head of product policy and counterterrorism. “We have a team of experts,” over 7,500 reviewers fluent in “dozens of different languages,” dedicated to detecting and in many cases removing extremist content before it’s uploaded, she said. Facebook works in partnership with “more than” 68 technology companies that are part of the Global Internet Forum to Counter Terrorism to “address these concerns,” Bickert said. Artificial intelligence and other automation have “become increasingly central” to keeping terrorist content off Facebook, Bickert said.
Supplementing Facebook’s technology tools are content review teams that will double to 20,000 employees by the end of the year, along with language specialists who have expertise in over 30 languages, Bickert testified. Twitter has suspended more than 1.1 million terrorist accounts since mid-2015, said Carlos Monje, director-public policy and philanthropy. Twitter is preparing not only for the U.S. midterm elections but also for others around the world by instituting procedures to verify accounts, improve anti-spam technology and implement new ad transparency policies that will let voters know who is behind political ads, Monje said.
“Basically, it’s a cat-and-mouse game,” Monje told lawmakers. “Our goal is to evolve to stay one step ahead. But as we make it harder, their [terrorists’] behavior evolves,” he said. “We’re constantly adapting how we attack the challenge.” Social media companies are “far too late” in detecting the work of bad actors, said Clint Watts, Foreign Policy Research Institute fellow. The U.S. government has had a “sustained and significant” focus on terrorist social media use, raising questions about why social media companies “would allow their platforms to be used for nefarious purposes,” Watts said, urging more work to limit anonymity and increase accountability among platform users to help stop bad actors.
YouTube has “long had policies that prohibit terrorist content” and uses a mix of technology and humans to review “violative content quickly,” said Juniper Downs, director-public policy and government relations: Algorithms are getting better at detecting violent content, flagging nearly 98 percent of videos taken down for violent extremism. The company plans to produce a new transparency report this year with data on content that’s been flagged and removed, Downs said.
Nelson asked Watts what Russians might be planning in coming elections and how tech companies should respond. “In non-election years, they tend to focus on infiltration,” Watts said. “They’ve found out it works well and there is little downside.” Sen. Amy Klobuchar, D-Minn., plugged her Honest Ads Act (S-1989) that would require online companies to disclose who buys ads. Watts said he supports the intent behind the bill, which has 11 co-sponsors, “because I think that’s where it’s all going.” Sen. Jon Tester, D-Mont., in his first hearing since joining the committee, urged social media companies “to put pen to paper and let people know who is paying for political ads.”
Industrywide standards are needed to ensure “timely and permanent removal of dangerous content,” said a statement from the nonprofit Counter Extremism Project submitted to the hearing record and provided to reporters. “If tech fails to act, then it is time for regulators to promulgate measures to force the industry to take necessary action to protect the public.”