Just in Case Moscow Is Still Meddling, Here’s What to Do
President Trump seemed to say on July 18 that Russians are no longer attacking U.S. elections. But that’s a lonely position: even his director of national intelligence, Dan Coats, disagrees. So it seems like a good idea to erect stronger defenses than those in place in 2016, when Russian ads and fake news sowed discord and possibly helped tip the election in Trump’s favor.
A report issued on July 23 offers a series of sensible recommendations for internet platform companies and governments around the world: defenses that would meddle with the Russian meddlers without impinging on civil liberties. Called “Combating Russian Disinformation: The Case for Stepping Up the Fight Online,” it’s from the Center for Business and Human Rights at New York University’s Stern School of Business. The authors are Paul Barrett, a former writer for Bloomberg Businessweek; Tara Wadhwa; and Dorothee Baumann-Pauly.
Companies should establish “specialized Russian-focused teams” staffed with experts in Russian language, culture, and internet practices, the report says. By singling out Russia for unique treatment, the companies can “signal externally and internally that Russian disinformation, because it is part of a broader Kremlin agenda to destabilize democracy, is more pernicious than other forms of ‘fake news,’” the report says.
Google can and should de-rank Russian material that’s clearly false, as called for last year by Eric Schmidt, the company’s former executive chairman, the report says. “Unfortunately, rather than clarifying and amplifying this statement, Google blurred Schmidt’s meaning only about a week after he spoke,” the report says. “Reacting to Schmidt’s comments, a Russian regulatory agency threatened to take action against Google. In full retreat, the company responded by contradicting its former leader, saying it does not adjust its main search algorithm to de-rank individual websites.”
Facebook should expand its partnership with third-party fact-checking sites such as PolitiFact and Snopes, the report says. And companies should step up the use of artificial intelligence to combat fake news, especially “deep fake” reports in which sound, pictures, and video are digitally manipulated. For example, “politicians could be shown giving speeches they had not delivered or fraternizing with shady characters they had never met,” the report says. “Made-up battlefield atrocities could be depicted in minute, if phony, detail.”
The most ambitious recommendation is that internet platforms “rethink” their reliance on advertising, which is ripe for Russian exploitation, and possibly depend more on subscriptions. A more modest step would be for companies to advertise only on websites that have been white-listed. JPMorgan Chase & Co. reduced the number of websites on which it advertises from 400,000 to 5,000 after discovering its ads appeared on a site called “Hillary 4 Prison.”
For government, the report calls for more coordination among countries and for stronger penalties. It says the U.S. should pass a version of the Honest Ads Act, a bipartisan bill introduced last year that would require companies to disclose who paid for ads, how much they spent, and at whom the ads were targeted. That goes beyond what internet platforms are disclosing voluntarily.
The NYU group opposes government regulation of online content. Germany went too far with its NetzDG law, which is aimed at stopping hate speech, the report argues. “Governments or lawmakers have cited the German statute when discussing or approving censorship laws in Russia, Singapore, and the Philippines,” it says. “Lawmakers should remember that they are defending democracy,” the report says. “Imperiling free speech is not the way to accomplish this worthy goal.” That seems reasonable.