
How Facebook tries to fight being a superspreader of fake news on voting and COVID-19: Q&A

Amid a hotly contested U.S. presidential election and the coronavirus pandemic, how is Facebook responding to criticism that the social media giant has become a superspreader of political and medical misinformation? The USA TODAY Editorial Board met virtually on Tuesday with Nick Clegg, a former British deputy prime minister who is the company’s head of global affairs and communication, and Nathaniel Gleicher, head of cybersecurity policy. Questions and answers have been edited for length, clarity and flow:

Q. We're six weeks from the U.S. presidential election, and the FBI says the Russians are making "very active" efforts to denigrate Joe Biden and sow discord. How much foreign interference is Facebook seeing this time around, and how does it compare with 2016?

Gleicher: The biggest difference between 2016 and 2020 is that in 2016, Russian actors were caught doing this after the election. This year, they've been caught months and years in advance. We publicly announce whenever we find a network of coordinated inauthentic behavior, which is essentially a network of pages and groups and accounts working together to deceive people about who's behind them. We've announced about 100 of these over the past three years.

Q. What trends and patterns are you seeing?

Gleicher: Rather than trying to run large numbers of fake accounts, we're seeing them actively try to target other people to amplify their message unwittingly for them, which includes both activists and, importantly, the press. We've seen an increasing effort to target journalists to try to trick them into telling the stories of Russian and Iranian actors.

Q. Are they getting away with it?

Gleicher: With industry, government and civil society all really focusing on this challenge, running networks of deceptive accounts is much harder than it used to be. We've seen these actors try, and we see them keep trying. But they get caught earlier and earlier, and the networks are less and less effective.

Q. Do you have a sense of how much you are not catching?

Gleicher: I mean, you always have the challenge of proving a negative. You can never prove what you can't see. What we do see is foreign actors and domestic actors evolving their tactics to try to defeat our systems. There are no guarantees in security. But all the patterns we're seeing, all the evidence we're seeing, indicate that we are catching them much faster. And it is much, much harder for them than it used to be.

Clegg: It's a real cat-and-mouse problem. Thirty-five thousand people have been employed since 2016 to protect the integrity of our platform. The billions of dollars that go into that exceed the total revenue of Facebook when it went public back in 2012. The challenges we face in this election are as much internal as external, if not more so: players within American democracy trying to play the system, spread misinformation, spread polarization and so on. That is arguably a bigger challenge this time around than the foreign and Russian interference last time.

Nick Clegg, a former British deputy prime minister who is Facebook's head of global affairs and communication, meets with the USA TODAY Editorial Board on Sept. 22, 2020.

Q. Tell us more about the domestic threats. How do you combat aunts and uncles who are unwittingly spreading misinformation across Facebook? And where do you draw the line between freedom of speech and misinformation?

Clegg: You just put your finger on the most difficult dilemma that we face. Wherever you draw the line, people criticize you. The right criticizes Facebook for taking far too much content down. Concerns about censorship, perhaps most especially from Republican voters, are among the most common pieces of feedback we get from users. And then, of course, the narrative on the left is that we don't take down enough and that (Facebook CEO) Mark Zuckerberg is in Donald Trump's pocket. So we get it from, in a sense, both sides. We need to try and come up with objective and coherent ways to draw the line.

Q. Where do you draw the line on political speech?

Clegg: That’s probably the most controversial issue that we've encountered. No Silicon Valley company fact-checks or vets what politicians themselves say. Certainly no one is doing it at scale. Political speech is one of the most scrutinized, satirized and analyzed forms of speech, particularly in raucous democracies like the United States. And the idea that we should intervene at scale and speed to vet all the adjectives and adverbs and selectively used stats that come from politicians seems to us and, as I say, to the industry as a whole, not to be realistic.

Q. What if a politician breaks your rules?

Clegg: If it contravenes our community standards — inciting violence or hatred — then we take that down. But what we don't do, which some people have urged us to do, is vet what politicians say directly about one another. While we're perhaps understandably singled out for that, it's actually totally mainstream in how all Silicon Valley companies have operated and also in how the broadcast TV companies have operated under (Federal Communications Commission) law.

Q. Your critics say Facebook has become a vector for hate speech.

Clegg: We are now removing over 90% of hate speech before it's reported to us, compared with 23% just two or three years ago. And that's because of the rapid improvement of our machine learning tools. Of course, over 90% sounds impressive, but at the scale at which Facebook operates, the 8% or 9% that isn't caught is still a lot of content. We have increasingly sophisticated tools to go after hate speech.

Q. What structural steps has Facebook taken to address the issues outlined in the civil rights audit that was completed in July?

Clegg: We have significantly closed the gap that was identified in that report. We now remove content where there's not just an explicit, but an implicit, intent to discourage people from voting, or to say stuff that would lead to people forfeiting their right to vote. We now aggressively label any content that seeks to delegitimize the way people are able to vote.

Q. Including content from the president of the United States?

Clegg: We have repeatedly, for instance, over the last couple of weeks, labeled posts from Donald Trump which say that mail-in voting is a fraud and a racket and will lead to a fraudulent election and so on. We put a great big label on it that the user has to read if they are trying to read that content, which says, in effect, that mail-in voting is a trustworthy way of voting, that it has been for a long time in this country, and that it is expected to be so in this election as well.

Q. How about political advertising?

Clegg: We're not accepting new ads in the final week of the campaign. We're doing that for a very specific reason. It's there to deal with one specific dilemma: What happens if someone runs a vicious, aggressive, false, hateful ad 24 or 48 hours before the end? The problem is that you're out of time. There's no time for the media, or for opposing candidates, to scrutinize it, rebut it and deliver the counterspeech. Speech and counterspeech are the lifeblood of a healthy democracy, but at the very end you simply run out of time for that.

Q. What if a candidate declares victory before the results are official?

Clegg: We’ve made it very clear that candidates will not be able to declare victory prematurely without it being flagged. We'll label it if they try to do so. We're working with Reuters and others to make sure that our users are informed of only the certified result when it is finally available. We're trying to acclimatize voters to the fact that (this election) is unlikely to have quick results; we're probably going to have to wait quite a while before all the ballots are counted.

Q. Are you seeing efforts to suppress voting in minority communities, particularly in battleground states, and what are you doing about it?

Clegg: Between March and May of this year, we removed around 100,000 pieces of Facebook and Instagram content that violated our voter interference and suppression policy. So there is a set policy, and it is being enforced at scale.

Gleicher: The sophisticated actors that are using deception to target these communities are often smart enough not to share what anyone would consider to be voter suppression. They are very carefully choosing content that doesn't violate our policies. We certainly do see and know that foreign actors like Russia are continuing to target these communities. But what we have seen them try to do so far, although they are still trying, has not been that effective this time around.

Q. What is the status of your efforts to crack down on the false QAnon conspiracy theory?

Gleicher: The QAnon folks have gotten smarter about it, too, and are no longer using terms that would identify them as QAnon followers. And so this is going to continue to be complicated. I think we're making progress. But they in particular are an interesting target because they're pretty savvy about presenting as a right-of-center organization without showing too much evidence that they're also associated with other efforts around Q.

Q. Can fears of a hacked election have some of the same effects as actual hacks?

Gleicher: The threat actors, particularly Russian actors, use many, many, many different platforms and they target every part of society, not just the major platforms. It turns out it's very hard to hack an election, but it's comparatively easy to play on the fears of the public that the election could be hacked. So rather than running a network of a hundred thousand fake accounts, just make it known and make people convinced that you are running a large number of fake accounts. And then play on everyone's fears.

Clegg: We're literally blocking millions of fake accounts a day. By the way, a lot of these, in fact most of them, are actually run for financial scams. But our machine learning systems just get better and faster. The more they do it, the more they're able to recognize the telltale patterns.

Q. Can you expand on what you said about your plans if there's a contested election and/or civil unrest?

Clegg: We're investing a lot of time, a lot of effort. We've recruited a bunch of very specialized folks to help us as a company do the most meticulous scenario planning that we possibly can, from the routine scenarios to some extremely worrying ones. We now have relationships with election authorities on the ground in all the states across the country. We've got a sort of central nervous system, which will pick up on circumstances on the ground. And we have a number of "break glass" tools, which we have deployed in other settings, that we could deploy in this country as well.

Q. What are those tools?

Clegg: I'm tight-lipped on exactly what they are because I don't think it's very helpful to elaborate, not least because it would no doubt elicit a greater sense of anxiety than we hope will be warranted.

Q. What happens if President Trump seeks to use your platform to prematurely declare victory on election night?

Clegg: Very simply, he wouldn't be able to do it unchecked. If he does try to declare premature victory, we would put a great big, very visible label on top of his post saying, in effect, that the results have not been certified and the election outcome is not yet finalized.

Q. Aside from labels, what else have you been doing to inform your users about the election?

Clegg: We've been getting a tremendously positive response to our Voter Information Center, which is the most ambitious attempt of its kind anywhere, public or private. We appear to have already helped 2 1/2 million Americans register to vote, and thousands of people have become poll workers who otherwise might not have been able to do so.

Q. What’s your response to people who say Facebook could be doing more?

Clegg: I accept that there will always be people who say Facebook doesn't do enough. I certainly don't want to suggest that we are in any way complacent or that we're doing enough. We strive to do more. But I don't think any reasonable person could suggest that the extraordinary efforts we have embarked upon are not pretty ambitious, pretty exceptional in scale.

Q. We just passed 200,000 coronavirus deaths in the United States. Your critics say Facebook is a superspreader of bad science and misinformation by anti-vaccine activists. What are you doing about that?

Clegg: One of the most important antidotes to misinformation is good information. We certainly have worked very closely with the (Centers for Disease Control and Prevention) and others to make sure that we are constantly providing a source of reliable, authoritative, credible information. And we're proud of that work. Separately, of course, we remove content related to vaccines or anything else where it poses an impending and immediate real-world harm.

Q. What about private "anti-vaxxer" groups?

Clegg: We, as you can imagine, police private groups very carefully. We don't allow them to advertise or in any way try to attract people to the group. I mean, we're actually at the moment actively looking to see whether we need to tighten up those provisions further. But it would be an extraordinary thing for us to act against debate where people cast aspersions on vaccines.

Q. How come?

Clegg: Here in the United States, you have Kamala Harris and Joe Biden saying they wouldn't trust any vaccine that was rushed out before the election. Are we supposed to remove that? Because that's talking about distrust in a particular vaccine which may or may not appear. There’s a lot of debate about the efficacy and safety of vaccines in China, in Russia. Some vaccines in history have not worked. So it's not the case that this is an easy line to draw.

This article originally appeared on USA TODAY: How Facebook tries to not be superspreader of fake news on voting: Q&A