Twitter announced expanded policies aimed at combating misleading information about elections, including claims that call a result too early.
Separately, Google said Thursday it is adding its own precautions ahead of the election, including measures to address any candidate prematurely declaring victory.
Twitter’s expanded policies come as social media, more than ever, has become the central electoral information battlefield, as the coronavirus pandemic has drastically limited traditional rallies and door-to-door campaigning. The tech companies have heeded the advice of experts who predict that the election result may not be settled quickly in part because of mail-in ballots this year, leading to potential confusion about who wins.
Trump has more than 85 million Twitter followers, and the company has previously flagged misleading claims from the president, including the assertion that mail-in ballots are fraudulent.
Twitter said the policies would go into effect Thursday and would remain in place until the election results are officially called. In some cases, the company will remove a tweet entirely; in others, it will add labels. That includes cases in which misleading information does not seek to directly manipulate or disrupt civic processes but leads to confusion, Twitter spokesman Trenton Kennedy said.
Twitter’s algorithms ordinarily surface tweets in a person’s timeline when that person follows the account or when people in their network are engaging with the tweet. Under the labeling policy, only people who follow the account will see a labeled tweet in their timeline. Even for followers, the tweet will be masked by the label, which will clearly state that the information in the tweet is disputed and link to mainstream news and official sources (including The Washington Post). Twitter will not amplify the tweet through its algorithms or other means, such as injecting it into the timelines of people who do not follow the account, even if people are discussing it.
Last week, Facebook said that it would prohibit new political ads in the seven days before the election, although ads placed earlier could continue running. The company said it would expand its efforts to remove content that might suppress voting, and it would attach labels to posts suggesting that casting a ballot puts voters at risk of contracting the coronavirus, as Trump did on Twitter recently.
Twitter banned political advertising last year.
Facebook’s labels give users an option to visit a voting information center where the company shares accurate information about elections from official sources. The labels have been criticized because they do not themselves warn users about the veracity of the flagged information.
Google is also adding precautions. “We do have a plan,” Google’s vice president of engineering, Cathy Edwards, said Thursday.
On Election Day, the primary sources of information on top of Google search results will be from the Associated Press and Democracy Works, a nonpartisan nonprofit organization that provides information on how to vote. Google said it is relying on its existing ranking protections to minimize Election Day misinformation. Searches won’t show news reports that incorrectly declare victory for a certain candidate, and if any misinformation does slip through, Google says it will be ready to take it down.
Google will also start blocking certain auto-complete suggestions for searches about elections, it said Thursday. The goal is to prevent the suggestions, which are generated from popular or trending searches, from being used to direct people to confusing or incorrect information about voting or election results.
Heather Kelly contributed reporting.