For its 14-year existence, Twitter has allowed misinformation by world leaders and ordinary citizens to spread virtually unchecked.
Its bosses have long said that the platform's users would engage in debate and correct false information on their own.
Its much larger rival Facebook, by contrast, launched a fact-checking program several years ago. Facebook funds an army of third-party fact-checkers to investigate content, which then gets labelled on the site and demoted in its reach.
Twitter, which has roughly 330 million users compared with Facebook's 2.6 billion, has not had the resources nor the institutional will to engage fact-checkers.
But it has radically changed its approach during the pandemic.
In March, the company revised its terms of service to say that it would remove posts by anyone, even world leaders, if such posts went "against guidance from authoritative sources of global and public health information".
That includes comments claiming, for example, that social distancing is ineffective or that essential oils can be used to cure the disease.
Soon after, for the first time, Twitter applied the policy to world leaders, removing tweets by Brazilian President Jair Bolsonaro and Venezuelan President Nicolas Maduro, arguing the tweets about breaking social distancing orders and touting false cures had such potential for harm that labelling them would be insufficient.
Then, this month, it rolled out a new policy saying that it would label or provide warning messages about COVID-related misinformation, even when that information was not a direct contradiction of health authorities and did not directly violate the company's policies.
The company said at the time that it might expand the labels to other issues, such as other health-related hoaxes or situations where there is a risk of harm.
Tuesday's tweets on elections represent an expansion into a new area of election-related misinformation.
The Washington Post