US: Should Twitter disinfect Trump?

Today, Twitter, Facebook, Google and YouTube face a difficult decision: should they remove footage of President Donald Trump suggesting that disinfectant could be a treatment for coronavirus? In his daily press briefing on 23 April, Trump said: “I see the disinfectant where it knocks it out in a minute. One minute. And is there a way we can do something like that, by injection inside or almost a cleaning?” Clips of these comments were reported in the mainstream media and subsequently shared on social media, reinvigorating calls for social media companies to remove problematic content posted by politicians.

Like so many other things in everyday life, the coronavirus pandemic is a game changer for how tech companies handle content posted by politicians.

Perhaps for the first time, Twitter and Facebook have taken action against heads of state for spreading misinformation. In March, Twitter took down a video posted by the Brazilian President, Jair Bolsonaro, in which he praised the use of the antimalarial drug hydroxychloroquine for treating coronavirus. In the video, Bolsonaro also encouraged an end to social distancing, saying that Brazilians wanted to get back to work. Facebook followed suit and removed the video from its site shortly thereafter. A week earlier, Twitter had taken down a tweet by the Venezuelan President Maduro promoting a “natural brew” to treat coronavirus.

Although these removals are part of a wider effort by social media companies to curb misinformation about coronavirus, they mark a striking departure from these companies’ previous policy and practice. Twitter had previously stated that it would not take down content posted by politicians in breach of the Twitter Rules, so that they could be held to account for what they say. Meanwhile, Facebook has refused to fact-check political ads and has been, rightly, reluctant to position itself as the arbiter of truth when it comes to political speech. YouTube, for its part, has said it will remove anything that is “medically unsubstantiated”.

During the pandemic, tech companies have been under significant pressure to remove misinformation about coronavirus more aggressively. Twitter has broadened its definition of harm “to address content that goes directly against guidance from authoritative sources of global and local public health information”. Meanwhile, Facebook has said that it removes COVID-19 related misinformation that could contribute to imminent physical harm.

So far so good. The removal of false information that is likely to result in loss of life may well be justified under international law for the protection of public health. There is no reason in principle why politicians should be exempt from community guidelines. This is arguably even more important given that, as public figures, they can reach millions of followers on social media. Politicians have an obligation to ensure that their populations have accurate information about government policies and how public health crises are being tackled, and they must refrain from spreading misinformation or running disinformation campaigns.

Yet, there are also powerful arguments against the removal of speech that elected leaders post on social media platforms. First and foremost, they must be held to account. The public have a right to know how politicians position themselves on the most pressing issues of the day, and the best way to hold them to account is for their content to stay up and be challenged directly as part of a broader debate.

Secondly, in a fast-moving situation such as the coronavirus crisis, there must be room for debate. What if a politician calls for mandatory facemasks for an entire population, contrary to WHO guidance? Should Facebook or Twitter remove that leader’s content? These are difficult questions with no easy answers.

Trump himself has not tweeted his suggestions about treating coronavirus with disinfectant or UV light, although the full footage of the press briefing was shared by the White House Twitter account. Clips of it are now being shared by news outlets and social media users, many of whom deride and debunk his comments. This highlights another issue in combating misinformation about coronavirus. Should tech companies censor those reporting on misinformation, or attempt to distinguish between social media users who endorse the President’s comments and those who condemn them? Either approach would severely restrict debate about a press briefing that has already been watched by millions of Americans.

Twitter and Facebook’s move on political content is a game changer. If they are prepared to take down politicians’ comments, one has to ask: where do they stop? Are they prepared to treat all politicians even-handedly, including the US President? The question for tech companies is how far they are willing to go to uphold the rules that they say are there to protect their users.

We should also be worried about the vast potential for mission creep. The removal of content about anti-lockdown protests should be a warning shot to us all. Before it is too late, tech companies owe their users, and the public at large, maximum transparency about their decisions: how they are made and why. Only through greater transparency can we hope to get better and more consistent policy and decision-making in content moderation. If tech companies want to regain users’ trust, the coronavirus crisis is an opportunity for them to step up, be transparent and show that they respect human rights and content moderation best practices such as the Manila Principles on Intermediary Liability or the Santa Clara Principles on Transparency and Accountability in Content Moderation. Whether or not they can do so will no doubt be tested as EU regulators ramp up preparations for a Digital Services Act.

This article originally appeared on Little Atoms