Now we know that misinformation is really contagious
The spread of coronavirus was matched by the spread of misinformation about its origins, symptoms and treatment. Whether a WhatsApp message about drinking warm water to kill the virus, or a Facebook post blaming 5G masts, most of us have probably seen unverified information about Covid-19. Those who have followed the anti-vaxxer debate will know that health misinformation is not new, but for the first time we are witnessing a single health issue, affecting millions of people around the world, generate both true and false posts on this sheer scale.
ARTICLE 19 has encouraged governments, the media and social media companies to publish and promote reliable, verified information to counter misinformation. However, we have warned against criminal sanctions for those who share misinformation; such sanctions should be reserved for only the most serious speech offences.
Social platforms are trying to take it all down automatically
In response to the high volumes of misinformation online, the main tech platforms, Facebook, Twitter and YouTube, announced efforts to take more of it down. With many content moderators unable to work remotely, this meant increased use of algorithms to take down content and close accounts. Without any human review, this approach risked an exponential rise in unaccountable take-downs.
In April, 75 organisations called on social media platforms to preserve data on content moderation. We know that prior to the pandemic, content has often been taken down in error. It is often those who already face discrimination offline, who are disproportionately affected. So it is vital that there is data and evidence about how, what and why content was removed during this period.
Racist attitudes and toxic speech have flourished
In the initial stages of the outbreak, there was a spike in abuse targeted at Chinese and south-east Asian people, but subsequently the pandemic has been used as a pretext for other hate speech, including antisemitic conspiracy theories.
In India, the virus has been exploited by Hindu nationalists to spread hatred about Muslims. In Malaysia, there have been attacks on Rohingya refugees. Much of this hate speech is the extension of pre-existing racism and prejudice. The contagious nature of a virus that has crossed borders appears to have encouraged the blaming of ‘foreigners’ for its spread. That fear, along with economic uncertainty, is contributing to an environment where toxic speech can flourish.
Political leaders are not immune from online sanctions
The tech companies’ content moderation policies were publicly tested when political leaders actively shared misinformation. Both Twitter and Facebook removed posts by President Jair Bolsonaro of Brazil, which promoted hydroxychloroquine as a cure for Covid-19. A few weeks later, questions were asked about whether online footage of US President Donald Trump suggesting that internal consumption of bleach could get rid of the virus should be removed.
Many would argue that what political leaders say is a matter of public record and we need to see it so that we can hold them to account. The companies’ change of policy over Covid-19 reignited this debate, which continued when one of Trump’s tweets about Black Lives Matter protests breached Twitter’s guidelines by glorifying violence.
And it seems Covid-19 is no laughing matter
Some governments have taken a hard line against ‘fake news’, often using existing repressive laws to bring criminal proceedings against people who have allegedly shared misinformation.
In Bangladesh, the controversial Digital Security Act was used to arrest two university teachers who criticised the former health minister for leaving Covid patients without sufficient care. However, the Spanish authorities went one step further with a criminal action against a Twitter user who made a joke about the virus. The police justified the arrest by saying that “creating ‘joke’ images and spreading them over the internet is harmful and does not help in preventing the spread of the virus”.
With all this going on, how should the oversight of content moderation be organised?
A deeply worrying current trend is a common ambition amongst governments to demand that social media be instantly “cleaned” of all types of “harmful content”, reaching well beyond the categories of content prohibited by law. Targeting content that is very vaguely defined and requiring its almost instant removal is bound to undermine freedom of expression.
The pandemic has highlighted some of the ways that misinformation and hate speech can be tackled during a global crisis. Firstly, it’s important that governments provide as much verified information as possible and do not share misinformation or exploit a crisis for political purposes. Promoting a diverse media environment, robust freedom of information laws and protections for whistleblowers also encourages transparency. Journalists can support this by proactively reporting on disinformation, propaganda and discrimination, and by ensuring their work does not use stereotypes or unnecessarily refer to race, nationality or ethnic origin. Social media companies should have clear and easily understood policies, and should notify users when they remove content, restrict its reach, or block accounts. They can also work with partners, including fact-checkers, and ensure that they promote verified information.
Public authorities have often delegated the task of removing unlawful content to social media platforms without any form of oversight or any consideration for international standards. ARTICLE 19 has repeatedly denounced the inherent risks of such an approach. Now legislators tend to adopt a harsher stance, claiming that the challenges of the times call for stricter public control of social media platforms. We have developed a detailed set of recommendations for the EU Digital Services Act as a contribution to the European Commission’s consultation over the summer.
On the business side, Facebook’s creation of the Oversight Board, an external body that will review certain content moderation decisions on the platform, is an interesting initiative. ARTICLE 19’s proposal to create Social Media Councils at the national level shares similarities with Facebook’s initiative, but it differs significantly in that SMCs are based on international human rights standards, are fully independent of any one company, have high standards of openness and transparency, and will operate in closer proximity to local users and the complex contexts in which content moderation decisions arise. We have engaged in extensive discussions and consultations on the SMC concept, which has now become an integral part of current international academic and policy debates on the future of social media regulation.
What is the cure? Protect speech, challenge hate, and act against misinformation.
Most of us use social media platforms to access and share information – in fact, it’s quite hard to imagine what it would be like if this right were completely taken away from us. But having content taken down or your account closed happens more often than you would think – especially if you are a woman, a member of a racial or religious minority, or LGBTQI. To help protect all of us from this kind of censorship, join ARTICLE 19’s Missing Voices campaign, which calls for more transparency and accountability from social media platforms.
You can also help spread the word about how to combat hate speech. This is a deeply troubling trend, and one that needs more understanding and courage to tackle directly. There are many resources out there; one less well-known example is Challenge Hate. Please take a look and share.
Subscribe to Inside Expression to receive our update on the 19th of each month at: bit.ly/A19Updates.