ARTICLE 19 and the International Justice Clinic at the University of California, Irvine School of Law jointly responded to the United States Federal Trade Commission’s (FTC) request for public comment regarding ‘technology platform censorship’. We warn that government pressure on social media platforms to reduce content moderation – or to align content moderation practices with the government’s preferred viewpoints – will undermine, not protect, freedom of expression online. Instead, the FTC should promote content moderation practices that are transparent, accountable, and grounded in international human rights standards.
In our submission, we emphasise that content moderation practices – both their rules and their enforcement – should be grounded in international human rights law. This requires transparency, consistency, and respect for users’ rights to understand and appeal content moderation decisions.
As human rights defenders, we have consistently criticised current content moderation practices, as major platforms continue to fall short of their human rights responsibilities. However, equating all content moderation with censorship is misguided. Content moderation grounded in human rights law may require the banning of certain types of expression, such as hate speech that constitutes incitement to discrimination, hostility or violence. Examples such as Facebook’s role in Myanmar demonstrate the harmful consequences of a completely hands-off approach to content moderation, particularly in conflict settings. The absence of content moderation can also lead to the silencing of marginalised voices and viewpoints online.
Thus, what is needed is not less content moderation but better content moderation, and a serious commitment from social media companies to upholding human rights – not just in words but in practice. Yet recent shifts amongst social media companies have signalled a change for the worse.
Regulators can and should play a role in ensuring that platforms respect human rights and safeguard online freedom of expression. However, there is little to suggest that this is what the FTC is seeking to do. Instead, its actions suggest an intent to exert extra-legal pressure on platforms to align their moderation practices with the government’s preferred viewpoints, rather than to promote content moderation grounded in human rights principles.
Any extra-legal pressure by the US government to intimidate social media companies into scaling back their use of content moderation will not result in more freedom of expression on these platforms. Instead, it risks creating an environment prone to politicised moderation, where platforms may feel pressured to suppress viewpoints perceived to oppose those of the government in power – undermining, rather than protecting, freedom of expression online.