ARTICLE 19 welcomes the critical discussion on the role of regulation and the responsibility of States, internet companies and social media platforms in tackling online hate speech against minorities.
Concerns about the relationship between online manifestations of hate speech and offline acts of violence and crimes against minorities are legitimate and should be investigated. However, ARTICLE 19 is concerned about the restrictive approaches adopted by many States, which regulate and prohibit hate speech under overly broad and vague concepts that fail to comply with the three-part test under Article 19(3) and Article 20(2) of the International Covenant on Civil and Political Rights. It is therefore crucial to recall that the obligation of States is limited to the prohibition of incitement to violence, discrimination and hostility as interpreted in the Rabat Plan of Action, which establishes that criminal responses should be a last resort and that civil or administrative measures should also be relied upon.
Those instances of online hate speech that affect specific rights of minorities, including their right to privacy, right to life, or right to engage in public debate on issues of concern to them, may be subject to limitations on freedom of expression, provided that States strictly meet the test of legality, necessity and proportionality.
ARTICLE 19 recognises that derogatory and discriminatory speech perpetuates stigmas and prejudices. However, legal prohibitions of offensive language, or obligations imposed on companies to detect and remove it, will not make the problems of hatred, discrimination and stigma disappear. States should concentrate their efforts on comprehensive public policy responses that address the root causes of discrimination, which cannot be eradicated by legal sanctions. Content-based regulations and mandatory removals imposed on social media platforms, as well as the use of automated tools to detect and take down hate speech content online, have proven counterproductive to the right to freedom of expression, and particularly problematic for the rights of those targeted online. Some removals can also have counterproductive effects for the purposes of investigations.
As for social media, ARTICLE 19 is deeply concerned about the impact of the dominant position of a few internet platforms that hold enormous power over what individuals can say online. Due attention should be paid to the problematic business models of these companies, driven by the maximisation of engagement as their main route to profit; to the effects these models have on users' rights to freedom of expression, non-discrimination and other rights; and to the fact that users lack viable alternative platforms to switch to. This impact may be greater in the case of minorities, who already face structural barriers in the use of these technologies and have even fewer means of protection from content moderation practices that fail to respect human rights.