Self-regulation and ‘hate speech’ on social media platforms

On the occasion of the European conference on Media Against Hate, ARTICLE 19 proposes how models of independent media self-regulation could be adapted to address ‘hate speech’ on social media. In this briefing paper, ARTICLE 19 explores issues that would need to be considered, and reinforces recommendations on freedom of expression-compliant approaches to regulating social media.

Dominant social media companies hold considerable power over the flow of information and ideas online. This is particularly important with respect to how they address ‘hate speech’ on their platforms. Over the last few years, there have been increasing calls globally for social media companies to be treated as “publishers”. At the same time, these debates are hindered by the difficulty of appropriately classifying the role these companies play in the modern media landscape.

ARTICLE 19 has long argued that international freedom of expression standards allow for different types of regulation of the media, depending on the type of media. Self-regulation has been the preferred approach for print media: it is considered the least restrictive means available and the best system for promoting ethical standards in the media. An effective self-regulation mechanism can also reduce pressure on courts and the judiciary. Generally, when a problem is effectively managed through self-regulation, the need for state regulation is eliminated.

A number of recent legislative initiatives on ‘hate speech’, most prominently the 2017 German NetzDG law on social media, make reference to some form of self-regulation in relation to social media. Voluntary mechanisms between digital companies and various public bodies addressing ‘hate speech’ and other issues, such as the EU Code of Conduct on hate speech, also make reference to self-regulatory models. However, ARTICLE 19’s analysis shows that these initiatives fail to comply with international human rights law: they rely on vague and overbroad terms to identify unlawful content, they delegate censorship responsibilities to social media companies with no real consideration of the lawfulness of content, and they fail to provide due process guarantees.

ARTICLE 19 therefore proposes exploring a new model of effective self-regulation for social media. This model could include a dedicated “social media council” – inspired by the effective self-regulation models created to promote journalistic ethics and high standards in print media. We believe that effective self-regulation could offer an appropriate framework to address current problems with content moderation by social media companies, including ‘hate speech’, provided it also meets certain conditions of independence, openness to civil society participation, accountability and effectiveness. Such a model could also allow for the adoption of tailored remedies, without the threat of heavy legal sanctions.

ARTICLE 19 is aware that the realisation of this model may raise certain practical challenges and problems. However, we believe that these should be further debated and explored. In today’s digital societies, we must engage collectively around important issues concerning the right to freedom of expression and human rights more broadly.

This briefing was produced as part of “Media Against Hate”, a Europe-wide campaign initiated by the European Federation of Journalists and a coalition of civil society organisations, including the Media Diversity Institute, ARTICLE 19, the Croatian Journalists Association, Community Media Forum Europe, Community Medien Institut and Cooperazione per lo Sviluppo dei Paesi Emergenti.