Today, ARTICLE 19 launches the Social Media 4 Peace handbook, a guide to content moderation and freedom of expression.
In a rapidly evolving digital landscape, the role of social media platforms has been transformed. Once heralded as champions of free expression, platforms have become influential gatekeepers that control the flow of information. With a handful of major companies wielding immense power over what billions of users can see and share online, concerns about freedom of expression, media diversity, and privacy breaches have grown.
Offering a concise overview of the current state of content moderation on the largest social media platforms and its impact on freedom of expression, this practical handbook dissects the complex intersection of freedom of expression, content moderation, and the business models of these tech giants.
The handbook, produced by ARTICLE 19 under the UNESCO project Social Media 4 Peace funded by the European Union, includes numerous concrete examples and cases to illustrate the questions raised by different standards, practices, and policies pertinent to content moderation. It builds upon ARTICLE 19’s policies and expertise in content moderation and platform regulation, and reflects ARTICLE 19’s long-standing call that measures responding to problematic content, including ‘disinformation’ and ‘hate speech’, must always conform to international standards on freedom of expression and other human rights.
Chantal Joris, Legal Officer at ARTICLE 19 said: ‘We hope the handbook will become a valuable resource for civil society actors and other stakeholders who are involved in content moderation issues or seeking to engage in the topic, and that it will support their engagement with platforms and policy-makers. It is essential that local civil society voices are heard in these discussions, as they know best how platforms’ practices affect the communities and human rights in their countries.’
The handbook complements the extensive research conducted by ARTICLE 19 under the Social Media 4 Peace project, which focuses on Bosnia and Herzegovina, Indonesia, and Kenya. The research revealed a disconnect between the practices of these global companies and the local communities affected by their content moderation decisions. Social media companies routinely overlook the voices of local communities and fail to consider cultural, social, historical, economic, and political context when moderating content.
The findings were clear: flawed content moderation practices can transform social media platforms into hotbeds of ‘disinformation’, ‘hate speech’, and discrimination, with significant implications for communities and for peace and stability in post-conflict countries.
Building local coalitions in Kenya, Indonesia, and Bosnia and Herzegovina
In Kenya, Viktor Kapiyo from the organisation KICTANet said, ‘harmful content continues to persist online even when reported’, adding that ‘the content removal system lacks effectiveness, particularly in local languages’.
However, the challenges of content moderation also have another side. Kenyan social justice activist Mutemi Kiama Edwin, who has seen his posts consistently taken down, says human involvement is vital. He points out that it makes little sense for someone based in Ireland or California to make decisions about content, given that they will not ‘understand the nuances of the Kenyan conversation’. Quite simply, he says social media companies must ‘respect their users much more’.
Social media has been a crucial tool for so many communities: it has enabled people to become citizen journalists, exercise their freedom of expression, forge connections with one another, and spearhead democratic revolutions across the world.
But we now know that, despite positive experiences, social media can pose significant dangers. So, what’s the solution? Through building local coalitions and calling for the establishment of transparent policies that draw on the local expertise of people and groups using social media, the Social Media 4 Peace project aims to support freedom of expression while addressing the profound challenges that social media companies’ content moderation processes present.
In Indonesia, Social Media 4 Peace research looked in particular at social, political, and cultural divides between Muslims and Christians, as well as online gender-based violence. Social media companies have failed to adequately explain what factors they use to determine the potential dangers posed by language and speech that may have different meanings in different social and cultural contexts – again, local knowledge and human experience were identified as essential to tackling these complex cases within a reasonable timeframe.
Wijayanto from Indonesia’s Center for Media and Democracy (LP3ES) pointed out that social media platforms have established community standards and rules concerning hate speech, and yet they fail to meet them. The recently launched Indonesian Digital Space Democratization and Moderation Coalition has been established to address these limitations.
In Bosnia and Herzegovina, Social Media 4 Peace looked at inter-ethnic hatred, inflammatory narratives, gender harassment and assaults against under-represented groups, as well as ‘disinformation’ campaigns. It documented the role that fact-checking by local experts can play in content moderation. The platform Raskrinkavanje rated approximately 10,000 news articles and photographs, encompassing more than 2,000 disinformation sources on Facebook, as well as other instances of false or misleading content. Fact-checkers identified emerging challenges and assessed the rising dissemination of disinformation, including the impact social media influencers – many of them based outside the country – can have on information, media, and political environments. Researchers tracked the growing sophistication of many of these disinformation campaigns, noting that they were often financially lucrative, and found that the most popular and most widely disseminated content was often the most extreme and sensational. This in-depth work helped forge a path for platforms to base takedown decisions on solid research conducted by people who know the political and social context of the country.
It’s time for platforms to talk to civil society
Despite problems, including threats against journalists working with them and ongoing tensions between the media, the press council and the work of Raskrinkavanje, the fact-checkers said they felt their work made a difference to levels of disinformation.
In the three pilot countries, civil society said they felt empowered to engage with platforms to ensure the rights and concerns of their communities are properly respected. Over the course of this year, the Social Media 4 Peace project has also partnered with a local organisation in Colombia to investigate and evaluate the possibility of a coalition. ARTICLE 19’s Social Media 4 Peace report on Colombia will be published in early autumn 2023.
An overall objective of the project is to strengthen the resilience of societies to potentially harmful content that spreads online, in particular hate speech inciting violence, while protecting freedom of expression and promoting peace through digital technologies, notably social media.
ARTICLE 19 will continue to advocate for these conversations and policy changes, and to raise public awareness about social media and content moderation processes within these societies.
Local civil society groups play a pivotal role in advancing human rights in the digital space. It’s time for platforms to show they are ready for dialogue too.
Read and download the handbook
Read the reports:
Kenya, in English
Coming soon: the Social Media 4 Peace report on Colombia