Regulating social media content: Why AI alone cannot solve the problem

ARTICLE 19

Many of us were transfixed by the coverage of Mark Zuckerberg’s testimony before the United States Congress back in April 2018. During a closely watched two-day hearing, Zuckerberg – when asked about Facebook’s standards for content removal or filtering on the platform – stated that Facebook’s standard for offensive speech includes speech “that might make people feel just broadly uncomfortable or unsafe in the community.”

This was an example of how reactive, agenda-driven declarations of intent create more problems than they solve.

Governments around the world are scrutinising how companies regulate speech on their platforms and are actively moving towards more regulation of speech online. For example, Germany’s recent Netzwerkdurchsetzungsgesetz or Network Enforcement Act provides for substantial fines for social media companies that fail to remove content considered to violate the German Criminal Code, without a court ruling to determine its legality.

The Indian government recently announced its intention to deploy an AI-powered social media analytical tool that will analyse user sentiments across major platforms, to create a “360 degree view of the people who are creating buzz across various topics”.

Over-broad restrictions on freedom of expression arising from the regulation of speech online have to be challenged. And the use of technological tools to deal with complex problems like fake news, hate speech and misinformation falls far short of the standards required to protect freedom of expression.

This is in part because of the way the problems are being defined, and the approach being taken to address vague concepts such as fake news, hate speech and misinformation. These terms are too broad and susceptible to arbitrary interpretation, and they become particularly dangerous when State actors assume responsibility for how they are interpreted. For example, Malaysia’s government introduced a “fake news” bill in March 2018 that sought to criminalise speech critical of government conduct or expressing political opposition.

A more significant challenge is posed by the way attention has been focussed on technological tools like bots and algorithms to filter content. While useful for rudimentary sentiment analysis and pattern recognition, these tools alone cannot parse the social intricacies and subjective nature of speech, which are difficult even for humans to grasp. The way AI tools are developed and deployed makes the risk to freedom of expression even greater: human bias in the design of these systems means we are far from datasets that reflect the complexity of tone, context and sentiment of the diverse cultures and subcultures in which they function.
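To make the limitation concrete, consider a deliberately simple sketch (not any platform’s actual system, and far cruder than production classifiers) of the keyword-matching approach that underlies many automated filtering proposals. The blocklist and messages are invented for illustration; the point is that matching words without understanding context flags condemnation and quotation alongside abuse, while missing hostility expressed in unlisted words.

```python
# Illustrative only: a naive keyword-based filter of the kind often
# proposed for automated moderation. The blocklist is hypothetical.
BLOCKLIST = {"idiots", "scum"}

def naive_filter(message: str) -> bool:
    """Return True if the message would be flagged for removal."""
    # Lowercase each word and strip surrounding punctuation before matching.
    words = {w.strip(".,!?\"'").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

# A direct insult is caught...
print(naive_filter("You are all scum!"))                          # True
# ...but so is a journalist quoting the insult in order to condemn it...
print(naive_filter('He called the protesters "scum". Unacceptable.'))  # True
# ...while an equally hostile message using no listed word passes.
print(naive_filter("People like you don't deserve to exist."))    # False
```

The false positive and the false negative here are exactly the context and tone problems the paragraph above describes: the filter sees tokens, not meaning.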

This raises the question: what should our response to content-related problems on platforms be?

On one hand, content moderation policies help users to understand how content is assessed and classified, and this in turn opens up the possibility of dialogue, feedback and assessment. If developed on objectively justifiable criteria rather than ideological or political goals, and in consultation with civil society, content moderation policies can foster an enabling environment for freedom of expression.

On the other hand, content moderation policies are likely to promote multiple, diverging interpretations of what constitutes fake news, disinformation or illegal content. Standards developed by States or private platforms would attain the status of ‘truth’ and inevitably lead to a situation where the ability to exercise freedom of expression and opinion is severely fractured.

Most content moderation policies today err on the side of caution in the categories of content they ban: they impose restrictions on freedom of expression that are far wider than those permitted under international human rights law.

ARTICLE 19 has produced principles for content regulation in Side-stepping rights: Regulating speech by contract, arguing that although social media companies are in principle free to restrict content on the basis of freedom of contract, they should respect human rights, including the rights to freedom of expression, privacy and due process.

The policy sets out the applicable standards for the protection of freedom of expression online, particularly as they relate to social media companies, and provides analysis of selected Terms of Service of Google, Facebook, Twitter and YouTube.

In its recommendations, ARTICLE 19 suggests how companies should respect basic human rights standards, and argues that any restriction on the right to freedom of expression must be in accordance with international human rights law.

Disinformation is an economic, social and legal problem that can be addressed by understanding how incentives work across all three domains. Empowered users are an essential part of the response to misinformation: we, the users, have to understand content and make informed decisions about what we read and how we perceive it.

If using technical tools is a priority, this must be done transparently, recognising and accounting for AI’s inherent inability to understand context and rhetorical nuance: such tools can only solve part of the problem. We must do the rest.