Facebook congressional testimony: “AI tools” are not the panacea


On 10 and 11 April 2018, Facebook CEO Mark Zuckerberg’s testimony before the United States Congress revealed the company’s increasing reliance on ‘Artificial Intelligence (AI) tools’ to solve some of its most complex problems: from hate speech to terrorist propaganda, and from election manipulation to misinformation. Given the potential of AI systems to undermine freedom of expression and privacy, it is important to analyse the perils of Facebook’s current approach to content moderation, and the proposed ability of ‘AI tools’ to moderate speech.

Lopsided interest in tools, but not standards

While the move towards ‘AI tools’ to solve problems was mentioned repeatedly, the standards these tools should adhere to appear to have been given far less thought. The question of what constitutes legitimate speech is often contextual, subjective, and complicated, which makes the criteria for takedown crucial. When pressed for a definition of what speech is deemed offensive or controversial enough to warrant removal, Mark Zuckerberg stated that it would be speech “that might make people feel just broadly uncomfortable or unsafe in the community.”

This is significantly lower than the accepted standards for restriction of speech under international human rights law. The right to freedom of expression is a fundamental right – crucial to the functioning of democracies. Restrictions on freedom of expression must: (i) be provided by law; (ii) pursue a legitimate aim; and (iii) conform to the strict tests of necessity and proportionality. The standards for takedown within Facebook fall well below this threshold and appear to be defined ad hoc. Given the repercussions of overbroad restrictions and vague standards for content removal, it is imperative that the company articulate its community standards in accordance with the minimum requirements of international human rights law.

Problematic assumptions of ‘AI tools’ as the key to legitimate content removal

An overarching assumption expressed throughout the hearings is that ‘AI tools’ that can proactively flag problematic content are more desirable and effective than reactive takedowns effected by human moderators. This assumption warrants closer consideration.

At present, AI systems are not technically equipped to understand context in speech or social intricacies, let alone evolving and subjective social constructs such as hate speech. While machine learning systems can recognise patterns and may even carry out rudimentary sentiment analysis, understanding tone, context, sarcasm and irony – all key to identifying problematic speech – remains a long way off.
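To illustrate the limitation, consider a deliberately simplified sketch: a lexicon-based sentiment scorer (using hypothetical word lists, not any system Facebook has described) that counts ‘positive’ and ‘negative’ words and nothing else. It reads a sarcastic attack as benign, and flags counter-speech that merely quotes hostile terms.

```python
# A minimal sketch (hypothetical word lists, not any real moderation system):
# a lexicon-based scorer that counts sentiment words and nothing else.

POSITIVE = {"great", "wonderful", "love", "proud"}
NEGATIVE = {"hate", "hostile", "awful", "vile"}

def naive_sentiment(text: str) -> int:
    """Return (#positive words - #negative words); a score >= 0 reads as benign."""
    tokens = [word.strip(".,!?").lower() for word in text.split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

# A sarcastic attack scores as benign: every surface word is "positive".
print(naive_sentiment("Oh great, another wonderful post from those people. We love that."))   # 3

# Counter-speech that merely quotes hostile terms is flagged as negative.
print(naive_sentiment("This hate and hostile rhetoric is awful and must be challenged."))     # -3
```

Real systems are more sophisticated than this, but the underlying difficulty is the same: surface features of text are a poor proxy for intent and context.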

The assumption also rests on the belief that ‘AI tools’ – once they reach a greater level of sophistication – will be neutral and efficient in their treatment of content. Growing research in the field indicates otherwise. These systems could potentially amplify the problem, as they absorb the values of the humans building them and the biases embedded in the datasets they are trained on, without being able to correct for either. The core issue of disagreement between perspectives cannot be solved by technology, regardless of the level of sophistication achieved.

Offering technological solutions to problems of human nature

Given the complex underpinnings of many of Facebook’s issues, the present approach of offering technological solutions to human problems will inevitably fall short. Beyond purely technical limitations, the complexity lies in opinion, context, and perception. Even within Facebook’s internal guide for human content moderators, the contours of protected speech are complicated. These difficulties will only be amplified by technological tools.

Relying on AI systems for the removal of content, now or in the future, jeopardises freedom of expression by applying broad definitions and vague standards to what constitutes problematic content. The use of AI tools in this context can also affect the exercise of a number of other rights, including the right to an effective remedy, the right to a fair trial, and the right to freedom from discrimination.

Hate speech in particular is an emotive concept. It has no definition in international human rights law, and no consistent treatment across regional or national legal frameworks. As ARTICLE 19 has pointed out in earlier work, the best response to hate speech is not removal of content, which often has a detrimental impact on freedom of expression, but counter-speech. Censorship of hate speech has rarely been shown to address the root causes of hatred, whereas education and dialogue have yielded better results. Furthermore, giving a single entity – whether a government or a company like Facebook – the power to determine what counts as legitimate political speech can have grave consequences for freedom of expression. This is even more true in the case of a monopoly like Facebook.

Focus on winning the race, with no acknowledgement of responsibility

Mark Zuckerberg pushed for encouraging innovation in facial recognition, an AI technology he argued must be supported lest American companies “fall behind Chinese competitors and others around the world who have different regimes for new features like that”. He also pushed for the use of ‘AI tools’ in election security, calling it an ‘arms race’ with Russian operators trying to exploit the social network.

The assumption that such innovation amounts to competitive progress bears deeper discussion. Facial recognition technology has serious repercussions for privacy, anonymity, and freedom of expression, and it is more often than not deployed in ways that are beyond user control and redress. The claim that such technology should be supported in order to keep up with other jurisdictions assumes that winning a race is important, even if it involves a systematic and pervasive erosion of internationally recognised human rights. A more deliberate approach is essential.