The admission by Facebook that the famous picture of a young naked girl burnt by napalm has historical significance is just another episode in a long-term struggle for the protection of freedom of expression online. Democratic societies are confronted with a complex learning process to create the appropriate processes to deal with the influence of social media giants over public debates.
In the centre of the frame, among other children running away from a napalm bombing, we see a naked 9-year-old girl in tears. Behind the small group of crying children, a few soldiers are walking; in the background, clouds of smoke signal a fire. It is a strong image: even without any knowledge of the context, one is immediately moved by the fear and suffering of these children fleeing the destruction of their village. It is a world-famous photograph, taken by war photographer Nick Ut in Vietnam in 1972 (immediately after taking it, he put down his camera and carried the child to a hospital).
Last week, Facebook decided that this iconic image, one of the most famous pictures of the 20th century, should be removed from their platform because their community guidelines prohibit photographs that display fully nude genitalia or buttocks. This decision triggered a massive backlash that brought the incident to the world’s attention. The outcry was particularly intense in Norway as the image was originally shared by a Norwegian writer in a post about pictures that changed the history of warfare (Nick Ut’s picture of young Kim Phúc is often deemed to have contributed to ending the war in Vietnam). The Norwegian newspaper Aftenposten joined in the protest against Facebook’s decision with an open letter from its editor-in-chief. When the Prime Minister weighed in on the debate by reposting the picture, Facebook removed the post from her page.
Facebook then changed its mind and declared that they now recognised that the ‘global and historical importance’ of the photograph outweighed ‘the importance of keeping nudity off Facebook.’ Giving in to a solid push from its users, the social media giant suddenly saw common sense and bowed to the evidence. As Kim Phúc has declared about the controversy, it is indeed sad that some ‘would focus on the nudity in the historic picture rather than the powerful message it conveys.’
The truth is, this episode was really just a skirmish in a long-term struggle to protect the free flow of information and ideas in the online environment. Currently, Facebook is being sued in France in relation to the removal of a famous painting by Courbet: when a French teacher posted ‘The Origin of the World’, a representation of female genitalia, his account was immediately suspended. In February 2016, French courts asserted their jurisdiction over Facebook in spite of a clause in the platform’s terms and conditions that stipulates that all lawsuits should be heard in California. At the time of writing, it remains to be seen what the French judges will decide. In Brazil, the removal of a poster for a photography exhibition that showed naked indigenous people prompted a similar debate in April 2015.
Nudity is not the only cause of controversy: the depiction of violent events in videos distributed over the social media platform can also contradict the community guidelines, a risk made even more significant by the newly developed live video capability. In July, the removal of a video that showed an African American man dying from violence inflicted by US police officers caused an outcry that led Facebook to quickly reinstate the clip.
Some argue that such situations do not amount to a violation of freedom of expression. After all, these are not stories of brutal censorship by public authorities, prohibiting critical speech and dispatching journalists to prison, or indeed the cemetery. It is clear that the removed material can still be found elsewhere, in newspapers, books, libraries or even on other websites. Protesters have claimed that Facebook has become a “publisher” instead of a simple hosting platform, but where does the argument lead? Shouldn’t private businesses remain free to govern their own realms, to freely exert any editorial responsibility they see fit? Even if such removals impact the online availability of information and ideas, what power do we have, as users, but to engage in virtual protests, click the angry emoji and expect the corporations to ‘do the right thing’ – out of benevolence for their users’ preferences?
But the control of content by social media platforms does raise genuine concerns from a legal point of view.
From the perspective of international human rights law, the obligations of States extend beyond abstaining from interference with the exercise of freedom of expression: States also have positive obligations to ensure the effective protection of human rights, including in relationships between private parties. On this topic, the Institute for Information Law of the University of Amsterdam has recently published a solid study on fundamental rights and digital platforms. In other words, States may have to regulate the behaviour of private actors where necessary to guarantee the effectiveness of the rights of individuals to freely receive and impart information and ideas.
Is Facebook, then, in such a special situation that States should take action to protect freedom of expression?
This question must be approached with extreme caution, notably because of the current temptation of public authorities to abandon the regulation of online content, leaving it in the hands of digital platforms. Clearly illustrative of this perilous trend is the Code of Conduct on Hate Speech recently adopted by the European Commission and IT companies such as Facebook, Google and Microsoft. This initiative has been criticised by civil society, and ARTICLE 19 published a detailed analysis of the Code of Conduct. Indeed, this form of regulation leads to a situation in which private businesses are asked to regulate speech according to their own internal standards, instead of these decisions being made by public authorities under the rule of law, according to human rights standards. It creates a framework where speech is controlled and censored by private actors without any consideration for due process or fundamental rights such as the right to freedom of expression.
In relation to the regulation of content hosted by digital platforms, three key principles must be respected:
First: Limited Third-Party Liability
In order to preserve the positive impact that social media and other intermediaries have upon the free flow of information and ideas, it is necessary to reaffirm and further specify the principles of limited liability for the hosting of third-party content.
There is no doubt that digital platforms facilitate the dissemination of information and ideas on a massive, and sometimes disruptive, scale. Facebook and others have allowed individuals to contribute to the news, comment upon the coverage of current events by mainstream media, and organise social movements. In the short history of the World Wide Web, the development of services which host third-party content has relied on a legal framework which shields providers from liability: as long as they have no actual knowledge of unlawful content being hosted on their servers, Internet intermediaries should be immune from legal consequences resulting from material produced and uploaded by third parties (for more detail here, see the Manila Principles on Intermediary Liability).
Facebook should not be liable for potentially unlawful material posted on its servers by its users – that is, until it comes to have actual knowledge of the unlawfulness. A determination of unlawfulness should only be done by an independent and impartial court or adjudicatory body in respect of due process and fundamental rights. The delicate balance of freedom of expression and opposing interests is the responsibility of democratically-elected public authorities.
Second: Freedom of Expression in Terms and Conditions
Facebook and other digital giants should respect international freedom of expression standards in their Terms and Conditions.
Although social media platforms have arguably become the main vehicle for individuals to exercise their right to free expression, they have increasingly adopted lower free speech standards than those permitted under international standards on freedom of expression. In practice, these low standards are often the result of social media platforms adapting their community standards to domestic legal requirements which fall below international standards on freedom of expression. They may also be driven by the demands of the advertisers who do not want their image tarnished by association with content deemed objectionable.
While low free speech standards enable companies to grow their user-base by creating “safer” online environments, they also turn these quasi-public spaces into much more sanitised environments where freedom of expression is limited not by principles of international law but by the decisions of these companies.
Third: Clarity and Transparency
There needs to be clarity and transparency about decision making processes for content removal.
There are many other faces to Facebook and other digital giants. While they continue to act as host for third-party content, they simultaneously engage in other types of activity. They routinely commission or acquire content to increase their reach and appeal (e.g. Twitter has bought the rights to stream NFL games). They rely on a combination of human decisions and automated decision-making processes (i.e. algorithms) to monitor and influence the findability, visibility and accessibility of the material that is posted on their servers. Even through the opacity of practices, and the veil of trade secrets which covers algorithms, it is clear that the relationship of digital platforms and content extends well beyond mere hosting. Their influence is increasingly being recognised but the appropriate ways to deal with it have yet to be invented.
If we were talking about a couple of geeks in a garage experimenting with algorithms, there would not be much of a freedom of expression issue to discuss. However, particularly with Facebook, the sheer scale of the platform has to be considered. It is a simple fact that Facebook now plays a major role in the way a growing part of the population interacts with the world. It has become an important source of news for many people. For media companies that are undergoing complex digital transformations, social media platforms are the place to be in order to reach the audience. Recent studies conclusively show that Facebook and other digital giants exert massive impact over the public sphere.
Whether they want to acknowledge it or not, Facebook and a few other multinational corporations are in a dominant, even quasi-monopoly, position which allows them to decisively influence the flow of information and ideas, a flow which irrigates crucial public debates. As they select the content to push forward, they actively contribute to the visibility of certain elements of news, entertainment, information and ideas. Beyond the discussion of the ‘bubble effect’, the influence of tech giants upon the agenda and discussion of public affairs has grown to a significant degree. Intentionally or not, these organisations are in a position to influence the public agenda, trends in public opinion, and the topics and arguments of public debates.
In the current context, the right of individuals to impart and receive information and ideas in the online environment is to a large degree determined by the actions of social media giants. This explains why the question of a possible positive duty for the State to intervene must be raised and why the debate needs to move beyond the liberty of corporations to conduct their businesses.
In the course of the recent protests against the removal of the photograph of the young Vietnamese victim of war, Facebook has been compared to a publisher, and to an editor. Can we look at traditional publishers, such as the print media, as a source of inspiration to design the appropriate approach to the influence of social media giants over the public debates and the online flows of information and ideas? In certain countries, self-regulation mechanisms have contributed to holding the press accountable to the public.
What sort of self-regulatory framework might we create to hold social media giants accountable for their influence on the public debates? And should self-regulation prove to be insufficient, what form of public intervention could be necessary to achieve an effective protection of freedom of expression?
This is a learning process of great importance for democratic societies. Civil society, academia, corporate actors and others should collaborate on analysis and research to build a better understanding of how to address the impact of social media giants on civic space, media pluralism and the diversity of content. Discussions should also focus on the transparency of content selection and the development of appropriate remedies, possibly including algorithmic solutions, that will ensure users’ exposure to a real diversity of content.
When Facebook decided that they could not bear to host the picture of a naked young girl burnt by napalm, an intense public mobilisation brought them to reason, and a photograph of historical significance was rescued from digital oblivion, or at least social media oblivion. But what happens when a less popular message is removed from the social networks? What happens when the content removed by Facebook does not receive the unanimous support of high-ranking officials?
The conversation on how to protect freedom of expression on social media has only just begun, and it is an important one. So please contribute… and share this blog on your favourite social media platform!