The past year has been a difficult one for Facebook. It seems barely a month goes by without another news story exposing the failings of a platform whose influence and reach have evolved much more quickly than the rules have adapted – including its own. From accusations of facilitating the spread of disinformation during elections, to censoring photography depicting the horrors of war in Vietnam, to enabling the spread of hate speech and violent extremism, Facebook has faced increasing pressure over how it manages information and content on the platform. In light of this long line of troubles, the recent scandal over the mining of personal data from the platform by private consulting firm Cambridge Analytica is not that surprising to those who have been paying attention.
Facebook isn’t the only one under frequent pressure: a number of scandals relating to the suspension of activists’ Twitter accounts, along with the widespread abuse on the platform documented by Amnesty International’s latest #ToxicTwitter campaign, show the range of problems platforms have to navigate, and the lack of transparency around how that is currently being done. We’re increasingly beginning to understand quite how much of our daily lives, views and information we entrust to social media, and yet how little we really understand about the rules they follow.
This month, in the wake of the Cambridge Analytica scandal, Facebook founder and CEO Mark Zuckerberg was answering questions in front of the media and the US Congress. On top of addressing privacy concerns, he acknowledged that Facebook has failed to properly tackle a number of issues including ‘fake news’ and hate speech on the platform. But while apologies and internal audits might offer patchwork responses, there are growing calls for more fundamental change to the way dominant social media companies like Facebook are held accountable and regulated.
In the modern world social media has become fundamental to how we communicate. Last year it was estimated that global social media users numbered over 3 billion, with around 2 billion active Facebook accounts. And despite valid concerns around the power and use of these platforms, they can have an extraordinarily positive effect on freedom of expression, facilitating public debate and strengthening social movements. Social media has the power to make and break political leaders, connect protest movements, and change societal attitudes on issues of equality. In countries where traditional media is more restricted, under the control of governments or corporate agendas, social media often provides a unique space for expression, if not always as free a space as it should be. Such is the power of social media to bring about change that it is considered a threat by repressive governments. Countries like Iran and China have blocked access to certain social media networks in their entirety, while others have engaged in targeted blocking. More commonly, countries use repressive laws to criminalise the posting and sharing of dissenting or controversial content.
It would however be naïve to see social media as a purely positive space. Hate speech and incitement to violence on social media have, for example, been explicitly noted by UN officials investigating recent atrocities against the Rohingya ethnic group in Myanmar. While this shows the problem at its most extreme, it speaks to the depth and complexity of an issue that these companies are currently struggling to manage.
In the wake of the news around Cambridge Analytica, Facebook’s shares plummeted, while thousands of users joined the #DeleteFacebook campaign in protest (a campaign unsurprisingly and vocally supported by rival platforms). However, an exodus from Facebook won’t solve the deeper issues that have developed around the role and regulation of social media. Where one social media giant declines, another will most certainly take its place, and it’s important to remember the vital connection to others, and the tool for information, expression and organising, that social media provides. We therefore can’t simply disconnect, or ignore the power of these platforms around the world. It is time to tackle the tough questions around regulating social media and addressing issues like hate speech in a way that protects privacy and free expression, and allows us to use these tools without relinquishing our rights.
Recently, Mark Zuckerberg acknowledged that it’s no longer a question of whether Facebook and other social media companies should be regulated, but how. In a series of interviews, Zuckerberg noted the need for transparency regulations, but in the next breath stated that Facebook was already taking action on this itself. This is often the company’s reaction to public criticism: communicating its desire to do good, to be better or even to learn from mistakes, but usually making unilateral decisions on what it considers will solve the issue of the day. More often than not, these decisions only add to concerns. This type of unilateral regulation of content by Facebook lacks basic guarantees of transparency, due process, and consideration of the right to freedom of expression. The rules that Facebook applies to regulate content relating to sexual abuse, other kinds of violence, and hate speech had to be leaked to the press before we could learn even a little about how an important part of today’s public speech is being controlled. What Facebook sees as the control of its own private domain in fact has a vast impact on democratic space. Leaving regulation of this space in the hands of a private company’s internal rules or opaque algorithms is therefore only going to lead to further problems.
Balanced data protection and privacy laws and regulations are necessary to protect both privacy rights and freedom of expression, but when it comes to controlling online content, regulation by governments can be a flawed model.
Legislative efforts to deal with social media have often focused on content issues, such as hate speech. Whilst existing laws should normally apply to online communication, including through social media platforms, public policy initiatives that seek to control online content have failed to protect free speech. The legislative approach easily results in vague legal concepts that can translate into abuse, or be accompanied by disproportionate sanctions, which encourage excessive censorship. Germany’s recent law on illegal speech and social media is just one example of the risks to free expression that this sort of instrument creates.
Ultimately, in the world of social media, legislation is heavy machinery, unable to quickly adapt to the ever-changing conditions of online publication and distribution of content. What’s needed, therefore, is a model of regulation that can deal with the myriad of challenges the world of social media creates. However, contrary to recent debate, deciding whether social media platforms are publishers or not should not be a prerequisite to approaching discussion on their regulation. Social media platforms are “a kind of hybrid beast that does not fit into any of the traditional categories”, and as such, an adapted model of self-regulation could provide more flexibility while protecting free speech.
The self-regulation model is often understood to mean leaving things in the hands of private companies, with a varying degree of implicit or explicit pressure from public authorities. However, much like legislation, in practice this sort of system incentivises companies towards excessive removal, and leads to an over-reliance on private companies to protect human rights. The best way to ensure genuine protections and a transparent approach to content regulation is an independent regulator, of the kind used to promote accountability and ethical standards in the traditional print media. Voluntary regulation at the industry level, including the adoption of a charter of ethics and the creation of a body to ensure its application, provides a much more effective, accountable and flexible system for addressing this new challenge.
In order to play this positive role, self-regulatory bodies need to be independent, open to participation from stakeholders and civil society, and transparent and accountable to the public they serve. On the model of a press council, there should be a Social Media Council, which would become the guardian of a new charter of ethics for the online moderation and distribution of content. The Council could receive public complaints, helping to fill the gap in accountability. While the creation of such a Council would be no easy task, with questions around scope, jurisdiction and funding in need of further thought, the benefits that such a system would bring to democracy, to social media users, and in many ways to social media companies themselves are worthy of consideration.
If the latest in a seemingly endless series of revelations about social media’s failings has made anything clear, it’s that we can no longer operate on a cycle of outrage and response when it comes to this essential part of so many of our lives. The power and influence of social media, whether it’s Facebook or Twitter or any number of smaller platforms, is only likely to grow. Before the next scandal hits, we need to rethink how we regulate online content and develop transparency around the governance of these vital spaces for interaction and expression.