UK: Online Safety Bill is a serious threat to human rights online


Image by: Priscilla du Preez, Unsplash

ARTICLE 19 remains concerned about the Online Safety Bill, which is currently being debated in the UK Parliament. The Bill is an extremely complex and incoherent piece of legislation that will undermine freedom of expression and information, privacy and the rule of law, and it will ultimately be ineffective in achieving its stated goal of making the Internet a safer place. We urge UK legislators not to shy away from the sweeping overhaul that this Bill requires in order to fully protect all human rights online.

The long-awaited Online Safety Bill, published by the UK Government on 17 March 2022, progressed to the second reading last week. The Bill is one of the most far-reaching attempts to date to regulate how tech companies deal with users’ content on their platforms. It is a mammoth piece of legislation comprising 213 pages, plus 126 pages of Explanatory Notes. Put simply, the Online Safety Bill applies a duty of care to providers of user-to-user services and search services that have links to the United Kingdom. It requires regulated services to perform risk assessments and to adopt mitigation measures (‘safety duties’). The specific scope of these duties varies significantly depending on the nature of the service and the nature of the content.

Some of the most consequential duties established by the Online Safety Bill (the Bill) include the following:

  • Duty to address illegal content: All user-to-user services and search services are required to take proportionate steps to mitigate and effectively manage the risks of harm caused by different categories of illegal content. The safety duties to tackle these risks include designing processes to prevent individuals from encountering illegal content; minimising the length of time illegal content is present; and swiftly taking down such content.
  • Duty to address content harmful to children: User-to-user services and search services that are ‘likely to be accessed by children’ have an additional duty to protect children from content that is legal but considered harmful to them, by preventing children from encountering such content.
  • Duty to address content harmful to adults: Providers of so-called ‘Category 1 services’ (currently described in the Government factsheet as those user-to-user services ‘with the largest audiences and a range of high-risk features’) have an additional duty to protect adults from content that is legal but harmful to them. Mitigation measures may include the taking down of such content, restricting users’ access to it, or limiting its recommendation and promotion.

ARTICLE 19 recognises that the largest platforms should be held to account for their continued shortcomings in upholding human rights online. We consider that many of these companies operate on business models that present a threat to freedom of expression and information – they are not conducive to healthy public debate and often silence minority voices. However, throughout the legislative process, we – together with other civil society actors – have warned that the Online Safety Bill fails to effectively address these threats to human rights.

ARTICLE 19’s key concerns with the Bill

Our main concerns about the Bill are:

1. The Bill fails to address the problematic business model of platforms

ARTICLE 19 has repeatedly raised concerns that the business model of tech companies raises challenges in terms of power imbalances and the protection of human rights. Their business model is based on advertising and monetising users’ attention, and as a result, the companies often amplify radical, false or unpleasant content.

The fundamental flaw of the Bill, therefore, lies in its exclusive focus on content moderation and its complete failure to address the business model. In fact, if passed in its current form, the Bill would likely further consolidate the market power of the biggest online platforms and have a chilling effect on freedom of expression. We believe that solutions based on transparency, data protection and sound competition policies, including the unbundling of hosting from content curation and interoperability of large platforms, would be far more effective in making the Internet safer for children and adults in the UK.

2. The Bill outsources the decisions on illegality to online platforms

The Bill requires companies to assess and decide whether their users’ speech is legal or not. This is deeply problematic, as only independent judicial authorities should have the power to make such a determination. In addition to the legitimacy concerns of outsourcing decisions on the legality of users’ speech to private actors, we note that in the majority of cases these assessments are extremely complex and context-dependent and should therefore be made by trained individuals. The reality, however, is that online platforms deploy algorithmic moderation systems, such as automated hash-matching and predictive machine learning tools, to conduct content moderation. As these technologies are currently not advanced enough (and may never be) to distinguish legal from illegal content in a reliable manner, they routinely misclassify legal content as illegal and remove vast amounts of legitimate content.
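To illustrate why these systems err, below is a minimal, hypothetical sketch of hash-matching moderation in Python; the blocklist, thresholds and function names are illustrative and not drawn from any platform’s actual implementation.

```python
import hashlib

# Hypothetical blocklist of hashes of known illegal material, of the
# kind supplied to platforms by third-party clearing houses.
# The entry below is a placeholder, not a real hash.
KNOWN_ILLEGAL_HASHES = {"0" * 64}

def moderate(upload: bytes) -> str:
    """Return 'remove' if the upload matches a known-illegal hash."""
    digest = hashlib.sha256(upload).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        return "remove"
    # An exact hash misses copies altered by even a single byte, so
    # real systems fall back on perceptual hashes or classifier scores
    # with similarity thresholds; that is where legal content starts
    # to be swept up as false positives.
    return "allow"

print(moderate(b"an entirely lawful post"))  # -> allow
```

Neither approach understands context: a match, exact or approximate, cannot tell reporting, satire or evidence-gathering apart from a prohibited use of the same material.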

3. The Bill requires censorship of protected speech

The Bill, like its draft version, seeks to impose an obligation on large platforms to take down and restrict access to content that is entirely legal, but considered ‘harmful’. It is extremely disappointing that despite strong opposition from ARTICLE 19 and other human rights organisations, as well as the explicit recommendation of the Joint Committee on the Draft Online Safety Bill, the government has not abandoned the concept of ‘legal but harmful’ content. We remind the UK Government once more that legal speech is protected speech and that legislation requiring online platforms to censor legal speech fails to comply with international freedom of expression standards.

The problem is further exacerbated by the fact that the definitions in the Bill are so vague that they fall short of the legality requirement under international human rights law. For instance, ‘content that is harmful to adults’ under the Bill includes content ‘of a kind which presents a material risk of significant harm to an appreciable number of adults in the United Kingdom’ (section 54). We also note that the Government has removed the reference, found in the draft version, requiring that the potentially impacted adult be of ‘ordinary sensibilities’. That reference provided an imperfect but at least somewhat objective standard. It is near impossible for users to understand whether and how their speech could cause harm, particularly if they do not know their audience. Coupled with the heavy sanctions regime the Bill seeks to introduce, companies can be expected to opt for a zero-risk strategy and remove vast amounts of legal content whenever there is even the slightest chance that it could be considered harmful to an undefined number of adults with subjective sensitivities.

4. The Bill weakens encryption and anonymity online

In practice, online platforms will only be able to comply with their duty to prevent individuals from encountering a wide range of content if they monitor all content on their platforms. Indeed, the Bill allows Ofcom to impose a ‘proactive technology’ requirement on platforms to identify and remove any kind of illegal content or content that is harmful to children (section 116). This approach stands in stark contrast to the one adopted in many other jurisdictions, including the EU, which have opted for an explicit prohibition of general monitoring (a prominent example is the EU E-Commerce Directive; the Digital Services Act currently being negotiated in the EU is also expected to contain a general monitoring prohibition).

The general monitoring obligation effectively imposed by the Bill is all the more problematic because the Bill’s scope covers not only public but also private channels (section 2), such as WhatsApp, Signal or Telegram. In addition, the Bill explicitly gives Ofcom the power to order a provider of a user-to-user service to use ‘accredited technology’ to identify child sexual exploitation and abuse (CSEA) content – whether such content is communicated publicly or privately (section 103(2)(b)). The Bill’s complete failure to make any meaningful distinction between the requirements on public platforms and those on private messaging services means that there is a real risk that offering end-to-end encryption will constitute a violation of the Bill.
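The technical tension is easy to see: in an end-to-end encrypted service, the provider never holds readable content to scan. Below is a minimal, hypothetical sketch using the third-party Python cryptography package; a real messenger uses key-agreement protocols rather than a shared symmetric key, but the provider’s position is the same.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In an end-to-end encrypted messenger, key material lives only on the
# users' devices. Symmetric Fernet stands in here for simplicity.
device_key = Fernet.generate_key()

ciphertext = Fernet(device_key).encrypt(b"a private message")

# The provider stores and relays only ciphertext, so any server-side
# scanning technology has nothing intelligible to inspect:
print(ciphertext)  # opaque bytes, not the message

# Only a device holding the key can recover the content:
print(Fernet(device_key).decrypt(ciphertext))  # b'a private message'
```

Complying with a scanning order would therefore require either weakening the encryption itself or scanning on users’ devices before encryption, both of which undermine the guarantee that only sender and recipient can read a message.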

The Bill also attempts to curb anonymity online. For instance, it requires larger platforms (Category 1 services) to introduce optional identity verification for all users and to give adult users the option to filter out any content from unverified accounts (section 14(6)). While we do not dispute that there are individuals who use anonymous accounts for harmful purposes, creating such a two-tier system is deeply problematic. Both encryption and anonymity tools have become vital for individuals to exercise their right to freedom of expression and to receive and impart information. This is particularly true for human rights defenders, journalists, whistleblowers, victims of domestic abuse or individuals from minority groups, whose participation in the public debate may well be restricted if the Bill is adopted in its current form. Such individuals should not have to choose between keeping their identity safe and being able to freely exercise their freedom of expression and information rights.

5. The Bill contains insufficient safeguards for freedom of expression

The Bill introduces a set of special provisions that purport to counter the significant risks to freedom of expression it creates. We note that none of these protections provides a meaningful counterbalance to the safety duties imposed by the Bill.

First, all services have a duty to ‘have regard to the importance’ of protecting users’ freedom of expression and privacy rights (sections 19 and 29) when deciding on, and implementing, safety measures and policies. When contrasted with the more robust safety duties established in the Bill (and the heavy sanctions regime enforcing compliance with them), this wording is insufficient to offer meaningful protection and merely creates the appearance of a balance being struck in the Bill. In fact, the Bill goes on to specify that if service providers adopt the ‘relevant recommended measures’ to comply with their safety duties (as recommended by Ofcom), they are also considered to have complied with their freedom of expression and privacy duties. This removes any incentive for a company to duly consider the impact of its safety measures on freedom of expression and privacy and will likely reduce this duty to a box-ticking exercise.

Second, the Bill exempts news publisher content, that is, content generated by a ‘recognised news publisher’ or content that reproduces or links to the full version of an article originally published by a recognised news publisher. To qualify as a recognised news publisher under the Bill, a publisher must hold a broadcasting licence under the Broadcasting Act 1990 or 1996 and publish news-related material in connection with the activities authorised under that licence. Alternatively, a publisher must meet a number of criteria, such as having a business address in the UK and publishing news-related material (subject to editorial control and in accordance with a standards code) as its principal purpose (section 50). It is notable that foreign news publishers will likely not be able to avail themselves of that exemption.

As we noted in our review of the draft Bill, we are concerned about carve-outs for established media actors that come at the expense of citizen journalists who do not fulfil the exemption criteria, even though they may engage in vital journalistic activity. In other words, this leads to different standards in which the speech of some actors is more valued than others.

Third, there is a set of additional duties on larger platforms (Category 1 services) to protect so-called ‘content of democratic importance’ (section 15) as well as ‘journalistic content’ (section 16) when making content-related decisions.

  • Platforms will be required to apply processes designed to ensure that the ‘importance of the free expression of content of democratic importance’ is considered when making decisions on whether to take the content down, restrict access to it or take action against a user of the service. ‘Content of democratic importance’ is further defined as news publisher content or regulated (meaning user-generated) content that ‘is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom’. The Bill further specifies that these processes need to be applied in the same way to ‘a wide diversity of political opinion’.
  • The Bill also provides for a separate set of duties to protect ‘journalistic content’ defined as news publisher content or user-generated content that is produced for the purposes of journalism and is UK-linked. With respect to such content, platforms will be required to create a dedicated and expedited complaints procedure.

While it is laudable that in this instance the Bill does not limit the notion of journalistic content to traditional media – in theory, any user-generated content is capable of being classed as journalistic – the protections for ‘journalistic content’ and ‘content of democratic importance’ are flawed and will not make any meaningful contribution to the protection of free speech. For instance, it is unclear what exactly is capable of contributing to democratic political debate or why such content qualifies for special consideration over other types of protected speech. Furthermore, having automated systems decide whether content contributes to democratic political debate or qualifies as journalistic is as unlikely to produce satisfactory results as the illegality assessment. In fact, we fear that in practice only content published by established journalists or politicians will be considered to fall within these categories, focusing on the nature of the speaker rather than the content of the speech. The special reference in these provisions to content published by ‘news publishers’ reinforces those concerns.

Overall, ARTICLE 19 notes that the carve-outs for ‘journalistic content’ and content of ‘democratic importance’ in the Bill are indicative of the central problem with the proposed regulation: that it will have a serious chilling effect on freedom of expression. The proposal could also go against international freedom of expression standards on ‘hate speech’. In particular, we note that under international freedom of expression standards on incitement, the position and influence of the speaker is one of the key factors courts have to consider when assessing whether certain speech reaches the level of incitement to hatred. In our view, it would be highly problematic if social media platforms were asked to remove hateful content posted by ordinary users but had to protect the same content if published, for example, in the Daily Mail or posted by a politician.

We recall that the protection of speech is not a privilege and should not depend solely on the nature of the speaker. Anyone should be able to engage in public debate and heated exchanges of views. Rather than proposing carve-outs for ‘journalistic content’, legislators should rethink the entire scope of the proposed Bill.

6. The Bill introduces a number of deeply flawed communication offences

The Bill also introduces new communications offences directed at online users (Part 10 of the Bill), implementing some of the Law Commission recommendations from its ‘Modernising Communications Offences’ report. These include:

  • The harmful communications offence (section 150), which criminalises sending a message if there is a ‘real and substantial risk’ that said message would cause harm (defined as psychological harm amounting to at least serious distress) to a ‘likely audience’, and the sender intended to cause such harm, and there is no ‘reasonable excuse’ (namely the intention to contribute to a matter of public interest) for sending the message.
  • The false communications offence (section 151), which outlaws sending a message that the sender knows to be false, that the sender intends to cause ‘non-trivial psychological or physical harm to a likely audience’ and for which the sender has no ‘reasonable excuse’.

The wording of both these offences is much too broad to comply with international standards on freedom of expression. We further note that criminalisation should be reserved for the most serious forms of speech-related crime, and that criminalising speech that could cause psychological harm, without any requirement of actual harm to the victim, is extremely problematic. Indeed, the harmful communications offence can effectively criminalise speech merely for being offensive or provocative. The false communications offence, for its part, goes one step further: it does not even include a likelihood of ‘harm’ occurring, and it will be up to the prosecuting authorities to define what ‘non-trivial psychological or physical harm’ means. The Bill exempts ‘recognised news publishers’ from liability under both the harmful and the false communications offences – again making an unjustifiable distinction between traditional media and other actors.

In addition, we believe that criminalising malicious disinformation through a false communication offence is misguided. Such disinformation is in fact more likely to come about as a result of state-sponsored operations. Concerns about widespread misinformation and conspiracy theories would be better countered by promoting high-quality information and digital media literacy as well as diverse, independent media sources to ensure that online users hear a plurality of political or scientific views.

In ARTICLE 19’s response to the UK Law Commission’s consultation on the reform of communications offences, we already highlighted these serious concerns and invited the Law Commission to completely rethink the scope of the proposed offences. We are disappointed to see that these offences have now been introduced in the Online Safety Bill with almost identical wording.

What is more, in combination with the duty of care, these communications offences will fall within the ‘illegal content’ category under the Bill. Platforms will therefore be required to take down content merely on the basis that it might cause harm to a theoretical audience.

7. Disproportionate sanctions are likely to chill freedom of expression

The Bill establishes severe sanctions for online platforms that fail to comply with their duties under the Bill. Specifically, Ofcom will have the power to fine companies up to ten per cent of their annual global turnover and block UK users from accessing non-compliant platforms. In addition, senior managers may be held criminally liable for the failings of their company, particularly when it comes to responding to Ofcom’s information and data requests.

The severity of these sanctions will provide a strong incentive for companies to over-censor their users to avoid any risk of breaching the law. Given how complex the Bill is and how vaguely defined some of its duties and criminal offences are, any online platform will be legally advised to reduce its liability exposure and simply remove controversial content with little regard for users’ human rights. In addition, blocking a website or service because of its failure to comply with the Online Safety Bill would most likely mean blocking entirely legal and therefore protected content, penalising users for the failure of the platform on which they chose to express themselves. Such a sanction would therefore almost always constitute a disproportionate measure under international freedom of expression standards.

8. The Bill gives too much power to the Secretary of State and endangers the independence of Ofcom

The Bill gives the Secretary of State extraordinarily broad powers over the implementation of the Bill. This includes a high level of control over Ofcom. For instance:

  • The Secretary of State is to define priority content considered harmful to children and adults in secondary legislation (section 53). While secondary legislation needs to be put before Parliament, we are concerned that in practice this might be nothing more than a rubber-stamping exercise without the scrutiny required.
  • Ofcom will be required to submit its code of practice to the Secretary of State. The Secretary of State has the power to direct Ofcom to modify the draft on vaguely-defined grounds such as ‘for reasons of public policy’ (sections 39 and 40).
  • The Secretary of State has the power to set out a statement of the Government’s strategic priorities in relation to online safety matters, which Ofcom will have to consider when carrying out its online safety functions (section 78).
  • The Secretary of State may provide guidance about how Ofcom should exercise its powers (section 147).

We note that the Government has once again ignored one of the key recommendations of the Joint Committee on the Draft Online Safety Bill, namely to constrain the Secretary of State’s powers under the Bill. The Bill remains far too dependent on the discretion of the Secretary of State. The degree of government control over the UK’s supposedly independent regulator, Ofcom, is unprecedented. The Secretary of State’s ability to interfere with Ofcom’s regulatory oversight completely undermines the idea that Ofcom will be independent in the performance of its duties.

ARTICLE 19’s recommendations

As the Bill now undergoes the usual legislative process, with readings in both Houses of Parliament, ARTICLE 19 calls on all legislators to address the concerns raised by ARTICLE 19 and other human rights organisations that promote freedom of expression and privacy rights online. As it stands, the Online Safety Bill will not deliver on its promise of online safety and is likely to have profound implications for freedom of expression, privacy and other human rights online.

Bringing the Online Safety Bill into compliance with international human rights standards will require substantial amendments. We stand ready to engage on further revisions of the Online Safety Bill to ensure that the final version fully meets freedom of expression standards.