UK: House of Lords must reject the Online Safety Bill

The Online Safety Bill will enter its next legislative stage in the House of Lords with its second reading on 1 February 2023. The proposed legislation, which has been several years in the making, continues to pose an unprecedented threat to free expression online. We urge the House of Lords to properly scrutinise the Bill and give it the overhaul it requires to bring it in line with human rights standards.

According to the Government, the Online Safety Bill (the Bill) ‘delivers the government’s manifesto commitment to make the UK the safest place in the world to be online while defending free expression’. For years, ARTICLE 19 and other human rights organisations have emphasised that the opposite is true: the Bill will not make the UK the safest place in the world to be online, and it will certainly not protect freedom of expression and other human rights online.

Despite the countless changes made to the Bill over the years, the Government and the House of Commons have not yet taken the comprehensive steps necessary to bring the Bill in line with international human rights standards. If anything, the Bill has become even more complex and incoherent. Numerous amendments have responded to the political concerns of the day, with little regard for whether they will actually make the internet safer or protect people’s human rights online. The vast majority of ARTICLE 19’s key concerns with the Bill thus remain. These include the Bill’s focus on censorship instead of addressing the problematic business model of some of the biggest tech companies; the outsourcing of decisions on the legality of content to private entities; and the requirement to restrict protected speech.

ARTICLE 19 recognises that some of the most recent changes have brought certain improvements to the Bill, such as the removal of the ‘legal but harmful’ provision as it applied to adults and the removal of the harmful communications offence. Despite these changes, threats to freedom of expression remain.

In addition to the threats we identified in our previous statement, we are particularly concerned about the following issues:

  1. The ‘legal but harmful’ clause is replaced by State enforcement of terms of service

In a widely publicised change to the Bill, the Government has removed the imposition of a duty of care to shield adult users from ‘legal but harmful’ content. The required mitigation measures ranged from taking down such content to restricting users’ access to it or limiting its recommendation and promotion. While ARTICLE 19 welcomes this step – we had warned that a requirement to censor legal speech fails to comply with international freedom of expression standards – this obligation has been replaced by a new and equally problematic provision.

Platforms are now required to enforce their terms of service – or face sanctions by Ofcom (clause 65 of the Bill). These terms of service often go well beyond restrictions permitted under international human rights standards, enabling companies to censor many categories of lawful speech that they – or their advertisers – may consider harmful, inappropriate or controversial. In that sense, Ofcom would still have enforcement powers to ensure that ‘legal but harmful’ speech is restricted; the only difference is that it is now up to each company to decide which types of lawful speech are suppressed online.

The risk of over-removal of protected speech thus remains in place. Experience shows that companies seeking to limit their liability exposure will remove vast amounts of legal content if there is even the slightest chance that it could be in breach of their terms of service. Increased liability exposure also strengthens platforms’ incentives to rely on automated content moderation tools. Reliance on such tools routinely leads to over-removal of legitimate content, as these technologies are unable to understand nuance and context in users’ speech or to correctly identify speech that may be illegal or in breach of platforms’ terms of service.

  2. New sanctions regime increases risk of censorship

The risk of over-removal of protected speech is further exacerbated by the toughened sanctions regime in the current version of the Bill. Criminal liability has been expanded for senior managers of regulated entities, who now face a prison sentence of up to two years if they fail to comply with the child protection duties under the Bill. Platforms are required to mitigate and manage risks related to categories of harm and to prevent children from encountering so-called ‘primary priority content that is harmful to children’. These categories will not be defined by Parliament but by the Secretary of State, who will have the power to expand the list without adequate legislative oversight. As we have warned, the threat of criminal liability, and the vague concepts and definitions that underlie the child protection duties under the Bill, increase the risk of platforms relying on automated tools and removing protected speech in a precautionary and censorious manner.

  3. The Bill threatens encryption and secure communication

One of the most worrying aspects of the Bill remains that, despite strong opposition from civil society groups, Ofcom could require internet services to monitor all user-generated content, including all individuals’ private messages. The Bill explicitly gives Ofcom the power to order a provider of a user-to-user service to use ‘accredited technology’ to identify child sexual exploitation and abuse (CSEA) or terrorism content.

First, the possibility of imposing a general monitoring obligation on service providers stands in stark contrast to the approach adopted in many other jurisdictions, including the European Union. The EU recognised the dangers such an obligation poses to human rights online and, in the Digital Services Act, explicitly prohibited Member States from mandating general monitoring.

Second, as laid out in our detailed joint second reading briefing for the House of Lords, clause 110 also poses a significant threat to encryption. When it comes to the identification of CSEA content, Ofcom may mandate the use of ‘accredited technology’ whether such content is communicated publicly or privately. The only way for service providers that offer end-to-end encryption to comply with such duties would be to remove or weaken encryption by introducing scanning technology onto their platforms.

As we acknowledge in our briefing, it is undisputed that the serious human rights issues of CSEA need to be tackled. However, legislators tend to forget that undermining encryption may also pose a threat to children’s rights, including their right to privacy, and expose them to harm. Indeed, a recent report by Child Rights International Network and Defend Digital Me found that there are vital ways in which encryption can also protect children from violence, promote their privacy and encourage their expression, particularly for children who are marginalised and vulnerable. The UN Committee on the Rights of the Child has also noted that measures enabling the detection and reporting of CSEA content must be ‘strictly limited according to the principles of legality, necessity and proportionality’ and suggested further that routine and indiscriminate digital surveillance of children may not be necessary and proportionate (CRC/C/GC/25, 2 March 2021, paragraph 70).

It should also give pause that a recent legal opinion commissioned by Index on Censorship found that ‘[t]he provisions in the Online Safety Bill that would enable state-backed surveillance of private communications contain some of the broadest and most powerful surveillance powers ever proposed in any Western democracy’.

  4. ‘Disinformation’ continues to be criminalised

The list of communications offences originally proposed in the Bill has also undergone some changes. On the positive side, the ‘harmful communications offence’ – which sought to criminalise the sending of a message that risked causing harm to a ‘likely audience’ without there being a ‘reasonable excuse’ – has been removed from the Bill. What has remained, however, is the so-called ‘false communication offence’ (clause 160 of the Bill), which criminalises sending a message that the sender knows to be false, that the sender intends to cause ‘non-trivial psychological or physical harm to a likely audience’ and for which the sender has no ‘reasonable excuse’. In essence, the Bill seeks to criminalise disinformation. Should it come to pass, it would not only pose a serious threat to free speech in the UK. The United Kingdom would also join a list of States, such as Turkey and Tunisia, that have recently introduced similar offences with the intention of limiting civic space and cracking down on dissidents; the Bill may well help to legitimise those governments’ actions and render advocacy against them more difficult.

How can the House of Lords protect free expression online? 

ARTICLE 19 believes that this problematic Bill will harm not only human rights but also the digital economy. The obligations imposed on companies by the Bill are likely to stifle competition and innovation in the UK digital market. In fact, the Bill will further increase the market share of the very few platforms it is supposed to rein in.

Here, once again, the Bill’s approach stands in stark contrast with that adopted by the European Union in the Digital Services Act and the Digital Markets Act. While those regulations could also have been more ambitious, their focus on transparency, procedural rights for users, systemic risk assessments and market conditions is far ahead of the current state of discussions in the UK.

The House of Lords now has the important task of scrutinising the Bill. The aspects highlighted here and in our last statement are only some of many in a Bill that remains fundamentally flawed and that will fail to ensure users’ safety online. It is difficult to see how anything but a complete rejection of the Bill could safeguard freedom of expression. Given the fundamental importance of adopting an effective and rights-respecting regulatory regime for digital platforms, Peers should not shy away from this step. At the very least, we urge them to carefully consider and address the key concerns raised repeatedly by ARTICLE 19 and other human rights organisations focusing on freedom of expression and human rights online.