Submission to UN Special Rapporteur on freedom of expression on ‘content regulation in the digital age’

ARTICLE 19 has submitted our response to the UN Special Rapporteur on freedom of expression’s call for comments on the issue of ‘content regulation in the digital age’, which will form the subject of the Special Rapporteur’s 2018 report to the UN Human Rights Council. The forthcoming report will be the latest in a series of reports by the Special Rapporteur focusing on the promotion and protection of freedom of expression in the digital age, and addressing the roles and responsibilities of both the State and the private sector.

We consider the Special Rapporteur’s focus on this issue to be extremely timely, in light of ongoing policy and legislative developments in this area. More than ever, we find that the principle of immunity from liability for third-party content is under threat. At the same time, Internet companies are facing considerable pressure from governments, and from their own users, to take proactive action against online harassment, “hate speech”, “fake news”, and terrorist or extremist content appearing on their platforms, and they are removing more content than ever before, including through the use of algorithms.

In our submission, we address risks to freedom of expression posed by developments in, and increased resort to, online content regulation, including in the following areas:

  • Companies’ compliance with state laws, including companies dealing with “legal requests”, i.e. court orders or orders from government agencies/public authorities; companies acting on the basis of their Terms of Service at the request of law enforcement or other ‘trusted flaggers’; and search engine operators dealing with “right to be forgotten” requests.

  • Companies’ self-regulatory initiatives to deal, in particular, with “terrorist” or “extremist” content, as well as “fake news”. Such initiatives include a joint effort by Facebook, Microsoft, Twitter and YouTube to combat “terrorism” online, using a database of “hashes” that allows participating companies to identify and remove digitally fingerprinted content deemed “terrorist” under their Terms of Service from their networks.[i] Several platforms have also taken steps to remove ‘fake accounts’ and reduce the circulation of “fake news”.

  • Global take-down orders, whereby search engines have been obliged to de-list specified URLs containing objectionable material from their search results globally, beyond the territory in which the judicial order was issued.

  • Insufficient reflection, in companies’ Terms of Service, of the interests of users who face particular risks: in particular, a lack of clarity regarding how companies take context, language, religion, culture and politics into consideration in their content moderation practices and guidelines; the effect of overbroad policies, combined with algorithmic bias, in unduly censoring minority and at-risk groups; and some companies’ practice of striking deals with particular governments, with specific risks for human rights defenders, journalists and minority groups.

  • Availability of appeals and remedies, in particular the absence of clear complaints mechanisms for the wrongful removal of content, and means for users to challenge decisions to remove content, as well as the failure to systematically notify users when their content has been removed.

  • Algorithmic decision-making, which is increasingly the primary tool used by Internet companies in online content regulation, often with limited human oversight. Overblocking, bias, lack of transparency and limited opportunities to seek redress are recurrent problems associated with the use of algorithmic decision-making.

  • Lack of transparency regarding whether and why Internet companies notify users of content removals, as well as the failure to include information about content removed on the basis of Terms of Service in companies’ transparency reports.

We recommend that the Special Rapporteur highlight the following in his report:

  • Companies should demonstrate their commitment to respect human rights. As such, they should challenge government and court orders that they consider to be in breach of international standards on freedom of expression and privacy. In practice, this means that, as a matter of principle, companies should:
      • Resist government legal requests to restrict content in circumstances where they believe that the request lacks a legal basis or is disproportionate, including by challenging such orders before the courts;
      • Resist individuals’ legal requests to remove content in circumstances where they believe that the request lacks a legal basis or is disproportionate; and
      • Appeal court orders demanding the restriction of access to content that is legitimate under international human rights law.
  • Moreover, as a matter of principle, companies should resist informal government requests to restrict content on the basis of their Terms of Service. In this regard, companies should make clear to both governments and private parties that they are under no obligation to remove content when a takedown request is made on the basis of their Terms of Service.
  • Companies’ decisions to block or remove content must follow the minimum standards set out in ARTICLE 19’s policy on blocking, filtering, and free speech[ii] and the Manila Principles on Intermediary Liability. In particular, companies’ Terms of Service should comply with international standards on freedom of expression.
  • States should not oblige a search engine operator to remove the results displayed on all of the domain names used by its search engine worldwide. Search engine operators should only be required to de-reference results for searches made from within the State where a national court or data protection authority is satisfied that such a step is necessary and proportionate in all the circumstances.
  • Internet companies should develop clear redress mechanisms for individuals whose content has been taken down in circumstances where that content is legitimate under international human rights law. Such mechanisms must meet a due process threshold as defined by international human rights law. At the very least, companies should notify users that their content has been removed and give them the basic reasons for their decision. Companies should also provide users with an opportunity to challenge those decisions, particularly when the content at issue is lawful under national or international human rights law. These basic due process safeguards should be put in place in practice and made clear in companies’ policies. At a bare minimum, companies should provide an email address to enable individuals to complain about wrongful removals of their content.
  • Automated decision-making should include a sufficient level of human oversight. The level of human intervention and the nature of human input must be made more easily understandable and more accountable.
  • Individuals should have a right not to be subjected to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. In practice, this means that they should have a right to challenge such decisions.
  • Companies should provide information, in disaggregated format, about content removed on the basis of their Terms of Service, whether at the request of governments or private parties (including ‘trusted flaggers’) or proactively by the company itself. Equally, they should publish their internal guidelines for the removal of content.

As we eagerly anticipate the UN Special Rapporteur’s report on content regulation in the digital age, we look forward to continuing to support his work in this important area.

Read the submission

[i] See e.g. Twitter Public Policy, Global Internet Forum to Counter Terrorism, 26 June 2017; or Google, Update on the Global Internet Forum to Counter Terrorism, 4 December 2017.

[ii] See ARTICLE 19, Freedom of Expression Unfiltered: How blocking and filtering affect free speech, 8 December 2016.