Internet Intermediaries: Dilemma of Liability Q&A

ARTICLE 19’s policy brief ‘Internet Intermediaries: Dilemma of Liability’ examines the challenges that intermediary liability regimes pose for the exercise of freedom of expression online.

We also suggest solutions to deal with those challenges. In particular, we propose a model for intermediary liability that would be compatible with international standards on freedom of expression.

Who are internet intermediaries?

‘Internet intermediaries’ is a broad term for the entities that enable people to connect to the Internet and transmit content. There are a number of different types of intermediaries, including internet access providers, web hosting providers, social media platforms and search engines. Intermediaries are distinct from ‘content producers’: the individuals or organisations responsible for producing information and posting it online.

Why should we care about what the law says about intermediaries?

Given the amount of potentially unlawful or harmful content that is transmitted through their services, and their technical capacity to control access to that content, internet intermediaries are under increasing pressure from governments and interest groups to act as ‘gatekeepers’ of the Internet. This pressure often takes the form of laws that hold intermediaries financially or criminally responsible if they fail to filter, block or remove content deemed illegal. The result is private companies censoring content on behalf of the state without appropriate safeguards or accountability mechanisms. But that’s not the end of the story.

But did you know that intermediaries censor content they do not like anyway?

Exactly. Some intermediaries take down content under their terms and conditions (T&Cs). This can be a problem for freedom of expression because T&Cs frequently do not respect international standards on freedom of expression. Intermediaries also fail to respect due process when doing this: there is often a lack of clear guidelines for users to refer to, and an absence of appeal mechanisms through which users can challenge intermediaries’ decisions to remove their content. ARTICLE 19 deals with this issue in a separate policy document.

How does imposing liability upon intermediaries threaten the exercise of freedom of expression online?

Imposing liability on intermediaries is deeply problematic for several reasons:

  • Experience shows that intermediaries tend to err on the side of caution and remove or block content that is perfectly lawful.
  • Intermediaries remove content without a court or other independent body having determined whether the content complained of is legal.
  • There is also no transparency in their decision-making and no right of appeal.

What are the different models of intermediary liability?

There are several models around the world:

  • Strict liability – for example in China, where intermediaries are required to actively monitor content or face sanctions.
  • Safe harbour regime – for example in Singapore, Ghana, Uganda, South Africa and the EU, where intermediaries are effectively immune from liability if they comply with ‘notice-and-takedown’ procedures.
  • Near-absolute immunity from liability for content produced by others – ARTICLE 19’s preferred model – for example in the US and Chile, where the intermediary is not responsible for content produced by others so long as it does not intervene in that content. The intermediary is only required to remove content when ordered to do so by a court or other independent adjudicatory body (the ‘court-based model’).

Why does ARTICLE 19 prefer a court-based model?

ARTICLE 19 believes that as a matter of principle, intermediaries should not be held responsible for content produced by others. The fact that intermediaries have the technical means to prevent access to content does not mean that they are the best placed to evaluate whether the content in question is “illegal” or not. Such a determination should be primarily a matter for an independent judicial body, and not a private company.

But surely, this must be a very time-consuming and costly process?

Yes. We recognise that a court-based model can be burdensome and costly. This is why we advocate a notice-to-notice model for complaints that do not involve allegations of serious criminality, such as claims of defamation, copyright infringement or invasion of privacy. We explain that process further below.

What is “notice-and-takedown”?

Notice-and-takedown systems encourage intermediaries to remove content once they are put on ‘notice’ by private parties or law enforcement agencies that particular content is allegedly unlawful.

It means that anyone can complain and ask an intermediary to remove a particular statement or piece of content, claiming for example that it is defamatory or an infringement of copyright or privacy. In many countries, this alone is sufficient for the intermediary to remove the content at issue, regardless of whether it is in fact illegal. If the intermediary does not remove the content upon notice, it can be held liable for that content.

Do notice-and-takedown procedures violate freedom of expression?  

Yes. ARTICLE 19 advocates the abolition of notice-and-takedown procedures because they are deeply unfair. The person who originally posted the disputed content is usually not informed that a complaint has been made, nor given an opportunity to respond. Intermediaries usually proceed on the assumption that the content is unlawful and remove it, even though it may be perfectly legitimate. This is especially problematic because intermediaries do not have the independence required to balance the competing interests at stake. They also usually do not give reasons for their decisions to remove content. Finally, internet users whose content has been removed usually have little or no remedy to challenge the takedown.

What is an alternative model?

ARTICLE 19 advocates replacing notice-and-takedown with a notice-to-notice procedure. This would give content producers the opportunity to respond to allegations of unlawfulness before any action is taken. It would also reduce the number of abusive requests, by requiring a minimum of information about the allegations, and provide an intermediate system for resolving disputes before matters are taken to court.

How would a notice-to-notice procedure operate?

A notice-to-notice procedure would go like this:

  • The person who wants to complain about particular content would have to pay a fee and complete a standardised notice explaining, among other things, why that content is unlawful and where it is located.
  • The intermediary would then pass on the complaint to the person identified as the wrongdoer.
  • Upon receipt of the notice, the alleged wrongdoer could then decide whether to remove the material at issue or to dispute the complaint by means of a counter-notice, which would have to be submitted within a reasonable time.
  • If the alleged wrongdoer decides to send a counter-notice, the complainant would have a choice to either drop the complaint or take the matter to court or other independent adjudicatory body.
  • If the alleged wrongdoer fails to respond or file a counter-notice within the required time limit, the intermediary would have a choice to either take the material down or decide not to remove it, in which case it may be held liable for the content at issue if the complaint proceeds to court.

The key feature of this system is that the intermediary is just a conduit between the maker of the content at issue and the person complaining about it. The intermediary is not put in a position where it has to make a decision about the legality of the content immediately upon notice of a complaint. Rather, this system puts the resolution of the dispute primarily in the hands of the maker of the statement and the complainant.
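
For readers who find it easier to follow as code, the workflow above can be sketched as a simple state machine. This is purely illustrative and not part of ARTICLE 19’s proposal; all names, states and response values below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    """Possible states of a complaint under a notice-to-notice procedure."""
    NOTICE_FORWARDED = auto()  # intermediary has passed the complaint on
    CONTENT_REMOVED = auto()   # alleged wrongdoer took the material down
    DISPUTED = auto()          # counter-notice filed; complainant must drop it or go to court
    EXPIRED = auto()           # no timely response; intermediary chooses to remove or keep


@dataclass
class Complaint:
    """A standardised notice: the fee and the minimum information
    requirements are what deter abusive requests."""
    fee_paid: bool
    reason: str    # why the content is said to be unlawful
    location: str  # where the content is found, e.g. a URL
    status: Status = Status.NOTICE_FORWARDED


def handle_response(complaint: Complaint, response: str | None) -> Complaint:
    """The intermediary acts only as a conduit: it routes notices and
    records outcomes, but never judges the legality of content itself."""
    if response == "remove":
        complaint.status = Status.CONTENT_REMOVED
    elif response == "counter-notice":
        # The dispute now lies between the complainant and the content
        # producer, to be resolved by a court or other independent body.
        complaint.status = Status.DISPUTED
    else:
        # No timely response: the intermediary may take the material down,
        # or leave it up and bear the risk if the complaint goes to court.
        complaint.status = Status.EXPIRED
    return complaint
```

Note how the legality judgement never appears in the intermediary’s code path: it either forwards notices or records what the parties did, which is exactly the ‘mere conduit’ role described above.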

What about content that amounts to incitement to violence or child pornography? Should notice-to-notice procedures be used for this too?

No. Notice-to-notice is not appropriate for all types of content. In cases involving allegations of serious criminality, law enforcement agencies should be able to order the immediate removal or blocking of content. However, this power should be clearly laid down in law, with appropriate safeguards. In particular, an order to remove or block content should be confirmed by a court within a specified period of time. Hotlines can also play a useful role in alerting the authorities to suspected criminal content posted online, provided they operate transparently.