UK: ARTICLE 19’s response to Ofcom consultation on the implementation of the Audio-Visual Media Services Directive


On 24 September 2020, ARTICLE 19 responded to Ofcom’s consultation about the implementation of the (Revised) EU Audio-Visual Media Services Directive (AVMSD). The Revised AVMSD introduced a form of regulation of video-sharing platforms (VSPs), under the supervision of national regulatory authorities for audio-visual media. In our response, ARTICLE 19 calls on the UK regulator to define key terms and to ensure both meaningful transparency of content moderation practices and effective redress mechanisms, so as to protect freedom of expression on VSPs.

Background

The Revised AVMSD effectively requires broadcasting regulators to ensure that Video-Sharing Platforms (VSPs), such as YouTube, adopt a series of measures to protect:

  • minors from programmes and user-generated content ‘that may impair their physical, mental or moral development’;
  • the general public from online content inciting violence or hatred directed against a group, or a member of a group, on the basis of certain discriminatory grounds;
  • the general public from child sexual abuse material and from online content inciting acts of terrorism or amounting to offences of racism and xenophobia.

The Directive lists a number of measures considered appropriate, including terms of service in line with the requirements of the Directive, flagging mechanisms to report ‘hate speech’, ‘terrorist’ content and other content harmful to minors, age verification systems, rating systems, parental controls, internal complaints mechanisms for the wrongful removal of content, and media literacy measures.

It is anticipated that the results of this consultation will inform Ofcom’s approach to its potential role as regulator under the government’s Online Harms proposals.

Among other things, the consultation asks which measures have been effective at protecting minors and the general public, and what indicators of potential harm Ofcom should be aware of as part of its ongoing monitoring and compliance activities on VSPs. It also asks how VSPs should balance their users’ right to freedom of expression, and what metrics should be used to monitor this.

ARTICLE 19’s response

In our response, ARTICLE 19 highlights the following:

  • Effectiveness must be defined by reference to its objective: Major platforms (e.g. YouTube) have long had terms and conditions and used measures such as flagging, content removal and reporting mechanisms. Whether or not these measures are considered ‘effective’ depends very much on how effectiveness is defined and assessed. By and large, effectiveness has been assessed by reference to the volume of content taken down. In our view, this is a mistake. The volume of takedowns companies report is also a function of how they write their terms of service: if the definition of ‘harmful’ content is expanded, the volume of removed content is more likely to go up, and so is the likelihood that legitimate content will be removed.
  • ‘Harm’ must be defined: It is unclear what content may be considered ‘harmful’, or harmful by reference to whom. For instance, VSPs could potentially create ‘harm’ to users by violating data protection rules when collecting or processing their data, or by unduly removing the content users want to share on the platform. They could also create harm to business users by dealing with them unfairly. We also warn against the use of terms such as ‘potential’ as opposed to ‘actual’ harm: it only highlights that the regulator has in its sights undefined ‘harms’ that are also highly speculative.
  • Monitoring violations of freedom of expression: The main indicator of wrongful removal is the number of successful appeals. However, it is generally unclear whether appeals are available against decisions made on the basis of filters, and how users can contest a decision when they are given no reasons for it. Moreover, users have little incentive to use appeals mechanisms: appealing creates friction and, in practice, users are unlikely to challenge, for instance, the wrongful removal of ‘terrorist’ content. As a result, it is almost impossible to know how much legitimate content is removed. Over-removal may be addressed to some extent through human rights impact assessments of filters, which could help identify the extent to which filters are biased and remove content from particular groups. Greater transparency would at least help establish the scale of the problem of overzealous enforcement of community standards.
  • Social media councils as a redress mechanism: ARTICLE 19 has been advocating for a new multi-stakeholder mechanism, the Social Media Council (SMC). We believe an SMC could also provide a forum where ‘appropriate measures’ under the AVMSD could be discussed, fine-tuned, assessed or reviewed by representatives of VSPs and all stakeholders. When looking for the best approach to the societal challenges of content moderation, Internet companies cannot be expected, or even encouraged, to take the place of sex educators, therapists, social workers, researchers, media literacy experts, journalists and other voices in society. Because an SMC would enable broad participation from business and civil society, it could be used as a forum to elaborate a common understanding not only of the types of content that ought to be moderated but also of the appropriate and realistic technical approaches to moderation. The SMC would also serve as an appeals mechanism: users would have access to an independent, external body that can decide disputes related to content moderation.

Read the full response