YouTube Community Guidelines: Analysis against international standards on freedom of expression

In this analysis, ARTICLE 19 reviews the compatibility of YouTube’s Community Guidelines with international standards on freedom of expression. Our analysis is based on the YouTube Community Guidelines as accessed in August 2018.

YouTube’s Community Guidelines are divided into several sections, including nudity or sexual content; hateful content; harassment and cyberbullying; threats; privacy; child safety; harmful and dangerous content; violent or graphic content; spam, misleading metadata and scams; copyright; impersonation; and additional policies. The latest version of the Community Guidelines is generally easy to navigate and appears to contain more detail than previous versions, including a new Harassment and Cyberbullying policy, which is a welcome development. However, our analysis shows that YouTube’s Community Guidelines fall below international standards on freedom of expression in a number of areas, including its content policies on ‘terrorism’ and ‘harassment and cyberbullying’ and its complaint mechanisms.

ARTICLE 19 encourages YouTube to bring its Community Guidelines in line with international human rights law and to continue to provide more information about the way in which those standards are applied in practice.

Summary of recommendations

  1. YouTube should set out in more detail the factors it relies on in assessing ‘hate speech.’ In addition, it should provide case studies or more detailed examples of the way in which it applies its policies on ‘hate speech’;
  2. YouTube should further develop its ‘hate speech’ policy to differentiate between different types of ‘hate speech’;
  3. YouTube should align its definition of terrorism and incitement to terrorism with that recommended by the UN Special Rapporteur on counter-terrorism and human rights. In particular, it should avoid the use of vague terms such as ‘celebrate’ or ‘promotion’ of terrorism;
  4. YouTube should give examples of organisations falling within the definition of ‘terrorist’ organisations. In particular, it should explain how it complies with various governments’ designated lists of terrorist organisations, particularly in circumstances where certain groups designated as ‘terrorist’ by one government may be considered legitimate (e.g. freedom fighters) by others;
  5. YouTube should provide case studies explaining how it applies its ‘terrorism’ standards in practice;
  6. YouTube should elaborate its policy on nudity and sexual content, including giving clearer examples of the types of content that are likely to be removed under the policy;
  7. YouTube should explain what constitutes sufficient information for the purposes of providing context under its Nudity and Graphic Content policies. In practice, YouTube should not place too high a burden on users to provide contextual information. In particular, the absence of contextual information should not lead to automatic removal of content that may otherwise be legitimate under international standards on freedom of expression;
  8. YouTube should define what constitutes a “malicious” attack and explain what factors are taken into account to distinguish “offensive” from “abusive” content. It should also consider adding a reference to causing “alarm or distress” in its definition of harassment. Harassment should be more clearly distinguished from bullying;
  9. YouTube should provide exceptions to its Harassment and Cyberbullying policies so as to protect freedom of expression, in particular legitimate criticisms that may be deemed offensive by the individuals concerned;
  10. YouTube should provide examples or case studies of how its Harassment and Cyberbullying policy is applied in practice;
  11. YouTube should explain the relationship between its Harassment and Cyberbullying policy and its Hate Speech policy where appropriate;
  12. YouTube’s policy on Threats should make clear that threats of violence must at least be credible;
  13. YouTube should clarify what falls within “encouragement” of “dangerous” or “illegal activities” in its Harmful and Dangerous Content policies;
  14. YouTube should provide more examples of the way in which it applies its policies on Threats and Harmful and Dangerous Content;
  15. YouTube should make reference to the more detailed criteria developed, inter alia, in Principle 12 of the Global Principles on the Protection of Freedom of Expression and Privacy as part of its assessment of privacy complaints. It should also provide examples or case studies of the way in which it applies those standards in practice;
  16. YouTube should explain more clearly how its policies on spam and “deceptive practices” are related to the broader policy debates on ‘fake news’ or the dissemination of false information;
  17. YouTube should be more transparent about the extent to which it might remove “false information” or “fake accounts” in practice;
  18. YouTube should explain what it considers to be an “authoritative” source of news and how its algorithm promotes such sources;
  19. YouTube should ensure that its appeals process complies with the Manila Principles on Intermediary Liability, particularly as regards counter-notices and the giving of reasons for actions taken;
  20. YouTube should provide disaggregated data on the number of appeals filed and their outcome in its Transparency Report;
  21. YouTube should be more transparent about its use of algorithms to detect various types of content, such as ‘terrorist’ videos, ‘fake’ accounts or ‘hate speech’;
  22. YouTube should provide more details about the members of its Trusted Flagger Program.