Twitter Rules: Analysis against international standards on freedom of expression

In this analysis, ARTICLE 19 reviews the compatibility of Twitter’s Rules, policies and guidelines (as of August 2018) with international standards on freedom of expression.

The Twitter Rules are complemented by a range of policies on issues such as “hateful conduct” and “parody, newsfeed, commentary and fan account,” as well as “General guidelines and policies” covering, for instance, the company’s policy development and enforcement philosophy (together, ‘the Twitter Rules, policies and guidelines’). While the Twitter Rules, policies and guidelines address a wider range of content issues than was previously the case, our analysis shows that they are hard to follow, both in presentation and in application. They also generally fall below international standards on freedom of expression, particularly in relation to ‘hate speech’ and ‘terrorism.’ Although Twitter’s appeals process for the closure of accounts contains a number of positive features, it is unclear whether these policies are consistently applied in practice.

ARTICLE 19 encourages Twitter to bring its Rules, policies and guidelines in line with international human rights law and to continue to provide more information about the way in which those standards are applied in practice.

Summary of recommendations

  1. The Twitter Rules, policies and guidelines should be re-organised and consolidated so that the company’s rules on particular types of content can easily be found in one place. Consideration should be given to making the Twitter Rules, policies and guidelines available in a single consolidated document;
  2. Twitter should make clear when the Twitter Rules, policies and guidelines were last updated and identify which parts were amended;
  3. Twitter’s policies on “hateful conduct” should be clearly presented and more closely aligned with international standards on freedom of expression, including by differentiating between types of prohibited expression on the basis of severity. Importantly, Twitter should provide case studies or more detailed examples of the way in which it applies its “hateful conduct” policies;
  4. Twitter should align its definitions of terrorism and incitement to terrorism with those recommended by the UN Special Rapporteur on counter-terrorism. In particular, it should avoid the use of vague terms such as “violent extremism,” “condone,” “celebrate,” “glorification” or “promotion” of terrorism;
  5. Twitter should give examples of organisations falling within its definition of “violent extremist groups.” In particular, it should explain how it complies with various governments’ designated lists of terrorist organisations, particularly in circumstances where groups designated as ‘terrorist’ by one government may be considered legitimate (e.g. as freedom fighters) by others. It should also provide case studies explaining how it applies these standards in practice;
  6. Twitter should explain in more detail the relationship between “threats,” “harassment,” and “online abuse,” and distinguish these from “offensive content” (which should not be restricted as such). Further, Twitter should provide detailed examples or case studies of the way in which it applies these standards in practice, including with a view to ensuring protections for minority and vulnerable groups;
  7. Twitter should state more clearly that, as a matter of principle, offensive content will not be taken down unless it violates other rules;
  8. Twitter should make more explicit reference to the need to balance the protection of the right to privacy with the right to freedom of expression. In so doing, it should refer to the criteria developed, inter alia, in the Global Principles on the Protection of Freedom of Expression and Privacy;
  9. Twitter should be more transparent and explain in more detail how its algorithms detect ‘fake’ accounts, including by listing the criteria on which those algorithms operate;
  10. Twitter should explain how the measures it is adopting to combat fake accounts, bots and the like differ from the removal of false information;
  11. Twitter should define, or define more precisely, what it considers to be “suspicious activity,” “bad actors” or “platform manipulation”;
  12. Twitter should ensure that its appeals process complies with the Manila Principles on Intermediary Liability, particularly as regards notice, the giving of reasons, and appeals processes;
  13. Twitter should be more transparent about its use of algorithms to detect various types of content, such as ‘terrorist’ videos, ‘fake’ accounts or ‘hate speech’;
  14. Twitter should clarify whether it relies on a trusted flagger system and, if so, provide information about that system, including the identity of its members and the criteria for joining it;
  15. Twitter should provide case studies of the way in which it applies its sanctions policy;
  16. Twitter should provide, in its Transparency Report, disaggregated data on the types of sanctions it applies.
