Twitter: Proposed dehumanisation policy fails to address concerns

Summary

The following submission was provided by ARTICLE 19 in response to Twitter’s call for input on its proposed policy on dehumanising language on the platform.

ARTICLE 19 welcomes the opportunity to provide input on Twitter’s proposed amendment to the Twitter Rules (the Rules) to address “dehumanising speech.” We also appreciate Twitter’s continuing efforts to address abuse on its platform, and in particular, its attempts to differentiate between various types of abusive behaviour.

However, in ARTICLE 19’s opinion, the proposed changes unfortunately fail to address the shortcomings we have previously identified in Twitter’s policies on “hateful conduct.” Instead, we find that they increase the potential for overly broad interpretation of the Rules that apply to content and behaviour on the platform, and for the imposition of restrictions that run contrary to international standards on freedom of expression.

Furthermore, while the effort to gather users’ feedback on the proposed Rule change is a welcome development, we believe that shortcomings in the consultation process limit its potential for meaningful engagement. While we hope that Twitter will continue to consult its users, we recommend adopting consultation methods that allow for adequate input from users and thus ensure that the data collected is representative of users’ experiences and useful to Twitter’s policy development process.

ARTICLE 19 further reiterates its recommendation that Twitter align its policies more closely with international standards on freedom of expression, and clarify those policies in line with these standards. In particular, we recommend that Twitter provide case studies or, at the very least, more detailed information on the way in which it applies its policies, and explain in more detail the relationship between “threats”, “harassment” and “online abuse”, distinguishing these from merely “offensive content”, which should not be restricted as such.

Twitter’s proposed dehumanisation policy

On 25 September 2018, Twitter launched a consultation of its userbase with a view to “creating new policies together.” In the announcement, Twitter stated it was “asking everyone for feedback on a policy before it is part of the Twitter rules”, insisting that “we consider global perspectives and how this policy may impact different communities and cultures.” The proposal at hand concerns a new policy to address “dehumanising” language.

The objective of the new policy is “to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target.” Twitter explains its proposal as an effort to address the gap between content that already falls within the hateful conduct policy and “Tweets that many people consider to be abusive, even when they do not break our rules”, and refers to research by Susan Benesch and Herbert Kelman on the relationship between dehumanisation and violence.

The proposal states that users will not be allowed to “dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.” It provides two definitions:

  • “Dehumanization” is defined as “language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of their human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to a tool for some other purpose (mechanistic);” and
  • “Identifiable group” is defined as “any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.”

The concept of “dehumanisation”

ARTICLE 19 notes that the term “dehumanisation” does not have a strict definition in jurisprudence or academic writing. In Kelman’s conceptualisation, it refers to a process that undermines the individuality of others through the denial of the target’s agency and community embeddedness. Nick Haslam has defined two forms of dehumanisation that are echoed in Twitter’s proposed definition: animalistic, i.e. when uniquely human characteristics are denied to an outgroup; and mechanistic, when features of human nature are denied to targets. Benesch defines dehumanising speech as “describing other people in ways that deny or diminish their humanity”, and the Dangerous Speech Project she founded considers it a hallmark of the wider category of “dangerous speech”, which covers any form of expression that can increase the risk that its audience will condone or participate in violence against members of another group.

By way of illustration, Twitter provides two examples in its announcement: one of “animalistic” dehumanising speech used to target a group based on its religious affiliation, and one of “mechanistic” dehumanising speech used to target a group based on its gender identity or characteristics.

ARTICLE 19 notes that in international criminal law jurisprudence, various tribunals have discussed the role of dehumanisation in facilitating group-based violence. For instance:

  • The Nuremberg Tribunal found Julius Streicher, the founder and editor of the virulently anti-Semitic tabloid Der Stürmer, guilty of crimes against humanity, noting that “in his speeches and articles, week after week, month after month, he infected the German mind with the virus of anti-Semitism and incited the German people to active persecution”. According to the Tribunal, “typical of his teachings was a leading article … which termed the Jew a germ and a pest, not a human being but ‘a parasite, an enemy, an evildoer, a disseminator of diseases who must be destroyed in the interest of mankind’”.
  • The International Criminal Tribunal for Rwanda noted the dehumanisation of Tutsis in media broadcasts, and the use of terms such as inyenzi (literally meaning cockroaches), as contributing to the outbreak of violence and genocide, in particular as part of incitement to genocide.

Recently, the Independent International Fact-Finding Mission on Myanmar also referred to “a campaign of hate and dehumanization” against Muslims and Rohingya people dating back years, and noted that “military operations on the ground are often accompanied by deeply insulting slurs and outright threats linked to ethnicity and religion” and that such language was used by soldiers while committing human rights violations; for example, rape victims said that the perpetrators compared them to dogs. The members of the Fact-Finding Mission stressed specifically that “the language used was not only insulting; it often also revealed an exclusionary vision”. Their research also highlights the role of the Internet: they “encountered over 150 online public social media accounts, pages and groups that have regularly spread messages amounting to hate speech”, which are “often accompanied by cartoons, memes or graphic content, amplifying the impact of the message”.

ARTICLE 19 finds that dehumanising language will often fall under the concept of ‘hate speech’, a concept that is itself not uniformly defined in international law. For the purposes of responding to ‘hate speech’ in a manner compliant with freedom of expression standards, ARTICLE 19 has proposed distinguishing between different types of ‘hate speech’ according to their severity. We suggest that this distinction applies, mutatis mutandis, to ‘dehumanising speech’ as well:

  • ‘Hate speech’ that must be prohibited: International criminal law regarding incitement, and Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR), require States to prohibit certain severe forms of “hate speech”, including through criminal, civil and administrative measures;
  • ‘Hate speech’ that may be prohibited: States may prohibit other forms of ‘hate speech’, provided the interference with freedom of expression strictly complies with the requirements of Article 19(3) of the ICCPR (i.e. the interference must be prescribed by law, pursue a legitimate aim, and be necessary in a democratic society); and
  • Lawful speech which should be protected from restriction under Article 19(2) of the ICCPR, but nevertheless raises concerns in terms of intolerance and discrimination, and merits a critical response by the State.

ARTICLE 19 also recalls that under the UN Guiding Principles on Business and Human Rights, Twitter has a responsibility to respect all human rights, which includes taking concrete steps to avoid causing or contributing to human rights abuses and to address human rights impacts with which it is involved. While Twitter as a private actor would not be expected to adopt the same types of measures as States, ARTICLE 19 considers that the above categories should guide its response to ‘hateful conduct’, including dehumanising speech.

Proposed amendment fails to address concerns on regulation of ‘hate speech’

In August 2018, ARTICLE 19 reviewed the compatibility of Twitter’s Rules, Policies and Guidelines with international standards on freedom of expression. In our review, we recommended, inter alia, that Twitter’s policies on “hateful conduct” should:

  • Be clearly presented and more closely aligned with international standards on freedom of expression, including by differentiating between different types of prohibited expression on the basis of severity;
  • Provide case studies or more detailed examples of the way in which Twitter applies its ‘hateful conduct’ policies; and
  • Explain in more detail the relationship between ‘threats’, ‘harassment’ and ‘online abuse’ and distinguish these from ‘offensive content’ (which should not be restricted as such).

Regrettably, the proposal on dehumanising language does not address any of the above concerns. Rather, it adds to the potential for overly broad interpretation and for restrictions that go beyond international standards on freedom of expression.

ARTICLE 19 recognises the potential harm of ‘dehumanising’ speech. However, without additional information on how the proposed policy would be enforced, together with more case studies and examples, it remains unclear how the proposed amendment would help realise Twitter’s responsibility to respect human rights.

Further, with regard to the formulation of the proposed amendment, ARTICLE 19 notes that the definition used by Twitter characterises “dehumanisation” as “language that treats others as less than human”. Given this specific use of the word “language”, it is unclear how the proposed policy would apply to graphics (including memes) and videos.

Accordingly, ARTICLE 19 reiterates that Twitter should:

  • Clearly present its policies on “hateful conduct” and align them more closely with international standards on freedom of expression, including by differentiating between different types of prohibited expression on the basis of severity;

  • Provide users with case studies or more detailed examples of the way in which it applies its “hateful conduct” policies in practice, including with a view to ensuring protections for minority and vulnerable groups; and

  • Explain in more detail the relationship between “threats”, “harassment” and “online abuse” and distinguish these from “offensive content” (which should not be restricted as such).

Consultation process requires more transparency and diverse participation

ARTICLE 19 welcomes Twitter’s openness to engaging with its users and the recognition that “historically we have been less transparent than, frankly, I think is ideal about our policies and how we develop them.”

While this consultation represents a step towards greater transparency, several shortcomings inhibit its potential for meaningful engagement:

  • We note that the online questionnaire offers limited space for responses and limited ways for users to express their views; there is no scope to provide detailed responses or analyses. As a result, it cannot yield sufficient information to allow for a proper assessment of the proposed amendment’s impact.
  • Given the subject matter, specific consultation of users who belong to groups that are the target of dehumanising language is essential. While we welcome Twitter’s efforts to actively reach out to some of these groups, more transparency regarding the methodology is needed to allow for an assessment of these efforts, and how they adapt to different local contexts.
  • It appears that this “global consultation” is available in only eight languages. Again, more transparency would be welcome regarding Twitter’s efforts to remedy this limitation, beyond the company’s statement that “for languages not represented here, our policy team is working closely with local non-governmental organizations and policy makers to ensure their perspectives are captured”.

Moreover, greater transparency on how the input gathered will be processed, and what role it will play in policy development, would help generate a genuine two-way conversation. Twitter has so far provided only the following statement in this regard:

“Once the feedback form has closed, we will continue with our regular process, which passes through a cross-functional working group, including members of our policy development, user research, engineering, and enforcement teams. We will share some of what we learn when we update the Twitter Rules later this year”.

In order to develop its engagement with users beyond good intentions, ARTICLE 19 recommends that the above flaws in the process be addressed in future consultations. We stand ready to further assist Twitter in improving its policies on this and other subjects.