Iran: Meta must overhaul Persian-language content moderation on Instagram

ARTICLE 19, Access Now and the Center for Human Rights in Iran (CHRI) have come together to make recommendations to Meta and Meta’s Oversight Board in an effort to streamline processes and ensure freedom of expression is protected for users who rely on its platform in Iran, especially during protests. 

On 7 June 2022, ARTICLE 19 hosted a RightsCon session that included Meta’s Content Policy manager Muhammad Abushaqra, Meta Oversight Board member Julie Owono, and BBC Persian’s Rana Rahimpour. The event covered Instagram’s Persian-language content moderation processes and problems. The discussion focused on Instagram, which is owned by Meta, because it is now the main platform for communication in Iran as the last remaining uncensored major social media platform in the country. According to polls, 53.1% of all Internet users in Iran are on Instagram, making it the second most-used app in Iran after WhatsApp.  

Instagram suffers from a deficit of trust and transparency when it comes to content moderation practices for the Persian-language community. Despite the company’s acknowledgments of these problems, caseloads remain high through the communication and networking channels that organisations like ours maintain, and these represent problems from only a small fraction of the community. Many Iranians are abandoning Instagram because they distrust the platform’s policies and/or their enforcement. 

Following the discussion, ARTICLE 19, Access Now and the Center for Human Rights in Iran (CHRI) present the following recommendations to Meta with a view to helping the company better understand the complexities of Persian-language content, and ensure its content moderation practices uphold and protect the human rights and freedom of expression of everyone using Instagram.

Blocking hashtags, blocking access to information 

Among the key issues discussed with Meta were mistakes and technical errors that prevented people from accessing certain information, including blocks placed on hashtags. Reference was made in particular to the sanction placed on the hashtag #IWillLightACandleToo, used in support of the campaign for the families of the victims of Ukraine International Airlines flight PS752, shot down in 2020, resulting in the deaths of all 176 people on board. The hashtag was temporarily blocked from being shared at the height of the campaign.

Speaking specifically about the censorship of #IWillLightACandleToo, the Meta representative explained that while the hashtag and the campaign did not violate any policies, the ability to search for the hashtag was temporarily locked because some content shared under the hashtag was unrelated to the main message of the campaign, and this content did violate Meta’s policies. The Meta representative suggested that the violating content was part of counter-narratives intended to refute the campaign and was immediately removed because it violated the company’s Dangerous Organisations and Individuals standards. However, in addition to such individual removals, locks have been placed on the searchability of hashtags with multiple violations. Meta said that when it learned about the issue following a huge raft of complaints, it took down the violating content and restored access to the hashtag. The Meta representative also asserted there are systems in place to stop coordinated inauthentic behaviour. In the case of the #IWillLightACandleToo campaign, Meta further claimed that such behaviour would have been detected, even though, as the BBC’s Rana Rahimpour noted during the event, the problem was only rectified because of substantial media attention to the campaign and because the association of families of victims of flight PS752 has such strong networks and platforms.


ARTICLE 19, Access Now and CHRI urge Meta to create better rapid response systems so that those without platforms or access to international human rights organisations can escalate the removal of erroneous locks and similar mistakes through more equitable processes. We further urge Meta to actively follow similar campaigns on Instagram to understand their context, prevent these mistakes, and eliminate the ability of bad actors, whether nation-state affiliated or not, to use Meta’s policies to the detriment of Persian-language human rights communities, such as those behind the PS752 association. 

‘Death to Khamenei’ and Meta’s consistent inconsistencies

One of the recent issues raised within the Persian-language community has been the takedown of Instagram content containing the protest chant ‘Death to Khamenei’ or other ‘death to’ slogans against institutions associated with the dictatorship in Iran. Meta previously issued a temporary exception for ‘Death to Khamenei’ chants in July 2021, after the Iranian community clarified the specific cultural context of such chants. The Meta spokesperson stated, however, that issuing exceptions goes against the company’s consistent policy of disallowing violent calls against leaders, even when they are rhetorical slogans. Meta had, however, granted this kind of exception to allow hate speech in the context of Russia’s invasion of Ukraine. While Meta has repeatedly declared it must be consistent in these global applications, it has offered no further clarification of when and why such exceptions were granted in some instances, namely the case of Ukraine, but not in others. After the backlash over the exceptions, Meta did clarify that it will not allow exceptions for statements calling for the death of any leader, particularly calls for the death of leaders in Russia or Belarus. However, the rules on its exceptions granting broader carve-outs for Ukrainian speech relating to self-defence against Russian invaders remain vague. 

Our organisations, however, are worried that this lack of nuance causes problematic takedowns of newsworthy posts documenting protests, or, in other cases, of posts that could directly or indirectly help corroborate human rights abuses. The lack of an exception for this kind of speech directly contradicts the six-part threshold test set out in the United Nations’ Rabat Plan of Action on the threshold for freedom of expression in its consideration of incitement to discrimination, hostility or violence, which requires, among other considerations, that the context and intent of a statement be taken into account.


We propose that Meta clarify in which cases it makes exceptions to allow violent calls, and why, beyond its communications on calls against leaders. We also encourage Meta to reinstate the Oversight Board’s ability to deliberate on content decisions connected with the Russian invasion of Ukraine, especially to ensure oversight of these exception processes. We also ask Meta, at the very least, to preserve violating content, as offensive content from crisis zones is needed as corroborating human rights evidence.

We also urge the Meta Oversight Board to deliberate on protest content removed because of the slogan ‘Death to Khamenei’, to ensure consistency in granting exceptions as well as compliance with international human rights norms. We urge the Oversight Board and Meta to work together to allow appeals to the Oversight Board beyond the two-week timeframe, which has proven insufficient for users who must navigate the Board’s appeals process in the midst of conflicts. 

Lack of Transparency on Meta’s moderation processes 

Two issues surrounding transparency were discussed during the RightsCon session: automated moderation processes and human review processes. 


Participants discussed automated review processes whereby media banks or lexicons are used to remove phrases, images or audio automatically. While Meta confirmed its use of these automated lists, we continue to ask for clarity on the contents of these media banks and lexicon lists. We note, for instance, that automated removals of content that does not support entities listed under the Dangerous Organisations and Individuals (DOI) policy are rampant. Most relevant in the Iranian context is the listing of the Revolutionary Guards on the DOI list. Even after the Oversight Board recommended that the policy better account for contexts that do not constitute support for or praise of the listed entities, and Meta accepted that recommendation, we continue to see this problem perpetuated. It affects satire, criticism, news and political art expressing criticism of or objection to the Revolutionary Guards, which forms a large proportion of the news and expression content that Persian-language and even Arabic-language users (Syrian activists, for example) post on Meta’s platforms. 


We call for more transparency on these automated processes. We further ask for clarification of the contents of the media banks used for automated takedowns in the Persian-language market. Erroneous takedowns continue on the platform, as the processes in place do not allow for nuance in content that references DOI-listed entities. These takedowns violate the commitments Meta made in response to the two DOI-related Oversight Board recommendations, which require nuance in their interpretation. 

Human Review Processes 

Revelations by the BBC that Iranian officials have tried to bribe Persian-language moderators working for Meta at a content moderation contractor, the Telus International center in Essen, Germany, have also raised concerns about the oversight of human moderation processes. Another BBC report revealed a series of allegations, including a claim that there is not enough oversight of moderators to neutralise their biases. One anonymous moderator alleged that Meta assesses the accuracy of only 10% of each moderator’s decisions. Meta clarified during the RightsCon event that audits of reviewers’ decisions are based on a random sample, meaning a reviewer cannot evade the system or choose which of their decisions are audited. Meta made public assurances to the BBC that it would conduct internal investigations, and during our event, representatives assured us that those investigations found high accuracy rates. 


We worry that the random samples Meta says it uses to demonstrate accuracy may not disprove claims that significant gaps remain in the oversight meant to ensure bias cannot affect decisions. We are also concerned that claims of ‘high accuracy’ do not square with the documentation and experiences of the Persian-language community, which faces constant streams of takedowns and unexplained ‘mistakes’. There must be clarity over what data is used to produce these accuracy statistics. 

ARTICLE 19, Access Now and the Center for Human Rights in Iran (CHRI) urge Meta to make this investigation transparent, and to clarify and supply better-quality information about the training and review processes that maintain the integrity of moderation. We also call for investment and resources to build buy-in and trust from these impacted communities.

Please email [email protected] for any questions or comments.