Stop Hate for Profit and making Facebook accountable for its impacts on human rights

Facebook’s business model is predicated on advertising revenue. That is why the decision by dozens of companies, including Unilever, Ford, Patagonia and Mozilla, as well as civil society organisations, to join the Stop Hate for Profit campaign and pull advertising from the platform has forced the company to pay attention. Once again, Facebook’s impact on our human rights is in the spotlight at a key moment for tackling racism and inequality in the US and elsewhere.

ARTICLE 19 welcomes, and indeed supports, many of the campaign’s calls. Facebook needs to be far more transparent about the way in which it addresses ‘hate speech’ and misinformation on the platform. That means giving researchers, academics, civil society and other relevant stakeholders much greater access to data. On issues of ‘hate speech’, where Facebook boasts of its increasing reliance on algorithms, it is especially important that the company provide greater information on the algorithms it uses to assess content, including their error rates and trends in their decision-making.

We also agree that commissioning regular independent human rights impact assessments, with proper follow-up, is essential if Facebook is to demonstrate its commitment to respecting human rights. Those assessments should rightly focus on the impact of Facebook’s products and services on the human rights of groups at risk of discrimination. Calls for content reviewers to be trained to consistently identify ‘hate speech’, discrimination and harassment according to Facebook’s rules are also useful. To be meaningful, however, internal structures must be put in place to ensure that the recommendations made in human rights impact assessments are properly followed up and mainstreamed within the company at all levels and across different teams.

At the same time, we urge caution against the use of overly broad terms in Facebook’s and other social media rules that would grant even greater power to censor political speech. Calls for common-sense changes to Facebook’s policies to help stem radicalisation and hate do not provide enough detail about what those changes would be or how they might be implemented. Both ‘radicalisation’ and ‘hate’ are broad terms. We are concerned about the unintended consequences that could arise from the implementation of these measures, including the removal of speech from the very groups that the Stop Hate for Profit recommendations seek to protect. Calls for removing groups ‘related to militia’ could end up inadvertently censoring human rights activists in countries beyond the US who are trying to expose atrocities against minority groups. Similarly, calling for action on content ‘associated with hate’ could lead to the censorship of journalists reporting on hate groups, or activists trying to counter hateful ideology. Instead of broadening the terms under which Facebook acts on content, we should be urging greater clarity on the company’s existing standards and their enforcement.

Despite having faced similar criticisms for many years, Facebook has so far offered only a limited response to the campaign. It has, however, published the Civil Rights Audit report conducted by civil liberties and civil rights expert Laura W. Murphy and Megan Cacace, a partner in the civil rights law firm Relman Colfax, PLLC. Its conclusion is clear: Facebook still has a long way to go. We agree.

However, it is important not to lose sight of the need for systemic change beyond the calls of a single campaign or those made in the Civil Rights Audit. The Stop Hate for Profit campaign has been effective because it targets Facebook’s profit-making, seeking action from Facebook’s real customers: its advertisers. While it is welcome that the advertising industry is engaging with human rights principles, this is no substitute for systemic change. For human rights to be better protected on the platforms, users and other stakeholders must demand radical transparency and accountability on ‘hate speech’ issues. They must also challenge Facebook’s business model. In the short term, it is important that companies are made more transparent and take action to address their human rights impacts; in the longer term, the protection of users’ rights to privacy, free expression and equality is better served by a reduction in the power of huge companies like Facebook. Business models are the real issue here, and for real change to happen, we need policy-makers to step in and embrace measures such as unbundling content moderation from platform hosting, to empower users and give them more choice.