Algorithms and automated decision-making in the context of crime prevention

In this briefing paper, ARTICLE 19 evaluates the human rights impact of algorithmic or automated decision-making (algorithmic decision-making) in the area of crime prevention, with a particular focus on terrorism and extremism. From a regional perspective, we analyse recent policy developments in the USA and Europe, where these issues have been particularly high on policymakers’ agendas and where several initiatives have been put forward to tackle online extremism.

In the US, for example, the Obama administration has advocated the use of “hashes” for the detection and automatic removal of extremist videos and images from the Internet. There have also been proposals to modify search algorithms so as to “hide” websites that incite or support extremism from results pages. The hash mechanism has been adopted by Facebook and YouTube for video content; however, no information has been released about the level of human input or the criteria used to determine which videos are “extremist”.
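To illustrate the basic mechanism, the sketch below shows how a platform might check an uploaded file against a shared database of hashes of content already labelled “extremist”. This is a minimal Python illustration only: the hash function, database, and labelling criteria of the actual Facebook/YouTube systems have not been disclosed, and real deployments are understood to use perceptual rather than cryptographic hashes so that re-encoded copies still match.

    import hashlib

    def sha256_hex(data: bytes) -> str:
        """Hex digest of a file's bytes."""
        return hashlib.sha256(data).hexdigest()

    # Hypothetical shared database: hashes of files already labelled
    # "extremist". How content gets onto this list is precisely the
    # part the platforms have not disclosed.
    known_hashes = {sha256_hex(b"<bytes of a previously flagged video>")}

    def should_remove(upload: bytes) -> bool:
        # An exact cryptographic hash only catches byte-identical copies;
        # deployed systems are believed to use perceptual hashes so that
        # re-encoded or cropped copies still match.
        return sha256_hex(upload) in known_hashes

    print(should_remove(b"<bytes of a previously flagged video>"))  # True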

In Europe, while similar projects are under scrutiny, Europol has created a dedicated unit to monitor online extremist content, the EU “Internet Referral Unit”. The Unit identifies content in breach of the terms and conditions of each online platform; the companies can then voluntarily act upon the Unit’s report. The system is due to be automated in the next few months with the introduction of the “Joint Referral Platform”.
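In software terms, the referral model is a notification pipeline rather than an enforcement one: the Unit flags content against each platform’s own terms, and the removal decision stays with the company. The sketch below is our hypothetical rendering of that flow; the names and data structure are assumptions, not the Unit’s actual design.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Referral:
        platform: str      # e.g. "ExampleTube" (hypothetical)
        content_url: str
        tos_clause: str    # the platform rule the Unit says is breached

    def refer(referral: Referral,
              platform_review: Callable[[Referral], bool]) -> bool:
        # The Unit only notifies; whether anything is removed is decided
        # by the company's own, voluntary review.
        return platform_review(referral)

    removed = refer(
        Referral("ExampleTube", "https://example.com/v/123", "no incitement"),
        platform_review=lambda r: True,  # placeholder company policy
    )
    print(removed)  # True, but only because the platform chose to act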

Algorithmic decision-making is also being deployed in the US for the prevention of crime. One algorithm is used to produce a risk assessment predicting the likelihood that a person will reoffend. There have been reports that the algorithm is inherently biased against people of colour and minorities, which is particularly troubling since the risk assessment can have far-reaching repercussions for the sentence imposed. Other algorithm-based software is used by police forces to forecast where and when criminal acts are likely to take place. This software has also been criticised for reflecting and perpetuating bias: the algorithm is trained on historical data, which reflects systematically biased police practices.
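The bias critique is easy to see in a toy model. In the Python sketch below (our illustration, not any vendor’s actual code), a “predictive” system simply ranks districts by historical arrest counts; if past arrests reflect where police patrolled rather than where crime occurred, the forecast sends patrols back to the same districts, which in turn generates more of the same data.

    from collections import Counter

    # Historical arrest records: each entry is the district of one arrest.
    # If district "A" was simply patrolled more heavily, it dominates
    # this data regardless of the underlying crime rate.
    historical_arrests = ["A", "A", "A", "B", "A", "C", "A", "B"]

    def predict_hotspots(arrests, top_n=2):
        """Rank districts by past arrests: a caricature of
        predictive policing's reliance on historical data."""
        counts = Counter(arrests)
        return [district for district, _ in counts.most_common(top_n)]

    # Patrols are sent where arrests already happened, which produces
    # more arrests there: the feedback loop critics describe.
    print(predict_hotspots(historical_arrests))  # ['A', 'B']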

One of the most widely accepted uses of algorithmic decision-making is the removal and filtering of child sexual abuse images and videos by online platforms and ISPs. This filtering is now automated in both the US and the UK, relying on “hash” technology to identify content for removal and filtering. The practice has nonetheless been criticised for the lack of judicial oversight, the lack of transparency, and the risk of over-blocking.
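Unlike the exact match sketched earlier, the hash technology used here is designed to catch near-duplicates. Below is a generic illustration of perceptual-hash comparison by Hamming distance; the actual algorithms and thresholds used in the US and UK systems are not public, so the details here are assumptions.

    def hamming_distance(h1: int, h2: int) -> int:
        """Number of differing bits between two 64-bit perceptual hashes."""
        return bin(h1 ^ h2).count("1")

    def matches_blocklist(upload_hash: int, blocklist,
                          max_distance: int = 5) -> bool:
        # A small Hamming distance means near-duplicate images, so resized
        # or re-encoded copies still match. The threshold of 5 is our
        # assumption; deployed systems do not publish theirs.
        return any(hamming_distance(upload_hash, known) <= max_distance
                   for known in blocklist)

    blocklist = [0x0F0F0F0F0F0F0F0F]  # hash of a known image
    print(matches_blocklist(0x0F0F0F0F0F0F0F0E, blocklist))  # True: 1 bit differs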

Algorithmic decision-making is also widely used in the context of copyright removals. In these cases there is usually some human input in the process: copyright owners are asked to upload their material to a dedicated programme and to decide what the consequences of a detected match should be. These programmes have been widely criticised, in particular in the context of YouTube, where the appeals system can be extremely complex and copyright owners are seen as “playing judge, jury and executioner”.[1]
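Schematically, these systems couple automated matching with owner-chosen enforcement. The sketch below is a hypothetical rendering of that design (the names and fingerprints are ours): the rights holder registers reference material together with a policy, and the platform applies that policy to any match, which is why critics say owners act as “judge, jury and executioner”.

    from enum import Enum
    from typing import Optional

    class Policy(Enum):
        BLOCK = "block"        # remove the matched upload
        MONETISE = "monetise"  # run ads, pay the rights holder
        TRACK = "track"        # leave it up, report viewing statistics

    # Rights holders register reference fingerprints along with the
    # consequence they want applied to any match.
    registry = {"fingerprint-of-song-x": Policy.BLOCK}

    def enforce(upload_fingerprint: str) -> Optional[Policy]:
        # The consequence was chosen by the rights holder in advance;
        # the uploader can only contest it afterwards, through appeal.
        return registry.get(upload_fingerprint)

    print(enforce("fingerprint-of-song-x"))  # Policy.BLOCK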

Algorithmic decision-making has also been applied to abusive messages on social media. On Twitter, for example, messages identified as “abusive” are filtered out of the recipient’s notifications while remaining visible on the platform. Staff members then decide on possible bans or suspensions for the abusive user.
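The division of labour described here (automated filtering of what the recipient sees, human decisions on sanctions) can be sketched as a simple routing rule. The classifier, threshold, and review queue below are our assumptions, not Twitter’s published design.

    def classify_abuse(message: str) -> float:
        """Stand-in for a trained abuse classifier scoring in [0, 1]."""
        return 0.9 if "insult" in message.lower() else 0.1

    def route(message: str, notifications: list, review_queue: list,
              threshold: float = 0.8) -> None:
        if classify_abuse(message) >= threshold:
            # Hidden from the recipient's notifications, though the message
            # itself stays visible on the platform; a human reviewer then
            # decides on any ban or suspension.
            review_queue.append(message)
        else:
            notifications.append(message)

    notifications, review_queue = [], []
    route("what an insult!", notifications, review_queue)
    print(review_queue)  # ['what an insult!']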

Overall, over-blocking and a lack of clarity about appeals procedures are recurrent problems in algorithmic decision-making. Such initiatives are usually based on “self-regulatory” mechanisms and are therefore placed beyond the scope of the law and judicial oversight. Moreover, in practice, online intermediaries decide what redress mechanisms, if any, should be available, and what level of human oversight applies to the automated decision-making.

This briefing paper provides an initial snapshot of some of the human rights implications of automated decision-making, in particular regarding the right to freedom of expression. ARTICLE 19 believes that it is important to ensure that human rights are protected in the context of algorithmic decision-making. As such, this briefing paper provides an overview of some of the important issues and policy developments that should be considered when developing policy responses in this area; it is an area of work we will develop further in the future.


[1] C. Hassan, What about all that copyright takedown abuse, YouTube?, Digital Music News, 29 February 2016, available at http://bit.ly/2fW8lnR.