EU: Artificial Intelligence Act must do more to protect human rights



Ahead of the European Parliament’s vote on the EU Artificial Intelligence Act, ARTICLE 19 reiterates our calls for strengthening the human rights protections in the Act, including a full ban on the use of remote biometric surveillance and emotion recognition technologies. We also caution against reliance on standard-setting bodies to guide the implementation of crucial aspects of the Act. 

Commenting on the upcoming vote, Vidushi Marda, Senior Programme Officer at ARTICLE 19, said: 

“The EU AI Act is a critical opportunity to draw some much-needed red lines around the most harmful uses of AI technologies, and put in place best practices to ensure accountability across the lifecycle of AI systems. This is an opportunity the EU cannot afford to miss – countries around the world will be looking to this legislation when considering their own ways of dealing with emerging technologies. 

“ARTICLE 19 remains profoundly concerned by the way in which key decisions around technical standards, which will have huge human rights implications, have effectively been outsourced by policymakers to the European Standardisation Organisations (ESOs), which are not inclusive or multistakeholder, and have limited opportunities for human rights expertise to meaningfully participate in their processes. Ceding so much power to them risks undermining the democratic legitimacy of the process.” 

Remote biometric identification 

In the EU, there has been a rise in the number and kinds of AI systems being deployed to surveil our movements in public spaces on a mass scale, tracking us and gathering vast amounts of sensitive information about us, infringing on our privacy, and potentially deterring us from engaging in civic activities, including peaceful protest. Such technologies tend to have the most pervasive impact on those already most marginalised in society, including people from ethnic minorities. 

The EU AI Act classifies law enforcement use of ‘real-time’ remote biometric identification systems in publicly accessible places as ‘unacceptable risk’: meaning, in theory, that the use case is banned. However, the biggest harm caused by remote biometric identification often occurs when such technologies are used ‘post hoc’ rather than in ‘real time’. When using the data ‘post hoc’, governments, law enforcement, or private companies have the ability to combine multiple sources at their disposal to produce even more intrusive inferences about individuals and communities. It is therefore unclear why the Act makes references only to ‘real-time’ use. 

Over the past year, civil society has made great progress in influencing the European Parliament to support amendments banning public facial recognition and other mass surveillance uses. 

ARTICLE 19 calls on MEPs to ensure that the Act includes a full ban on real-time and post-hoc remote biometric identification in public spaces, by all actors (not just law enforcement), and with no exceptions. 

Emotion recognition 

Emotion recognition technologies, such as the ones used in China to persecute Uyghur people, claim to be able to infer people’s emotional states from physical, physiological and behavioural markers (like facial expressions or vocal tone). These claims rest on discriminatory and pseudo-scientific foundations. ARTICLE 19’s 2021 report on emotion recognition also asserts that these technologies are fundamentally inconsistent with international human rights standards and have no place in democratic societies. They strike at the heart of human dignity, and their assumptions about human beings and their character endanger our rights to privacy, freedom of expression, and the right against self-incrimination.

Emotion recognition has already been deployed in the European Union: automated systems have been used to ‘analyse’ people’s emotions at EU borders, in an attempt to ‘verify’ from people’s facial expressions whether they were lying about their immigration claims. Around the world, such systems are deployed to ‘determine’ whether people are good employees or students, or whether they are likely to be violent. When used in a policing context, these systems may unfairly target people belonging to communities that do not conform to the arbitrary norms embedded in them. 

Evidence from around the world has shown that such technologies fail to deliver on what they claim they can do – they are unreliable as indicators, and increase the risk of racial profiling. In 2019, a group of experts reviewed over a thousand scientific papers studying the link between facial expressions and inner emotional states and found no reliable relationship between the two.

The EU AI Act originally classified emotion recognition technologies as ‘low or minimal’ risk, though more recent reports suggest that they are now on the list of prohibitions. Ahead of the Committee vote later this month, ARTICLE 19 reiterates that these technologies have no place on the European market. We’re calling on legislators to impose a total ban on all emotion recognition technologies. 

Standardisation

The AI Act puts a strong emphasis on the development of technical standards to provide detailed guidance for clarifying and implementing the Act’s essential requirements, including those around the protection of fundamental rights. The task of developing those standards will fall to two European Standardisation Organisations: CEN (the European Committee for Standardisation) and CENELEC (the European Committee for Electrotechnical Standardisation). Those bodies, dominated by private sector companies, will have de facto power to shape the final rules, without the necessary democratic accountability.

Standards development organisations (SDOs) are not structured to meaningfully engage with the human rights considerations and implications of these technologies. They are largely composed of individuals and organisations from technical communities that have limited (if any) knowledge of human rights or understanding of the societal implications of technologies. ARTICLE 19 has a long track record of working on embedding human rights considerations in technical standardisation bodies such as the Internet Engineering Task Force (IETF), the Institute of Electrical and Electronics Engineers (IEEE), and the International Telecommunication Union (ITU). Our experience shows that those processes are too often shaped and influenced by technical considerations and corporate lobbying, and rarely by careful consideration of rights-based approaches. 

Although the European Commission has made specific references to the need for ‘adequate fundamental rights expertise and other public interests’ to be represented in the standards-setting process, this will be hard to achieve in practice. Developing standards is a long, opaque and highly complex process. While civil society advocates can bridge the knowledge and expertise gap on human rights considerations, the long-term commitment required for meaningful engagement is expensive and resource intensive – something many civil society organisations cannot easily sustain. 

Standardisation, far from a purely technical exercise, will likely be a highly political one, as SDOs will be tasked with answering some of the most complicated legal and political questions raised by the essential requirements. At the same time, the European Parliament will not have the ability to veto the standards mandated by the European Commission. This could lead to a situation where the mandated standards end up looking significantly different from what was originally set under the Act, undermining its democratic legitimacy. 

ARTICLE 19 calls on legislators to refrain from tasking ESOs with elaborating the implementation of essential requirements that have implications for fundamental rights and, additionally, to put in place a regulatory requirement for users of AI systems to conduct and publish fundamental rights impact assessments prior to deployment.