EU: AI Act passed in Parliament fails to ban harmful biometric technologies



Today, 13 March 2024, the European Parliament adopted the Artificial Intelligence (AI) Act with an overwhelming majority (523 in favour, 46 against, 49 abstentions). The legislation classifies AI systems based on the risk they pose to individuals’ and groups’ autonomy, civil liberties, and safety. Although ARTICLE 19 welcomes the Act’s prohibition of many practices that pose significant risks to human rights, we are disappointed that it does not go further and impose complete bans on some of the most rights-infringing uses of biometric technologies.* Given the central role that standard-setting organisations will play in operationalising many aspects of the Act, it is crucial that these organisations engage meaningfully with civil society and that their technical decisions are grounded in human rights considerations and international human rights standards.

ARTICLE 19 is pleased that the AI Act explicitly prohibits dangerous and damaging uses of certain AI systems, including predictive policing, social scoring, and the surreptitious manipulation of human behaviour. ARTICLE 19 and other civil society organisations have long argued that these uses pose an unacceptable level of risk to the rights to privacy and freedom of expression, among others. We welcome the recognition of this position in the final version of the legislation.

However, ARTICLE 19 is concerned that the AI Act fails to ban completely the use of emotion recognition technologies and real-time remote biometric identification in publicly accessible spaces. We believe that emotion recognition technologies are fundamentally irreconcilable with human rights, and that their design, development, deployment, sale, export, and import should be banned entirely. We also believe that banning only the real-time use of remote biometric identification in public spaces, while leaving ‘post’ (retrospective) identification unaddressed, is insufficient to protect fundamental rights. Moreover, the exceptions to this already too-narrow ban are broad and lack sufficient safeguards, creating dangerous loopholes for law enforcement uses.

Additionally, ARTICLE 19 observes that, for high-risk AI systems, the AI Act establishes a presumption of conformity with safety norms and human rights where the AI application complies with notified technical standards. This presumption demonstrates an over-reliance on, and excessive delegation to, technical standard-setting organisations, which are not meaningfully structured to incorporate human rights considerations into the design of technologies. Given this development, standards bodies and civil society organisations should step up their efforts to ensure that technical decisions in the development of technologies systematically centre human rights.

Finally, ARTICLE 19 urges the European Commission to actively seek and encourage the involvement of civil society organisations in the implementation of the AI Act, ensuring that artificial intelligence protects human rights. The Advisory Forum, which will be established to advise the Commission, represents an opportunity to engage civil society, academia, and affected persons in a meaningful way, and to lend democratic legitimacy to the AI Act’s implementation and enforcement.

*This statement focuses on the issue of biometric technologies in the AI Act. The fact that we do not comment on all provisions of the Act does not mean that we endorse them.