EU: New proposal on artificial intelligence must protect human rights


A surveillance camera and an artificial palm tree in front of the newly built headquarters of the Bundesnachrichtendienst (BND), the German intelligence service agency.

New plans for regulating artificial intelligence (AI) recently made public by the European Commission must go much further to protect human rights, including the rights to freedom of expression and privacy and the right to non-discrimination.

“For the EU to provide a roadmap for the regulation of AI that recognises the far-reaching impact of AI systems on our way of life and democratic values is to be welcomed,” said Barbora Bukovska, Senior Director of Law and Policy at ARTICLE 19. “But if the EU wants to be a leader in AI regulation, it must go much further in protecting rights. As it stands, many types of extremely invasive biometric mass surveillance and other unacceptable uses of AI systems would be allowed, with a significant impact on human rights.”

On 21 April, the European Commission published its proposal for a Regulation on a European approach for Artificial Intelligence, its first-ever legal framework on AI.

As AI becomes increasingly prominent in people’s lives, ARTICLE 19 welcomes this proposal. It is important that the Commission recognises the potential negative impact AI could have on democratic values and fundamental rights. It is also positive that the Commission acknowledges that certain AI practices should never be allowed because they are simply incompatible with those values and rights.

However, ARTICLE 19 finds that the Commission’s proposal does not go far enough to protect human rights.

The language used throughout the text to define prohibited AI systems, and the criteria used to assess high-risk AI systems, are unduly vague, making it hard to predict how they might be applied or operationalised. For example, the level of permitted risk and the kinds of harms envisaged as part of these risk assessments remain unclear. There is also little in the way of safeguards against unintended consequences or uses of AI systems that may have a detrimental impact on human rights. The document raises issues of health, safety and rights, but offers no detail on how the levels of these risks are to be assessed.

Disappointingly, the proposal does not impose an outright ban on biometric mass surveillance in publicly accessible spaces. Instead, there is a limited prohibition on the use of ‘real-time’ remote biometric identification systems for law enforcement, subject to a number of exceptions. In other words, ‘real-time’ biometric identification systems remain available to police in some circumstances, and other types of biometric systems could still be deployed and used for other purposes, such as migration control. These include systems that assign people to categories based on sex, age, eye colour, or political or sexual orientation, among other groupings; such categorisation is not prohibited but merely flagged as high risk.

The proposed regulation therefore fails to address how biometric mass surveillance has a chilling effect on freedom of expression and violates the right to privacy and other human rights. Worryingly, emotion recognition is also not prohibited, despite its discredited scientific basis and fundamental inconsistency with human rights. Instead, it is subject to weak ‘transparency obligations’ that are largely ineffective for protecting human rights. 

In the coming weeks, ARTICLE 19 will be working to ensure the proposal delivers much-needed human rights protections, including by examining its possible impact on the public’s right to free expression and on the tech industry’s advertising business models. We will also continue to campaign against biometric mass surveillance and for the highest level of protection for people’s fundamental rights in future AI regulation.


Background

The European Commission’s Proposal for a Regulation on a European approach for Artificial Intelligence aims to lay down a uniform legal framework for the development, marketing and use of AI in the European Union in conformity with EU values. In the coming months, and possibly years, the proposal will be discussed by the European Parliament and the Council, which will need to agree on a final text.

The proposal adopts a risk-based approach and divides AI systems into the following categories: prohibited systems, high-risk systems, and low-risk systems. Before high-risk AI systems can be placed on the market, providers must comply with a number of obligations, including documentation and record-keeping, transparency, users’ access to information, human oversight, robustness, accuracy and security. The draft regulation also provides for an EU coordination mechanism, with direct oversight carried out by designated public authorities in the Member States and the Commission retaining a key role in designating high-risk AI systems.

ARTICLE 19 has previously commented on the EU AI White Paper and is a member of the EDRi Reclaim Your Face campaign. Read about ARTICLE 19’s position on biometric technologies and freedom of expression here.