Open letter: European Commission must ban biometric mass surveillance


Summary

ARTICLE 19 and 50 other civil society organisations across Europe sent an open letter to the European Commissioner for Justice, Didier Reynders. The coalition calls for enhanced fundamental rights protections in the upcoming artificial intelligence (AI) legislative proposal. The protection of fundamental rights must be at the core of this proposal: it must prohibit uses of AI that violate human rights, particularly when they enable biometric mass surveillance. Biometric mass surveillance seriously impedes freedom of expression and open public participation, as the feeling of being watched leads individuals to alter their behaviour and self-censor. In the open letter, we call for a specific ban on the indiscriminate or arbitrarily-targeted use of biometrics in public or publicly-accessible spaces, as the existing general prohibitions in the EU data protection framework are not proving sufficient. We call for red lines to be drawn on uses of AI that are not compatible with human rights.

Thursday, 01 April 2021

Re: Seeking your support for a specific ban on biometric mass surveillance practices on fundamental rights grounds

Dear Commissioner Reynders,

cc: Executive Vice-President Vestager

cc: Vice-President Jourová

cc: Commissioner Dalli

cc: Acting Director-General Saastamoinen, DG-JUST

cc: Reynders’ cabinet

 

We, the signatories of this letter, represent a coalition of 51 digital rights, human rights and social justice organisations working for people’s fundamental rights across Europe. We are writing to ask you to support our call for enhanced fundamental rights protections in the upcoming artificial intelligence (AI) law, in particular relating to facial recognition and other forms of biometric mass surveillance.

The European Commission has set itself the important task of carving out a European way forward with AI that puts trust, excellence, and the protection of fundamental rights at its core. To achieve that goal, the upcoming legislative proposal on AI must take the necessary step of prohibiting applications of AI that irremediably violate fundamental rights, such as remote biometric identification technologies that enable inherently undemocratic mass surveillance.

62 civil society organisations have already issued a call for regulatory limits on applications of artificial intelligence that unduly restrict human rights, such as uses of biometric technologies that enable mass surveillance, and that call was further strengthened by a letter from 116 MEPs. Research has shown that biometric mass surveillance practices unduly infringe on the rights enshrined in the Charter of Fundamental Rights of the European Union (CFEU) – in particular, the rights to privacy, data protection, equality, non-discrimination, free expression and association, due process and good administration, the presumption of innocence, social rights, workers’ rights and dignity, as well as the fundamental principles of justice, democracy and the rule of law.1

We would thus like to call your attention to three important points:

  1. Civil society is calling for a specific ban on inherently unnecessary and disproportionate biometric mass surveillance practices because the existing general prohibitions in the EU data protection framework are not proving sufficient:
  • The processing of biometric data is prohibited under the General Data Protection Regulation (GDPR). Yet exemptions, such as on the basis of consent (Article 9(2)(a)), are being invoked by public and private actors deploying biometric systems for mass surveillance purposes. Because of these misuses of consent as a legal basis and the subsequent lack of enforcement, we strongly believe that the EU needs a specific legal ban to build on the existing general prohibition in the GDPR;2
  • The processing of biometric data for law enforcement purposes is restricted to ‘only where strictly necessary’ and proportionate and on the basis of Union or Member State law (Data Protection Law Enforcement Directive (LED) Article 10 and 10(a)). However, this exemption – coupled with the inadequate application of the necessity and proportionality test – has led to entities in Member States deploying some of the most harmful uses of biometric technologies for mass surveillance purposes – those by law enforcement – despite such uses contradicting the CFEU;3
  • This means that in reality, biometric mass surveillance is rife across the EU, and civil society and individuals are bearing the burden of trying to stop harmful and discriminatory uses of biometric data for mass surveillance purposes through a combination of investigations, campaigning, litigation and complaints to data protection authorities (DPAs). Instead, we are asking for a specific EU legal instrument to ensure that biometric mass surveillance uses are never deployed in the first place;
  • We therefore call for a ban, without exception, on the indiscriminate or arbitrarily-targeted use of biometric applications in public or publicly-accessible spaces (i.e. biometric mass surveillance), because the many risks and harms involved make such uses inherently unnecessary and disproportionate to the aim sought. Where uses do not inherently lead to the undue infringement of fundamental rights but still limit them, they must be strictly and demonstrably necessary and proportionate.
  2. Civil society is calling for red lines specifically on the dangerous uses and applications of artificial intelligence:
  • We welcome that the Commission has chosen to focus on how AI technologies are used. For example, the use of indiscriminate or arbitrarily-targeted facial recognition in public spaces is an application of AI that our research has shown unduly restricts fundamental rights;
  • Ensuring that the EU becomes a leader in a trustworthy approach to AI development and deployment (i.e. the use of AI that is in line with the protection of fundamental rights) will mean making decisions about which applications have a place in a society committed to fundamental rights, and which do not. While other countries may pay no heed to protecting fundamental rights in their pursuit of innovation at all costs, the EU can and must lead by example by ensuring that the innovative AI developed and deployed within its borders is always developed and deployed in accordance with fundamental rights;
  • Over 43,500 EU citizens have already officially added their voices to the civil society call for a ban on biometric mass surveillance practices through the new European Citizens’ Initiative run by the Reclaim Your Face campaign. We think it is vital that their views, expressed through this powerful democratic initiative, are taken seriously in the upcoming regulatory proposal. This is especially important given the ambition set out by the Commission in the White Paper on AI for a broad public debate.
  3. AI innovation in Europe can thrive globally by respecting fundamental rights:
  • With the GDPR, the EU proved that our advantage in a digitalised world can be in ensuring that innovation always respects people’s fundamental rights, and that the EU can set standards that protect people’s rights while keeping markets competitive;
  • We have learned that both within and beyond the EU, the unfettered development and deployment of biometric technologies has severe consequences for the human rights of marginalised people and groups, who are often disproportionately subject to discriminatory deployments of such technologies whilst also seriously underrepresented in EU decision-making;
  • China’s biometric mass surveillance of the Uighur population and the US’s disproportionate police surveillance of Black communities with inherently discriminatory facial recognition are not models to strive for – and US cities are increasingly taking moves to ban such uses as a result. For the EU to base its regulatory model on competition with these practices would compromise the very principles and values on which the EU is built.

We appreciate that the European Commission has so far agreed with the principle of a ban on biometric mass surveillance practices. To further protect fundamental rights in Europe, the signatories of this letter therefore call for:

  1. The legislative proposal on artificial intelligence to include an explicit ban on the indiscriminate or arbitrarily-targeted use of biometrics in public or publicly-accessible spaces which can lead to mass surveillance, on fundamental rights grounds;
  2. Legal restrictions or legislative red-lines on all uses which contravene fundamental rights;
  3. The explicit inclusion of marginalised and affected communities in the development of EU AI legislation and policy.

For truly human-centric legislation on AI, we reiterate that there must be some uses that the EU does not allow in a democratic society. We look forward to working with you to make a ban on harmful and rights-violating biometric mass surveillance in the EU a reality.


  1. ‘Ban Biometric Mass Surveillance’, EDRi, May 2020: https://edri.org/wp-content/uploads/2020/05/Paper-BanBiometric-Mass-Surveillance.pdf
  2.
  3. One example of this is in Denmark, where the state has introduced national legislation which has led to the “legal” biometric mass surveillance of people attending football matches.