Europe: Artificial Intelligence Act must protect free speech and privacy



In April 2021, the European Commission published a Proposal for a Regulation on a European approach for Artificial Intelligence (“the AI Act”). The AI Act contemplates a risk-based approach to regulating AI systems, where: 

(i) AI systems that cause unacceptable risks are prohibited from being placed on the market;

(ii) AI systems that cause high risks can be placed on the market subject to mandatory requirements and conformity assessments;

(iii) AI systems that pose limited risks are subject to transparency obligations.

ARTICLE 19 has welcomed the proposal’s commitment to putting in place a legal framework to regulate AI, but has previously stressed the need for it to go further in protecting human rights, including freedom of expression, privacy and the right to non-discrimination. In this follow-up piece, we delve into two particular aspects of the proposed regulation to explain why and how policymakers must go further in protecting these rights: first, its classification of various biometric technologies across the spectrum of risk; and second, its reliance on standardisation bodies to co-regulate high-risk AI systems.

On Biometrics

The Act assumes that biometric technologies can be categorised across a spectrum of risk. For instance:

  • “Real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement are classified as unacceptable risks (subject to exceptions, which we will discuss below). 
  • Biometric categorisation systems and emotion recognition technologies are classified as limited-risk. However, emotion recognition could also fall under the high-risk category, if used by law enforcement. 
  • Biometric identification systems, as well as AI systems used in migration, asylum and border control management, are classified as high-risk. 

However, there are two fundamental issues with taking this kind of approach to biometrics:

The definition of what constitutes an “unacceptable” risk is flawed. At present, the standard that must be met before the use of certain applications can be prohibited is unreasonably high. For instance, the Act states that if AI systems deploy techniques to materially distort a person’s behaviour in a manner that “causes or is likely to cause” an individual physical or psychological harm, they could be considered an unacceptable risk. This threshold will not prevent harmful or exploitative outcomes from the use of AI systems. Why are uses deemed unacceptable only if certain arbitrary standards of physical or psychological harm are met, even when some technologies are, by their very nature, fundamentally dangerous to democratic societies?

First, AI systems often impact entire communities in unpredictable, intangible ways that are difficult to prove or predict on an individual basis. For instance, emotion recognition systems are used to make inferences about a person’s inner emotional state and are built on years of discriminatory pseudoscience. When used in a policing context, such a system may unfairly target people belonging to communities that do not “conform” with the arbitrary norms embedded in it, and people may not even know they are being placed on watchlists or flagged for further scrutiny: these are generalisations made at the level of the community. By defining harm solely in terms of the individual, the Act wilfully ignores the fundamental nature of AI systems and the opacity and power dynamics built into their structure and use. Second, the societal and systemic implications of AI systems have begun to be documented. Recent research, including ARTICLE 19’s own investigation in 2021, overwhelmingly indicates that societal and systemic harms are inevitable outcomes of certain types of AI systems, such as emotion recognition, even if they are not always known or explicitly felt by individuals.

The problem isn’t only that the standard for prohibition is too exclusive; even where a system is classified as posing an “unacceptable” risk, there is a menagerie of exemptions that undermine what few restrictions are in place. For example, the Act classifies law enforcement use of “real-time” remote biometric identification systems in publicly accessible places as posing unacceptable risks, except when it is “strictly necessary” to use these systems for the “targeted search for specific potential victims of crime, including missing children”, for the “prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack”, or for the “detection, localisation, identification, or prosecution of a perpetrator or suspect of a criminal offence” under relevant and specified law.

Although these exceptions would appear to be limited to what is “strictly necessary”, they are so broad and vaguely worded that they could conceivably be applied at any point in time. Uses falling under these exceptions must be authorised in advance by a competent judicial or administrative authority, except in instances of a “duly justified situation of urgency”, another vague and extremely broad loophole.

Additionally, this emphasis on “real-time” remote biometric identification ignores the critical dangers posed by “post hoc” remote biometric identification technologies, which compare and identify people only after a “significant delay”, even though the harms arising from identifying people in public spaces would be significant and unacceptable in either case. In fact, in some cases the use of “post hoc” remote biometric identification can be even more dangerous, given that government authorities may combine data from the various sources at their disposal to produce even more intrusive inferences about individuals and communities, especially if identification is carried out to suppress free speech, freedom of assembly, freedom of information or the right to privacy. The Act also fails to acknowledge that the infrastructure enabling both types of identification is the same, which makes ensuring purpose limitation between “real-time” and “post hoc” remote biometric identification virtually impossible.

It is also concerning that the Act does not include a provision for adding new prohibitions to the list of unacceptable risks in the future, indicating an arbitrary and short-sighted approach to the regulation of AI systems. A democratic, inclusive and rigorous process for adding prohibitions is the bare minimum if the Act is to be truly future-proof.

The logic behind risk classifications isn’t clear. How and why certain technologies are classified at certain points across the spectrum of risk is perplexing. For instance, “real-time” remote biometric identification systems are classified as an “unacceptable” risk only when used by law-enforcement actors; this classification does not extend to the design, development, testing and use of the same technology by the private sector. This is a costly oversight, given increasing instances of public-private partnerships and the host of harms that could arise from private actors building and using this technology. While proponents of the Act have argued that purposes other than law enforcement would be sufficiently regulated under the General Data Protection Regulation (GDPR), this argument sorely overlooks the weaknesses of current law: the GDPR carves out wide exceptions to its safeguards against processing personal data, including for broadly defined purposes such as “substantial public interest”, and as a result would not block private sector development or use of such dangerous technologies.

There are cases where technologies are assigned risk levels that do not correspond with the actual threats they pose to fundamental rights. For instance, emotion recognition technologies are classified as only limited-risk, a curious choice given the dangerous uses and assumptions that these technologies embody. ARTICLE 19’s recent report on emotion recognition asserts that these technologies are fundamentally inconsistent with international human rights standards and have no place in democratic societies. Their premise, uses and nature are intrusive and rest on discriminatory assumptions and discredited pseudoscience. Our findings revealed that emotion recognition technologies strike at the heart of human dignity, and threaten freedom of expression, privacy, the freedom of assembly, the right to protest, the right to non-discrimination and the right against self-incrimination. Under the current classification, emotion recognition technologies need only comply with transparency requirements such as labelling, so that people are notified when they are exposed to this technology. It is unclear how this would neutralise the very real risks posed by these technologies, and this lack of clarity necessitates a reckoning with the status, assumptions and understanding of emotion recognition under the AI Act. Given the problematic ways in which emotion recognition technologies can be and have been used, including in consequential areas of people’s lives such as hiring, employment and education, they should be reclassified as AI systems that pose unacceptable risks, regardless of whether they are used by law enforcement or private actors.

There is also internal inconsistency. Emotion recognition systems fall into the limited-risk category under the Act; however, they may also fall into the high-risk category when used for law-enforcement purposes. This assumes a distinction between the levels of harm in the two cases, despite the fact that the outcome and impact on individuals can be equally grave in both.

Another strange choice is the classification of biometric categorisation systems as limited-risk. Defined in the Act as AI systems “for the purpose of assigning natural persons to specific categories such as sex, age, hair colour, eye colour, tattoos, ethnic origin, sexual or political orientation, on the basis of their biometric data”, these technologies promote a resurgence of erroneous physiognomic assumptions under the guise of “sophisticated” technology. As such, they fundamentally threaten human rights and have a disproportionate impact on historically marginalised groups, including ethnic and racial minorities and transgender communities. The fact that a transparency obligation is the sole safeguard mediating their use signals a gross misunderstanding of the implications of these systems. Categorisation systems have no redeeming qualities that can or should be optimised for use in democratic societies.

Taken together, these instances demonstrate faulty assumptions, and they point towards the urgent need for a reckoning with how systems are classified and what justification is offered, and, crucially, for meaningful consultation and review of that justification.

On Standardisation

At present, the AI Act requires providers of “high-risk” AI systems to complete a conformity assessment before those systems are offered on the European market, in order to determine whether certain essential requirements, as laid down in the Act, are met. Examples of these requirements include:

  • A risk-management system that shall be “established, implemented, documented and maintained” as part of a “continuous iterative process run throughout the entire lifecycle of a high-risk AI system”. 
  • Training, validation and testing datasets that fulfil “data quality criteria” as set out in the Act, in the case of high-risk AI systems that involve training data. 
  • Technical documentation of high-risk AI systems to be drawn up before products are placed on the market.
  • Record-keeping in the form of automatic logs to ensure traceability of the system’s functioning. 

There are also other obligations related to transparency, human oversight, accuracy, robustness and cybersecurity of systems. 
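
To make the record-keeping requirement above more concrete: the Act itself does not prescribe what automatic logs must look like in practice. The sketch below is a minimal, purely illustrative example of how a provider might implement append-only logging of individual automated decisions for traceability; every name, field and value in it (including the credit-scoring scenario) is hypothetical and is not drawn from the Act or from ARTICLE 19’s analysis.

```python
# Hypothetical sketch: append-only logging of automated decisions for traceability.
# The field names and the credit-scoring example are illustrative, not prescribed by the AI Act.
import json
import time
import uuid
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class InferenceLogRecord:
    """One traceability record per automated decision."""
    record_id: str
    timestamp_utc: float
    model_version: str            # which version of the system produced the output
    input_reference: str          # pointer to the input data, not the data itself
    output_summary: str           # what the system decided or scored
    human_reviewer: Optional[str] # who (if anyone) oversaw the decision


def log_inference(model_version: str, input_reference: str, output_summary: str,
                  human_reviewer: Optional[str] = None,
                  log_path: str = "inference_log.jsonl") -> InferenceLogRecord:
    """Append a timestamped record to a JSON Lines log file."""
    record = InferenceLogRecord(
        record_id=str(uuid.uuid4()),
        timestamp_utc=time.time(),
        model_version=model_version,
        input_reference=input_reference,
        output_summary=output_summary,
        human_reviewer=human_reviewer,
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record


if __name__ == "__main__":
    # Hypothetical usage: logging one decision of a credit-scoring system.
    log_inference("credit-scorer-v1.3", "application-2024-0042",
                  "score=612, threshold=600, outcome=approved",
                  human_reviewer="loan.officer@example.org")
```

Even in a toy sketch like this, difficult questions remain: what exactly is logged, how long records are retained and who can access them. The Act largely leaves such questions to conformity assessments and, potentially, to harmonised standards, which is precisely the route we examine next.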

However, providers of “high-risk” AI systems can take a different route: if European Standards Organisations (ESOs) such as CEN (the European Committee for Standardization) and CENELEC (the European Committee for Electrotechnical Standardization) adopt harmonised standards for AI systems following a mandate from the European Commission, providers can simply follow those standards and be presumed to comply with the essential requirements laid out in the Act.

This is a worrying route for the Act to prescribe, as standardisation per se does not guarantee meaningful safeguards or deliver on the proposal’s intention of creating a framework of trustworthy AI based on EU values and fundamental rights. There are three major reasons: 

Standards developing organisations (SDOs) are not inherently structured to meaningfully engage with the human rights considerations and implications of these technologies. A closer look at the reality and functioning of standardisation bodies indicates that they are largely composed of individuals and organisations from technical communities that have limited (if any) knowledge of human rights or understanding of the societal implications of technologies. ARTICLE 19 has produced pioneering work on embedding human rights considerations in technical standardisation bodies like ICANN (the Internet Corporation for Assigned Names and Numbers), IETF (Internet Engineering Task Force), ITU (International Telecommunication Union) and IEEE (Institute of Electrical and Electronics Engineers). Our experience over the last seven years has demonstrated that standardisation processes do not inherently lend themselves to careful consideration of rights-based issues. Instead, they are framed, shaped and influenced by technical considerations that are mediated by corporate lobbying and fraught power dynamics that lean in favour of incumbent power structures and business interests. 

This is particularly worrying from a rights-based perspective. First, human rights considerations, principles and standards often do not align with business interests that operate on the premise of maximal profitability. For instance, in ARTICLE 19’s recent work, we showed how the burgeoning industry for emotion recognition products is fundamentally a non-starter from a rights-based perspective; nevertheless, even in the presence of evidence of harm and a shaky scientific foundation, the market for emotion recognition seems to be growing. Second, even if there is a legitimate appetite in technical communities to respect human rights, those communities do not have the expertise to interpret the human rights framework and translate its standards and principles into technical considerations. This risks the creation of outputs that fall short of those standards and principles, whether by accident or design.

Standardisation is an exclusionary process. Standards development is a long and unpredictable process; a single standard can often take two years or more to finalise. While civil society advocates can bridge the knowledge and expertise gap on human rights considerations in SDOs, the long-term commitment required to engage in private standardisation is expensive, demanding dedicated staff time, travel costs for physical meetings, and membership fees. These resources are not easily available to civil society organisations; neither is meaningful access to these processes, which may be directly or indirectly hostile to civil society participation.

It is easy to overlook these issues if you think of standardisation as just one of several routes to regulatory compliance. However, as scholars Michael Veale and Frederik Zuiderveen Borgesius have argued, it is often cheaper and easier for producers to follow harmonised technical standards than to interpret stricter legal or regulatory requirements. Following technical standards is therefore not one of two equal choices, but rather the “safer” option that most technology producers will opt for.

Standardisation is not a silver bullet. The European Parliament does not have the ability to veto harmonised standards mandated by the European Commission, which means that mandated standards can end up looking significantly different from the essential requirements originally set under the Act. If developed under a technocratic lens, as is often the case in SDOs, these standards also run the risk of being divorced from their context of use. This is dangerous, given that AI systems and their impact on societies are deeply subjective and contextual. The fundamental risks that AI systems pose to human rights can only be thoughtfully addressed and mitigated if they are studied, anticipated and analysed within the unique context, institutional reality, power structures and societal conditions surrounding their use. For example, a standard for credit-scoring AI systems must take into account societal conditions including historical discrimination, access to credit, income inequality and the nature of work, all of which determine how the AI system should be built and calibrated. A credit-scoring algorithm in Denmark, for instance, would necessarily have to be built differently from one in Bulgaria.

By outsourcing to SDOs the task of ensuring that developers of high-risk AI systems fulfil the essential requirements under the Act, the Act cedes a responsibility of governments to private organisations. While the Act can legitimately impose safeguards and standards, outsourcing of this kind calls into question the legitimacy of the specific outcomes produced by private standardisation bodies, particularly as they relate to AI systems in the public sector.