EU: Better human rights protections needed in HLEG Guidelines on AI

In April 2019, the European Commission’s High Level Expert Group (HLEG) on Artificial Intelligence (AI) issued the final version of its Ethics Guidelines for Trustworthy AI (Guidelines).

ARTICLE 19 submitted comments on the first draft in February 2019.

We welcome efforts made by the HLEG to address concerning elements of previous drafts, and note the positive changes introduced in the final text. However, we believe that stronger references to international human rights law and standards, as well as to the existing framework on business and human rights, would have reinforced the effectiveness of the Guidelines and provided the HLEG with an opportunity to create greater clarity for stakeholders.

Law and ethics should complement each other 

Previous drafts focused on ethics rather than on human rights and the related existing legal frameworks. The current version of the Guidelines presents two major improvements on this point.

First, it clarifies that “Trustworthy AI” has three components: it should be lawful, it should be ethical, and it should be robust. It also explains that the Guidelines do not deal with the first component but with the other two, which should complement, and not substitute for, compliance with existing or future rules at the national, regional or international level. ARTICLE 19 supports this approach as a means to ensure the proper development of “Trustworthy AI grounded in fundamental rights” and, in recent work, has explained how a rights-based approach can constructively inform ethical frameworks.

Second, the nebulous concept of “ethical purpose” has been removed, eliminating a source of uncertainty that could have undermined existing legal rights and duties applicable in the field.

Accountability and remedies in case of human rights violations

These positive changes provide momentum for the HLEG’s work. However, ARTICLE 19 notes that this effort would benefit from a stronger grounding in international human rights standards. Specifically, since ethical principles are not enforceable protections, ARTICLE 19 reiterates its call for more concrete accountability requirements. Section 1.7, dedicated to accountability, contains only vague prescriptions concerning auditability, the minimisation and reporting of negative impacts, and trade-off assessments. In addition, ARTICLE 19 believes that the reporting of negative impacts is not a sufficient safeguard unless it is followed by the adoption of concrete measures to minimise or avoid such impacts.

Moreover, as recently endorsed by the Commissioner for Human Rights, among others, the impact assessment performed on AI systems should be based on human rights and have specific features. Section 1.1 of the Guidelines follows a similar approach, and additionally suggests putting in place mechanisms to receive external feedback regarding AI systems that potentially infringe fundamental rights. However, it is not clear who should provide this feedback, i.e. whether consumers, civil society, researchers or others should provide input. The consequences of this feedback, and the extent to which it is binding, are also unspecified. Given the extensive information asymmetry surrounding AI systems, and the sector more generally, clearer recommendations are key to the protection of human rights.

The Guidelines also fall short on remedies. They recommend redress mechanisms when unjust adverse impact occurs, but do not specify how such mechanisms should work. First, it is debatable whether redress should be limited to cases of “unjust” adverse impact only; second, there is no mention of due process safeguards, which should be an essential guarantee for all remedies.

Among the non-technical methods to ensure Trustworthy AI, the Guidelines make reference to codes of conduct. ARTICLE 19 believes that the pursuit of best practices among stakeholders should be encouraged, with a view to establishing standards in the sector. However, the current approach relies solely on individual actors’ initiatives, which does not go far enough.

Another suggested non-technical method concerns standardisation. ARTICLE 19 supports this approach, and has stressed the key role that standard-setting organisations can play in ensuring that the development of AI is grounded in human rights. By way of example, ARTICLE 19 has welcomed the inclusion of human rights as the first general principle of the IEEE Global Initiative on the Ethics of Autonomous and Intelligent Systems, while at the same time raising important challenges that are yet to be addressed.

A third non-technical method refers to stakeholder participation and social dialogue. ARTICLE 19 regrets that the Guidelines do not mention civil society among the potential stakeholders to be involved, notwithstanding the key role it could play in developing a human-centric approach.

Although the Guidelines emphasise that AI systems should be “fair in their impact on people’s lives”, ARTICLE 19 notes that the document lacks any reference to the dual use of AI technologies. Nor does it engage with the challenge of building appropriate and robust mechanisms to deal with such uses.

The way forward

While references are made to equality, non-discrimination, the right to vote and other rights, the Guidelines fail to mention a range of other human rights affected, or potentially affected, by AI systems, including inter alia the right to freedom of expression. As ARTICLE 19 has previously stated, all human rights that may be strongly impacted by AI systems and technologies should be explicitly referred to.

Finally, under section 2.2 the HLEG indicates that, in a second deliverable consisting of AI Policy and Investment Recommendations, it will consider the need to revise or adapt existing regulation and/or introduce new rules with the aim of guaranteeing Trustworthy AI in the EU.

ARTICLE 19 looks forward to this process and reiterates its commitment to contribute, with a view to strengthening a human rights-based approach to AI in the EU and elsewhere.