EU: Proposed AI guidelines fail to protect human rights

ARTICLE 19 has submitted comments on the draft Artificial Intelligence Guidelines (the Guidelines) prepared by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). In our opinion, the draft AI Guidelines contain several major flaws, resulting in the proposal of an ambiguous and inadequate framework on AI, which undermines human rights safeguards already in place within the EU and international system.

ARTICLE 19 is critical of the disproportionate role attributed to ethics in the Guidelines, which contributes little towards establishing protections and safeguards for individuals against potential harm stemming from AI. Consistent references to ethics and the goodwill of companies that engage in the development of AI, rather than to relevant legal rules, undermine the proposed system of protection. In fact, the language of the Guidelines falls short of clarifying the legal duties of AI developers and the key rights, obligations and limitations that companies that design, develop and deploy AI must take into account in their business conduct. We believe that this uncertainty allows companies to avoid accountability for their actions.

We are also concerned by the general lack of attention in the Guidelines to dual-use AI technologies, which can pose immense challenges. In the absence of guidance, companies are unaware of their duties and the restrictions regarding these technologies, and States do not have sufficient safeguards in place to protect their citizens against misuse.

Furthermore, ARTICLE 19 is sceptical of the notion of “Trustworthy AI”, which underpins the Guidelines. While trustworthiness may be pertinent with regard to the institutions that hold AI accountable, the focus of the proposed Guidelines should be the establishment of mechanisms ensuring accountability of AI, as opposed to relying on AI’s “trustworthiness”.

ARTICLE 19 notes that the Guidelines contain repeated references to “data subjects”, and recalls that these “data subjects” are individuals whose rights are guaranteed under the EU Charter. ARTICLE 19 therefore stresses that the Guidelines should call for adequate safeguards for the fundamental rights of these individuals. In our view, a human rights-based approach, with particular emphasis on, and explicit reference to, freedom of expression among other rights, would better establish an effective mechanism of protection.

In addition, we believe that the Guidelines should make it mandatory for standard-setting bodies to conduct human rights impact assessments. The reference in the Guidelines to “agreed standards for design, manufacturing and business practices” does not constitute sufficient protection for individuals’ fundamental rights. Moreover, any interference, through the use of AI identification technologies, with the rights to privacy, anonymity and freedom of expression must satisfy the three-part test of legality, proportionality and necessity.

Finally, ARTICLE 19 is critical of the lack of reference to due process in the “Regulation” section of the Guidelines. To ensure legal certainty, there is also a need to expand upon the types of remedies AI developers may consider designing, as at present this too remains open-ended.

We hope that the Commission and the members of the High-Level Expert Group will consider this contribution carefully, address our concerns, and improve protection for freedom of expression and other rights under the Guidelines.

Read the submission