IEEE report recognizes human rights as first principle for ethically aligned AI

Artificial Intelligence (AI) is a crucial consideration in the debate over freedom of expression and access to information online. In a recent report, ARTICLE 19 and Privacy International assess the impact of AI technologies on freedom of expression and privacy. Many major digital companies – search engines and social media platforms alike – rely on AI systems for content moderation and information retrieval, which poses real risks for freedom of expression. To address these risks, ARTICLE 19 works with a broad range of partners, governments, and companies to ensure the protection of international human rights standards, including at the Institute of Electrical and Electronics Engineers (IEEE).

The IEEE sets the technical standards that underpin modern telecommunications and ICT hardware. In early 2016, it convened a multistakeholder initiative to develop ethical guidelines for AI. One of the first of its kind, the initiative identifies needs and builds consensus for standards, certifications, and codes of conduct regarding the ethical implementation of AI. It was established with three specific goals: to draft a document discussing how AI intersects with ethical concerns, to educate technologists about the societal impact of the technology they build, and to make recommendations for the development of technical standards based on the ethical concerns identified.

Since 2016, ARTICLE 19 has actively participated in developing this seminal work, which included debating some of the most salient issues related to human rights and AI. Today, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems released its Ethically Aligned Design (EAD) recommendations. Entitled “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems,” the document contains 11 chapters covering a wide array of topics from personal data to sustainable development, as well as a number of legal and policy recommendations. Crucially, the EAD document names human rights as the first general principle for AI design, manufacture, and use.

ARTICLE 19’s entry into AI and human rights began with our participation in the IEEE Global Initiative, which fostered the development of our current extensive AI and human rights program. Over the past three years, we participated in multiple AI consultations, including expert consultations on AI policy with the UK, EU, and Canadian governments, as well as the Council of Europe. We also provided expert opinions to UN Special Rapporteur David Kaye and to UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology. Additionally, we became a member of the Partnership on AI (PAI) – a technology consortium that brings together tech companies and civil society actors to formulate best practices for AI. Across these various initiatives and consultations, a similar issue arises: there is a need for more careful and deliberate consideration of the role of human rights.

Human rights have an important role to play in the discussion on AI. In comparison to ethical frameworks and other non-binding standards, the human rights framework is uniquely focused on the perspective of the individual and their rights. It also provides concrete guidelines regarding responsibilities and duties of care in cases of adverse impact, as well as protection and redress for victims of human rights violations resulting from AI systems.

The IEEE EAD recommendations are meant to provide guidance for standards, certification, regulation, or legislation for the design, manufacture, and use of autonomous and intelligent systems (A/IS), and to serve as a key reference for the work of policymakers, industry members, technologists, and educators. These recommendations are important because they go beyond functional and technical goals by focusing on the importance of human rights. This makes the document unique in the broader discussion about ethical AI. Human rights are rarely recognized as a first principle for the development of AI-related systems. Most current industry frameworks focus on amorphous principles like “ethical, fair, accountable, and transparent AI”, preferring broad voluntary approaches over enforceable standards. While the IEEE Initiative also considers these principles, its focus on human rights as a guiding principle for technology development, paired with the IEEE’s global standing as a standards body, is significant.

The EAD document explicitly recognizes the responsibility of non-state actors to respect human rights in AI technology. Its process was open and followed the multistakeholder model, allowing NGOs to participate on an equal footing with corporate, government, and academic participants – setting an important example for other technical bodies. More standards bodies and private sector companies should follow the Initiative’s lead on open collaboration. Yet there were also some challenges along the way.

Four major challenges stand out – specific to the IEEE Initiative process, but representative of the broader debate about AI. First, the discussion of responsibility is heavily focused on what individual engineers should do; less attention is paid to equally relevant structural problems, like the freedom of expression concerns arising from the data-driven business models underpinning many digital companies. Second, many private actors actively avoid taking positions on the most contentious discussions, such as the military application of AI: an early version of the EAD document included a chapter on the ethics of autonomous weapon systems, but this chapter has been left out of the current version. Third, industry-led ethical frameworks and principles can be internally inconsistent or difficult to combine. And fourth, processes set up by industry can lack legitimacy: the Initiative did not create publicly accessible records of its work, progress, and debates – making it difficult for new entrants to join and limiting the transparency and accountability of the work. Human rights advocates should continue to monitor the IEEE and broader industry work in this area, especially its work on human rights and the growing trend of industry-led “ethics certification schemes” for AI companies.

This IEEE Initiative is indicative of both the opportunities and the challenges faced by human rights advocates working on AI. Our work on human rights and AI at the IEEE highlights the promises and perils of working in this emerging area. Much of our work involves convincing industry and governments alike that human rights are important to incorporate, and ensuring that rights are operationalised in line with international human rights law. These struggles are endemic to our work with private actors, on AI as well as other topics, and remain a continued point of concern. There is a growing tendency to silo stakeholders along sector lines and to frame the debate as ethics versus human rights. Our work indicates that such framings are unhelpful. To ensure that AI systems respect human rights, stakeholders need to collaborate and find common ground based on transparent multistakeholder procedures.

The inclusion of human rights as a first general principle in the IEEE Global Initiative’s work is a landmark development. It will support AI advocacy efforts rooted in a human rights approach, as it provides a template for how to consider the technical and functional aspects of AI systems in light of human rights. It is an important step forward – spurred on by NGOs like ARTICLE 19 and others – and will prove to be an invaluable experience as we advocate with a variety of corporate and academic stakeholders.