Ethical approaches to artificial intelligence and autonomous systems at IEEE SEAS 2017

Artificial intelligence (AI) and autonomous systems (AS) no longer belong to the realm of science fiction. Machines are increasingly capable of approximating human intelligence, and this development has the potential to positively impact human rights. However, depending on how these technologies are designed and implemented, they can also lead to human rights violations, including violations of the right to freedom of expression.

The study of AI/AS is relatively modern but by no means new – the term “artificial intelligence” was coined in 1956 [1]. The present momentum around AI/AS is therefore a curious one: the gradual development of AI/AS principles, applications, and technologies over more than half a century now stands in contrast with mainstream hype around their potential. AI/AS are increasingly embedded in many aspects of everyday life – from the information we consume and engage with, to critical domains such as healthcare, credit, and governance. This makes it imperative to question the social, legal, ethical, and economic implications of such adoption, as well as the relationship between these systems and the societies in which they operate.

Symposium on Ethics of Autonomous Systems (SEAS)

The Institute of Electrical and Electronics Engineers (IEEE) Symposium on Ethics of Autonomous Systems (SEAS) is one of a variety of forums focused on discussions about ethics, human rights, and AI/AS. Held on 5 and 6 June 2017 in Austin, Texas, the meeting enabled the members of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (the Initiative) to work on the next iteration of their guiding document [2] on the role of ethics in the design and development of AI/AS.

ARTICLE 19 believes that the IEEE Global Initiative’s work on Ethically Aligned Design (EAD) for AI/AS is both crucial and timely. We are taking an active part in the process and are keen to support the development of guidelines on best practices for ethical approaches to AI/AS. We particularly welcome the Initiative’s goal of enhancing human rights protection and ethically motivated innovation, as it goes well beyond merely reaching functional and technical goals. However, we urge the Initiative to ensure that human rights considerations are at the core of the document’s development before moving forward with this work.

A human rights framework is a must

In discussions during and prior to the meeting, some participants argued for including the term “human wellbeing” as a General Principle of the document, instead of the term “human rights”.

We believe that this is a mistake, and that human rights must remain a Guiding Principle of the IEEE’s Initiative. Any move away from human rights terminology towards “human wellbeing” is concerning and problematic. While it is important to account for human emotions and fulfillment when critiquing AI/AS, there is no shared understanding of what the term “wellbeing” means: the concept can mean vastly different things to different people and lacks a framework against which to measure. Human rights law, on the other hand, provides an established and comprehensive framework with concrete grounding in international law. Human rights are universally recognized and set out in globally adopted international human rights treaties and other standards. In addition, as became clear at the meeting, there can be a tendency to conflate human “wellbeing” with human rights, which is damaging to the protection of human rights and counterproductive. If the Initiative decides to keep “wellbeing” as a principle, it should make clear that “wellbeing” indicators are not to be conflated with, or to replace, measurements of the protection of human rights.

Looking ahead

While the Initiative is in its early stages, we believe more can be done to enhance its overall transparency, to ensure that the development of the guiding document is carried out in a multi-stakeholder fashion, and to reflect the importance of human rights considerations in this area. To this end, more can and should be done to improve the representation of diverse views and perspectives in these discussions. ARTICLE 19 remains keen to further support and engage with the work of the Initiative, and we hope that these issues will be addressed in future deliberations.

References:

[1] Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Englewood Cliffs, NJ: Prentice Hall, 1995, p. 27.

[2] “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”. http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf