ARTICLE 19 welcomes the strong report of the Working Group, which rightly affirms the need for robust human rights impact assessments and due diligence throughout the entire life cycle of artificial intelligence systems and, importantly, calls for “red lines” on technologies that are fundamentally incompatible with international human rights standards.
We have long warned of a potential race to the bottom in the competition to embrace artificial intelligence, in which the promotion and protection of human rights are side-lined in the name of innovation. We have documented the pervasive threat that artificial intelligence-enabled surveillance, including biometric technologies such as facial recognition, poses to the right to freedom of expression and to media freedom. Such surveillance creates a climate of fear and self-censorship among journalists and media workers: awareness of potential monitoring deters them from covering sensitive topics, pursuing investigative reporting, or communicating with vulnerable sources. This chilling effect undermines the diversity, independence, and long-term viability of the media sector.
We call on Member States to fully implement the recommendations of the report, including prohibiting the procurement and deployment of artificial intelligence systems that cannot comply with international human rights standards. We also call on Member States to embed a multi-stakeholder, human rights-based approach in the functioning of both the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance, which are expected to play a pivotal role in setting out global artificial intelligence governance frameworks.
We call on businesses to carry out, and to require, robust human rights impact assessments and due diligence processes throughout the entire life cycle of artificial intelligence systems, with specific measures to ensure the protection of journalists, human rights defenders, and other civil society actors.