Europe: ARTICLE 19 welcomes the Council of Europe recommendation on the human rights impacts of algorithmic systems

ARTICLE 19 welcomes the recommendation on the human rights impacts of algorithmic systems, issued by the Committee of Ministers of the Council of Europe on 8 April 2020. ARTICLE 19 was a member of the expert committee (MSI-AUT) that prepared the initial draft of the recommendation.

The recommendation provides a blueprint for how human rights should be respected in the context of algorithmic decision-making. Algorithmic processes have become ubiquitous in everyday life. They promise greater efficiency and problem-solving across many different areas, including communication, transportation, education and health. At the same time, they pose serious challenges to human rights, including the rights to freedom of expression, privacy and data protection, freedom of assembly, freedom of religion, freedom from discrimination, the right to a fair trial, and economic and social rights, among others. This is especially so because they operate at a scale and speed that make them both highly complex and often inscrutable.

For this reason, the recommendation outlines several key steps that States and the private sector should follow in order to ensure that the design, development and deployment of such technologies comply with human rights standards.

  • Ongoing assessment and review of human rights impacts: States and private actors should regularly assess, monitor and review the human rights impact of algorithmic systems, from their inception through their deployment and interaction with other technologies. In particular, they should assess the risk of adverse effects on human rights arising from the provenance and quality of the data put into or extracted from algorithmic systems. They should also consider the potential for inappropriate and de-contextualised use of datasets, as well as other risks such as the de-anonymisation of data and the generation of new, inferred and potentially sensitive data.
  • Risk management: Once such risks are identified, States and private actors should take appropriate action to prevent and effectively minimise the adverse effects. In practice, this means that, at a minimum, States should ensure that the design, development and ongoing deployment of algorithmic systems incorporate safety, privacy, data protection and security safeguards by design. If a human rights impact assessment identifies significant risks that cannot be mitigated, the algorithmic system should not be implemented or otherwise used by any public authority.
  • Accountability: States should establish independent and adequately resourced institutions to set and oversee the implementation of general or sector-specific benchmarks and safeguards, ensuring that the design, development and ongoing deployment of algorithmic systems are compatible with human rights. Both States and the private sector should also create effective redress measures for individuals and groups negatively affected by algorithm-based decisions.
  • Transparency: Actors that use algorithmic processes should be able to provide easy and accessible explanations of the data an algorithm uses and of the procedures and criteria by which it makes its decisions. Data collection methods should be made accessible so that potential biases embedded in an algorithm’s design can be identified. More generally, appropriate levels of transparency should be maintained throughout the procurement, design, development and use of algorithmic systems, notwithstanding claims of intellectual property or trade secrets.
  • Public awareness and stakeholder consultation: States and the private sector should engage in regular consultation with the public and all relevant stakeholders to discuss, debate and listen to concerns about the human rights impacts of algorithmic systems. In particular, States should foster general public awareness of the capacity, power and consequential impacts of algorithmic systems, including their potential use to manipulate, exploit, deceive or distribute resources, with a view to enabling all individuals and groups to be aware of their rights, to know how to put them into practice, and to use digital technologies for their own benefit.

ARTICLE 19 encourages States, the private sector and all others working in the field of algorithmic decision-making to follow the recommendations laid out in this new standard-setting instrument issued by the Council of Europe.