EU: AI Act must prioritise fundamental rights


Photo: Steve Jurvetson / Shutterstock

Ahead of the AI Act vote in the European Parliament, civil society calls on Members of the European Parliament (MEPs) to ensure the EU Artificial Intelligence (AI) Act prioritises fundamental rights and protects people affected by artificial intelligence systems.

Increasingly, AI systems are deployed to monitor and identify us in public spaces, predict our likelihood of criminality, redirect policing and immigration control to already over-surveilled areas, facilitate violations of the right to claim asylum and of the presumption of innocence, predict our emotions and categorise us on the basis of discriminatory inferences, and make crucial decisions that determine our access to welfare, education and employment.

Without proper regulation, AI systems will exacerbate existing societal harms: mass surveillance, structural discrimination, the centralised power of large technology companies, unaccountable public decision-making, and environmental extraction.

The EU’s AI Act can, and should, address these issues by ensuring that AI is developed and used within a framework of accountability, transparency and appropriate, fundamental rights-based limitations.

The signatories call for the following to be ensured in the AI Act vote:

  1. Empower people affected by AI systems:
    This includes ensuring horizontal and mainstreamed accessibility requirements for all AI systems, a right to lodge complaints when people’s rights are violated by an AI system, a right to representation, and rights to effective remedies.
  2. Ensure accountability and transparency for the use of AI:
    This can be achieved by including an obligation on users of high-risk AI systems to conduct and publish a fundamental rights impact assessment before deployment, requiring all users of high-risk AI systems and users of all AI systems in the public sphere to register their use before deployment, ensuring no loopholes for high-risk AI system providers to circumvent legal scrutiny, and more.
  3. Prohibit AI systems that pose an unacceptable risk to fundamental rights:
    There must be a full ban on all types of remote biometric identification; predictive policing systems in law enforcement; emotion recognition systems; biometric categorisation systems that use sensitive attributes or are used in public spaces; and individual risk assessments and predictive analytic systems in migration contexts when used to curtail and prevent migration.

The signatories call on MEPs to vote to include these protections in the AI Act and to ensure the Regulation is a vehicle for the promotion of fundamental rights and social justice.

Read the full statement


Drafted by:

European Digital Rights
Access Now
Algorithm Watch
Amnesty International
ARTICLE 19
Bits of Freedom
Electronic Frontier Norway (EFN)
European Center for Not-for-Profit Law (ECNL)
European Disability Forum
Fair Trials
Homo Digitalis
Irish Council for Civil Liberties (ICCL)
Panoptykon Foundation
Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM)