Emotion Recognition Technology Report

Emotion recognition technology is pseudoscientific and carries enormous potential for harm. 

ARTICLE 19’s latest report, Emotional Entanglement, provides evidence and analysis of the growing market for emotion recognition technology in China and its detrimental impact on human rights.

The report demonstrates the need for strategic and well-informed advocacy against the design, development, sale, and use of emotion recognition technologies.

Above all, we emphasise that the timing of such advocacy – before these technologies become widespread – is crucial for the effective promotion and protection of people’s rights, including their freedom of expression and opinion.

The report provides a factual foundation that researchers, civil society organisations, journalists, and policymakers can build upon to investigate people’s experience of this technology and illustrate its harms.

What is emotion recognition technology?

Unlike facial recognition or other biometric applications that focus on identifying people, emotion recognition technology purports to infer a person’s inner emotional state.

The technology is becoming integrated into critical aspects of everyday life: law enforcement authorities use it to label people as ‘suspicious’; schools use it to monitor students’ attentiveness in class; and private companies use it to determine people’s access to credit.

Why should we be concerned about it?

Firstly, by design, emotion recognition technology seeks to impose control over people. Given the discredited scientific foundations on which it is built, this intrusive and inaccurate technology encourages mass surveillance as an end in and of itself.

Secondly, the technology is being developed without public consultation and with no regard for the immense potential for harm it carries. As a result, it is likely to disproportionately affect minorities and those who are already deprived of their rights.

Most importantly, by analysing and classifying human beings into arbitrary categories that touch upon the most personal aspects of their being, emotion recognition technology could restrict access to services and opportunities, deprive people of their right to privacy, and threaten their freedom to express and form opinions.

Why emotion recognition technology is deeply problematic

The use of emotion recognition to identify, surveil, track, and classify individuals across a variety of sectors is doubly problematic: not only are its applications discriminatory, but it fundamentally does not work.

Toward an approach for responsible technology

Shouldn’t genuine innovation actually solve problems as opposed to creating them?

Opening Pandora’s box

The push to develop tools and products for their own sake, rather than placing technology at the service of human beings or designing solutions to existing problems, is fundamentally flawed.

If we’re not people-centred, we fail to reflect sufficiently on the risks these technologies pose to people or the harm they can cause. We have no rules or laws in place to govern their use, let alone the minimum safety standards common to other industries. This leaves us ill-equipped to deal with the fallout when it happens.

We need proper consultation and risk assessment in the development of technology. In other words, we need to look beyond what the tech is (or what it claims to do) and consider how it will be used (or abused), who will have a vested interest in the data, where that data will come from, and who it’s likely to hurt most.

In short, we need an approach to responsible tech – towards technology that does no harm.

Recommendations

This report has covered vast terrain: from the legacy and efficacy of emotion recognition systems to an analysis of the Chinese market for these technologies. We direct our recommendations as follows.

Ban the design, development, sale, and use of emotion recognition technologies with immediate effect. 