Coronavirus apps and human rights: what you need to know

The public debate around how to protect society and individuals from coronavirus has become increasingly focused on digital tools, especially the development and use of various coronavirus apps.

In this briefing, we answer some of the key questions: What are these apps? Can they really help to fight the pandemic? And what impact will they have on the right to freedom of expression and on human rights overall? Our main concern is that the widespread use of these apps could lead to a level of digital surveillance that people would not accept under ordinary circumstances. Moreover, using digital technology as the main instrument to tackle the pandemic poses more risks to human rights than it offers solutions.

What do these apps do? 

There is no single type of coronavirus app. At present, we are aware of four types of apps being developed, deployed or proposed (importantly, this is a fast-moving situation, and specific apps mentioned in this document may change):

  • Quarantine apps are supposed to help monitor people who are in quarantine after being diagnosed with coronavirus or being in contact with someone who has tested positive. For instance, people in quarantine may be given the choice either to receive unannounced visits from the police or to download such an app and use it daily, taking selfies and uploading them to the app as proof that they are staying at home. Developers have already announced that future contact-tracing apps will integrate additional functionality, such as recording temperature scans or test results.
  • Contact tracing apps are designed to use mobile technology to trace people thought to have coronavirus, and those they may have been in contact with during a certain period of time. One of the ways they can do this is through Bluetooth signals (see 'How do the apps work?', below). These apps are being considered as a way of monitoring and controlling the spread of the virus.
  • Symptom-reporting apps enable people to report their health condition and any coronavirus symptoms (or the absence of symptoms), or to report temperature readings, so that the progression of the disease can be tracked in real time. The aim is generally to help researchers determine how fast the virus is spreading in a particular area, identify high-risk areas and who is most at risk, or better understand symptoms linked to underlying health conditions.
  • Combined-purpose apps combine two or more of the above functions. Many of them are developed for use in private workplaces. For instance, one app prompts workers to answer daily questions, asks them to note who they come into contact with outside the workplace, and tracks worker interactions through mobile location technology. Another requires workers to wear a device that tracks their movements while on the employer's premises; if a worker shows symptoms or becomes sick, the employer can identify which areas of the business need to be sanitised and which workers may have been exposed. Another app asks employees to notify their employer of their health status.

Who is creating these apps?

The majority of these apps are developed by private companies or start-ups, either with or without the support of public actors such as government bodies.

Some of them are, or will be, part of a series of measures that States are putting in place to prepare to lift lockdowns while keeping the spread of coronavirus under control. In such cases, there will likely be a public-private partnership for the development and distribution of the app, which will then be placed under public governance. In other words, the app will be deployed by public institutions for public purposes. Users' data will then be collected, processed and stored by public actors, or by private actors under public control (if the State does not have the technical resources to do this itself).

It is important that States are transparent about these public-private contracts, which should contain adequate guarantees of accountability for all parties involved. In particular, as part of the conditions for the public procurement procedure, States should set clear requirements in terms of the human rights compliance track record of companies, and they should ask for adequate human rights impact assessments (HRIAs) of the services and systems the private actor intends to develop and deploy.

These apps can also be used by private actors without any intervention by, or role for, the State.

Should apps be voluntary or mandatory?

In some countries, States have strongly pushed for the adoption of these apps, or made them mandatory. In China, for example, the AliPay HealthCode app determines whether people can go out and mingle, or whether they are identified as a coronavirus risk. This app is part of the country's broader social credit system, and of the everyday data gathering done by the Chinese government. Other countries, however, have declared that they will leave people free to decide whether or not to use an app.

At a minimum, the use of these apps should be genuinely voluntary. But a truly voluntary approach means more than not making the app mandatory. Under a truly voluntary system, the State should not offer benefits to people in exchange for using the app. Equally, private companies should not require people to use an app in order to access premises or receive certain benefits.

However, if a government says that, for example, people can leave their homes, or access public premises or public transport, only if they use an app, then in practice they are not free to choose. People who refuse to use the app should not suffer any negative consequences. Moreover, people who decide to use the app should be able to deactivate it temporarily, or remove it permanently, whenever they want.

Can these apps really help to fight the pandemic and protect us?

These apps are largely being considered or deployed as part of efforts to protect public health. While efforts to track and prevent the spread of coronavirus are extremely important, a number of issues mean that these apps are unlikely to be the panacea they are often claimed to be.

First, the effectiveness of these apps largely depends on the vast majority of a population using them. Studies suggest that to be epidemiologically significant and useful, contact tracing apps must be used by at least 60% of the population; and since both parties to an encounter must have the app installed, even 60% adoption would, in principle, capture only around a third of encounters (0.6 × 0.6 ≈ 36%). This level of uptake will be difficult to achieve on a voluntary basis, so the apps' effectiveness is questionable from the outset.

Second, if any contact-tracing app is to play a useful role in the fight against the pandemic, it must command the trust and confidence of both the public – so that people install and use it – and the medical community – so that it gathers good-quality data.

Technology penetration is another challenge. Using the apps requires people to have a certain level of digital skills and an appropriate device and operating system. This may not be the case where digital literacy is low in general, or for certain groups, such as older people. Data about the pandemic tells us that the highest numbers of deaths are in the poorest communities, which are usually also the most digitally excluded. This means the apps will be least effective at tracking or preventing the spread of the virus, or gathering health information, in the communities most affected. There is little doubt that such an outcome will entrench inequalities even further.

How do the apps work? 

There are numerous proposals for the technical implementation of these apps. These range from dystopian systems of full surveillance to targeted and anonymous methods of alerting potentially infected people without identifying specific individuals.

For contact tracing apps, location tracking – which uses GPS and cell-site information – is poorly suited to the task, because it does not reliably reveal the 'relevant contact': the close physical interaction that is likely to spread the disease.

Instead, proximity tracing – which measures Bluetooth signal strength – does seem able to determine whether two devices were close enough, for a sufficient period of time, for their users to transmit the virus.
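
To make this concrete, the sketch below shows how an app might decide that an encounter counts as a 'relevant contact'. It is a minimal illustration in Python: the signal-strength threshold and the 15-minute duration are assumptions chosen for the example, not values taken from any specific protocol.

```python
from dataclasses import dataclass

# Illustrative thresholds -- real apps calibrate these per device model.
RSSI_THRESHOLD_DBM = -65   # signals stronger than this suggest close range
MIN_DURATION_MINUTES = 15  # assumed minimum length of a 'relevant contact'

@dataclass
class Sighting:
    """One Bluetooth sighting of a single other device's identifier."""
    timestamp_min: int  # minute in which the sighting was recorded
    rssi_dbm: int       # received signal strength, a rough proxy for distance

def is_relevant_contact(sightings: list[Sighting]) -> bool:
    """Was the same identifier seen at close range for long enough?"""
    close_minutes = {s.timestamp_min for s in sightings
                     if s.rssi_dbm > RSSI_THRESHOLD_DBM}
    return len(close_minutes) >= MIN_DURATION_MINUTES
```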

There are currently numerous proposals for Bluetooth-based proximity tracing apps, all of which seem to begin with a similar approach: the app broadcasts a unique identifier over Bluetooth that other nearby devices can detect. To protect privacy, many proposals, including the recent Apple and Google APIs, rotate each phone's identifier frequently to limit the risk of third-party tracking.
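
As a rough illustration of how identifier rotation works – a simplified sketch, not the actual Apple/Google scheme, which derives identifiers cryptographically from a daily key – a device could broadcast a fresh random identifier every few minutes while keeping its own history locally:

```python
import secrets
import time

ROTATION_SECONDS = 15 * 60  # assumed rotation interval for this sketch

class IdentifierBeacon:
    """Broadcasts short-lived random identifiers; keeps its history on-device."""

    def __init__(self) -> None:
        self.own_identifiers: list[bytes] = []  # stays local, never uploaded
        self._current = b""
        self._rotated_at = 0.0

    def current_identifier(self) -> bytes:
        """Return the identifier to broadcast, rotating it once it expires."""
        now = time.time()
        if not self._current or now - self._rotated_at >= ROTATION_SECONDS:
            self._current = secrets.token_bytes(16)  # unlinkable to older ones
            self._rotated_at = now
            self.own_identifiers.append(self._current)
        return self._current
```

Because each identifier is random and short-lived, a third party observing a single broadcast cannot, by itself, link it to the same device's earlier or later broadcasts.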

For an app relying on Bluetooth-based proximity, users' location is not relevant: what the application needs to know is whether users were close together for a sufficient length of time. However, when a user learns that they have coronavirus, other users should be notified of their infection risk – and this is where a crucial difference in app design arises.

Notifications can be made either through a centralised system, in which a central authority can access and process the contact information held on all users' devices, or through a decentralised architecture, in which that information is stored and processed on users' own devices.
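
The sketch below contrasts the two flows under deliberately simplified assumptions. In the decentralised variant, a diagnosed user publishes only the identifiers their own device broadcast; every other device downloads that list and checks it against the identifiers it has seen, so matching happens locally and the server never learns who met whom.

```python
# Centralised: every device uploads the identifiers it has SEEN, so the
# operator can reconstruct the whole contact graph -- the core privacy risk.
def centralised_report(server_db: dict[str, set[bytes]],
                       user_id: str, seen: set[bytes]) -> None:
    server_db[user_id] = seen  # the server now knows who was near whom

# Decentralised: a diagnosed user publishes only their OWN broadcast
# identifiers; each device checks for matches itself.
def decentralised_check(published_by_infected: set[bytes],
                        seen_locally: set[bytes]) -> bool:
    """Runs on-device: did this user observe any identifier now known infected?"""
    return bool(published_by_infected & seen_locally)
```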

What are the implications of a centralised or decentralised model for apps?

Centralised models pose a huge risk of state surveillance, because users' privacy depends on the trustworthiness and competence of whoever operates the central infrastructure. The centralised recording of contact data could also facilitate 'mission creep': the State could later use the data for purposes other than coronavirus tracking.

Decentralised models run on distributed computing systems and are intended to give users more control over their information. As a completely anonymous and decentralised model is technically possible, there is no need to subject people to the risks linked to centralisation.

At the same time, neither centralised nor decentralised models are risk-free for users. Both must be designed to guarantee the security and privacy of user data through encryption and anonymisation.

App developers must be transparent about the model they adopt, make their source code open and verifiable, and publish details of their security practices. The same applies to the APIs on which the apps run: unless those offer similar guarantees, the apps' own safeguards can be nullified, because every app necessarily runs on top of them.

Every app developer should also clearly state the aim of the app, the risks it raises (for instance, to privacy) and how those risks will be mitigated.

Can the use of these apps meet human rights standards?

These apps raise serious concerns for the protection of human rights, in particular the rights to privacy, freedom of expression and protection from discrimination.

Quarantine, contact-tracing and symptom-reporting apps are all extremely intrusive instruments when it comes to users' privacy rights. These apps collect a vast amount of personal information, such as biometric data, health-related data and data about people's movements; in addition, they track users' movements in private as well as public spaces, substantially impacting their right to remain anonymous.

Because these apps are so intrusive, public and private actors should carefully assess whether they constitute a legitimate, necessary and proportionate way of effectively achieving their goals before developing them and promoting their adoption.

For this intrusion to comply with human rights protection, at a minimum, a number of measures would need to be in place, notably:

  • App developers would be required to implement comprehensive data protection measures covering the collection, processing and storage of data by the apps, and these measures would have to fully comply with international standards and with regional and national data protection and privacy laws. For users to understand whether and how far their data and privacy are protected, app developers would have to communicate privacy guarantees in a comprehensive and comprehensible manner. Importantly, the country context matters, and it can make a tremendous difference. Some States already have strong frameworks in place for the protection of personal data, privacy and other human rights, while others do not. Likewise, some States have the capacity to develop apps that are secure and privacy-friendly by design, while others do not.
  • Developers and implementers would have to narrowly specify the purpose of the app, collect only the data needed for that purpose, and process it only for that purpose. For this reason, the development of apps that allow for the incremental collection of additional categories of data must be strongly opposed, as such apps are an open door to 'mission creep'. The collected data would have to remain, to the extent possible, on the user's device, and be retained for the shortest possible time.
  • App developers would have to guarantee anonymity and refrain from sharing data with third parties or among apps.
  • Informed, voluntary, and opt-in consent would have to be a basic requirement for any of these apps.

In addition, the deployment of these apps would have to be subject to thorough independent oversight, and there should be redress mechanisms in case users’ rights are violated. It is essential that whoever develops and deploys the apps remains accountable for abuses and misuses, as well as for any violation of users’ rights.

Although these basic guarantees could mitigate human rights risks for users, no app can be risk-free when it comes to the protection of human rights. For example, regularly rotating the identifiers used by individuals' phones is a privacy-enhancing measure; however, if an adversary can learn that multiple identifiers belong to the same user, it is still possible to tie activities to a real person. App developers might also push the limits of their apps' functionality to collect additional user data, whether on their own initiative or under government pressure. The list of examples is long; all in all, the adoption of these apps will always require users to place a certain degree of trust and confidence in the developer.

If, despite all these concerns, these apps are deployed and meet the above minimum requirements, States should make sure that this infrastructure is not established on a permanent basis. To do so would normalise oppressive surveillance and undermine human rights. It is vital that the current pandemic is not used to create a permanent tool for mass surveillance of some sections of the population or of society at large. It is essential that the collection of people's information to track coronavirus does not lead to its exploitation for any other purpose, and that this tracking apparatus is dismantled as soon as the pandemic is over.

Why is the use of these apps a free speech issue? 

While these apps have huge privacy implications, their impact on other human rights, including freedom of expression, should not be downplayed. Their use could fundamentally shift how people interact with each other, and how they relate to the State.

Anonymity is key in the protection of freedom of expression as well as the right to privacy. Anonymity protects the freedom of individuals to communicate information and ideas that they would otherwise be inhibited or prevented from expressing, and also protects the freedom of individuals to live their lives without unnecessary and undue scrutiny. However, with the use of contact tracing apps, people’s movements will be constantly monitored, both in private and public spaces, and it will be difficult, if not impossible, for people to remain anonymous.

The collection of personal information and user data can have a chilling effect on freedom of speech. Constant surveillance can change people's behaviour, making them feel less able to express discontent or dissent about political decisions or public policies, or less comfortable expressing their religion, their sexuality, or other aspects of themselves that they do not want to share with the State.

The very personal and sensitive information collected by the apps also puts some categories of people at greater risk than others. In particular, whistleblowers, journalists, political activists and human rights defenders face a greater risk of censorship or repression by the State whenever a surveillance system is in place, and the use of these apps is no exception. The use of apps can also lead to discriminatory outcomes, and it can make some people – even where data is anonymised – vulnerable to harassment, attacks and discrimination, for instance on the basis of their religion, race or sexuality.

Does ARTICLE 19 think that apps should be used to tackle coronavirus?

ARTICLE 19 finds that, although the pandemic requires strong responses, the deployment of coronavirus apps that significantly impact users' human rights is extremely problematic, and we warn against their deployment and use. We are concerned that reliance on these apps will further entrench a surveillance infrastructure through which public and private actors can continuously monitor individuals' movements and their interactions with others, as well as gather and control a vast amount of personal data.

We also warn that the rising enthusiasm for automated technology as a centrepiece of infection control appears to be wholly unjustified. No app can make up for the lack of a robust healthcare system, effective treatment, personal protective equipment and rapid testing – challenges that have all had to be confronted during the coronavirus pandemic.

These apps can, at most, be a supporting or complementary measure; they are not the solution, and they must not be viewed as such.