Are you curious about the impact of disinformation on your rights and your ability to speak your mind on topics that matter? Do you want to have a say in how social media content is regulated in Ireland? 

You’re in the right place! We want to start a conversation with young people in Ireland about disinformation and social media, and make sure their voices are heard in debates around “fake news” and how decisions are made on what is allowed on social media. 

We want to offer ideas on how a pluralistic and tolerant society – one that listens to different perspectives, encourages dialogue and treats people equally – should deal with the issues of disinformation and social media regulation.

What is disinformation?

When we talk about disinformation, we mean information that is 1) false and 2) meant to mislead a population. This means that disinformation is intentionally designed and used to deceive the public. Disinformation isn’t the same as misinformation, which is false information shared by people who don’t know it is false.

What is the right to freedom of expression?

Everyone has the right to freedom of expression: this is protected by both international law and by the Irish Constitution. Freedom of expression means the ability to voice your ideas and opinions – it is what enables journalists to write stories about issues of public interest, artists to create, activists to challenge discrimination and everyone to take part in various dimensions of life in society, including by posting on social media. Freedom of expression enables us to take part in public debate and challenge the status quo.

Key Facts

Disinformation spreads easily on social media, and contributes to a climate of distrust towards the media and independent experts. This is a serious threat for democracy: when we are not informed about what is going on in our country and the world, it becomes harder to make choices as citizens in a democracy and to hold the powerful to account.

Research by the FuJo Institute and the Broadcasting Authority of Ireland (BAI) has shown that social media companies have not done enough to tackle disinformation in Ireland, for example by labelling false and misleading information on their sites as ‘fake news’.

The Irish government has proposed a new law on social media (called the Online Safety and Media Regulation Bill). The law will create a new public regulator for media and social media.

In order to reinforce the protection of freedom of expression and to facilitate flexible approaches to content regulation, ARTICLE 19 considers that the future law should allow for the creation of an independent Social Media Council.


Meet the Campaign Ambassadors

#KeepItReal ambassadors

Get to know our ambassadors

#KeepItReal on Social Media

Comic by Pan Cooke


Part 1

Part 2

Useful resources

The BeMediaSmart campaign run by Media Literacy Ireland has resources to help you identify disinformation, and a dedicated FactCheck space debunks fake news and provides accurate information to counter false claims.

Watch a webinar on “How COVID-19 brought misinformation to Ireland”

The Irish Council for Civil Liberties has published an analysis on “why disinformation online is a rights-based issue” with examples from the COVID-19 pandemic

RTÉ Truth Matters campaign


Although they are not the producers of disinformation, social media platforms are often associated with current concerns around ‘fake news’. These platforms enable the dissemination of content – including disinformation – on a massive scale, thus amplifying the impact of potentially harmful disinformation.

Social media companies generate most of their income from selling online advertising, which gives them a strong economic interest in keeping people on their platforms for as long as possible: this is precisely the role of the personalised content delivery served by algorithms.

Research shows that people are more likely to share disinformation because it is shocking, extraordinary or disturbing. Research also indicates that people may share disinformation to reinforce their sense of belonging in online groups or communities. This means that disinformation is exactly the sort of content likely to generate high income for social media companies.

In addition, in the online environment, users with the right technical skills can amplify disinformation through a multiplicity of fake accounts and automated mechanisms (called “bots”).

Not really. Under international human rights law, government efforts to prevent harms resulting from false information through legislation should always be precisely and narrowly written, so that they apply only in very specific situations to very specific types of expression. Free speech can be restricted only in the narrow circumstances identified by international human rights law. Any measure that seeks to restrict the circulation of information must meet three conditions: it needs a clear and precise legal basis, it must pursue a legitimate aim, and it must be necessary and proportionate. National laws that impose blanket prohibitions on broad and vague categories such as ‘fake news’ or ‘non-objective information’ should always be abolished.

Governments do however have an obligation under international law to provide reliable and trustworthy information to the public.

While frequently demonised as spreaders of ‘fake news’, journalists and the media are also bound by standards to share accurate information. The professional ethical standards of journalism provide detailed guidelines on how to gather and communicate information in the public interest. Journalists have a responsibility to comply with high professional standards in order to produce accurate and reliable information. Press councils exist to ensure that the profession lives by the rules of accuracy, independence, fairness and impartiality, humanity and accountability. Provided that they have done their work according to such principles, journalists and the media should not be held liable if a minor inaccuracy or error is found in the information they publish.

Laws can legitimately protect the reputation of private individuals against lies that cause them harm. However, defamation laws are often misused by the powerful to silence critical voices by drowning media and journalists in long and costly legal proceedings – these are called SLAPPs (Strategic Lawsuits Against Public Participation).

For more information on ethical standards in journalism, check the Ethical Journalism Network.

Disinformation is a complex problem, and there is no simple fix for it. Countering disinformation through restrictive legislation is not the best approach to the problem. To combat the consequences of disinformation, we have to create an environment in which disinformation is less able to disrupt our public spaces and influence our view of truth. The best way to do this is by supporting an independent and diverse media, and increasing transparency. We need to build an environment in which people are able to access a diverse range of information. The wider the range of information people are exposed to, the better able they are to analyse and verify its accuracy.

Governments have an obligation under international human rights laws to foster an enabling environment for freedom of expression, which includes promoting, protecting and supporting diverse media, and ensuring that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security and the environment.

This can be done in part by the creation of a strong, independent and adequately resourced public service media, which operate under a clear mandate to serve the overall public interest and to set and maintain high standards of journalism. Under international standards, public service media companies should be independent, publicly funded media houses that serve the public interest and pursue a series of objectives that would not be comprehensively fulfilled by private media actors.

States can also put in place a series of policy measures to increase citizens’ understanding of news, to foster quality journalism and trust in official sources.

And, of course, public officials should not disseminate disinformation themselves or seek to undermine trust in independent sources of information.

We live in a world where a large part of the population relies on social media as their main source of information. When social media companies decide what content is allowed on their services, this affects what a large number of us watch or read – and such decisions may shape public debate.

Given their role in the dissemination of disinformation online, social media companies have tried different approaches to deal with disinformation:

  • Fact checking: Some social media platforms, such as Facebook, work with external fact checkers to verify the truthfulness of contentious information. Content identified as false can then be labelled as such to warn users, blocked or deleted, or downgraded. Facebook does not fact-check ads, which has been controversial.
  • Transparency: Transparency is a fundamental principle in the fight against disinformation. It can be achieved by flagging content and its sources, enabling users to better understand who is behind statements. Transparency is particularly important for political advertisements, which should make clear who is sending the message and what criteria are used to target users, especially during elections.

  • Upgrading/downgrading: Promoting content from official or trustworthy sources is another tool social media platforms use to stop disinformation. By upgrading trustworthy pages and downgrading unreliable ones, platforms can help the dissemination of correct information. For example, during the COVID-19 pandemic, some platforms redirected users to official sources of information, such as national health service or government health advisory pages, or the World Health Organization.

While these efforts by social media companies can be helpful, they aren’t a complete solution.

Social media companies use their community standards – rules which say what is allowed on their platforms – to decide whether the content of a post should be removed or flagged, whether because of disinformation or for other reasons, such as incitement to violence or nudity. Decisions on content are made by a combination of human judgement and algorithms, tasked with applying these rules to a vast range of content. Through the application of their content rules, these companies ultimately decide what we can see and post on their platforms. Right now, there are very limited safeguards in place to check how content rules are being applied and how social media companies are making decisions about content. When it comes to disinformation, social media companies haven’t sufficiently clarified the process for deciding what to flag or remove from their platforms, and the rules currently applied do not entirely reflect international human rights standards.

So far, social media companies haven’t shown sufficient transparency about how decisions are taken in relation to content moderation and takedown on their platforms. Although they publish periodic transparency reports, the data provided is at times incomplete and does not give enough information on the decision-making processes and policies applied. Through its #MissingVoices campaign, ARTICLE 19 is calling on social media companies to provide more transparency and a right to appeal decisions that stifle freedom of expression. An appeal is not just about the substance of the content: it is a safeguard to protect the rights of a platform’s users. Essentially, it serves to protect everyone against arbitrary decision-making by social media companies in the enforcement of their content moderation policies.

So far, governments (and most recently the EU) have mostly tried to deal with disinformation on social media by pressuring social media companies to sort the problem out themselves, under the threat of legal sanctions. However, this can result in increased censorship, and it is taking place in the absence of clear and tangible commitments from social media companies that would translate into concrete steps to counter online disinformation.

ARTICLE 19 wants to see the response in Ireland go one step further, and we are advocating for the establishment of a Social Media Council. This would be a self-regulatory mechanism: a voluntary agreement that brings together the stakeholders of the social media sector with civil society and the public to set up an independent, external body. That body would support and promote ethical standards through recommendations and the application of remedies, such as apologies, a right of reply, publication of the decision in a visible place on the social media platform, or the re-upload of content.

This type of mechanism is inspired by the experience of press councils. It would provide an open, transparent and accountable forum to address content moderation issues – such as disinformation – on social media platforms in Ireland, using international human rights law as a reference.

We believe the Irish Social Media Council, which would be the first initiative of its kind, could operate in complementarity and coordination with the future Media Commission, and would provide an excellent forum for an open and frank discussion of the appropriate response to disinformation on social media.


Media Coverage

Irish Times

Read the story

Irish Tech News

Read the story



Bandwagon's podcast


Radio stations FM 104/ Q 102

Listen to Ross Boyd