Digital Religion(s): Der Blog

UN Debate on Countering Religious Hatred: Turning the Spotlight on Social Media Companies

15 March 2024 | Cristina Frei |

Incidents of Qur’an burnings in 2023 led to UN Human Rights Council (HRC) resolution 53/1, which included the decision to organise a panel discussion during the 55th HRC session. Accordingly, on 8 March 2024 in Geneva, the HRC discussed the identification of drivers, root causes and human rights impacts of religious hatred that could constitute incitement to discrimination, hostility or violence.

OHCHR High Commissioner Volker Türk made clear in his opening statement that religious hatred has been, and still is, a real problem, and that xenophobia and discrimination on the basis of religion or belief are “rising to acutely disturbing levels today”. The discussion that followed largely revolved around two main areas of focus: one group of speakers emphasised that the desecration of sacred books, places of worship and religious symbols constitutes incitement to violence, while another group highlighted the negative human rights impacts of national laws that prohibit displays of lack of respect for a religion or belief, such as blasphemy laws[1].

The traditional perspective on international human rights centres on the obligations of states. It is also states, not companies, that are eligible for membership of the HRC. It is therefore not surprising that the participating representatives of states and NGOs rarely addressed the responsibilities of social media companies during the panel. Their relevance, however, has been evident at least since the serious human rights violations against Muslims, especially the Rohingya, in Myanmar, where religious hatred online constituted incitement to violence.[2] In the context of last year’s Qur’an burnings, too, content both in support of the burnings and in support of the protesters spread on social media. It is thus essential to include the companies’ role when identifying drivers, root causes and human rights impacts of religious hatred, as well as examples of how to prevent it. Business enterprises, too, have a responsibility to respect human rights (UN Guiding Principles on Business and Human Rights (UNGPs), Principle 11).

The Human Rights and Alliance of Civilizations Room of the Palais des Nations with the ceiling sculpture by Miquel Barceló a few minutes before the start of the panel

The Swiss ambassador asked the panellists for best practices to prevent and combat religion-based hate speech on social media. The UN Special Rapporteur on Freedom of Expression, Irene Khan, responded that pressure should be put on the companies to base their content moderation on human rights principles. She called on them to be more transparent about their policies and to grant their users a right to appeal.

She named the Meta Oversight Board (OSB) as a positive example for using the threshold test of the Rabat Plan of Action.[3] Since 2020, the OSB’s international experts have been selecting content cases for review, publishing their decisions and issuing policy recommendations for the social media company Meta. The Rabat Plan was the result of a series of expert workshops organised by the OHCHR to reach a better understanding of the prohibition of incitement to national, racial or religious hatred (article 20(2) of the International Covenant on Civil and Political Rights (ICCPR)). The six-part threshold test[4] helps states assess whether an incident reaches the threshold of article 20 ICCPR and therefore must be prohibited, while complying with the requirements of the right to freedom of expression (article 19 ICCPR). The test was brought up many times during the panel in Geneva, and it can also be adapted by social media companies to decide on appropriate content restrictions.[5]
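For illustration only, the logic of the six-part test could be represented as a simple checklist of the kind a moderation workflow might record for a flagged post. The field names and the all-factors-required rule below are a sketch based on the Rabat Plan’s description of the test, not any platform’s actual moderation code:

```python
from dataclasses import dataclass

# Hypothetical sketch: the six Rabat Plan factors as a checklist.
# All names here are illustrative assumptions, not a real platform API.
@dataclass
class RabatAssessment:
    context: bool             # social and political context of the statement
    speaker_status: bool      # speaker's position or influence over the audience
    intent: bool              # intent to incite the audience against the target group
    content_and_form: bool    # what was said and how it was expressed
    extent: bool              # reach and magnitude of dissemination
    likelihood_of_harm: bool  # probability and imminence of resulting harm

    def meets_threshold(self) -> bool:
        # The Rabat Plan sets a high threshold: all six factors must be
        # satisfied for speech to reach the level of prohibited incitement
        # under article 20(2) ICCPR.
        return all((self.context, self.speaker_status, self.intent,
                    self.content_and_form, self.extent,
                    self.likelihood_of_harm))

# Example: a widely shared post by an influential speaker that lacks
# demonstrable intent to incite does not meet the threshold.
post = RabatAssessment(context=True, speaker_status=True, intent=False,
                       content_and_form=True, extent=True,
                       likelihood_of_harm=True)
print(post.meets_threshold())  # → False
```

The point of the all-six rule is exactly the one raised later in the post: automated systems struggle to evaluate contextual factors such as intent, which is why a purely mechanical application of such a checklist cannot replace human assessment.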

The OSB is an exemplary body with international experts,[6] but it is still far from being the industry standard. The question is whether other companies will be inspired by it, or whether users will be able to turn to external social media councils (SMCs) in the future, also in view of the pertinent provisions of the European Union’s Digital Services Act (articles 20 and 21). It would even be worth considering forming SMCs with a specific interreligious focus.

While the possibility of having a content moderation decision assessed against human rights principles by a review body is a substantial advantage for users’ rights online, most decisions never reach such bodies, and a major part of content moderation is automated. Academic scholarship on AI-based automated content moderation has noted that the technology currently lacks the ability to identify subtle components of certain speech contexts, such as the six factors specified in the Rabat Plan.[7] And we must remain realistic in general: according to the current state of research, there will always be errors of rule enforcement when moderating content.[8]

The High Commissioner already mentioned in October, when he gave an oral update on resolution 53/1 at the 54th HRC session, that acts of religious hatred can spread like wildfire on social media platforms and across countries with deeply different cultural, political and religious contexts. He announced that, in reaction to the resolution, his office intended to explore the adequacy of the companies’ existing policies and would focus in particular on the implementation of the UNGPs by social media companies. The OHCHR’s findings are expected in the High Commissioner’s report at the next HRC session.

The UNGPs set out responsibilities that business enterprises should fulfil with regard to human rights. The social media companies’ implementation of the UNGPs is relevant not only to content moderation based on their own policies but also to their compliance with national laws. The aforementioned national blasphemy bans and similar laws likewise have an impact online that should not be underestimated. States and users request that social media companies remove content which falls under these provisions. For instance, between January and June 2023, 2,631 complaints based on §166 of the German Criminal Code, the revilement of religious faiths and religious and ideological communities, resulted in the removal or blocking of content on Facebook.[9] Such laws are regularly in conflict with internationally recognised human rights. Consequently, the rights of religious minorities may be undermined with the involvement of social media companies. Implementing the UNGPs would mean respecting internationally recognised human rights (see UNGP Principle 12). Where companies cannot fulfil this entirely because of the local law they have to comply with and due to local conditions, they are expected to respect international human rights law “to the greatest extent possible in the circumstances, and to be able to demonstrate their efforts in this regard” (UNGP Principle 23 Commentary).

At the end of the panel in Geneva, several speakers emphasised that the human rights framework to counter religious hatred already exists; what is lacking are political will and implementation. In this context, it will be valuable to turn the spotlight on the social media companies. Since the High Commissioner has recognised this, his report might include concrete measures, offering potential avenues for progress.

In the context of social media, aspects such as the extent and speed of content distribution, the potential heterogeneity of the audience, the ease with which like-minded people can connect and incite one another, and the ramifications of anonymity and automated content creation must all be taken into account. How social media companies should deal with persistent errors in rule enforcement and with balancing rights such as freedom of expression and freedom of religion requires further consideration. Content which incites discrimination, hostility or violence cannot be tolerated online, regardless of the social media platform on which it is published. At the same time, strong resistance against attempts to protect religion(s) as such on the basis of international human rights law remains important.


[1] See UN Human Rights Committee, General comment No. 34 para. 48

[2] See UN HRC Report of the independent international fact-finding mission on Myanmar A/HRC/39/64 para. 74

[3] An example of the Meta Oversight Board’s references to the Rabat Plan can be found in its decision “Communal Violence in Indian State of Odisha” of 28 November 2023

[4] The six-part threshold test covers the context of the statement, the speaker’s position or status, the intent to incite the audience against the target group, the content and form of the statement, the extent of its dissemination, and the likelihood of harm, including imminence.

[5] See Lwin M, ‘Applying International Human Rights Law for Use by Facebook’ (2020) 38 Yale Journal on Regulation Online Bulletin, p. 26; for criticism regarding the determination of the factor of intent, see Benesch S, ‘But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies’ (2020) 38 Yale Journal on Regulation Online Bulletin, p. 110

[6] Criticism of the OSB is raised, for example, in Wilson RA and Land MK, ‘Hate Speech on Social Media: Content Moderation in Context’ (2021) 52 Connecticut Law Review, p. 1051 f., or in Douek E, ‘The Meta Oversight Board and the Empty Promise of Legitimacy’ (2024 forthcoming) 37 Harvard Journal of Law & Technology

[7] Hatano A, ‘Regulating Online Hate Speech through the Prism of Human Rights Law: The Potential of Localised Content Moderation’ (2023) 41 The Australian Year Book of International Law, p. 149

[8] Douek E, ‘Governing Online Speech: From “Posts-as-Trumps” to Proportionality and Probability’ (2021) 121 Columbia Law Review, p. 764

[9] NetzDG Transparency Report July 2023, p. 16


Blog post by Cristina Frei, UFSP Project 7 Religion and the Digital Turn: New Legal Challenges in Cyberspace


Filed under: Reflection