Countering online hate speech: How does human rights due diligence impact terms of service?

The Internet is a global forum largely governed by private actors driven by profit concerns, often disregarding the human rights of historically marginalised communities. Increased attention is being paid to the corporate human rights due diligence (HRDD) responsibilities applicable to online platforms countering illegal online content, such as hate speech. At the European Union (EU) level, cross-sector initiatives regulate the rights of marginalised groups and establish HRDD responsibilities for online platforms to expeditiously identify, prevent, mitigate, remedy and remove online hate speech. These initiatives include the Digital Services Act, the Audiovisual Media Services Directive, the proposed Directive on Corporate Sustainability Due Diligence, the proposed Artificial Intelligence Act and the Code of conduct on countering illegal hate speech online. Nevertheless, the HRDD framework applicable to online hate speech has focused mostly on the platforms' responsibilities throughout the course of their operations; guidance regarding HRDD requirements concerning the regulation of hate speech in the platforms' Terms of Service (ToS) is missing. This paper employs a conceptualisation of criminal hate speech as explained in the Council of Europe Committee of Ministers' Recommendation CM/Rec(2022)16, Paragraph 11, to develop specific HRDD responsibilities. We argue that online platforms should, as part of emerging preventive HRDD responsibilities within Europe, respect the rights of historically oppressed communities by aligning their ToS with the conceptualisation of criminal hate speech in European human rights standards.


Introduction
Around two thirds of the world's population are active Internet users. 1 While the Internet enables individuals to access information and exercise their freedom of expression, it also enables the proliferation of online hate speech. In this paper, we assess whether online platforms 2 could be required, as part of emerging European human rights due diligence (HRDD) responsibilities, to align their Terms of Service (ToS) 3 with the conceptualisation of criminal hate speech 4 in European human rights standards.
This paper contains language that some readers may find disturbing. Reader discretion is advised. This research was updated in June 2023; given the fast-developing nature of regulation in the field of law and digital technologies, readers are advised to confirm legal sources beyond this timeframe.
* Corresponding author at: Eva Nave, Center for Law and Digital Technologies, Law School, Leiden University, Steenschuur 25, 2311 ES, Leiden, The Netherlands. E-mail addresses: e.v.r.nave@law.leidenuniv.nl (E. Nave), c.l.lane@rug.nl (L. Lane). 1 Number of internet and social media users worldwide as of January 2023 (2023) <https://www.statista.com/statistics/617136/digital-population-worldwide/>. 2 In alignment with the terminology in the Digital Services Act, this paper uses 'online platforms' to refer to social media platforms.
'Online hate speech' broadly refers to discriminatory expressions shared through the Internet targeting historically marginalised 5 people based on their inherent characteristics. Recommendation CM/Rec(2022)16, adopted by the Council of Europe (CoE) Committee of Ministers (CM) in May 2022, 6 explains that hateful expressions represent a violation of human rights. When unaddressed, these can hinder peace and development by denying the values of pluralism, tolerance and broadmindedness essential in a democratic society.
The rise of online hate speech results from specific features of the Internet. First, unlike in traditional media, most content published on the Internet can be quickly shared with little to no monitoring, made available to large audiences, published under anonymity, and easily manipulated in ways that intensify hate (e.g. hate profiles, memes and deep fakes). Second, online content is hosted by businesses primarily driven by profit goals, often at the expense of human rights. The potentially negative impact of AI-driven content moderation by online platforms is under increasing scrutiny. For example, Meta Platforms, Inc. (formerly named Facebook, Inc.) faces legal action for alleged negligence in facilitating the genocide of Rohingya Muslims in Myanmar after its algorithm failed to remove hateful posts and amplified hate speech. 7 Similarly, whistle-blower Frances Haugen alerted that Facebook neglected reports of accounts and hate speech content targeting Muslims in India, potentially leading to offline violence. 8 There are reportedly other situations of human rights abuses by different platforms. 9 Legal scholars have alerted to the growing impact of social media platforms on the application of regulatory frameworks for freedom of expression and democratic processes, and to the subsequent need to expand the legal scholarship focusing on the regulation of online platforms. 10
In this context, it is relevant to consider that most of these online platforms are based in the USA and thus typically bound by the USA framework on freedom of expression, corporate human rights due diligence and intermediary liability. Conversely, to the extent that these online platforms operate in European Union (EU) territory, they must also abide by the regional human rights frameworks in Europe, which differ significantly from those applicable in the USA. The reconciliation of different regional standards has been challenging, not only for online platforms but also for judicial bodies in enforcing their decisions. 11 Legislators and policymakers at the international, regional and national level have made many efforts to prevent and address the negative impact of business on human rights, including through HRDD and through liability regimes. The HRDD regime includes the seminal United Nations Guiding Principles on Business and Human Rights (UNGPs), which are arguably the most authoritative international expression of the corporate responsibility to respect human rights through HRDD. 12 At the EU level, there are ongoing negotiations for a Directive on corporate sustainability due diligence (CSDDD). 13 Businesses, including online platforms, falling under the scope of the proposal should identify, prevent, mitigate and bring an end to negative impacts on human rights. Furthermore, the EU is developing an Artificial Intelligence Act (AIA) emphasising the need for protection of human rights in the digital environment. 14 Concerning HRDD and moderation of harmful content online, in November 2022 the Regulation for a Digital Services Act (DSA) entered into force. 15 The DSA adds to the EU Audiovisual Media Services Directive (AVMSD) 16 and enhances cross-sector due diligence responsibilities for digital services to remove illegal content online. This includes hate speech. 17
The due diligence framework in the DSA aligns with CM/Rec(2022)16 and builds on the Code of conduct on countering illegal hate speech online, whereby IT companies commit to expeditiously review and remove hate speech and to promote transparency towards users. 18 Despite these advancements, the HRDD framework applicable to online hate speech has focused mostly on explaining the responsibilities of companies throughout their operations. Guidance regarding HRDD requirements for the regulation of hate speech in the ToS is missing. A key aspect remains unaddressed: how online businesses should define hate speech and how this should be communicated to their users. 20 The emphasis on criminal hate speech is particularly important since the European Commission (EC) proposed an extension of the list of EU crimes to include hate speech. 21 In Sections 3 and 4, this paper deals with the HRDD regime. 22 Section 3 explores the HRDD framework applicable to AI businesses. 23 The key instruments analysed are the UNGPs, Organization for Economic Cooperation and Development (OECD) initiatives, and the EU CSDDD and AIA proposals. International instruments are included because they provide a substantive understanding of corporate responsibility for human rights that has influenced the development of the CSDDD and can provide inspiration regarding how European instruments should be interpreted. Normative research is used to identify and address legal loopholes from a human rights perspective.
Section 4 delves deeper into preventive HRDD responsibilities in moderation of illegal content, such as criminal hate speech. The legal instruments examined are the DSA, the AVMSD, the CoC 24 and CM/Rec(2022)16. Emphasis is placed on HRDD responsibilities in the drafting or updating of the ToS as a means for online platforms to respond to the systemic risk of online hate speech. It is suggested that, to improve legal coherence in countering online hate speech in the European context, online platforms should follow CM/Rec(2022)16's conceptualisation of criminal hate speech in their ToS.
Section 5 presents an empirical qualitative analysis of three case studies: Facebook, Twitter and YouTube. We assess the compliance of the platforms' ToS with the European Court of Human Rights (ECtHR) jurisprudence on criminal hate speech, and with the conceptualisation of criminal hate speech in CM/Rec(2022)16. The platforms were selected because they: (1) fall under the scope of the CSDDD; (2) are signatories to the CoC; and (3) qualify as very large online platforms (VLOPs) as defined in the DSA. 25 In summary, this paper applies the European HRDD framework of online platforms to the conceptualisation of criminal hate speech in ToS. The main finding is a proposed minimum HRDD legal standard requiring online platforms operating in Europe to align their ToS with the European human rights conceptualisation of the most serious cases of hate speech. The EC should issue sector-specific guidance recommending the adoption of this legal standard.

Online hate speech is always illegal, sometimes criminalised
This section sets out the theoretical framework regarding the conceptualisation of hate speech grounding the subsequent discussions of corporate HRDD responsibilities. 26 We clarify the key elements in the original conceptualisation of hate speech in critical race theory (Section 2.1) and explain key conceptual elements in the European regulatory framework countering hate speech (Section 2.2).

Original conceptualisation
The term 'hate speech' became prominent in the work of critical race scholars in reference to 'racist hate speech'. 27 Scholars like Mari J. Matsuda emphasised that racist hate speech is used to perpetuate the marginalisation of historically oppressed groups and thus should not be protected under the right to freedom of expression. 28 Matsuda conceptualises three elements in racist hate speech: '1) the message is of racial inferiority and all members of the target group are considered alike and inferior; 2) the message is directed against a historically oppressed group and reinforces a historically vertical relationship; 3) the message is persecutory, hateful and degrading'. 29 Hate speech can cause different harms, including physical, psychological and socio-economic harm. 30 For example, people targeted by hate speech may develop low self-esteem, post-traumatic stress disorder, psychosis or depression. 31
As a regulatory approach distinct from that of HRDD, as seen in the separate chapters on each regime in the DSA, the EU liability regime for internet intermediaries falls outside the remit of this research. These regimes are nevertheless related in that liability may follow from non-compliance with HRDD responsibilities. For discussion of internet intermediaries' liability regimes and recent case law, see e.g. 23 AI businesses include, inter alia, online platforms and thus are a relevant framework for the analysis in this paper. 24 Some EU instruments use the problematic expression 'illegal hate speech', which could lead the reader to think that there is legal hate speech. This is not the case. Hate speech is always illegal and, in its most serious forms, it can be criminalised. For legal coherence purposes, this paper will refrain from using 'illegal hate speech' unless referring to the title of an instrument. 25 i.e. they have 45 million or more average monthly active recipients of their service in the Union: DSA, Recital 76.
26 This section summarises the main argument in Eva Nave, 'Hate speech, historical oppression and European human rights' (2023, forthcoming) Buffalo Human Rights Law Review. 27 Critical race scholars contest neutral viewpoints in research and highlight the impact of institutional inequalities deriving from moments when colonial and discriminatory doctrines were openly defended.

E. Nave and L. Lane
A key analytical framework presented by critical race scholars to understand the harms caused by hate speech is the intersectionality of systems of marginalisation. Kimberlé Crenshaw underlines the importance of examining the intersection between different types of discrimination by highlighting how the politics of race and gender marginalise racialised women, 34 and exposes the shortfalls of legal and political approaches that isolate systems of oppression. Though intersectionality theory was initially developed in relation to systems of marginalisation based on racial and gender markers, Crenshaw explained that the theory applies to the intersection of any systems of marginalisation, such as those based on class, sexual orientation and age. 35

Key conceptual elements in European regulation
There is currently no legally binding definition of hate speech in international or European human rights law. However, both the CoE and the EU have developed legal strategies to counter hate speech by clarifying key elements in the conceptualisation of hate speech or by explaining the procedural responsibilities of stakeholders involved in the moderation of speech (e.g. media, Internet intermediaries, 36 law enforcement, governments). Though this section identifies the main instruments regardless of the type of strategy, we focus on the instruments that expand on the key conceptual elements of hate speech.
There is an overall alignment of key human rights values between the two European systems. To ensure European legal consistency, Article 52(3) of the Charter of Fundamental Rights of the EU (CFREU) 37 requires the same meaning and scope to be given to CFREU provisions as to corresponding rights in the ECHR. Furthermore, in Article 6(2) of the Treaty on European Union (TEU) the EU commits to acceding to the ECHR, 38 which will enable individuals to submit to the ECtHR complaints of violations of the ECHR by the EU. 39 Thus, both the CoE and the EU constitute reference systems for summarising the main elements of the European regulation of hate speech.
At the EU level, there are strategies to counter hate speech in primary, secondary and 'soft' law sources. 40 While some strategies focus on substantive regulation (i.e. the conceptualisation of hate speech), most focus on procedural regulation (e.g. the liability of internet intermediaries, HRDD). Though the next paragraphs summarise the main strategies, Internet intermediaries' HRDD responsibilities are addressed more thoroughly in Sections 3 and 4. Importantly, this article does not focus on intermediary liability, but rather on human rights due diligence responsibilities.
Following the above-mentioned alignment of primary sources of EU law with the ECHR, 41 the provisions of the CFREU addressing hate speech should be interpreted in line with the ECtHR's interpretation of the equivalent provisions of the ECHR. The most relevant secondary legal sources are the Council Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law (Framework Decision), 42 the AVMSD 43 and the DSA. Finally, the main supplementary legal source at the EU level is the CoC.
Despite the variety of EU regulatory strategies applicable to hate speech, there is no coherent and all-encompassing framework. On the one hand, the CoC and the DSA refer to the conceptualisation of hate speech as presented in the Framework Decision, which focuses only on certain forms of racist and xenophobic hate speech (by reference to race, colour, religion, descent or national or ethnic origin), thus excluding other types of hate speech such as misogyny and queerphobia. This is all the more worrisome as data presented in the latest monitoring round of the CoC indicate that hatred on account of sexual orientation is the most commonly reported ground for hate speech. 45 On the other hand, the AVMSD applies a different legal rationale, referring to the broader list of grounds of prohibited discrimination in Article 21 CFREU, 'such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation'.
The CoE has also developed various regulatory approaches to counter online hate speech, including legally binding and non-binding initiatives. The most relevant treaties are the ECHR, the 2003 Additional Protocol to the Convention on Cybercrime, 46 the 2011 Convention on preventing and combating violence against women and domestic violence (the Istanbul Convention), 47 and the 1994 Framework Convention for the Protection of National Minorities. 48 The most relevant non-binding initiatives include Recommendations 49 (2020)1 on the impact of algorithmic systems on human rights and (2022)16 on a wide-ranging strategy to combat hate speech in light of current challenges. Aside from the recommendations, the CM also adopted a 'Declaration on the manipulative capabilities of algorithmic processes' (2019) and 'Guidelines on upholding equality and protecting against discrimination and hate during times of crisis' (2021).
38 The accession negotiations of the EU to the ECHR resumed in 2020. 39 Each of the EU Member States is already party to the ECHR. However, without the EU's accession to the ECHR, individuals cannot lodge complaints against EU institutions. The accession will mean that the EU will be subjected to the oversight of the ECtHR in the application of the ECHR. Further information at 'European Union Accession to the European Convention on Human Rights - Questions and Answers' <https://www.coe.int/en/web/portal/eu-accession-echr-questions-and-answers> accessed 6 April 2023. 40 Primary legal sources of EU law are the treaties establishing the EU and general legal principles. Secondary sources of EU law comprise legislative, delegated and implementing acts. Further information on EU legal sources at <https://www.europarl.europa.eu/factsheets/en/sheet/6/sources-and-scopeof-european-union-law> accessed 27 September 2023. 'Soft law' refers to non-legally binding sources that may aid the interpretation of hard, binding law and which may have an impact on businesses' behaviour in practice. For further information in the field of business and human rights, see Sarah Joseph and Joanna Kyriakakis, 'From soft law to hard law in business and human rights and the challenge of corporate power' (2023) 36(2) LJIL 335.

The most relevant non-binding initiatives also include the General Policy Recommendations of the European Commission against Racism and Intolerance (ECRI). 50 The work by the CM and ECRI is essential to understand new phenomena and is frequently cited in the ECtHR's jurisprudence. 51
The ECtHR has developed an extensive body of jurisprudence interpreting the ECHR and referring to various CoE instruments regulating hate speech, the most relevant of which is the ECHR. The main provisions cited by the ECtHR have been the prohibition of abuse of rights, the provision containing the legal requirements for restrictions of freedom of expression, 52 respect for private life, the prohibition of discrimination and the right to an effective remedy. 53 The key elements in the regulation of hate speech in the ECtHR's jurisprudence are found in applications by perpetrators of hate speech. The ECtHR classifies hate speech in two categories: (1) no clear abuse of rights, but prohibited under civil or administrative law, as long as the prohibition aligns with Article 10(2) (Section 2.2.1); and (2) clear abuse of rights under Article 17 and thus criminally actionable (Section 2.2.2). The following subsections expand on these two categories of hate speech by making reference to CM/Rec(2022)16, the most significant instrument of the CM building on the ECtHR's jurisprudence and presenting a comprehensive framework to counter online hate speech.

Hate speech is always illegal
The first category covers hate speech that, though not criminally actionable, is still prohibited under civil or administrative law. This prohibition of speech needs to align with the criteria emanating from Article 10(2) ECHR, i.e. it should be: i) prescribed by law; ii) in pursuit of one or more specified legitimate interests (national security, territorial integrity or public safety, prevention of disorder or crime, the protection of health or morals, reputation or rights of others, prevention of the disclosure of information received in confidence, or maintaining the authority and impartiality of the judiciary); and iii) necessary in a democratic society.
According to the ECtHR, the necessity test entails an analysis of the following contextual factors: 54 political and social background; 55 intent of the speaker; 56 speaker's status or role in society; 57 content of the expression; 58 extent of the expression; 59 and the nature of the audience. 60 In this examination of the context, drawing on the insights of critical race and intersectionality theory, it is important to explicitly consider socio-historical marginalisation and the intersectionality of systems of marginalisation affecting people targeted by hate speech.

The most serious forms of hate speech are criminally actionable
The second category of hate speech is criminal hate speech. 61 Following the ECtHR's jurisprudence, hate speech is a criminal act when there is a clear abuse of rights under Article 17 ECHR, i.e. when the hateful speech violates or limits (to a greater extent than allowed by the ECHR) any right in the ECHR. 62 These cases of hate speech are considered by the ECtHR to be the most serious forms of hate speech.
Though the ECtHR assesses each application on a case-by-case basis, its jurisprudence on Article 17 reveals the minimum European human rights threshold for hate speech to be considered criminal. This was distilled in Paragraph 11 of CM/Rec(2022)16. It includes: a. public incitement to commit genocide, violence or discrimination; b. racist, xenophobic, sexist and LGBTI-phobic threats; c. racist, xenophobic, sexist and LGBTI-phobic public insults under conditions such as those set out specifically for online insults in the Additional Protocol to the Convention on Cybercrime concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems (ETS No. 189); d. public denial, trivialisation and condoning of genocide, crimes against humanity or war crimes; and e. intentional dissemination of material that contains such expressions of hate speech (listed in a-d above), including ideas based on racial superiority or hatred.
Nevertheless, CM/Rec(2022)16 misses a crucial opportunity to reflect the open nature of the list of impermissible grounds 63 for hate speech by limiting it to 'racist, xenophobic, sexist and LGBTI-phobic' hate speech. While the ECtHR may not yet have addressed criminal hate speech targeting other groups, it would have enhanced legal coherence had it included a reflection on the intersectionality of historical oppression to justify the open-ended nature of potential targeted groups. This is an important caveat, as the ECtHR may in the future be called to rule on serious forms of hate speech targeting people on the basis of their queerness (importantly, more broadly conceived than LGBTI), disability or non-neurotypical characteristics which, if amounting to the acts explained
50 The most relevant GPRs of ECRI include: GPR No. 6 on combating the dissemination of racist, xenophobic and antisemitic material via the Internet; GPR No. 7 on national legislation to combat racism and racial discrimination; GPR No. 11 on combating racism and racial discrimination in policing; and, GPR No. 15 on combating hate speech. 51
Although no EU guidance currently exists clarifying the main elements of criminal hate speech, in December 2021 the EC proposed to extend the list of EU crimes to hate speech. 64 Studying the CoE developments, such as CM/Rec(2022)16, to inform the EU regulatory initiatives will help to bring legal coherence between the two human rights systems.
The following sections explore the HRDD responsibilities of online platforms moderating illegal content online, clarifying the specific responsibilities applicable to the moderation of the most serious cases of hate speech. 65 The focus on criminal hate speech provides a clear and more foreseeable legal basis for the extrapolation of corporate HRDD responsibilities.

Broader framework: AI and the corporate responsibility to respect human rights
This section presents key international and European HRDD instruments relevant to AI businesses, such as online platforms, moderating content online: the UNGPs, OECD initiatives, and the CSDDD and AIA proposals. The main takeaway is that all AI businesses have the responsibility to adopt a policy commitment to respect human rights. Applied to online platforms, this can be interpreted to mean that they should explain in their ToS how their content moderation respects human rights.

United Nations Guiding Principles on Business and Human Rights
Unanimously endorsed in 2011 by the United Nations Human Rights Council, the UNGPs articulate a universal framework for the prevention and mitigation of human rights interference by businesses. 66 Though not legally binding, the UNGPs constitute the most influential international expression of the corporate responsibility to respect human rights, particularly through HRDD. 67 The UNGPs clarify that businesses should have in place policies and processes to respect human rights, including: (a) a policy commitment to respect human rights; (b) a HRDD process to identify, prevent, mitigate and account for adverse impacts on human rights; and (c) processes to enable the remediation of any human rights abuses. 68
Applying Principle 15 to online platforms, the policy commitment can arguably be reflected in a more detailed manner in a company's ToS. Typically, ToS are legal agreements between online platforms and their users containing, amongst other topics, the allowed and prohibited online conduct and explaining how the company considers human rights. 69 ToS guide the machine learning and large language models used to moderate content online. As the main publicly available policy tool through which online platforms communicate to their users the rules guiding their services, applicable to both users and the platform itself, ToS can be said to fulfil the purpose of the corporate policy commitment to respect human rights.
The HRDD commitment is essential to identify, prevent, mitigate and account for actual and potential human rights abuses by businesses. 70 Notably, the UNGPs prescribe that HRDD should involve meaningful consultation with potentially affected groups. Applying this conceptualisation of HRDD to online platforms, platforms arguably have the HRDD responsibility to engage better with people targeted by harmful content that they host. One way could be to employ a community-driven contextualisation of hate speech (applicable to cases of hate speech that are not criminally actionable) in ToS. Further, this paper proposes that preventive HRDD responsibilities require online platforms to revisit their policy commitments to adequately reflect their commitments to human rights, hence the HRDD responsibility to review existing ToS.
Reflecting the commitment to respect human rights in ToS is all the more important given the non-binding nature of the UNGPs and would be a complementary measure to clarify the applicability of the existing human rights regulatory and policy frameworks to corporations.Furthermore, the rising debate about services provided by large online platforms possibly amounting to public services essential for the exercise of the right to freedom of expression, 71 strengthens the argument that the businesses' freedom to decide what to include in ToS should be proportionally restricted to give primacy to HRDD and to reflecting human rights standards in ToS. 72

Initiatives by the Organization for Economic Cooperation and Development
The OECD has also developed numerous initiatives that shed light on the corporate HRDD framework. First, the OECD Declaration and Guidelines for Multinational Enterprises, comprising recommendations to conduct responsible business, were first adopted in 1976 and updated in 2011 to include a chapter on human rights in line with the UNGPs. 73 In 2018, the OECD adopted its Due Diligence Guidance for Responsible Business Conduct 74 to help companies implement the Guidelines for Multinational Enterprises and understand the application of due diligence principles. This Guidance refers to the UNGPs and suggests that HRDD includes the elements demonstrated in Fig. 1 below.
Chapter IV of the 2018 Guidance expands on HRDD and reiterates the importance that businesses embed human rights into their policies. To do this, the Guidance suggests that businesses make the commitment publicly available, for example on their website, underlining its importance to business relationships. 75 The Guidance also explains that consumers or end-users of products, as persons or groups whose interests can be affected by the companies' activities, must be informed about the due diligence processes shaping the companies' operations. 76 Applying this framework to online platforms, it can be argued that, though not in a legally binding way, the ToS communicated by online platforms to their users are typically the tool fulfilling the purpose of the OECD's policy commitment standard.
In recent years, the OECD has adopted instruments specifically addressing HRDD for AI businesses, and thus with impact for online platforms. The Recommendation of the Council on AI, adopted in 2019, 77 stresses that AI businesses should respect the rule of law, human rights and democratic values throughout the AI system lifecycle, including the right to non-discrimination. 78 AI actors should also commit to transparency and explainability to promote a better understanding of AI systems and to enable stakeholders to understand the outcome of decisions led by AI systems. 79 Applying this framework to online platforms, it can similarly be argued that ToS constitute an adequate tool for AI actors to communicate to their users, in a transparent and explainable manner, the automated decision-making algorithms used for large-scale content moderation.
Finally, in 2021, the OECD published its annual publication Business and Finance Outlook 2021: AI in Business and Finance. 80 The potential contribution of automated content moderation to the proliferation of illegal content online is expressly raised as a key concern, and it is suggested that content moderation policies balance freedom of expression with general human rights safeguards (e.g. the right to appeal and to remedy). 81 This instrument reiterates two essential points: the need to implement HRDD throughout the whole cycle of business operations and the need to develop explainable AI systems. 82

EU proposal for a directive on corporate sustainability due diligence
The UN and OECD initiatives are key to introducing HRDD and have significantly influenced the legislative framework on HRDD under development in the EU. 83 In June 2023, the European Parliament adopted a draft detailing many amendments to the CSDDD proposal presented by the European Commission in February 2022. 84 This Directive will be enforced by national authorities and by a European Network of Supervisory Authorities to be set up by the Commission. Although the scope of the CSDDD has been broadened in the latest draft, it remains more limited than the UNGPs and the OECD's guidance, covering EU companies with 250+ employees and a worldwide turnover of over €40 million, and non-EU companies with an equivalent turnover threshold generated in the EU. 85 With regard to companies with lower revenue and fewer employees, the CSDDD extends due diligence responsibilities to some companies operating in 'high-impact sectors', which do not currently include online platforms. 86
The current CSDDD draft would require relevant companies to: 1) integrate HRDD into policies and management systems; 2) identify and assess adverse human rights and environmental impacts; 3) prevent, cease and minimise adverse human rights and environmental impacts; 4) assess the effectiveness of measures; 5) communicate; and 6) provide remedial mechanisms for negative human rights and environmental impacts caused by their own operations, their subsidiaries and their value chains. 87 This places an important emphasis on 'preventive responsibilities' 88 to mitigate or avoid potential harms rather than only taking action once harm has already occurred. 89
Importantly, regarding the CSDDD's conceptualisation of human rights, Annex I focuses on international human rights law, excluding regional European human rights law (i.e.
the ECHR and the CFREU).The CSDDD could have effects beyond EU companies in some situations.However, omitting references to key and legally binding European instruments protecting human rights applicable to all Member States (MS) of the EU whilst referring to non-binding, international standards and international treaties that have not been universally adopted has been criticised. 91pplying this framework to the regulation of hate speech in Europe can lead to legal incoherence because the most concrete attempt to conceptualise hate speech and its most serious forms was developed at the CoE level in CM/Rec(2022)16 (explained above in Section 2.2.2). 75 Ibid 22. 76 84 Ibid and CSDDD (n 13), respectively.As a Directive, when adopted, the CSDDD will be legally binding on EU countries, setting a goal to be attained at the EU level whilst giving individual countries the freedom to decide which laws to adopt to reach such a goal.The move towards mandatory HRDD at the EU level is part of a broader movement towards binding HRDD standards for private companies, including at the international and national level.For  This potential legal incoherence could be addressed preemptively because, although the list in Annex I is restricted to international human rights law, international human rights such as the right to life, liberty and security, 92 the prohibition of inhuman or degrading treatment, 93 the prohibition of discrimination 94 can be interpreted to include the conceptualisation of criminal hate speech as per CM/Rec(2022)16.Part I of Annex I expressly acknowledges the application of inter alia Article 7 ICCPR and, part II of the same Annex expressly acknowledges inter alia the UDHR and the ICCPR. 
95Thus, for online platforms falling under its scope, the CSDDD's provisions could be applicable to online moderation of hate speech.Proposed Recital 22 of the CSDDD reflects this, mentioning that 'the Commission should develop sector-specific guidelines', including in relation to 'the production, provision and distribution of information and communication technologies or related services, including … artificial intelligence, … social media and networking … and other platform services'. 96Interestingly, these are not mentioned in Article 13(1)(a)(c), which suggests specific sectors for which guidelines should be adopted.Nevertheless, online platforms could fall within the scope of Article 13, which is not phrased as constituting an exhaustive list.

EU proposal for an Artificial Intelligence Act
The EU initiated a legislative process to regulate the responsibilities of AI businesses when, in April 2021, the EC proposed a Regulation on harmonised rules on artificial intelligence (AIA). 97 In December 2022, the Council adopted its General Approach to the AIA 98 and, in June 2023, the EP adopted its Draft Compromise Amendments proposed by the Committee on Internal Market and Consumer Protection (IMCO) and by the Committee on Civil Liberties, Justice and Home Affairs (LIBE). 99 Though not framed as a human rights instrument, one of the AIA's objectives is for AI systems to 'ensure a high level of protection of … fundamental rights … from harmful effects of artificial intelligence systems in the Union'. 100 Under the General Approach, AI systems are defined as those 'developed through machine learning approaches and logic- and knowledge-based approaches'. 101 The AIA takes a two-pronged approach of prohibiting certain systems whilst regulating others on the basis of a risk-based approach with three levels of risk posed by AI systems: i) unacceptable-risk AI; ii) high-risk AI; and iii) low or minimal-risk AI.
The potential application of the AIA to content moderation by online platforms can be summarised in two ways. First, AI systems used for content moderation can be prohibited under Article 5(1)(a) of the AIA. This provision prohibits systems that 'deplo[y] subliminal techniques beyond a person's consciousness with the objective to or the effect of materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm'. According to Amnesty International, this was arguably the case when Meta (formerly Facebook) did not remove and even amplified hate speech towards the Rohingya Muslims in Myanmar, potentially contributing to adverse impacts on their human rights. 102 Second, for AI systems falling outside the remit of Article 5 of the AIA, recommender systems in content moderation that impact the administration of justice and democratic processes may end up being considered high-risk AI systems, should the EP's amendments be adopted in the final text. Article 6(3) provides that systems are not high-risk where their output is 'purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights'. 103 This is arguably not the case for content moderation systems that either allow or promote material containing hate speech, which, due to the vast number of posts to be monitored on social media platforms, are subject to minimal human oversight, making the systems more than 'purely accessory' to decisions to remove content.
Content moderation AI systems could also be considered to fall indirectly under the scope of 'high-risk' systems in limited situations. As explained in Section 4.1 below, the DSA prescribes the HRDD responsibility for social media companies to report criminal offences, including criminal hate speech, to law enforcement. When law enforcement agencies use the results of content moderation AI systems to assess the 'risk for a natural person to become a potential victim of criminal offences', which could include physical or psychological harm caused by hate speech, the AI system arguably falls under the scope of Paragraph 6(a) of Annex III, which labels such systems as high-risk.
Should these provisions apply, providers 104 of content moderation systems would be subject to a number of risk-management standards reflecting various elements of HRDD. This includes an obligation to identify and analyse 'known and foreseeable risks most likely to occur … in view of the intended purpose of the system'. 105 Further, Article 9(2)(d) requires providers of high-risk systems to adopt 'suitable risk management measures' to respond to risks. Arguably, this could include a prohibition of criminal hate speech in the ToS of platforms in relation to their content moderation systems.

Specific framework: preventive corporate responsibilities to counter online hate speech
This section examines how preventive HRDD responsibilities apply to the moderation of criminal hate speech in Europe. The growing clarity regarding the European conceptualisation of criminal hate speech, namely with the adoption of CM/Rec(2022)16, enables a common regional understanding of criminal hate speech from which specific HRDD responsibilities can be developed. The main instruments analysed are the DSA, the AVMSD, the EU Code of conduct and Recommendation CM/Rec(2022)16.
While Section 3 clarified that AI businesses, including online platforms, must adopt a policy commitment to respect human rights, this section expands on the preventive HRDD responsibilities. We explain that the current European regulatory system suggests that online platforms should reflect their commitment to human rights in their ToS, including the human rights standards applicable to countering online hate speech, such as the right to respect for private and family life and the prohibition of discrimination.
The EU has developed frameworks regulating content moderation and the HRDD responsibility of online platforms not to host illegal online content, such as hate speech. Video-sharing platforms have a heightened responsibility to explicitly reflect the prohibition of hate speech in their ToS. While such HRDD frameworks take different approaches, they all reflect the importance of including the prohibited content in the ToS, which should be conveyed to users in a clear and transparent manner.

EU Regulation on a Single Market for Digital Services
The most relevant instrument at the EU level articulating HRDD responsibilities for online platforms is the Regulation on a Single Market For Digital Services (DSA), in force since November 2022. 106 The DSA regulates online platforms operating in an online environment under EU territorial jurisdiction and establishes HRDD responsibilities for different online stakeholders, depending on their role, size and impact. 107 Noting that some of the biggest online platforms are based in the USA, the DSA introduces a new regulatory approach as, aside from having to comply with the legal framework in the USA, companies now also have to adapt to European legal standards in operations conducted in the EU. For example, the legal framework on freedom of expression is significantly distinct: the USA gives primacy to the First Amendment, whereas in the EU, though freedom of expression is considered a quintessential human right in democratic societies, the conditions for restrictions in cases of discrimination are expressly prescribed. 108 The DSA sets due diligence responsibilities (Chapter III) for various stakeholders, including for online platforms (Sections 2, 3 and 4) and for VLOPs (Section 5). 109 The DSA provides general instructions to intermediary services to 'diligently regard' fundamental rights of the users as enshrined in the CFREU. 110 This paper examines the HRDD rules in the DSA applicable to online platforms, with a particular focus on the HRDD responsibilities of VLOPs, as these represent the category of stakeholders posing higher risks of disseminating illegal content.

102 Amnesty International, 'Myanmar: The social atrocity: Meta and the right to remedy for the Rohingya' (2022) <https://www.amnesty.org/en/documents/asa16/5933/2022/en/>, and Al Jazeera, 'Rohingya sue Facebook for $150bn for fuelling Myanmar hate speech' (n 7). 103 Lane, 'Preventing long-term risks to human rights in smart cities' (n 67). 104 Article 3(2) defines a 'provider' as 'a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge'. This would cover online platforms. 105 Article 9(2)(a); Lane, 'Preventing long-term risks to human rights in smart cities' (n 67). 106 DSA (n 15). The DSA is a legal instrument in the form of an EU Regulation and directly regulates the means through which Member States must achieve the prescribed goals.
The DSA introduces asymmetric HRDD for VLOPs precisely because of their far reach, high turnover and their ability to comply with stronger HRDD requirements. 111 The heightened HRDD threshold for VLOPs is also reflected in Article 34 of the DSA, which states that VLOPs 'shall identify, analyse and assess (…) any significant systemic risks', which include: '(a) the dissemination of illegal content through their services; and (b) any negative effect for the exercise of the fundamental rights to respect for private and family life, freedom of expression and information, the prohibition of discrimination and the rights of the child, as enshrined in Articles 7, 11, 21 and 24 of the Charter, respectively'. 112 The DSA specifically mentions that the dissemination of hate speech pertains to the first category of systemic risks to be assessed by VLOPs 113 and that it falls under the category of illegal content in the EU. 114 In fact, the Explanatory memorandum clarifies that the DSA builds on the Recommendation on illegal content of 2018, 115 which already mentioned hate speech as illegal content in the EU. Furthermore, the DSA builds upon self-regulatory initiatives such as codes of conduct and other self-regulatory measures, which, in the framework applicable to hate speech, include the Code of conduct on countering illegal hate speech online. 116
A concrete proposal in the DSA for VLOPs to mitigate systemic risks, such as the dissemination of hate speech, is by 'clearly and unambiguously' informing users of ToS as well as remedies and redress mechanisms, 117 adapting their terms and conditions and their enforcement, 118 and 'adapting their content moderation processes, including the speed and quality of processing notices related to specific types of illegal content and, where appropriate, the expeditious removal of, or the disabling of access to, the content notified, in particular in respect of illegal hate speech or cyber violence, as well as adapting any relevant decision making processes and dedicated resources for content moderation'. 119 The inclusion of ToS in Article 35 of the DSA as a mitigatory measure must be interpreted as a recognition that ToS are a key tool to provide legal clarity, foreseeability and transparency to users of VLOPs. As such, ToS represent the ideal communication tool to contain the conceptualisation of what is illegal content and, hence, what is hate speech.
The DSA does not indicate to VLOPs the conceptualisation of illegal content that they should follow in their ToS. We suggest that the conceptualisation of hate speech should be limited to the most serious forms of hate speech, i.e. criminal hate speech. Limiting the requirement to criminal hate speech is justified by the European human rights understanding in CM/Rec(2022)16. The EC has proposed to add hate speech to the list of EU crimes 120 but, in the meantime and while the EU does not conceptualise the main elements of criminal hate speech, CM/Rec(2022)16 summarises the European human rights standards for the conceptualisation of the most serious forms of hate speech.
Additionally, VLOPs should inform their users through their ToS that such criminal offences are reported to competent law enforcement authorities. The DSA prescribes, as a due diligence measure applicable to all providers of hosting services, including online platforms, that criminal offences involving a threat to life be reported to law enforcement or judicial authorities. 121 Given the additional requirements for VLOPs regarding the information and transparency of their ToS, also in the context of cooperation with law enforcement, businesses should utilise ToS to clearly inform their users of the companies' HRDD responsibilities.
Applying this framework to countering online hate speech, it is possible to propose minimum best practices for the improvement of legal clarity and coherence in European human rights frameworks: VLOPs should explicitly mention in their ToS the minimum European human rights elements in the conceptualisation of criminal hate speech and inform users that such speech is removed and reported to law enforcement.
Should the EC add hate speech to the list of EU crimes, 122 it would become imperative for VLOPs to explain to users in their ToS what the framework of criminal hate speech is and what consequences it bears for users posting such illegal content, i.e. referral to law enforcement. Under Article 35(3), the Commission could issue guidelines on best practices, since the abovementioned legal avenues would support businesses' compliance with the transparency and clarity requirements regarding ToS in the DSA, but also, more generally, with their preventive corporate HRDD responsibilities.

EU Audiovisual Media Services Directive
Another relevant EU legal instrument creating HRDD responsibilities for online platforms is the 2018 revised AVMSD. 123 The AVMSD regulates the activity of TV broadcasters, video-on-demand services and video-sharing platforms. With particular relevance for this paper, video-sharing platforms include commercial services devoted to making available to the general public programmes and user-generated videos with the purpose to inform, entertain or educate, shared via the Internet, and where the content organisation is determined by the video-sharing platform (i.e. the service displays, tags and recommends video content to the users). 124 While the AVMSD mostly imposes obligations on Member States in their regulation of audiovisual media services, it also directly establishes minimum standards to be adhered to by businesses. 125 Regarding the prohibited content, the AVMSD initially refers to the conceptualisation of hate speech in Council Framework Decision 2008/913/JHA, which is limited to acts of racism and xenophobia. 126 Nevertheless, similarly to the DSA, the AVMSD expressly prohibits disseminating 'incitement to violence or hatred directed against a group of persons or a member of a group based on any of the grounds referred to in Article 21 of the [CFREU]'. Further, the AVMSD recognises that this reference to the Council Framework Decision should be applied 'to the appropriate extent', and Articles 6(a)(a) and 28(1)(b) AVMSD refer to an expansive interpretation of the protected categories in Article 21 CFREU. 127 It is therefore possible to argue that the AVMSD adopts a broader, intersectional conceptualisation of hate speech.
The AVMSD introduces substantial developments concerning the responsibility framework for video-sharing platforms, including specific measures to comply with the prohibition to host hate speech. Article 28(3) prescribes that video-sharing platform services shall implement compliance measures consisting of including and applying in their terms and conditions the requirements in Articles 28(1) and 9(1). While not specifically phrased as due diligence measures, Article 28(3)(i) and (ii) AVMSD represent a milestone in HRDD as they expressly address the considerable role of ToS in ensuring a clear and transparent tool for a public commitment to respect human rights.

EU Code of conduct on countering illegal hate speech online
In 2016, the EU agreed with some of the largest online platforms on a 'Code of conduct on countering illegal hate speech online' (CoC); of note, Facebook, Twitter and YouTube were amongst the first signatories. 128 The CoC is a co-regulatory instrument which, though resulting in legal consequences if breached, is arguably difficult to enforce. However, it represents a strong acknowledgement of the rise of hatred in the digital environment and likewise symbolises a strong policy commitment from online platforms to counter online hate speech. The very restrictive conceptualisation of hate speech in the CoC was already criticised in Section 2.2 of this paper, particularly in comparison to Article 21 of the CFREU and the AVMSD. We adopt a broader, intersectional interpretation of the impermissible grounds for hate speech.
Concerning the HRDD responsibilities imposed upon such online platforms to counter 'incitement to violence and hateful conduct', the CoC points directly to the relevance of having 'clear information on individual company Rules and Community Guidelines' as a means to improve notices and flagging of said content. 129 This requirement directly follows the preventive HRDD responsibility to commit to respecting human rights in a policy statement and during operations.
Moreover, online platforms signatory to this CoC are required to 'educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines', which is a preventive responsibility to help mitigate and avoid potential risks. Additionally, these businesses must have in place 'clear and effective processes to review notifications', which could function as a more responsive measure to risks that may already have materialised, depending on the actual content of the flagged material. 130 These requirements align with the corporate human rights responsibility to respond to risks by having HRDD procedures in place to identify, prevent, mitigate and account for human rights abuses.

Council of Europe Committee of Ministers' Recommendation CM/ Rec(2022)16
Adopted in May 2022, Recommendation CM/Rec(2022)16 is a milestone achievement in combating online hate speech in Europe. Though not legally binding, CM/Rec(2022)16 represents a clear political pledge of the statutory decision-making body of the CoE: the Committee of Ministers. CM/Rec(2022)16 articulates concrete guidance to all 46 Member States for a comprehensive human rights framework to address hate speech, including in the digital sphere. This means that the guidance provided is inevitably also addressed to the 27 EU Member States. Furthermore, the EU human rights standards draw inspiration from those of the Council of Europe. 131 Paragraph 31 of CM/Rec(2022)16 clearly provides that 'internet intermediaries should ensure that human rights and standards guide their content moderation policies and practices with regard to hate speech, explicitly state that in their terms of service and ensure the greatest possible transparency with regard to those policies, including the mechanisms and criteria for content moderation'. 132 This standard is complemented by CM/Rec(2018)2 on the roles and responsibilities of internet intermediaries, 133 which underlines that internet intermediaries are responsible for respecting human rights and for implementing adequate measures to that end. 134 It adds that intermediaries whose services pose a higher risk of potential adverse impacts on human rights should adopt greater precautionary measures. Again, one example of such precautionary measures is the careful development and application of the ToS. Moreover, CM/Rec(2018)2 stresses the importance of drafting and applying ToS agreements, community standards and content-restriction policies in a transparent fashion. 135 Nevertheless, companies must still comply with their HRDD responsibilities throughout their operations, i.e. the design, development and deployment of content moderation systems.
Finally, CM/Rec(2022)16 also contains significant advances concerning the responsibilities of internet intermediaries to moderate criminal hate speech. Paragraph 32 articulates the responsibility of internet intermediaries to remove only the most severe cases of hate speech, i.e. criminal hate speech. This appears to be more reactive than preventive and is not expressly referred to as a corporate HRDD responsibility. However, it does reflect the HRDD responsibilities found in the UNGPs and the OECD's guidance to take measures to cease or mitigate adverse impacts. 136 Similarly, the suggestion in Paragraph 2.2. that intermediaries report instances of criminal hate speech to public authorities reflects the HRDD measure to provide for or cooperate in remediation when appropriate. 137 Furthermore, Paragraph 18 of CM/Rec(2022)16 clarifies that intermediaries are 'to respect human rights, including the legislation on hate speech, to apply the principles of human rights due diligence throughout their operations and policies, and to take measures in line with existing frameworks and procedures to combat hate speech'. 138 CM/Rec(2022)16 goes beyond the responsibility to remove criminal hate speech, adding that 'internet intermediaries, including social media, should review their online advertising systems and the use of micro-targeting, content amplification and recommendation systems and the underlying data-collection strategies to ensure that they do not, directly or indirectly, promote or incentivise the dissemination of hate speech.' 139 This invites corporations to conduct a full revision of their business models to ensure that their content moderation algorithms are specifically designed not to disseminate, recommend or profit from hate speech.

125 E.g. AVMSD (n 16), Article 28. 127 AVMSD (n 16), Recital 17. Furthermore, Article 9(1)(c) AVMSD stipulates that 'audiovisual commercial communications shall not (i) prejudice respect for human dignity; [or] (ii) include or promote any discrimination based on sex, racial or ethnic origin, nationality, religion or belief, disability, age or sexual orientation'. 128 CoC (n 18). 129 Ibid 2. 130 Ibid. Aside from these main HRDD responsibilities, signatories to the CoC must also comply with additional responsibilities such as ensuring that there are civil society organisations fulfilling the role of 'trusted flaggers' and providing regular training on hate speech policies to their staff, 3. 131 Even in the CoC, the parties refer to the jurisprudence of the ECtHR. See footnote 1 of the CoC. 132 Emphasis added. 133 Council of Europe Committee of Ministers, Recommendation CM/Rec(2018)2 to Member States on the roles and responsibilities of internet intermediaries. 134 Ibid paragraph 2.1.2. 135 Ibid paragraph 2.2.2. This can be accomplished, for example, by starting to involve human rights experts in the drafting of ToS.

E. Nave and L. Lane

Proposal of a legal standard
The analysis of the European regulatory and policy human rights framework on criminal hate speech (Section 2) and of the corporate preventive HRDD responsibilities to respect human rights (Section 3) and to counter hate speech (Section 4) suggests that online platforms should reflect, as a minimum legal standard, the most serious cases of hate speech, i.e. criminal hate speech, in their ToS. To clarify, this paper does not take the position that the existing legal framework fails to regulate criminal hate speech on online platforms. Instead, it claims that the human rights framework on criminal hate speech and the HRDD framework applicable to online platforms do not directly explain their implications for the drafting of the ToS of online platforms. Nevertheless, a combined analysis of these human rights frameworks reveals the human rights implications for the drafting of ToS explained above.
The proposed standard should currently be a recommendation of best practice for AI businesses generally, 140 but it can be interpreted as a mandatory legal standard specifically for VLOPs, video-sharing platform services and companies bound by the CSDDD. According to Article 35(3) of the DSA and Article 13 of the CSDDD, the EC could issue guidelines clarifying that VLOPs, video-sharing platforms and platforms falling under the scope of the CSDDD should explicitly mention in their ToS that they prohibit, remove and report criminal hate speech to the relevant public authorities. Fig. 2 shows that this legal standard should be implemented in the initial phase of designing the policies and management systems of an AI business.

Case studies: compliance of 'Terms of service' with the conceptualization of criminal hate speech
The previous sections clarified that this paper builds upon the conceptualisation of the most serious forms of hate speech (i.e. criminal hate speech) as reflected in CM/Rec(2022)16 (Section 2), as well as the application of the corporate HRDD responsibilities of AI businesses to online platforms (Section 3), with a particular focus on how companies should conceptualise hate speech in their ToS (Section 4). Section 5 presents an empirical qualitative analysis of three case studies assessing the compliance of online platforms' ToS with the understanding of the most serious forms of hate speech as prescribed in Paragraph 11 of CM/Rec(2022)16 (cited in Section 2.2.2). 141 The online platforms studied are Facebook (Section 5.1), Twitter (Section 5.2) and YouTube (Section 5.3). These platforms were selected based on the following criteria: (1) they fall under the scope of the CSDDD; 142 (2) they are signatories to the CoC; and (3) they qualify as VLOPs as defined in the DSA. As a video-sharing platform, YouTube will also be evaluated in light of the standards in the AVMSD. This section contains additional considerations as to whether the evolving nature of Facebook and even Twitter, as platforms storing and suggesting large amounts of user-generated videos, qualifies them to be assessed in light of the AVMSD.

Facebook
Facebook is a social media platform owned by Meta Platforms, based in the United States of America (USA). Facebook has close to 3 billion users, 143 with over 250 million active in Europe, 144 and, in October 2022, it was ranked the third most visited website worldwide. 145 As of August 2022, the platform had a turnover of over $100 billion. 146 This company falls under the scope of the CSDDD and qualifies as a VLOP under the DSA. In addition, with developments witnessed in recent years whereby Facebook's 'service displays, tags and recommends video content to the users', it arguably also qualifies as a video-sharing platform service under the AVMSD. 147

136 OECD 2018 HRDD Guidance (n 74); UNGPs (n 12); Section 3 of this paper. 137 Ibid. 138 Regarding the responsibilities of intermediaries moderating hate speech prohibited under civil or administrative law, though the default approach for the responsibility of removal advocated in CM/Rec(2022)16 is strictly invoked in cases of criminal hate speech, CM/Rec(2022)16 prescribes that intermediaries deprioritise and contextualise (paragraph 22) and publish transparent reports with disaggregated and comprehensive data on hate speech cases and restrictions (paragraph 25). 139 Ibid paragraph 36 [emphasis added]. 140 The recommendation for 'more explicit and specialised guidance' on human rights for AI businesses is stressed by Lane, 'Artificial Intelligence and Human Rights' (n 79). 141 As explained in Section 4.1, though these online platforms are based in the USA and thus typically bound by the USA legal frameworks on freedom of expression, the adoption of most notably the DSA formalises the need for companies operating in the EU territorial jurisdiction to comply with the European regional legal frameworks.
Facebook expressly prohibits hate speech in its Community Standards, defining it as 'a direct attack against people - rather than concepts or institutions - on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease'. 148 It proceeds by providing a definition of attacks as 'violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation'. 149 It goes on to expressly prohibit the use of harmful stereotypes, which it defines as 'dehumanizing comparisons that have historically been used to attack, intimidate, or exclude specific groups, and that are often linked with offline violence'. 150 Facebook classifies the severity of hate speech into three 'tiers'. 151 Tier 1 designates the most serious and Tier 3 the least serious forms of hate speech. Tier 1 broadly encompasses hate speech which is violent, which establishes dehumanising comparisons or which mocks the concept, events or victims of hate crimes. Tier 2 generally covers hate speech that states inferiority or expresses contempt, dismissal, disgust or cursing with the intent to insult. Tier 3 largely encompasses hate speech that excludes or segregates on the basis of protected characteristics. These detailed explanations are provided in the Community Standards, which are embedded in the website of the Meta Transparency Center and could therefore be less accessible to users with less digital literacy. Nevertheless, the prohibition of hate speech is also clearly indicated in Facebook's Help Center, where the prohibition is phrased as covering 'hate speech, credible threats or direct attacks on an individual or group'. 152 The Help Center is embedded in Facebook's website and links directly to the Meta Transparency Center website where the abovementioned detailed explanation is provided. It is thus possible to conclude that users are well informed about where to access information about Facebook's prohibition of hate speech.
Three key remarks can be made regarding the compliance of Facebook's conceptualisation of hate speech with European human rights standards. First, Facebook does not adopt an open-ended list of protected categories of people, though it recognises that hate speech serves to perpetuate historical oppression. 153 This falls short of the conceptualisation of hate speech that we suggested in Section 2 in line with European human rights law, which should explicitly include historically marginalised groups in contexts where hateful expressions are conveyed. 154 Under Tier 1, Facebook excludes from protection people who 'carried out violent crimes or sexual offenses or representing less than half of a group'. This does not provide an authoritative source dictating whether certain people committed a crime or not. In fact, it was this conceptualisation that enabled Facebook to amplify false allegations that two Muslim tea shop owners had raped a Buddhist girl, fuelling the violence against the Rohingya Muslim community in Myanmar. 155 This clearly violates European human rights standards 156 and should not feature in the company's Community Guidelines. Further, if understood as permission for gender-based discrimination, Facebook's acceptance under Tier 2 of 'certain gender-based cursing in a romantic break-up' violates numerous regional and international human rights treaties (e.g. the CFREU, 157 the CoE Convention on preventing and combating violence against women and domestic violence (also known as the Istanbul Convention), 158 the CoE Convention on Cybercrime (also known as the Budapest Convention), 159 and the UN Convention on the Elimination of all Forms of Discrimination Against Women 160).

Fig. 2. OECD Due diligence process, including precautionary measures to counter criminal hate speech.

147 AVMSD, Article 1. 148 Facebook Community Standards, Hate speech (2023) <https://transparency.fb.com/en-gb/policies/community-standards/hate-speech/> accessed 27 September 2023. 149 Ibid. 150 Ibid. 151 Ibid. 152 Policies and reporting, Legal removal request, What types of things aren't allowed on Facebook? (2023) <https://www.facebook.com/help/212826392083694> accessed 27 September 2023. 153 In fact, the disregard for historical oppression and the purely quantitative threshold of considering absolute numbers of people targeted by hate speech as the main metric has led to content moderation decisions violating human rights standards. ProPublica, Facebook's Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children (2017) <https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms> accessed 27 September 2023. 154 The possibility for gender-based violence on Facebook according to its Community Guidelines was further demonstrated by Audrey Carlsen and Fahima Haque, who showed that the statement 'Female sports reporters need to be hit in the head with hockey pucks' would not be considered hate speech. The New York Times, What Does Facebook Consider Hate Speech? Take Our Quiz (2017) <https://www.nytimes.com/interactive/2017/10/13/technology/facebook-hate-speech-quiz.html> accessed 27 September 2023.
Second, from the perspective of the right to freedom of expression and concerning the types of prohibited acts, it is positive to note that Facebook may consider content with intentional satirical tones not to constitute hate speech, as satire is often used as counter-speech by people targeted by hate speech.161 This acknowledgement aligns with the principle set out in the ECtHR's Handyside v United Kingdom judgement that, to support pluralism, tolerance and broadmindedness, 'freedom of expression […] is applicable not only to "information" or "ideas" that are favourably received or regarded as inoffensive or as a matter of indifference but also to those that offend, shock or disturb the State or any sector of the population.'162

Third, the classification of hate speech under three Tiers fails to align with the conceptualisation of the most serious forms of hate speech as per CM/Rec(2022)16, and the consequences of the classification are not mentioned. Facebook clarifies in its Transparency Center that 'severity' is 'the likelihood that the content could lead to harm both offline and online' and that this is a key factor determining which content the human review teams review first.163 However, it is not explained whether there is a standard action for review within each Tier. This framework does not align with the standard of differentiating the elements of criminal hate speech as articulated in Paragraph 11 of CM/Rec(2022)16. Moreover, it fails to align with Paragraph 32 of CM/Rec(2022)16, which requires that online platforms remove only the most serious forms of hate speech.
Regarding HRDD, Facebook's Transparency Center does include a section dedicated to 'Enforcement' that explains how technology and review teams detect and review potentially violating content and accounts.164 This communication arguably resembles the HRDD phase of identifying and assessing adverse impacts on human rights caused during the business's operations. In addition, Meta clarifies that it follows a three-part approach to content enforcement encompassing 'removal, reduction and information', reflecting the HRDD responsibilities to cease, prevent, mitigate and communicate adverse impacts on human rights. Nevertheless, while Facebook's enforcement process does resemble the international and regional corporate HRDD standards, there is no formal recognition of this legal inspiration. Such recognition would be important not only to acknowledge the influence of HRDD on the company's internal enforcement processes but also to encourage other platforms to follow the HRDD framework.

Twitter
Twitter, Inc. is a social media platform based in the USA, ranked in January 2023 as the fourth most visited website worldwide.165 The platform has around 1.3 billion accounts,166 with over 100 million active in Europe.167 In 2022, it had a turnover of $4.4 billion.168 The company qualifies as a VLOP as per the DSA and also falls under the CSDDD.
Twitter prohibits 'hateful conduct', which it defines as the promotion or incitement of 'violence against or directly attack[ing] or threaten[ing] other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease'.169 Additionally, it prohibits the display of 'hateful imagery and display names', described as 'hateful images or symbols in [a user's] profile image or profile header'.170

In assessing the compliance of the conceptualisation of hate speech in Twitter's Community Guidelines with European human rights standards, three aspects are worth discussing. First, Twitter does not articulate an open-ended list of protected categories, as it limits protection against hate speech to people targeted on the basis of the characteristics listed above. This may fail to protect people belonging to a historically oppressed or marginalised group, who should be protected in order to follow an approach promoting legal coherence within European human rights law (see Section 5.1 above). Nevertheless, Twitter does acknowledge that its policy to counter hate speech seeks to combat 'abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalised', which aligns with the conceptualisation of hate speech by Matsuda.171 Twitter also allows slurs used between groups if not intended to be hateful and if used as 'a means to reclaim terms that were historically used to demean individuals'.172 This policy aligns with the standard in CM/Rec(2022)16 on platforms' crucial role in promoting counter-speech and alternative speech.173

Second, with respect to the types of prohibited acts, Twitter requires that threats be 'violent' and slurs be 'repeated'.174 These requirements do not align with the European human rights standards on criminal hate speech. However, the severity (scale of violence) and frequency or reach (scale of relevance) could be interpreted as relevant criteria for the contextualisation of hate speech prohibited under civil or administrative law (as explained in Section 2.2.1). Additionally, although Twitter prohibits references to genocides, it does not prohibit the denial or trivialisation of other war crimes or crimes against humanity, as recommended in CM/Rec(2022)16.
Third, Twitter does not clarify what measure(s) it will take against prohibited hate speech, as it merely acknowledges that hate speech 'may' be removed.175 Hence, Twitter's conceptualisation of hate speech and human rights commitments in its ToS do not align with European human rights standards, which require differentiating the elements of criminal hate speech and the removal of criminal hate speech.176

Concerning HRDD, similarly to Facebook, Twitter has a dedicated website explaining how it enforces its policies.177 However, this communication focuses more on the types of enforcement measures than on the enforcement process (the latter ideally relating to the HRDD process). It is therefore unclear how Twitter approaches its HRDD responsibilities, since it does not inform users about the processes in place to 'identify, mitigate and cease' potentially adverse impacts on human rights.

YouTube
YouTube, owned by Google LLC, is a video-sharing social media platform based in the USA, ranked the second most visited website as of October 2022.178 As of January 2023, YouTube had over 2.5 billion users,179 with over 400 million active in Europe,180 and, as of February 2022, a revenue of $28.8 billion.181 The company qualifies as a VLOP as per the DSA and as a video-sharing platform service as per the AVMSD, and falls under the CSDDD.
YouTube prohibits hate speech, which it defines as 'content promoting violence or hatred against individuals or groups based on any of the following attributes: age; caste; disability; ethnicity; gender identity and expression; nationality; race; immigration status; religion; sex/gender; sexual orientation; victims of a major violent event and their kin; veteran status.'182 In reviewing YouTube's conceptualisation of hate speech against European human rights standards on countering online hate speech, three comments are in order. First, similar to Facebook and Twitter, YouTube does not adopt an open-ended list of categories protected from hate speech and therefore fails to align with the open-ended interpretation of protected categories articulated in Article 21 of the CFREU.
Second, YouTube broadly defines hate speech acts as 'encouragement to violence, threats, and incitement to hatred'.183 Under 'other types of content', YouTube clarifies that dehumanising individuals, alleging their superiority, or calling for their subjugation or domination is prohibited. This aligns with Paragraph 11 of CM/Rec(2022)16, which also considers discrimination one of the most serious forms of hate speech. Nevertheless, similar to the Guidelines of Twitter, YouTube does not prohibit the denial or trivialisation of war crimes or crimes against humanity, as recommended in CM/Rec(2022)16. Additionally, YouTube allows hate speech if used for educational purposes.184 In this context, considering the European human rights standards stemming from the ECtHR decision in Roj TV A/S v. Denmark, it should be noted that 'the one-sided coverage with repetitive incitement to participate in fights and actions, incitement to join the organisation/the guerrilla, and the portrayal of deceased guerrilla members as heroes, amounted to propaganda for (…) a terrorist organisation'. Hence, for hate speech to be intended as educational, the authors of such posts must expressly demonstrate that intent and disassociate themselves from the hateful messages. The wrongful implementation of this policy by YouTube came under close scrutiny in 2021, when Syrian activists denouncing air strikes and militant takeovers saw their videos removed by automated AI content moderation tools.186 Hence, a clarification aligned with the standards explained in Roj TV A/S v. Denmark would contribute to a clearer and more coherent legal framework upholding fundamental rights and protecting human rights activists.187

Third, similarly to Facebook and Twitter, YouTube does not clarify what happens to hate speech posted on its platform. YouTube's policies provide incoherent explanations, ranging from 'in some rare cases, we may remove content', followed by a requirement of 'repetition' of abusive behaviour,188 to informing users shortly afterwards that content violating the hate speech policy will be 'removed'. This framework is unclear and does not align with CM/Rec(2022)16, which expressly requires platforms to remove criminal hate speech. Nevertheless, it should be noted that YouTube informs users that it may 'limit features' when content comes close to hate speech. This policy aligns with Paragraphs 22 and 23 of CM/Rec(2022)16, which require alternative measures (aside from removal) for hate speech that is not criminal, such as deprioritisation or contextualisation.
Like Twitter, YouTube has a dedicated website communicating its enforcement guidelines,189 but it does not address the HRDD process. YouTube is arguably one step behind Twitter, since it seems to place the burden of identifying adverse impacts on human rights on its users rather than on the company itself. This is visible in the 'Reporting' section, which is structured so as to reflect only the options through which users report inappropriate content, as opposed to explaining how YouTube itself, proactively and independently of its users, puts HRDD systems in place to identify adverse impacts on human rights. Table 1 below summarises the findings on the case studies' compliance with the conceptualisation of criminal hate speech.

Conclusion
This paper addresses a vacuum in the legal framework by clarifying corporate human rights responsibilities in Europe to counter the most serious forms of online hate speech. Following (emerging) standards on HRDD, AI and online content moderation at the international and European level, we claim that there is a legal standard emanating from the HRDD framework in the European context prescribing the responsibility of online platforms, particularly VLOPs, video-sharing platforms and platforms under the scope of the CSDDD, to align their ToS with the conceptualisation of criminal hate speech as explained in European human rights standards. Further, ToS should explicitly reflect the HRDD responsibilities to prohibit, remove and report criminal hate speech to relevant public authorities. ToS can be considered a human rights 'policy commitment' when they include a clear explanation of the platform's commitments to human rights, including the prohibition of criminal hate speech (Section 3). This HRDD measure could also form part of the ongoing preventive HRDD responsibilities to address potentially adverse impacts on human rights.
The limitation of the requirement to harmonise and reflect the conceptualisation of criminal hate speech is justified by a growing European human rights understanding of criminal hate speech, as reflected in CM/Rec(2022)16, from which specific HRDD responsibilities can be developed. It is worth remembering that the EC proposed to add hate speech to the list of EU crimes, which, if and when this proposal materialises, will strengthen the need for a standardised conceptualisation of criminal hate speech in online platforms' ToS. This legal avenue supports compliance with the transparency and clarity required of Terms and Conditions (Article 14 of the DSA), generally imposed on all providers of intermediary services. To follow an approach of legal coherence, platforms should explicitly conceptualise hate speech in a manner that protects an open-ended list of characteristics that have been historically subject to oppression (Section 2). This conceptualisation specifically addresses the rights of people or groups of people who have been and remain marginalised members of society.
The three case studies in Section 5 demonstrate that although Facebook, Twitter and YouTube have each adopted ToS prohibiting hate speech to a certain degree, none of them currently conceptualises hate speech in a way that is consistent with European human rights standards. More specifically, none recognises the difference between prohibited hate speech and criminal hate speech, nor the specific HRDD responsibilities associated with countering criminal hate speech. Furthermore, the three case studies reveal the lack of alignment of the platforms' content moderation practices with the HRDD responsibilities to identify, mitigate, cease, remedy and inform about potentially adverse impacts on human rights.
Addressing law- and policy-makers, we also recommend that the EC issue a best practice guideline (under Article 35(3) of the DSA and Article 13 of the CSDDD) suggesting that VLOPs, and particularly video-sharing platforms, explicitly mention in their ToS that they prohibit, remove and report to law enforcement authorities criminal hate speech in line with the conceptualisation in Paragraph 11 of CM/Rec(2022)16. Further to this, and also by issuing a best practice guideline, we recommend that the EC suggest that VLOPs, with a similar heightened focus on video-sharing platforms, adopt HRDD-compliant content moderation processes, which should likewise be explicitly mentioned in their ToS.
This paper has primarily addressed the first phase of HRDD processes, i.e. the adoption of a policy commitment as a preventive HRDD responsibility. Further research is necessary to examine what could be required in the remaining phases of HRDD, i.e. tracking and communicating implementation and results, as well as the provision of remedies where applicable. For example, what online platforms moderating content should do to identify and prevent the promotion of criminal hate speech, and how they could effectively respond to these risks, should be the subject of further study.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1. OECD Due diligence process and supporting measures.74

Specifically, is there a legal standard emanating from the European HRDD framework prescribing the responsibility for online platforms19 to align their ToS, as a minimum legal standard, with the conceptualisation of criminal hate speech as explained in European human rights standards, in particular Recommendation CM/Rec(2022)16? To answer this research question, Section 2 employs doctrinal research to clarify the elements of the most serious cases of hate speech regulated by criminal law. The legal framework of criminal hate speech presents a common European understanding under which specific HRDD can be required of online platforms. As explained in CM/Rec(2022)16, the most serious cases of hate speech represent a criminally actionable violation of rights under Article 17 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR).

Table 1
Case studies' compliance with the conceptualisation of criminal hate speech.