User:Iluvcats34/Online hate speech

Introduction

Online hate speech is a type of speech that takes place online with the purpose of attacking a person or a group based on their race, religion, ethnic origin, sexual orientation, disability, or gender.[1]

Online hate speech expresses conflicts between different groups within and across societies, and it is a vivid example of how the Internet creates both opportunities and challenges in balancing freedom of expression with the defense of human dignity.[2]

Multi-stakeholder processes (e.g. the Rabat Plan of Action) have tried to bring greater clarity and have suggested mechanisms for identifying hateful messages. Yet hate speech remains a generic term in everyday discourse, mixing concrete threats to individuals or groups with cases in which people may simply be venting their anger against authority.

The Internet's speed and reach make it difficult for governments to enforce national legislation in the virtual world. Social media platforms are private spaces for public expression, which makes them difficult for regulators to oversee. Some of the companies that own these spaces have become more responsive to tackling the problem of online hate speech.[2]

Politicians, activists, and academics discuss the character of online hate speech and its relation to offline speech and action, but the debates tend to be removed from systematic empirical evidence.

Online hate speech has been on the rise since the start of 2020, fueled by COVID-19 tensions, anti-Asian rhetoric, ongoing racial injustice, mass civil unrest, violence, and the 2020 United States presidential election. In the United States, many instances of online hate speech are protected under the First Amendment, which allows them to continue.

Definition

Hate Speech

Hate speech lies at the controversial intersection of freedom of expression; individual, group, and minority rights; and concepts of dignity, liberty, and equality.[2]

In national and international legislation, hate speech refers to expressions that advocate incitement to harm, particularly discrimination, hostility, or violence, based upon the targets' social and/or demographic identity. Hate speech may include, but is not limited to, speech that advocates, threatens, or encourages violent acts.

Hate speech contains two types of messages. The first is addressed to the targeted group and functions to dehumanize and diminish its members.

The second message lets others with similar opinions know they are not alone, and it reinforces a sense of an in-group that is (purportedly) under threat.

Characteristics

The proliferation of hate speech online, observed by the UN Human Rights Council Special Rapporteur on minority issues, poses a new set of challenges.[3]

These challenges relate to its permanence, itinerancy, anonymity, and complex cross-jurisdictional character.

Facebook, for example, may allow multiple threads to continue in parallel and go unnoticed, creating longer-lasting spaces that offend, discriminate against, and ridicule certain individuals and groups.[2]

The itinerant nature of hate speech also means that poorly formulated thoughts, or behavior under the influence, that would not have found public expression and support in the past may now land in spaces where they are visible to large audiences.[2]

Anonymity can also present a challenge to dealing with online hate speech. China and South Korea enforce real-name policies for social media. Facebook, LinkedIn, and Quora have sought to activate real-name systems to gain more control over online hate speech.

Many instances of online hate speech are posted by Internet "trolls," typically pseudonymous[4] users who post shocking, vulgar, and generally untrue content meant to trigger a negative reaction from people, as well as to trick, influence, and sometimes recruit those who share the same opinions.[5] Social media has provided a platform for radical or extremist political groups to form, network, and collaborate to spread anti-establishment and anti-political-correctness messages, and to promote ideologies that are racist, anti-feminist, homophobic, and so on.[6] Fully anonymous online communication is rare, as it requires the user to employ highly technical measures to ensure that they cannot be easily identified.[2]

While Mutual Legal Assistance treaties are in place across Europe, Asia, and North America, these are characteristically slow to work. The transnational reach of many private-sector Internet intermediaries may provide a more effective channel for resolving issues in some cases, although these bodies are also often affected by cross-jurisdictional appeals for data (such as revealing the identity of the author(s) of particular content).[2] Each country has a different understanding of what constitutes hate speech, making it difficult to prosecute perpetrators of online hate speech, especially in the United States, where there is a deep-rooted constitutional commitment to freedom of speech.[7]

Unlike hate speech disseminated through conventional channels, online hate speech dissemination involves multiple actors, whether knowingly or not. When perpetrators disseminate their hateful messages on social media, they not only hurt their victims but may also violate the platform's terms of service and, at times, state law, depending on their location. The victims, for their part, may feel helpless in the face of online harassment, not knowing to whom they should turn for help. Nongovernmental organizations and lobby groups have been raising awareness and encouraging different stakeholders to take action.[2]

Some tech companies, such as Facebook, use artificial intelligence (AI) systems to monitor hate speech.[8] However, AI may not always be effective at monitoring hate speech, since the systems lack the emotional and judgment skills that humans have.[9] For example, a user might post or comment something that qualifies as hate speech or violates community guidelines, but if the target word is misspelled, or some letters are replaced with symbols, the AI systems will not recognize it. Therefore, humans still have to monitor the AI systems that monitor hate speech, a concept referred to as "Automation's Last Mile."[9]
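The evasion problem can be illustrated with a minimal sketch in Python. The blocked term, the substitution map, and both filter functions below are hypothetical simplifications for illustration, not the actual systems used by Facebook or any other platform:

```python
# A naive keyword filter and the symbol-swap trick that defeats it.
# The blocked term and substitution map are hypothetical examples.

BLOCKED_TERMS = {"hateword"}  # placeholder for a real deny-list entry

# Common character substitutions used to evade keyword matching.
SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "3": "e", "1": "i", "$": "s"})

def naive_filter(text: str) -> bool:
    """Flags text only if a blocked term appears verbatim."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def normalized_filter(text: str) -> bool:
    """Undoes simple symbol swaps before matching, catching some evasions."""
    normalized = text.lower().translate(SUBSTITUTIONS)
    return any(term in normalized for term in BLOCKED_TERMS)

print(naive_filter("h@teword"))       # False: the symbol swap slips past
print(normalized_filter("h@teword"))  # True: normalization recovers the term
```

Even with normalization of common symbol swaps, novel misspellings and coded language still slip through, which is why human reviewers remain part of the moderation pipeline.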

Frameworks

Stormfront Precedent

Stormfront, launched in March 1995 by a former Ku Klux Klan leader, quickly became a popular space for discussing ideas related to Neo-Nazism, White nationalism, and White separatism, first in the United States and then globally.[10] The forum hosts calls for a racial holy war and incitement to use violence to resist immigration,[10] and it is considered a space for recruiting activists and possibly coordinating violent acts.[11] The few studies that have explored the identities of Stormfront users depict a more complex picture than that of a space for coordinating action: well-known extreme-right activists have dismissed the forum as just a gathering place for "keyboard warriors."

International Principles

The International Covenant on Civil and Political Rights (ICCPR) addresses hate speech, containing the right to freedom of expression in Article 19 and the prohibition of advocacy of hatred that constitutes incitement to discrimination, hostility, or violence in Article 20.[12] The 1948 Universal Declaration of Human Rights (UDHR), drafted in response to the atrocities of World War II, contains the right to equal protection under the law in Article 7, which proclaims: "All are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination."[12] The UDHR also states that everyone has the right to freedom of expression, which includes "freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."[12]

Hate Speech and ICCPR

The ICCPR is the legal instrument most commonly referred to in debates on hate speech and its regulation, although it does not explicitly use the term "hate speech." Article 19, often referred to as part of the "core of the Covenant,"[13] provides for the right to freedom of expression. It sets out the right and also includes the general strictures to which any limitation of the right must conform in order to be legitimate. Article 19 is followed by Article 20, which expressly limits freedom of expression in cases of "advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence."[14]

Regional Responses

American Convention on Human Rights

The Inter-American Human Rights System differs from the United Nations and European approaches on a key point: it covers and restricts only hate speech that leads to violence.[15]

African Charter on Human and Peoples' Rights

The African Charter on Human and Peoples' Rights takes a different approach in Article 9(2), allowing for restrictions on rights as long as they are "within the law." This concept has been criticized, and there is a vast amount of legal scholarship on these so-called "claw-back" clauses and their interpretation.[16]

Private Spaces

Search engines, while they can modify search results for self-regulatory or commercial reasons, have increasingly tended to adapt to the intermediary liability regimes of both their registered home jurisdictions and the other jurisdictions in which they provide their services, removing links to content either proactively or upon request by authorities.[17]

According to the United Nations Guiding Principles on Business and Human Rights, companies should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved.[18] The Guiding Principles also indicate that in cases in which human rights are violated, companies should "provide for or cooperate in their remediation through legitimate processes."[18]

Social Responses

Case Studies

In July 2020, the Pew Research Center surveyed over 10,000 adults to study social media's effect on politics and social justice activism. Among respondents, all adult social media users, 23% reported that social media content had caused them to change their opinion, positively or negatively, on a political or social issue.[19] Of those, 35% cited the Black Lives Matter movement, police reform, and/or race relations;[19] 18% reported a change of opinion on political parties, ideologies, politicians, and/or President Donald Trump;[19] 9% cited social justice issues, such as LGBTQIA+ rights, feminism, and immigration;[19] 8% changed their opinion on the coronavirus pandemic; and 10% cited other topics.[19] These results suggest that social media plays a meaningful role in shaping public opinion.

Media Manipulation and Disinformation Online

A study conducted by researchers Alice Marwick and Rebecca Lewis examined media manipulation and explored how the alt-right marketed, networked, and collaborated to spread its controversial beliefs, efforts that may have helped influence Donald Trump's victory in the 2016 election. Unlike mainstream media, the alt-right does not need to comply with any rules when it comes to influence and need not worry about network ratings, audience reviews, or sensationalism.[6] Alt-right groups can share and persuade others of their controversial beliefs as bluntly and brashly as they desire, on any platform, which may have played a role in the 2016 election. Although the study could not determine the exact effect on the election, it provided extensive research on the characteristics of media manipulation and trolling.[6]

Hate Speech and Linguistic Profiling in Online Gaming

Professor and gamer Kishonna L. Gray studied intersectional oppressions in the online gaming community and called on Microsoft and game developers to "critically assess the experiences of non-traditional gamers in online communities...recognize diversity...[and that] the default gaming population are deploying hegemonic whiteness and masculinity to the detriment of non-white and/or non-male users within the space."[20] Gray examined sexism and racism in the online gaming community, where gamers use linguistic profiling to try to identify the gender, sexuality, and ethnic background of teammates and opponents they cannot see.[20] Due to the intense atmosphere of the virtual gaming sphere, and the inability to be seen, located, or physically confronted, gamers tend to say things in a virtual game that they would likely not say in a public setting. Many gamers from marginalized communities have branched off from the global gaming network and joined "clans" consisting only of gamers of the same gender, sexuality, and/or ethnic identity, in order to avoid discrimination while gaming. One study found that 78 percent of all online gamers play in "guilds," smaller groups of players similar to "clans."[21] One of the most notable clans, Puerto Reekan Killaz, has created an online gaming space where Black and Latina women of the LGBTQIA+ community can play without risk of racism, nativism, homophobia, sexism, and sexual harassment.[20]

In addition to hate speech, professor and gamer Lisa Nakamura found that many gamers have experienced identity tourism, in which a person or group appropriates the identity of another group and pretends to be its members; Nakamura observed white male gamers playing as Japanese "geisha" women.[22] Identity tourism often leads to stereotyping, discrimination, and cultural appropriation.[22] Nakamura called on the online gaming community to recognize "cybertyping," "the way the Internet propagates, disseminates, and commodifies images of race and racism."[23]

Anti-Chinese Rhetoric Employed by Perpetrators of Anti-Asian Hate

As of August 2020, over 2,500 Asian Americans had reported experiencing racism fueled by COVID-19, with 30.5% of those cases containing anti-Chinese rhetoric, according to Stop AAPI Hate. The language used in these incidents is divided into five categories: virulent animosity, scapegoating of China, anti-immigrant nativism, racist characterizations of Chinese people, and racial slurs. 60.4% of the reported incidents fit the virulent animosity category, which includes phrases such as "get your Chinese a** away from me!"[24]

Myanmar

Internet access in Myanmar has grown at unprecedented rates as the country transitions toward greater openness, a shift that has also enabled negative uses of social media, such as hate speech and calls to violence.[25] In 2014, the UN Human Rights Council Special Rapporteur on minority issues expressed her concern over the spread of misinformation, hate speech, and incitement to violence, discrimination, and hostility in the media and on the Internet, particularly targeted against a minority community.[3]

As commentaries on these campaigns have pointed out, such global responses may have negative repercussions on the ability to find local solutions.[26]

Ethiopia

The long-lived ethnic rivalry in Ethiopia between the Oromo people and the Amhara people found a battleground on Facebook, leading to hate speech, threats, disinformation, and even deaths. Facebook does not have fact-checkers who speak the dominant languages of Ethiopia, nor does it provide translations of its Community Standards, so hate speech on Facebook in Ethiopia goes largely unmonitored. Instead, Facebook relies on activists to flag potential hate speech and disinformation, but many of those activists have burned out and feel mistreated.[27]

In October 2019, Ethiopian activist Jawar Mohammed falsely announced on Facebook that the police were going to detain him, citing religious and ethnic tension. The announcement prompted the community to protest his alleged detainment and the surrounding ethnic tensions, which led to over 70 deaths.[28]

A disinformation campaign centering on the popular Ethiopian singer Hachalu Hundessa, a member of the Oromo ethnic group, originated on Facebook. The posts accused Hundessa of supporting the controversial Prime Minister Abiy Ahmed, whom Oromo nationalists disapproved of for his catering to other ethnic groups. Hundessa was assassinated in June 2020 following these hateful Facebook posts, prompting public outrage. In a long thread of hateful content, Facebook users blamed the Amhara people for Hundessa's assassination without any evidence.[27] According to The Network Against Hate Speech, many Facebook posts called for "genocidal attacks against an ethnic group or a religion — or both at the same time; and ordering people to burn civilians' properties, kill them brutally, and displace them."[27] The violence in the streets and on Facebook escalated to the point that the Ethiopian government shut down the Internet for three weeks. People in neighboring countries, however, could still post and access the hateful content, while the volunteer activists could not get online to flag hate speech. As a result, "there are hours of video that came from the diaspora community, extremist content, saying we need to exterminate this ethnic group," according to Professor Endalk Chala of Hamline University.[27]

Facebook officials traveled to Ethiopia to investigate but did not release their findings. Facebook announced that it is hiring moderators who speak Amharic and other Ethiopian languages, but it did not provide extensive detail.[27]

Private Companies

YouTube

YouTube, a subsidiary of the tech company Google, allows easy content distribution and access for any content creator, which creates opportunities for audiences to access content that shifts right or left of the "moderate" ideology common in mainstream media.[29] YouTube provides incentives to popular content creators, prompting some to optimize the YouTube experience and post shock-value content that may promote extremist, hateful ideas.[29][30] Content diversity and monetization on YouTube direct a broad audience toward potentially harmful content from extremists.[29][30] YouTube allows creators to brand themselves personally, making it easy for young subscribers to form parasocial relationships with them and act as "regular" customers.[30] In 2019, YouTube demonetized political accounts,[31] but radical content creators still have their channels and subscribers to keep them culturally relevant and financially afloat.[30]

YouTube has outlined a clear "Hate Speech Policy" amid several other user policies on its website.

Facebook

Hate speech on Facebook and Instagram quadrupled in 2020, leading to the removal of 22.5 million posts from Facebook and 3.3 million posts from Instagram in the second quarter of 2020 alone.[8] Facebook has been accused of bias when policing hate speech, with critics citing political campaign ads that may promote hate or misinformation and that have made an impact on the platform.[8] Facebook adjusted its policies after receiving backlash and accusations, and after large corporations pulled their ads from the platform to protest its loose handling of hate speech and misinformation.[8] Political campaign ads now have a "flag" feature noting that the content is newsworthy but may violate some community guidelines.[8] In 2020, Facebook added guidelines to its Tier 1 policy forbidding blackface, racial comparisons to animals, racial or religious stereotypes, denial of historical events, and objectification of women and the LGBTQIA+ community.[32]

Instagram, a photo- and video-sharing platform owned by Facebook, has hate speech guidelines similar to Facebook's, but they are not divided into tiers. Instagram's Community Guidelines also forbid misinformation, nudity, glorification of self-injury, and posting copyrighted content without authorization.[33]

TikTok

"Facebook Hates Google+ ???" by Frederick Md Publicity is licensed with CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/

TikTok lacks clear guidelines and controls on hate speech, which allows bullying, harassment, propaganda, and hate speech to become part of normal discourse on the platform. Far-right hate groups, terrorist organizations, and pedophiles thrive on TikTok by spreading and encouraging hate to an audience as young as 13 years old.[34] Children are naive and easily influenced, and are therefore more likely to listen to and repeat what they are shown or told.[35] The Internet has no closely monitored space that guarantees safety for children, so as long as the Internet is public, children and teenagers are bound to come across hate speech.[35] From there, young teenagers tend to let their curiosity lead them into deeper interest in, and research of, radical ideas.[35]

However, children cannot take accountability for their actions the way adults can and should,[35] which places the blame not only on the person who posted the vulgar content but also on the social media platform itself. TikTok has therefore been criticized for its handling of hate speech. While TikTok prohibits bullying, harassment, and any vulgar or hateful speech in its Terms & Conditions, it has not been active long enough to develop an effective method for monitoring this content.[34] Other social media platforms, such as Instagram, Twitter, and Facebook, have been active long enough to know how to battle online hate speech and vulgar content,[34] but their audiences are old enough to take accountability for the messages they spread.[35] TikTok, on the other hand, has to take some responsibility for the content distributed to its young audience.[34] TikTok users are required to be at least 13 years old, but that requirement is easily evaded, as apps cannot verify users' ages. Researcher Robert Mark Simpson concluded that combatting hate speech on youth-targeted media "might bear more of a resemblance to regulations governing adult entertainment than to prohibitions on Holocaust denial."[35]

Media and Information Literacy

Citizenship education focuses on preparing individuals to be informed and responsible citizens through the study of rights, freedoms, and responsibilities and has been variously employed in societies emerging from violent conflict.[36]

Information literacy cannot avoid issues such as rights to free expression and privacy, critical citizenship and fostering empowerment for political participation.[37]

Teaching strategies are changing accordingly, from fostering critical reception of media messages to also empowering the creation of media content.[38]

References

  1. ^ Johnson, N. F.; Leahy, R.; Restrepo, N. Johnson; Velasquez, N.; Zheng, M.; Manrique, P.; Devkota, P.; Wuchty, S. (September 2019). "Hidden resilience and adaptive dynamics of the global online hate ecology". Nature. 573 (7773): 261–265. doi:10.1038/s41586-019-1494-7. ISSN 1476-4687.
  2. ^ a b c d e f g h Gagliardone, Iginio; Gal, Danit; Alves, Thiago; Martinez, Gabriela (2015). "Countering Online Hate Speech". unesdoc.unesco.org. Retrieved 2020-10-30.
  3. ^ a b Izsak, Rita (2015). "Report of the Special Rapporteur on minority issues". Human Rights Council.
  4. ^ "Interview: Ian Brown". University of Oxford. 26 November 2014.
  5. ^ Phillips, Whitney (2015). This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. MIT Press.
  6. ^ a b c Marwick, Alice; Lewis, Rebecca (2017). Media Manipulation and Disinformation Online. Data & Society Research Institute.
  7. ^ Banks, James (November 2010). "Regulating hate speech online". International Review of Law, Computers & Technology. 24: 4–5 – via ResearchGate.
  8. ^ a b c d e "Hateful posts on Facebook and Instagram soar". Fortune. Retrieved 2020-11-21.
  9. ^ a b Gray, Mary; Suri, Siddharth (2019). GHOST WORK: How to Stop Silicon Valley from Building a New Global Underclass. New York: Houghton Mifflin Harcourt.
  10. ^ a b Bowman-Grieve, Lorraine (2009-10-30). "Exploring "Stormfront": A Virtual Community of the Radical Right". Studies in Conflict & Terrorism. 32 (11): 989–1007. doi:10.1080/10576100903259951. ISSN 1057-610X.
  11. ^ Nobata, Chikashi; Tetreault, J.; Thomas, A.; Mehdad, Yashar; Chang, Yi (2016). "Abusive Language Detection in Online User Content". WWW. doi:10.1145/2872427.2883062.
  12. ^ a b c d "The Universal Declaration of Human Rights". United Nations. 1948.
  13. ^ Lillich, Richard B. (April 1995). Review of U.N. Covenant on Civil and Political Rights: CCPR Commentary by Manfred Nowak. American Journal of International Law. 89 (2): 460–461. doi:10.2307/2204221. ISSN 0002-9300.
  14. ^ Leo, Leonard A.; Gaer, Felice D.; Cassidy, Elizabeth K. (2011). "Protecting Religions from Defamation: A Threat to Universal Human Rights Standards". Harvard Journal of Law & Public Policy. 34: 769.
  15. ^ Inter-American Court of Human Rights (13 November 1985). "Advisory Opinion OC-5/85" (PDF). Corte Interamericana de Derechos Humanos.
  16. ^ Viljoen, Frans (2007). International Human Rights Law in Africa. Oxford: Oxford University Press.
  17. ^ Mackinnon, David; Lemieux, Christopher; Beazley, Karen; Woodley, Stephen (November 2015). "Canada and Aichi Biodiversity Target 11: understanding 'other effective area-based conservation measures' in the context of the broader target". Biodiversity and Conservation. 24 – via ResearchGate.
  18. ^ a b United Nations (2011). Guiding Principles on Business and Human Rights. New York: Office of the New Commissioner.
  19. ^ a b c d e Perrin, Andrew (15 October 2020). "23% of users in U.S. say social media led them to change views on an issue; some cite Black Lives Matter". Pew Research Center. Retrieved 2020-11-22.
  20. ^ a b c Gray, Kishonna (2012). "Intersecting Oppressions and Online Communities". Information, Communication & Society. 15: 411–428 – via Taylor & Francis.
  21. ^ Seay, A. Fleming; Jerome, William J.; Lee, Kevin Sang; Kraut, Robert E. (2004-04-24). "Project massive: a study of online gaming communities". CHI '04 Extended Abstracts on Human Factors in Computing Systems. CHI EA '04. Vienna, Austria: Association for Computing Machinery: 1421–1424. doi:10.1145/985921.986080. ISBN 978-1-58113-703-3.
  22. ^ a b Nakamura, Lisa (2002). "After Images of Identity: Gender, Technology, and Identity Politics". Reload: Rethinking Women + Cyberculture: 321–331 – via MIT Press.
  23. ^ Nakamura, Lisa (2002). Cybertypes: Race, Ethnicity, and Identity on the Internet. New York: Routledge.
  24. ^ Jeung, Russell; Popovic, Tara; Lim, Richard; Lin, Nelson (2020). "Anti-Chinese Rhetoric Employed by Perpetrators of Anti-Asian Hate" (PDF). Asian Pacific Policy and Planning Council.
  25. ^ Holland, Hereward (14 June 2014). "Facebook in Myanmar: Amplifying Hate Speech?". Al Jazeera.
  26. ^ Georg, Schomerus (13 January 2012). "Evolution of public attitudes about mental illness: a systematic review and meta‐analysis". Acta Psychiatrica Scandinavica. 125: 423–504 – via Wiley Online Library.
  27. ^ a b c d e Gilbert, David (24 September 2020). "Hate Speech on Facebook Is Pushing Ethiopia Dangerously Close to a Genocide". www.vice.com. Retrieved 2020-12-06.
  28. ^ Lashitew, Addisu. "Ethiopia Will Explode if It Doesn't Move Beyond Ethnic-Based Politics". Foreign Policy. Retrieved 2020-12-06.
  29. ^ a b c Munn, Luke (July 2020). "Angry by design: toxic communication and technical architectures". Humanities and Social Sciences Communications. 7: 1–11 – via ResearchGate.
  30. ^ a b c d Munger, Kevin; Phillips, Joseph (2019). A Supply and Demand Framework for YouTube Politics. University Park: Penn State Political Science. pp. 1–38.
  31. ^ "Our ongoing work to tackle hate". blog.youtube. Retrieved 2020-11-21.
  32. ^ "Community Standards Recent Updates | Facebook". www.facebook.com. Retrieved 2020-11-21.
  33. ^ "Community Guidelines | Instagram Help Center". www.facebook.com. Retrieved 2020-11-21.
  34. ^ a b c d Weimann, Gabriel; Masri, Natalie (2020-06-19). "Research Note: Spreading Hate on TikTok". Studies in Conflict & Terrorism. 0 (0): 1–14. doi:10.1080/1057610X.2020.1780027. ISSN 1057-610X.
  35. ^ a b c d e f Simpson, Robert Mark (2019-02-01). "'Won't Somebody Please Think of the Children?' Hate Speech, Harm, and Childhood". Law and Philosophy. 38 (1): 79–108. doi:10.1007/s10982-018-9339-3. ISSN 1573-0522.
  36. ^ Osler, Audrey; Starkey, Hugh (2006). "Education for Democratic Citizenship: a review of research, policy and practice 1995-2005". Research Papers in Education. 24: 433–466 – via ResearchGate.
  37. ^ Mossberger, Karen; Tolbert, Caroline; McNeal, Ramona (2007). Digital Citizenship: The Internet, Society, and Participation. MIT Press.
  38. ^ Hoechsmann, Michael; Poyntz, Stuart (2012). Media Literacies: A Critical Introduction. West Sussex: Blackwell Publishing.