Hate and Free Speech Discussion

February 1, 2021

Introduction

As current levels of social and political polarization reach new heights, so do discussions about extremism, disinformation, and hate speech. But what do these terms mean for institutions in philanthropy, technology, and media, which straddle and blur the boundaries between public and private life? In many ways, current debates follow old patterns and paradigms surrounding the limits of free speech. Because free speech is a constitutionally protected right, debates over its limits cut across constitutional rights, public safety, and political activity. As a result, conversations around hate speech are often fraught and highly contested, leaving the United States as one of the few democracies in the world without some form of legislation prohibiting the amplification and proliferation of hate speech.

Although broad consensus on what constitutes hate speech is unlikely to be reached in the near future, leading voices from the legal, nonprofit, and academic sectors are actively shaping the current state of the conversation.

The legal field has shaped public understanding of free speech and hate speech in both theoretical and practical ways. Legal scholars, political philosophers, and ethicists explore the definition through in-depth analyses of hate speech sub-categories, ranging from racist speech to incitement to violence. While disagreements continue to fuel debates over clearer categories and appropriate legal repercussions, theoretical legal inquiry focuses on the link between the nature of the speech and the ways in which it can directly lead to violence. Politically, this debate often falls along predictable partisan and ideological lines.

More recently, the tech industry has played an increasingly active role in shaping the practical definition of hate speech. Operating with very little government regulation, social media platforms have unprecedented control over content capable of reaching broad swaths of the public. Tech companies such as Google, Apple, and Facebook have taken aggressive action to moderate hateful content on their platforms through their terms of service policies. They also rely on content moderation tools such as profile removals, app de-platforming, and stricter user controls. The use of such tools often draws claims of censorship or bias.

Additionally, academic voices have weighed in on the issue through philosophical interrogation and sociological study. On the one hand, ethical questions are posed about the moral right to freedom of expression and whether speech directly causes violence. On the other hand, sociological approaches center the experiences of victims of hate speech. By prioritizing the human and social impact, this interdisciplinary approach offers a more granular and practical understanding of hate speech, along with opportunities to manage the problem over the medium and long term.

Below is a review of recent discussions by thought leaders and experts from a variety of sources that capture the state of the debate today, helping readers navigate an otherwise complex, multidimensional field.

Prepared by Nagham El Karhili, Research and Program Manager at Horizon Forum

If you have questions or comments, we’d love to hear from you. Reach us at info@thehorizonforum.org