By: Zainab Arain
As discussions around political polarization, extremism, and hate speech abound in the public sphere, grantmakers are assessing both how they are affected by the current cultural and political moment and how they can respond to it. Various sectors have responded in ways unique to their own circumstances and institutions. For philanthropy, which straddles the boundary between public and private life, the stakes are high. In 2021, the Chronicle of Philanthropy reported that charitable donations from 351 philanthropic entities to recognized hate groups exceeded fifty-two million dollars between 2011 and 2018. Other reports over the past few years have documented even higher levels of hate funding.
As stewards of the public sphere occupying a unique and powerful role in American civic life, grantmaking institutions may benefit from understanding how those in the tech and higher education sectors navigate these debates around hate and extremism through policies and practices. Over the last five years, the technology industry has emerged at the center of a hotly contested public conversation about online hate speech. In a recent and unprecedented move, the internet services company Cloudflare stopped hosting Kiwi Farms, a far-right website that “doxed” and attacked LGBTQ activists. Higher education institutions, meanwhile, grapple with maintaining their identity as the paradigmatic marketplace of ideas while balancing safety, free speech, and intellectual freedom. Following the death of Queen Elizabeth II, Carnegie Mellon distanced itself from, but did not take action against, a professor for what can be considered uncivil discourse. Experts in both sectors point to sector regulation as a fundamental challenge in addressing hate.
Worsening public attitudes toward tech companies have led to growing calls for government regulation from politicians and civil society advocates alike, with the Obama, Trump, and Biden administrations all making tech regulation a top priority. However, most experts and practitioners believe that governmental approaches to regulation are not ideal, for a range of reasons. Consequently, the technology industry continues largely to self-regulate around issues of hate speech, with little transparency, consistency, or accountability. Fundamentally, content moderation has become the sector’s hammer for every instance of hate speech, even as the industry struggles to define what actually constitutes hateful speech or activity. This self-regulatory approach ranges from Meta’s changing position on its main role as a platform and Twitter’s constantly shifting hate policy, to Spotify’s efforts to codify how it manages hateful content and actors on the platform and Twitch’s policy prioritizing minimizing harm to users over freedom of expression. Many experts believe these strategies are ineffective: in their assessment, despite persistent investment in content moderation, social media platforms continue to play an instrumental role in hosting hateful speech and ultimately enabling violence against minorities.
Further complicating self-regulation is the lack of transparency within the tech industry. Companies rely on internal audits as their primary response to public scrutiny of their hate speech policies, and while this is a step forward from a completely closed-off sector, experts believe that greater transparency, with observable data and measurable tracking, would go further toward addressing the core concern: the proliferation of hate. Instead, the secretive environment has created an atmosphere of mistrust and antagonism among actors within the same sector and has proven detrimental to collaboration. Indeed, transparency reports have been in steady decline for the past few years.
Recently, through the development of voluntary coalitions and forums, the tech industry has made strides in addressing the problems of self-regulating extremist rhetoric. Following the horrific Christchurch mosque shooting in 2019, governments and technology companies, including Meta, Google, and Twitter, voluntarily signed on to the “Christchurch Call,” a commitment that included promises to provide greater transparency in setting community standards and terms of service, to enforce those standards, and to implement regular and transparent public reporting. However, the effects of this commitment have yet to materialize. Earlier, in 2017, Meta, Microsoft, Twitter, and YouTube founded the Global Internet Forum to Counter Terrorism (GIFCT) to create a collaborative, information-sharing space to counter terrorist and extremist activity online. Bringing together the technology industry, government, civil society, and academia, the forum is intended to foster technical collaboration among members, advance relevant research, and expand cross-industry efforts to counter the online spread of extremist content.
Like technology companies, university campuses, long heralded as marketplaces of ideas, face increasing pressure from stakeholders on all sides to regulate faculty speech and guest speaker activity. Unlike tech, the higher education sector has no single response strategy comparable to content moderation. Rather, it has a variety of responses, ranging from academic censorship to administrative consequences, that allow certain types of discourse to flourish while monitoring, and even censoring, others. Following the rise of the Black Lives Matter movement in the summer of 2020, many higher education institutions took public stances on hateful and harmful speech and issued statements affirming their commitment to diversity, equity, and inclusion, while some pledged to advance anti-racist work, whether by implementing new policies and programs or by revising institutional missions. For example, the Southern Illinois University (SIU) system approved a values statement affirming its commitment to anti-racism in December 2020, and in April 2021 hired the system’s inaugural Vice President for Antiracism, Diversity, Equity, and Inclusion (ADEI) and Chief Diversity Officer. All SIU faculty and staff are now required to complete anti-racist training that involves case studies, simulations, and more.
Given the heightened discussions around polarization and hate speech, there are increasing calls for academics to hold their peers, publishers, and universities accountable in order to protect academic integrity and scholarship in an era when free speech is misused to silence scholarly rigor and ethical engagement. At the same time, another set of critics is sounding the alarm about what they call “cancel culture,” the purported silencing of voices from the Right, which they see as a form of draconian censorship antithetical to democratic values. Caught between these pressures, institutions of higher education remain at the crossroads of contemporary debates over acceptable public discourse, with no clear indication of a sector-wide response to polarization and extremism.
However, regardless of its shortcomings, continued self-regulation in these sectors is the most viable way to address the rise in extremist discourse, political polarization, and violence. Philanthropy’s long tradition of centering values of social justice and equity can serve as the foundation for a proactive and comprehensive blueprint for sector self-regulation around the problem of hate and extremist discourse in society. It can learn from the regulatory shortcomings of higher education and the tech industry by adopting greater transparency and consistency in its grantmaking processes and by engaging in multi-stakeholder trust-building and collaboration to outline best practices for its own sector. Specifically, grantmakers can adopt the positive aspects of self-regulation, such as a commitment to civil society through community forums, along with investment in third-party monitoring frameworks as a productive approach to independent self-regulation. While many in the advocacy and political space seek rapid solutions to the deep problem of hate funding and activity, it is more reasonable to expect that slow, deliberate consensus-building will lead to new realities over the short and medium term. The examples of GIFCT and the Christchurch Call in technology, and the emerging trend in higher education of adopting anti-racist frameworks in governance, all point to ways in which complex self-regulating sectors can help make public discourse free from hate and extremism. Given its position at the apex of civil society, philanthropy is particularly well suited to demonstrate the effectiveness of a non-punitive, bottom-up, community-approved strategic response.