Abstract
Excerpted From: Angel Díaz, Online Racialization and the Myth of Colorblind Content Policy, 103 Boston University Law Review 1929 (December 2023) (296 Footnotes)
Social media platforms' global reach and pervasiveness make them essential avenues for communication. Through words, photos, and videos, these platforms provide valuable insights into the complex multitude of human experience. Joy, fear, desire, insecurity, and outrage all find waiting outlets that can transform these emotions into linguistic expression, political education, and newfound community. At the same time, these platforms present an equal opportunity to misinform, incite violence, and scale efforts to subordinate.
In September 2022, communities around the world took to social media to process the death of Queen Elizabeth II. Where some expressed grief or shared loving tributes, others crafted memes making light of her death or posted in solidarity with the victims of British colonization. Among these voices was Uju Anya, a Carnegie Mellon Associate Professor of Second Language Acquisition. She tweeted, “I heard the chief monarch of a thieving raping genocidal empire is finally dying. May her pain be excruciating.”
In a follow-up tweet, Professor Anya explained that she expressed no sympathy for Queen Elizabeth II, given the Queen's supervisory role in a "genocide that massacred and displaced half my family." As the post began to spread, it was reposted by Amazon founder Jeff Bezos, who commented, "This is someone supposedly working to make the world better? I don't think so. Wow." Soon after, Twitter removed Professor Anya's original tweet.
A company spokesperson later told a journalist that the post was removed for violating Twitter's rules against abusive behavior. At the time, Twitter prohibited "targeted harassment of someone" and attempts to "incite other people to do so." Specifically, the company purportedly took a zero-tolerance approach to any post that "wishes, hopes, promotes, incites, or expresses a desire for death, serious bodily harm or serious disease against an individual or group of people." Twitter's rationale for the policy was that harassment limits people's freedom to participate in public life and can lead to physical and emotional harm. The company's decision elided any analysis of whether a relatively obscure professor's tweet could harm or silence Queen Elizabeth II, or whether there was a countervailing public interest in protecting harsh criticism of a global political leader.
A few days after the Queen's death, Stephen Miller, former senior advisor to President Trump and founder of America First Legal, tweeted about the threats he perceived to the British monarchy's future legitimacy:
Key to monarchy is its mystery. Key to its mystery is that monarchs descend from an ancient line of fabled kings & queens. Though it may not be apparent now, a longterm concern for UK monarchy will be if, due to marriages, future monarchs have same family trees as their subjects.
Miller has long been criticized for his white nationalist views, and his affinity for white replacement theory provides important context for his argument. In relevant part, white replacement theory views miscegenation and nonwhite immigration as existential threats to white people. Prior to Queen Elizabeth II's death, there was ongoing controversy over her grandson's marriage to a Black woman and the royal family's concern about "how dark" the couple's son would be. In context, it is clear that Miller's tweet was not an abstract musing about what sustains magical bloodlines, but a longstanding racist trope: race-mixing stains whiteness. However, because the tweet never mentioned a race or an individual, it did not run afoul of Twitter's rules against harassment or hate speech.
The differing responses to Miller and Anya's tweets reflect an enduring double standard in social media content moderation. This Article argues that understanding this asymmetry requires engaging with the logics of racism. Content moderation connects the profitability of white racism, the regulatory benefits of protecting politicians who trade in bigotry, and the racial biases that inform how platforms conceptualize the harms of online speech. The result is a system that guards the social, political, and economic advantages attendant to whiteness. This system is operationalized through content policy, making these policies elemental to the maintenance of white supremacy and an essential site of inquiry for critical scholars.
Once an afterthought for social media companies, content policy is now the main document driving millions of decisions over how people can express themselves online. Content policies are invoked in press statements, used to parry regulatory oversight during congressional questioning, and cited in Supreme Court briefs. Social media content policy is also the central focus of discussion and negotiation with governments and civil society, and it supplies the standards Meta's Oversight Board uses to assess the company's removal decisions. Content policy is the closest thing the private system of content moderation has to a constitution, but it can be rewritten, ignored, or set aside at will.
Social media companies do more than mirror offline bigotry; they shape and foster racial hierarchies through the drafting and enforcement of their policies. These choices determine whose speech will be restricted and whose will be protected, which posts will receive prominent distribution and which will remain buried in obscurity. Through millions of daily enforcement decisions, social media companies have an unprecedented ability to shape global speech norms. While companies claim their policies apply equally to everyone, this Article argues that colorblind content moderation is a racialized system that doles out a measured hand for the powerful and an iron fist for the marginalized.
Under this system, social media companies court, foster, and protect white racism. By requiring explicit racial animus or undeniable calls to violence before company intervention, content policy largely shields the vast arsenal of attacks available to white voices who trade in the language of coded messages and dog whistles. Showcasing racism cloaked as edgy humor or political debate fosters white supremacist ideology, leaving platforms wrong-footed when content boils over into white vigilantism and authoritarian incitement. Conversely, communities of color are policed as violent, suspicious, and uncivilized. This racialized gaze results in policies restricting their ability to organize politically, denounce racism, or simply build community with one another. Colorblind hate speech rules restrict the ability of marginalized communities to attack white racism or directly speak about their experiences under white supremacy. Meanwhile, racialized enforcement of violent extremism policy broadly suppresses political debate and sacrifices everything from satire to journalism in the name of public safety.
Colorblind content policies obscure and legitimate a racially hierarchical system by making it appear natural and inevitable. At times, these decisions reflect a desire to foster a more favorable regulatory environment, as many drivers of racial hatred also have political power. At other times, they reveal a limited ability to truly understand racism's nuances and threats, viewing white supremacy as an extreme outlier instead of an organizing principle upon which American society is structured. This approach is further solidified by racism's profitability, as much of what draws people (and attendant advertising revenue) to platforms are popular figures who regularly employ bigotry.
The Article proceeds in four parts. First, this Article provides a theoretical overview of my approach to analyzing online racial stratification. Drawing on a multidisciplinary approach grounded in critical race theory, I provide foundational definitions of race and racialization and explain how whiteness and white supremacy shape social media. These definitions center the cultural, economic, and political dimensions of white supremacy, as well as the role of content policy in "moving whiteness from privileged identity to a vested interest."
Second, this Article traces the evolution of social media content policy, explaining how key actors strategically avoid engaging with the realities of racism. This disengagement has two effects. On the one hand, it explains why platforms are consistently caught wrong-footed in attempts to moderate discourse rife with racial bigotry. On the other hand, and more insidiously, the refusal to address racism is often part of a conscious corporate strategy to appease conservative politicians and to continue leveraging racist content for financial gain.
Third, this Article identifies the two main approaches platforms take to race: (1) racial targeting and (2) racialized threat assessments. The first approach treats race as a protected category, prohibiting only posts that explicitly target an individual or group based on perceived race. This method is typically found in rules against hate speech and harassment; it makes no attempt to account for histories of subjugation or for how race is reflected in contemporary power dynamics. The second approach rarely mentions race explicitly at all. Instead, companies use secret blacklists and broad prohibitions to police racialized groups viewed as inherently dangerous. This approach is deployed mostly through policies against terrorism and violent extremism.
Building on this typology, I conclude by advancing an alternative model of race-conscious content policy. Interventions include accounting for vertical power arrangements, eliminating prohibitions that overburden political participation, and publishing the blacklists of banned individuals and organizations. Acknowledging the challenges and dangers of identifying racial groups, I also propose potential starting points that leverage design elements specific to individual platforms. Each of these interventions is an invitation to be clear-eyed about the ongoing and mutable nature of racism.
[. . .]
Social media is more than a mirror for offline bigotry; it is an active developer of the ways racial stratification is conceived, protected, and advanced. The status quo approach to drafting and interpreting content policy protects the cultural, political, and economic advantages attendant to whiteness. In other words, the standard approach to understanding and redressing racism leaves communities of color trapped in another person's imagination. Whether in our past, present, or future, racial subjugation is understood as natural and inevitable. Challenging this discriminatory system requires not only mapping the specific ways that content policy advances white supremacy, but also proposing an alternative vision that protects the victims of racial subjugation. To be sure, this task is not without peril. At its core, content moderation is a censorship regime, one that largely operates outside of democratic transparency or accountability. But the dangers of misuse must not prevent us from attending to the rise in racial hatred and authoritarianism that floods our online and offline communities. Race-conscious content policy faces the world as it is so that we can redirect it toward what it must become: a place of dignity and equal opportunity.
Visiting Assistant Professor, University of Southern California Gould School of Law.