The January 6 attack on the Capitol has transformed politics in the United States in ways that journalists, lawyers, and politicians are still struggling to understand. It was at once the chaotic culmination of a right-wing movement that grew before and during the Trump administration and a stunning symbol of a possible future. Like so many other upheavals of the past decade, it revealed how social media has made once unthinkable political events possible. Facebook, Twitter, and YouTube, among other sites, amplified marginal conspiracy theories and far-right militia organizations, allowed a sitting president to delegitimize the election he lost, and permitted open planning for a violent attack on the seat of government.
Social media companies have faced sustained criticism for years about the negative impacts of speech on their platforms, from national political conflicts, such as the organization of genocide in Myanmar, to endemic personal harassment worldwide, like the Gamergate scandal and revenge porn. But the attack on the Capitol appears to have crossed a line. Social media sites have responded accordingly, widely banning Donald Trump and many other right-wing activists and organizations.
“Deplatforming” on this scale would have been unimaginable just weeks earlier. It has provoked predictable complaints from the right. But the American Civil Liberties Union and other free speech organizations have also expressed concern that private companies are setting a new standard for censorship without transparency or accountability. Others have noted that left-wing social media accounts have been getting banned for some time and remain targets of arbitrary shutdowns. There is already a national security-oriented response underway to investigate and surveil right-wing movements as “domestic terrorism.” But this alone will not address the social conditions that encourage fascist thought and activity, or prevent right-wing activists from finding new ways to organize online.
Protecting democracy from the power of free speech seems like a paradox. However, free speech on the internet has never truly been free. The regulation of speech online is in fact framed by laws that allow private sites to censor content at will. Moreover, many factors beyond mere existence or deletion shape the impact of online speech. Platform algorithms and advertising demand can promote speech to different audiences, while metadata can inform users about its sources or reliability. Until now, promoting right-wing propaganda, and the advertising that accompanies it, with little metadata has been wildly profitable.
But with the Capitol attack, it seems as if the wild west era of monetized political speech online is reaching its end. There are two plausible futures for the industry. Either the tech monopolies will keep the power to arbitrarily restrict speech to prevent controversy and protect their bottom lines, or the government will better regulate the internet to mitigate the power of tech companies to profit from negative speech and political extremism. This is already provoking deeper questions about the meaning of the First Amendment and the public’s rights and interests in the internet itself.
Free Speech Is Not Social Media’s Priority
Establishment opinion about the role of online speech in society and politics has evolved rapidly over the past decade. Prior to 2000, the internet represented an alternative, even countercultural forum outside mainstream political parties, business, and media. In the following decade, the spread of social media seemed to affirm the era’s neoliberal values of “promoting democracy” in the world, culminating most obviously in the Arab Spring protests 10 years ago this month. When internet companies were smaller and more fragmented, protecting them and their users against the censorship of governments worldwide seemed not just to defend free speech, but to transform the world for the better.
This is certainly the tone of Columbia University President and First Amendment scholar Lee Bollinger’s piece in Foreign Policy’s 100 Top Global Thinkers of 2012, “Defending Free Speech in the Digital Age.” The issues of the day motivating Bollinger’s argument were China blocking access to The New York Times and the suppression of the right-wing anti-Islamic film Innocence of Muslims. Despite the gratuitous insult of the latter to more than a billion people, and the political and social backlash Middle Eastern governments faced, Bollinger blithely asserts that the overall arc of freedom of speech on the internet bends toward progress. “When the number of people around the world who are engaged in the marketplace of ideas increases, we can expect a corresponding rise in the flow of innovation in both the academy and the economy,” Bollinger writes. He argues that globalization thus directly fuels conflict between governments and internet publishers and social media sites.
The objects of Bollinger’s concern in 2012 were appropriate, but the economic and political dynamics around them have turned out to be rather the inverse of how he and many others saw them a decade ago. Rather than the United States’ First Amendment becoming a guiding principle for speech around the world, internet companies are in fact accepting a huge array of sovereign controls on speech, different in each country. This is leading more toward collaboration between the tech giants and governments to protect their profits than toward outright conflict. And of course, the “marketplace of ideas” has turned out to be as useful for the innovation of illiberal ideas and organizations, from ISIS to QAnon, as for science or education.
The Private Government of Social Media
The social media business model is founded upon a legal collaboration between the industry and the US federal government. In the early days of the internet, corporations sued chat boards for libel over posts by individuals who had complained about them, arguing that the sites were “publishers” of the posts. With courts ruling inconsistently on whether internet hosts that moderated their content counted as publishers, Congress stepped in with Section 230 of the Communications Decency Act of 1996.
The law states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Moreover, it expressly allows internet providers, as private companies, the discretion to ban content that they do not wish to carry. This extends to internet service providers or server farms banning entire platforms from using their services, as a judge has just ruled Amazon Web Services could do to Parler, a social media network heavily used by right-wing activists.
Benjamin Cramer argues this creates “a ‘moral hazard’ in which the absence of future liability encourages ethical lapses and unaccountable behavior in the present.” Rather than intervening more frequently in user content, social media sites have until now chosen to leave all but the most egregious fraud or harassment alone, in order to avoid accusations of infringing users’ free speech “rights.” As partisan politics has become more heated in recent years, the sites are often attacked from both sides for their perceived biases, further encouraging them not to intervene.
This would not be quite so bad, writes Jack Balkin, if the social media industry were not monopolized by Twitter, Facebook, and Google. At present, those companies have so much political and economic power that they effectively constitute a new “private state,” one that has subsumed the U.S. Constitution’s protection of free speech to profit themselves while offering their users no accountability. “Governance by Facebook, Twitter, and YouTube has many aspects of a nineteenth-century autocratic state,” Balkin writes, “one that protects basic civil freedoms but responds to public opinion only in limited ways.”
Indeed, pressure for better governance has led social media companies to establish independent tribunals to adjudicate management’s decisions to ban content or block users. These include Facebook’s “Oversight Board,” composed of famous politicians and journalists, which began meeting last month. In its first decisions on January 27, it actually overturned Facebook executives’ rulings to block content based on “hate speech, nudity and COVID misinformation.” It will soon meet to review the decision to ban Donald Trump.
How Can The Internet Serve the Public Interest?
With public scrutiny now so heavily focused on the politicization of speech online, social media companies will no longer be able to plead inaction in the name of free speech. Yet potential government regulators are still reluctant to intervene, fearful both of the power of the industry and of actually infringing the First Amendment. Who in the future will determine the “red lines” on speech, or otherwise try to protect users from the long-term negative externalities of unregulated content algorithms?
Past research available on JSTOR reflects less urgency than the present crisis demands. Cramer believes that a more rigorous application of “Corporate Social Responsibility,” in part to maintain public goodwill, would gradually move social media companies toward more ethical treatment of their users and the targets of their speech. Balkin believes the current Section 230 framework could be reformed by better enforcing companies’ user agreements and by treating a company’s relationship to its users as an “information fiduciary.” In short, this means borrowing legal ideas from the financial, medical, and legal industries to force social media companies to be more transparent with their users and allow users to (genuinely) opt out of data surveillance and targeting.
Since January 6, however, there have been increased calls to abolish Section 230 altogether, which would transform the internet as we know it. As anti-monopoly activist Matt Stoller points out, Congress is now reluctant to attack powerful corporations like Amazon because the industry has stood as a monolith, willing to keep right-wing and violent content off its platforms since the attack. Taking away the protections of Section 230 and demonopolization thus go hand in hand. If platforms shared at least some of the liability for the content they spread, and for the way they amplify and profit from it, it would be possible to dismantle the vertical integration that now exists among internet service providers, server farms, and social media platforms. Facing true competition, most new entities would seek responsible ways of screening out or diminishing the impact of harmful content as it passes between these stages of the internet infrastructure.
This is the turning point between continuing to allow the tech titans to treat the world’s communication networks as their private fiefdoms and restoring democracy and transparency to what we have long considered the “public sphere.”