Thanks to mounting evidence that Facebook advertising played an instrumental role in Russian manipulation of the US election, we are once again experiencing one of those cyclical surges in Facebook-related angst.
In a post for Civic Hall, Micah Sifry recently declared, “I Wish I Knew How to Quit You, Facebook.” After writing a terrific take-down of Facebook in Wired, Virginia Heffernan announced her own departure from the platform. Beth Kanter, who first alerted me to Sifry’s piece, wrote her own very useful round-up of recent arguments about why and how it’s so hard to leave Facebook.
And yet, we continue to ask ourselves: Is it really time to turn our back on Facebook? And if so, what could possibly replace it?
The Case against Facebook
There is certainly no shortage of reasons to consider quitting Facebook, so let me just touch on the most frequently cited:
- Facebook spreads fake news. In the 2016 US election, “the most popular fake news stories were more widely shared on Facebook than the most popular mainstream news stories [and] many people who see fake news stories report that they believe them”. (Allcott and Gentzkow, 2017, citing Silverman 2016)
- Facebook compromises our privacy. “Facebook, by its very nature, raises fundamental privacy challenges because it enables users to disclose unprecedented volumes of highly personal information, not only to friends and friends of friends, but depending on one’s privacy settings, to very large and unfamiliar audiences as well.” (Rubinstein and Good, 2013)
- Facebook collects too much data. The company’s drive is to “accumulate and exercise power through cross-network pressure built through the advantage of superior user data.” (Fattal, 2012)
- Facebook’s algorithms are manipulative. “Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging.” (Kramer et al., 2014) “By determining what stories individuals will find interesting, these algorithms encourage certain activities and value some experiences over others.” (McNeill, 2012)
- Facebook is a waste of time. “The new Facebook was supposed to ‘let us know a little about what is going on in our world’ but it just shows us who has been bored recently.” (Quoted in TDR, 2008)
- Facebook is addictive. “[S]ocial networks are superb at delivering instant gratification. [They offer] rewards [that] serve as jolts of energy that recharge the compulsion engine, much like the frisson a gambler receives as a new card hits the table.” (Disalvo, 2010)
For those piling onto the latest incarnation of the We Hate Facebook club, it’s crucial to differentiate between the first four reasons and the last two. If you’re worried that Facebook is wasting your time, or leading to excessive usage—well, that’s going to be an issue on any social network.
A network that makes less sophisticated use of data may do less to drive truly compulsive behavior, but that doesn’t speak to the fundamental unease with social networking versus “real life.” For people who are concerned about the individual or mental health impact of using Facebook, quitting the platform is about quitting social media—not about switching to a better version.
Competing with Facebook
But it’s the first four issues—and especially Facebook’s complicity in the fake news epidemic—that are driving the current Facebook backlash. Fake news, privacy, data collection, and algorithmic manipulation are all areas where Facebook has shown itself to be a bad (or at least highly questionable) actor. That’s created a perceived opportunity to build a better Facebook: one that shows more respect for user data and privacy, and that avoids the perils of manipulative algorithms and fake news.
MeWe, the platform Sifry wrote about in his recent post, offers a “Privacy Bill of Rights” that includes stipulations like “You own your personal information & content,” “You will never receive a targeted advertisement or 3rd party content based on what you do or say online,” and “We do not manipulate, filter, or change the order of your content or what you see.” Ello, which launched itself as the anti-Facebook a few years ago, similarly stresses its privacy policy: “Ello does not make money from selling advertising on the site, serving ads to you, or selling information about our users to third parties, including advertisers, data brokers, search engines, or anyone else.” Path is a social network that initially handled the noise and privacy problems by limiting the size of each user’s network: as the launch post put it, “Because your personal network is limited to your 50 closest friends and family, you can always trust that you can post any moment, no matter how personal.”
But all these would-be Facebook competitors face a fundamental problem: a social network is only as useful as the people who are on it. In “Information, Innovation, and Competition Policy for the Internet,” Howard A. Shelanski digs into the dynamics that make it hard for new upstarts to compete with established social networks, concluding that “each user creates a positive externality for other users by enlarging the scope of the service.” In other words, any Facebook competitor faces an uphill battle: with so many people already on Facebook, it’s extremely hard to offer a compelling alternative.
When you look at the Facebook alternatives that have been around for a while, you can see that’s exactly the problem they’ve run into. With its vision of small-scale networks, Path baked growth limitations right into its business model; no surprise that it raised (and then removed) its limits on network size, before eventually selling the platform. Ello initially positioned itself as a network for anyone who doesn’t like Facebook’s privacy policy, then settled into a much narrower mandate as a platform for artists. MeWe is apparently trying to address the growth challenge with a sign-up workflow that tricks new users into inviting everyone they know to sign up; I’m not alone in noting the irony of joining a privacy-oriented social network, only to have my first experience consist of spamming every single one of my email contacts with a MeWe invite. (Sorry, friends!)
In a discussion of Micah Sifry’s piece that unfolded on—you guessed it!—Facebook, tech writer Amy Vernon argued that the only way an emergent social network can get over the scaling problem is if it’s substantially new:
Any platform based on what we already have with Facebook (even if it’s supposedly privacy-focused) is doomed to fail. Facebook succeeded and surpassed MySpace because it was a different animal. MySpace took over from Friendster because it was different…. In order for us to give up Facebook, something different needs to rise up. Thing is, every time they do, Facebook buys it or copies/buries it. Instagram, WhatsApp, Snapchat (tried to buy it, arguably has decreased its influence through copying its features in Instagram). Everyone needs to stop trying to be the new Facebook, and instead build something that people want to use.
The Problem Startups Can’t Fix
If the pace of scaling and expectations for innovation make it hard for a new startup to compete with Facebook, maybe that’s not such a bad thing. After all, most of Facebook’s would-be competitors are not unlike Facebook itself: they’re for-profit companies, ultimately accountable to investors or (if they go public) to shareholders. As Jacob Hasler puts it in a 2014 article, “[w]ithout clear authority to pursue a social mission, directors are likely to protect themselves by maximizing shareholder value.” That means they’re under huge pressure to generate revenue, or at least, to present a business model and growth pattern that promise not-too-distant riches.
That creates a strong intrinsic pressure to collect and sell user data, and/or to use that data as the basis for a thriving ad business. If any of those networks actually attracted as many users as Facebook has, it would also find itself in the position of overwhelming its users with the sheer volume of posts: it would need some kind of mechanism—an algorithm, say—to determine who sees what. That algorithm would also open up another revenue opportunity: charging companies money in order to make their content more visible.
Does this sound familiar yet? Yeah, I’m pretty much describing Facebook as it exists today.
That doesn’t mean I’ll let the platform off the hook for fostering a business culture that consistently places dollars ahead of its users’ privacy and well-being, but I do recognize the market pressures that drive that kind of culture. Other for-profit startups may talk a good game—now, while they’re small—but I’m skeptical of their ability to resist the same pressures should they ever get large enough to pose a meaningful challenge to Facebook itself.
A Kinder Kind of Startup?
Concerns like these are exactly why some companies—Ello among them—have incorporated as Public Benefit Corporations (PBCs). The PBC is a relatively new corporate structure that requires companies to have some kind of public benefit mission alongside the profit motive: as Hasler notes, “[h]aving a specific public benefit allows the corporation to pursue a personally selected mission, which could be anything.” Ello’s own website says that it incorporated as a PBC to “assure that Ello could never sell member data and that we would always remain ad-free.” But setting up as a PBC is no guarantee that a startup will resist the very real business pressures that have made Facebook what it is. As Rae André writes in a 2012 assessment of public benefit corporations, “benefit corporations follow accountability practices that serve particular private interests, and because of this, the probability that they will be responsive to the citizenry as a whole, to society, is low.”
Call me old-fashioned, but there’s only one kind of Facebook alternative that I’d actually trust to look after its users’ data, privacy and information access: one that is collectively owned.
Imagining a Cooperative Facebook
A socially owned social network — let’s just call it a socially owned network — could follow the lead of the consumer co-operatives that exist in a number of countries and sectors. Unlike worker or producer co-ops, which are owned by the people who work in them, consumer co-ops are owned by the customers — or in the case of a social network, by the users. In “A Theory of Co-Operatives,” R. Carson describes these co-ops as follows: “a consumer co-operative, or consumer-managed firm, is managed by elected representatives of at least some of its retail customers, the latter constituting its membership.”
As an organizational model, the modern co-op traces back to the founding of the Rochdale Society of Equitable Pioneers in 1844; as Joel David Welty puts it, the founders of that society “simply wanted to create an institution they could control themselves for their own benefit.” In the more than 170 years since, the co-operative has proven itself as a viable model for combining enterprise with social good. Carson writes, “In some instances, a co-operative may be the least-cost way of avoiding potential external effects.”
As a business model that’s explicitly designed around member participation and benefit — and backed with a proven legal framework — a cooperatively owned social network would be a lot more credible when it comes to offering long-term protection of user privacy, data and information access. And indeed, we’ve seen a number of efforts at building social networks on co-operative principles. The #BuyTwitter campaign is busily lobbying for a user takeover of the flagging social network. Diaspora is a decentralized, open source social network that turned itself into a “community-run project” in 2012, housed under a non-profit. Social.coop is an actual cooperative that provides accounts for a server that’s part of Mastodon, another decentralized social network.
The Tough Choices for a Cooperative Social Network
None of these networks have even come close to the level of success that would allow them to seriously challenge Facebook. And I suspect that’s because there are hard trade-offs involved in creating a broadly appealing social network: a social network that is comprehensive enough, in its membership and content, to replace Facebook in the lives of its users.
Take one obvious example: I’d rather not be part of a social network that enables hate speech. I also want to be on a social network where I can reach and connect with a very wide range of people. But some of the people I want to connect with (let’s say family members and former colleagues) are people who (sigh) may want to connect with people who belong to white power groups, or anti-gay groups. If I were part of a social networking co-op that reflected my principles of social equality by banning hate speech, those white power and anti-gay groups wouldn’t be on the network. The people who belong to those groups would stay on Facebook. And the family members I’d like to stay in touch with on my social network would stay on Facebook too, because that’s where their friends are.
If we want to have values-driven social networks, we’re going to create social networks with walls. Precisely because those walls keep some people out, our values-driven networks will never be as big as Facebook…and indeed, may never reach the scale that makes them sustainable. Facebook’s lack of values is what makes us want to leave; it’s also what makes Facebook elastic enough to contain — or more accurately, to capture — everybody.
To have a sustainable, user-centered alternative to Facebook, we’ll have to wrestle with a great many contradictions like this one. We’ll have to wrestle with the tension between co-operative values of progressive social inclusion, and cyber-libertarian values of openness and free speech. We’ll have to contend with our simultaneous desire for wide-open access to information, and for insulation from misinformation. We’ll have to choose between getting flooded with irrelevant updates, and trusting some algorithm to make those choices for us. Some of us might even have to choose between chasing the Silicon Valley dream of making a billion dollars, and the human dream of making a social network that’s actually good for its users.
In other words, we’ll have to make all the difficult, awkward choices Facebook now makes for us. We probably won’t get it right. We’ll fight, we’ll quit, we’ll try again.
But Facebook’s pathetic response to the fake news epidemic has made one thing abundantly clear. As hard as these choices are, they need to be our choices—not Facebook’s.