
Like many Americans with any level of internet expertise—where expertise is defined as “has actually used the internet this decade”—I found myself groaning through Mark Zuckerberg’s testimony before Congress last week. I really wanted to be full of righteous anger at Facebook’s outrageous disregard for user privacy and data protection, and instead found myself scratching my head. Can it be that most senators and representatives barely have enough tech knowledge to turn on a smartphone?


The problem with this is that we must pin our hopes on Congress if we want an internet that offers the benefits of information access and human connection without the loss of privacy. I’m not expecting a viable alternative to Facebook any time soon, and besides, Facebook is only one piece of a larger industry that has mostly decided that the best way to fuel innovation is by selling out its users. So the most obvious way to square our social media habits with our privacy concerns is likely to involve some form of regulation — which might look like following Europe’s lead in establishing a legal framework that requires all companies to offer internet users transparent, meaningful controls over how their data is collected, shared, and used.

While that kind of regulation was notionally under discussion at last week’s hearings, for the most part the conversation inspired about as much confidence as I’d have in a group of gardeners talking about their preferred approach to open-heart surgery. Zuckerberg was largely patient with questions that ran from the awkward to the inane, but you could read his frustration with the collective level of knowledge even if you weren’t watching his facial expressions.

As it turns out, Zuckerberg has a tell. There was one turn of phrase that popped up whenever Facebook’s CEO faced a particularly nonsensical line of inquiry: “I’m not sure.” Just about every time Zuckerberg uttered those words, I could hear a subtextual “you blithering idiot.”

Look for these moments throughout the hearing, and you get a pretty good sense of the places where our elected representatives could use some instruction in the fundamentals of network technology and the digital economy. (Really, look! The Washington Post has transcripts of both the Senate hearing and Zuckerberg’s appearance before a House committee.) Reading these has left me fantasizing about how I could run a social media training for Congress and what it would need to cover (“Did you know that when you post things on the internet, you can never be sure you’ll really be able to delete them?”) but strangely nobody at Facebook has called to offer me their generous corporate sponsorship.

So I’d like to suggest instead that our elected representatives do a little reading to acquaint themselves with the fundamentals of our digital world. And on the off chance that our Congressmen and Congresswomen fail to do the necessary homework, voters can use the same resources to become informed advocates for effective internet regulation. I’m going to begin with three essential truths about the internet that all of us need to understand.

Exploiting User Data is the Industry’s Dominant Business Model

While Orrin Hatch made headlines with his question about how Facebook makes money without charging its users (he somehow missed the fact that it’s ad-supported), he was far from alone in failing to grasp the fundamentals of how tech companies convert our details into dollars. That became painfully apparent when Senator Maria Cantwell (D-WA) tried to put Facebook in the broader context of platforms like text messaging service WhatsApp (now owned by Facebook) and Palantir (enterprise-scale data analysis):

CANTWELL: So, when I look at Palantir and what they’re doing; and I look at WhatsApp, which is another acquisition; and I look at where you are, from the 2011 consent decree, and where you are today; I am thinking, “Is this guy outfoxing the foxes? Or is he going along with what is a major trend in an information age, to try to harvest information for political forces?”

And so my question to you is, do you see that those applications, that those companies—Palantir and even WhatsApp—are going to fall into the same situation that you’ve just fallen into, over the last several years?

ZUCKERBERG: Senator, I’m not—I’m not sure, specifically. Overall, I—I do think that these issues around information access are challenging.

The fact that a sitting U.S. Senator can think of harvesting information as some kind of canny political jujitsu is curious…though not as curious as her implication that Facebook is somehow unusual in falling prey to its appeal. This is how Google works. This is how Twitter works. This is mostly how data aggregators and social platforms work: by harvesting information so that it can be sold either directly or indirectly (in the form of ad targeting).

To their credit, at least some legislators seem to be looking for a way out of this trap. Senator Ron Johnson (R-WI) went down this path when he asked about other ways Facebook might fund its handy little service for letting folks do stuff like share family photos.

JOHNSON: …Facebook users still want to use the platform because they enjoy sharing photos and they share the connectivity with family members, that type of thing.… Your COO, Ms. Sandberg, mentioned possibly, if you can’t utilize that data to sell advertising, perhaps we would charge people to go onto Facebook. Have you thought about that model, where the user data is actually monetized by the actual user?

ZUCKERBERG: Senator, I’m not sure exactly how—how it would work for it to be monetized by the person directly.

I’m going to cut Johnson a tiny bit of slack here, because there actually has been some interesting debate on how market mechanisms could be used to price the trade-off between privacy and social media access. But that is a completely different concept than the idea of actually charging Facebook users some kind of subscription fee. And pay-for-privacy models have a pretty rotten reputation among privacy advocates. As law scholar Stacy-Ann Elvy writes in the Columbia Law Review, breaking down the legal and political perils of these models:

PFP [pay-for-privacy] and PDE [personal data economy] models generate substantial concerns for consumers, including the potential for exacerbating preexisting inequalities and unequal access to privacy. In some instances, consumers may be more adequately protected when the use of such programs is prohibited or when monetization is restricted.

In other words, allowing users to monetize their own data—perhaps by offering targeted discounts to consumers who have explicitly agreed to share selected data—creates more problems than it solves. It turns privacy into a luxury good that only some people can afford, makes lower income people disproportionately vulnerable to data-driven manipulation and fake news, and undercuts consumer efforts to press for universal data and privacy protections.

That doesn’t mean there’s no room for fee-for-service business models in social media. Asking users to pay a monthly fee for their favorite social media service may turn out to be the best way of weaning social media platforms off their dependence on the ad revenues they generate by offering data-driven targeting. (Though fees for service will almost certainly create other problems, like placing social networking out of reach for many lower-income countries and communities.)

But neither voters nor representatives will be able to meaningfully evaluate these alternative revenue streams unless they grasp a fundamental truth of today’s internet: moving away from a data-exploitation business model is more than just tinkering around the edges of how Facebook or Google work. Exploiting user data is the foundation of both companies’ business models, and it is just as essential to hundreds or even thousands of other businesses. Just about every company that makes money selling ads, delivering email, or providing insights is based on exploiting user data, almost always without informed or meaningful user consent. If we don’t like what this business model does to our politics, our society, or our citizens, we need to buckle up for some seismic shifts in how we ask Silicon Valley to do business.

User-Generated Content is Only One Kind of Data

There’s a lot of confusion among both legislators and members of the public about what, exactly, is the data we should be worried about. Take this question from Senator Brian Schatz (D-HI), who at least seemed to know that some distinctions and clarifications are needed:

SCHATZ: Everybody kind of understands that when you click like on something or if you say you like a certain movie or have a—a particular political proclivity, that—I think that’s fair game; everybody understands that. What we don’t understand…is what exactly are you doing with the data and do you draw a distinction between data collected in the process of utilizing the platform, and that which we clearly volunteer to the public to present ourselves to other Facebook users?

ZUCKERBERG: Senator, I’m not sure I—I fully understand this. In—in general, you—your—you—people come to Facebook to share content with other people. We use that in order to also inform how we rank services like news feed and ads to provide more relevant experiences.

That was a rather tidy way for Zuckerberg to avoid delving into the difference between what Facebook refers to as “content” (“we mean anything you or other users post, provide, or share using Facebook Services”) and what it considers “information” (“we mean facts and other information about you, including actions taken by users and non-users who interact with Facebook”). (Both definitions come from Facebook’s terms of service.)

To understand why this distinction matters, Congress (and voters) can read communications scholar Jessica Reyman’s “User Data on the Social Web: Authorship, Agency, and Appropriation”:

Social Web services catalog users’ individual and collective activities across the Internet—aggregating, analyzing, and selling a vast array of data in a practice known as data mining—to be used largely for consumer profiling and target marketing….Although users are aware of the content they are contributing online—when sharing a photo, writing a blog post, updating a status, or entering a 140-character tweet—many are unaware of the additional, hidden contributions of data made with each act of participation….With every click and path followed, every status update and tweet entered, every photo and post contributed, every comment, every item tagged, users are collectively producing both the visible and the invisible social Web.

Focusing only on the content that people intentionally post to Facebook, without also addressing all the data that Facebook gathers simply by tracking what we view and click on, leads to grave errors on the part of both users and legislators. For Facebook users, it can easily create the mistaken impression that changing your privacy settings so that the audience for your content is limited to friends (rather than public) actually protects you from the kinds of data collection currently in the news, when it’s the information you don’t realize you’re sharing (like what you like or click on) that most profoundly affects how you’re targeted by advertisers. For the most part, Facebook’s privacy and ad preferences settings are a privacy placebo: they trick us into feeling a little better, but they don’t treat the underlying disease.

For legislators, ignorance of invisible data collection can be even more dangerous, because it enables a false narrative about citizens’ control over their Facebook data (which is mostly made up of “information,” not “content”). That’s the narrative behind the shockingly naive question Zuckerberg fielded from Representative Markwayne Mullin (R-OK), who asked: “Isn’t it the consumers’ responsibility to some degree to control the content to which they release?” Well, maybe…but deliberately posted content is only a small portion of what Facebook knows about us.

The good news is that not every representative is clueless about behind-the-scenes data collection. Representative Joseph Kennedy III (D-MA) zeroed in on the problem of how few people really understand what Facebook knows. “I think one of the challenges with trust here is that there’s an awful lot of information that’s generated, that people don’t think that they’re generating,” Kennedy pointed out during last week’s hearings, “And that advertisers are being able to target because Facebook collects it.”

The Government is Not Protecting User Data From Abuse

Sadly, it’s a long walk from a few representatives having a basic understanding of what Facebook actually knows about us, to formulating a coherent and enforceable approach to protecting user data and privacy. That much became clear when our representatives got down to discussing a rare occasion when Facebook actually has been called to account for its data abuses: the 2011 Federal Trade Commission consent decree that settled persistent privacy complaints by requiring Facebook to make a number of concrete commitments.

It’s not surprising that our representatives wanted to know how recent disclosures about Cambridge Analytica square with Facebook’s commitments under that decree. What is surprising is that our legislators have such a limited grasp of whether and how that agreement was enforced. Check out this exchange with Representative Michael C. Burgess (R-TX):

BURGESS: But you also signed a consent decree back in 2011….And there is a significant fine of $40,000 per violation, per day. And, if you’ve got 2 billion users, you can see how those fines would mount up pretty quickly.… So, in the course of your audit, are you—are you extrapolating data for the people at the Federal Trade Commission for that—the terms and conditions of the consent decree?

ZUCKERBERG: That is—I’m not sure what you mean by extrapolating data.

No wonder Zuckerberg was baffled. Burgess’ question implies that the FTC will need to do some deep digging, and get some inside information, before it can determine whether and how badly Facebook violated the 2011 decree. But Facebook was regularly accused of violating that decree long before the Cambridge Analytica story surged into the news this spring. You only have to use Facebook for a month or two before it becomes eminently clear that the company regularly ignores the requirement to “obtain consumers’ affirmative express consent before enacting changes that override their privacy preferences.”

Indeed, a former FTC director has just published a laundry list of the ways that Facebook has violated the consent decree. Writing on the blog of the Harvard Law Review, David Vladeck notes that “the decree requires Facebook to assess risks to consumer privacy and take reasonable measures to counteract those risks,” yet “[i]t doesn’t appear that Facebook had even the most basic compliance framework to safeguard access to user data.”

Why didn’t the FTC decree prevent the Cambridge Analytica debacle or other privacy abuses? We can’t expect our legislators to make any progress on privacy regulation until they understand why and how past efforts have failed. In “The FTC and the New Common Law of Privacy,” legal scholars Daniel J. Solove and Woodrow Hartzog note:

A data protection authority is common in the privacy law of most other countries, which designate a particular agency to have the power to enforce privacy laws. Critics of the FTC call it weak and ineffective—”[l]ow-[t]ech, [d]efensive, [and] [t]oothless” in the words of one critic.

While Solove and Hartzog aim to mount a defense of the FTC as a privacy regulator, their article mostly paints a picture of an agency that does little to enforce the consent decrees that make up the bulk of its privacy interventions. As they acknowledge, “the FTC lacks the general authority to issue civil penalties and rarely fines companies for privacy-related violations under privacy-related statutes or rules that provide for civil penalties.”

And if you have any doubt about whether it should have been obvious that Facebook was in violation of its consent order, Solove and Hartzog themselves note that “the FTC stated that under its consent order, ‘Facebook will be liable for conduct by apps that contradicts Facebook’s promises about the privacy or security practices of these apps.’” Anyone who submitted a Facebook app before 2014 could tell you that Facebook had no mechanism for ensuring the privacy compliance of third party developers—that’s exactly how Cambridge Analytica was able to get the data at the heart of the present controversy.

In that context, asking Facebook for data on whether and how it violated its consent decree is tantamount to finding a black-masked intruder walking out of your house with your TV under one arm and your jewelry box under the other, and then asking him if he’ll let you know if he decides to steal anything. “Extrapolating data” for the FTC? All the FTC needs to do is log into Facebook, and it will be obvious the consent decree is being violated left and right. This isn’t a problem of information; it’s a problem of enforcement.

The Challenge of Effective Regulation

If I were feeling extremely generous, I might assume that the level of tech ignorance displayed by our elected representatives is not about them, but about me. After all, I’ve been researching the internet for two decades, and tinkering around with it for even longer. Maybe it’s not fair for me to expect Congress to know that Facebook’s data collection goes well beyond whatever people post on their walls.

Or maybe that’s exactly what we need to expect. As communications scholar Michael Schudson put it in an overview of the longstanding debate over the role of expertise in policy-making, “[a] democracy without experts either will fail to get things done or fail to get things done well enough to satisfy citizens.”

Reading the transcripts of Zuckerberg’s testimony makes me realize that expert input is only useful when our legislators understand the fundamentals. Because our legislators are unclear about even the most basic truths about Facebook and its ilk, Zuckerberg was able to go unchallenged when making blatantly unsupportable claims like, “people have the ability to see everything that they have in Facebook, to take that out, delete their account, and move their data anywhere that they want.”

At the very least, a Congress that understands internet fundamentals would be able to push back on claims like these—claims that any moderately experienced Facebook user could disprove in ten minutes. But will well-informed representatives actually deliver the regulatory reform we need in order to curb data abuses by Facebook and its brethren?

I’m not sure.


JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Columbia Law Review, Vol. 117, No. 6 (October 2017), pp. 1369-1459
Columbia Law Review Association, Inc.
College English, Vol. 75, No. 5 (May 2013), pp. 513-533
National Council of Teachers of English
Columbia Law Review, Vol. 114, No. 3 (April 2014), pp. 583-676
Columbia Law Review Association, Inc.
Theory and Society, Vol. 35, No. 5/6 (December 2006), pp. 491-506