
How will my data be used against me?


That’s the question on many minds in the wake of recent coverage of Cambridge Analytica, its extraction of Facebook data, and the use of that Facebook data by the Trump campaign. While these data collection and marketing practices came as no surprise to many people working in the digital marketing industry (myself included), they have shocked a great many people into re-evaluating how they use social media, and have even led some people (and some companies) to withdraw from Facebook.

If you ask me, that’s a mistake. The reality is that vast swaths of the internet are built on the collection of user data, and on the use of that data to deliver targeted advertising. Withdrawing from Facebook doesn’t get you out of the hands of data collectors, and as I have previously written here, I think there is little hope for building an alternative Facebook that operates any differently. And I for one would hate to see a world in which the power of social networking is available only to those who are unconcerned about issues like personal privacy and media integrity.

Targeted Advertising Isn’t New

But there’s no getting around the fact that our access to the miracle of global connectivity comes at a very high price: our personal privacy and autonomy.

We should be worried about the garden-variety manipulation of our purchase decisions, social interactions, and information environment, all of which are shaped every day by social media algorithms and targeted advertising. It’s worth remembering that there were significant concerns about the broader impact of targeted advertising long before digital data gathering and social media ad targeting became the norm. All the way back in 1997, marketing experts N. Craig Smith and Elizabeth Cooper-Martin found that when it came to targeted advertising, “[w]omen, nonwhites, and older and less educated consumers are more likely to be concerned about targeting and express disapproval.” That reflects their finding that targeted advertising is of greatest concern when it affects vulnerable or less privileged people—a consideration that should inform our thinking about the way today’s vastly more sophisticated targeting may have radically different effects on different social groups.

How the Government Could Help

The Atlantic Council think tank has usefully framed the challenge in its recent report, “The MADCOM Future: How Artificial Intelligence Will Enhance Computational Propaganda, Reprogram Human Culture, And Threaten Democracy…And What Can Be Done About It.” That report defines MADCOMs as “the integration of AI systems into machine-driven communications tools for use in computational propaganda,” and predicts that MADCOMs “will gain enhanced ability to influence people, tailoring persuasive, distracting, or intimidating messaging toward individuals based on their unique personalities and backgrounds, a form of highly personalized propaganda.”

The Council’s prescription, as articulated in “The Emergence of MADCOMs,” focuses on potential government intervention, including recommendations that:

  • The Department of Homeland Security should expand its cybersecurity mission to include protection of the US public from foreign computational propaganda…
  • The Department of State should develop a computational engagement strategy for defending against online foreign propaganda…
  • Federal, state, and local governments should develop tools for identifying adversary computational propaganda campaigns, and for countering them with measures other than counter-messaging against the US public.

That all sounds great, but it only focuses on the specific (if important) scenario in which targeted messaging is perceived as a foreign threat. Even though the Cambridge Analytica story has landed in the middle of a massive investigation into Russian intervention in the United States presidential election, it would be an enormous mistake to think that psychographic profiling and ad targeting only happen—or are only problematic—when they are used in the context of a political campaign, or wielded by a foreign power.

And indeed, we have developed a collective suspicion of persuasion in all forms—particularly in the political sphere. As political science and communications scholar Diana C. Mutz points out in a thoughtful piece on the gap between the myth and reality of media influence on politics:

The entire purpose of election campaigns is to provide politicians with opportunities to expose the public to their persuasive arguments. Persuasion, rather than coercion or violence, was thought by our Founders to be a preferable means of conducting politics. But today we are ambivalent, at best, about this core part of our political system.

With enough public pressure and political will, it’s conceivable that we will someday move towards protecting vulnerable people from manipulation by regulating the biggest platforms, like Facebook and Twitter. I’m far less optimistic that we’ll develop robust, effective regulations that could address the range of smaller social networks that might attract Facebook defectors. That’s one of the reasons I’d like to see people stay on Facebook: I’d rather deal with one gorilla on a chain than a thousand wild monkeys.

Why We Have to Get Smarter

If we don’t expect government to step in, and we don’t want to abandon Facebook to the trolls and the marketers, what can we do?

The solution is simple—and difficult. If we can’t meaningfully limit the way our data is harvested, and we can’t effectively constrain the way that data is used to tailor and target the ads and content we see, there’s only one place we can fight the insidious and manipulative effects of near-psychic advertising: in between our ears. We have to become smarter news and advertising consumers, and learn to resist the unceasing stream of slanted messages that come our way.

As the Atlantic Council put it in their report:

Individuals must become savvier consumers of information and advocate for stronger personal privacy protections from their politicians. Collective intelligence systems for determining truth from fiction can be useful, and paying for quality news is an effective way to incentivize high-value information.

That’s a lot harder than it sounds. People consistently underestimate the extent to which their consumption choices or political views are affected by media coverage or advertising, even when they believe that other people can be manipulated. Researchers call this the “third-person effect.” As communications scholars Jeremy Cohen, Diana Mutz, Vincent Price, and Albert Gunther note in their article on the third-person effect in defamation law, “The third-person effect suggests that people tend to believe that they are not affected by media messages as strongly as are others exposed to the same message. As Davison put it, people believe the media’s ‘greatest impact will not be on “me” or “you”, but on “them”—the third persons’.”

That media hubris means that we worry about protecting other people from fake news and targeted messages, without asking whether we ourselves are being manipulated. And there are certainly some systemic differences in how vulnerable we are to manipulation. In their article “Social Media and Fake News in the 2016 Election,” economics scholars Hunt Allcott and Matthew Gentzkow found significant differences in who believed the kind of fake news that we now know Cambridge Analytica helped target:

First, heavy media consumers are more likely to believe ideologically aligned articles. Second, those with segregated social networks are significantly more likely to believe ideologically aligned articles, perhaps because they are less likely to receive disconfirmatory information from their friends…Third, “undecided” adults (those who did not make up their minds about whom to vote for until less than three months before the election) are less likely to believe ideologically aligned articles than more decisive voters.

But the flip side of this finding is that there are factors that can insulate people from persuasion by false news—and perhaps from the larger phenomenon of targeted messaging. According to Allcott and Gentzkow, “people who spend more time consuming media, people with higher education, and older people have more accurate beliefs about news… people who report that social media were their most important sources of election news were more likely both to correctly believe true headlines and to incorrectly believe false headlines.”

What We Can Do Now

That translates into some very straightforward recommendations that we should all follow, regardless of whether we believe we are personally vulnerable to manipulation by advertisers, news outlets, or political campaigns:

  • Commit to daily media engagement—not just through whatever shows up in your news feed, but by actively seeking out trusted media sources.
  • Invest in education for yourself, your family, and your larger community—not only general education, but specifically around learning to discern the difference between independent, rigorous reporting and sponsored or fake news.
  • Validate every social media headline by checking it against other coverage, for example by doing a Google News search.

I suspect I’m preaching to the choir here: after all, if you’re the kind of person who reads a thousand words into a story based on academic research, you’re probably not the kind of voter or consumer who swallows whatever message pops up in your Facebook feed. But that’s all the more reason we need you on Facebook: so that you can drop a Snopes link or factual correction whenever you see a friend sharing dubious information.

If Facebook’s filter bubble keeps you inside the universe of fellow JSTOR junkies, you can take a page from the troll army that searches Facebook for comment threads on topics like gun control or the Trump administration. Just click on one of the headlines in the tiny nightmare of Facebook’s Trending sidebar, and you can get to work adding links or informed perspective to threads that are currently missing both.

Of course, neither an army of fact-checking vigilantes nor a newly self-educating electorate is likely to overcome the systemic disinformation and manipulation of consumers and voters in the era of large-scale data collection and targeting. Lest we leave this all in the hands of overwhelmed consumers, let me return to one last recommendation from the Atlantic Council: “The technology sector should play a key role in developing tools for identifying, countering, and disincentivizing computational propaganda. Technology companies should make these tools ubiquitous, easy to use, and the default for platforms. They should also align business models and industry norms with societal values, and develop industry organizations for self-regulation.”

We must demand that the platforms that are reaping billions by selling our eyeballs and our data in turn invest in protecting their users from manipulation and abuse. In my opinion, that requires nothing less than a wholesale reconstruction of our regulatory environment, so that American internet users enjoy the same kind of privacy protections that are now the law in Europe. And nothing will serve us better in becoming advocates for a better internet tomorrow than becoming smarter news consumers today.

Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Journal of Marketing, Vol. 61, No. 3 (Jul., 1997), pp. 1-20
American Marketing Association
The MADCOM Future: How Artificial Intelligence Will Enhance Computational Propaganda, Reprogram Human Culture, And Threaten Democracy…And What Can Be Done About It
Atlantic Council
Daedalus, Vol. 141, No. 4, On Public Opinion (Fall 2012), pp. 83-97
The MIT Press on behalf of American Academy of Arts & Sciences
The Public Opinion Quarterly, Vol. 52, No. 2 (Summer, 1988), pp. 161-173
Oxford University Press on behalf of the American Association for Public Opinion Research
The Journal of Economic Perspectives, Vol. 31, No. 2 (Spring 2017), pp. 211-235
American Economic Association