One of the most contentious issues in the ongoing integration of the internet into our daily life has been the collection and monetization of users’ personal data. As the general public is becoming increasingly aware, companies such as Google, Facebook, and Amazon employ users’ information and “metadata” (time, location, contacts, etc.) to customize their services—and to target their advertising.

Big data, the aggregation of millions of users’ opinions and behavior, has gone from a by-product of search and e-commerce to the very core of these companies’ business. Many of the convenient newer applications they offer us, such as live Google traffic alerts or social media “friend” recommendations, are only possible with big data.

As we know too well, big data has big legal and ethical implications for security on both a personal and a social scale. Scholars have even noted that the domestication of this technology, its integration into our everyday life and social pleasures, encourages the public to downplay these implications as “soft” security problems, compared to the “hard” problems of military, diplomatic, or food security.

And yet, these “soft” security problems are significant. On a basic level, companies collecting data have found it difficult to prevent criminals from accessing sensitive personal and financial data. Some events, like the Equifax data breach and its management’s attempt to cover it up, seem merely to reveal incompetence, greed, and the fragility of a long-standing financial service’s overextended networks. Others, such as Cambridge Analytica’s sale of private personal data from Facebook users to political campaigns, indicate intentional abuse. Although Facebook has claimed that Cambridge Analytica obtained the data fraudulently, the entire platform is designed to analyze its users and customize marketing to them; because the service is free, that analysis is its business model.

More pervasive than these explicitly illegal or unethical activities are the changes internet services are making to society’s norms of socializing, recreation, and privacy itself. While Equifax obtains credit information about people from various sources without their input, Facebook persuades individuals to put most of their personal information onto the site themselves. We willingly offer our personal photos, opinions, and other information because of the great pleasure it gives us. We enjoy sharing our lives with friends and family, and social media platforms are designed to deliver fast, quantifiable feedback from our followers that fires our dopamine centers.

News stories critical of technology’s effect on society are generally outnumbered by the avalanche of enthusiastic reviews of every new tech gadget that allows us to record ourselves more efficiently and pleasurably, and to share that record on the internet at all times of day. Beyond smartphones, tech companies are promoting “the internet of things,” an interconnected web of home and office devices that may all be personalized and automated from afar using personal data settings and surveillance.

How can responsible legislation and internet regulation keep up with the pleasures that sway the public to accept these rapid changes as natural and inevitable?

Is Privacy Dead?

American society is in a state of confusion about what personal and family privacy means in the twenty-first century, and how to protect it. Legal scholar David Sklansky traces this confusion to a few key shifts. First, much of our common-law precedent on privacy rests on the Fourth Amendment, which is geared towards preventing government overreach in “unreasonable searches and seizures.” The public is generally aware of this civil right, but the meanings of these words are changing. The spread of closed-circuit cameras and of our own audiovisual devices, along with their interconnection with internet databases that provide us convenient services, means we have come to permit, and indeed expect, far more surveillance in our lives than before.

Furthermore, since the 1970s, many social and informational services have been ceded to the private sector. The philosopher Michel Foucault argued that the “biometric” data collection of prisons, schools, and the military was foundational to the all-encompassing power of the nineteenth-century nation-state. These political functions became so tightly integrated into society during the twentieth century that citizens effectively disciplined themselves, willingly rendering up whatever personal data the government wanted. In the past decade, social media has emerged as a “killer app” for molding personal habits, making eager self-surveillance a new social norm.

As a result, Sklansky notes, the public response to the 2013 revelation that the NSA was collecting telephone metadata from millions of U.S. citizens was “a collective shrug.” At present, private-sector business far exceeds the government in the scope of its data collection. But unlike the social contract enshrined in the Constitution, which prohibits certain government activities, our legal system actually protects these companies’ right to make contracts with individuals allowing the use of their data and metadata for various purposes. In practice, these are the complex, fine-print user agreements most people sign without reading closely.

Legislators are scrambling to catch up, reshaping contract law in ways that force businesses to make stronger efforts to protect consumers. Prior to this decade, writes Theodore Claypool, courts tended to punish internet companies only for misrepresenting, in their user agreements, the ways in which they used data. Since 2010, the Federal Trade Commission has set more stringent standards, forcing companies like Facebook and Twitter to obtain consumers’ explicit consent before using their data for marketing purposes. Individual states have passed laws preventing employers from demanding access to workers’ social media accounts and granting minors the right to fully erase prior posts.

The U.S. does not yet have laws as comprehensive as the European Union’s General Data Protection Regulation (GDPR), which entered into effect on May 25, 2018. Lacking the legal tradition or political will to monitor the internet directly, the U.S., Tim Wu suggests, should instead consider designating internet services companies as fiduciaries, with a legal responsibility to protect their users as strong as that of attorneys or medical doctors.

Showing Ourselves, Seeing Ourselves

These legal protections are welcome, but they still fall short of comprehending and addressing the ways widespread surveillance and social media have become culturally normalized. Robert Sweeny argues that these shifts have occurred through trends as wide-ranging as the rise of reality TV, security-centric news reporting after 9/11, and even video games.

Social media platforms have, in fact, integrated free-to-play multiplayer gaming apps to attract users and establish regular habits. Even a function as prosaic as the face-recognition algorithm used to automatically tag photos online becomes like a game: users enjoy the artificial intelligence’s humorous misidentifications while training it to better recognize the faces of their friends and family. The newest Apple iPhone has technology sophisticated enough to use face identification as a security measure to unlock the phone, and to map users’ expressions onto silly emoji icons.

Sweeny believes that, as surveillance technology becomes more domesticated and innocuous, it is now the responsibility of the educational system to teach students about the power structures behind surveillance and their own willing participation in it. Teaching an introductory art course in the mid-2000s, Sweeny had students discuss their differing interpretations of personal photos, and the range of emotions they felt when viewing each other on a closed-circuit camera trained on a campus quad. Recognizing the pleasure and power that anyone, even strangers, could feel when watching an unaware subject was an important part of the exercise. This lesson is just as important for understanding the power of social media in the present, although such abstract insights now tend to get lost in the details of network encryption and politics.

Home Smart Home

Both the pleasures of convenience and the power of control seem heightened in internet services designed for the home. These are provided through networked gadgets ranging from thermostats to light bulbs, security cameras, lock systems, and sprinklers. Independent innovators such as Nest Labs have been acquired over time by industry titans (in Nest’s case, Google). These systems provide the convenience of monitoring and controlling all networked functions from afar, at least until the system crashes for hours at a time.

Anders Albrechtslund and Thomas Ryberg describe users’ relationships with their smart homes as “participatory surveillance.” They argue that such devices invert the formerly one-way nature of surveillance by giving users the ability to customize their experience and to access all the data that the service providers can. In practice, they studied how such devices helped families reduce home power consumption and care for each other. Considering older notions of privacy obsolete, they optimistically believe that users’ participation will push designers to incrementally increase users’ control over the new home surveillance networks.

Yet there are limits to our control. The most ubiquitous home internet device in 2018 is the Amazon Echo, which takes voice commands from users under the default name “Alexa.” Alexa will play music, control interconnected smart home devices, check your bank balance, read your fitness tracker, buy plane tickets, and, of course, order products from Amazon. “She” will engage in conversation and tell jokes. The device is popular enough that the real estate developer Lennar plans to offer it pre-built into all of its new units. What most users fail to consider, however, is that Alexa is always listening. Recently, a malfunctioning Echo unit recorded a Portland woman’s conversations and sent them to a random contact. If the onward march of the internet of things is unstoppable, it is clear that these devices will shape our habits just as much as we will shape their design.

Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

California Law Review, Vol. 102, No. 5 (October 2014), pp. 1069-1121
California Law Review, Inc.
Business Law Today (January 2014), pp. 1-4
American Bar Association
Studies in Art Education, Vol. 47, No. 4 (Summer 2006), pp. 294-307
National Art Education Association
Design Issues, Vol. 27, No. 3 (Summer 2011), pp. 35-46
The MIT Press