
Interactions that dehumanize us.


Disinformation that misleads us.

Algorithms that manipulate us.

These are the risks posed by the explosion in generative artificial intelligence: AI, often in the form of "large language models" trained on massive amounts of pre-existing content, that generates text, images, and code, and that provides information and answers to an ever-growing range of questions.

They’re also the risks that made many people worry about social media.

What We Missed about Social Media

I wish I had worried about social media more. In 2005, my partner and I launched what would now be called a social media agency, at a time when few had even heard the term “social media.” Like a lot of people working on the nascent social web at that time, we were a lot more attuned to its potential than to its risks.

Before the advent of YouTube, Facebook, and Twitter, social media was decentralized, not very corporate, and pretty small: It felt more like a club of people exploring the way user-created content could fuel activism, community, and creativity than the next gold rush. I was so confident that this new medium was intrinsically biased towards social engagement that I used to tell companies that they would have a hard time competing with the grassroots causes and callings that drove most online participation at that time.

But I forgot about this little thing called money. It turns out that if you’re prepared to buy attention with ads and celebrity spokespeople and an endless array of contests and prizes, you can absolutely pry attention away from social advocacy and creativity and direct it towards buying stuff and reviewing stuff and even unboxing stuff on camera.

Money and Media

Once people figured out that there was money to be made with social media—and a lot of it—the dynamics changed quickly. “With digital ad revenues as their primary source of profit,” Douglas Guilbeault writes in “Digital Marketing in the Disinformation Age,” “social-media companies have designed their platforms to influence users on behalf of marketers and politicians, both foreign and domestic.”

Advertising became more sophisticated, to recover the eyeballs and attention that TV and newspapers were losing to social networks and web browsing. In turn, “digital platforms driven by ad revenue models were designed for addiction in order to perpetuate the stream of data collected from users,” as L. M. Sacasas puts it in “The Tech Backlash We Really Need.”

And content became more sensational and more polarizing and more hateful, because sensational and polarizing content was what attracted the traffic and engagement that advertisers were looking for; an explosion in hate speech was the result. As Bharath Ganesh notes in "The Ungovernability of Digital Hate Culture," "[i]n a new media culture in which anonymous entrepreneurs can reach massive audiences with little quality control, the possibilities for those vying to become digital celebrities to spread hateful, even violent, judgements with little evidence, experience, or knowledge are nearly endless."

Most of the terrible, destructive impacts of social media stem from this core dynamic. The bite-sized velocity of social media has made it endlessly distracting and disruptive to our families, communities, relationships, and mental health. As an ad-driven, data-rich, and sensational medium, it’s ideally suited to the dissemination of misinformation and the explosion of anti-democratic manipulation. And as a space where users create most content for free, while companies control the platforms and the algorithms that determine what gets seen, it has put creators at the mercy of corporate interests and made art subservient to profits.

Where We Went Wrong

Now we’re getting ready to do it all again, only faster and with far more wide-reaching implications. As Allen and Thadani note in “Advancing Cooperative AI Governance at the 2023 G7 Summit,” “the transition to an AI future, if managed poorly, can…displace entire industries and increase socioeconomic disparity.”

We’re embracing technologies that create content so rapidly and so cheaply that even if that content is not yet quite as good as what humans might create, it will be more and more difficult for human creators to compete with machines.

We’re accepting opaque algorithms that deliver answers and “information”—in quotes, because AIs often present wholly invented “hallucinations” as facts—without much transparency about where this information came from or how the AI decided to construct its answers.

We’re sidestepping crucial questions about bias in they ways these AIs think and respond, and we’re sidestepping crucial decisions about how we deploy these AIs in ways that mitigate rather than compound existing inequalities.

How To Do AI Better

If all this makes me sound like a terrible pessimist, it’s only because I have to fight so hard against my innate fascination with emergent tech. I’m falling hard for the magic and power of AI, just like I fell hard for social media and like I fell hard for my first experiences of the web, of the internet, of the personal computer.

Those of us who are truly inspired and enchanted by the advent of new technologies are the ones who most need to rein in our enthusiasm, to anticipate the risks, and to learn from our past mistakes.

And there’s a lot we can learn from, because we know what we were warned about last time, what we disregarded, and how we missed the opportunities to avert the worse excesses of social media.

That begins with the companies driving this transformation. Instead of fighting regulation, AI companies could advocate for effective regulation so that they’re less tempted to sideline ethical and safety issues in order to race ahead of the competition. Some AI leaders are already signaling their support for regulation, as we saw when OpenAI’s Sam Altman appeared at a recent Senate hearing.

But we’ll still be in a dangerous position if regulators depend on the technical advice of AI executives in order to set appropriate rules, because even well-intentioned execs are going to be less than objective about regulations that constrain their potential for profit. AI is also a much more complicated, much faster moving area to regulate; legislators who were hard-pressed to comprehend and regulate social media are unlikely to do better with AI.

That’s why, as King and Shull argue in “How Can Policy Makers Predict the Unpredictable,” “policy makers must prioritize developing a multidisciplinary network of trusted experts on whom to call regularly to identify and discuss new developments in AI technologies, many of which may not be intuitive or even yet imagined.”

It’s going to take international coordination and investment to develop an independent source of regulatory advice that is genuinely independent and capable of offering meaningful advice: Think of an AI equivalent of the World Health Organization, with the expertise and resources to guide AI policy and response at a global level.

Becoming a Smarter User of AI

It’s just as crucial for ordinary folks to improve their own AI literacy and comprehension. We need to be alert to both the risks and opportunities AI poses for our own lives, and we need to be informed and effective citizens when it comes to pressing for government regulation.

Here, again, the example of social media is instructive. Social networks made massive investments in understanding how to capture, sustain, and monetize our attention. We only questioned this effort once we saw the impact it had on our mental health, our kids' wellbeing, and the integrity of our democracies. By then, these networks were so embedded in our personal and professional lives that extracting ourselves from social media imposed very real social and professional costs.

This time, let's figure out how to be the agents who use the tools, rather than the subjects who get manipulated. We won't get there by avoiding ChatGPT, DALL-E, and the like. Avoidance only makes us more vulnerable to manipulation by artificially generated content or to replacement by AI "workers."

Instead, we human workers and tech users need to become quickly and deeply literate in the tools and technologies that are about to transform our work, our daily lives, and our societies, so that we can meaningfully shape that path. In a delightful paradox, the AIs themselves can speed us along that path to AI literacy by acting as our self-documenting guides to what's newly possible.

How AI Helps Build Mastery

If you have yet to delve deep into the potential of generative AI, here’s one place you can start: ask an AI for some examples of how it can transform your own work.

For example, you might prompt ChatGPT with something like:

You are a productivity consultant who has been hired to support the productivity and well-being of a team of policy analysts. You have been asked to identify ten ways these policy analysts can use ChatGPT to facilitate or support their work, which includes reading news stories and academic articles, attending conferences, booking briefings, drafting briefing notes and recommendations, and writing reports. Please provide a list of ten ideas for how to use ChatGPT to support these functions.

Once ChatGPT provides you with a list of options, pick one that you’d like to try out. Then ask ChatGPT to give you step-by-step instructions on how to use it for that particular task. You can even follow up your request for step-by-step instructions with a prompt like,

You are an automation researcher. Review the previous conversation and note five risks or considerations when automating these tasks or adopting this approach.
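If you'd rather script this exchange than type it into a chat window, here's a minimal sketch of the same two-turn conversation using OpenAI's Python client. The package, model name, and surrounding code are my assumptions for illustration, not something prescribed here; any chat-capable model will do.

# A minimal sketch: the two prompts above sent as one multi-turn
# conversation via OpenAI's Python client (pip install openai).
# Assumes an OPENAI_API_KEY environment variable; the model name
# is a placeholder you can swap for any chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "user",
        "content": (
            "You are a productivity consultant supporting a team of "
            "policy analysts. Identify ten ways these analysts can use "
            "ChatGPT to facilitate or support their work, which includes "
            "reading news stories and academic articles, attending "
            "conferences, booking briefings, drafting briefing notes "
            "and recommendations, and writing reports."
        ),
    }
]

# First turn: ask for the ten ideas.
first = client.chat.completions.create(model="gpt-4o", messages=messages)
ideas = first.choices[0].message.content
print(ideas)

# Keep the reply in the transcript so the follow-up prompt really can
# "review the previous conversation."
messages.append({"role": "assistant", "content": ideas})
messages.append(
    {
        "role": "user",
        "content": (
            "You are an automation researcher. Review the previous "
            "conversation and note five risks or considerations when "
            "automating these tasks or adopting this approach."
        ),
    }
)

# Second turn: ask for the risks and considerations.
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)

The detail worth noticing is that the first reply gets appended to the message list before the follow-up is sent; that running transcript is what lets the "automation researcher" prompt actually review the earlier answers.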

Seeing how generative AI analyzes and enables the automation of your own work or personal tasks is a great way to understand how AI works, where its limits lie, and how it might transform your own corner of the world.

That understanding is what will allow you to use AI instead of getting used by it, and it’s what will allow you to participate meaningfully in the public conversation about how to shape AI, right now. And now is when we need to hear many thoughtful, informed, human voices engaging with the question of how to regulate and use AI.

Otherwise, our voices will be drowned out by the ever louder, ever more pervasive voices of our new AI companions.



Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Douglas Guilbeault, "Digital Marketing in the Disinformation Age," Journal of International Affairs, Vol. 71, No. 1.5, Special Issue: Contentious Narratives: Digital Technology and the Attack on Liberal Democratic Norms (2018), pp. 33–42. Journal of International Affairs Editorial Board.

L. M. Sacasas, "The Tech Backlash We Really Need," The New Atlantis, No. 55 (Spring 2018), pp. 35–42. Center for the Study of Technology and Society.

Bharath Ganesh, "The Ungovernability of Digital Hate Culture," Journal of International Affairs, Vol. 71, No. 2, Ungoverned Spaces (Spring/Summer 2018), pp. 30–49. Journal of International Affairs Editorial Board.

Allen and Thadani, "Advancing Cooperative AI Governance at the 2023 G7 Summit," Center for Strategic and International Studies (CSIS), April 1, 2023, pp. 5–15.

King and Shull, "How Can Policy Makers Predict the Unpredictable," Modern Conflict and Artificial Intelligence, Centre for International Governance Innovation, January 1, 2020, pp. 1–5.