
When a colleague told me that ChatGPT (Generative Pre-trained Transformer) could write a one-page response to a complex work of literature in under fifteen seconds, I rolled my eyes. Impossible! I thought. Even if it can, the paper probably isn’t very good. But my curiosity was piqued. A few days later, I created a ChatGPT account of my own, typed in a prompt that I had recently assigned to my ninth-grade students, and watched, with chagrin, as ChatGPT effortlessly produced a very good essay. In that one moment, I knew that everything about the secondary English classroom, and society in general, was about to change in ways both exciting and terrifying.


According to Robert F. Murphy, a senior researcher at the RAND Corporation, “Research on AI [artificial intelligence] got its start in the 1950s with funding primarily from the U.S. Department of Defense. One of the early products of this work was the development of rule-based expert systems (that is, systems that mimic the decisionmaking ability of human experts) to support military decision making and planning.” The goal of AI development from the outset was to create programming that could enhance how human beings go about problem solving. And while forms of AI, such as the assistants in our smartphones, self-driving cars, and chatbots, have become intrinsic to the fabric of our society, most people don’t recognize these now-everyday technologies as AI because they do exactly what they were designed to do: seamlessly assist us with daily tasks.

ChatGPT, however, is different from, say, Google Maps helping you navigate your morning commute. There is not much room for that route-oriented AI to think independently and creatively through its task: the user types in a destination, and the AI plots a course to get there. ChatGPT can do more because its parameters are elastic. It can write songs and poems of great complexity (I asked it to write a villanelle about orange juice, and it created a complex and hilarious one); it can offer insight into existential questions like the meaning of life; it can revise a business letter or offer feedback on a résumé. In many ways, it feels like a personal assistant that is always there to help.

And while that’s revolutionary, it is also problematic. ChatGPT can’t “think” on its own or offer opinions; it can only respond to the directions it is given. But once the user gives it the go-ahead, along with a few specifics, ChatGPT engages in complex problem solving and executes tough tasks, like writing an essay, in seconds. And because it can be used in any way, for almost anything, there are no directions or “how-tos” to govern that use, which is exactly what makes it so dangerous. Creators and users alike are putting the proverbial plane together as they’re flying it.


In “Should Artificial Intelligence Be Regulated?” Amitai Etzioni and Oren Etzioni contend that the more advanced the AI, the more parameters it requires. Monitoring ChatGPT is infinitely complicated because the coding that drives its very human-like thinking is, ironically, too massive and intricate for real human thinking to monitor. “The algorithms and datasets behind them will become black boxes that offer us no accountability, traceability, or confidence,” they write [. . .], “render[ing] an algorithm opaque even to the programmers. Hence, humans will need new, yet-to-be-developed AI oversight programs to understand and keep operational AI systems in line.”

Is it even possible for ChatGPT’s creators to regulate it, or is the AI simply being maintained so that others can use it and indulge in the novelty of its thinking?

If the answer leans away from boundaries, then the implication is that ChatGPT’s overseers may not understand what they’ve unleashed. In an interview with ABC News, OpenAI CEO Sam Altman claimed that “any engineer” has the ability to say, “we’re going to disable [ChatGPT] for now.” While that may reassure some people, history has shown what happens when regulation is placed in the hands of tech CEOs instead of in those of a more objective and independent regulatory body. Consider the BP Deepwater Horizon oil rig disaster of 2010. A number of investigations asserted that management routinely placed profits over safety. The rig eventually exploded, eleven workers died, and countless gallons of crude contaminated the Gulf of Mexico. Once ChatGPT becomes profitable for investors and companies, will administrators and engineers have both the will and the authority to shut it down if the program inflicts harm? What, exactly, would count as harm? What are the parameters? Who is guarding the proverbial guardians? The answer, as the Etzionis argue, is opaque and ambiguous at best.

Without any sort of definitive guidance, users are left to apply ChatGPT however they see fit. This has had an immediate impact in the world of education. According to an Intelligent.com poll, 30 percent of college students surveyed have used ChatGPT on written homework assignments; similarly, research from the Walton Family Foundation showed that 33 percent of kids aged 12 to 17 use ChatGPT for schoolwork and that 51 percent of teachers report using ChatGPT in lesson planning. In other words, both teachers and students are actively and consistently using the technology without limits or instructions, and it is likely being misappropriated for cheating and shortcuts. The quick response might be to ban the technology outright, a suggestion recently made by Virginia Governor Glenn Youngkin in a CNN town hall, or, at the other extreme, to allow teachers to engage with it openly.

Ignoring the issue or absentmindedly embracing it will, in either scenario, lead to misuse of a program society does not understand. Imagine a generation of students and professionals who rely on a machine to think for them. The result would be an educational landscape where cheating is rampant on both ends of the spectrum: students will have ChatGPT write their essays, and teachers will have ChatGPT grade them. It can do the latter in mere seconds, producing a completed rubric, a grade, and detailed comments; it also, however, makes mistakes. To prevent this, there must be a concentrated effort, and an ongoing conversation, in all industries to adapt AI so that it enhances and supplements various professions rather than weakens them through inappropriate and unregulated use.

In “Remedies for Robots,” published in The University of Chicago Law Review, Mark A. Lemley and Bryan Casey emphasize that we must understand technologies like ChatGPT in order to integrate them into society effectively. “If we don’t know how the robot ‘thinks,’” they write, “we won’t know how to tell it to behave in a way likely to cause it to do what we actually want it to do.” ChatGPT has incredible potential, but for it to be useful and reliable, individuals must be informed on how and when to use it. This won’t eliminate all misuse, and it is no replacement for official regulation carried out by a qualified agency, but it will create a conversation, the first step to making some form of human oversight more transparent and to using this platform in a healthy way.

Some educators have already initiated this conversation with their students. Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School of Business, has argued for welcoming ChatGPT into classrooms as a way of understanding and controlling the tool. He likens ChatGPT to long-accepted, educationally sound devices. “We taught people how to do math in a world with calculators [. . .],” Mollick said on NPR, implying that AI is just another device for both students and teachers to utilize. Comparing ChatGPT to a calculator is akin to likening an iPhone to an abacus. Yet Mollick makes a good point: the technology, while complex, can be a valuable classroom resource if its use is acknowledged and discussed. Mollick coaches his students on how to apply ChatGPT to complex problem-solving or essay writing. His goal is to adapt to changing times and ensure that his students’ thinking and competencies are enhanced, not replaced. The technology shouldn’t be regarded in terms of good or bad. It is here, and that means educators and students need to learn how to use it, and when not to.

ChatGPT does not herald the dawn of the AI apocalypse we know from T2 and The Matrix. It does, however, usher in a new era, and the opportunity is before us right now to study, discuss, and inform both students and parents about how to utilize ChatGPT appropriately. In his TED Talk on the benefits of embracing artificial intelligence, designer and engineer Maurice Conti opined that our “natural human capabilities are going to be augmented by computational systems that help you think, robotic systems that can help you make, and a digital nervous system that connects you to the world far beyond your natural senses.”

AI can be a good thing that yields growth and opens infinite pathways for students, but parameters must be thoroughly and thoughtfully designed by stakeholders at a grassroots level to ensure that a balance between human and computational thinking takes shape. OpenAI, the company behind ChatGPT, is not going to slow the program’s ongoing development, so educators and their communities must collaborate to integrate this new platform into units, lessons, and assignments. Otherwise, we risk creating a world in which machines truly do the thinking for both pupils and their instructors, which likely spells doomsday for the types of critical and abstract thinking educators so desperately try to cultivate.



Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Robert F. Murphy, RAND Corporation, January 1, 2019.

Amitai Etzioni and Oren Etzioni, “Should Artificial Intelligence Be Regulated?” Issues in Science and Technology, Vol. 33, No. 4 (Summer 2017), pp. 32–36. Arizona State University.

Mark A. Lemley and Bryan Casey, “Remedies for Robots,” The University of Chicago Law Review, Vol. 86, No. 5 (September 2019), pp. 1311–1396.