We asked ChatGPT to write an AI reading list, which may seem lazy, but we wanted to see how well it follows directions. We used this query: “Can you please put together an annotated bibliography of seminal journal articles, open access books, and research papers about AI that are in the JSTOR database?” Here’s what it gave us. As you’ll notice, it interpreted “in the JSTOR database” rather loosely, which took us hours to sort out. We bet you can do better. Let us know. We’d like to hire a human to create a reading list that helps those of us with humanities degrees understand how we got here.

“A Logical Calculus of the Ideas Immanent in Nervous Activity,” by Warren S. McCulloch and Walter Pitts (1943)

This paper is often considered the founding document of artificial neural networks. McCulloch and Pitts proposed a mathematical model of the neuron and demonstrated how it could be used to perform logical operations. [Editor’s note: This particular article is not on JSTOR, but here are a few related papers that might be helpful to read. “The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts’s ‘Logical Calculus of Ideas Immanent in Nervous Activity’.” More on Walter Pitts here.]

“Computing Machinery and Intelligence,” by Alan Turing (1950)

This paper is often considered one of the earliest and most important works in the field of AI. Building on his earlier concept of a “universal machine” that could perform any computation a human could, Turing argued that such a machine might be made to simulate human intelligence. The paper proposed what is now known as the “Turing test” for determining whether a machine can exhibit intelligent behavior. [Editor’s note: On Turing’s obituary.]

“A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon (1956)

This paper is the original proposal for the Dartmouth Conference, which is often considered the birthplace of AI as a field of study. The authors proposed a two-month summer research project that would bring together researchers from a variety of disciplines to study the problem of “making machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

“The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” by Eugene Wigner (1960)

While not specifically about AI, Wigner’s paper has been influential in shaping thinking about the role of mathematics in scientific discovery. Many AI algorithms are based on mathematical principles, and this paper provides insight into why these principles are so effective. [This particular paper is not on JSTOR, but many mathematicians and scientists have engaged with it since it first appeared.]

Perceptrons, by Marvin Minsky and Seymour Papert (1969)

This book is a seminal work in the field of neural networks, which are a key component of modern AI systems. The authors introduced the concept of a perceptron, a type of neural network that could be trained to recognize patterns in data, and explored the limitations of this approach. Minsky and Papert argued that these networks were too limited to perform many useful tasks, leading to a decline in interest in neural networks for several decades.

“A Few Useful Things to Know About Machine Learning,” by Pedro Domingos (2012)

This paper provides a concise overview of key concepts in machine learning, including overfitting, bias-variance tradeoff, and ensemble methods. It is widely cited and is considered a useful reference for anyone working in the field.

“Generative Adversarial Networks,” by Ian J. Goodfellow et al. (2014)

This paper introduced the concept of generative adversarial networks (GANs), a type of neural network architecture that can generate new data samples that are similar to a given dataset. The authors discuss the theoretical foundations of GANs and provide several examples of their use in practice. GANs have been used in a wide range of applications, including image and video generation, and the production of deepfakes.

What’s missing? Let us know—we want your version of this reading list.



JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Synthese, Vol. 141, No. 2, Neuroscience and Its Philosophy (Aug., 2004), pp. 175-215
Synthese, Vol. 162, No. 2 (May, 2008), pp. 235-250
Mind, Vol. 59, No. 236 (Oct., 1950), pp. 433-460 (Oxford University Press on behalf of the Mind Association)
The American Mathematical Monthly, Vol. 87, No. 2 (Feb., 1980), pp. 81-90 (Taylor & Francis, Ltd. on behalf of the Mathematical Association of America)
The American Journal of Psychology, Vol. 84, No. 3 (Sep., 1971), pp. 445-447 (University of Illinois Press)
A Primer (RAND Corporation)