
It seemed like a good idea: Microsoft introduced an artificial intelligence (AI), Tay, to comment on social media and learn to interact with others. Within hours, Tay had become a genocidal maniac. Microsoft spent hours frantically deleting her racist, misogynistic, Nazi-sympathizing tweets before finally pulling her offline when she began advocating genocide. Some of the problems stemmed from a feature that let users instruct Tay to repeat an offensive tweet verbatim; trolls thought it funny to teach Tay awful ideas. By the end, however, Tay's racist bile was self-generated: she had learned hate. How did this happen? Is Skynet next?


Artificial intelligence has been around for decades. The original idea was to build a computer that could solve problems only a human mind could tackle. The earliest attempts were the purview of computer scientists and engineers, who treated AI as a purely technical problem. Early 1970s systems took advantage of improved computer memory to work through long, pre-programmed sequences of if-then rules (e.g., if 1 is added to 6, then the answer is 7). Some early AIs employed thousands of such rules, and the programs could accurately solve math word problems and other reasoning tasks.
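To give a flavor of that rule-based approach, here is a minimal sketch in Python. The rules and the toy word problems are invented for illustration; real systems of the era chained thousands of such hand-written rules, but the underlying idea was the same.

```python
# A minimal sketch of 1970s-style rule-based problem solving: intelligence as
# a long list of hand-written if-then rules. The rules and examples here are
# invented for illustration, not drawn from any particular historical system.

def solve(problem):
    """Apply pre-programmed if-then rules to a tiny arithmetic word problem."""
    words = problem.lower().split()
    # Rule 1: if the problem says "added", then add the two numbers.
    if "added" in words:
        a, b = [int(w) for w in words if w.isdigit()]
        return a + b
    # Rule 2: if the problem says "taken" (as in "taken away from"), subtract.
    if "taken" in words:
        a, b = [int(w) for w in words if w.isdigit()]
        return b - a  # "a taken away from b"
    # No rule matched: the system simply fails, as early AIs did.
    return None

print(solve("If 1 is added to 6 then what is the answer"))        # -> 7
print(solve("If 4 is taken away from 10 then what is the answer"))  # -> 6
```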

Later AI researchers realized that intelligence goes well beyond logic, and began incorporating insights from diverse fields such as neuroscience, sociology, biology, and psychology. The focus of AI research became data retrieval: how does a brain decide what information is relevant?

Unfortunately for Microsoft, that cold efficiency does not carry over to social systems, where intangible information matters as much as relevance. Tay was designed to select relevant information from the vast fields of social media, but was given no guidance about how to judge what she found. In essence, Tay was amoral.
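To make that concrete, here is a deliberately naive sketch of relevance-only selection. It is not Tay's actual architecture, which Microsoft has not published; it only shows what "relevant but amoral" looks like: the bot scores candidate replies by word overlap with the incoming message and repeats the best match, with nothing anywhere asking whether the content is acceptable.

```python
# A deliberately naive sketch of relevance-only reply selection. This is NOT
# Tay's real design; it simply illustrates selection with no moral filter.

def relevance(message, candidate):
    """Score a candidate reply purely by word overlap with the message."""
    m, c = set(message.lower().split()), set(candidate.lower().split())
    return len(m & c) / len(m | c) if m | c else 0.0

def choose_reply(message, learned_phrases):
    """Pick the most 'relevant' phrase the bot has absorbed from users.
    Nothing here asks whether the phrase is true, kind, or appropriate."""
    return max(learned_phrases, key=lambda p: relevance(message, p))

# Whatever users feed the bot becomes a candidate reply, good or vile.
learned = ["I love puppies", "humans are great", "some hateful slogan"]
print(choose_reply("tell me about humans", learned))  # -> "humans are great"
```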

Should we worry about amoral AI? According to Adam Keiper and Adam Schulman, writing in The New Atlantis, yes. We are entering an age in which robots may well be entrusted with life-or-death decisions (think of an armed military drone without the pilot), so robot morality is something we should consider carefully.

Of course, not every concept of morality is the same, and there is the very real possibility that robots could develop their own morality. Keiper and Schulman note that an intelligent robot may well start acting according to its own moral code, a code that might not be recognizable to us. Nor is this an easy technical problem to solve, even with established rules. Tay could be instructed to ignore all posts involving Nazis, for example, but is that truly a moral thought process? We don't just have to decide how to make a machine intelligent. We must decide what intelligence means.
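The article's own example, a rule telling the bot to ignore posts that mention Nazis, can be written in a few lines, which is precisely why it is not moral reasoning. The sketch below is hypothetical, with an invented blocklist: it matches strings, and it has no understanding of why the content is harmful, so hate expressed in words it was never given sails straight through.

```python
# A hypothetical hard-coded filter of the kind described above. The blocklist
# is invented for illustration. This is string matching, not moral reasoning:
# the rule cannot recognize hate expressed in words it was never given.

BLOCKLIST = {"nazi", "nazis"}  # an "established rule" handed down by programmers

def should_ignore(post):
    """Return True if the post mentions a blocked term verbatim."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

print(should_ignore("Nazis were right"))            # True: caught by the rule
print(should_ignore("The Third Reich was right"))   # False: same idea, different words
```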

Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

ASEE Prism, Vol. 7, No. 6 (February 1998), pp. 18-23
American Society for Engineering Education
The New Atlantis, No. 32 (Summer 2011), pp. 80-89
Center for the Study of Technology and Society