
In 1920, the Czech novelist and playwright Karel Čapek wrote the stage play R. U. R. (Rossum’s Universal Robots), in which the Rossum company makes “robots”: synthetic beings who think and feel, barely distinguishable from real people, but designed to serve humanity as slaves. The word “robot” was coined in this play, from Czech roots meaning “forced labor” and “slave.” These artificial beings rebel against their enslavement, wipe out humanity, and, as the play ends, are about to reproduce themselves to create a new race.


R. U. R. achieved global fame after its 1921 premiere in Prague and has been regularly revived since, because the issue it introduced remains unresolved: if we could make synthetic beings, what would be our moral obligations to them, and theirs to us? These questions have become more meaningful since Čapek’s time, when R. U. R. was pure fantasy. Now, thanks to advances in robotics, artificial intelligence (AI), and genetic engineering, we may actually be able to make such beings.

Nearly a century after R. U. R. premiered, the science fiction film Blade Runner 2049 raises the same issue. Directed by Denis Villeneuve, the 2017 film has a plot remarkably similar to that of R. U. R. In Blade Runner 2049, the Wallace Corporation makes “replicants,” synthetic beings who think and feel, barely distinguishable from real people but designed to serve humanity as slaves. They rebel against their subjugation, and as the film progresses we find out that, like the robots in R. U. R., they too can reproduce to create a new race.

R. U. R. itself had antecedents, such as Mary Shelley’s Frankenstein. But the play broke new ground. Set in the year 2000, its robots are human-like androids that provide cheap labor for the world economy. They are made in quantity in a factory that builds livers and brains, nerves and veins from a material that “behaved exactly like living matter [and] didn’t mind being sewn or mixed together.” Their manufacturers treat them like insensate machines, but human activists feel the robots are being exploited and wish to free them. Finally, one especially advanced robot comes to deeply resent this subjugation and leads a violent revolution that eliminates humanity. The secret of making robots has been destroyed, but at the play’s end we see that a male and a female robot, who have learned to love as well as hate, will continue their kind.

Sean Young as Rachael and Harrison Ford as Rick Deckard in Blade Runner (via Wikimedia Commons)

Sound familiar? Blade Runner 2049 of course recalls Ridley Scott’s Blade Runner (1982), a film appreciated at the time by only some critics but now considered a classic. In it, the Tyrell Corporation makes replicants that serve humanity in the difficult work of settling distant planets. To keep the replicants under control, they are given a lifespan of only four years. Rebelling against this, the replicant Roy Batty (Rutger Hauer) and his followers murder a human spaceship crew and illegally return to Earth to have their lives extended. In response, the special agent, or “blade runner,” Rick Deckard (Harrison Ford) is assigned to terminate them, which he does, except for Batty. In a famous scene, Deckard watches Batty expire at the end of his predetermined time, after Batty has shown that his blend of human aspirations and engineered qualities makes him a superior version of humanity.

As Deckard hunts the replicants, he meets and falls in love with Rachael (Sean Young), an advanced-model replicant implanted with childhood memories that are not hers but that make her believe she is human. Both the original film and its later re-edited versions end as Deckard and Rachael go off together to an unknown fate. Left hanging, and still a topic of discussion among fans, is whether Deckard himself is human or a replicant who terminates replicants.

Blade Runner 2049 picks up the story thirty years later. Now replicants are made by the industrialist Niander Wallace (Jared Leto), who thinks their slavery is essential for human civilization but cannot produce enough of them to meet the need. Meanwhile, replicant blade runners like Agent K (Ryan Gosling) of the Los Angeles Police Department hunt down other replicants. The story swings into high gear when Agent K terminates a deviant replicant, then finds a skeleton buried nearby that shows signs of an emergency Caesarean operation to deliver a baby. But a serial number on the mother’s skeleton shows that it belonged to a female replicant, not a human woman.

This is a shattering discovery, since replicants supposedly cannot reproduce. If they can, that will upset human society as they become a race “more human than human.” Ordered to destroy all evidence of the birth and track down the resulting child, K discovers that the skeleton is Rachael’s. When he finally tracks down Rachael’s ex-lover Deckard (Harrison Ford again), Deckard confirms that Rachael became pregnant. He had protected her by leaving her with a group of rebel replicants and has never seen the child. The rebels call the birth a “miracle” that confirms their humanity and reinforces their fight for freedom and full rights.

Meanwhile, Niander Wallace also seeks the child and the secret of replicant reproduction so he can build a self-perpetuating race of slaves. His agents attack and wound K and kidnap Deckard, but K rescues him. Eventually, K realizes that an expert on implanted memories he had consulted earlier is Deckard’s missing child grown to adulthood. The film ends as K brings Deckard to meet his daughter for the first time.

R. U. R. and the Blade Runner films present certain assumptions about the nature of synthetic beings, granting them consciousness of self as expressed in their rebellions and will to live. Science and philosophy, however, have long wrestled with the meaning and survival value even of our own consciousness. This makes it hard to determine whether and how consciousness might be manifested in manufactured beings. After careful consideration, the American philosopher Hilary Putnam could not resolve the issue, concluding that “there is no correct answer to the question: Is Oscar [a robot] conscious?” And if manufactured beings lack an inner life, how could they be “exploited,” in the sense of being cynically ill-used? Or in the terms of Marxist theory, as a working class whose productivity benefits only their bosses?

Nevertheless, the synthetic beings in R. U. R. and the Blade Runner films are assumed to be self-aware, and they are hardly the first appearance of autonomous entities built to serve humanity. In the fourth century B.C.E., Aristotle saw that self-willed machinery could lessen human labor and reduce the need for servants and slaves. In a famous comment, he wrote:

If every instrument could accomplish its own work, obeying or anticipating the will of others… if, in like manner, the shuttle would weave and the plectrum [pick] touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.

This is an implicit argument that autonomous machines, if they were truly machine-like, could eliminate the ethical stain of slaveholding. But if manufactured slaves were to resemble and behave like people, we would do something quite human: projecting our own sensibilities onto them, as we do when we attach human qualities to animals and inanimate things, we might come to empathize with them. And if these beings possessed intelligence, consciousness, and free will, they might well rebel against their human masters. In either case, we would need to define the nature of our mutual relationship, which involves knowing whether the beings themselves are moral.

The science fiction writer Isaac Asimov explored some of these possibilities in his classic short story collection I, Robot (1950), which pioneered an ethical code for robots relative to humans, the Three Laws of Robotics:

(1) a robot may not injure a human or, through inaction, allow a human to come to harm;

(2) a robot must obey all orders from a human except those that conflict with the First Law;

(3) a robot must protect its own existence except when that would conflict with the First or Second Laws.

This neatly defined structure appears in several clever stories by Asimov, but it is only a one-way commitment: it protects humans from robot violence, and in the Second Law it puts robots into a slave-like position. There are no countervailing “Three Laws of Humanity” that specify what humans owe robots.
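The one-way, strictly ranked character of the Laws is easy to see if they are written out as a decision procedure. The following Python sketch is purely illustrative: the Candidate class, its fields, and the selection rule are hypothetical inventions for this article, not anything Asimov formalized or any real robotics system implements.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical model of one candidate action a robot might take.
@dataclass
class Candidate:
    description: str
    harms_human: bool      # acting would injure a person (First Law)
    obeys_order: bool      # consistent with human orders (Second Law)
    preserves_self: bool   # the robot survives the action (Third Law)

def choose_action(candidates: List[Candidate]) -> Optional[Candidate]:
    """Pick an action under a lexicographic reading of the Three Laws:
    the First Law is an absolute filter, the Second outranks the Third."""
    # First Law: any action that harms a human is ruled out entirely.
    lawful = [c for c in candidates if not c.harms_human]
    if not lawful:
        return None  # no permissible action exists
    # Second Law before Third: obeying orders outranks self-preservation.
    # Tuples of booleans sort so (True, True) beats (True, False), etc.
    return max(lawful, key=lambda c: (c.obeys_order, c.preserves_self))

if __name__ == "__main__":
    options = [
        Candidate("strike the attacker", harms_human=True,
                  obeys_order=True, preserves_self=True),
        Candidate("shield a bystander and be destroyed", harms_human=False,
                  obeys_order=False, preserves_self=False),
        Candidate("follow the order to stand down", harms_human=False,
                  obeys_order=True, preserves_self=True),
    ]
    best = choose_action(options)
    print(best.description if best else "no lawful action")
```

What the sketch makes plain is the asymmetry described above: the robot’s own survival enters the ranking last, after every human order, and nothing in the structure assigns humans any duty in return.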

Another difficulty is that rigid rules cannot cover all the subtleties of moral decisions. One form of this problem was raised in 2004 at the First International Symposium on Roboethics in San Remo, Italy, and is now discussed more urgently: how can autonomous military weapons be given the ability to distinguish friend from foe, civilian from soldier, or an apparent weapon from a real one in deciding when and whom to kill? These choices are far more complex than simply following the First Law’s blanket rule against injuring a human.

Whether or not a machine can be moral, we humans like to think we are moral beings. Can and should that moral sense be extended to synthetic beings? There are precedents. Religious or personal convictions have led many people to feel a moral debt to animal species, regardless of their lack of intelligence, moral sense, or, as far as we know, an internal self-consciousness as complex as our own. We do not wish to cause them pain or cut their lives short, all the more so when they are helpless and unable to defend themselves.

It is still too early to think that robots need an activist group called People for the Ethical Treatment of Robots. There are as yet no synthetic beings remotely as human-like and capable as the Blade Runner replicants to inspire protective support or sympathy for their slave status. But society is beginning to take notice of this new class of beings.

The European Union is contemplating setting up oversight of robots and AIs and granting such autonomous systems a form of legal personhood, similar to corporate personhood. This would provide a legal framework for allocating rights and responsibilities to the systems and their makers, a starting point for moral judgments. And in October 2017, a life-size feminine robot called Sophia addressed the United Nations. Able to speak and change facial expressions, but clearly still more machine than person, Sophia answered questions and displayed manual dexterity. Sophia was also granted citizenship by the Kingdom of Saudi Arabia at a technology conference held there (a regime whose poor record on human and women’s rights does not make this a meaningful upgrade, even for a synthetic being).

This juxtaposition of robot and human rights, however, highlights an important possibility: the creation of new kinds of beings, not factory-built robots and AIs but genetically engineered variants on the existing human model. This could produce “superior” beings who consider themselves the leaders of our race, or “inferior” versions created to serve the rest of humanity. In either case, the obligations between the old humans and the new would be no different from those we owe each other now.

What do we owe each other as human beings? Until we can answer that question for ourselves, we cannot answer it for our creations.

Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Science Fiction Studies, Vol. 41, No. 3 (November 2014), pp. 688-689
SF-TH Inc
Film Quarterly, Vol. 36, No. 2 (Winter, 1982-1983), pp. 33-38
University of California Press
The Journal of Philosophy, Vol. 61, No. 21, American Philosophical Association Eastern Division Sixty-First Annual Meeting (Nov. 12, 1964), pp. 668-691
Journal of Philosophy, Inc.