Most people would say I live in a four-person household: my husband, my daughter, my son and me. But it feels more and more like I’m living in an eight-person family: the four of us humans, plus Alexa, Siris 1 and 2, and Tivodrome.
Alexa is our Echo, the voice-controlled computer from Amazon that can handle a range of basic information, content, and shopping tasks. She sits in the living room, where she most often serves to take note of our grocery needs. (“Alexa, please add Greek yogurt to our shopping list.”) Sometimes, the kids invoke her for their own amusement: “Alexa, open the Magic Door” will start a voice-controlled text adventure game; “Alexa, ask the Angry Bard for a burn” will produce a Shakespearean insult. But Alexa also operates as a third parent, and one that does not negotiate: every video gaming session in our house begins with “Alexa, set a timer for 45 minutes,” “Alexa, set a timer for 40 minutes,” and “Alexa, set a timer for 43 minutes.” When Alexa’s five-minute, two-minute and end-of-game-time alerts have all sounded, the kids are far less likely to whine for additional game time — perhaps because the Amazon Echo doesn’t recognize “Can I have five more minutes?” as a command.
Siri 1 lives in my iPhone, and Siri 2 lives in my husband’s. They may have started out with similar personalities, but by now, they’ve firmly diverged — and not just because my son altered my Siri to sound like an Australian man. My husband’s Siri is a know-it-all best friend: she helps him find nearby stores, identify that familiar song playing in the restaurant or answer weird questions like “What’s the formula for the volume of a cylinder?” My Siri is more like a nagging parent: because I constantly dictate reminders into my phone, Siri pops up throughout the day, telling me to respond to a neglected email, pick up the dry cleaning or finish an overdue report.
Tivodrome is the artificial family member who’s closest to being a legal person — though if you’re going to be literal about it, she’s actually a Mac Mini hooked up to our living room TV. She has her own email address, her own credit card and her own mailing address. Her purpose is to ensure that even though we live in Canada, we have access to all the content and products we could get if we lived in the United States. She uses a proxy connection to look like she’s in California, so we can watch videos on Amazon; her US mailing address and credit card allow us to order stuff that can’t be shipped to Canada.
Alexa, Siri, Tivodrome: if our digital household sounds a little crowded, consider how many other families are acquiring virtual members of their own. In the years ahead, the family Echo, Roomba and iPhone will be joined by the self-driving car, the robotic caretaker and the delivery drone. As we speak to them, name them and even (as in Tivodrome’s case) get them their own credit cards, we have to wrestle with the question of how naming and talking to our devices changes our relationship to technology itself.
And of course, how it changes us. Sherry Turkle has notably argued that these nascent human-robot relationships pose a profound peril to our social skills and tolerance for complexity. In Alone Together: Why We Expect More from Technology and Less from Each Other, Turkle worries that “[r]elationships with robots, which do not demand or cultivate real empathy, threaten to degrade genuine relationships with real people.”
It’s an argument that makes intuitive sense — but only if you have access to the kind of robotic companions that do most of the work of appearing human. So far, as in our home, it takes both willpower and imagination to relate to our devices as even semi-personified. Siri isn’t even credible as a bluff Australian, and Amazon’s Echo is far too likely to resort to “I can’t find an answer to the question I heard.”
Why are we so determined to name and humanize our devices when they’re still so limited? There are a couple of major drivers. One is the drive for connection: “Lacking social connection with other humans may lead people to seek connections with other agents and, in so doing, create humanlike agents of social support.” The other is to cope with uncertainty: “Given the overwhelming number of biological, technological, and supernatural agents that people encounter on a daily basis, one way to attain some understanding of these often-incomprehensible agents is to use a very familiar concept (that of the self or other humans) to make these agents and events more comprehensible.”
That argument isn’t so far away from Turkle’s, insofar as it sees our humanized devices as a problematic solution to our needs for connection and predictability. Denis Vidal paints a far more complicated picture in his fascinating article on what roboticists can learn from Hindu worship of anthropomorphic idols in the Himalayas. Vidal writes that
one of the most significant lessons that one can draw from observing the interaction between worshippers and their divinities is that the attribution of a specific identity and particular character to a deity is not necessarily dependent on its displaying a strong coherence of behaviour. On the contrary, most divinities behave with their worshippers with versatility and unpredictability. But the consequence of this is to deepen rather than obfuscate the intensity of the relationship, the logic of which is based on the human acknowledgement of and will to manage this unpredictability in a way that makes sense to all involved.
This analysis offers us hope that our evolving relationships with machines may deepen our humanity. Rather than seeing the anthropomorphizing of technology as a sign that we’re taking the easy way out of a difficult, human world, Vidal recognizes the hard work that goes into projecting an identity onto a limited device.
Far from simplifying our relationship to technology, naming and humanizing our devices complicates it, plunging us into a world of ambiguity. Are these devices people, or even quasi-people? Do they think, or can we relate to them as if they do? Relating humanely to our devices isn’t easier than relating to humans; it’s harder. For what is more demanding than trying to make a knowable, meaningful entity out of something that is not (yet) either?
Yet for all the energy we put into sustaining the illusion that our devices are quasi-human, we’re still quick to note the many ways in which they fall far short of anything like sentience. That reflects not only our ambivalence about humanizing machines, but our ambivalence about ourselves. As Keiper and Schulman put it:
We want AIs both because we deem ourselves worthy of delights and riches and because we believe we are too terrible to reliably achieve them on our own. We want them because we want both rulers and slaves; because we already consider ourselves to be both rulers and slaves, and deserving of treatment as such.
And, I would add, we want to become AIs ourselves. For all the work our family does to humanize the devices in our household, we are as eager to become computer-like ourselves as we are to have our computers become more human.
I long to have my kids heed my end-of-screen-time warnings as carefully as they heed the Echo’s; I rue my inability to remember the family’s grocery needs without Alexa’s help. My husband sounds a note of satisfaction whenever he can produce a random factoid without Siri’s help, as if he’s competing with her for the title of Most Knowledgeable Family Member. Even the kids want in on the action: if I ask either child to remind me of something, they now emulate Siri by responding in a robot-like voice, “Creating reminder.”
It’s this effort that gives me hope that when truly sentient machines emerge, we may be ready for the challenge. Because we’ll not only have prepared ourselves to greet and relate to machines as people: we’ll have discovered the parts of ourselves that can understand, emulate and even love a machine.