The Whole Range of Human Possibilities: U-M professor Webb Keane inspects how humanity and morality intersect with “Animals, Robots, Gods”


Animals, Robots, Gods book on the left; author portrait on the right.

To whom or what do we owe ethical consideration? What circumstances call for morality?

In his new book, Animals, Robots, Gods: Adventures in the Moral Imagination, University of Michigan professor Webb Keane argues that the answers to these questions are inextricably linked to our personal contexts.

People don’t live moral lives in the abstract; they live them within specific circumstances and social relations, with certain capacities, constraints and long-term consequences. Put another way, you simply cannot live out the values of a Carmelite nun without a monastic system, or of a Mongolian warrior without a cavalry, and the respective social, economic and cultural systems that sustain them and acknowledge their worth.

We are who we are—and we make decisions—based on the situations in which we find ourselves, according to Keane.

Animals, Robots, Gods contains five chapters along with an introduction and coda. In the introduction, Keane shares that one of the premises of the book is the question, “What is a human being anyway?” and says that “we will explore the range of ethical possibilities and challenges that take place at the edge of the human.” As he shows, the delineation is not always so clear.

Keane brings in a range of examples to examine morality, including the classic trolley problem, people on life support, organ transplants, animal hunters, robots, artificial intelligence, and shamans. An individual’s relationship to a situation and to another person can change their calculations of what is moral. From the first-person perspective, “you might find yourself compelled by the very specific emotional bonds and social identities,” but “the more distanced third-person stance asks you to see things in more abstract or principled terms.” This difference influences choices ranging from when to turn off the machine that is keeping a relative alive to whether to slaughter animals. For example:

…Marapu people were losing ground to Protestant conversion. One of the hottest topics of polemic was animal sacrifice. Christians liked to accuse the Marapu people of wasting animals in vain. For their part, Marapu people would point out that they only kill animals for ritual purposes, and never without a prayer of offering, to direct the sacrifice to its goal. By contrast, they would say, ‘Christians are greedy, slaughtering animals whenever they feel like it, just so they can eat meat. We only do it out of duty.’

Which group is right or wrong? Is anyone right or wrong in that situation?

The book concludes by applying these concepts to computers and artificial intelligence. People, in some ways, merge with machines; Keane recounts the researcher Sherry Turkle’s finding that “the users she interviewed tended to project what she called a ‘second self’ onto their computers, seeing them as part of themselves.” Such an interaction once again blurs the line between what is and is not human. Another confounding element arises “because we bring so many prior expectations and habits of interpretation into our encounter with computers, we are well prepared to make meaning with what the computer gives us.” Making meaning then becomes exactly what makes humans necessary even as AI grows more powerful:

…since AI has no physical, social or emotional experience, the semantic space it works with is only words. It has no reference to anything outside the corpus of texts. It takes human interpretation to make the connection between words and the world as they know it….

For all its newness, AI still raises moral considerations and bears similarities to existing phenomena.

Keane will discuss his book and be joined in conversation by fellow U-M professor Elizabeth Anderson on Thursday, February 20, at 6:30 pm at Literati Bookstore. Prior to his event, I interviewed him about Animals, Robots, Gods.

Q: You have conducted research around the world. What keeps you studying and teaching anthropology at the University of Michigan?
A: I am lucky enough to be in one of the most interesting and supportive anthropology departments in the country, with wonderful, feisty students who do their best to keep me from growing lazy and complacent.

Q: Your new book is about “the ethical dilemmas posed by interactions with non-humans and near-humans, including animals and AI,” according to your webpage. Why are the topics of morality and humanity important right now?
A: Much of the debate around AI has centered on harms that are easy to see and grasp—the proliferation of hate speech, bias, and misinformation, and the threat to jobs. But things that are easy to see are also more readily countered. What about potential risks that are harder to see and possibly slower in emerging? I have been struck both by the moral panic that new technology can inspire—for instance, that AI may become our overlord—and by a great deal of utopian nonsense. Why moral? Because things that exert pressure on the boundaries between human and non-human may affect our own self-understanding and ethical intuitions. They can change who we think we are and therefore how we treat others. And how we treat others depends on how far we extend our moral circle—but in doing so, we too may change.

Q: Relatedly, why was it important for you to write about morality and humanity?
A: As an anthropologist, I have been frustrated that almost all the conversation around new technology, medical ethics, and animal rights, both in the general public and among specialists, is unselfconsciously ethnocentric. It takes for granted the values and worldview of people in the world that has been called “WEIRD”—Western, educated, industrialized, rich, and democratic—but claims to speak of human universals. Well, most of humanity today is not WEIRD, and until recently, no one was. So one of the jobs of the anthropologist is to broaden our scope of vision and learn from the whole range of human possibilities.

Q: Animals, Robots, Gods reads like a long essay but with chapters. What was your approach to writing this book? How did you go about organizing it?
A: I was approached by an editor at Penguin Books U.K. who asked if I would consider writing for a wider audience. I had never really tried to do so, apart from an op-ed piece and an essay or two, and thought it would be fun to try. I had several sets of examples that seemed to go together, though I jettisoned some that didn’t quite work (one on corporations) or just made the book too long (the environment). I’ve taught undergraduates for over 30 years, so I tried to write in the voice I use in the classroom.

Q: Your research includes fieldwork, such as on Sumba, an island in Indonesia. How did you decide to make the leap to also study machines? How has considering robots and artificial intelligence been different from or similar to researching people and cultures?
A: First, I should make clear that this book is mostly based on other people’s research, with a few bits from my own fieldwork. Since one of my purposes is to explore the range of possible ways of looking at things, I had to draw from lots of different sources. As for writing about new technology, there are several challenges. One is simply that you have to be careful about your own limits and know what lies beyond your expertise. I’ve learned a lot about AI, but it’s a hugely technical field, and my focus has to be on how people respond to it rather than what’s going on under the hood. Second, there hasn’t been a lot of ethnographic research on AI yet, so unlike the chapters on people in comas and hunters and their prey, or even the one on robots, many of my sources come from written work by AI researchers or journalistic reporting—and that’s mostly from the U.S. Third, it’s so fast-changing. I tried hard to find aspects that were not super time-sensitive, and I think I mostly succeeded, but already some of what you could say in 2023 is out of date.

Q: One of the themes of Animals, Robots, Gods is how morality is not separate from its context, except in thought experiments or when viewed from the third-person perspective. For example, you write, “Morality is about social relationships: you might think this is pretty obvious. But in concrete situations, those relationships can be hard to recognize. This brings us back to self-driving cars. Who, or what, can you have a social relationship with? Who or what can be a moral agent?” You apply this contextual and relational perspective to both living beings and machines. Have you encountered any resistance to applying morality to machines? Why do you think that is?
A: Most responses I’ve had are intrigued rather than resistant. But in the book, I discuss one philosopher who is appalled that Japanese owners of robot pet dogs sponsor death rites for them when they break down. He says they are deluded. Anthropologists who’ve studied this find the story is more complex. After all, you can cry at a movie without being deceived about reality.

Q: I keep thinking about your example of the U.S. Army colonel who was managing a robot that was detonating explosives in a minefield, stopped the process when the robot had lost all but one limb, and said it was “inhumane.” From the outside looking in, the story seems surprising. Yet, you also write, “The more that AI-driven systems take charge of making decisions about people’s lives, in hiring, policing, finance, medical care and so forth, the more real becomes the problem of making machines moral, a problem that had once been speculative.” How might readers apply your findings to their own life? Is the goal to encourage people to be kinder to other creatures and machines or more skeptical?
A: Kindness isn’t always the key in every situation, but self-awareness is essential. When it comes to robots and AI, it’s crucial to keep our perspective and sort out what’s really new from what’s not—and above all, to be clear about our role in the results, and not grant our devices more agency than they have.

Q: What are you reading and recommending these days?
A: I’m always reading novels as well as trying to keep up with the best new fieldwork-based ethnographies. And I have just started The Line by James Boyle. He is making arguments quite similar to mine but from the perspective of legal theory.

Q: This is your sixth book. What are you working on next?
A: I’m writing about how AI is changing the way we think about language, and, like many people, I’m concerned about how new technology is binding us to the interests of a small number of corporations. Alongside this, I continue research in Indonesia.


Martha Stuit is a former reporter and current librarian. 


Webb Keane will discuss Animals, Robots, Gods: Adventures in the Moral Imagination and be joined in conversation by fellow U-M professor Elizabeth Anderson on Thursday, February 20, at 6:30 pm at Literati Bookstore, 124 East Washington Street, Ann Arbor.