Welcome to Ask a Professor, our series that offers an insider’s view of life in academia. This month we interviewed Joshua May, Professor of Philosophy and Psychology at The University of Alabama at Birmingham. With training in philosophy, the social sciences, and behavioral science, May uses scientific research to examine moral controversies, ethics in science (and life), and the mechanics and philosophies of social change. His most recent book, Neuroethics: Agency in the Age of Brain Science (Oxford University Press, 2023), delves into issues of free will, addiction, and patient autonomy to demonstrate that human agency and mental health are diverse and flexible.
How do moral, social, and political values influence the sciences? The social sciences? How can we become more virtuous in an era of AI, political polarization, and factory farming? These are just a few questions behind Joshua May’s wide-ranging body of research and teaching. In his own words, his work sits at “the intersection of ethics and science,” fed by a desire to understand moral controversies and social change—and the relationship between those things. He encourages us to resist false dichotomies and black-and-white thinking, looking instead for a third, fourth, or even fifth approach to a moral issue (see his discussion of factory farming below for an example). He’s considered the influence of emotions on moral judgment, the emotions provoked by bioethical issues such as human cloning, and the roles of empathy and ego in altruistic behavior. His longstanding interest in free will led to the 2022 co-edited volume Agency in Mental Disorder, which brings philosophical reasoning about limits and culpability to bear on addiction, mental illness, and psychotherapy.
May is also a “public philosopher,” an active contributor to popular debates on neurodiversity, veganism, and politics.
What’s something most people don’t know about your field?
People often think philosophy is mostly just about rehashing what the Ancient Greeks believed or that it’s otherwise disconnected from modern life. Yet contemporary philosophers are analyzing and commenting on some of the most pressing issues facing humanity today: Does neuroscience show that free will is an illusion? When does a human fetus gain rights? Could machines ever be conscious? What does it mean to be human in the age of AI? How should we treat the natural world, including the animals we eat? These include questions not just of ethics but also of the nature of the mind and of scientific practice. So, I’d say philosophy is far from anachronistic; it’s a live discipline that’s essential for tackling twenty-first-century problems.
What’s the best discovery you’ve made in your research?
False dichotomies are everywhere in ethics. Debates about factory farming focus on whether people should strictly omit all animal products from their diet (to go vegan or at least vegetarian) or just eat whatever they want. But I’ve argued, with my collaborator Victor Kumar, that there’s a distinct reducetarian path: most people should imperfectly reduce their consumption of animal products. The appropriate level of reduction all depends on the person and their circumstances. Similarly, does neuroscience show that we have free will or that it’s just an illusion? I think a careful look at the evidence suggests a third option: we have free will, but less than is commonly presumed. When it comes to neurological differences, like autism and ADHD, the false choice is between viewing them as either deficits or mere differences. But they can be one or the other (or both), depending on the person and their circumstances. The same goes for addiction: Is it a brain disease or a moral failing? I’ve argued for a neglected third route: it’s a disorder that nevertheless involves varying levels of control depending on the individual. Throughout moral and political debates, false dichotomies seem to dominate, but in my view, nuance should be the norm.
Do you have a favorite classroom moment?
I’m a dad, so it only seemed natural to start a tradition of ending each week of class with a dad joke. Some of my favorite teaching moments are when students offer up their own jokes or word play. Recently for a Jeopardy-style review session in my Ethics of AI class, a group of students deployed the power of pun to name themselves “ChatGPTeam.” Once in a seminar on moral progress, we got on the topic of sexism, and I said, “Y’all know what mansplaining is, right?” Without missing a beat, one of the students joked, “No, explain it to us.” I nearly fell out of my chair in laughter! Of course, too much fun can derail discussion. It can be tricky maintaining the right balance of humor and command of the room. But striking that balance helps students feel comfortable enough to share their thoughts, even ones that challenge me. Recently I taught an article of mine about reducetarianism, and I was pleased that some students articulated concerns about how minor dietary change isn’t enough to move us away from factory farming. These are my favorite teaching moments because they connect me to my students as real people, not mere vessels for imparting knowledge.
What’s the next big thing in your field?
At the risk of being trendy… I’m going to say it’s AI. A year ago, I was dubious. Artificial intelligence has consistently been overhyped by entrepreneurs who want to sell stock or by ethicists who cry wolf about AI’s existential threat to humanity. But this time it looks like there might be a real wolf. There’s a risk of alarmism, to be sure, but AI is going to influence every aspect of modern human life—medicine, law, education, politics, dating, you name it. I’m finishing up teaching this new Ethics of AI class, and it’s clear there are some serious issues about safety, privacy, fairness, and increasing isolation from real human connections. AI might not destroy or enslave us, but we should all take seriously how it will transform employment, relationships, and democracy. I no longer believe it’s an overhyped flash in the pan, and scholars are already taking it seriously. There will be shoddy work out there that is too alarmist or too dismissive, but I’ll be paying attention to the best analyses of how we should grapple with the age of AI. I recently finished Moral AI: And How We Get There, co-written by a neuroscientist, philosopher, and computer scientist. The authors provide a nicely balanced view of AI’s prospects and pitfalls.
What’s on your bedside table? What’s your next read?
I can’t get enough of the fascinating moral dilemmas and mysteries of real life, so my “leisure” reading tends to be more nonfiction. Next up is a choice between two books on rather different topics. One is Hungry Beautiful Animals by Matt Halteman, which provides a “joyful case for going vegan” rather than a grievance-driven call to tolerate a plant-based diet. The other is Ezra Klein’s new book with Derek Thompson, Abundance. They argue that for too long progressives have supported politicians and institutions that pass laws and write checks for good causes without caring enough about whether anything good gets done. The book is all about excessive bureaucracy, public distrust in government, and how a progressive response to the current political moment requires a politics of abundance—in job opportunities, housing, education, and more—rather than scarcity, waste, red tape, and division. I suppose both books focus on the positive; I can’t wait to dig into either of them.