
As part of our Conversations on Intellectual Humility series, JSTOR Daily Editor-in-Chief Catherine Halley talks to cognitive scientist Heng Li about the emerging relationship between artificial intelligence, technology, and intellectual humility. Li believes intellectual humility will become an increasingly important component of AI research, pushing the field to promote an informed and critically balanced engagement with AI. AI tools such as ChatGPT aren’t a replacement for human intelligence but a complement; intellectual humility can help us navigate the balance between trust and critical engagement with them. We should approach AI as a complement to our own reasoning that requires our engagement, discernment, and oversight. Intellectual humility also invites a development approach that respects and incorporates diverse perspectives, thus ensuring that AI tools are inclusive and considerate of the full tapestry of human experience. By fostering an AI that understands and respects its bounds, we can create a future that leverages the best of what both humans and machines have to offer.

Transcript

Sara Ivry: Hey, everyone. Welcome. I’m Sara Ivry, features editor at JSTOR Daily. We’ve got a special podcast series up right now about intellectual humility and how it might work in different contexts. Broadly defined, intellectual humility means an openness to being wrong. In this episode, JSTOR Daily’s Editor-in-Chief, Cathy Halley speaks with Heng Li. Heng Li is a cognitive scientist at Sichuan International Studies University in China. They talk about the value of intellectual humility in our relationship to artificial intelligence and ChatGPT. It’s a wonderful conversation, and I hope you enjoy it.

Cathy Halley: I just wanted to give you an opportunity to introduce yourself and tell us: what is your background and how did you start studying AI and intellectual humility?

Heng Li: Thank you for the invitation. I obtained my PhD in the UK. After graduating, I returned to China to work at Sichuan International Studies University as a cognitive and social psychologist. My expertise centers on personality and individual differences, particularly with a focus on cognitive processing. I have a profound commitment to principles of diversity, equality, and inclusion, which has directed my attention towards underrepresented groups in psychological research. So, with the rapid advancement of AI, I recognized an opportunity to contribute meaningful insights into how AI is perceived and used across different cultures. Specifically, I wanted to understand the dynamics in China, as the existing literature, much of it focused on WEIRD populations—those from Western, educated, industrialized, rich, and democratic societies—left a significant gap. So, this curiosity led to the conception of my research initiative, CSI, which stands for ChatGPT, Society, and Interaction. My aim was not just to map the personality antecedents of AI use among Chinese people, but also to tap into the complex relationship between technology, culture, and individual differences.

I first encountered the concept of intellectual humility within the field of positive psychology, and I discovered its wide range of applications across educational, social, and religious studies domains. My own research contributed to this discourse with a publication in the journal Social Science and Medicine, where I explored how intellectual humility can diminish the “natural-is-better” bias. This term describes the preference to view natural items as inherently superior to their artificial counterparts. The main idea of my paper was that individuals with a higher degree of intellectual humility might recognize the limits of their knowledge; consequently, they are more open to the idea that natural products don’t always hold an upper hand over artificial ones. So, with the rising prominence of ChatGPT globally, opinions on this emerging technology appear to be polarized. These observations spurred me to probe further into how intellectual humility might shape one’s receptiveness toward ChatGPT. In my research, I combined conventional psychological methodologies with innovative data science techniques to parse out the nuanced patterns in AI acceptance.

So, the published paper you’ve read is part of a series of investigations looking at how variables like trust, fear of technology, social influence, and cognitive openness contribute to the acceptance or resistance of AI tools such as ChatGPT. I believe understanding these factors is crucial not only for academic purposes but also for the developers and policy-makers who aim to make AI technologies more accessible and user-friendly across diverse cultures. Moving forward, I intend to extend the work to other non-WEIRD populations and to explore interventions that can promote informed and critically balanced engagement with AI. My goal is to foster a dialogue between technology and psychology that respects and incorporates diverse perspectives, thus ensuring AI development is inclusive and considerate of the full tapestry of human experience. The results of this ongoing research, I believe, are slated for publication soon, and I anticipate that they will offer a more complete understanding of the multi-faceted reasons people either accept or reject emerging technologies like ChatGPT.

Halley: Oh, that’s fascinating. I hadn’t heard that expression—WEIRD—before. But that’s very interesting, and I now understand the end goal of the work that you’re doing. You know, one thing I’m thinking about is that you’ve written about the limitations of AI. One particular thing I think of when I think about those limitations is the hallucinations that happen, for instance, when AI tells me something that’s not true. I think this happened in ChatGPT 3: I asked it to tell me about JSTOR Daily, the publication that I edit, and it told me that we had won a Webby Award. Now, I wish we had won a Webby Award, but we didn’t. And it was so confident in what it had to say. So I asked it several times (I still have the chat), “Are you sure we won a Webby Award?” And it invented a URL and said, “Yes, it says that on the Webby Awards page at this URL.” You follow the URL, and there is no page there. It’s a 404 error, of course, and I found that very fascinating. So I’m wondering: should humans be intellectually humble vis-à-vis ChatGPT, given these limitations?

Li: Yes, indeed. I think that the tendency of generative AI to hallucinate is a significant limitation and one that I have discussed at length in my work. This characteristic highlights the need for intellectual humility when engaging with tools like ChatGPT. Being intellectually humble vis-à-vis AI means recognizing and being critically aware of these limitations while also being open to the benefits that AI offers. Intellectual humility urges us to question information critically, regardless of the source, and that extends to AI-generated content. It prompts us to verify facts, cross-reference information, and maintain a healthy skepticism, especially knowing that AI systems can sometimes produce confident yet erroneous outputs. So accepting and being open to ChatGPT is not without danger, if done uncritically. The hallucination problem serves as a reminder of why human oversight is essential; it’s crucial to remember that AI is a tool. Yeah, a very powerful one that can amplify our capacities but also our missteps if we rely on it too complacently. So rather than seeing ChatGPT as a replacement for human intelligence, we should approach it as a complement that requires our engagement, discernment, and oversight. Doing so, I think, allows us to harness the strengths of AI, such as handling large datasets and producing rapid insights, while also guarding against its weaknesses. Intellectual humility, in this context, becomes our mediator, I believe. It can help us navigate the balance between trust and critical engagement with the tool.

Halley: Have you seen in your work improvements in some of the systems? I’m wondering, I’m thinking particularly about the differences between ChatGPT 3 and ChatGPT 4 because I’m familiar with them. But in general, have you seen these kinds of improvements taking place within the AI systems?

Li: Yes, I think so. Improving generative AI to make it more intellectually humble is certainly a goal worth pursuing, especially as we see these systems become more integrated into our lives. The transition we observed from ChatGPT 3 to ChatGPT 4, I think, reflects an intentional design evolution aimed at addressing earlier limitations, including the issue of hallucinations that we have just discussed, or confidently presented misinformation. Yes.

Halley: That’s very interesting. I was thinking about some of your work and some of the other pieces that I’ve read, and I know a lot of people talk about intellectual humility as a human trait. So I think I heard you refer to the system itself being intellectually humble. Am I right about that? Do you feel like the system can be intellectually humble?

Li: I think building AI with intellectual humility is not only advisable but, arguably, necessary for several reasons. First, it aligns the development of AI with key ethical principles such as transparency, accountability, and trustworthiness. By designing AI systems that can recognize and communicate their limitations, we foster greater trust between humans and AI. This intellectual humility in AI can prompt users to critically evaluate AI outputs, leading to more informed decision-making rather than blind reliance on AI-generated content. I also think the intellectual humility built into AI can serve as a foundation for continuous learning and adaptation; it promotes a design that is inherently oriented towards self-improvement. For example, an AI aware of its limitations may be programmed to seek out additional data or human input when it encounters questions or situations beyond its knowledge base, or when its confidence in an answer is low. Furthermore, I think intellectually humble AI can directly address biases. If an AI is designed to question its data, recognize potential biases, and adjust accordingly, I think it can mitigate the propagation of those biases in its responses by incorporating safeguards such as auditing algorithms for bias, enhancing the diversity of training data, and enabling feedback mechanisms for users to report and correct errors or biases they encounter. In a world increasingly reliant on AI, embedding intellectual humility can ensure that systems remain adaptable and responsibly augment human capacities instead of overstepping or causing unintended harm. So, I think AI with intellectual humility would be designed as a dynamic partner to human intellect, constantly evolving its understanding and helping to create a more equitable digital ecosystem.

Halley: Yes, that makes a lot of sense. The way you talk about it sounds so much more optimistic than most of the people I see, who are worried that AI is going to take their jobs, for instance. It’s always set up as an opposition between the humans and the machines, and I hear you talking about a cooperation between humans and machines. So many dystopian narratives, in films for instance, show the computers taking over.

Li: Yeah, yeah. Yeah, absolutely.

Halley: Do you feel like we’ll get past that? And with your experience interviewing people, do you imagine us getting past the point where we’re afraid of the machines taking over? Do you see us getting past that dichotomy?

Li: In my opinion, casting the future as a dichotomy between humans and machines is a compelling narrative device, but it does not reflect the more complex and hopeful reality of our relationship with technology. I propose that a more accurate narrative is that we should see humans and AI not as enemies, but as collaborators; both bring unique strengths to our shared challenges. This is not about ceding control to machines. It’s about partnership. AI can process data at an unimaginable scale and speed, which can complement human creativity, empathy, and ethical judgment. So instead of a zero-sum game, where one’s gain is the other’s loss, we should aim for a future where the symbiosis of human and artificial intelligence amplifies our collective potential. I think intellectual humility plays a critical role in this. It’s about recognizing that despite the power and the potential of AI, it remains a tool created by and for humans. It’s about understanding that AI should be designed with an awareness of its limitations and should be continually learning from and adapting to human feedback. This humility in AI design also means acknowledging human values and the diversity of human experience as central to the development of technology. For AI to be a partner rather than a ruler, it needs to respect the primacy of human decision-making in the areas that most affect our lives. When AI systems are imbued with intellectual humility, I think they can become better at recognizing when to defer to human judgment, such as in moral or complex emotional contexts. So my general idea is that by fostering an AI that understands and respects its bounds, we can create a future that leverages the best of what both humans and machines have to offer. In doing so, we are not just avoiding dystopian outcomes; we are actively building towards a shared and prosperous human-machine future.

Halley: Yes, I think your work is so important because it’s thinking about how we introduce cognitive differences, in some ways, into the system and make sure that it recognizes these and accommodates itself to understand and recognize what strengths different people have. And the diversity of input helps create a diversity of outputs. But I’m wondering if you’re seeing people start to trust the AI.

Li: Okay. So I think trust in any relationship, whether with humans or AI, is built on a foundation of reliability, understanding, and the capacity to evolve from mistakes. With AI, trust emerges from knowing that it is designed with robustness and accountability in mind. First, we must ensure that AI systems are reliable; that is, they perform consistently under a variety of conditions and can handle the tasks for which they are designed. To trust AI, we also need transparency in how these systems work and make decisions. If users, including our university students, understand the reasoning behind AI’s conclusions, they can better anticipate its reliability and the contexts where it may falter. This understanding also extends to knowing when and how AI makes mistakes. By making AI systems auditable, we create the opportunity to review and learn from errors, thus preventing future recurrences. Fostering trust in AI is an ongoing process that involves legislation, standards, and ethical guidelines, so that we can ensure AI systems are used responsibly and for the public good. Regulators, developers, and users must collaborate to create an ecosystem where AI’s benefits are maximized and its risks are minimized. So we will learn to trust AI by creating systems that are transparent, accountable, and capable of learning from their mistakes, much like we would trust a human professional.

Halley: I’m wondering if we could just think about the future a little bit. How do you think intellectual humility will play a role in AI research and development in the future?

Li: I think intellectual humility is critical for the future of AI research and development, largely because it encourages a culture of continuous improvement and responsible innovation. This quality prompts researchers and developers to remain open to new evidence and ideas, to question assumptions, and to revise their approaches in light of new findings. Inherently, intellectual humility in AI means building systems that can challenge their own decisions, learn from feedback, and adapt over time without human arrogance or bias. This is essential in developing AI that is robust, fair, and transparent, making it more trustworthy and effective.

So, there are notable fields of study where the intersection of AI and intellectual humility is particularly significant. For instance, in the realm of machine ethics, I think scholars are exploring how to encode humility into AI decision-making frameworks to ensure these systems can handle ethical nuances and recognize when to defer to human judgment. In addition, interdisciplinary research is also fascinating, where insights from psychology, philosophy, and cognitive science are informing AI development. Some scholars are proposing that just as humans benefit from intellectual humility, so too should AI systems exhibit a form of artificial humility to improve how they interact with and learn from the world around them. One area that’s gaining momentum is the development of AI that can engage in reflective learning, a process in which the system not only adjusts based on new data but also reevaluates its learning methods. I think this approach could be crucial for progress in AI that operates in dynamic and complex real-world situations.

So, to sum up my opinions, I think intellectual humility will likely become an increasingly critical component of AI research, pushing the field towards systems that are better able to integrate into our complex, ever-changing world. And it invites a future where AI is not just powerful but also wise, its operations consistently aligned with the evolving matrix of human values and ethics.

Halley: Thank you so much. I’ve learned so much from listening to you talk about your work and about AI, where there are so many wonderful possibilities. And I’m so glad to hear you talk about the strengths of humans and the strengths of computers, and how you can introduce into these systems a sense of learning from their mistakes, which is, I think, in some ways the ultimate form of intellectual humility. So thank you for that. Is there anything else you’d like to share with us?

Li: One more thing I want to add: as for specific scholars and thought leaders, I think the work of people like Stuart Russell, who advocates for aligning machine intelligence with human objectives and values, is incredibly insightful, and I encourage people to read his classic work. Russell’s focus on creating beneficial AI aligns closely with the principles of intellectual humility. I know that there are initiatives in AI research which bring together academics, researchers, and industry experts, building a collaborative environment for addressing the challenges of AI today. Within these collaborations, I think we can find a more promising way to see how AI can incorporate intellectual humility into its very foundation. Yeah.

Halley: Thank you. I will get that list from you of people you would recommend, and I will include it with the podcast in the notes so that people can go and look at that research as well.

Li: Yeah. Thank you very much, Catherine. Yeah. Very nice to talk to you.

Halley: Thank you.

Ivry: That was Heng Li and Cathy Halley talking about artificial intelligence and intellectual humility. We’ve got other fabulous conversations about intellectual humility, what it is and how it might be applied in the classroom, in a bar, in a religious sanctuary, and at a doctor’s office, on our website. That website is daily.jstor.org. We’ve also got a reading list about intellectual humility that you can check out, and we sure do hope you will. I’m Sara Ivry, the features editor at JSTOR Daily. This conversation, and all the conversations in our series, were produced by Julie Subrin with help from JSTOR Daily’s Cathy Halley and from me.

Funding for this project was provided by UC Berkeley’s Greater Good Science Center, as part of its “Expanding Awareness of the Science of Intellectual Humility” initiative. That initiative is supported by the John Templeton Foundation. Thank you so much for listening.

Listen to the rest of the “Conversations on Intellectual Humility” series on the JSTOR Daily website, or wherever you get your favorite podcasts.



Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Daedalus, Vol. 151, No. 2, AI & Society (Spring 2022), pp. 43–57
The MIT Press on behalf of American Academy of Arts & Sciences