Meet Dr Jim Everett

Emily Collins

Dr Jim A.C. Everett is a Reader (Associate Professor) at the University of Kent, specialising in moral judgement, perceptions of moral character, and the moral psychology of artificial intelligence. Whether it’s political leaders, fictional characters or computers making the decisions, Jim’s research is helping us understand why we respond to their actions in the way we do.

How did you come to be internationally recognised as an expert in the psychology of utilitarianism? 

I first became interested in moral judgement and altruism during my doctorate at the University of Oxford, during which I received a Fulbright Fellowship to work at Harvard University. Some of my earlier work explored what influences people to donate to charity, the stigma around Islamic head coverings, and whether religious people are more prosocial than atheists. This laid the foundation for my interdisciplinary research, which draws on ethical theories from philosophy to enrich our moral psychology. My ideal approach to research is one in which I move from the philosopher’s armchair to the psychologist’s laboratory, and then back again.

Where this has come out most clearly is in my work on utilitarianism. Utilitarianism is a highly influential – and controversial – theory that at root says that morality is just impartially maximising welfare: the greatest good for the greatest number. In my work I’ve been studying how ordinary people think about morality, and how this relates to utilitarianism. While previous work has largely focused on “trolley dilemmas” that look at how people think about the acceptability of causing harm to some individuals for the greater good, my work has tried to understand what underlies the more “positive”, impartial side of utilitarianism too – what we call “impartial beneficence”. For example, how do people think about the importance of helping others in distant countries? Is it morally acceptable for people to spend money on frivolous luxuries while others live in poverty?

Where does your interest in moral character come from? 

My interest in moral character and trust originally came from thinking about the social consequences of making utilitarian moral judgements, which promote the greatest amount of good for the greatest number of people. Morality is incredibly important to how we perceive other people, yet traditionally we have known little about how the different kinds of moral judgements people make, and the behaviours they perform, impact person perception. For example: think about someone who chooses to spend their time volunteering to help strangers rebuild their house instead of spending time with their sick mother. Both are “moral” actions, but we might form a different impression of someone based on which of these actions they perform.

In my work I have been trying to explore how different kinds of decision-makers are perceived, and whether this could even help explain why we tend to make certain kinds of moral judgements in the first place.

It sounds as though anyone striving to gain trust could learn a thing or two from your research. Any tips? 

In 2020, I led a study that used my two-dimensional model of utilitarianism to ask how the moral judgements our leaders make affect our trust in them during a global health crisis, with a focus on the COVID-19 pandemic. I worked with researchers at Yale University to assemble a multidisciplinary team of 37 international researchers to carry out experiments involving nearly 24,000 people in 22 countries.

We found that participants showed more trust in leaders who endorsed utilitarian views in impartial beneficence dilemmas, for example arguing that medicine and PPE should be sent wherever in the world they can do the most good, rather than favouring their own citizens. But participants showed less trust in leaders who endorsed utilitarian views in instrumental harm dilemmas, such as those who suggested we must be willing to prioritise some over others for the greater good. These insights, published in Nature Human Behaviour, may prove useful in guiding how leaders approach similar moral dilemmas in the future.

We have recently followed this up in a new paper published in Psychological Science, in which we extend this to views about vaccine nationalism. We showed that while many countries have pursued nationalistic vaccine policies (keeping excess vaccines within the country instead of redistributing them to countries that do not have enough), participants actually trust redistributive leaders more than nationalistic leaders. Importantly, we showed that professional civil servants had the opposite intuition and wrongly predicted higher trust in nationalistic leaders. These results continue to show how the decisions that leaders make when faced with moral dilemmas influence how they’re trusted, while also demonstrating that those in power can often misjudge the public – perhaps because they overestimate others’ self-interest.

Does the same apply when it’s artificial intelligence making the decisions? 

That’s a really important question, and exactly what I’m seeking to find out. My work here is really driven by the idea that machine morality is as much about human moral psychology as it is about the philosophical and practical issues of building artificial agents. We need to understand how, when, and why people trust artificial agents to make morally relevant decisions – and consider both the positive and negative consequences of this trust. 

Following on from an earlier 2021 £300,000 grant from the Economic and Social Research Council, I have recently been awarded €1.7 million from the European Research Council to continue my work. From 2024 to 2028, my new project will integrate psychology and philosophy to explore how and when humans trust AI agents that act as ‘moral machines’. Drawing on classic models of trust and recent theoretical work from moral psychology on the complexity of trust in the moral domain, I am exploring the characteristics of AI agents that predict trust, the individual differences that make us more or less likely to trust AI agents, and the situations in which we are more or less likely to trust in AI.

In June 2023, I organised the Moral Psychology of AI Conference, which brought together leaders in the fields of morality, psychology and AI with researchers from various disciplines to share research and provoke discussion surrounding the moral psychology of artificial intelligence.

 

Dr Jim Everett completed his BA, MSc, and DPhil at the University of Oxford, before receiving a Fulbright Fellowship to work at Harvard University and a Marie Skłodowska-Curie Postdoctoral Fellowship to work at Leiden University. Jim has received numerous awards for his work, including the highly prestigious Philip Leverhulme Prize in 2021 in recognition of his early career contributions to psychology. As well as receiving the Theoretical Innovation Prize from the Society for Personality and Social Psychology for his work on utilitarianism, he has the honour of receiving early career awards from the three largest international societies in social psychology: the 2020 Early Career Award from the European Association of Social Psychology, the 2021 “Rising Star” Award from the Association for Psychological Science, and the 2021 Early Career Trajectory Award from the Society for Personality and Social Psychology.

Jim’s research has been featured in The Times, The Guardian, The Daily Mail, The New York Times, Scientific American, and more, and he is open to being featured in print, on TV, and on radio.