TRUSTAI: Exploring How and When Humans Outsource Moral Decisions to AI Agents

Dr Jim Everett wins prestigious ERC grant for ‘TRUSTAI’, a multi-lingual global project on ‘moral machines’.

Dr Jim Everett, a Reader in the University’s School of Psychology, has been awarded a European Research Council Starting Grant of 1.7 million Euros for a five-year project examining trust in moral machines.

His project – TRUSTAI – draws on psychology and philosophy to explore how and when humans trust AI agents that act as ‘moral machines’. Jim will work with a team of researchers at Kent to explore the characteristics of AI agents that predict trust; the individual differences that make us more or less likely to trust AI agents; the situations where we are more likely to ‘outsource’ moral decisions to AI agents; and how these findings should be used to design AI agents that warrant our trust. 

The project will draw on diverse research methodologies including online surveys, qualitative analysis, economic games, in-person behavioural experiments, and empirical ethical analysis. As well as running controlled experiments in 9 different countries, Jim will work with Dr Edmond Awad to create a new data collection platform that will run online experiments globally in at least 3 different languages.

Despite the early stage of his career, Jim is already recognised as a leading international expert in moral psychology. This prestigious grant comes on top of a string of successes in the last two years, including the Early Career Award from the European Association of Social Psychology (2020); the Rising Star Award from the Association for Psychological Science (2021); our own Starting Researcher Prize (2021); the Early Career Trajectory Award from the Society for Personality and Social Psychology (2021); and the Philip Leverhulme Prize (2021).

‘I am beyond grateful to receive this funding from the European Research Council,’ Jim told us. ‘Machine systems are increasingly required to display not just artificial intelligence but also artificial morality. These moral machines are not just being tasked with processing ethically relevant information, but even with making moral recommendations. My project will bring the latest theories and methods from moral psychology to understand not only when, why, and how we trust such moral machines – but whether such trust is warranted in the first place. This project would not be possible without the support of the ERC and I am so excited and humbled to have this opportunity.’
