‘We need to leverage ‘good’ AI to prevent ‘bad’ AI’

Olivia Miller

As the Government prepares to host an international AI safety summit in November, it has expressed fears that artificial intelligence (AI) technology could develop beyond human control and be exploited by bioterrorists. The Government is concerned that this could lead to serious national security issues. Experts from the University’s Institute of Cyber Security for Society (iCSS) and the School of Computing comment…

Professor Shujun Li, Director of iCSS, said: ‘Given the already observed difficulty of agreeing an AI governance framework and regulations that all nations and researchers will strictly adhere to, I believe the key to preventing AI from getting out of our control is to develop new techniques that allow humans and AI to interact with each other more effectively and more efficiently.

‘One aspect is to detect any signs that a fully automated AI system could go out of control, and to let human operators take back control and shut the system down. Another aspect is to never let a fully automated AI system make crucial decisions without appropriate human involvement and approval. We (as human users) will still need to rely on AI to do some of this work, e.g. monitoring what another AI system is doing and flagging when it may go wrong.

‘In other words, we will need to leverage ‘good’ AI to prevent ‘bad’ AI, with human users in the loop as the ultimate decision makers. More and more researchers from different disciplines have recognised the urgent need for research on trustworthy and responsible AI, an area in which national governments and research funders should consider increasing their funding in the future.’

Dr Dominique Chu, member of iCSS and Senior Lecturer at the University’s School of Computing, added: ‘There may well be a danger of rogue bioterrorists exploiting AI to do harm, but this calls first for action on biotech regulation. After all, even the most powerful AI we have, or can imagine, will not by itself produce any harmful substances.

‘What we should worry about more is governments and other powerful actors getting out of control and using the power of AI to control the population and undermine democracy. China is currently showcasing the opportunities of AI in this respect.’

The University’s Press Office provides the media with expert comments in response to topical news events. Colleagues who would like to learn more about how to contribute their expertise or how the service works should contact the Press Office at pressoffice@kent.ac.uk