University of Southampton Secures £12 Million for UK Projects Tackling Rapid AI Advances

A series of breakthrough AI projects has been awarded £12 million to address the challenges posed by rapid advances in artificial intelligence.

Three initiatives in the UK will tackle emerging concerns around generative AI and other forms of AI currently being built and deployed across society.

The projects cover the health and social care sectors, law enforcement and financial services.

An additional two projects, funded by UKRI, are looking at both how responsible AI can help drive productivity and how public voices can be amplified in the design and deployment of these technologies.

Funding has been awarded by Responsible AI UK (RAi UK), and the projects form the pillars of its £31 million programme, which will run for four years.

RAi UK is led from the University of Southampton and backed by UK Research and Innovation (UKRI), through the UKRI Technology Missions Fund and EPSRC. UKRI has also committed an additional £4m of funding to further support these initiatives.

Professor of Artificial Intelligence Gopal Ramchurn, from the University of Southampton and CEO of RAi UK, said the projects are multi-disciplinary and bring together computer and social scientists, alongside other specialists.

He added: “These projects are the keystones of the Responsible AI UK programme and have been chosen because they address the most pressing challenges that society faces with the rapid advances in AI.

“The projects will deliver interdisciplinary research that looks to address the complex socio-technical challenges that already exist or are emerging with the use of generative AI and other forms of AI deployed in the real world.

“The concerns around AI are not just for governments and industry to deal with – it is important that AI experts engage with researchers and policymakers to ensure we can better anticipate the issues that will be caused by AI.”

Since its launch last year, RAi UK has delivered £13 million of research funding. It is developing its own research programme to support ongoing work across major initiatives such as the AI Safety Institute, the Alan Turing Institute, and BRAID UK.

RAi UK is supported by UKRI, the largest public funder of research and innovation, as part of government plans to turn the UK into a powerhouse for future AI development.

Dr Kedar Pandya, UKRI Technology Missions Fund SRO and Executive Director at EPSRC, said: “AI has great potential to drive positive impacts across both our society and economy.

“This £4m of funding through the UKRI Technology Missions Fund will support projects that are considering the responsible use of AI within specific contexts.

“These projects showcase strong features of the responsible AI ecosystem we have within the UK and will build partnerships across a diverse set of organisations working on shared challenges.

“These investments complement UKRI’s £1bn portfolio of investments in AI research and innovation, and will help strengthen public trust in AI, maximising the value of this transformative technology.”


Using AI to support police and courts

The £10.5 million awarded to the keystone projects was allocated from UKRI’s Technology Missions Fund investment at the inception of RAi UK last year.

This includes nearly £3.5 million for the PROBabLE Futures project, which focuses on the uncertainties of using AI for law enforcement.

Its lead, Professor Marion Oswald MBE of Northumbria University, said that AI can help police and the courts tackle digital data overload and unknown risks, and increase operational efficiency.

She added: “The key problem is that AI tools take inputs from one part of the law enforcement system, but their outputs have real-world, possibly life-changing, effects in another part – a miscarriage of justice is only a matter of time. Our project works alongside law enforcement and partners to develop a framework that understands the implications of uncertainty and builds confidence in future probabilistic AI, with the interests of justice and responsibility at its heart.”

Limited trust in large language models

Around £3.5 million has also been awarded to a project addressing the limitations of large language models, known as LLMs, for medical and social computing.

Professor in Natural Language Processing Maria Liakata, from Queen Mary University of London, said: “LLMs are being rapidly adopted without forethought for the repercussions.

“For instance, UK judges are allowed to use LLMs to summarise court cases and, on the medical side, public medical question answering services are being rolled out. Our vision addresses the socio-technical limitations of LLMs that challenge their responsible and trustworthy use, particularly in medical and legal use cases.”

Power back in hands of people who understand AI

The remaining £3.5 million is for the Participatory Harm Auditing Workbenches and Methodologies project, led from the University of Glasgow.

Its aim, said principal investigator Dr Simone Stumpf, is to maximise the potential benefits of predictive and generative AI while minimising the potential for harm arising from bias and “hallucinations”, where AI tools present false or invented information as fact.

She added: “Our project will put auditing power back in the hands of people who best understand the potential impact in the four fields these AI systems are operating in. By the project’s conclusion, we will have developed a fully-featured workbench of tools to enable people without a background in artificial intelligence to participate in audits, make informed decisions, and shape the next generation of AI.”

Additional £4 million from UKRI

UKRI has invested an additional £4 million through the UKRI Technology Missions Fund to support both the keystone projects and additional satellite projects.

£750k has been awarded to The Digital Good Network, The Alan Turing Institute and The Ada Lovelace Institute to ensure that public voices are attended to in AI research, development and policy.

The project will synthesise, review, build and share knowledge about public views on AI and about engaging diverse publics in AI research, development and policy. A key aim of the project will be to promote equity-driven approaches to AI development, amplifying the voices of underrepresented groups.

Project lead Professor Helen Kennedy said: “Public voices need to inform AI research, development and policy much more than they currently do. This project represents a commitment from UKRI and RAI UK to ensuring that happens. It brings together some of the best public voice thinkers and practitioners in the UK, and we’re excited to work with them to realise the project’s aims.”

A further £650k has been awarded to The Productivity Institute to gain insights into how the uptake of responsible AI can be incentivised through incentive structures, business models and regulatory frameworks.

The Institute aims to better understand how responsible AI can drive productivity, ensure the technologies are deployed responsibly across society, and enhance the UK’s prosperity.

Project lead Professor Diane Coyle said: “This is an opportunity for the UK to drive forward research globally at the intersection of technical and social science disciplines, particularly where there has been relatively little interdisciplinary research to date.

“We are keen to enhance connections between the research communities and businesses and policymakers.”