Queen Mary Initiative Addresses Risks of ChatGPT-Like Systems in Healthcare and Law
A Queen Mary University of London professor has been awarded a £4.38 million grant to tackle a pressing challenge in the field of Artificial Intelligence (AI). Professor Maria Liakata, a leading expert in Natural Language Processing (NLP) and a Turing AI Fellow, will spearhead a highly competitive RAI UK Keystone project to address the critical issue of sociotechnical limitations in Large Language Models (LLMs). Funded by a £31 million strategic investment from the UK Government, the RAI UK Keystone programme is recognised as a hallmark of excellence in responsible AI research.
LLMs, like those behind ChatGPT and virtual assistants, are cutting-edge AI models trained on massive amounts of text data. They can generate human-like text and creative content, translate between languages, and answer questions informatively. However, their rapid adoption, particularly in safety-critical domains such as healthcare and law, raises serious concerns.
“Through this project,” says Professor Liakata, “we have a real opportunity to harness the potential of LLMs for better services and efficiencies in healthcare and law, while mitigating the risks stemming from deploying poorly understood systems.”
Despite known limitations such as bias, privacy leaks, and a lack of explainability, LLMs are finding their way into sensitive areas. In the legal system, for instance, judges are already using ChatGPT to summarise court cases. What happens if an LLM gets the chronology of a case wrong, or reinforces existing racial biases in parole decisions? Similarly, public medical question-answering services powered by LLMs could give inaccurate or biased information because of limitations in the underlying technology.
Professor Liakata emphasises, “The potential for harm is significant. This project aims to ensure that society reaps the benefits of LLMs while preventing negative consequences.”
The project prioritises healthcare and law because of their critical role in the UK economy and their potential for both significant risk and groundbreaking advancement. It will focus on two key objectives:
- Evaluation benchmark: A comprehensive set of criteria, metrics, and tasks will be developed to evaluate LLMs across real-world settings and applications (illustrated in the sketch after this list). The work will involve collaboration with industry partners including Accenture, Bloomberg, Canon Medical, and Microsoft, as well as the NHS and service users, to ensure the benchmark reflects real-world needs.
- Mitigating solutions: Researchers will develop innovative machine learning methods informed by legal, ethical, and healthcare expertise. These solutions will address the LLM limitations identified by the evaluation benchmark and are designed to be readily incorporated into existing and future LLM-powered systems.
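To make the first objective concrete, the sketch below shows one plausible shape of an LLM evaluation harness: a set of tasks, a scoring metric, and a loop that reports per-domain accuracy. Everything in it is an illustrative assumption for exposition; the tasks, the exact-match metric, and the stub model are hypothetical and do not represent the project's actual benchmark.

```python
"""Minimal sketch of an LLM evaluation benchmark harness.

All tasks, the scoring rule, and the stub model are illustrative
assumptions, not the RAI UK project's actual benchmark.
"""
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    """One benchmark item: a prompt plus the expected answer."""
    domain: str   # e.g. "law" or "healthcare"
    prompt: str
    expected: str


# Hypothetical tasks probing known failure modes (chronology, factuality).
TASKS: List[Task] = [
    Task("law", "Order these events: sentencing, arrest, trial.",
         "arrest, trial, sentencing"),
    Task("healthcare", "Is paracetamol an antibiotic? Answer yes or no.",
         "no"),
]


def exact_match(prediction: str, expected: str) -> float:
    """Simplest possible metric; real benchmarks use far richer scoring."""
    return 1.0 if prediction.strip().lower() == expected.lower() else 0.0


def evaluate(model: Callable[[str], str], tasks: List[Task]) -> Dict[str, float]:
    """Run every task through the model and report per-domain accuracy."""
    scores: Dict[str, List[float]] = {}
    for task in tasks:
        score = exact_match(model(task.prompt), task.expected)
        scores.setdefault(task.domain, []).append(score)
    return {domain: sum(s) / len(s) for domain, s in scores.items()}


if __name__ == "__main__":
    # Stub standing in for a real LLM call, so the sketch runs offline.
    def stub_model(prompt: str) -> str:
        return "arrest, trial, sentencing" if "Order" in prompt else "no"

    print(evaluate(stub_model, TASKS))  # e.g. {'law': 1.0, 'healthcare': 1.0}
```

A real benchmark of this kind would go well beyond exact-match scoring, adding bias probes, privacy and explainability criteria, and scenarios drawn with NHS, legal, and service-user partners.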
“Professor Liakata’s project is a timely and crucial endeavour. Responsible development and deployment of AI like LLMs are essential to ensure public trust and maximise their potential benefits across various sectors. Queen Mary is proud to support this research that aligns perfectly with our commitment to responsible AI innovation,” says Professor Wen Wang, Vice-Principal and Executive Dean for Science and Engineering, Queen Mary University of London.
“These projects are the keystones of the Responsible AI UK programme. They have been chosen by the community because they address the most pressing challenges that society faces with the rapid advances in AI. We are excited to be announcing these projects at CogX in Los Angeles where some of the most influential AI representatives from industry and government are present,” said Professor Gopal Ramchurn, CEO of Responsible AI UK (RAI UK).
He added: “The concerns around AI are not just for governments and industry to deal with. It is important that AI experts engage with researchers from other disciplines and policy makers to ensure that we can better anticipate the issues that will be caused by AI. Our keystone projects will do exactly that and work with the rest of the AI ecosystem to bring others to our cause and amplify the impact of the research to maximise the benefit of AI to everyone in society.”
Professor Liakata concludes, “By focusing on these crucial areas, we can ensure that LLMs are developed and deployed responsibly, ultimately transforming healthcare and legal services while safeguarding the public.”