Siemens Funds King’s Computer Scientist to Advance Trustworthy and Reliable AI Development
Professor Elena Simperl from the Department of Informatics has been awarded a grant by Siemens to develop a new approach to creating trustworthy and reliable AI that can comply with emerging regulation.
The work, funded through the distinguished Hans Fischer Senior Fellowship in conjunction with the Technical University of Munich, aims to develop best practice for using knowledge graphs to train safe and trusted AI.
This will then feed into the creation of a legal compliance demonstrator, a tool that will guide compliance professionals as they audit industrial AI applications for legality. This will help set guardrails for AI developers to create technology in line with the law, and empower organisations to adopt AI into their own operations with confidence across a range of contexts.
This project joins a growing body of work at King’s, including that being undertaken by the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence, to further AI which is safe, verifiable and understandable.
Professor Elena Simperl explains the impetus behind the project, “The deployment of AI promises to revolutionise productivity in the workplace, with a recent explosion of AI-enabled tools making an impact everywhere from autonomous warehouses to chatbots.
“However, AI’s capacity to generate factually wrong or fake information, which it confidently presents to users as the truth, has brought forward the need to address its failings surrounding trustworthiness, transparency, accountability and fairness.
“In the next few years this is likely to become legally binding, with new pieces of legislation and regulatory frameworks like the EU AI Act and the AI Bill of Rights in the USA emerging to regulate AI.
“Knowledge graphs are increasingly being used to train AI tools, and this can present an opportunity but also a challenge when it comes to improving the reliability, accountability and fairness of AI.”
Knowledge graphs are general-purpose, machine-readable databases that bring together individual points of data and contextual reasoning to help applications like web search engines, platforms such as Wikipedia or intelligent assistants like Alexa and Siri deliver facts alongside their provenance. They are increasingly being used to train AI across a range of applications including supply chain management, procurement and healthcare, as they provide a convenient way for developers to train models on a large tranche of domain-specific data.
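To make this concrete, here is a minimal sketch of a knowledge graph in Python using the rdflib library; the entities, namespace and source URL are illustrative examples only, not data or code from the project:

from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

# Illustrative namespace; any HTTP IRI prefix would do.
EX = Namespace("http://example.org/")
PROV = Namespace("http://www.w3.org/ns/prov#")  # W3C provenance vocabulary

g = Graph()

# Facts expressed as subject-predicate-object triples.
g.add((EX.KingsCollegeLondon, RDF.type, EX.University))
g.add((EX.KingsCollegeLondon, EX.locatedIn, EX.London))

# Provenance: record where the fact came from, so applications can
# deliver the fact alongside its source.
g.add((EX.KingsCollegeLondon, PROV.wasDerivedFrom, URIRef("https://www.kcl.ac.uk/")))

# Retrieve facts together with their provenance via a SPARQL query.
for row in g.query("""
    SELECT ?entity ?source WHERE {
        ?entity <http://example.org/locatedIn> <http://example.org/London> ;
                <http://www.w3.org/ns/prov#wasDerivedFrom> ?source .
    }
"""):
    print(row.entity, row.source)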
However, as knowledge graphs are deployed more widely, engineering them increasingly relies on opaque AI models to combine vast collections of heterogeneous data sources, a process that itself lacks transparency, accountability and fairness.
Professor Simperl said, “If individuals are unable to verify how these knowledge graphs come to their conclusions, or whether they are acting in a discriminatory way, this presents an insurmountable obstacle to building trust in the models these graphs are used to train.”

Her project ‘TrustKG’ aims to create a blueprint that enables developers to produce knowledge graphs that are transparent and reliable, making it easier for them to train AI that is trustworthy and compliant with the law.
She continued, “This work draws on insights from across human-computer interaction and the social sciences to place people at the heart of the creation of these knowledge graphs. With human input we can ensure that the data within them is transparent, accountable and fair at a scale not previously possible. By introducing ‘human-in-the-loop’ elements to knowledge graphs, providing conversational explanations and identifying bias, we can empower the construction of AI that humans can believe in and deploy safely.
“It’s an honour to receive such a prestigious fellowship from TUM-IAS and Siemens; the opportunity to work with esteemed colleagues like Professor Klaus Diepold and Honorary Professor Sonja Zillner on this hugely important project is a vital step to ensure AI is socially responsible.”
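As a rough illustration of the ‘human-in-the-loop’ idea Professor Simperl describes, the following Python sketch routes automatically extracted facts to a human reviewer when they conflict with the existing graph or fall below a confidence threshold. All names, thresholds and data here are hypothetical; this is not code from the TrustKG project:

# Hypothetical sketch of a human-in-the-loop gate for knowledge graph
# construction; the threshold and data model are illustrative only.
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str
    confidence: float  # score from the extraction model, in [0, 1]
    source: str        # provenance: where the fact was extracted from

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; would be tuned in practice

def route(candidate: Triple, graph: list[Triple]) -> str:
    """Decide whether a candidate fact enters the graph automatically
    or is escalated to a human reviewer."""
    # Conflict: the same subject and predicate are already asserted with
    # a different object (e.g. two different cities for one headquarters).
    conflicting = any(
        t.subject == candidate.subject
        and t.predicate == candidate.predicate
        and t.obj != candidate.obj
        for t in graph
    )
    if conflicting or candidate.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # a person checks the fact and its source
    graph.append(candidate)
    return "accepted"

graph: list[Triple] = [
    Triple("AcmeCorp", "headquarteredIn", "Munich", 0.98, "annual_report.pdf"),
]
# Conflicts with the Munich fact, so it is escalated: prints "human_review".
print(route(Triple("AcmeCorp", "headquarteredIn", "Berlin", 0.95, "blog_post.html"), graph))
# No conflict and high confidence, so it is added: prints "accepted".
print(route(Triple("AcmeCorp", "founded", "1999", 0.97, "registry.xml"), graph))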