The European Research Council has awarded an Imperial College London researcher €2.5m to make AI more easily available in the areas of health and law.
Professor Francesca Toni will use the funds to make artificial intelligence (AI) and its algorithms less opaque and more explainable to people from different backgrounds. This way, explanations of AI could become customisable to users in fields like medicine and law so that they can better trust and benefit from machine learning and AI.
The project, called “Argumentation-based Deep Interactive eXplanations” (ADIX), aims to empower both scientists and non-scientists to benefit from machine learning. Applications could include explaining why certain news items are deemed ‘fake news’, or explaining why advertising algorithms target certain people.
Professor Toni, of Imperial’s Department of Computing, said: “Today’s AI landscape is permeated with data and powerful methods with the potential to impact a wide range of human sectors, including healthcare and the practice of law. Yet, this potential is hindered by the opacity of most data-centric AI methods and it is widely acknowledged that AI cannot fully benefit society without addressing its widespread inability to explain its outputs, causing human mistrust and doubts regarding its regulatory and ethical compliance.
“Transparency will enable users to better trust AI and gain more benefit from it. ADIX aims to facilitate this.”
While extensive research efforts are being devoted to making AI more explainable and transparent, most focus on engineering shallow explanations that provide little insight into how algorithms reach their conclusions. This reflects the idea of AI as a ‘black box’: information goes in and answers come out, but little is revealed about how those answers were reached.
Using computational argumentation as the underpinning technology, ADIX aims to define a new scientific way of working with interactive explanations that can be used alongside a variety of data-centric AI methods to explain their outputs by providing justifications for them, using human feedback on the explanations to refine these outputs.
When a machine argues – with itself, another machine or a person – it carries out a process similar to human deliberation: it takes one or more assertions and rules and combines them to arrive at a reasoned conclusion. One notable feature of this process is its accessibility. Expert users can view and understand the encoded arguments the machine has generated, and so can end users, provided the AI is designed to represent its arguments in a form ordinary people can understand, for example natural language.
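To make this concrete, here is a minimal sketch of how a machine can "argue" in the style of abstract argumentation, a standard formalism in this field. Arguments attack one another, and the machine sceptically accepts only those arguments whose attackers can all be defeated (the so-called grounded extension). The function name and the toy diagnosis example are illustrative assumptions, not taken from the ADIX project itself.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an argumentation framework.

    arguments: set of argument labels
    attacks: set of (attacker, attacked) pairs
    """
    accepted = set()
    rejected = set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted - rejected:
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= rejected:
                # Every attacker is already defeated, so 'a' is safe to accept.
                accepted.add(a)
                changed = True
            elif attackers & accepted:
                # An accepted argument attacks 'a', so 'a' is rejected.
                rejected.add(a)
                changed = True
    return accepted

# Hypothetical example: a diagnosis argument 'd' is attacked by a
# counter-argument 'c', which is in turn attacked by evidence 'e'.
args = {"d", "c", "e"}
atks = {("c", "d"), ("e", "c")}
print(sorted(grounded_extension(args, atks)))  # -> ['d', 'e']
```

Because the result is derived step by step from explicit attack relations, each acceptance or rejection can be traced and explained to a user, which is the accessibility property the paragraph above describes.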
Professor Toni added: “ADIX will lead to a radical rethinking of explainable AI that can work in synergy with humans within a human-centred but AI-supported society.”
The ERC Advanced Grants are funded under the EU’s Horizon 2020 initiative.
Mariya Gabriel, European Commissioner for Innovation, Research, Culture, Education and Youth, said: “I am pleased to see more women applying for these prestigious grants – and winning them. For Europe to be competitive globally, we need to nurture all top talent that can push the frontiers of our knowledge. Although the trend is positive, the battle is not yet won and that is why we are making gender balance a priority in Horizon Europe. I welcome the great efforts made by the ERC’s Scientific Council in this respect, and I hope to see more public authorities, universities and research institutions encourage women’s participation in all fields of science.”
ERC President Professor Jean-Pierre Bourguignon said: “We look forward to seeing what major insights and breakthroughs will spring from this investment and trust. We are pleased with the continued positive trend for women researchers showing that ERC’s sustained efforts on this matter pay off.”