University of Mannheim Leads Project on AI for Comprehensible Medical Decisions
Knowledge graphs influence our lives every day, yet they are little known to the public. If you ask a streaming platform for film recommendations, for example, it is often a knowledge graph you have to thank for the answer. Knowledge graphs are an essential building block of artificial intelligence (AI): they store information as a network of linked facts, so that it can be searched, combined, and reasoned over.
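To make this concrete, here is a minimal sketch in Python, with films and relations invented purely for illustration, of how a knowledge graph stores facts as subject-predicate-object triples and how a film recommendation can be derived by following its links:

```python
# A knowledge graph reduced to its core: a set of subject-predicate-object
# triples. All films, users, and relations here are invented examples.
triples = {
    ("alice", "watched", "Inception"),
    ("Inception", "has_genre", "science fiction"),
    ("Interstellar", "has_genre", "science fiction"),
    ("alice", "watched", "Heat"),
    ("Heat", "has_genre", "crime"),
}

def recommend(user):
    """Suggest unseen films that share a genre with films the user watched."""
    watched = {o for s, p, o in triples if s == user and p == "watched"}
    genres = {o for s, p, o in triples if s in watched and p == "has_genre"}
    return {s for s, p, o in triples if p == "has_genre" and o in genres} - watched

print(recommend("alice"))  # {'Interstellar'}
```

A streaming platform's real graph is vastly larger, but the principle of answering questions by traversing linked facts is the same.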
The aim of the LAVA research project, led by Prof. Dr. Heiko Paulheim, is to automatically create and improve the knowledge graphs that are built into AI systems. Paulheim holds the Chair of Data Science at the University of Mannheim; LAVA is short for “solutions for the automated improvement and enrichment of knowledge graphs”. The Mannheim computer scientist is carrying out the project together with the Karlsruhe-based company medicalvalues GmbH, with which he has been cooperating since 2023 on a project for AI-based detection of diabetes. medicalvalues specializes in AI solutions for medical diagnostics in laboratories and clinics. Unlike film recommendations, this is an area in which it is essential that the AI used is reliable and trustworthy.
The joint goal of the LAVA project is a certified medical device that will make it easier for doctors to reach quick and precise diagnoses in the future. In the case of a rare disease, for example, data such as X-ray images, blood values, and other relevant measurements are brought together and linked in a knowledge graph, making it easier for the treating physician to decide on the next steps. Paulheim’s team is contributing software modules that keep this knowledge graph up to date and free of errors.
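What such a linkage might look like in miniature is sketched below; the diseases, findings, and scoring are invented for illustration and are not taken from the medicalvalues product. The point is only that facts about a patient and facts about diseases live in the same graph and can be matched along its edges:

```python
# Illustrative sketch only: patient findings and disease profiles stored as
# triples in one graph. All diseases, findings, and scores are invented.
patient_triples = {
    ("patient_1", "has_finding", "elevated_ferritin"),
    ("patient_1", "has_finding", "joint_pain"),
}

disease_triples = {
    ("hemochromatosis", "indicated_by", "elevated_ferritin"),
    ("hemochromatosis", "indicated_by", "joint_pain"),
    ("anemia", "indicated_by", "low_hemoglobin"),
}

def candidate_diagnoses(patient):
    """Rank diseases by how many of the patient's findings they explain."""
    findings = {o for s, p, o in patient_triples
                if s == patient and p == "has_finding"}
    scores = {}
    for disease, _, finding in disease_triples:
        if finding in findings:
            scores[disease] = scores.get(disease, 0) + 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(candidate_diagnoses("patient_1"))  # [('hemochromatosis', 2)]
```

Because such a ranking is derived by explicit matching in the graph, the physician can see exactly which findings led to which suggestion.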
“Our goal is to provide reusable, well-documented components for white-box AI,” explains Paulheim. White-box AI refers to models that make transparent how their decisions come about – for example by using knowledge graphs that are also understandable to humans. Users can therefore see which data a decision is based on – unlike with black-box models such as ChatGPT, whose answers cannot be traced back in this way. “An AI is only trustworthy if humans can understand every decision and intervene when wrong decisions are made,” Paulheim continues. At medicalvalues, the AI is eventually to suggest extensions to the knowledge graph itself, but every such suggestion can be checked by medical staff and corrected if necessary.
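A hedged sketch of this review workflow, with entirely hypothetical triples and function names, might look as follows: AI-suggested extensions wait in a queue and only become part of the knowledge graph once a human reviewer approves them.

```python
# Hypothetical human-in-the-loop review of AI-suggested graph extensions.
# Triples and the review function are invented for illustration.
knowledge_graph = set()
pending_suggestions = [
    ("elevated_crp", "indicates", "inflammation"),   # medically plausible
    ("elevated_crp", "indicates", "bone_fracture"),  # wrong, should be rejected
]

def review(suggestion, approved_by_medical_staff):
    """Apply a human decision to one AI-suggested extension."""
    if approved_by_medical_staff:
        knowledge_graph.add(suggestion)  # approved facts enter the graph
    # rejected suggestions are simply discarded (or logged for later analysis)

review(pending_suggestions[0], approved_by_medical_staff=True)
review(pending_suggestions[1], approved_by_medical_staff=False)
print(knowledge_graph)  # {('elevated_crp', 'indicates', 'inflammation')}
```

In this scheme, every fact in the graph has passed a human check, so decisions derived from it remain traceable to reviewed knowledge.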
With this idea, the Mannheim-based AI developer prevailed among 600 participants at a DATIpilot pitch in Darmstadt. The project will receive a total of 300,000 euros in funding from the Federal Ministry of Education and Research (BMBF) over the next 18 months.