Trustworthy AI Made in Mannheim

How can we use artificial intelligence (AI) to make comprehensible medical decisions? A new project at the University of Mannheim, which has received funding from the Federal Ministry of Education and Research (BMBF), will analyze this question.

Knowledge graphs influence our lives daily, and yet the public knows little about them. Anyone looking for movie recommendations on a streaming platform, for example, often has knowledge graphs to thank for the suggestions. Knowledge graphs are an essential component of artificial intelligence (AI): in general terms, they are a model used to search for and link information.
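The idea of searching and linking information in a knowledge graph can be illustrated in a few lines of code. The following is a minimal sketch only: the graph is represented as subject-predicate-object triples, and all entities, relations, and the `recommend` function are invented for illustration, not taken from any real recommendation system.

```python
# A knowledge graph modeled as a set of (subject, predicate, object) triples.
# Entities and relations here are purely illustrative.
triples = {
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
    ("Inception", "has_genre", "Science Fiction"),
    ("Interstellar", "has_genre", "Science Fiction"),
}

def related(entity, predicate):
    """Return all objects linked to `entity` via `predicate`."""
    return {o for s, p, o in triples if s == entity and p == predicate}

def recommend(seen_movie):
    """Suggest movies that share a director or genre with `seen_movie`."""
    suggestions = set()
    for pred in ("directed_by", "has_genre"):
        for value in related(seen_movie, pred):
            # Walk the edge backwards: which other movies link to the same value?
            suggestions |= {s for s, p, o in triples if p == pred and o == value}
    suggestions.discard(seen_movie)
    return suggestions

print(recommend("Inception"))  # → {'Interstellar'}
```

Because the data is stored as explicit links rather than learned weights, every recommendation can be traced back to the edges that produced it.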

The aim of the LAVA research project, led by Professor Dr. Heiko Paulheim, is to automatically create and improve the knowledge graphs that are built into AI systems. Paulheim holds the Chair of Data Science at the University of Mannheim. LAVA stands for “solutions for the automated improvement and enrichment of knowledge graphs”. For the project, the Mannheim computer scientist is collaborating with the Karlsruhe-based company medicalvalues GmbH, with which he has already been working on an AI-based diabetes detection project since 2023. Medicalvalues specializes in AI solutions for medical diagnostics in laboratories and hospitals. Unlike movie recommendations, the AI used in this field must be reliable and trustworthy.

The joint goal of LAVA is a certified medical product that will help doctors make quick and precise diagnoses. In the case of rare diseases, for example, the plan is to compile data such as X-ray images, blood values and other relevant measurements and link them with the help of a knowledge graph, making it easier for the doctor to decide on further treatment for the patient. Paulheim's team contributes software modules that make it possible to keep the knowledge graph up to date and error-free at all times.

“Our aim is to provide reusable, well-documented components for white-box AI,” explains Paulheim. White-box AI refers to models that make it transparent how decisions are reached – for example, by using knowledge graphs that are also understandable for humans. Users can therefore see which data a decision is based on. This contrasts with black-box models such as ChatGPT, where it is not possible to trace how the answers come about. “AI is only trustworthy when humans can understand every decision and intervene in the event of wrong decisions,” Paulheim continues. At medicalvalues, for example, the AI may suggest extensions to the knowledge graph, but medical staff can check each of these extensions and correct them, if necessary.
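The review step described above can be sketched as a simple human-in-the-loop loop: an AI component proposes new edges for the knowledge graph, and a reviewer accepts, rejects, or corrects each one before it is applied. All names (`Proposal`, `apply_reviews`, the example triples) are illustrative assumptions, not part of the actual medicalvalues product.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A candidate edge suggested by the AI for the knowledge graph."""
    subject: str
    predicate: str
    obj: str

graph = set()  # the curated knowledge graph, stored as triples

def apply_reviews(proposals, decisions):
    """Apply reviewer decisions: 'accept', 'reject', or a corrected triple."""
    for prop, decision in zip(proposals, decisions):
        if decision == "accept":
            graph.add((prop.subject, prop.predicate, prop.obj))
        elif decision == "reject":
            continue
        else:  # the reviewer supplied a corrected triple instead
            graph.add(decision)

# Two hypothetical suggestions: one correct, one that the reviewer fixes.
proposals = [
    Proposal("elevated HbA1c", "indicates", "diabetes mellitus"),
    Proposal("low ferritin", "indicates", "hypertension"),
]
apply_reviews(proposals, [
    "accept",
    ("low ferritin", "indicates", "iron deficiency"),  # reviewer's correction
])
```

The key design point is that nothing enters the graph without an explicit human decision, which is what makes every later diagnosis traceable to curated knowledge.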

With this idea, the Mannheim-based AI developer was successful at a DATIpilot pitch in Darmstadt, in which more than 600 participants took part. The project will receive funding from the Federal Ministry of Education and Research (BMBF) in the amount of 300,000 euros for the next 18 months.

About DATIpilot

The BMBF's new DATIpilot funding line aims to promote research-related innovation. It is aimed at stakeholders from research and society who want to implement transfer-oriented project ideas to tackle current social challenges.

A total of 3,000 project teams submitted a brief outline to apply for the “Innovation Sprint” module of the funding line, in which a specific, creative transfer or innovation idea is developed. 600 of the submitted project ideas were categorized as worthy of funding, and the teams behind them were invited to one of the pitch events taking place in eight cities across Germany – including Darmstadt. The project teams presenting there also acted as a jury and selected around 25 percent of the project ideas for funding.

Website address: https://www.gesundheitsindustrie-bw.de/en/article/press-release/trustworthy-ai-made-mannheim