How do we ensure that ethics and artificial intelligence go hand in hand in public healthcare?

Artificial intelligence can make public healthcare more efficient, but it also creates a pressing need to address the ethical problems that come with it. A new research project aims to promote the ethically responsible use and development of artificial intelligence.

A healthcare professional looks at a mammography scan on a computer.
Photo: Adobe Stock

Artificial intelligence (AI) is already part of everyday life in public healthcare, where it can relieve healthcare professionals and help improve diagnostics and treatment. However, technological advances also pose ethical challenges.

Thorough preparation is therefore important before artificial intelligence is put to use. A new research project will investigate how artificial intelligence affects healthcare practice, with a special focus on ethics.

»Society will benefit from the research because the project provides knowledge about how we can promote ethically responsible development and use of artificial intelligence within Danish public healthcare,« says Anne Gerdes, Professor at the Department of Design, Media and Educational Science at the University of Southern Denmark (SDU) and head of the Center for AI Ethics.

Placing patients and healthcare professionals first

The research project consists of three interrelated parts. The first part examines patients' values and perspectives on artificial intelligence. The second part looks at how the responsibility and professional judgment of healthcare professionals are affected when artificial intelligence is in play. The third part focuses on how AI systems can be developed ethically and responsibly.

The patient perspective is grounded in the breast cancer screening programme offered to all women in Denmark over the age of 50.

From the end of 2025, artificial intelligence will be a regular part of the screening process in the Region of Southern Denmark, where the research project will be carried out. Until now, two imaging specialists have reviewed the screening images; in the future, AI will take part in the process alongside the doctors.

The research project will investigate how this technological development is experienced and understood from the patients' perspective. Here, a PhD student will explore ethical questions and interview screened women about their concerns regarding AI assessment.

»The patients' voices must be included. Their perspectives contribute to AI being developed and implemented in a fair and responsible manner in public healthcare,« Anne Gerdes elaborates.

But AI can also have consequences for healthcare professionals' clinical judgment.

»If you outsource many normal images to an AI algorithm, it may become difficult in the long run for radiologists to see enough images to maintain their skills,« says Anne Gerdes.

Therefore, the second part of the project addresses what AI means for healthcare professionals. This part of the project is also linked to existing research projects that explore the potential of AI technology for X-ray diagnostics at Odense University Hospital (OUH).

»AI can improve workflows for the benefit of patients and staff. But when AI is introduced, accountability and professional judgment must remain paramount,« Anne Gerdes emphasizes.

Aims to involve both AI developers and clinicians

The third and final part of the project builds on the results from the previous parts and aims to promote the development of tools that can support the responsible development of AI systems.

Here, the PhD student will investigate how to design guidelines and tools that support interdisciplinary teams of clinicians and AI developers in building AI solutions that are professionally robust and ethically sound.

This part of the project also draws on an AI tool developed under the leadership of doctor and PhD student Frederik Duedahl.

»I hope that we can gain good insight into the ethical issues that arise in connection with artificial intelligence in Danish public healthcare, and that we learn how guidelines and tools for proactively developing ethically responsible artificial intelligence can be produced so that they support interdisciplinary teams as much as possible,« says Anne Gerdes, who continues:

»Because that is one of the big hurdles. There are many tools for the responsible development of AI systems, but they often have to be used in interdisciplinary teams, where clinicians and computer scientists have different insights into the field. The challenge is to establish common ground.«

The hope, then, is to improve the tools available for the responsible development of AI systems so that they support the development work in the best possible way.

In addition to Anne Gerdes, the project group consists of Benjamin Schnack Rasmussen, Clinical Research Manager at the Centre for Clinical Artificial Intelligence at SDU, and doctor and PhD student Frederik Duedahl.