Can AI cut healthcare costs?
20.11.2024
Healthcare costs have been rising in Switzerland for years, and there are no simple solutions. Can AI help? Several projects at Bern University of Applied Sciences are currently researching the use of AI in medicine.
Key points at a glance
- ChatGPT introduced the general public to language models and generative AI. Since then, generative AI models have come into use in many areas.
- Will this also result in efficiency gains or even cost savings in the healthcare sector?
- Prof. Dr Kerstin Denecke outlines the projects currently underway at Bern University of Applied Sciences, discusses what she hopes to learn from them and reveals developments that cause her concern.
What projects using generative artificial intelligence are currently underway at the Institute for Patient-centered Digital Health?
Kerstin Denecke: One project we are working on with the Inselspital is about assuring the quality of radiology reports. We use generative AI to check whether all relevant information is included in the diagnostic reports dictated by radiologists. Part of the project was to develop a chatbot that interacts with patients before treatment, collecting and processing additional information and answering their questions about the examination. This eases the care team’s workload and improves the quality of treatment.

In other projects, we are investigating how generative AI can support clinical documentation. This involves compiling discharge reports from information on the patient’s hospital stay, or automatically extracting reportable information from existing data, such as the details that must be submitted to an implant register. The extracted information can also be used to assess risks: in one of our projects, we want to use AI to recognise indicators of an increased risk of hospital-acquired infections, which can even save lives.
Are these technologies already in use?
In some cases, the methods developed in the projects are already being trialled by selected doctors, with a view to gathering feedback on usability and, above all, quality. As with all digital projects, however, the new technologies must also be integrated into the existing system landscape and processes, which can take time.

Putting AI-generated discharge reports into practice will probably happen quickest. Hospitals are under pressure to provide solutions, because generative AI is so readily available through ChatGPT and the like, and doctors must be prevented from using tools for clinical documentation that fall foul of data protection law. It remains important, though, for the generated reports to be checked and approved by experts, who need to examine each report carefully even if it appears correct at first glance. To help them with this, we are working on a supporting tool that checks whether all relevant information has been included and that no extraneous information has found its way into the generated reports.
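The interview does not describe how this checking tool works internally, but the core idea of a completeness check can be sketched in a few lines. The Python fragment below is purely illustrative: the required section names and the draft report are hypothetical, and a real tool would rely on a language model rather than simple string matching.

```python
# Illustrative sketch: a minimal, rule-based completeness check for a
# generated discharge report. The section names and the draft text are
# hypothetical; the interview does not detail the project's actual tool.

REQUIRED_SECTIONS = [
    "Diagnosis",
    "Medication on discharge",
    "Follow-up",
]

def missing_sections(report_text: str) -> list[str]:
    """Return the required sections that do not appear in the report."""
    text = report_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]

draft = (
    "Diagnosis: community-acquired pneumonia. "
    "Medication on discharge: amoxicillin 750 mg three times daily."
)
gaps = missing_sections(draft)
print("Missing sections:", gaps if gaps else "none")  # -> ['Follow-up']
```

In practice such a check would have to understand paraphrased content, not just headings, which is precisely where a locally run language model comes in.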
Are these systems rolled out widely, or are they tailored solutions for individual hospitals?
In our projects, we collaborate with individual hospitals. However, the companies involved in the projects are interested in making the resulting methods and systems available to others in their software solutions. For example, the methods for AI-generated report writing can be integrated into existing clinical information systems and so be used directly during the documentation process.
Are there also projects that use AI in diagnostics?
I am working with researchers from the Department of Neurology at Inselspital on AI-based methods to help diagnose complex sleep disorders more quickly. Symptom checkers are already common on the internet and in apps: these systems ask about symptoms and suggest likely diagnoses, or advise whether the user urgently needs to see a doctor. Decision-support systems for clinical use and research have been around for a long time.
What about the inherent safety and security issues?
One aspect is data security. The language models in our projects always run locally, and we only ever work with anonymised data. The hospitals we collaborate with need to decide whether to rely on freely available language models or become dependent on commercial providers. When building on freely available models, their content must be critically scrutinised. The key is to avoid algorithmic errors caused by poor-quality or biased data.
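As a purely illustrative aside: even the simplest form of anonymisation, redacting obvious identifiers before any text reaches a model, can be sketched in a few lines. The patterns below are assumptions for demonstration only; real clinical de-identification is considerably more involved and is not described in the interview.

```python
import re

# Illustrative sketch: naive redaction of obvious identifiers before text
# is passed to a locally run language model. These patterns are assumptions
# for demonstration; real clinical de-identification is far more involved.

PATTERNS = {
    r"\b\d{2}\.\d{2}\.\d{4}\b": "[DATE]",              # dates such as 20.11.2024
    r"\b756\.\d{4}\.\d{4}\.\d{2}\b": "[AHV]",          # Swiss AHV numbers
    r"\b(?:Mr|Mrs|Ms|Dr)\.? [A-Z][a-z]+\b": "[NAME]",  # crude name heuristic
}

def redact(text: str) -> str:
    """Replace identifier patterns with neutral placeholders."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(redact("Mrs Keller, AHV 756.1234.5678.90, admitted on 20.11.2024."))
# -> [NAME], AHV [AHV], admitted on [DATE].
```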
Another aspect is the need to guarantee patient safety when AI is used in medicine. Any recommendation by an AI tool must undergo critical scrutiny, and both medical staff and patients will need the skills to provide it in future. I see a great deal of potential for research here. As a patient, how do I evaluate a diagnosis or treatment suggestion from an AI tool? What skills do healthcare professionals need to use AI tools safely? Another of my concerns is the risk of AI-based solutions being used by patients alone, i.e. without medical supervision. There is currently plenty of enthusiasm about the many possibilities of generative AI, but hardly anyone is talking about the potential risks. We do not yet know what undesirable side effects and interactions AI-based tools may have when used by patients. It is therefore important that we as researchers can closely support such projects, investigate the risks and integrate appropriate countermeasures.
«When it comes to uses of AI in diagnostics, I hope that we proceed with all due caution.»
What will the future bring?
Over the next few years, we will doubtless make great progress in the use of AI for administrative tasks in the healthcare sector. I see great potential here for efficiency gains. When it comes to uses in diagnostics and, above all, in therapy, I hope that we proceed with all due caution, and that we don’t do away with human contact. I believe that the best way to identify individual needs, now and in the future, is through direct human contact, and that we can respond to people’s needs better if we seek direct dialogue. Healthcare staff simply don’t have the time for this at the moment, but perhaps one day AI will free up that time.