Categories
Health tech News

Visit from the minister: Danish digital research as a driving force

28 November 2024


Artificial Intelligence and humans are better together than apart. On a sunny autumn day in November, Minister for Digitalization Caroline Stage Olsen and Chairman of the Health Committee in the Capital Region of Denmark Christoffer Buster Reinhardt got a first-hand look at CAMES, a simulation training center at Rigshospitalet. Here, DIREC had invited them to a talk about how the center is working to make Denmark one of the leading countries in Europe within digital research and innovation.

Thomas Riisgaard Hansen, Director of DIREC, shared three key recommendations with the Minister:

  1. Unite Denmark’s digital research and innovation ecosystem

    By aligning Denmark’s diverse initiatives, we can unlock powerful synergies that amplify efforts and multiply outcomes.
  2. Invest broadly in emerging digital technologies

    While AI dominates current focus, other transformative technologies may take the lead by 2030. Denmark must prioritize AI development while fostering a broad base of innovation to ensure long-term growth and adaptability.
  3. Leverage Denmark’s unique strengths to drive growth and enhance public solutions

    Rather than emulating Silicon Valley, Denmark should build on its own research and industrial strengths to create distinctive successes. Strategic, targeted investments in these areas will deliver the greatest impact.
AI as a mentor in healthcare

The Minister was also introduced to a concrete example of how research, innovation, and entrepreneurship can converge. Professors Aasa Feragen and Martin G. Tolsgaard presented the Explain Me project, funded by DIREC.
 
This collaboration between researchers from DTU, KU and clinicians at CAMES explores how artificial intelligence can serve as a mentor for less experienced healthcare professionals. AI provides guidance to perform high-quality scans, reducing the need for advanced expertise.

Currently, significant disparities exist in the quality of ultrasound scans for pregnant women. Clinicians often struggle to identify high-risk pregnancies that require timely intervention. This highlights the need for standardized ultrasound quality across practitioners and hospitals.
 
By leveraging AI-driven decision support systems and explainable AI models, the project has achieved significant advancements in diagnosing high-risk pregnancies. For instance, the technology can detect nearly 25% more cases of premature birth risk — a condition that poses serious dangers to infants and is costly for the healthcare system.
 
The promising outcomes of the Explain Me project have led to the establishment of a spinout company, Prenaital, dedicated to commercializing this research and making it accessible in clinical settings.

 

Categories
AI Health tech News

AI against kidney cancer

2 July 2024

AI against kidney cancer: Reducing over-treatment and saving millions for society  

Every year, kidney tumor patients endure significant suffering due to unnecessary biopsies and surgeries. The current diagnostic methods leave much to be desired. Therefore, a research team from the University of Copenhagen, Roskilde University, and the Urology Department at Zealand University Hospital is developing explainable artificial intelligence (XAI) to assist nephrologists and patients with accurate diagnoses.

Nessn Azawi with a CT scanner, which is used for scans of kidney cancer.

Kidney cancer is one of the most over-treated cancers in Denmark. The available scan images are often unreliable, with one in five CT scans yielding false positives. This means that up to 27 percent of kidney tumor patients undergo painful biopsies and surgeries without having cancer.

To address this, a newly developed AI model is currently being tested at Zealand University Hospital. It has surpassed experienced doctors in diagnosing kidney cancer from scan images. The problem, however, is that doctors cannot explain the model’s conclusions, which hinders the AI model’s widespread adoption.

In the research and innovation project EXPLAIN ME, funded by the Digital Research Centre Denmark, a team of researchers from the University of Copenhagen, Roskilde University, and the Urology Department at Zealand University Hospital are working to interpret the model’s conclusions.

“Although it is tempting, we cannot simply leave such significant decisions to AI. We need to fully understand its neural patterns from the outset before we can implement it in practice,” says Nessn Azawi, Chief Physician at Zealand University Hospital’s Urology Department and Associate Professor at the University of Copenhagen.

Significant savings for society

As part of the EXPLAIN-ME project, Nessn Azawi and his research team have been working since 2022 to develop explainable artificial intelligence (XAI) that can guide nephrologists on when surgery is necessary, and crucially, explain why.

The 1,000 Danish patients diagnosed with kidney cancer each year rarely show symptoms until the cancer is advanced. The significant diagnostic uncertainty means that many patients are over-treated. According to Nessn Azawi, AI-based diagnosis could shorten the treatment process by 2-4 weeks and save the healthcare system approximately 15-25 million kroner annually. These gains would be even greater if the technology were adopted throughout the Nordic region.

“We over-treat around 30,000 kidney cancer patients in the Scandinavian countries. Improving diagnosis would have significant positive effects for both society and the patients,” says Nessn Azawi.

A multidisciplinary effort

Researchers have already tested the AI model at Roskilde University with promising results. The next milestone is to develop a model with a more detailed dataset that can provide nephrologists with accurate kidney cancer diagnoses supported by solid evidence. This has been the focus of PhD student Daniel van Dijk Jacobsen from Roskilde University’s Department of People and Technology for the past two years.

“The challenge is that we don’t know what the model is analyzing when it makes the diagnosis. It’s about identifying the patterns the model detects at the pixel level and then conveying that information to the doctors,” he says.

Thus, it has been essential to work across disciplines, incorporating ethnographic observation studies during patient interactions, participatory design, and ongoing discussions with the medical staff at Zealand University Hospital.

“I find that doctors are enthusiastic about exploring technological possibilities, as they are eager for assistance in achieving more precise diagnostics. They want to be able to compare the patient’s history with the machine’s diagnosis and make decisions based on a better foundation than they currently have,” says Daniel van Dijk Jacobsen.

By analyzing CT scans, artificial intelligence can assess the likelihood of whether a tumor is malignant or benign and assist doctors in determining if surgery is needed.

At present, the researchers are seeking additional funding to support their goal of implementing the model in Danish hospitals within a few years. DIREC has supported the EXPLAIN-ME project with 7.39 million kroner from 2023 to 2025. In addition to kidney cancer diagnostics, the project focuses on ultrasound scans of pregnant women and robotic surgery.

What is Explainable Artificial Intelligence (XAI)?

Explainable artificial intelligence aims to elucidate the rationale behind AI model outputs, thereby enhancing trust in their decisions. Machine learning models are growing in complexity, and they are increasingly relied upon for critical decisions. Explainable artificial intelligence enables users to discern the model’s training data and evaluate the accuracy of its outputs, among other capabilities.
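The idea can be illustrated with a minimal sketch: a toy logistic-regression "classifier" whose prediction is attributed back to its input features via the gradient — the simplest form of the attribution maps XAI methods produce. The weights and feature values below are invented for illustration and have nothing to do with the EXPLAIN-ME models.

```python
import numpy as np

# Toy "scan" classifier: logistic regression over 4 image-derived features.
# Weights are illustrative, not learned from any medical data.
weights = np.array([2.0, -1.0, 0.5, 0.0])
bias = -0.5

def predict(x):
    """Probability that the case is high-risk."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

def saliency(x):
    """Gradient of the output w.r.t. each input feature.

    Large absolute values mark the features that drove the decision --
    a bare-bones version of the attribution maps XAI methods compute
    over image pixels.
    """
    p = predict(x)
    return p * (1.0 - p) * weights  # chain rule through the sigmoid

x = np.array([1.2, 0.4, 0.0, 3.0])
p = predict(x)
attributions = saliency(x)
# The feature with weight 0.0 gets exactly zero attribution: the
# explanation shows it played no role in this prediction.
```

A clinician-facing tool would render such attributions as a heatmap over the scan image rather than as raw numbers, but the principle is the same: the explanation is derived from, and consistent with, the model's own computation.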

 

Categories
AI Health tech News

AI will be “lane assist” for healthcare professionals in ultrasound scans of pregnant women

23 May 2024


After two years of collaboration, a team of researchers from Danish universities has developed an artificial intelligence capable of evaluating the quality of ultrasound scans of pregnant women, drawing insights from experienced physicians. This innovation aims to enhance the quality of scans not only within Denmark but also in developing nations.

Ultrasound scanning during pregnancy is a challenging discipline. Many practitioners have dedicated their careers to capturing precise fetal images using only a small probe and a screen. The pursuit of detecting fetal anomalies is often challenged by factors such as ultrasound beam alignment, layers of fat, and organ positioning, contributing to the difficulty in achieving clear and interpretable images.

Presently, there exists considerable variability in the quality of ultrasound scans of pregnant women, with evidence indicating a correlation between the expertise of clinicians and the detection of growth abnormalities. This underscores the need to standardise scan quality across clinicians and medical facilities. Here, artificial intelligence can serve as a mentor to less experienced practitioners.

Doctors train the algorithm

As part of the EXPLAIN-ME project, a group of researchers has been working since 2021 to create an explainable artificial intelligence (XAI) designed to guide healthcare professionals in performing high-quality scans without deep expertise. A significant milestone in the project has been the development of an algorithm that, based on criteria set by experienced doctors, matches the level of experienced clinicians in selecting quality scan images.

“Ultrasound scanning requires substantial expertise and specialized skills. Obtaining high-quality images is challenging, leading to great variations in scan quality across different hospitals. We hope that our project can level out these quality differences,” says Aasa Feragen, project leader of the EXPLAIN-ME project and professor at DTU Compute.

Close collaboration between theory and practice

With an effective AI model in place and eighteen months remaining until the project’s completion, the focus is now on determining the best way to convey the model’s guidance to healthcare professionals, an aspect often overlooked in the research world.

“We work very closely with doctors and sonographers. It’s crucial for us, as technical researchers, to understand what is needed for our models to make a real impact in society,” says Aasa Feragen.

The PhD student Jakob Ambsdorf has gained invaluable insights into healthcare professionals’ needs through his engagement with the Copenhagen Academy for Medical Education and Simulation (CAMES) at Rigshospitalet.

“I’ve spent a lot of time in the clinic at Rigshospitalet to identify the challenges faced by staff. We’ve learned that sonographers don’t necessarily need help with diagnosis but rather with enhancing image quality. Thus, instead of trying to imitate human decisions, we aim to refine the surrounding factors. For instance, we recommend slight adjustments to the probe’s positioning or settings to enhance image clarity. It’s like a lane-assist for sonographers and doctors,” he says.

Potential for global expansion

With the project set to conclude in 2025, the primary objective is to expand upon the XAI model to equip less experienced healthcare personnel worldwide with the tools for conducting advanced scans. The XAI model, developed by the University of Copenhagen, has already undergone trials using data from Tanzania and Sierra Leone.

“In the long term, the model can be used in areas with limited access to high-quality equipment and specialised personnel,” concludes Jakob Ambsdorf.

DIREC has provided support to the EXPLAIN-ME project with a grant of DKK 7.39 million. Beyond ultrasound scans, the project also addresses the diagnosis of kidney tumors and robotic surgery.

What is explainable artificial intelligence (XAI)?

Explainable artificial intelligence aims to explain the rationale behind AI model outputs, fostering trust in their decisions. As machine learning models grow in complexity and are increasingly employed for critical decisions, XAI enables users to understand the data on which they were trained and assess the accuracy of the output.

Categories
DIREC opinion Health tech News

Can AI help a health system under pressure?

27 OCTOBER 2022


In Denmark we have a shortage of medical specialists and nurses. According to the Danish Nurses’ Organization, there will be a shortage of at least 6,000 nurses in 2025, and the lack of medical specialists is also a huge problem for a hospital sector under historically high pressure.

Categories
AI Health tech News

How do we become better at using artificial intelligence in healthcare?

17 OCTOBER 2022


There is an increasing demand in Denmark for new and more advanced healthcare services. In the coming years, there will be more elderly people with treatment needs and a decreasing youth population to take care of the elderly. The challenges call for us to think differently, so that we can jointly develop a well-functioning healthcare system that can provide the best treatment methods.

The use of artificial intelligence is an important part of the solution when resources need to be optimized and we need to think differently. But is our healthcare system ready to implement the new solutions, and what challenges will arise in the meeting between digital research and everyday life in a busy hospital?

“Artificial intelligence and machine learning can improve the ways we prevent and diagnose diseases, optimize treatments, increase quality and reduce errors. A huge number of technological innovations are emerging right now, many of which are promising research-based AI solutions, and yet it is a challenge to get them tested and implemented in the healthcare sector,” says Thomas Riisgaard Hansen, director of Digital Research Centre Denmark (DIREC).

What is holding the development back and what are the actual challenges? Is it that technology is getting closer, but still too limited and full of errors to create actual value in the healthcare sector? Is it that data and legislation complicate the development of algorithms? Is it that the healthcare system has problems incorporating new technology and changing work processes? Is it a lack of resources and money? Or does the problem lie elsewhere? This hot topic was discussed in the session ‘How to navigate the challenges of implementing groundbreaking AI in the healthcare sector’ at this year’s Digital Tech Summit. 

“It is a major task to use the technological opportunities in the healthcare system and it also requires us not to be deceived by dazzling promises about what the technology can do but, instead, we must work purposefully to exploit the actual opportunities and to remove or reduce the barriers that interfere,” says Thomas Riisgaard Hansen, who has worked with health innovation for 20 years and moderated the panel discussion. 

He was accompanied by technology companies, researchers, innovators, and health professionals, who gave their own take on how we can jointly support the development and implementation of new solutions that will benefit patients and staff.

The session presented three concrete cases about implementation of AI in the Danish healthcare system:  

Getting Access to Health Data and Ways to Leverage it in the Health Sector
Henrik Løvig, Enversion & Gitte Kjeldsen, Danish Life Science Cluster

Getting AI innovations implemented internationally
Mads Jarner Brevadt, Co-founder & CEO, Radiobotics & Janus Uhd Nybing, Lead Research Radiographer, Bispebjerg and Frederiksberg Hospital, and Co-founder, Radiologisk AI Testcenter RAIT

Getting Research Implemented in the Daily Practices in a Hospital Setting
Mads Nielsen, Professor, KU, and Ilse Vejborg, Head of Department, Rigshospitalet

Each case is based on experiences with the implementation of artificial intelligence in the healthcare system and highlighted the challenges and best practices that have been identified from the perspective of the technology developers and not least of the healthcare professionals.

The session was organized by DIREC, Pioneer Centre for AI, CBS, DTU, and Danish Life Science Cluster. 


Categories
AI Health tech News

Explainable AI to increase hospitals’ use of AI

26 November 2021


In a new DIREC project, AI researchers are collaborating with hospitals to create more useful AI and AI algorithms that are easier to understand.

AI (artificial intelligence) is gradually gaining ground in assistive medical technologies such as image-based diagnosis, where artificial intelligence analyzes CT scans with superhuman precision. Yet AI is rarely designed as a collaborator for healthcare professionals.

In the new human-AI project EXPLAIN-ME, supported by the national research centre DIREC, AI researchers and medical staff will develop explainable artificial intelligence (XAI) that can give clinicians feedback when they train in hospitals’ training clinics.

“In the Western world, about one in ten diagnoses is judged to be incorrect, so patients do not get the right treatment. One explanation may be a lack of experience and training. Our XAI model will help the medical staff make decisions and act a bit like a mentor who gives advice and feedback when they train,” explains Professor at DTU Compute and Project Manager Aasa Feragen.

In the project, DTU, the University of Copenhagen, Aalborg University, and Roskilde University collaborate with doctors at the training and simulation center CAMES at Rigshospitalet, NordSim at Aalborg University Hospital, and oncologists at the Department of Urology at Zealand University Hospital in Roskilde.

Ultrasound scan of pregnant women


At CAMES, DTU and the University of Copenhagen will develop an XAI model that looks over the shoulder of doctors and midwives when they ultrasound scan ‘pregnant’ training dolls at the training clinic.

In ultrasound scanning, clinicians work from specific ‘standard planes’, which show different parts of the fetus’s anatomy to make it easier to spot and react to complications. These rules are implemented in the XAI model, which is integrated into a simulator that gives the doctor ongoing feedback.

“It would be great if XAI could help less trained doctors to do scans that are on a par with the highly trained doctors.”
Professor and Project Manager Aasa Feragen

The researchers train the artificial intelligence on real data from Rigshospitalet’s ultrasound scans from 2009 to 2018, primarily images from the nuchal scan and malformation scans offered to all Danish pregnant women approximately 12 and 20 weeks into the pregnancy. Before the XAI models are ready for use at the training clinic, the researchers first have to check whether they also work in the simulator, since the models are trained on real data while the training doll produces artificial data.

According to doctors, the quality of ultrasound scans and the ability to make accurate diagnoses depend on how much training the doctors have received.

“If our model can tell the doctor during the scan that a foot is missing in the picture, the doctor may be able to learn faster. If we get the XAI model to tell us that the probe on the ultrasound device needs to be moved a bit to get everything in the picture, then maybe it can be used in medical practice as well. It would be great if XAI could help less trained doctors to do scans that are on a par with the highly trained doctors,” says Aasa Feragen.

Martin Grønnebæk Tolsgaard, research associate professor and head of CAMES’ research team for artificial intelligence, emphasizes that many doctors are interested in getting help from AI technology to find the best treatment for patients. Here, explainable AI is the way to go.

“Many of the AI models that exist today do not provide much insight into why they reach a particular decision. It is important for us to understand this better. If the model does not explain why it reaches a given decision, clinicians do not trust the decision. So if we want to use AI to make clinicians better, we need good explanations, like explainable AI.”

Ongoing feedback on robotic surgery


Robotic surgery allows surgeons to perform their work with more precision and control than traditional surgical tools. It reduces errors and increases efficiency, and the expectation is that AI will be able to improve the results further.

In Aalborg, the researchers will develop an XAI model that supports doctors at the training centre NordSim, where both Danish and foreign doctors can practise surgical procedures in simulators, for example on pig hearts. The model must provide ongoing feedback to clinicians while they train an operation, without interfering, says Mikael B. Skov, professor at the Department of Computer Science at Aalborg University:

“Today, you typically only find out that you should have done something differently once you have finished training an operation. We would like to look at how this feedback can be given more continuously, so trainees better understand whether they have done something right or wrong. The feedback should be designed so that people learn faster and make fewer mistakes before they have to go out and do real operations. We therefore need to look at how to develop different types of feedback, such as warnings that do not interrupt too much.”
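The balance described here — continuous feedback that never interrupts too much — can be sketched as a simple rate-limited policy. The class, thresholds, and messages below are hypothetical illustrations, not the NordSim system: minor events are logged silently for the post-session debrief, and only severe events trigger an immediate warning, at most once per cooldown interval.

```python
import time

class TrainingFeedback:
    """Sketch of continuous, non-interrupting simulator feedback.

    Events below `severity_threshold` are recorded silently for the
    post-session debrief; severe events trigger an immediate warning,
    but never more often than once per `min_interval` seconds.
    """

    def __init__(self, severity_threshold=0.8, min_interval=10.0):
        self.severity_threshold = severity_threshold
        self.min_interval = min_interval
        self.last_warning = float("-inf")
        self.debrief_log = []

    def observe(self, event, severity, now=None):
        """Record an event; return a warning string or None."""
        now = time.monotonic() if now is None else now
        self.debrief_log.append((event, severity))  # always kept for debrief
        severe = severity >= self.severity_threshold
        cooled_down = now - self.last_warning >= self.min_interval
        if severe and cooled_down:
            self.last_warning = now
            return f"Warning: {event}"  # shown to the trainee immediately
        return None  # deferred to the debrief instead of interrupting

fb = TrainingFeedback()
fb.observe("minor probe deviation", severity=0.3, now=0.0)  # logged silently
warning = fb.observe("instrument collision", severity=0.9, now=1.0)  # warns now
```

The design choice mirrors the quote: everything is available after the session, but in-the-moment interruptions are reserved for the few events worth breaking concentration for.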

Image analysis in kidney cancer


Doctors often have to make decisions under time pressure, for example in connection with cancer diagnoses, to prevent the cancer from spreading. A false-positive diagnosis could therefore cause a healthy kidney to be removed and inflict other complications. Although experience shows that AI methods are more accurate in their assessments, clinicians need a good explanation of why the mathematical models classify a tumor as benign or malignant.

In the DIREC project, researchers from Roskilde University will develop methods in which artificial intelligence analyzes medical images for use in diagnosing kidney cancer. Clinicians will help them understand what feedback is needed from the AI models to balance what is technically possible and what is clinically necessary.

“It is important that the technology can be included in the hospitals’ practice, and therefore we focus in particular on designing these methods within ‘Explainable AI’ in direct collaboration with the doctors who actually use it in their decision-making. Here we draw in particular on our expertise in Participatory Design, which is a systematic approach to achieve the best synergy between what the AI researchers come up with in terms of technological innovations and what doctors need,” says Henning Christiansen, professor in computer science at the Department of People and Technology at Roskilde University.

About DIREC – Digital Research Centre Denmark

The purpose of the national research centre DIREC is to bring Denmark to the forefront of the latest digital technologies through world-class digital research. To meet the great demand for highly educated IT specialists, DIREC also works to expand capacity within both research and the education of computer scientists. The centre has a total budget of DKK 275 million and is supported by Innovation Fund Denmark with DKK 100 million. The partnership consists of a unique collaboration across the computer science departments at Denmark’s eight universities and the Alexandra Institute.

The activities in DIREC are based on societal needs, where research is continuously translated into value-creating solutions in collaboration with the business community and the public sector. The projects operate across industries with focus on artificial intelligence, Internet of Things, algorithms and cybersecurity among others.

Read more at direc.dk

EXPLAIN-ME

Partners in the project EXPLAIN-ME: Learning to Collaborate via Explainable AI in Medical Education

  • DTU (DTU Compute – Department of Applied Mathematics and Computer Science)
  • University of Copenhagen
  • Aalborg University
  • Roskilde University
  • CAMES – Copenhagen Academy for Medical Education and Simulation at Rigshospitalet in Copenhagen
  • NordSim – Center for skills training and simulation at Aalborg University Hospital
  • Department of Urology at Zealand University Hospital in Roskilde

Project period: 1 October 2021 to 30 April 2025

Contact: 
Aasa Feragen
DTU Compute
M: +45 26 22 04 98
afhar@dtu.dk

Anders Nymark Christensen
DTU Compute
+45 45 25 52 58
anym@dtu.dk

Categories
DIREC opinion Health tech News

Redefining healthcare – a conversation with Managing Director of DIREC, Thomas Riisgaard Hansen


CEO and President of Falck, Jakob Riis, shares his conversation with Managing Director of DIREC (Digital Research Centre Denmark), Thomas Riisgaard Hansen. Thomas considers digitalisation an imperative for health, as a means not only of optimisation but also of creating better outcomes.