23 May 2024
AI will be “lane assist” for healthcare professionals in ultrasound scans of pregnant women
After two years of collaboration, a team of researchers from Danish universities has developed an artificial intelligence capable of evaluating the quality of ultrasound scans of pregnant women, drawing insights from experienced physicians. This innovation aims to enhance the quality of scans not only within Denmark but also in developing nations.
Ultrasound scanning during pregnancy is a challenging discipline. Many practitioners have dedicated their careers to capturing precise fetal images using only a small probe and a screen. Detecting fetal anomalies is complicated by factors such as ultrasound beam alignment, layers of fat, and organ positioning, all of which make it difficult to obtain clear, interpretable images.
The quality of ultrasound scans of pregnant women currently varies considerably, and evidence indicates a correlation between clinicians' expertise and the detection of growth abnormalities. This underscores the need to standardise scan quality across clinicians and medical facilities. Here, artificial intelligence can serve as a mentor to less experienced practitioners.
Doctors train the algorithm
As part of the EXPLAIN-ME project, a group of researchers has been working since 2021 to create an explainable artificial intelligence (XAI) designed to guide healthcare professionals without deep ultrasound expertise in performing high-quality scans. A significant milestone in the project has been the development of an algorithm that, based on criteria set by experienced doctors, matches experienced clinicians in selecting high-quality scan images.
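To make the idea concrete, here is a minimal sketch of how such a quality-selection model could look, assuming ultrasound frames labelled with doctor-assigned quality scores. The class, layer sizes, and training setup below are hypothetical illustrations, not the project's actual architecture.

```python
# Hypothetical sketch of a scan-quality scorer (not the EXPLAIN-ME model).
# Assumes doctors have labelled frames with quality scores in [0, 1]
# reflecting criteria such as correct plane and landmark visibility.
import torch
import torch.nn as nn

class QualityScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):  # x: (batch, 1, H, W) grayscale ultrasound frames
        return self.head(self.features(x))  # quality score in [0, 1] per frame

model = QualityScorer()
frames = torch.randn(4, 1, 224, 224)   # stand-in for real ultrasound frames
scores = model(frames)                  # trained against doctor-assigned labels
best_frame = frames[scores.argmax()]    # select the highest-quality image
```

Framing image selection as a regression against doctor-assigned scores, rather than a diagnostic classification, matches the article's point: the system learns the doctors' quality criteria, not their diagnoses.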
“Ultrasound scanning requires substantial expertise and specialized skills. Obtaining high-quality images is challenging, leading to great variations in scan quality across different hospitals. We hope that our project can level out these quality differences,” says Aasa Feragen, project leader of the EXPLAIN-ME project and professor at DTU Compute.
Close collaboration between theory and practice
With an effective AI model in place and eighteen months remaining until the project's completion, the focus is now on determining how best to convey the model's guidance to healthcare professionals, an aspect often overlooked in the research world.
“We work very closely with doctors and sonographers. It’s crucial for us, as technical researchers, to understand what is needed for our models to make a real impact in society,” says Aasa Feragen.
PhD student Jakob Ambsdorf has gained invaluable insights into healthcare professionals' needs through his work with the Copenhagen Academy for Medical Education and Simulation (CAMES) at Rigshospitalet.
“I’ve spent a lot of time in the clinic at Rigshospitalet to identify the challenges faced by staff. We’ve learned that sonographers don’t necessarily need help with diagnosis but rather with enhancing image quality. Thus, instead of trying to imitate human decisions, we aim to refine the surrounding factors. For instance, we recommend slight adjustments to the probe’s positioning or settings to enhance image clarity. It’s like a lane-assist for sonographers and doctors,” he says.
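A hypothetical sketch of this "lane assist" idea: rather than outputting a diagnosis, a model predicts a small probe correction expected to improve the image. All names, outputs, and scaling below are invented for illustration and are not taken from the project.

```python
# Hypothetical "lane assist" guidance loop (illustrative only).
# Instead of diagnosing, the model suggests small probe adjustments
# expected to improve image quality, like lane-assist steering corrections.
import torch
import torch.nn as nn

class ProbeGuidance(nn.Module):
    """Predicts a small probe correction (tilt_x, tilt_y, rotation) from a frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, 3), nn.Tanh(),  # bounded corrections
        )

    def forward(self, frame):
        return self.net(frame)  # correction values scaled to [-1, 1]

guide = ProbeGuidance()
frame = torch.randn(1, 1, 224, 224)          # current ultrasound frame
tilt_x, tilt_y, rot = guide(frame).squeeze().tolist()
print(f"Suggest: tilt {tilt_x:+.2f}, {tilt_y:+.2f}; rotate {rot:+.2f}")
```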
Potential for global expansion
With the project set to conclude in 2025, the primary objective is to build on the XAI model so that less experienced healthcare personnel worldwide gain the tools to conduct advanced scans. The XAI model, developed at the University of Copenhagen, has already been trialled on data from Tanzania and Sierra Leone.
“In the long term, the model can be used in areas with limited access to high-quality equipment and specialised personnel,” concludes Jakob Ambsdorf.
DIREC has supported the EXPLAIN-ME project with a grant of DKK 7.39 million. Beyond ultrasound scans, the project also addresses the diagnosis of kidney tumours and robotic surgery.
What is explainable artificial intelligence (XAI)?
Explainable artificial intelligence aims to explain the rationale behind an AI model's outputs, fostering trust in its decisions. As machine learning models grow in complexity and are increasingly employed for critical decisions, XAI enables users to understand the data a model was trained on and to assess how far its output can be trusted.
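One widely used XAI technique is a gradient-based saliency map, which highlights the input pixels that most influence a model's output. A minimal sketch follows, using a tiny stand-in network; in practice the map would be computed for a trained model such as the quality scorer sketched above.

```python
# Minimal gradient-based saliency map, a common XAI technique (illustrative).
import torch
import torch.nn as nn

# Tiny stand-in model; in practice this would be the trained quality scorer.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

frame = torch.randn(1, 1, 224, 224, requires_grad=True)  # input ultrasound frame
model(frame).sum().backward()          # gradient of the output w.r.t. each pixel
saliency = frame.grad.abs().squeeze()  # large values mark influential regions
# Overlaying this map on the scan shows which regions drove the model's rating.
```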