Categories
AI News

Danish researcher shapes the future of machine learning at Harvard

23 September 2024

In recent years, the Danish researcher Emil Njor has emerged as a pioneering figure in the field of TinyML. At Harvard University, he has contributed to the development of a new generation of datasets for local machine learning models, capable of processing data in an environmentally sustainable way and without the need for an internet connection.

Imagine a future where artificial intelligence is no longer confined to powerful data centers or advanced computers but is embedded in everything from coffee machines to industrial sensors. This scenario could become a reality with the rise of TinyML, which enables advanced AI models to run on small devices without internet connectivity.

One of the leading figures in Danish TinyML research is PhD student Emil Njor from DTU. As part of the DIREC project “Edge-based AI Systems for Predictive Maintenance,” he is working on developing machine learning models that are so compressed and efficient that they can operate on very small computers, making them far more robust than systems dependent on both an internet connection and cloud services.

“When I first started at DTU, hardly anyone knew about TinyML. Today, there are three or four PhD students working in the field, and interest has exploded because it allows us to make devices intelligent without needing the internet or large amounts of energy,” says Emil Njor.

AI without the cloud

Emil Njor’s passion for TinyML is driven largely by its environmental advantages. Efficient local computers can help reduce resource consumption, making devices less reliant on continuous data communication with cloud servers.

“I’m trying to take a different approach from the large machine learning models that consume a lot of resources. Instead, we are trying to trim and optimize models so they use fewer resources and can run more efficiently on small devices,” he says.
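One common compression technique in TinyML is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting memory use by a factor of four. A minimal NumPy sketch of symmetric int8 quantization (illustrative only, not Njor’s specific method):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus one scale factor (symmetric quantization)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage needs 4x less memory than float32, at a small accuracy cost
```

The rounding error per weight is at most half the scale factor, which is the trade-off that lets such models fit on microcontroller-class hardware.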

As an example, he points to weather stations that use microphones to measure rain or detect wind speed, rather than relying on traditional moving parts, which are prone to breaking.

“This is a highly sought-after solution, especially in developing countries where accurate weather data is scarce, and traditional weather stations are costly to maintain.”

Advancing technology at Harvard University

During his PhD, Emil Njor spent time at Harvard University, where he collaborated with American researchers to refine datasets and publish new studies to advance TinyML research.

“The datasets we’ve been using for years are often small, error-prone, and not reflective of real-world conditions. At Harvard University, we created a dataset that is 100 times larger and has significantly fewer errors than previous ones,” Njor explains.

Looking ahead, Njor sees vast potential for TinyML, particularly in applications where fast and reliable responses are needed without an internet connection. Self-driving cars are another excellent example.

“Cars need efficient sensors that can function without the internet, and TinyML can provide the processing power needed for quick reactions — for example, if a pedestrian suddenly crosses the road. This technology can make a real difference in practice, and that’s what motivates me,” concludes Emil Njor.

Interested in learning more about Emil’s work with TinyML? Explore the DIREC project Edge-based AI Systems for Predictive Maintenance.

Emil Njor, PhD student, DTU Compute

Categories
AI Health tech News

AI against kidney cancer

2 July 2024

AI against kidney cancer: Reducing over-treatment and saving millions for society  

Every year, kidney tumor patients endure significant suffering due to unnecessary biopsies and surgeries. The current diagnostic methods leave much to be desired. Therefore, a research team from the University of Copenhagen, Roskilde University, and the Urology Department at Zealand University Hospital is developing an explainable artificial intelligence (XAI) to assist nephrologists and patients with accurate diagnoses.

Nessn Azawi with a CT scanner, which is used for scans of kidney cancer.

Kidney cancer is one of the most over-treated cancers in Denmark. The available scan images are often unreliable, with one in five CT scans yielding false positives. This means that up to 27 percent of kidney tumor patients undergo painful biopsies and surgeries without having cancer.

To address this, a newly developed AI model is currently being tested at Zealand University Hospital. It has surpassed experienced doctors in diagnosing kidney cancer from scan images. The problem, however, is that doctors cannot explain the model’s conclusions, which hinders its widespread adoption.

In the research and innovation project EXPLAIN-ME, funded by the Digital Research Centre Denmark, a team of researchers from the University of Copenhagen, Roskilde University, and the Urology Department at Zealand University Hospital is working to interpret the model’s conclusions.

“Although it is tempting, we cannot simply leave such significant decisions to AI. We need to fully understand its neural patterns from the outset before we can implement it in practice,” says Nessn Azawi, Chief Physician at Zealand University Hospital’s Urology Department and Associate Professor at the University of Copenhagen.

Significant savings for society

As part of the EXPLAIN-ME project, Nessn Azawi and his research team have been working since 2022 to develop explainable artificial intelligence (XAI) that can guide nephrologists on when surgery is necessary, and crucially, explain why.

The 1,000 Danish patients diagnosed with kidney cancer each year rarely show symptoms until the cancer is advanced. The significant diagnostic uncertainty leads to many patients being over-treated. According to Nessn Azawi, AI-based diagnosis could reduce the treatment process by 2-4 weeks and save the healthcare system approximately 15-25 million kroner annually. These positive outcomes would be maximized if the technology is adopted throughout the Nordic region.

“We over-treat around 30,000 kidney cancer patients in the Scandinavian countries. Improving diagnosis would have significant positive effects for both society and the patients,” says Nessn Azawi.

A multidisciplinary effort

Researchers have already tested the AI model at Roskilde University with promising results. The next milestone is to develop a model with a more detailed dataset that can provide nephrologists with accurate kidney cancer diagnoses supported by solid evidence. This has been the focus of PhD student Daniel van Dijk Jacobsen from Roskilde University’s Department of People and Technology for the past two years.
“The challenge is that we don’t know what the model is analyzing when it makes the diagnosis. It’s about identifying the patterns the model detects at the pixel level and then conveying that information to the doctors,” he says.

Thus, it has been essential to work across disciplines, incorporating ethnographic observation studies during patient interactions, participatory design, and ongoing discussions with the medical staff at Zealand University Hospital.

“I find that doctors are enthusiastic about exploring technological possibilities, as they are eager for assistance in achieving more precise diagnostics. They want to be able to compare the patient’s history with the machine’s diagnosis and make decisions based on a better foundation than they currently have,” says Daniel van Dijk Jacobsen.

By analyzing CT scans, artificial intelligence can assess the likelihood of whether a tumor is malignant or benign and assist doctors in determining if surgery is needed.

At present, the researchers are seeking additional funding to support their goal of implementing the model in Danish hospitals within a few years. DIREC has supported the EXPLAIN-ME project with 7.39 million kroner from 2023 to 2025. In addition to kidney cancer diagnostics, the project focuses on ultrasound scans of pregnant women and robotic surgery.

What is Explainable Artificial Intelligence (XAI)?

Explainable artificial intelligence aims to elucidate the rationale behind AI model outputs, thereby enhancing trust in their decisions. Machine learning models are growing in complexity, and they are increasingly relied upon for critical decisions. Explainable artificial intelligence enables users to discern the model’s training data and evaluate the accuracy of its outputs, among other capabilities.
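One widely used model-agnostic explanation technique is occlusion sensitivity: mask parts of the input image and measure how much the model’s score drops, revealing which regions drive the decision. A minimal sketch with a toy stand-in for the model (a real application would pass the trained model’s scoring function instead):

```python
import numpy as np

def occlusion_importance(image, score_fn, patch=4):
    """Score drop when each patch is zeroed out: a larger drop means the
    region mattered more to the model's decision."""
    base = score_fn(image)
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heatmap[i // patch, j // patch] = base - score_fn(occluded)
    return heatmap

# Toy "model": the score is the mean brightness of the top-left corner,
# so the explanation should highlight exactly that region.
score = lambda img: img[:4, :4].mean()
heatmap = occlusion_importance(np.ones((8, 8)), score)
```

Heatmaps like this, overlaid on a CT scan, are one way to show a clinician *where* the model is looking, which is the kind of pixel-level pattern the EXPLAIN-ME researchers describe.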

 

Categories
AI Health tech News

AI will be “lane assist” for healthcare professionals in ultrasound scans of pregnant women

23 May 2024

AI will be “lane assist” for healthcare professionals in ultrasound scans of pregnant women

After two years of collaboration, a team of researchers from Danish universities has developed an artificial intelligence capable of evaluating the quality of ultrasound scans of pregnant women, drawing insights from experienced physicians. This innovation aims to enhance the quality of scans not only within Denmark but also in developing nations.

Ultrasound scanning during pregnancy is a challenging discipline. Many practitioners have dedicated their careers to capturing precise fetal images using only a small probe and a screen. The pursuit of detecting fetal anomalies is often challenged by factors such as ultrasound beam alignment, layers of fat, and organ positioning, contributing to the difficulty in achieving clear and interpretable images.

Presently, there exists considerable variability in the quality of ultrasound scans of pregnant women, with evidence indicating a correlation between the expertise of clinicians and the detection of growth abnormalities. This underscores the need to standardise scan quality across clinicians and medical facilities. Here, artificial intelligence can serve as a mentor to less experienced practitioners.

Doctors train the algorithm

As part of the EXPLAIN-ME project, a group of researchers has been working since 2021 to create an explainable artificial intelligence (XAI) designed to guide healthcare professionals in performing high-quality scans without deep expertise. A significant milestone in the project has been the development of an algorithm that, based on criteria set by experienced doctors, matches the level of experienced clinicians in selecting quality scan images.

“Ultrasound scanning requires substantial expertise and specialized skills. Obtaining high-quality images is challenging, leading to great variations in scan quality across different hospitals. We hope that our project can level out these quality differences,” says Aasa Feragen, project leader of the EXPLAIN-ME project and professor at DTU Compute.

Close collaboration between theory and practice

With an effective AI model in place and eighteen months remaining until the project’s completion, the focus is now to determine the best way of conveying the model’s guidance to healthcare professionals—an aspect often overlooked in the research world.

“We work very closely with doctors and sonographers. It’s crucial for us, as technical researchers, to understand what is needed for our models to make a real impact in society,” says Aasa Feragen.

The PhD student Jakob Ambsdorf has gained invaluable insights into healthcare professionals’ needs through his engagement with the Copenhagen Academy for Medical Education and Simulation (CAMES) at Rigshospitalet.

“I’ve spent a lot of time in the clinic at Rigshospitalet to identify the challenges faced by staff. We’ve learned that sonographers don’t necessarily need help with diagnosis but rather with enhancing image quality. Thus, instead of trying to imitate human decisions, we aim to refine the surrounding factors. For instance, we recommend slight adjustments to the probe’s positioning or settings to enhance image clarity. It’s like a lane-assist for sonographers and doctors,” he says.

Potential for global expansion

With the project set to conclude in 2025, the primary objective is to expand upon the XAI model to equip less experienced healthcare personnel worldwide with the tools for conducting advanced scans. The XAI model, developed by the University of Copenhagen, has already undergone trials using data from Tanzania and Sierra Leone.

“In the long term, the model can be used in areas with limited access to high-quality equipment and specialised personnel,” concludes Jakob Ambsdorf.

DIREC has provided support to the EXPLAIN-ME project with a grant of DKK 7.39 million. Beyond ultrasound scans, the project also addresses the diagnosis of kidney tumors and robotic surgery.

What is explainable artificial intelligence (XAI)?

Explainable artificial intelligence aims to explain the rationale behind AI model outputs, fostering trust in their decisions. As machine learning models grow in complexity and are increasingly employed for critical decisions, XAI enables users to understand the data on which they were trained and assess the accuracy of the output.

Categories
AI News

The award goes to…

13 December 2023

PhD Student Axel Christfort and Supervisor Associate Professor Tijs Slaats from the University of Copenhagen won the Process Discovery Contest at the 5th International Conference on Process Mining with their DisCoveR miner.

In a remarkable achievement, PhD student Axel Christfort and his supervisor, Associate Professor Tijs Slaats, won the Process Discovery Contest at the 5th International Conference on Process Mining.

Their cutting-edge DisCoveR miner produced the most accurate models and stood as the sole algorithm to successfully complete discovery and classification tasks within the stipulated time.

Process discovery algorithms play a crucial role in analyzing event logs, generating human-readable models that elucidate the behavior captured in the log – for example, how individuals sequence activities in their work processes. The ICPM conference, which organizes the Process Discovery Contest, evaluates submissions based on accuracy, requiring participants to mine models for a diverse range of logs and correctly classify corresponding ground-truth traces.
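The raw material of process discovery can be sketched simply: from an event log of traces, count which activity directly follows which. (DisCoveR itself mines richer declarative DCR models; this directly-follows count is only the simplest illustration of turning a log into a model.)

```python
from collections import Counter

def directly_follows(log):
    """Count activity pairs (a, b) where b immediately follows a in some trace."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Toy event log: each trace is the ordered activities of one process instance
log = [["register", "check", "approve"],
       ["register", "check", "reject"],
       ["register", "approve"]]
dfg = directly_follows(log)
# ("register", "check") occurs twice; a discovery algorithm would turn such
# counts and constraints into a human-readable process model
```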

This is the third prize in the Process Discovery Contest for the Process Modelling and Intelligence group from the Department of Computer Science, University of Copenhagen. In 2021, they secured awards for the best overall miner and the best imperative miner, both with the DisCoveR miner.

DisCoveR originated from a M.Sc. thesis by Viktorija Sali and Andrew Tristan Parli, supervised by Professor Slaats. The algorithm has undergone further refinement by Industrial PhD Student Christoffer Olling Back from ServiceNow, with ongoing enhancements by Axel Christfort. Funding from Independent Research Fund Denmark, DIREC – Digital Research Centre Denmark, and Innovation Fund Denmark has been instrumental in supporting this groundbreaking work.

Axel Christfort and Tijs Slaats, winners of the Process Discovery Contest

The industrial application of DisCoveR has been demonstrated through its implementation by DCR Solutions. The algorithm’s efficacy and utility have been validated in real-world scenarios, emphasizing its practical significance. Ongoing contributions from PhD Vlad Paul Cosma and Professor Thomas Hildebrandt have further extended and improved the miner, adding to its robustness.

Looking ahead, the Process Modelling and Intelligence group is eager to build upon these achievements to secure additional funding and foster novel collaborations. The team is already gearing up for the next iteration of ICPM, aiming to continue their winning streak and further advance the field of process discovery.

FACTS

Associate Professor Tijs Slaats is the project manager of the DIREC project ‘AI and Blockchain for complex business processes’.

Together with industry, the project aims to develop methods and tools that enable the industry to develop new efficient solutions for exploiting the huge amount of business data generated by enterprise and blockchain systems, with a specific focus on tools and responsible methods for the use of process insights for business intelligence and transformation.  

Categories
AI Future of work News

Swarms of robots are being deployed on the fields – What does it take to expand the use of them?

21 November 2023

Danish farmers are ready to embrace new technologies to support the green transition and ensure smarter production. Multi-robot systems are a crucial part of the solution, but barriers need to be dismantled and teething problems eliminated for seamless interaction between farmers and robots.

Self-driving robots are replacing diesel-powered giant machines, and multi-robot systems enable several robots to collaborate in the fields. Precision spraying of fertilizers and pesticides reduces the use of spray chemicals.

There are both environmental and efficiency gains in entrusting fieldwork to robots, and technology plays a vital role in the agricultural green transition.

“One of the major problems in agriculture is that farm machinery is getting larger and larger. However, when large machines traverse the ground, they compact the soil, requiring a significant amount of energy to repair the damage they cause. If instead, we deploy smaller, autonomous robots, we can increase efficiency without causing damage to the environment.”

– Anders Lyhne Christensen, Professor, University of Southern Denmark, UAS Center

The development imposes new requirements on both technology and users. In the HERD project, funded by DIREC – Digital Research Centre Denmark, Aalborg University’s expertise in designing user interfaces, University of Southern Denmark’s (SDU) knowledge of robotics, and Copenhagen Business School’s (CBS) insights into market creation and business models are combined with use cases from various companies developing robot systems.

AGROINTELLI, a Danish scaleup, is one such company working to break down barriers preventing farmers from adopting new technologies. Alea Scovill, R&D Manager at AGROINTELLI, emphasizes the importance of addressing factors like price, robustness, and user-friendliness to facilitate wider robot adoption.

“At AGROINTELLI, efforts are being directed towards breaking down some of the barriers currently preventing several farmers from adopting new technologies – a challenge encountered by the majority of field robot companies in the EU,” says R&D Manager Alea Scovill from AGROINTELLI.

“If farmers cannot see how the robot fits into the farm and can be used without significant instruction, sales are lost. Price, robustness, and user-friendliness are other parameters influencing the adoption and serving as barriers for more farmers to embrace the robots,” explains Alea Scovill, who is in close dialogue with the involved researchers from CBS.

The role of CBS researchers in the project is precisely to explore the market challenges and commercial opportunities in the technology, and what it takes to mature the market. PhD student Alexandra Hettich, for instance, has interviewed various stakeholders such as sales personnel and dealers, and will soon interview the farmers. 

“Agriculture is particularly interesting as a domain. With the introduction of robots, the farmer’s work is significantly altered, and the obstacles to a successful implementation of this groundbreaking technology vary in nature. Therefore, we need to analyze the diversity of obstacles before developing concrete solutions to overcome them,” says Alexandra Hettich, PhD student at CBS.

The collaboration uncovers various aspects of the technology

According to Professor Anders Lyhne Christensen from the University of Southern Denmark, who leads the project, the results are particularly interesting because they cover all aspects of the technology, addressing the technological challenges, the user experience, and the commercial aspects of agricultural robots as a business area.

“At SDU, we work with multi-robot systems and focus on how to make robots do what they need to do and provide the user with the information they need. Aalborg University works on user interfaces, for example how users can keep track of what robots have done, what they are currently doing, and what they will do in the future. In other words, how to give the user the right knobs to turn. Finally, CBS focuses on the business side for companies developing these robots and what business models may be promising for them. How can they access the market, and what happens at the other end with the organizations that need to use multi-robot systems – how do they change?” explains Anders Lyhne Christensen.

The focus is largely on users’ understanding and use of the technology, he elaborates.

“We can certainly create robots and program them to do this and that, but getting them to work in the real world requires that people can control them. What we are working on is therefore the interaction between the AI in the robots, the people who have to control the robots, and the organizations around them.”

“It doesn’t work if the farmer has to keep an eye on the robot while it performs the task – not much is gained then. Instead, it is important to be able to oversee what several robots are doing at once.”

“It may also be that the farmer himself does not have to monitor the robots, but rather a company that monitors robots for 50 farmers at a time. That changes those organizations, because new job functions and responsibilities come with the technology.”

Alea Scovill is pleased with the collaboration with the researchers. It works well, says the R&D manager.

“The flow of information between the partners in the project has been relatively smooth. At AGROINTELLI, we have primarily worked with CBS and Aalborg University because their research areas fit well with our situation. CBS is investigating the market obstacles for ROBOTTI. And at Aalborg University, the researchers have developed a new proposal for a user interface for remote monitoring of multiple robots, and they will soon interview an agricultural school about the experience of the new user interface,” says Alea Scovill.

 

ABOUT THE HERD PROJECT

In the HERD project, researchers, along with industrial partners, aim to develop technologies that enable end-users to engage and control systems consisting of multiple robots. The goal is to enhance the value of industrial products by enabling faster and more cost-effective completion of current tasks and addressing entirely new tasks that require coordinated robot efforts.

Project period: 2021 to 2025 
Budget: DKK 17.08 million 
Partners: University of Southern Denmark, Aalborg University, Copenhagen Business School, AGROINTELLI, ROBOTTO, and the Danish Technological Institute. 

More about the project

Categories
AI Completed project Future of work Green Tech News

Explainable AI will disrupt the grain industry and give farmers confidence

4 July 2023

There is a huge potential for AI in the agricultural sector as a large part of food quality assurance is still handled manually. The aim of a research project is to strengthen understanding of and trust in AI and image analysis, which can improve quality assurance, food quality and optimize production.

One of the major critical barriers to using AI and image analysis in the agriculture and food industry is the trust in its effectiveness.

Today, manual visual inspection of grains remains one of the crucial quality assurance procedures throughout the value chain, ensuring the journey of grains from the field to the table and guaranteeing that farmers receive the right price for their crops.

At the Danish-owned family company FOSS, high-tech analytical instruments are developed for the agriculture and food industry, as well as the chemical and pharmaceutical industries.

Since its founding in 1956 by engineer Nils Foss, development and innovation have been high priorities. As a global producer of niche products, staying ahead of competitors is essential.

Hence, collaboration with researchers from the country’s universities is a crucial part of the company’s digital journey. In a project at the National Research Centre for Digital Technologies (DIREC), the company, along with researchers from Technical University of Denmark and University of Copenhagen, aims to map how AI and image analysis can replace the subjective manual inspection of grains with an automated solution based on image processing. The goal is to develop a method using deep learning neural networks to monitor the quality of seeds and grains using multispectral image data. This method has the potential to provide the grain industry with a disruptive tool to ensure quality and optimize the value of agricultural commodities.

The agricultural and food sector is generally very conservative, and building trust in digital technologies is necessary, explains senior researcher Erik Schou Dreier from FOSS. The development of AI, therefore, cannot stand alone. To encourage farmers to adopt the technology, it is crucial to instill confidence in how it works. In this process, researchers use explainable AI to elucidate how the algorithms function.

Today, grain is assessed manually in many places, and replacing manual work with a machine requires trust. Because the work is performed by humans, the reference method used today is fairly subjective: people do not necessarily perform the work the same way every time and can arrive at different results, so there is some uncertainty about the outcome.

Mapping and explaining algorithms

– The result is more precise when using AI and image analysis in the process. However, for these new technologies to gain widespread acceptance globally, a model is needed to explain how AI works and arrives at a given result, says Erik Schou Dreier.

– Many people have inherent skepticism toward self-driving cars. Self-driving cars need to be even better and safer at driving than us humans before we trust them. Similarly, the AI analysis models we work with must be significantly better than the manual processes they replace for people to trust them. To build that trust, we must first be able to explain how AI analyzes an image and arrives at a given result. That is the goal of the project—to interpret the way AI works, so people can understand how it reads an image.

We typically accept a higher error rate among humans than machines. For us humans to trust the algorithms, they need to be explainable.
Erik Schou Dreier, senior researcher

PhD student Lenka Tetková from Technical University of Denmark is part of the project and spends some days at FOSS’ office. Here, she works with images of grains in two ways: partly to improve image classification, and partly to better understand how the classifications work so they can be enhanced.

– I sometimes use the example of a zebra and a deer to explain how image classification works. Imagine you have a classifier that can recognize zebras and deer. Now, you get a new image of an animal with a body like a deer but legs that resemble those of a zebra. A standard model will not be able to recognize this animal if it hasn’t seen it during training. But if you provide it with additional information (metadata) – in this case, a description of all kinds of animals – it will be able to infer that the image corresponds to an okapi, based on its knowledge of zebras, deer, and the description of an okapi. That is, the model will be able to use information not present in the images to achieve better results, explains Lenka Tetková and continues:

– In this project, we want to use metadata about the grains, such as information about the place of origin, weather conditions, pesticide use, and storage conditions, to improve the classification of grains.

Can you find the okapi in these pictures? PhD student Lenka Tetková from DTU uses this example to explain how image classification works.
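The metadata idea can be sketched as plain feature fusion: encode the metadata numerically and concatenate it with the image features before classification. The field names and encoding below are hypothetical illustrations, not the project’s actual scheme:

```python
import numpy as np

def fuse_features(image_features, metadata):
    """Concatenate image features with encoded metadata into one input vector."""
    # Hypothetical metadata encoding: place of origin as one-hot,
    # storage humidity as a plain float
    origins = ["DK", "DE", "PL"]
    origin_onehot = [1.0 if metadata["origin"] == o else 0.0 for o in origins]
    meta_vec = np.array(origin_onehot + [metadata["storage_humidity"]])
    return np.concatenate([image_features, meta_vec])

x = fuse_features(np.array([0.12, 0.87]),
                  {"origin": "DK", "storage_humidity": 0.4})
# x now carries both what the grain looks like and where/how it was stored,
# giving the classifier information that is not present in the pixels
```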

An important competitive advantage

As a global producer of niche products, FOSS must always stay two steps ahead of competitors.

– To ensure there is a market for us in the future, it is crucial to be the first with new solutions. It is challenging to make a profit if there is already a player doing it better, which is why we constantly introduce new digital technologies to improve our analysis tools. And here, collaboration with researchers from the country’s universities is very valuable to us, as we gain new insights and proposed solutions for the further development of our tools, says Erik Schou Dreier and continues:

– In this project, we hope that collaboration with researchers will lead to the development of AI methods and tools that enable us to create new solutions for automated image-based quality assessment and, secondly, that we can increase trust in our product with explainable AI. It is one of the critical themes for us—to create a product that is trusted.

Facts about FOSS

FOSS’ measuring instruments are used everywhere in the agriculture and food industry to quality assure a wide range of raw materials and finished food products.

Traditionally, light wavelengths are measured, and the measurements are used to obtain chemical information about a product. This can include knowledge about protein and moisture content in grains or fat and protein in milk, etc.

FOSS’ customers are large global companies that use FOSS’ products to quality assure and optimize their production—and to ensure the right pricing, so, for example, the farmer gets the right price for their grain.

Deep Learning and Automation of Imaging-based Quality of Seeds and Grains

Project Period: 2020-2024
Budget: DKK 3.91 million

Project participants:

Lenka Tetková, PhD student, DTU
Lars Kai Hansen, Professor DTU
Kim Steenstrup Pedersen, Professor, KU
Thomas Nikolajsen, Head of Front-end Innovation, FOSS
Toke Lund-Hansen, Head of Spectroscopy Team, FOSS
Erik Schou Dreier, Senior Scientist, FOSS

What is a Deep Learning Neural Network?

A deep learning neural network is a computer system inspired by how our brains function. It consists of artificial neurons, called nodes, organized in layers. Each node takes in information, processes it, and passes it on to the next layer. This helps the network understand data and make predictions. By training the network with examples and adjusting the connections between nodes, it learns to make accurate predictions on new data. Deep learning neural networks are used for tasks such as image recognition, language understanding, and problem-solving.
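The layer-by-layer description above can be sketched as a two-layer forward pass (random illustrative weights, not a trained model):

```python
import numpy as np

def relu(x):
    """Activation function: pass positive values, zero out negatives."""
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    """Each layer weights its inputs, adds a bias, and passes the result on."""
    hidden = relu(x @ w1 + b1)   # first layer of artificial neurons
    return hidden @ w2 + b2      # output layer produces the prediction

rng = np.random.default_rng(0)
x = np.array([0.5, -0.2, 0.1])               # e.g. three input measurements
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
prediction = forward(x, w1, b1, w2, b2)      # a single predicted value
```

Training consists of repeatedly adjusting `w1`, `b1`, `w2`, and `b2` so that predictions on example data become more accurate.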

Categories
AI Health tech News

How do we become better at using artificial intelligence in healthcare?

17 October 2022

There is an increasing demand in Denmark for new and more advanced healthcare services. In the coming years, there will be more elderly people with treatment needs and a decreasing youth population to take care of the elderly. The challenges call for us to think differently, so that we can jointly develop a well-functioning healthcare system that can provide the best treatment methods.

The use of artificial intelligence is an important part of the solution when resources need to be optimized and we need to think differently. But is our healthcare system ready to implement the new solutions, and what challenges will arise in the meeting between digital research and everyday life in a busy hospital?

“Artificial intelligence and machine learning can improve the ways we prevent and diagnose diseases, optimize treatments, increase quality and reduce errors. A huge number of technological innovations are emerging right now, many of which are promising research-based AI solutions, and yet it is a challenge to get them tested and implemented in the healthcare sector,” says Thomas Riisgaard Hansen, director of Digital Research Centre Denmark (DIREC).

What is holding the development back and what are the actual challenges? Is it that technology is getting closer, but still too limited and full of errors to create actual value in the healthcare sector? Is it that data and legislation complicate the development of algorithms? Is it that the healthcare system has problems incorporating new technology and changing work processes? Is it a lack of resources and money? Or does the problem lie elsewhere? This hot topic was discussed in the session ‘How to navigate the challenges of implementing groundbreaking AI in the healthcare sector’ at this year’s Digital Tech Summit. 

“It is a major task to use the technological opportunities in the healthcare system and it also requires us not to be deceived by dazzling promises about what the technology can do but, instead, we must work purposefully to exploit the actual opportunities and to remove or reduce the barriers that interfere,” says Thomas Riisgaard Hansen, who has worked with health innovation for 20 years and moderated the panel discussion. 

He was accompanied by technology companies, researchers, innovators, and health professionals, who gave their own take on how we can jointly support the development and implementation of new solutions that will benefit patients and staff.

The session presented three concrete cases on the implementation of AI in the Danish healthcare system:

Getting Access to Health Data and Ways to Leverage it in the Health Sector
Henrik Løvig, Enversion & Gitte Kjeldsen, Danish Life Science Cluster

Getting AI innovations implemented internationally
Mads Jarner Brevadt, Co-founder & CEO, Radiobotics & Janus Uhd Nybing, Lead Research Radiographer, Bispebjerg and Frederiksberg Hospital, and Co-founder, Radiologisk AI Testcenter RAIT

Getting Research Implemented in the Daily Practices in a Hospital Setting
Mads Nielsen, Professor, KU and Ilse Vejborg, Head of Department, Rigshospitalet

Each case was based on experiences with the implementation of artificial intelligence in the healthcare system and highlighted the challenges and best practices identified from the perspective of the technology developers and, not least, the healthcare professionals.

The session was organized by DIREC, Pioneer Centre for AI, CBS, DTU, and Danish Life Science Cluster. 


Categories
AI Health tech News

Explainable AI to increase hospitals’ use of AI

26 November 2021

Explainable AI to increase hospitals' use of AI

In a new DIREC project, AI researchers are collaborating with hospitals to create more useful AI and AI algorithms that are easier to understand.

AI (artificial intelligence) is gradually gaining ground in assistive medical technologies such as image-based diagnosis, where artificial intelligence analyzes CT scans with superhuman precision. AI, on the other hand, is rarely designed as a collaborator for healthcare professionals.

In EXPLAIN-ME, a new human-AI project supported by the national research centre DIREC, AI researchers and medical staff will together develop explainable artificial intelligence (XAI) that can give clinicians feedback when they train in hospitals’ training clinics.

“In the Western world, about one in ten diagnoses is judged to be incorrect, so patients do not get the right treatment. One explanation may be a lack of experience and training. Our XAI model will help the medical staff make decisions, acting a bit like a mentor who gives advice and feedback while they train,” explains Aasa Feragen, professor at DTU Compute and project manager.

In the project, DTU, the University of Copenhagen, Aalborg University, and Roskilde University collaborate with doctors at the training and simulation center CAMES at Rigshospitalet, NordSim at Aalborg University Hospital, and oncologists at the Department of Urology at Zealand University Hospital in Roskilde.

Ultrasound scan of pregnant women


At CAMES, DTU and the University of Copenhagen will develop an XAI model that looks over the shoulder of doctors and midwives when they ultrasound scan ‘pregnant’ training dolls at the training clinic.

In ultrasound scanning, clinicians work from specific ‘standard planes’, which show different parts of the fetus’s anatomy, making it easier to spot complications and react to them. These rules are implemented in the XAI model, which is integrated into a simulator that gives the doctor ongoing feedback.
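To illustrate how checklist-style rules for standard planes could drive mentor-like feedback, here is a minimal hedged sketch. The plane names, the required-structure checklist, and the `plane_feedback` helper are all hypothetical, not the project’s actual model.

```python
# Hypothetical checklist: each standard plane maps to anatomical
# structures that must be visible before the image is accepted.
REQUIRED = {
    "femur_plane": {"femur"},
    "abdominal_plane": {"stomach", "umbilical_vein", "spine"},
}

def plane_feedback(plane, detected):
    """Compare structures detected in the ultrasound image against the
    checklist for the chosen standard plane and phrase any gap as
    mentor-style feedback."""
    missing = REQUIRED[plane] - set(detected)
    if not missing:
        return "Plane accepted: all required structures are visible."
    return "Adjust the probe: missing " + ", ".join(sorted(missing)) + "."

print(plane_feedback("abdominal_plane", ["stomach", "spine"]))
```

In the real system, the detected structures would come from an image-analysis model rather than being passed in by hand.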

“It would be great if XAI could help less trained doctors to do scans that are on a par with the highly trained doctors.”
Professor and Project Manager Aasa Feragen

The researchers train the artificial intelligence on real data from Rigshospitalet’s ultrasound scans from 2009 to 2018 – primarily images from the common nuchal scan and the malformation scan offered to all pregnant women in Denmark approximately 12 and 20 weeks into the pregnancy. Before the XAI models are ready for use at the training clinic, the researchers first have to check that they also work in the simulator, since the models are trained on real data while the training doll produces artificial data.

According to doctors, the quality of ultrasound scans and the ability to make accurate diagnoses depend on how much training the doctors have received.

“If our model can tell the doctor during the scan that a foot is missing in the picture, the doctor may be able to learn faster. If we get the XAI model to tell us that the probe on the ultrasound device needs to be moved a bit to get everything in the picture, then maybe it can be used in medical practice as well. It would be great if XAI could help less trained doctors to do scans that are on a par with the highly trained doctors,” says Aasa Feragen.

Research associate professor and head of CAMES’ research team for artificial intelligence, Martin Grønnebæk Tolsgaard, emphasizes that many doctors are interested in getting help from AI technology to find the best treatment for patients. Here, explainable AI is the way to go.

“Many of the AI models that exist today do not provide much insight into why they reach a particular decision. It is important for us to understand that better. If the model does not explain why it reaches a given decision, clinicians do not trust the decision. So if you want to use AI to make clinicians better, we need good explanations – which is what Explainable AI provides.”
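One simple, model-agnostic way to explain an image classifier’s decision is occlusion sensitivity: blank out one region of the image at a time and measure how much the prediction drops. The sketch below is a generic illustration with a toy scoring function, not the project’s actual method.

```python
def occlusion_map(image, score_fn, patch=2):
    """Crude occlusion-based explanation: blank out each patch of the
    image in turn and record how much the model's score drops. Large
    drops mark regions the model relied on for its decision."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = [row[:] for row in image]  # copy, then zero one patch
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    masked[di][dj] = 0.0
            drop = base - score_fn(masked)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Toy stand-in for a trained classifier: scores the mean brightness of
# the top-left 2x2 corner, so that corner should dominate the heat map.
def toy_score(img):
    return sum(img[i][j] for i in range(2) for j in range(2)) / 4.0

img = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
heat = occlusion_map(img, toy_score)
```

Overlaying such a heat map on the scan gives the clinician a visual answer to “what was the model looking at?”, which is the kind of insight the quote above calls for.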

Ongoing feedback on robotic surgery


Robotic surgery allows surgeons to perform their work with more precision and control than traditional surgical tools. It reduces errors and increases efficiency, and the expectation is that AI will be able to improve the results further.

In Aalborg, the researchers will develop an XAI model that supports doctors at the training centre NordSim, where both Danish and foreign doctors can practise surgery and operations in simulators on, for example, pig hearts. The model must provide ongoing feedback to clinicians while they practise an operation, without interrupting them, says Mikael B. Skov, professor at the Department of Computer Science at Aalborg University:

“Today, you typically only find out that you should have done something differently once you have finished training an operation. We would like to look at how this feedback can be given more continuously, so trainees better understand whether they have done something right or wrong. The feedback should be designed so that people learn faster and make fewer mistakes before they have to go out and do real operations. We therefore need to look at how to develop different types of feedback, such as warnings that do not interrupt too much.”

Image analysis in kidney cancer


Doctors often have to make decisions under time pressure, for example in connection with cancer diagnoses, to prevent the cancer from spreading. A false-positive diagnosis could therefore lead to a healthy kidney being removed and other complications being inflicted. Although experience shows that AI methods are more accurate in their assessments, clinicians need a good explanation of why the mathematical models classify a tumor as benign or malignant.

In the DIREC project, researchers from Roskilde University will develop methods in which artificial intelligence analyzes medical images for use in diagnosing kidney cancer. Clinicians will help them understand what feedback is needed from the AI models to balance what is technically possible and what is clinically necessary.

“It is important that the technology can be included in the hospitals’ practice, and therefore we focus in particular on designing these methods within ‘Explainable AI’ in direct collaboration with the doctors who actually use it in their decision-making. Here we draw in particular on our expertise in Participatory Design, which is a systematic approach to achieving the best synergy between the technological innovations the AI researchers come up with and what doctors need,” says Henning Christiansen, professor in computer science at the Department of People and Technology at Roskilde University.

About DIREC – Digital Research Centre Denmark

The purpose of the national research centre DIREC is to bring Denmark to the forefront of the latest digital technologies through world-class digital research. To meet the great demand for highly educated IT specialists, DIREC also works to expand the capacity within both research and education of computer scientists. The centre has a total budget of DKK 275 million and is supported by the Innovation Fund Denmark with DKK 100 million. The partnership consists of a unique collaboration across the computer science departments at Denmark’s eight universities and the Alexandra Institute.

The activities in DIREC are based on societal needs, where research is continuously translated into value-creating solutions in collaboration with the business community and the public sector. The projects operate across industries with a focus on, among other things, artificial intelligence, the Internet of Things, algorithms, and cybersecurity.

Read more at direc.dk

EXPLAIN-ME

Partners in the project EXPLAIN-ME: Learning to Collaborate via Explainable AI in Medical Education

  • DTU (DTU Compute – Department of Applied Mathematics and Computer Science)
  • University of Copenhagen
  • Aalborg University
  • Roskilde University
  • CAMES – Copenhagen Academy for Medical Education and Simulation at Rigshospitalet in Copenhagen
  • NordSim – Center for skills training and simulation at Aalborg University Hospital
  • Department of Urology at Zealand University Hospital in Roskilde

Project period: 1 October 2021 to 30 April 2025

Contact: 
Aasa Feragen
DTU Compute
M: +45 26 22 04 98
afhar@dtu.dk

Anders Nymark Christensen
DTU Compute
+45 45 25 52 58
anym@dtu.dk

Categories
AI News

Broad university collaboration: Artificial intelligence helps predict the programming of robots at Universal Robots

Broad university collaboration: Artificial intelligence helps predict the programming of robots at Universal Robots

Robots generate a massive flood of data that can be used to optimize processes and predict wear and tear. In a new project, researchers from the University of Southern Denmark will help Universal Robots to develop AI systems that can predict how to optimize the programming of the robots.