Categories
News

4 July 2024

Enhancing Teaching at Scale: Various experiments conducted at Danish universities

Teaching computer science topics to large groups of students should, of course, make use of modern digital technologies wherever possible. This project first investigated the possibility of creating shared teaching resources on top of national cloud resources. Due to the large variety of Learning Management Systems (LMSs) used across the Danish universities, the project was pivoted towards a more grassroots approach of supporting existing initiatives.

In the project Software Infrastructure for Teaching at Scale, various experiments have been conducted at Danish universities to enhance teaching at scale.

Three experiments focused on (semi)automating feedback on programming tasks using Git, CI tools, and custom software. Other experiments aimed to extend learning platforms by adding features or creating more exercises. One experiment addressed onboarding challenges for new teachers and teaching assistants by developing standardized guidelines.

Finally, two tools were developed to facilitate exam planning and student project assignments, providing graphical interfaces for better management of student workloads and preferences.
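The report does not describe the tools’ internals, but the core of a project-assignment tool of this kind can be illustrated as a small optimization over student preferences. The sketch below is a minimal, hypothetical example (students, projects, capacities, and preference ranks are invented) that uses SciPy’s linear_sum_assignment to minimize the total preference rank:

```python
# Illustrative sketch only: assigns students to projects from ranked preferences.
# The actual tools are not described at this level of detail; all names and
# numbers below are invented for the example.
import numpy as np
from scipy.optimize import linear_sum_assignment

students = ["Anna", "Bo", "Clara", "David"]
projects = {"Compilers": 2, "Databases": 1, "Robotics": 1}   # capacity per project

# Lower rank = more preferred (1 is the first choice).
preferences = {
    "Anna":  {"Compilers": 1, "Databases": 2, "Robotics": 3},
    "Bo":    {"Compilers": 2, "Databases": 1, "Robotics": 3},
    "Clara": {"Robotics": 1, "Compilers": 2, "Databases": 3},
    "David": {"Databases": 1, "Robotics": 2, "Compilers": 3},
}

# Expand each project into one column per available seat, then minimize total rank.
slots = [p for p, cap in projects.items() for _ in range(cap)]
cost = np.array([[preferences[s][p] for p in slots] for s in students])
rows, cols = linear_sum_assignment(cost)

for r, c in zip(rows, cols):
    print(f"{students[r]} -> {slots[c]} (choice #{cost[r, c]})")
```

In practice, such a tool would add a graphical interface and constraints such as supervisor workload on top of an assignment step like this.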

Read about the experiments in the project report

Categories
AI Health tech News

2 July 2024

AI against kidney cancer: Reducing over-treatment and saving millions for society  

Every year, kidney tumor patients endure significant suffering due to unnecessary biopsies and surgeries. The current diagnostic methods leave much to be desired. Therefore, a research team from the University of Copenhagen, Roskilde University, and the Urology Department at Zealand University Hospital is developing an explainable artificial intelligence (XAI) model to assist nephrologists and patients with accurate diagnoses.

Nessn Azawi with a CT scanner, which is used to scan for kidney cancer.

Kidney cancer is one of the most over-treated cancers in Denmark. The available scan images are often unreliable, with one in five CT scans yielding false positives. This means that up to 27 percent of kidney tumor patients undergo painful biopsies and surgeries without having cancer.

To address this, a newly developed AI model is currently being tested at Zealand University Hospital. It has surpassed experienced doctors in diagnosing kidney cancer from scan images. The problem, however, is that doctors cannot explain the model’s conclusions, which hinders the AI model’s widespread adoption.

In the research and innovation project EXPLAIN-ME, funded by the Digital Research Centre Denmark, a team of researchers from the University of Copenhagen, Roskilde University, and the Urology Department at Zealand University Hospital is working to interpret the model’s conclusions.

“Although it is tempting, we cannot simply leave such significant decisions to AI. We need to fully understand its neural patterns from the outset before we can implement it in practice,” says Nessn Azawi, Chief Physician at Zealand University Hospital’s Urology Department and Associate Professor at the University of Copenhagen.

Significant savings for society

As part of the EXPLAIN-ME project, Nessn Azawi and his research team have been working since 2022 to develop explainable artificial intelligence (XAI) that can guide nephrologists on when surgery is necessary, and crucially, explain why.

The 1,000 Danish patients diagnosed with kidney cancer each year rarely show symptoms until the cancer is advanced. The significant diagnostic uncertainty leads to many patients being over-treated. According to Nessn Azawi, AI-based diagnosis could shorten the treatment process by 2-4 weeks and save the healthcare system approximately 15-25 million kroner annually. These positive outcomes would be greatest if the technology were adopted throughout the Nordic region.

“We over-treat around 30,000 kidney cancer patients in the Scandinavian countries. Improving diagnosis would have significant positive effects for both society and the patients,” says Nessn Azawi.

A multidisciplinary effort

Researchers have already tested the AI model at Roskilde University with promising results. The next milestone is to develop a model with a more detailed dataset that can provide nephrologists with accurate kidney cancer diagnoses supported by solid evidence. This has been the focus of PhD student Daniel van Dijk Jacobsen from Roskilde University’s Department of People and Technology for the past two years.

“The challenge is that we don’t know what the model is analyzing when it makes the diagnosis. It’s about identifying the patterns the model detects at the pixel level and then conveying that information to the doctors,” he says.

Thus, it has been essential to work across disciplines, incorporating ethnographic observation studies during patient interactions, participatory design, and ongoing discussions with the medical staff at Zealand University Hospital.

“I find that doctors are enthusiastic about exploring technological possibilities, as they are eager for assistance in achieving more precise diagnostics. They want to be able to compare the patient’s history with the machine’s diagnosis and make decisions based on a better foundation than they currently have,” says Daniel van Dijk Jacobsen.

By analyzing CT scans, artificial intelligence can assess the likelihood of whether a tumor is malignant or benign and assist doctors in determining if surgery is needed.

At present, the researchers are seeking additional funding to support their goal of implementing the model in Danish hospitals within a few years. DIREC has supported the EXPLAIN-ME project with 7.39 million kroner from 2023 to 2025. In addition to kidney cancer diagnostics, the project focuses on ultrasound scans of pregnant women and robotic surgery.

What is Explainable Artificial Intelligence (XAI)?

Explainable artificial intelligence aims to elucidate the rationale behind an AI model’s outputs, thereby enhancing trust in its decisions. Machine learning models are growing in complexity and are increasingly relied upon for critical decisions. Explainable artificial intelligence enables users, among other things, to discern what data the model bases its output on and to evaluate the accuracy of its conclusions.
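As a concrete illustration of one common XAI technique, the sketch below computes a simple gradient-based saliency map, highlighting the pixels that most influenced a classifier’s prediction. The tiny network and random input are placeholders; this is not the EXPLAIN-ME model or method, only a minimal, hypothetical example of pixel-level attribution in PyTorch:

```python
# Minimal illustration of pixel-level attribution (gradient saliency).
# The tiny CNN and random "scan" below are placeholders, not the EXPLAIN-ME model.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in for a CT-scan classifier
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                       # two classes: benign / malignant
)
model.eval()

scan = torch.randn(1, 1, 64, 64, requires_grad=True)   # fake single-channel image

logits = model(scan)
predicted = logits.argmax(dim=1)
logits[0, predicted.item()].backward()     # gradient of the predicted score w.r.t. pixels

saliency = scan.grad.abs().squeeze()       # high values = pixels with most influence
print("predicted class:", predicted.item())
print("most influential pixel:", divmod(saliency.argmax().item(), saliency.shape[1]))
```

A saliency map like this can be overlaid on the original scan so that a doctor can see which regions drove the model’s assessment.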

 

Categories
Future of work News

2 July 2024

Error source or necessity? Researchers explore the human role in future production  

The interplay between humans and technology is a subject of intense debate. Is human involvement necessary if technology can handle the work on its own? A research collaboration between Copenhagen Business School and the University of Southern Denmark, funded by DIREC, highlights the importance of human involvement in automated systems.

As technology becomes an ever greater part of industrial processes and daily life, understanding the foundations of human-machine collaboration is increasingly important.

In the research project Cyber-physical systems with humans in the loop, supported by the Digital Research Centre Denmark (DIREC), researchers from the Department of Digitalization at Copenhagen Business School and the Software Engineering Section at the Maersk Mc-Kinney Moller Institute at the University of Southern Denmark have examined the significance of the human role in cyber-physical systems (CPS).

CPS are systems that integrate physical objects with software, enabling these components to interact and exchange information, for example in advanced industrial processes.

The researchers aim to establish a better conceptual framework and to foster a shared understanding of human collaboration with technology, an understanding that is currently lacking.

“Many in the industry view humans as a source of error and risk factors. They wish to automate humans out of the equation, excluding them from system design. However, much of the academic literature sees this perspective as problematic. Instead, systems should be designed to incorporate human roles. This creates a disconnect between academic theory and industry practice,” says Torkil Clemmensen, Professor at the Department of Digitalization at Copenhagen Business School.

Collaboration between humanities and engineering

The research team has combined humanistic and technical approaches in their project, which includes literature reviews, software development, and interviews with selected industry partners.

Combining different research domains is a key success factor for the project and has brought many valuable perspectives, according to Mahyar Tourchi Moghaddam, Associate Professor at the Maersk Mc-Kinney Moller Institute at the University of Southern Denmark. With his technical background, he has developed an artificial intelligence that ensures humans remain “in the loop” when machines make decisions.

“We are investigating the subject from various perspectives, and it has been very beneficial not always to think in terms of technical solutions but to consider how systems can best involve humans,” he says.
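Neither the model nor its code has been published here, but the general principle of keeping humans “in the loop” can be sketched as a simple decision gate: the system acts autonomously only when its confidence is high enough, and otherwise defers to a human operator. The threshold, data types, and actions below are invented for illustration and do not reflect the project’s actual design:

```python
# Illustrative human-in-the-loop gate; the threshold and names are invented,
# not the project's actual implementation.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float     # model's estimated probability that the action is correct

def execute(decision: Decision, confidence_threshold: float = 0.9) -> str:
    """Apply the machine's decision only when it is confident enough;
    otherwise escalate to a human operator."""
    if decision.confidence >= confidence_threshold:
        return f"auto: executing '{decision.action}'"
    return f"deferred: operator approval required for '{decision.action}'"

print(execute(Decision("increase conveyor speed", 0.97)))
print(execute(Decision("shut down press line", 0.62)))
```

In a real cyber-physical system, such a gate would sit between sensor-driven decisions and the actuators, with the operator’s responses logged and fed back into the system.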

Humans play an essential role

The collaboration between the two research domains has led the researchers to conclude that humans play an essential role in working with machines. In fact, humans are difficult to eliminate when cyber-physical systems need to operate optimally. The challenge now is to identify the functions where humans add the most value, which should be the focus of future research.

“We need to find the balance between technological innovation and human capabilities. Therefore, more concrete interdisciplinary research is necessary to promote a harmonious coexistence between humans and machines,” concludes Torkil Clemmensen.

The conclusions from “Cyber-physical systems with humans in the loop” will be published later in a scientific article.

Read more about the project