Categories
DIREC TALKS

DIREC TALKS: Formal Verification and Machine Learning Joining Forces


The growing pervasiveness of computerised systems such as intelligent traffic control or energy supply makes our society vulnerable to faults in or attacks on such systems. Rigorous software engineering methods and efficient supporting verification tools are crucial to counter this threat.

In this DIREC talk Kim Guldstrand Larsen will present and discuss how to combine formal verification and AI in order to obtain optimal AND guaranteed safe strategies.

The ultimate goal of synthesis is to disrupt traditional software development. Rather than tedious manual programming with endless testing and revision effort, synthesis comes with the promise of automatic correct-by-construction control software.

In formal verification, synthesis has a long history for discrete systems, dating back to Church's problem concerning the realization of logic specifications by automata. Within AI, the use of (deep) reinforcement learning (Q- and M-learning) has emerged as a popular method for learning optimal control strategies through training, e.g. as applied in autonomous driving.

The formal verification approach and the AI approach to synthesis are highly complementary: formal verification synthesis comes with absolute guarantees but is computationally expensive, with the resulting strategies being extremely large. In contrast, AI synthesis comes with no guarantees but is highly scalable, with neural networks providing compact strategy representations.

Kim Guldstrand Larsen will present the tool UPPAAL Stratego, which combines symbolic techniques with reinforcement learning to achieve (near-)optimality and safety for hybrid Markov decision processes, and highlight applications including water management, traffic light control, and energy-aware buildings.

Emphasis will be on the challenges of implementing learning algorithms, arguing for their convergence, and designing data structures for compact and understandable strategy representation.

KIM GULDSTRAND LARSEN

PROFESSOR OF COMPUTER SCIENCE,
AALBORG UNIVERSITY
Speaker


Kim Guldstrand Larsen has been a Professor of Computer Science at Aalborg University since 1993. He has received honorary doctorates from Uppsala University (1999) and ENS Cachan (2007), an International Chair at INRIA (2016), and a Distinguished Professorship at Northeastern University, Shenyang, China (2018). His research interests cover modeling, verification, and performance analysis of real-time and embedded systems, with applications to concurrency theory, model checking, and machine learning.

He is the principal investigator of the verification tool UPPAAL, for which he received the CAV Award in 2013. Other prizes include the Danish Citation Laureates Award (Thomson Scientific Award as the most cited Danish computer scientist in the period 1990-2004, awarded 2005), the Grundfos Prize (2016), and Ridder af Dannebrog (2007). He is a member of the Royal Danish Academy of Sciences and Letters and of the Danish Academy of Technical Sciences, where he is a Digital Wiseman. He is also a member of Academia Europaea.

In 2015 he received the prestigious ERC Advanced Grant (LASSO), and in 2021 he won a Villum Investigator Grant (S4OS). He has been PI and director of several large centers and initiatives, including CISS (Center for Embedded Software Systems, 2002-2008), MT-LAB (Villum Kann Rasmussen Centre of Excellence, 2009-2013), IDEA4CPS (Danish-Chinese Research Center, 2011-2017), INFINIT (National ICT Innovation Network, 2009-2020), and DiCyPS (Innovation Fund Center, 2015-2021). Finally, he is co-founder of the companies UP4ALL (2000), ATS (2017) and VeriAal (2020).

Categories
Bridge project

Embedded AI

DIREC project

Embedded AI

Summary

AI currently relies on large data centers and centralized systems, necessitating data movement to algorithms. To address this limitation, AI is evolving towards a decentralized network of devices, bringing algorithms directly to the data. This shift, enabled by algorithmic agility and autonomous data discovery, will reduce the need for high-bandwidth connectivity and enhance data security and privacy, facilitating real-time edge learning. This transformation is driven by the integration of AI and IoT, forming the “Artificial Intelligence of Things” (AIoT), and the rise of Embedded AI (eAI), which processes data on edge devices rather than in the cloud. 

Embedded AI offers increased responsiveness, functionality, security, and privacy. However, it requires engineers to develop new skills in embedded systems. Companies are hiring data scientists to leverage AI for optimizing products and services in various industries. This project aims to develop tools and methods to transition AI from cloud to edge devices, demonstrated through industrial use cases.

 

Project period: 2022-2024
Budget: DKK 16.2 million

AI is currently limited by the need for massive data centres and centralized architectures, and by the need to move data to the algorithms. To overcome this key limitation, AI will evolve from today's highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices. This transformation will bring algorithms to the data, made possible by algorithmic agility and autonomous data discovery. It will drastically reduce the need for the high-bandwidth connectivity otherwise required to transport massive data sets, eliminate potential compromises of data security and privacy, and eventually allow true real-time learning at the edge.

This transformation is enabled by the merging of AI and IoT into the "Artificial Intelligence of Things" (AIoT), which has created the emerging sector of Embedded AI (eAI), where all or part of the AI processing is done on the sensor devices at the edge rather than sent to the cloud. The major drivers for Embedded AI are increased responsiveness and functionality, reduced data transfer, and increased resilience, security, and privacy. To deliver these benefits, development engineers need to acquire new skills in embedded development and systems design.

To enter and compete in the AI era, companies are hiring data scientists to build expertise in AI and create value from data. This is true for many companies developing embedded systems, for instance, to control water, heat, and air flow in large facilities, large ship engines, or industrial robots, all with the aim of optimizing their products and services. However, there is a challenging gap between programming AI in the cloud using tools like TensorFlow, and programming at the edge, where resources are extremely constrained. This project will develop methods and tools to migrate AI algorithms from the cloud to a distributed network of AI-enabled edge-devices. The methods will be demonstrated on several use cases from the industrial partners.

Research problems and aims
In a traditional, centralized AI architecture, all the technology blocks are combined in the cloud or at a single cluster (edge computing) to enable AI. Data collected by IoT, i.e., by individual edge-devices, is sent towards the cloud. To limit the amount of data that needs to be sent, data aggregation may be performed along the way to the cloud. The AI stack, both the training and the later inference, is executed in the cloud, and results for actions are transferred back to the relevant edge-devices. While the cloud provides complex AI algorithms that can analyse huge datasets fast and efficiently, it cannot deliver true real-time response, and data security and privacy may be challenged.
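The aggregation step mentioned above can be illustrated with a minimal sketch (names and data are hypothetical): instead of uploading every raw sample, an edge node forwards a compact per-window summary.

```python
from statistics import mean

# Illustrative edge-side aggregation: rather than shipping every raw
# sensor sample to the cloud, forward a compact summary per time window.
def summarize(window):
    return {"n": len(window), "mean": mean(window),
            "min": min(window), "max": max(window)}

raw = [20.1, 20.3, 20.2, 25.0, 20.2]   # e.g. temperature readings
summary = summarize(raw)
print(summary)  # 5 readings reduced to 4 numbers before upload
```

The trade-off is the one the text describes: less bandwidth and less raw data leaving the device, at the cost of detail available to the cloud-side AI stack.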

When it comes to Embedded AI, where AI algorithms are moved to the edge, the foundation of the AI stack must be transformed. Transformational advances in algorithmic agility and distributed processing will enable AI to perceive and learn in real time by mirroring critical AI functions across multiple disparate systems, platforms, sensors, and devices operating at the edge. We propose to address these challenges in the following steps, starting with single edge-devices.

  1. Tiny inference engines – Algorithmic agility of the inference engines will require new AI algorithms, new processing architectures, and new connectivity. We will explore suitable microcontroller architectures and reconfigurable platform technologies, such as Microchip's low-power FPGAs, for implementing optimized inference engines. Focus will be on achieving real-time performance and robustness. This will be tested on cases from the industry partners.
  2. µBrains – Extending the edge-devices from pure inference engines to devices that also provide local learning. This will allow local devices to improve continuously. We will explore suitable reconfigurable platform technologies with ultra-low power consumption, such as Renesas' DRPs, which use 1/10 of the power budget of current solutions, and Microchip's low-power FPGAs for optimizing neural networks. Focus will be on ensuring the performance, scheduling, and resource allocation of the new AI algorithms running on very resource-constrained edge-devices.
  3. Collective intelligence – The full potential of Embedded AI will require distributed processing of the AI algorithms. This will be based on federated learning and computing (microelectronics) optimized for neural networks, but new models of distributed systems and stochastic analysis are necessary to ensure the performance, prioritization, scheduling, resource allocation, and security of the new AI algorithms, especially with the very dynamic and opportunistic communications associated with IoT.
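The federated learning underlying step 3 can be sketched in a few lines. The following is a minimal FedAvg round over toy one-parameter linear models; all names and data are illustrative, not the project's actual algorithms.

```python
# Minimal sketch of federated averaging (FedAvg): each edge-device fits a
# local model on its own data; only the parameters, never the raw data,
# are sent to the server and averaged. Toy 1-D linear model y = w * x.

def local_fit(data, w, lr=0.1, epochs=50):
    # plain gradient descent on squared error, entirely on-device
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(weights, sizes):
    # server aggregates parameters, weighted by local dataset size
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

# Two devices whose private data follow y = 2x and y = 4x respectively
device_data = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 4.0), (2.0, 8.0)]]
global_w = 0.0
for _ in range(3):  # a few communication rounds
    local_ws = [local_fit(d, global_w) for d in device_data]
    global_w = fed_avg(local_ws, [len(d) for d in device_data])

print(round(global_w, 2))  # settles around 3.0, between the two devices
```

Only the scalar parameter crosses the network each round, which is why the approach suits the dynamic, bandwidth-constrained IoT links the text mentions.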

     

The expected outcome is an AI framework which supports autonomous discovery and processing of disparate data from a distributed collection of AI-enabled edge-devices. All three presented steps will be tested on cases from the industry partners.

Value Creation
Deep neural networks have changed the capabilities of machine learning, reaching higher accuracy than previously possible; for learning from unstructured data they are now the de facto standard. These networks often include millions of parameters and may take months to train on dedicated hardware, i.e. GPUs, in the cloud. This has resulted in high demand for data scientists with AI skills and, hence, an increased demand for educating such profiles. However, the increased use of IoT to collect data at the edge has created a wish to train and execute deep neural networks at the edge rather than transferring all data to the cloud for processing. As IoT end- and edge-devices are characterized by low memory, low processing power, and low energy (powered by battery or energy harvesting), training or executing deep neural networks on them has been considered infeasible. However, developing dedicated accelerators, novel hardware circuits and architectures, or executing smaller discretized networks may provide feasible solutions for the edge.
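The "smaller discretized networks" mentioned above usually mean fixed-point inference. The following is a minimal sketch of symmetric 8-bit post-training quantization for a single dense layer, illustrative only; production toolchains (e.g. TensorFlow Lite Micro) add per-channel scales, zero-points, and saturating integer kernels.

```python
import numpy as np

# Minimal sketch of symmetric int8 post-training quantization for one
# dense layer: store weights as 8-bit integers plus one float scale,
# run the matmul in integer arithmetic, rescale to float once at the end.

def quantize(x):
    scale = np.max(np.abs(x)) / 127.0      # map the int8 range onto the data
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)   # layer weights
x = rng.normal(size=8).astype(np.float32)        # input activations

wq, w_scale = quantize(w)
xq, x_scale = quantize(x)

# Integer matmul (what a microcontroller would execute), then one rescale
y_int = wq.astype(np.int32) @ xq.astype(np.int32)
y_approx = y_int * (w_scale * x_scale)
y_exact = w @ x

print(np.max(np.abs(y_approx - y_exact)))  # small quantization error
```

The memory win is 4x for the weights (int8 vs float32), and the inner loop needs only integer multiply-accumulate, which is what makes execution on constrained edge-devices plausible.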

The academic partners DTU, KU, AU, and CBS will not only create scientific value from the results disseminated through the four PhDs, but will also create important knowledge, experience, and real-life cases to be included in education and, hence, create capacity building in this important emerging field of embedded AI, or AIoT.

The industry partners Indesmatech, Grundfos, MAN ES, and VELUX are all strong examples of companies who will benefit from mastering embedded AI, i.e., being able to select the right tools and execution platforms for implementing and deploying embedded AI in their products.

  • Indesmatech expects to gain leading-edge knowledge about how AI can be implemented on various chip processing platforms, with a focus on finding the best and most efficient path to building cost- and performance-effective industrial solutions, as their customers span most industries.
  • Grundfos will create value in applications like condition monitoring of pumps and pump systems, predictive maintenance, heat energy optimization in buildings, and waste-water treatment, where very complex tasks can be optimized significantly by AI. The possibility of deploying embedded AI directly on low-cost and low-power end- and edge-devices instead of large cloud platforms will give Grundfos a significant competitive advantage by reducing total energy consumption, data traffic, and product cost, while at the same time increasing real-time application performance and securing customers' data privacy.
  • MAN ES will create value from using embedded AI to predict problems faster than today. Features such as condition monitoring and dynamic engine optimization will give MAN ES competitive advantages, and the exploitation of embedded AI together with the large amount of data collected in the cloud will in the long run create market advantages for MAN ES.

  • VELUX will increase their competitive edge by attaining a better understanding of how to implement the right level of embedded AI in their products. The design of new digital smart products with embedded intelligence will create value by driving the digital product transformation of VELUX.

The four companies represent a general trend: several industries depend on their ability to develop, design, and engineer high-tech products with software, sensors, and electronic solutions embedded in their core products. This notably includes firms in the machine sub-industry manufacturing pumps, windmills, and motors, and companies in the electronics industry manufacturing computer, communication, and other electronic equipment. These industries export heavily, with an 80 percent export share of total sales.

Digital and electronics solutions account for a very high share of the value added. In total, the machine sub-industry's more than 250 companies and the electronics industry's more than 500 companies exported equipment worth 100 billion DKK in 2020 and had more than 38,000 employees.[1] The majority of those with electronics degrees hold a master's or bachelor's degree in engineering, and the share of engineers has risen since 2008.[2]

Digitalization, IoT, and AI are data-driven, and the large volume of data will have economic and environmental impact. AI will increase the demand for computing, which today depends on major providers of cloud services and the transfer of data. The related energy operating costs will increase and, according to the EU's Joint Research Centre (JRC), will account for 3-4 percent of Europe's total energy consumption.[3] Thus, less energy-consuming and less costly solutions are needed. The EU Commission finds that fundamentally new data processing technologies encompassing the edge are required. Embedded AI will make this possible by moving computing to the sensors where data is generated, instead of moving data to the computing.[4] All in all, the rising demand and need for these new high-tech solutions call for the development of Embedded AI capabilities and will have a positive impact on Danish industries in terms of growth and job creation.

[1] Calculations on data from Statistics Denmark, Statistikbanken, tables FIKS33 and GF2
[2] "Elektronik giver beskæftigelse i mange brancher" (Electronics provides employment in many industries), DI Digital, 2021
[3] "Artificial Intelligence, A European Perspective", JRC, EUR 29425, 2018
[4] "2030 Digital Compass: The European way for the Digital Decade", EU Commission, 2021

Impact

The project creates value not only scientifically, through the results disseminated by the four PhDs, but also by creating important knowledge, experience, and real-life cases to be included in education, and hence building capacity in the important emerging field of embedded AI, or AIoT.

News / coverage

Reports

Participants

Project Manager

Xenofon Fafoutis

Professor

Technical University of Denmark
DTU Compute

E: xefa@dtu.dk

Peter Gorm Larsen

Professor

Aarhus University
Dept. of Electrical and Computer Engineering

Jalil Boudjadar

Associate Professor

Aarhus University
Dept. of Electrical and Computer Engineering

Jan Damsgaard

Professor

Copenhagen Business School
Department of Digitalization

Ben Eaton

Associate Professor

Copenhagen Business School
Department of Digitalization

Thorkild Kvisgaard

Head of Electronics

Grundfos

Thomas S. Toftegaard

Director, Smart Product Technology

Velux

Rune Domsten

Co-founder & CEO

Indesmatech

Jan Madsen

Professor

Technical University of Denmark
DTU Compute

Henrik R. Olesen

Senior Manager

MAN Energy Solutions

Reza Toorajipour

PhD Student

Copenhagen Business School
Department of Digitalization

Iman Sharifirad

PhD Student

Aarhus University
Dept. of Electrical and Computer Engineering

Amin Hasanpour

PhD Student

Technical University of Denmark
DTU Compute

Partners

Categories
Bridge project

HERD: Human-AI Collaboration: Engaging and Controlling Swarms of Robots and Drones

DIREC project

HERD: Human-AI Collaboration

- Engaging and Controlling Swarms of Robots and Drones

Summary

Today, robots and drones take on an increasingly broad set of tasks. However, such robots are limited in their capacity to cooperate with one another and with humans. How can we leverage the potential benefits of having multiple robots working in parallel to reduce time to completion? If robots are given the task collectively as a swarm, they could potentially coordinate their operation on the fly and adapt based on local conditions to achieve optimal or near-optimal task performance.  

Together with industrial partners, this project aims to address multi-robot collaboration and design and evaluate technological solutions that enable users to engage and control autonomous multi-robot systems.

Project period: 2021-2025
Budget: DKK 17.08 million

Robots and drones take on an increasingly broad set of tasks, such as AgroIntelli’s autonomous farming robot and the drone-based emergency response systems from Robotto. Currently, however, such robots are limited in their capacity to cooperate with one another and with humans. In the case of AgroIntelli, for instance, only one robot can currently be deployed on a field at any time and is unable to respond effectively to the presence of a human-driven tractor or even another farming robot working in the same field. In the future, AgroIntelli wants to leverage the potential benefits of having multiple robots working in parallel on the same field to reduce time to completion. A straightforward way to achieve this is to partition the field into several distinct areas corresponding to the number of robots available and then assign each robot its own area. However, such an approach is inflexible and requires detailed a priori planning. If, instead, the robots were given the task collectively as a swarm, they could potentially coordinate their operation on the fly and adapt based on local conditions to achieve optimal or near-optimal task performance.

Similarly, Robotto's system architecture currently requires one control unit to manage each deployed drone. In large-area search scenarios and operations with complex terrain, the coverage provided by a single drone is insufficient. Multiple drones can provide real-time data on a larger surface area and from multiple perspectives, thereby aiding emergency response teams in their time-critical operations. In the current system, however, each additional drone requires a dedicated operator and control unit. Coordination between operators introduces overhead, and it can become a struggle to maintain a shared understanding of the rapidly evolving situation. There is thus a need to develop control algorithms for drone-to-drone coordination and interfaces that enable high-level management of the swarm from a single control console. The complexity requires advanced interactions that keep the data actionable and simple, yet support the critical demands of the operation. This challenge is relevant to search & rescue (SAR) as well as other service offerings in the roadmap, including firefighting, inspections, and first responder missions.

For both of our industrial partners, AgroIntelli and Robotto, and for similar companies that are pushing robotics technology toward real-world application, there is a clear unmet need for approaches that enable human operators to effectively engage and control systems composed of multiple autonomous robots. This raises a whole new set of challenges compared to the current paradigm where there is a one-to-one mapping between operator and robot. The operator must be able to interact with the system at the swarm level as a single entity to set mission priorities and constraints, and at the same time, be able to intervene and take control of a single robot or a subset of robots. An emergency responder may, for instance, want to take control over a drone to follow a civilian or a group of personnel close to a search area, while a farmer may wish to reassign one or more of her farming robots to another field.

HERD will build an understanding of the challenges in multi-robot collaboration, and design and evaluate technological solutions that enable end-users to engage and control autonomous multi-robot systems. The project will build on use cases in agriculture and search & rescue supported by the industrial partners’ domain knowledge and robotic hardware. Through the research problems and aims outlined below, we seek to enable the next generation of human-swarm collaboration.

Pre-operation and on-the-fly mission planning for robot swarms: An increase in the number of robots under the user’s control has the potential to lead to faster task completion and/or a higher quality. However, the increase in unit count significantly increases the complexity of both end-user-to-robot communication and coordination between robots. As such, it is critical to support the user in efficient and effective task allocation between robots. We will answer the following research questions: (i) What are the functionalities required for humans to effectively define mission priorities and constraints at the swarm level? (ii) How can robotic systems autonomously divide tasks based on location, context, and capability, and under the constraints defined by the end-user? (iii) How does the use of autonomous multi-robot technologies change existing organizational routines, and which new ones are required?
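To make question (ii) concrete, task division across robots is often approached with market- or auction-based allocation. The following greedy sketch is purely illustrative (hypothetical names, not HERD's actual algorithm): each task goes to the cheapest eligible robot, with a per-robot task cap standing in for user-defined mission constraints.

```python
# Illustrative greedy-auction sketch for multi-robot task allocation
# (not HERD's actual algorithm). Each task is assigned to the robot that
# can reach it cheapest, subject to a per-robot task cap that stands in
# for user-defined mission constraints.

def allocate(robots, tasks, cap):
    # robots/tasks: name -> (x, y) position
    load = {r: [] for r in robots}
    for t, tpos in tasks.items():
        eligible = [r for r in robots if len(load[r]) < cap]
        dist = lambda r: ((robots[r][0] - tpos[0]) ** 2 +
                          (robots[r][1] - tpos[1]) ** 2) ** 0.5
        load[min(eligible, key=dist)].append(t)
    return load

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
tasks = {"t1": (1.0, 1.0), "t2": (9.0, 1.0), "t3": (2.0, 0.0)}
print(allocate(robots, tasks, cap=2))  # t1 and t3 go to r1, t2 to r2
```

A real system would rebid as robots move and conditions change; the point here is only that location, capability, and user constraints can all enter the allocation as costs and eligibility rules.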

Situational awareness under uncertainty in multi-robot tasks: Users of AI-driven (multi-)robot systems often wish to simulate robot behaviour across multiple options to determine the best possible approach to the task at hand. Given the context-dependent and algorithm-driven nature of these robots, simulation accuracy can only be achieved up to a limited degree. This inherent uncertainty negatively impacts the user’s ability to make an informed decision on the best approach to task completion. We will support situational awareness in the control of multi-robot systems by studying: (i) How to determine and visualise levels of uncertainty in robot navigation scenarios to optimise user understanding and control? (ii) What are the implications of the digital representation of the operational environment for organizational sensemaking? (iii) How can live, predictive visualisations of multi-robot trajectories and task performance support the steering and directing of robot swarms from afar?

User intervention and control of swarm subsets: Given the potentially (rapidly) changing contexts in which the robots operate, human operators will have to regularly deviate from a predetermined plan for a subset of robots. This raises novel research questions both in terms of robot control, where the swarm might depend on a sufficient number of nearby robots to maintain communication, and in terms of user interaction, where accurate robot selection and information overload quickly become challenges. We will therefore answer the following research questions:

(i) When a user takes low-level control of a single robot or subset of a robot swarm, how should that be done, and how should the rest of the system respond?

(ii) How can the user interfaces help the user to understand the potential impact when they wish to intervene or deviate from the mission plans?

Validation of solutions in real-world applications: Based on the real-world applications of adaptive herbicide spraying by farming robots and search & rescue as provided by our industrial partners, we will validate the solutions developed in the project. While both industrial partners deal with robotic systems, their difference in both application area and technical solution (in-the-air vs. on land) allows us to assess the generalisability and efficiency of our solutions in real-world applications. We will answer the following research questions:

(i) What common solutions should be validated in both scenarios and which domain-specific solutions are relevant in the respective types of scenarios?

(ii) What business and organisational adaptation and innovation are necessary for swarm robotics technology to be successfully adopted in the public and private sectors?

Advances in AI, computer science, and mechatronics mean that robots can be applied to an increasingly broad set of domains. To build world-class computer science research and innovation centres, as per DIREC's long-term goal, this project focuses on building the competencies necessary to address the complex relationship between humans, artificial intelligence, and autonomous robots.

Scientific value
The project’s scientific value is the development of new methods and techniques to facilitate effective interaction between humans and complex AI systems and the empirical validation in two distinct use cases. The use cases provide opportunities to engage with swarm interactions across varying demands, including domains where careful a priori planning is possible (agricultural context) and chaotic and fast-paced domains (search & rescue with drones). HERD will thus lead to significant contributions in the areas of autonomous multi-robot coordination and human-robot interaction. We expect to publish at least ten rank A research articles and to demonstrate the potential of the developed technologies in concrete real-world applications. This project also gears up the partners to participate in project proposals to the EU Framework Programme on specific topics in agricultural robotics, nature conservation, emergency response, security, and so on, and in general topics related to developing key enabling technologies.

Capacity building
HERD will build and strengthen the research capacity in Denmark directly through the education of three PhDs, and through the collaboration between researchers, domain experts, and end-users that will lead to industrial R&D growth. Denmark has been a thought leader in robotics, innovating how humans collaborate with robots in manufacturing and architecture, e.g. Universal Robots, MiR, Odico, among others. Through HERD, we support not only the named partners in developing and improving their products and services, but the novel collaboration between the academic partners, who have not previously worked together, helps to ensure that the Danish institutes of higher education build the competencies and the workforce that are needed to ensure continued growth in the sectors of robotics and artificial intelligence. HERD will thus contribute to building the capacity required to facilitate effective interaction between end-users and complex AI systems.

Business value
HERD will create business value through the development of technologies that enable end-users to effectively engage and control systems composed of multiple robots. These technologies will significantly increase the value of the industrial partners' products, since current tasks can be done faster and at a lower cost, and entirely new tasks that require multiple coordinated robots can be addressed. The value increase will, in turn, increase sales and exports. Furthermore, multi-robot systems have numerous potential application domains in addition to those addressed in this project, such as infrastructure inspection, construction, environmental monitoring, and logistics. The inclusion of DTI as a partner will directly help explore these opportunities through a broader range of anticipated tech transfer, future market, and project possibilities.

Societal value
HERD will create significant societal value and directly contribute to SDGs 1 (no poverty), 2 (zero hunger), 13 (climate action), and 15 (life on land). Increased use of agricultural robots can, for instance, lead to less soil compaction and enable the adoption of precision agriculture techniques, such as mechanical weeding that eliminates the need for pesticides. Similarly, increased use of drones in search & rescue can reduce the time needed to save people in critical situations.

Impact

The project will develop technologies that enable end-users to effectively engage and control systems composed of multiple robots.

Systems composed of multiple robots will significantly increase the value of industrial products, since current tasks can be done faster and at a lower cost, and entirely new tasks that require multiple coordinated robots can be addressed. 

News / coverage

Participants

Project Manager

Anders Lyhne Christensen

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

E: andc@mmmi.sdu.dk

Ulrik Pagh Schultz

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Mikael B. Skov

Professor

Aalborg University
Department of Computer Science

Timothy Robert Merritt

Associate Professor

Aalborg University
Department of Computer Science

Niels van Berkel

Associate Professor

Aalborg University
Department of Computer Science

Ioanna Constantiou

Professor

Copenhagen Business School
Department of Digitalization

Kenneth Richard Geipel

Chief Executive Officer

Robotto

Christine Thagaard

Marketing Manager

Robotto

Lars Dalgaard

Head of Section

Danish Technological Institute
Robot Technology

Gareth Edwards

R&D Team Manager

AGROINTELLI A/S

Hans Carstensen

CPO

AGROINTELLI A/S

Maria-Theresa Oanh Hoang

PhD Student

Aalborg University
Department of Computer Science

Alexandra Hettich

PhD Student

Copenhagen Business School
Department of Digitalization

Kasper Grøntved

PhD Student

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Partners

Categories
Bridge project

EXPLAIN-ME: Learning to Collaborate via Explainable AI in Medical Education

DIREC project

EXPLAIN-ME

- Learning to Collaborate via Explainable AI in Medical Education

Summary

In the Western world, approximately one in ten medical diagnoses is estimated to be incorrect, meaning patients do not receive the right treatment. The explanation may be a lack of experience and training on the part of the medical staff.

Together with clinicians, this project aims to develop explanatory AI that can help medical staff make qualified decisions by taking on the role of a mentor who provides feedback and advice to the clinicians. It is important that the explainable AI provides good explanations that are easy to understand and utilize during the medical staff's workflow.

Project period: 2021-2025
Budget: DKK 28.44 million

AI is widely deployed in assistive medical technologies, such as image-based diagnosis, to solve highly specific tasks with feasible model optimization. However, AI is rarely designed as a collaborator for healthcare professionals, but rather as a mechanical substitute for part of a diagnostic workflow. From the AI researcher's point of view, the goal of development is to beat the state of the art on narrow performance parameters, which the AI may achieve with superhuman accuracy.

However, for more general problems such as full diagnosis, treatment execution, or explaining the background for a diagnosis, the AI is still not to be trusted. Hence, clinicians do not always perceive AI solutions as helpful in solving their clinical tasks, as they only solve part of the problem sufficiently well. The EXPLAIN-ME initiative seeks to create AIs that help solve the overall general tasks in collaboration with human healthcare professionals.

To do so, we need not only to provide interpretability in the form of explainable AI models — we need to provide models whose explanations are easy to understand and utilize during the clinician’s workflow. Put simply, we need to provide good explanations.

Unmet technical needs
It is not hard to agree that good explanations are better than bad explanations. In this project, however, we aim to establish methods and collect data that allow us to train and validate the quality of clinical AI explanations in terms of how understandable and useful they are.

AI support should neither distract from nor hinder ongoing tasks, and the need for AI support fluctuates, e.g. throughout a surgical procedure. As such, the relevance and utility of AI explanations are highly context- and task-dependent. Through collaboration with Zealand University Hospital, we will develop explainable AI (XAI) feedback for human-AI collaboration in static clinical procedures, where data is collected and analyzed independently — e.g. when diagnosing cancer from scans collected beforehand in a different unit.

In collaboration with CAMES and NordSim, we will implement human-AI collaboration in simulation centers used to train clinicians in dynamic clinical procedures, where data is collected on the fly — e.g. for ultrasound scanning of pregnant women, or robotic surgery. We will monitor the clinicians’ behavior and performance as a function of feedback provided by the AI. As there are no actual patients involved in medical simulation, we are also free to provide clinicians with potentially bad explanations, and we may use the clinicians’ responses to freely train and evaluate the AI’s ability to explain.

Unmet clinical needs
In the Western world, medical errors are exceeded only by cancer and heart disease in the number of fatalities caused. About one in ten diagnoses is estimated to be wrong, resulting in inadequate and even harmful care. Errors occur during clinical practice for several reasons, but most importantly because clinicians often work alone with minimal expert supervision and support. The EXPLAIN-ME initiative aims to create AI decision support systems that take on the role of an experienced mentor providing advice and feedback.

This initiative seeks to optimize the utility of feedback provided by healthcare explainable AI (XAI). We will approach this problem both in static healthcare applications, where clinical decisions are based on data already collected, and in dynamic applications, where data is collected on the fly to continually improve confidence in the clinical decision. Via an interdisciplinary effort between XAI, medical simulation, participatory design and HCI, we aim to optimize the explanations provided by the XAI to be of maximal utility for clinicians, supporting technology utility and acceptance in the clinic.

Case 1: Renal tumor classification
Classification of a renal tumor as malignant or benign is an example of a decision that must be made under time pressure. If malignant, the patient should be operated on immediately to prevent the cancer from spreading to the rest of the body, and thus a false positive diagnosis may lead to the unnecessary removal of a kidney and other complications. While AI methods can be shown statistically to be more precise than an expert physician, there is a need to extend them with explanations for their decisions – and only the physicians know what “a good explanation” is. This motivates a collaborative design and development process to find the best balance between what is technically possible and what is clinically needed.

Case 2: Ultrasound Screening
Even before birth, patients suffer from erroneous decisions made by healthcare workers. In Denmark, 95% of all pregnant women participate in the national ultrasound screening program aimed at detecting severe maternal-fetal disease. Correct diagnosis is directly linked to the skills of the clinicians, and only about half of all serious conditions are detected before birth. AI feedback, therefore, comes with the potential to standardize care across clinicians and hospitals. At DTU, KU and CAMES, ultrasound imaging will be the main case for development, as data access and management, as well as manual annotations, are already in place. We seek to give the clinician feedback during scanning, such as whether the current image is a standard ultrasound plane (see figure); whether it has sufficient quality; whether the image can be used to predict clinical outcomes, or how to move the probe to improve image quality.

Case 3: Robotic Surgery
AAU and NordSim will collaborate on the assessment and development of robotic surgeons’ skills, associated with an existing clinical PhD project. Robotic surgery allows surgeons to do their work with more precision and control than traditional surgical tools, thereby reducing errors and increasing efficiency. AI-based decision support is expected to have a further positive effect on outcomes. The usability of AI decision support is critical, and this project will study temporal aspects of the human-AI collaboration, such as how to present AI suggestions in a timely manner without interrupting the clinician; how to hand over tasks between a member of the medical team and an AI system; and how to handle disagreement between the medical expert and the AI system.

In current healthcare AI research and development, there is often a gap between the needs of clinicians and the developed solutions. This comes with a lost opportunity for added value: We miss out on potential clinical value for creating standardized, high-quality care across demographic groups. Just as importantly, we miss out on added business value: If the first, research-based step in the development value chain is unsuccessful, then there will also be fewer spin-offs and start-ups, less knowledge dissemination to industry, and overall less innovation in healthcare AI.

The EXPLAIN-ME initiative will address this problem:

  • We will improve clinical interpretability of healthcare AI by developing XAI methods and workflows that allow us to optimize XAI feedback for clinical utility, measured both on clinical performance and clinical outcomes.
  • We will improve clinical technology acceptance by introducing these XAI models in clinical training via simulation laboratories.
  • We will improve business value by creating a prototype for collaborative, simulation-based deployment of healthcare AI. This comes with great potential for speeding up industrial development of healthcare AI: Simulation-based testing of algorithms can begin while algorithms still make mistakes, because there is no risk of harming patients. This, in particular, can speed up the timeline from idea to clinical implementation, as the simulation-based testing is realistic while not requiring the usual ethical approvals.

This comes with great potential value: While AI has transformed many aspects of society, its impact on the healthcare sector is so far limited. Diagnostic AI is a key topic in healthcare research, but only marginally deployed in clinical care. This is partly explained by the low interpretability of state-of-the-art AI, which negatively affects both patient safety and clinicians’ technology acceptance. This is also explained by the typical workflow in healthcare AI research and development, which is often structured as parallel tracks where AI researchers independently develop technical solutions to a predefined clinical problem, while only occasionally interacting with the clinical end-users.

This often results in a gap between the clinicians’ needs and the developed solution. The EXPLAIN-ME initiative aims to close this gap by developing AI solutions that are designed to interact with clinicians in every step of the design, training, and implementation process.

Impact

The project will develop explainable AI that can help medical staff make qualified decisions by taking on the role of a mentor.

News / coverage

Participants

Project Manager

Aasa Feragen

Professor

Technical University of Denmark
DTU Compute

E: afhar@dtu.dk

Anders Nymark Christensen

Associate Professor

Technical University of Denmark
DTU Compute

Mads Nielsen

Professor

University of Copenhagen
Department of Computer Science

Mikael B. Skov

Professor

Aalborg University
Department of Computer Science

Niels van Berkel

Associate Professor

Aalborg University
Department of Computer Science

Henning Christiansen

Professor

Roskilde University
Department of People and Technology

Jesper Simonsen

Professor

Roskilde University
Department of People and Technology

Henrik Bulskov Styltsvig

Associate Professor

Roskilde University
Department of People and Technology

Martin Tolsgaard

Associate Professor

CAMES Rigshospitalet

Morten Bo Svendsen

Chief Engineer

CAMES Rigshospitalet

Sten Rasmussen

Professor, Head

Department of Clinical Medicine
Aalborg University

Mikkel Lønborg Friis

Director

NordSim
Aalborg University

Nessn Htum Azawi

Associate Professor,
Head of Research Unit & Renal Cancer team

Department of Urology
Zealand University Hospital

Manxi Lin

PhD Student

Technical University of Denmark
DTU Compute

Naja Kathrine Kollerup

PhD Student

Aalborg University
Department of Computer Science

Jakob Ambsdorf

PhD Student

University of Copenhagen
Department of Computer Science

Daniel van Dijk Jacobsen

PhD Student

Roskilde University
Department of People and Technology

Partners

Categories
Bridge project

Business Transformation and Organisational AI-based Decision Making

DIREC project

Business Transformation and Organisational AI-based Decision Making

Summary

Today, business processes in private companies and public organisations are widely supported by Enterprise Resource Planning, Business Process Management and Electronic Case Management systems, put into use with the aim of improving the efficiency of the business processes.

The combined result is, however, often an increasingly elaborate information systems landscape, leading to ineffectiveness, limited understanding of business processes, inability to predict and find the root cause of losses, errors and fraud, and inability to adapt the business processes. This lack of understanding, agility and control over business processes places a major burden on companies and organisations.

Together with industry, the project aims to develop methods and tools that enable industry to develop new efficient solutions for exploiting the huge amount of business data generated by enterprise systems, with specific focus on tools and responsible methods for the use of process insights for business intelligence and transformation.

Project period: 2021-2025
Budget: DKK 16.8 million

Enterprise systems generate a plethora of highly granular data recording their operation. Machine learning has great potential to aid in the analysis of this data in order to predict errors, detect fraud and improve process efficiency. Knowledge of business processes can also be used to support the needed transformation of old and heterogeneous IT landscapes to new platforms. Application areas include Anti-Money-Laundering (AML) and Know-Your-Customer (KYC) supervision of business processes in the financial sector, supply chain management in agriculture and foodstuff supply, and compliance and optimisation of workflow processes in the public sector.

The research aim of the project is to develop methods and tools that enable industry to develop new efficient solutions for exploiting the huge amount of business data generated by enterprise systems, with specific focus on tools and responsible methods for the use of process insights for business intelligence and transformation. Through field studies in organizations using AI, BPM and process mining techniques, the project will investigate how organizations implement, use and create value (both operational and strategic) through these techniques. In particular, the project will focus on how organizational decision-making changes with the implementation of AI-based algorithms, in terms of the decision-making skills (intuitive and analytical) of the decision makers, their roles and responsibilities, their decision rights and authority, and the decision context.

Scientific value

The scientific value of the project is new methods and user interfaces for decision support and business transformation, together with knowledge of their performance and properties gained in case studies. These are important contributions that provide excellent knowledge to Danish companies and education programs within AI for business innovation and processes.

Capacity building

For capacity building, the value of the project is to educate one industrial PhD student in close collaboration between CBS, DIKU and the industrial partner DCR Solutions. The project will also provide online course material that can be used in existing and new courses for industry, MSc students and PhD students.

Business and societal value

In terms of business and societal value, the project has very broad applicability, targeting improvements in the effectiveness and control of process-aware information systems across the private and public sectors. Concretely, the project considers cases from customers of the participating industry partners within the financial sector, the public sector, and energy and building management, all sectors with a vital societal role. The industry partner expects to create business value of an estimated DKK 10-20 million in increased turnover and 2-3 new employees over 5-7 years through the generation of IP by the industrial researcher and the development of state-of-the-art proprietary process analysis and decision support tools.

Impact

The project will develop methods and tools that enable industry to develop new efficient solutions for exploiting the huge amount of business data generated by enterprise systems.

Participants

Project Manager

Arisa Shollo

Associate Professor

Copenhagen Business School
Department of Digitalization

E: ash.digi@cbs.dk

Thomas Hildebrandt

Professor

University of Copenhagen
Department of Computer Science

Raghava Mukkamala

Associate Professor

Copenhagen Business School
Department of Digitalization

Morten Marquard

Founder & CEO

DCR Solutions

Søren Debois

CTO

DCR Solutions

Panagiotis Keramidis

PhD Student

Copenhagen Business School
Department of Digitalization

Partners

Categories
Bridge project

AI and Blockchains for Complex Business Processes

DIREC project

AI and Blockchains for Complex Business Processes

Summary

Today, business processes in private companies and public organizations are widely supported by Enterprise Resource Planning, Business Process Management, and Electronic Case Management systems to improve the efficiency of the processes. 

The combined result is however often an increasingly elaborate information systems landscape, leading to ineffectiveness, limited understanding of business processes, inability to predict and find the root cause of losses, errors, and fraud, and inability to adapt the business processes. This lack of understanding, agility and control over business processes places a major burden on companies and organizations. 

Together with industry, this project aims to develop methods and tools that enable the industry to develop new efficient solutions for exploiting the huge amount of business data generated by enterprise and blockchain systems, with specific focus on tools and responsible methods for the use of process insights for business intelligence and transformation.

Project period: 2021-2025

Enterprise and blockchain systems generate a plethora of highly granular data recording their operation. Machine learning has great potential to aid in the analysis of this data in order to predict errors, detect fraud and improve process efficiency. Knowledge of business processes can also be used to support the needed transformation of old and heterogeneous IT landscapes to new platforms. Application areas include Anti-Money-Laundering (AML) and Know-Your-Customer (KYC) supervision of business processes in the financial sector, supply chain management in agriculture and foodstuff supply, and compliance and optimisation of workflow processes in the public sector.

The research aim of the AI and Blockchain for Complex Business Processes project is to develop methods and tools that enable industry to build new efficient solutions for exploiting the huge amount of business data generated by enterprise and blockchain systems, ranging from techniques for automatic identification of business events, via the development of new rule-based process mining technologies, to tools for the use of process insights for business intelligence and transformation.

The project will do this through a unique bridge between industry and academia, involving two innovative, complementary industrial partners and researchers across the disciplines of AI, software engineering and business intelligence from three DIREC partner universities. Open-source release (under the LGPL 3.0 license) of the rule-based mining algorithms developed by the PhD student assigned to task 2 will ensure future enhancement and development by the research community, while simultaneously providing businesses with the opportunity to include them in proprietary software.
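To make the kind of process mining involved concrete, a first step common to many mining techniques is extracting a directly-follows relation from an event log, counting how often one activity immediately follows another across traces. The sketch below is illustrative only, with a hypothetical invoice-handling log, and is not the project's actual rule-based algorithm:

```python
from collections import Counter

def directly_follows(log):
    """Count how often activity a is directly followed by activity b
    across all traces in an event log."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical event log: each trace is an ordered list of activities.
log = [
    ["receive", "check", "approve", "pay"],
    ["receive", "check", "reject"],
    ["receive", "check", "approve", "pay"],
]
dfg = directly_follows(log)
```

From such a relation, a mining algorithm can derive which orderings are mandatory, optional, or forbidden, which is the raw material for rule-based models of the process.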

Impact

The project will develop methods and tools that enable the industry to develop new efficient solutions for exploiting the huge amount of business data generated by enterprise systems.

News / coverage

Participants

Project Manager

Tijs Slaats

Associate Professor

University of Copenhagen
Department of Computer Science

E: slaats@di.ku.dk

Jakob Grue Simonsen

Professor

University of Copenhagen
Department of Computer Science

Thomas Hildebrandt

Professor

University of Copenhagen
Department of Computer Science

Hugo López

Associate Professor

Technical University of Denmark
DTU Compute

Henrik Axelsen

PhD Fellow

University of Copenhagen
Department of Computer Science

Christoffer Olling Back

Postdoc

University of Copenhagen

Anders Mygind

Director

ServiceNow

Søren Debois

Associate Professor

IT University of Copenhagen
Department of Computer Science

Omri Ross

Chief Blockchain Scientist

eToro

Axel Fjelrad Christfort

PhD Fellow

University of Copenhagen
Dept. of Computer Science

Partners

Categories
Bridge project

Mobility Analytics using Sparse Mobility Data and Open Spatial Data

DIREC project

Mobility Analytics using Sparse Mobility Data and Open Spatial Data

Summary

Both society and industry have a substantial interest in well-functioning outdoor and indoor mobility infrastructures that are efficient, predictable, environmentally friendly, and safe. For outdoor mobility, reduction of congestion is high on the political agenda as is the reduction of CO2 emissions, as the transportation sector is the second largest in terms of greenhouse gas emissions. For indoor mobility, corridors and elevators represent bottlenecks for mobility in large building complexes.  

The amount of mobility-related data has increased massively which enables an increasingly wide range of analyses. When combined with digital representations of road networks and building interiors, this data holds the potential for enabling a more fine-grained understanding of mobility and for enabling more efficient, predictable, and environmentally friendly mobility.   

Project period: 2021-2024
Budget: DKK 9.41 million

The mobility of people and things is an important societal process that facilitates and affects the lives of most people. Thus, society, including industry, has a substantial interest in well-functioning outdoor and indoor mobility infrastructures that are efficient, predictable, environmentally friendly, and safe. For outdoor mobility, reduction of congestion is high on the political agenda – it is estimated that congestion costs Denmark 30 billion DKK per year. Similarly, the reduction of CO2 emissions from transportation is on the political agenda, as the transportation sector is the second largest in terms of greenhouse gas emissions. Danish municipalities are interested in understanding the potentials for integrating various types of e-bikes in transportation planning. Increased use of such bicycles may contribute substantially to the greening of transportation and may also ease congestion and thus improve travel times. For indoor mobility, corridors and elevators represent bottlenecks for mobility in large building complexes (e.g. hospitals, factories and university campuses). With the addition of mobile robots, humans and robots will also be fighting to use the same space when moving indoors. Heavy use of corridors is also a source of noise that negatively impacts building occupants.

The ongoing, sweeping digitalisation has also reached outdoor and indoor mobility. Thus, increasingly massive volumes of mobility-related data, e.g. from sensors embedded in the road and building infrastructures, networked positioning (e.g. GPS or UWB) devices (e.g. smartphones and in-vehicle navigation devices) or indoor mobile robots, are becoming available. This enables an increasingly wide range of analyses related to mobility. When combined with digital representations of road networks and building interiors, this data holds the potential for enabling a more fine-grained understanding of mobility and for enabling more efficient, predictable, and environmentally friendly mobility. Long movement times equate with congestion and bad overall experiences.

The above data foundation offers a basis for understanding how well a road network or building performs across different days and across the duration of a day, and it offers the potential for decreased movement times by means of improved mobility flows and routing. However, there is an unmet need for low-cost tools that can be used by municipalities and building providers (e.g. mobile robot manufacturers) and that enable a wide range of analytics on top of mobility data. To address this need, the project will:

  1. Build extract-transform-load (ETL) prototypes that are able to ingest high and low frequency spatial data (e.g. GPS and indoor positioning data). These prototypes must enable map-matching of spatial data to open road network and building representations and must enable privacy protection.
  2. Design effective data warehouse schemas that can be populated with ingested spatial data.
  3. Build mobility analytics warehouse systems that are able to support a broad range of analyses in interactive time.
  4. Build software systems that enable users to formulate analyses and visualise results in map-based interfaces for both indoor and outdoor use. This includes infrastructure for the mapping of user input into database queries and the map-based display of results returned by the data warehouse system.
  5. Develop a range of advanced analyses that address user needs. Possible analyses include congestion maps, isochrones, aggregate travel-path analyses, origin-destination travel time matrices, and what-if analyses where the effects of reconstruction are estimated (e.g. adding an additional lane to a stretch of road or changing corridors). For outdoor settings, CO2-emissions analyses based on vehicular environmental impact models and GPS data are also considered.
  6. Develop transfer learning techniques that make it possible to leverage spatial data from dense spatio-temporal “regions” for enabling analyses in sparse spatio-temporal regions.
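As a minimal illustration of the map-matching mentioned in item 1, a naive approach snaps each positioning sample to the geometrically closest road segment. Production map-matching would also exploit heading, speed, and trajectory history; the segment IDs and coordinates below are hypothetical:

```python
import math

def nearest_segment(point, segments):
    """Match one GPS point to the closest road segment (naive map-matching)."""
    def dist_to_segment(p, a, b):
        # Project p onto segment ab, clamped to the segment's endpoints.
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))
    return min(segments, key=lambda seg: dist_to_segment(point, *seg[1]))

# Hypothetical road segments: (id, (start, end)) in projected coordinates.
segments = [
    ("road_a", ((0, 0), (10, 0))),
    ("road_b", ((0, 5), (10, 5))),
]
match = nearest_segment((4, 1), segments)
```

In an ETL pipeline, a function like this would run per ingested point against an OpenStreetMap or indoor-geometry representation, so that downstream warehouse tables reference network elements rather than raw coordinates.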

Value creation
The envisioned prototype software infrastructure characterised above aims to replace commercial road network maps with the crowd-sourced OpenStreetMap (OSM) and, for indoor settings, to enable new data sources describing indoor geography. The open data might not be curated, which means that new quality control tools are required to ensure that computed travel times are correct. This will reduce cost.

Next, the project will provide means of leveraging available spatial data as efficiently and effectively as possible. In particular, while more and more data becomes available, the available data will remain sparse in relation to important analyses, due both to the cost of purchasing data and to the lack of desired data. Thus, it is important to exploit available data as well as possible. We will examine how to transfer data from locations and times with ample data to locations and times with insufficient data. For example, we will study transfer learning techniques for this purpose, and as part of this, we will study feature learning. This will reduce cost and will enable new analyses that were not possible previously due to a lack of data.
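The sketch below is a deliberately crude stand-in for such transfer: a model (here just a mean travel-time estimate) fitted on a data-rich region is blended with the few samples available in a sparse region. The function names, data, and weighting are assumptions for illustration, not the project's method:

```python
def fit_mean(samples):
    """Fit the simplest possible travel-time model: the sample mean."""
    return sum(samples) / len(samples)

def transfer(dense_samples, sparse_samples, alpha=0.8):
    """Estimate a quantity in a data-sparse region by blending a model
    fitted on a data-rich source region with the target region's few
    samples; alpha weights the source model."""
    source = fit_mean(dense_samples)
    if not sparse_samples:
        return source  # no target data at all: fall back to the source model
    target = fit_mean(sparse_samples)
    return alpha * source + (1 - alpha) * target

# Hypothetical travel times (minutes) for a dense and a sparse region.
estimate = transfer([10.0, 12.0, 11.0], [15.0], alpha=0.5)
```

Real transfer learning would instead share learned features or model parameters between regions, but the principle is the same: the dense region supplies structure that the sparse region's data alone cannot.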

Rambøll will be able to in-source the software infrastructure and host analytics for municipalities. Mobile Industrial Robots (MiR) will be able to in-source the software infrastructure and host analytics for building owners. Additional value will be created because the above studies will be conducted for multiple transportation modes, with a focus on cars and different kinds of e-bikes. We have access to a unique data foundation that will enable these studies.

Impact

The project will provide a prototype software infrastructure that aims to replace commercial road network maps with the crowd-sourced OpenStreetMap (OSM) and, for indoor settings, to enable new data sources describing indoor geography.

The open data might not be curated, which means that new quality control tools are required to ensure that computed travel times are correct. This will reduce cost.

News / coverage

Participants

Project Manager

Christian S. Jensen

Professor

Aalborg University
Department of Computer Science

E: csj@cs.aau.dk

Ira Assent

Professor

Aarhus University
Department of Computer Science

Kristian Torp

Professor

Aalborg University
Department of Computer Science

Bin Yang

Professor

Aalborg University
Department of Computer Science

Mads Darø Kristensen

Principal Application Architect

The Alexandra Institute

Søren Krogh Sørensen

Senior Software Engineer

The Alexandra Institute

Frederik Palludan Madsen

Software Engineer

The Alexandra Institute

Mikkel Baun Kjærgaard

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Norbert Krüger

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Leon Bodenhagen

Associate Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Brian Rosenkilde Jeppesen

Project Manager Roads and Traffic

Rambøll

Stig Grønning Søbjærg

Engineer

Rambøll

Mads Graungaard

Mobility and Traffic Engineer

Rambøll

Johan Poulsgaard

Engineer

Rambøll

Christoffer Bø

Traffic and Mobility Planner

Rambøll

Morten Steen Nørby

Software Manager

Mobile Industrial Robots

Kasper Fromm Pedersen

Research Assistant

Aalborg University
Dept. of Computer Science

Helene Hauschultz

PhD Student

Aarhus University
Department of Mathematics

Avgi Kollakidou

PhD Student

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Hao Miao

PhD Student

Aalborg University
Department of Computer Science

Partners

Categories
Bridge project

Deep Learning and Automation of Image-Based Quality of Seeds and Grains

DIREC project

Deep Learning and Automation of Image-based Quality of Seeds and Grains

Summary

Today, manual visual inspection of grain is still one of the most important quality assurance procedures throughout the value chain of bringing cereals from the field to the table.

Together with FOSS, this project aims to develop and validate a method of automated image-based solutions that can replace subjective manual inspection and improve performance, robustness and consistency of the inspection. The method has the potential of providing the grain industry with a disruptive new tool for ensuring quality and optimising the value of agricultural commodities.

Project period: 2020-2024
Budget: DKK 3.91 million

To derive maximum value from the data, there is a need to develop methods for training algorithms to automatically provide industry with the best possible feedback on the quality of incoming materials. The purpose is to develop a framework which replaces the current feature-based models with deep learning methods. By using these methods, the potential is to significantly reduce the labor needed to expand the application of EyeFoss™ into new applications, e.g. maize and coffee, while at the same time increasing the performance of the algorithms in accurately and reliably describing the quality of cereals.

This project aims at developing and validating, with industrial partners, a method of using deep learning neural networks to monitor the quality of seeds and grains using multispectral image data. The method has the potential of providing the grain industry with a disruptive new tool for ensuring quality and optimising the value of agricultural commodities. The ambition of the project is to end up with an operationally implemented deep learning framework for deploying EyeFoss™ to new applications in the industry. In order to achieve this, the project will team up with DTU Compute as a strong competence centre on deep learning as well as a major player within the European grain industry (to be selected).

The research aim of the project is the development of AI methods and tools that enable industry to develop new solutions for automated image-based quality assessment. End-to-end learning of features and representations for object classification by deep neural networks can lead to significant performance improvements. Several recent mechanisms have been developed for further improving performance and reducing the need for manual annotation work (labelling) including semi-supervised learning strategies and data augmentation.

Semi-supervised learning combines generative models that are trained without labels (unsupervised learning) and pre-trained networks (transfer learning) with supervised learning on small sets of labelled data. Data augmentation employs both knowledge-based transformations, such as translations and rotations, and more general learned transformations, such as parameterised “warps”, to increase variability in the training data and increase robustness to natural variation.
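As a toy illustration of knowledge-based augmentation, the sketch below generates rotated and translated variants of a tiny image represented as a 2D list. A real pipeline would use an image library and operate on multispectral arrays; all names here are hypothetical:

```python
def rotate90(img):
    """Rotate a 2D image 90 degrees clockwise (a knowledge-based transform)."""
    return [list(row) for row in zip(*img[::-1])]

def translate(img, dx, pad=0):
    """Shift each row right by dx pixels, padding with a constant value."""
    return [[pad] * dx + row[: len(row) - dx] for row in img]

def augment(img):
    """Generate simple augmented variants of one labelled training image;
    each variant keeps the original image's label."""
    return [img, rotate90(img), translate(img, 1)]

img = [[1, 2], [3, 4]]
variants = augment(img)
```

Because every variant inherits the original label, a few labelled kernels can be stretched into many training examples, which is precisely how augmentation reduces the manual annotation work mentioned above.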

Scientific value
The scientific value of the project will be new methods and open source tools and associated knowledge of their performance and properties in an industrial setup.

Capacity building
The aim of the project is to educate one PhD student in close collaboration with FOSS – the aim is that the student will be present at FOSS at least 40% of the time to secure close integration and knowledge exchange with the development team at FOSS working on introducing EyeFoss™ to the market. Specific focus will be on exchange at the faculty level as well; the aim is to have faculty from DTU Compute present at FOSS and vice versa for the senior FOSS specialists who supervise the PhD student. This will secure better networking, anchoring and capacity building also at the senior level. The PhD project will additionally be supported by a master-level program already established between the universities and FOSS.

Societal impact
Specifically, the project aims to provide FOSS with new tools to assist in scaling the market potential of the EyeFoss™ from its current potential of EUR 20 million/year. Adding, in a cost-efficient way, applications for visual inspection of products like maize, rice or coffee has the potential to at least double the market potential. In addition, the contributions will be of generic relevance to companies developing image-based solutions for food quality/integrity assessment and will provide excellent application and AI integration knowledge of commercial solutions already on the market to other Danish companies.

Impact

The project has the potential of providing the grain industry with a disruptive new tool for ensuring quality and optimising the value of agricultural commodities.

News / coverage

Participants

Project Manager

Lars Kai Hansen

Professor

Technical University of Denmark
DTU Compute

E: lkai@dtu.dk

Kim Steenstrup Pedersen

Professor

University of Copenhagen
Department of Computer Science

Lenka Hýlová

PhD Fellow

Technical University of Denmark
DTU Compute

Thomas Nikolajsen

Head of Front-end Innovation

FOSS

Toke Lund-Hansen

Head of Spectroscopy team

FOSS

Erik Schou Dreier

Senior Scientist

FOSS

Partners