Categories
Bridge project

DIREC project

Automatic Tuning of Spin-qubit Arrays

Summary

Spin-qubit quantum-dot arrays are one of the most promising candidates for universal quantum computing. But with the growing size of the arrays, a bottleneck has emerged: tuning the many control parameters of an array by hand is time-consuming and very expensive. The nascent spin-qubit industry needs a platform of algorithms that can be fine-tuned to specific sensing hardware and that allows cold-start tuning of a device. Such a platform must comprise efficient and scalable algorithms that are robust against common problems in manufactured devices. The current landscape of automatic tuning algorithms does not fulfill these requirements. This project aims to overcome the major obstacles in developing such algorithms.

Project period: 2022-2025

Spin-qubit quantum-dot arrays are one of the most promising candidates for universal quantum computing. While manufacturing even single dots used to be a challenge, multi-dot arrays are nowadays becoming the norm. However, with the growing size of the arrays, a new bottleneck has emerged: tuning the many control parameters of an array by hand is no longer feasible. This makes R&D of this promising technology difficult: hand-tuning by experts becomes harder, as not only does the size increase, but the interactions between the parameters become more numerous and more intricate. This process is time-consuming and very expensive. Not only does tuning a device require several steps, each of which can take several days to complete, but at each step errors can manifest that lead to re-tuning of earlier steps or even starting from scratch with a new device. Moreover, devices drift over time and have to be re-tuned to their optimal operating point.

The lack of automatic tuning algorithms that can run on dedicated or embedded hardware is by now one of the biggest factors hampering the growth of the nascent spin-qubit industry as a whole. What is needed is a platform of algorithms that can be fine-tuned to specific sensing hardware and that allows cold-start tuning of a device: that is, after the device is cooled, tuning it up to a specific operating regime and finding the parameters required to perform specific operations or measurements. Such a platform must include algorithms that are efficient, scalable, and robust against common problems in manufactured devices.

The current landscape of automatic tuning algorithms does not fulfill these requirements. Many algorithms are developed specifically for common small device types and use techniques that do not scale up to more complex devices, or include assumptions about device geometry that many experimental devices do not fulfill. On the other hand, recent candidates for scalable algorithms are theoretical or target simplified simulations, and they lack robustness to the difficulties encountered on real devices.

The research aims, outlined below, are to overcome the major obstacles in developing these algorithms:

Aim 1: Develop interpretable physics-inspired Machine Learning approaches (WP2)

Machine Learning approaches often rely on flexible black-box models that allow them to solve a task with high precision. However, these models are not interpretable, which makes them unusable for many tasks in physics. Conversely, interpretable models often lack the flexibility required to solve the task satisfactorily. We will develop physics-inspired models that add flexibility to physics-based models in a way that does not interfere with interpretability. A key element to achieving this is to limit the degree of variation the flexible components can add on top of the physical model. We will test robustness by applying the models to different devices, tuned into regimes that require the additional flexibility.
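
As a rough illustration of this idea, the sketch below pairs a fixed, interpretable physics model with a small neural correction whose magnitude is explicitly capped; the bounding trick and all names are our own assumptions, not the project's actual architecture.

```python
# A minimal sketch, assuming PyTorch: an interpretable physics model plus a
# flexible correction that is hard-capped so it cannot overrule the physics.
import torch
import torch.nn as nn

class BoundedResidualModel(nn.Module):
    def __init__(self, physics_model: nn.Module, max_correction: float = 0.05):
        super().__init__()
        self.physics = physics_model          # parameters keep their physical meaning
        self.residual = nn.Sequential(        # small black-box correction network
            nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1)
        )
        self.max_correction = max_correction  # explicit bound on the flexible part

    def forward(self, x):
        base = self.physics(x)
        # tanh keeps the learned correction within +/- max_correction
        return base + self.max_correction * torch.tanh(self.residual(x))
```

Because the correction is bounded, the fitted physical parameters remain the dominant, interpretable description of the device.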

Aim 2: Demonstrate practical scalability of algorithms based on line-scans (WP1-3)

For an algorithm to be useful in practice, it must scale to large device sizes. An example of an approach that is not scalable is the use of 2D raster scans to measure angles and slopes of transitions, because the number of required 2D scans rises quickly with the number of device parameters. We will instead rely on 1D line-scans and demonstrate on real devices that we can infer the same quantities as 2D scans in less measurement time.
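
To make the idea concrete, here is a minimal sketch, assuming a simple steepest-gradient transition detector, of how a handful of 1D line-scans could recover the slope that a 2D raster scan would show; the signal model and function names are illustrative assumptions.

```python
# Sketch: recover a charge-transition slope from a few 1D line-scans.
import numpy as np

def transition_position(scan_values, gate_a_axis):
    """Locate the transition along one scan as the point of steepest change."""
    return gate_a_axis[np.argmax(np.abs(np.diff(scan_values)))]

def estimate_slope(gate_b_offsets, line_scans, gate_a_axis):
    """Each scan sweeps gate A at a fixed gate-B offset; a straight-line fit
    through the detected positions yields the transition slope from far
    fewer measurement points than a full 2D raster scan."""
    positions = [transition_position(s, gate_a_axis) for s in line_scans]
    slope, _intercept = np.polyfit(gate_b_offsets, positions, deg=1)
    return slope
```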

Aim 3: Automate discovery of optimal measurement strategies (WP3)

To keep devices at an optimal operating point, they have to be re-tuned at high frequency (e.g., every 100 ms). We will develop adaptable active learning and measurement selection strategies that allow monitoring and adaptation of the device parameters while the device is running.
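
One simple active-learning baseline of the kind this aim could build on is shown below: pick the next measurement where an ensemble of models disagrees most. The ensemble-variance criterion is our illustrative assumption, not the project's chosen strategy.

```python
# Sketch: uncertainty-driven selection of the next measurement.
import numpy as np

def select_next_measurement(candidate_settings, models):
    """Return the candidate control setting the models disagree on most."""
    # predictions: shape (n_models, n_candidates)
    predictions = np.stack([m(candidate_settings) for m in models])
    disagreement = predictions.var(axis=0)
    return candidate_settings[np.argmax(disagreement)]
```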

State-of-the-art

Currently, the largest manufactured quantum-dot array has 16 qubits in a 4×4 configuration [1]. While promising, it has not been successfully controlled yet. The largest hand-tuned array has 8 qubits [2] in a 2×4 configuration, where the array was tuned to contain a single electron on each dot.

To date, most automatic tuning algorithms are compatible with arrays of at most two qubits and use deep-learning techniques to tackle several steps of the tuning process. These steps involve coarse tuning of a device into the area where it forms quantum dots [3], finding the empty charge state of the device [3], finding a regime with two distinct dots [4], and navigating to a regime with the correct number of charge states [5].

All these techniques are primarily based on 2D raster scans of the charge-sensor response as a function of two control voltages and rely on additional heuristics to allow for efficient tuning.

Far less work exists on techniques that support more than two qubits. For the task of finding inter-dot electron transitions [6-7], an algorithm has been demonstrated on a silicon device with 3 dots and on an idealized simulated device with up to 16 dots in a 4×4 configuration, allowing for automatic labeling of transitions. However, the device used in [6] had a favorable sensor setup that is not applicable to more general devices.

Value Creation

The project is situated at a perfect point in time for realizing its scientific impact via publications by the postdoctoral researcher and the development of open-source algorithms. We are at a turning point in Danish quantum efforts: in 2021, the EU funded the QLSI quantum consortium, a 10-year project to develop spin-qubit quantum-dot arrays with strong involvement of Danish collaborators. In 2022, the Novo Nordisk Foundation funded the Quantum for Life Center, and in 2023 the new Danish NATO center for quantum technologies will open at the University of Copenhagen. Moreover, there is a long-term pledge from the Novo Nordisk Foundation to fund development of the first functional quantum computer until 2034. With these long-term investments, the scientific outcomes of the project will become available at a time when many other projects are starting and automatic tuning algorithms are becoming mandatory for many of these efforts. To aid these goals, this project will make use of an existing collaboration of QM and KU with the IGNITE EU project, which aims to develop a 48 spin-qubit device, to verify the usefulness of the developed algorithms.

QM will create value by bundling their hardware solutions with tuned versions of the software, allowing their customer base to develop and test their devices on a shorter time-scale.

Moreover, this project will foster knowledge transfer between machine learning and quantum physics in order to continue the development of high-quality machine-learning approaches. To this end, regular meetings between all participants will be conducted, and the work will be presented at physics conferences.

Value

The project creates value by strategically elevating its position within quantum technology development. This is achieved by creating scientific value through publications and the development of open-source algorithms. Furthermore, it fosters the transfer of knowledge between machine learning and quantum physics, ultimately enabling a shorter development process for hardware solutions and contributing to growth in the field through interdisciplinary collaboration and dissemination efforts.

Participants

Project Manager

Oswin Krause

Assistant Professor

University of Copenhagen
Department of Computer Science

E: oswin.krause@di.ku.dk

Ferdinand Kuemmeth

Professor

University of Copenhagen
Niels Bohr Institute
Center for Quantum Devices

Anasua Chatterjee

Assistant Professor

University of Copenhagen
Niels Bohr Institute
Center for Quantum Devices

Jonatan Kutchinsky

General Manager

Quantum Machines

Joost van der Heijden

Scientific Business Development Manager

Quantum Machines

Partners

Categories
Bridge project

DIREC project

Verified Voting Protocols and Blockchains

Summary

There is constant interest in internet voting from election commissions around the world. At the same time, there is a need for online voting in blockchain governance. However, building an internet voting system is not easy: the design of new cryptographic protocols is error-prone, and public trust in the elected body is easily threatened.

In collaboration with an industrial partner, this project aims to improve the security and quality of internet voting systems and to influence regulation on minimum quality requirements for blockchains.

Project period: 2023-2025
Budget: DKK 7.5 million

Our aim is to bring the security proofs about protocols much closer to their implementation.

Here are four considerations that explain the unmet needs of this project.

  1. Voting protocols, both in the form of Blockchain Governance Protocols and Internet Voting Protocols, have become increasingly popular and will be more widely deployed as a result of an ongoing digitalization effort of democratic processes, further driven by the current pandemic.
  2. Elections are based on trust, which means that election systems ideally should be based on algorithms and data structures that are already trusted. Blockchains provide such a technology: a trusted bulletin board, which can be used as part of voting (see the sketch after this list).
  3. Voting crucially depends on establishing the identity of the voter to avoid fraud and to establish eligibility verifiability.
  4. Any implementation created by a programmer, be it a Blockchain Governance Protocol or an Internet Voting Protocol, can have bugs that quickly erode public confidence.
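
To illustrate the bulletin-board idea in isolation, here is a minimal sketch of a hash-chained, append-only log; it is a toy under our own assumptions and has none of the consensus, privacy, or identity machinery of an actual blockchain such as Concordium.

```python
# Sketch: an append-only bulletin board where tampering is detectable.
import hashlib
import json

class BulletinBoard:
    def __init__(self):
        self.entries = []  # list of (payload, link_hash) pairs

    def post(self, payload: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "genesis"
        link = hashlib.sha256(
            (prev + json.dumps(payload, sort_keys=True)).encode()
        ).hexdigest()
        self.entries.append((payload, link))
        return link  # a receipt the poster can later check

    def verify(self) -> bool:
        """Recompute the chain; altering any posted entry breaks all later links."""
        prev = "genesis"
        for payload, link in self.entries:
            expected = hashlib.sha256(
                (prev + json.dumps(payload, sort_keys=True)).encode()
            ).hexdigest()
            if expected != link:
                return False
            prev = link
        return True
```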

This project aims to shed light on the overall research question of how to design high-assurance blockchain governance software and whether such protocols can scale to Internet Voting Protocols.

(RO) To advance the state of the art of high-assurance cryptographic software, especially for blockchain governance protocols and voting protocols.

(WP1) To achieve (RO), we start by working towards a high-assurance implementation of a blockchain governance protocol (e.g., the one used by Concordium) and an existing blockchain voting protocol, such as the Open Vote Network or Election Guard. If there is sufficient progress in the design of a software-independent protocol, we will retarget our research to such a protocol. This will use existing software projects developed at AU: SSProve, ConCert, and various libraries for high-assurance cryptographic primitives. AU will take the lead for this WP.

(WP2) The Concordium blockchain provides a secure and private way to put credentials, such as passport information, on the internet. In this project, we aim to integrate this with legacy ID infrastructure, such as MitID. We will investigate how to reuse such blockchain-based identities for internet voting. We aim to address (4) above in this way. Concordium will take the lead for this WP.

(WP3) Implementation of the cryptographic protocol. Based on the results from (WP1), we propose to develop an open-source library that makes our high-assurance blockchain voting technology available for use in third-party products. We envision releasing a prototype similar to Election Guard (which is provided by Microsoft), but with a blockchain providing the ID infrastructure as well as functioning as a public bulletin board. ALX will take the lead for this WP.

Scientific value
Internet voting provides a unique collection of challenges, such as vote privacy, software quality, receipt-freeness, coercion resistance, and dispute resolution. Subsets of these can be solved separately; here we aim to guarantee vote privacy and software quality by means of a privacy-preserving and accountable blockchain, and to formally verify substantial parts of the resulting voting protocol.

Capacity building
The proposed project pursues capacity building by training a PhD student. The Alexandra Institute will build capacity in Rust, smart contracts, and high-assurance cryptographic software.

Business value
The project is highly interesting to and relevant for the industry. There are two reasons why it is interesting for Concordium. On the one hand, voting is an excellent application demonstrating the vision of the blockchain and, on the other hand, Concordium will as part of the project implement a voting scheme to be used for decentralized governance of the blockchain. More precisely, the Concordium blockchain is designed to support applications where users can act privately while maintaining accountability and meeting regulatory requirements.

Furthermore, it is an explicit goal of Concordium to support formally verified smart contracts. All these goals fit nicely with the proposed project, and it will be important for Concordium to demonstrate that the blockchain actually supports the secure voting schemes developed in the project. With respect to governance, Concordium needs to develop a strong voting scheme allowing members of its community to vote on proposed features and to elect members of the Governance Committee. The project is of great interest to the Alexandra Institute as an opportunity to apply and improve in-house capacity for implementing cryptographic algorithms. The involvement of Alexandra will ensure that the theoretical findings of the proposed project are translated into usable real-world products.

Societal value
Internet voting was stalled for three years in Switzerland due to insecure protocols and implementations. We aim to develop technology to improve the security (audits) of such protocols and implementations. Around 5 billion dollars have been lost since 2018 due to insecure blockchain implementations, often affecting retail investors. Our project aims to improve the state of the art of cryptographic software and thus influence regulation on minimal quality requirements for blockchains, similar to existing Swiss regulation for e-voting.

Value

The project seeks to implement secure blockchain-based voting schemes that support decentralized governance and regulatory compliance, while advancing cryptographic software to improve security measures and influence regulatory standards, thereby reducing risks and strengthening society's trust in digital voting and blockchain implementations.

News / coverage

Participants

Project Manager

Bas Spitters

Associate Professor

Aarhus University
Department of Computer Science

E: spitters@cs.au.dk

Gert Læssøe Mikkelsen

Head of Security Lab

The Alexandra Institute

Nibras Stiebar-Bang

Chief Technology Officer

Concordium ApS

Bernardo David

Associate Professor

IT University of Copenhagen

Diego Aranha

Associate Professor

Aarhus University
Department of Computer Science

Lasse Letager Hansen

PhD Student

Aarhus University
Department of Computer Science

Eske Hoy Nielsen

PhD Student

Aarhus University
Department of Computer Science

Partners

Categories
Bridge project

DIREC project

Low-Code Programming of Spatial Contexts for Logistic Tasks in Mobile Robotics

Summary

Low-volume production constitutes a large share of the Danish manufacturing industry. An unmet need in this industry is flexibility and adaptability of manufacturing processes. Existing solutions for automating industrial logistics tasks include combinations of automated storage, conveyor belts, and mobile robots with special loading and unloading docks.

However, these solutions require major investments and are not cost-efficient for low-volume production; moreover, low-volume production is often labor-intensive.

Together with industrial partners, this project will investigate production scenarios in which a machine can be operated by untrained personnel, using low-code development for adaptive and reconfigurable robot programming of logistic tasks.

Project period: 2022-2025
Budget: DKK 7.15 million

An unmet need in industry is flexibility and adaptability of manufacturing processes in low-volume production. Low-volume production represents a large share of the Danish manufacturing industry. Existing solutions for automating industrial logistics tasks include combinations of automated storage, conveyor belts, and mobile robots with special loading and unloading docks. However, these solutions require major investments and are not cost efficient for low-volume production.

Therefore, low-volume production is today labor-intensive, as automation technology and software are not yet cost-effective for such production scenarios, where a machine must be operable by untrained personnel. The need for flexibility, ease of programming, and fast adaptability of manufacturing processes is recognized in both Europe and the USA. EuRobotics highlights the need for systems that can easily be re-programmed without the use of skilled system configuration personnel. Furthermore, the American roadmap for robotics highlights adaptable and reconfigurable assembly and manipulation as an important capability for manufacturing.

The company Enabled Robotics (ER) aims to provide easy programming as an integral part of their products. Their mobile manipulator ER-FLEX consists of a robot arm and a mobile platform. The ER-FLEX mobile collaborative robot provides an opportunity to automate logistic tasks in low-volume production. This includes manipulation of objects in production in a less invasive and more cost-efficient way, reusing existing machinery and traditional storage racks. However, this setting also challenges the robots due to the variability in rack locations, shelf locations, box types, object types, and drop off points.

Today, the ER-FLEX can be programmed by means of block-based features, which can be configured into high-level robot behaviors. While this approach offers an easier programming experience, the operator must still have good knowledge of robotics and programming to define the desired behavior. To make the product accessible to a wider audience of users in low-volume production companies, robot behavior programming has to be defined in a simpler and more intuitive manner. In addition, a solution is needed that addresses the variability in a time-efficient and adaptive way to program the 3D spatial context.

Low-code software development is an emerging research topic in software engineering. Research in this area has investigated the development of software platforms that allow non-technical people to develop fully functional application software without having to use a general-purpose programming language. The scope of most low-code development platforms, however, has been limited to creating software-only solutions for business-process automation of low-to-moderate complexity.

Programming of robot tasks still relies on dedicated personnel with special training. In recent years, the emergence of digital twins, block-based programming languages, and collaborative robots that can be programmed by demonstration has led to breakthroughs in this field. However, existing solutions still lack the ability to address variability when programming logistics and manipulation tasks in an ever-changing environment.

Current low-code development platforms do not support robotic systems. The extensive use of hardware components and sensor data in robotics makes it challenging to translate low-level manipulations into a high-level language that is understandable to non-programmers. In this project, we will tackle this by constraining the problem to the spatial dimension and by using machine learning for adaptability. The first research question we want to investigate is therefore whether and how the low-code development paradigm can support robot programming of spatial logistic tasks in indoor environments. The second research question addresses how to apply ML-based methods for remapping between high-level instructions and the physical world to derive and execute new task-specific robot manipulation and logistic actions.

Therefore, the overall aim of this project is to investigate the use of low-code development for adaptive and reconfigurable robot programming of logistic tasks. Through a case study proposed by ER, the project builds on SDU's previous work on domain-specific languages (DSLs) to propose a solution for high-level programming of the 3D spatial context in natural language, and works on using machine learning for adaptable programming of robotic skills. RUC will contribute interaction competences to optimize the usability of the approach.
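
As a purely illustrative sketch of the direction, the following shows how one natural-language-style DSL command for a spatial logistic task might be parsed into a structured action; the grammar, vocabulary, and task fields are hypothetical, not ER's or SDU's actual language.

```python
# Sketch: parsing a tiny spatial-context DSL command into a structured task.
import re
from dataclasses import dataclass

@dataclass
class LogisticTask:
    item: str
    source: str
    destination: str

COMMAND = re.compile(
    r"move (?P<item>[\w\s]+?) from (?P<source>[\w\s]+?) to (?P<dest>[\w\s]+)$"
)

def parse_command(text: str) -> LogisticTask:
    """Map one instruction to a structured task a robot planner could execute."""
    m = COMMAND.match(text.strip().lower())
    if m is None:
        raise ValueError(f"not a recognized logistics command: {text!r}")
    return LogisticTask(m["item"].strip(), m["source"].strip(), m["dest"].strip())

# parse_command("move blue box from rack A shelf 2 to dropoff 1")
# -> LogisticTask(item='blue box', source='rack a shelf 2', destination='dropoff 1')
```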

Our research methodology is oriented towards design science, which provides a concrete framework for dynamic validation in an industrial setting. For the problem investigation, we are planning a systematic literature review of existing solutions that address 3D space mapping and the variability of logistic tasks. For the design and implementation, we will first address the requirement of building a spatial representation of the task conditions and the environment using external sensors, which will give us a map for deploying the ER platform. Furthermore, to minimize the input that users need to provide to link the programming parameters to the physical world, we will investigate and apply sensor-based user-interface technologies and machine learning. The designed solutions will be combined into a low-code development platform that allows for high-level robot programming.

Finally, for validation, the resulting low-code development platform will be tested on logistics-manipulation tasks with the industry partner Enabled Robotics, both at a mockup test setup established in the SDU I4.0 lab and at a customer site, with increasing difficulty in terms of variability.

Value creation

Making it easier to program robotic solutions enables both new users of the technology and new use cases. This contributes to DIREC's long-term goal of building up research capacity, as this project focuses on building the competences necessary to address challenges within software engineering, cyber-physical systems (robotics), interaction design, and machine learning.

Scientific value
The project’s scientific value is to develop new methods and techniques for low-code programming of robotic systems with novel user interface technologies and machine learning approaches to address variability. This addresses the lack of approaches for low-code development of robotic skills for logistic tasks. We expect to publish at least four high-quality research articles and to demonstrate the potential of the developed technologies in concrete real-world applications.

Capacity building
The project will build and strengthen research capacity in Denmark directly through the education of one PhD candidate, and through collaboration between researchers, domain experts, and end-users that will lead to R&D growth in the industrial sector. In particular, it will build research competences at the intersection of software engineering and robotics to support the digital foundation for this sector.

Societal and business value
The project will create societal and business value by providing new solutions for programming robotic systems. A 2020 market report predicts that the market for autonomous mobile robots will grow from DKK 310M in 2021 to DKK 3,327M in 2024, with inquiries from segments such as semiconductor manufacturing, automotive, automotive suppliers, pharma, and manufacturing in general. ER wants to tap into these market opportunities by providing an efficient and flexible solution for internal logistics. ER would like to position its solution with benefits such as making logistics smoother and programmable by a wide customer base while alleviating problems with shortage of labor. This project enables ER to improve their product with regard to key parameters. The project will provide significant societal value and directly contribute to SDG 9 (build resilient infrastructure, promote inclusive and sustainable industrialization, and foster innovation).

Value

The project will make a strong contribution to the digital foundation for robotics based on software competences and contribute to Denmark's position as a digital frontrunner in this area.

Participants

Project Manager

Thiago Rocha Silva

Associate Professor

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

E: trsi@mmmi.sdu.dk

Aljaz Kramberger

Associate Professor

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

Mikkel Baun Kjærgaard

Professor

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

Mads Hobye

Associate Professor

Roskilde University
Department of People and Technology

Lars Peter Ellekilde

Chief Executive Officer

Enabled Robotics ApS

Anahide Silahli

PhD

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

Partners

Categories
Bridge project

DIREC project

Trust through Software Independence and Program Verification

Summary

There is constant interest in internet voting from election commissions around the world. Greenland illustrates this well: its election law was changed in 2020 and now permits the use of internet voting. However, building an internet voting system is not easy: the design of new cryptographic protocols is error-prone, and public trust in the elected body is easily threatened.

A software-independent voting protocol is one in which an undetected change or error in the software cannot cause an undetectable change or error in an election result. Program verification techniques have come a long way and promise to improve the reliability and cybersecurity of election technologies, but it is by no means clear whether formally verified software-independent voting systems also increase public trust in elections.

Together with the authorities in Greenland, this project will investigate what effect program verification has on public trust in election technologies. The project aims to help make internet elections more credible, which can strengthen developing and post-conflict democracies around the world.

Project period: 2023-2026
Budget: DKK 4.6 million

Here are four considerations that explain the unmet needs of this proposed project.

  1. Voting protocols have become increasingly popular and will be more widely deployed in the future as a result of an ongoing digitalization effort of democratic processes.
  2. Elections are based on trust, which means that election systems ideally should be based on algorithms and data structures that are trusted.
  3. Program verification techniques are believed to strengthen this trust.
  4. Greenland's laws were recently changed to allow internet voting.

The integrity of an election result is best captured through software independence in the sense of Rivest and Wack's definition: “A voting system is software-independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome.” Software independence is widely considered a precondition for trust. The assumption that program verification increases trust arises from the fact that those doing the verification become convinced that the system implements its specification. However, the question is whether these arguments also convince others not involved in the verification process that the verified system can be trusted, and if not, under which additional assumptions they will trust it.

Thus, the topic of this project is to study the effects of program verification on public trust in the context of election technologies. The project is therefore structured into two parts: first, can we formally verify software independence using modern program verification techniques, and second, is software independence sufficient to generate trust?

The research project aims to shed more light on the overall research question of whether formal verification of software independence can strengthen public confidence. Answering this question in the affirmative would lead to a novel understanding of what it means for voting protocols to be trustworthy and of how to increase public confidence in internet voting, which may be useful for countries that lack trust in the security of paper records.

(RO1) Explore the requirement of software-independence in the context of formal verification of existing Internet voting protocols.

(RO2) Study the public confidence in Greenland with respect to software-independence and formally verified Internet Voting protocols and systems.

Software Independence

To achieve (RO1), we will consider two theories of what constitutes software independence. The game-theoretic view, similar to proofs by reduction and simulation in cryptography, reduces the software independence of one protocol to that of another. The statistical view gives precise bounds on the likelihood that the election technology produces an incorrect result. We plan to understand how to formally capture the requirement of software independence by selecting existing or newly developed voting protocols and generating formally verified implementations. For all voting protocols that we design within this project, we will use proof assistants to derive mechanized proofs of software independence.
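
As a toy illustration of the kind of statement such a mechanized proof would be about, the following Lean sketch renders the Rivest-Wack definition abstractly; every type, field, and name here is our own assumption, not the project's formalization.

```lean
-- Sketch: an abstract rendering of software independence in Lean 4.
structure VotingSystem (Software Ballots Outcome : Type) where
  tally       : Software → Ballots → Outcome  -- software-dependent count
  passesAudit : Software → Ballots → Bool     -- software-independent check

def SoftwareIndependent {S B O : Type}
    (V : VotingSystem S B O) (reference : S) : Prop :=
  -- any software change that alters the outcome must fail the audit
  ∀ (s : S) (b : B),
    V.tally s b ≠ V.tally reference b → V.passesAudit s b = false
```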

User Studies

To achieve (RO2), we will, together with the Domestic Affairs Division of the Government of Greenland, study the effects of formal verification of software independence on public confidence. The core hypothesis of these studies is that strategic communication of concepts such as software independence can be applied in such a way that it strengthens public confidence. We will invite Greenland voters to participate in pilot demonstrations and user studies and will evaluate the answers qualitatively and quantitatively.

Scientific value
Internet voting provides a unique collection of challenges, such as election integrity, vote privacy, receipt-freeness, coercion resistance, and dispute resolution. Here we focus on election integrity and aim to show that formally verifying the software independence of a voting system would increase the public's confidence in the accuracy of the election result.

Capacity building
The proposed project pursues two kinds of capacity building. First, by training the PhD student and university students affiliated with the project, it makes Denmark a leading place for secure internet voting. Second, if successful, the results of the project will contribute to the Greenland voting project and to international capacity building in the sense that they will strengthen democratic institutions.

Societal value
Some nations are rethinking their electoral processes and the ways they hold elections. Since the start of the Covid-19 pandemic, approximately a third of all nations scheduled to hold a national election have postponed it. It is therefore not surprising that countries are exploring internet voting as an additional voting channel. The results of this project will contribute to making internet elections more credible, and therefore strengthen developing and post-conflict democracies around the world.

Value

The project creates value by increasing the credibility of internet voting systems, thereby strengthening developing and post-conflict democracies around the world, especially countries re-evaluating their electoral processes amid challenges such as the Covid-19 pandemic.

News / coverage

Participants

Project Manager

Carsten Schürmann

Professor

IT University of Copenhagen
Department of Computer Science

E: carsten@itu.dk

Klaus Georg Hansen

Founder

KGH Productions

Markus Krabbe Larsen

PhD Student

IT University of Copenhagen
Department of Computer Science

Bas Spitters

Associate Professor

Aarhus University
Department of Computer Science

Oksana Kulyk

Associate Professor

IT University of Copenhagen

Philip Stark

Professor

University of California, Berkeley

Peter Ryan

Professor, Dr.

University of Luxembourg

Partners

Categories
Bridge project

DIREC project

Multimodal Data Processing of Earth Observation Data

Summary

Based on observations of the Earth, a range of Danish public organizations build and maintain important data foundations that are used for decision-making, e.g., to execute environmental law or make planning decisions in both private and public organizations in Denmark.

Together with some of these public organizations, this project aims to support the digital acceleration of the green transition by strengthening the data foundation for environmental data.

Public organizations need to exploit new data sources and create a scalable data warehouse for Earth observation data. This will involve building pipelines for multimodal data processing and designing user-oriented data hubs and analytics.

Project period: 2022-2025
Budget: DKK 12.27 million

The Danish partnership for digitalization has concluded that there is a need to support the digital acceleration of the green transition. This includes strengthening efforts to establish a stronger data foundation for environmental data. Based on observations of the Earth, a range of Danish public organizations build and maintain important data foundations. Such foundations are used for decision-making, e.g., for executing environmental law or making planning decisions in both private and public organizations in Denmark.

The increasing possibilities of automated data collection and processing can decrease the cost of creating and maintaining such data foundations and enable service improvements that provide more accurate and richer information. To realize such benefits, public organizations need to be able to utilize the new data sources that become available, e.g., to automate manual data curation tasks and increase the accuracy and richness of data. However, the organizations are challenged by the available methods' ability to efficiently combine the different sources of data for their use cases. This is particularly the case when user-facing tools must be constructed on top of the data foundation. The availability of better data will, among other things, help end-users decrease the cost of executing environmental law and making planning decisions. In addition, the ability of public data sources to provide more value to end-users improves the societal return on investment for publishing these data, which is in the interest of the public data providers as well as their end-users and society at large.

The Danish Environmental Protection Agency (EPA) has the option to receive data from many data sources but does not utilize them today, because the lack of infrastructure makes it cost-prohibitive to take advantage of the data. They are therefore expressing a need for methods to enable a data hub that provides data products combining satellite, orthophoto, and IoT data. The Danish Geodata Agency (GDA) collects very large quantities of Automatic Identification System (AIS) data from ships sailing in Denmark but uses this data only to a very limited degree today. The GDA needs methods to enable a data hub that combines multiple sources of ship-based data, including AIS data, ocean observation data (sea level and sea temperature), and meteorological data. On top of this, there is a need for analytics that can provide services for estimating travel time at sea or finding the most fuel-efficient routes. This includes estimating the potential for lowering CO2 emissions at sea by following efficient routes.

Geo supports professional users in performing analysis of subsurface conditions based on their own extensive data, gathered from tens of thousands of geotechnical and environmental drilling operations, and on public sources. They deliver a professional software tool that presents this multimodal data in novel ways and are actively working on creating an educational platform giving high-school students access to the same data. Geo has an interest in and need for methods for adding live, multimodal data to their platform, to support both professional decision makers and students. Furthermore, they need novel ways of querying and representing such data to make it accessible to professionals and students alike. Creating a testbed for combining Geo's data with satellite feeds, combined with automated processing to interpret this data, will create new synergies and has the potential to greatly improve visualizations of the subsurface by building detailed regional and national 3D voxel models.

Therefore, the key challenges that this project will address are how to construct scalable data warehouses for Earth observation data, how to design systems for combining and enriching multimodal data at scale, and how to design user-oriented data interfaces and analytics to support domain experts. Thereby, the project helps the organizations produce better data for the benefit of the green transition of Danish society.

The aim of the project is to do use-inspired basic research on methods for multimodal processing of Earth observation data. The research will cover the areas of advanced and efficient big data management, software engineering, Internet of Things, and machine learning. The project will conduct research in these areas in the context of three domain cases: with GDA on sea data and with EPA and GEO on environmental data.

Scalable data warehousing is the key challenge that the work on advanced and efficient big data management will address. The primary research question is how to build a data warehouse with billions of rows of all relevant domain data. AIS data from GDA will be studied, and in addition to storage, data cleaning will be addressed. On top of the data warehouse, machine learning algorithms must be enabled to compute the fastest and most fuel-efficient route between two arbitrary destinations.
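
As a rough sketch of what such a routing service could look like on top of the warehouse, the example below builds a graph over navigable sea-grid cells weighted by historical AIS speeds and runs a shortest-path query; the grid, the weighting, and all names are illustrative assumptions.

```python
# Sketch: fastest sea route from AIS-derived speeds, assuming networkx.
import networkx as nx

def build_sea_graph(cells, avg_speed_knots):
    """cells: iterable of navigable (x, y) grid cells;
    avg_speed_knots: dict mapping each cell to its historical mean AIS speed."""
    g = nx.Graph()
    cell_set = set(cells)
    for (x, y) in cell_set:
        for nb in ((x + 1, y), (x, y + 1)):  # 4-neighbour grid
            if nb in cell_set:
                # travel time per cell ~ 1 / speed on the slower of the two cells
                speed = min(avg_speed_knots[(x, y)], avg_speed_knots[nb])
                g.add_edge((x, y), nb, hours=1.0 / max(speed, 0.1))
    return g

def fastest_route(graph, origin, destination):
    return nx.shortest_path(graph, origin, destination, weight="hours")
```

A fuel-efficiency variant would only change the edge weight, e.g., to estimated fuel burn per cell instead of hours.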

Processing pipelines for multimodal data processing are the key topic for the work within software engineering, Internet of Things, and machine learning. The primary research question is how to engineer data processing pipelines that allow for enriching data through processes of transformation and combination. In the EPA case, there is a need to enrich data by combining data sources, both across sources (e.g., satellite and drone) and across modalities (e.g., the NDVI index for quantifying vegetation greenness is a function of a red and a near-infrared band). Furthermore, we will research methods for easing the process of bringing disparate data into a form that can be inspected both by a human and an AI user. For example, data sources are automatically cropped to a polygon representing a given area of interest (such as a city, municipality, or country), normalized for comparability, and subjected to data augmentation in order to improve machine learning performance. We will leverage existing knowledge on graph databases. We aim to facilitate the combination of satellite data with other sources, like sensor recordings at specific geo-locations. This allows for advanced data analysis of a wide variety of phenomena, like detection and quantification of objects and changes over time, which in turn allows for prediction of future occurrences.
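
To ground the modality example, here is a minimal sketch of the NDVI computation over two co-registered satellite bands; array shapes and names are illustrative assumptions.

```python
# Sketch: NDVI = (NIR - Red) / (NIR + Red), one value per pixel in [-1, 1].
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # guard against division by zero over water / no-data pixels
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))
```

Higher values indicate denser, healthier vegetation, which is what makes the index useful for change detection over areas of interest.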

User-oriented data hubs and analytics are a cross-cutting topic, with the aim of designing interfaces and user-oriented analytics on top of the data warehouses and processing pipelines. In the EPA case, the focus is on developing a Danish data hub with Earth observation data. The solution must provide a uniform interface for working with the data, giving a user-centric view of the data representation. This will enable decision-support systems, which will be worked on in the GEO case, that may be augmented by artificial intelligence and made understandable to human users through explorative graph-based user interfaces and data visualizations. For the GDA case, the focus is on a web frontend for querying AIS data as trajectories and heat maps and for estimating the travel time between two points in Danish waters. As part of the validation, the data warehouse and related services will be deployed at GDA and serve as the foundation for future GDA services.

Advancing the means to process, store, and use Earth observation data has many potential domain applications. To build world-class computer science research and innovation centres, as per the long-term goal of DIREC, this project focuses on building the competencies necessary to address challenges with Earth observation data, building on advances in advanced and efficient big data management, software engineering, Internet of Things, and machine learning.

Scientific value
The project’s scientific value is the development of new methods and techniques for scalable data warehousing, processing pipelines for multimodal data and user-oriented data hubs and analytics. We expect to publish at least seven rank A research articles and to demonstrate the potential of the developed technologies in concrete real-world applications.

Capacity building
The project will build and strengthen research capacity in Denmark directly through the education of two PhDs, and through collaboration between researchers, domain experts, and end-users that will lead to R&D growth in the public and industrial sectors. Research competences that provide a stronger digital foundation for the green transformation are important for Danish society and the associated industrial sectors.

Societal and business value
The project will create societal and business value by providing the foundation for the Blue Denmark to reduce environmental and climate impact in Danish and Greenlandic waters, helping to support the green transformation. With ever-increasing human activity at sea, growing transportation of goods (90% of which are transported by shipping), and a goal of a European economy based on carbon neutrality, there is a need for activating marine data to support this transformation. For the environmental protection sector, the project will provide the foundation for efforts to increase biodiversity in Denmark through better protection of fauna types and data-supported execution of environmental law. The project will provide significant societal value and directly contribute to SDGs 13 (climate action), 14 (life below water), and 15 (life on land).

In conclusion, the project will provide a strong contribution to the digital foundation for the green transition and support Denmark being a digital frontrunner in this area.

Value

The project will provide the foundation for the Blue Denmark to reduce environmental and climate impact in Danish and Greenlandic waters, helping to support the green transition.

News / coverage

Participants

Project Manager

Kristian Torp

Professor

Aalborg University
Department of Computer Science
E: torp@cs.aau.dk

Christian S. Jensen

Professor

Aalborg University
Department of Computer Science

Thiago Rocha Silva

Associate Professor

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

Serkan Ayvaz

Associate Professor

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

Jakob Winge

Senior Software Developer

The Alexandra Institute

Mads Darø Kristensen

Principal Application Architect

The Alexandra Institute

Søren Krogh Sørensen

Software Developer

The Alexandra Institute

Oliver Hjermitslev

Visual Computing Specialist

The Alexandra Institute

Mads Robenhagen Mølgaard

Department Director

GEO
Geodata & Subsurface Models

Ove Andersen

Special Consultant

Danish Geodata Agency

Mikael Vind Mikkelsen

Research Assistant

Aalborg University
Department of Computer Science

Tianyi Li

Assistant Professor

Aalborg University
Department of Computer Science

Partners

Categories
Bridge project

DIREC project

REWORK – The Future of Hybrid Work

Summary

The COVID-19 pandemic and the accompanying lockdown revealed both the potential benefits and opportunities of working from home and the conspicuous shortcomings it entails. 'Zoom fatigue' resulting from high cognitive load and intense amounts of eye contact is only the tip of the iceberg.

Remote and hybrid work are here to stay, but what should these ways of working look like in the future? Should we merely try to fix what we already have, or can we be bolder and shape a different kind of future for the workplace? In collaboration with a number of companies, this project seeks a vision of the future that integrates the lessons learned about hybrid work.

Project period: 2022-2025
Budget: DKK 20.21 million

There are a multitude of reasons to embrace remote and hybrid work. Climate concerns are increasing, borders are difficult to cross, work/life balance may be easier to attain, power distributions in society could potentially be redressed, to name a few. This means that the demand for Computer Supported Cooperative Work (CSCW) systems that support hybrid work will increase significantly. At the same time, we consistently observe and collectively experience that current digital technologies struggle to mediate the intricacies of collaborative work of many kinds. Even when everything works, from network connectivity to people being present and willing to engage, there are aspects of embodied co-presence that are almost impossible to achieve digitally.

We argue that one major weakness of current remote-work technologies is the lack of support for relation work and articulation work, caused by limited embodiment. The concept of relation work denotes the fundamental activities of creating socio-technical connections between people and artefacts during collaborative activities, enabling actors in a global collaborative setting to engage each other in activities such as articulation work. We know that articulation work cannot be handled in the same way in hybrid remote environments. The fundamental difference is that strategies of awareness and coordination mechanisms are embedded in the physical surroundings and the use of artefacts, and cannot simply be applied to the hybrid setting; instead, they require translation.

Actors in hybrid settings must create and connect the foundational network of globally distributed people and artefacts in a multitude of ways.

In REWORK, we focus on enriching digital technologies for hybrid work. We will investigate ways to strengthen relation work and articulation work through explorations of embodiment and presence. To imagine futures and technologies that can be otherwise, we look to artistic interventions, getting at the core of engagement and reflection on the future of remote and hybrid work by imagining and making alternatives through aesthetic speculations and prototyping of novel multimodal interactions (using the audio, haptic, visual, and even olfactory modalities). We will explore the limits of embodiment in remote settings by uncovering the challenges and limitations of existing technical solutions, following an approach similar to our previous research.

Scientific value
REWORK will develop speculative techniques and ideas that can help rethink the practices and infrastructures of remote work and its future. REWORK focuses on more than just the efficiency of task completion in hybrid work. Rather, we seek to foreground and productively support the invisible relation and articulation work that is necessary to ensure overall wellbeing and productivity.

Specifically, REWORK will contribute:

  1. Speculative techniques for thinking about the future of remote work;
  2. Multimodal prototypes to inspire a rethink of remote work;
  3. Design Fictions anchoring future visions in practice;
  4. Socio-technical framework for the future of hybrid remote work practices;
  5. Toolkits for industry.

The research conducted as part of REWORK will produce substantial scientific contributions disseminated through scientific publications in top international journals and conferences relevant to the topic. The scientific contributions will constitute both substantive insights and methodological innovations. These will be targeting venues such as the Journal of Human-Computer Interaction, ACM TOCHI, Journal of Computer Supported Cooperative Work, the ACM CHI conference, NordiCHI, UIST, DIS, Ubicomp, ICMI, CSCW, and others of a similar level.

The project will also engage directly and closely with industries of different kinds, from startups that are actively envisioning new technology to support different types of hybrid work (Cadpeople, Synergy XR, and Studio Koh) to organizations that are trying to find new solutions to accommodate changes in work practices (Arla, Bankdata, Keyloop, BEC).

Part of the intent of engagement with the artistic collaboratory is to create bridges between artistic explorations and practical needs articulated by relevant industry actors. REWORK will enable the creation of hybrid fora to enable such bridging. The artistic collaboratory will enable the project to engage with the general public through an art exhibit at Catch, public talks, and workshops. It is our goal to exhibit some of the artistic output at a venue, such as Ars Electronica, that crosses artistic and scientific audiences.

Societal value
The results of REWORK have the potential to change everybody’s work life broadly. We all know that “returning to work after COVID-19” will not be the same – and the combined situation of hybrid work will be a challenge. Through the research conducted in REWORK, individuals that must navigate the demands of hybrid work and the organizations that must develop policies and practices to support such work will benefit from the improved sense of embodiment and awareness, leading to more effective collaboration.

REWORK will take broadening participation and public engagement seriously, by offering online and in-person workshops/events through a close collaboration with the arts organization Catch (catch.dk). The workshops will be oriented towards particular stakeholder groups – artists interested in exploring the future of hybrid work, industry organizations interested in reconfiguring their existing practices – and open public events.

Capacity building
There are several ways in which REWORK contributes to capacity building. Firstly, by collaborating with the Alexandra Institute, we will create a multimodal toolbox/demonstrator facility that can be used in education and in industry.

REWORK will work closely with both industry partners (through the Alexandra Institute) and cultural (e.g. catch.dk)/public institutions for collaboration and knowledge dissemination, in the general spirit of DIREC.

We will include the findings from REWORK in our research-based teaching at all three universities. Furthermore, we plan to host a PhD course, or a summer school, on the topic in Year 2 or Year 3. Participants will be recruited nationally and internationally.

Lastly, in terms of public engagement, HCI and collaborative technologies are disciplines that can be attractive to the public at large, so there will be at least one REWORK Open Day to which we will invite interested participants and the DIREC industrial collaborators.

Value

The project creates value by delivering research-based insights and practical solutions to address the challenges of hybrid work environments after COVID-19. This will foster collaboration and a sense of presence and awareness among individuals and organizations, influencing work life and public engagement in a positive direction.

News / coverage

Participants

Project Manager

Eve Hoggan

Associate Professor

Aarhus University
Department of Computer Science

E: eve.hoggan@cs.au.dk

Susanne Bødker

Professor

Aarhus University
Department of Computer Science

Irina Shklovski

Professor

University of Copenhagen
Department of Computer Science

Pernille Bjørn

Professor

University of Copenhagen
Department of Computer Science

Louise Barkhuus

Professor

IT University of Copenhagen
Department of Computer Science

Naja Holten Møller

Assistant Professor

University of Copenhagen
Department of Computer Science

Nina Boulus-Rødje

Associate Professor

Roskilde University
Department of People and Technology

Allan Hansen

Head of Digital Experience and Solutions Lab

The Alexandra Institute

Mads Darø Kristensen

Principal Application Architect

The Alexandra Institute

Melanie Duckert Schmidt

PhD student

IT University of Copenhagen
Department of Computer Science

Juliane Busboom

PhD Student

Roskilde University
Department of People and Technology

Qianqian Mu

PhD Student

Aarhus University
Department of Computer Science

Kellie Dunn

PhD Student

University of Copenhagen
Department of Computer Science

Sarbajit Deb

Executive Vice President

LTI

Simon Lajboschitz

Co-founder & CEO

Khora

Barbara Scherfig

Programme coordinator

Kulturværftet

Karen Olsen Wosylus

Business and Social Sciences

Arla Foods

Michael Edwards

CEO

eventSPACE

Henrik René Jensen

Senior Manager - User Experience Center

Grundfos

Maz Spork

Partner

Unlikly

Lea Porsager

Artist

Line Finderup Jensen

Artist

Stine Deja

Artist

Jakob la Cour

Artist

Partners

Categories
Bridge project

DIREC project

Secure Internet of Things – Risk Analysis in Design and Operation (SIoT)

Summary

The need for security in IoT systems is enormous, but it is difficult to achieve because of the characteristics of these systems.

Together with a number of companies, this project aims to identify safety and security requirements for IoT systems, develop algorithms for quantitative risk assessment and decision-making, and create tools to design and certify IoT security training programs. This will enable Danish companies to obtain security certification of their IoT devices, which can give them a head start in a market that is likely to require such certification in the near future.

Project period: 2022-2025
Budget: DKK 25.10 million

When developing novel IoT services or products today, it is essential to consider the potential security implications of the system and to take those into account before deployment. Due to the criticality and widespread deployment of many IoT systems, the need for security in these systems has even been recognised at the government and legislative level, e.g., in the US and the UK, resulting in proposed legislation to enforce at least a minimum of security consideration in deployed IoT products.

However, developing secure IoT systems is notoriously difficult, not least due to the characteristics of many such systems: they often operate in unknown and frequently privacy-sensitive environments, engage in communication using a wide variety of protocols and technologies, and must perform essential tasks such as monitoring and controlling (physical) entities. In addition, IoT systems must often perform within real-time bounds on limited computing platforms, and at times even with a limited energy budget. Moreover, with the increasing number of safety-critical IoT devices (such as medical devices and industrial IoT devices), IoT security has become a public safety issue. To develop a secure IoT system, one should take into account all of the factors and characteristics mentioned above and balance them against functionality and performance requirements. Such a risk analysis must be performed not only at the design stage, but also throughout the lifetime of the product. Besides technical aspects, the analysis should also take into account human and organizational aspects. This type of analysis will form an essential activity for standardization and certification purposes.

In this project, we will develop a modelling formalism with automated tool support for performing such risk assessments and allowing for extensive “what-if” scenario analysis. The starting point will be the well-known and widely used formalism of attack-defense trees, extended to include various quantities (e.g., cost or energy consumption) as well as game features for modelling collaboration and competition between systems and between a system and its environment.
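
To make the quantitative idea concrete, here is a minimal sketch of an attack-defense tree with attacker costs, evaluated bottom-up. The node names, costs, and the min_attack_cost helper are purely illustrative assumptions, not the project's tooling: an OR node lets the attacker pick the cheapest option, an AND node makes the attacker pay for all sub-goals, and a countermeasure on a leaf makes that action infeasible.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        kind: str                 # "leaf", "and", or "or"
        cost: float = 0.0         # attacker cost of a leaf action
        defended: bool = False    # True if a countermeasure blocks this leaf
        children: List["Node"] = field(default_factory=list)

    def min_attack_cost(n: Node) -> float:
        """Bottom-up evaluation: the attacker's cheapest way to the goal.
        OR = cheapest child, AND = sum of children, defended leaf = infeasible."""
        if n.kind == "leaf":
            return float("inf") if n.defended else n.cost
        costs = [min_attack_cost(c) for c in n.children]
        return min(costs) if n.kind == "or" else sum(costs)

    # Hypothetical IoT scenario; toggling `defended` and re-evaluating is a
    # minimal form of "what-if" analysis.
    root = Node("steal sensor data", "or", children=[
        Node("phish operator", "leaf", cost=50),
        Node("exploit device", "and", children=[
            Node("find vulnerability", "leaf", cost=200),
            Node("reach device over network", "leaf", cost=30, defended=True),
        ]),
    ])
    print(min_attack_cost(root))  # 50: phishing stays the cheapest attack path

Stochastic priced timed games, mentioned below as the underlying framework, generalize this picture with probabilities, time, and adversarial scheduling, which is where the automated tool support comes in.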

In summary, the project will deliver:

  • a modeling method for a systematic description of the relevant IoT system/service aspects
  • a special focus on their security, interaction, performance, and cost aspects
  • a systematic approach, through a new concept of attack‐defense‐games
  • algorithms to compute optimal strategies and trade‐offs between performance, cost and security
  • a tool to carry out quantitative risk assessment of secure IoT systems
  • a tool to carry out “what‐if” scenario analysis, to harden a secure IoT system’s design and/or operation
  • usability studies and design for usability of the tools within organizations around IoT services
  • design of training material to enforce security policies for employees within these organizations.

The main research problems are:

  1. To identify safety and security requirements (including threats, attacker models and counter measures) for IoT systems, as well as the inherent design limitations in the IoT problem domain (e.g., limited computing resources and a limited energy budget).
  2. To organize the knowledge in a comprehensive model. We propose to extend attack‐defense trees with strategic game features and quantitative aspects (time, cost, energy, probability).
  3. To transform this new model into existing “computer models” (automata and games) that are amenable to automatic analysis algorithms. We consider stochastic priced timed games as an underlying framework for such models due to their generality and existing tool support.
  4. To develop/extend the algorithms needed to perform analysis and synthesis of optimal response strategies, which form the basis of quantitative risk assessment and decision‐making.
  5. To translate the findings into instruments and recommendations for the partner companies, addressing both technical and organizational needs.
  6. To design, evaluate, and assess the user interface of the IoT security tools, which serve as important backbones supporting the design and certification of IoT security training programs for stakeholder organizations.

Throughout the project, we focus on the challenges and needs of the partner companies. The concrete results and outcomes of the project will also be evaluated in the contexts of these companies. The project will combine the expertise of five partners of DIREC (AAU, AU, Alexandra, CBS and DTU) and four Work Streams from DIREC (WS7: Verification, WS6: CPS and IoT systems, WS8: Cybersecurity and WS5: HCI, CSCW and InfoVis) in a synergistic and collaborative way.

Business value
While it is difficult to make a precise estimate of the number of IoT devices, most estimates are in the range of 7-15 billion connected devices, a number expected to increase dramatically over the next 5-10 years. The impact of a successful attack on IoT systems can range from nuisance, e.g., when baby monitors or thermostats are hacked, over potentially expensive DDoS attacks, e.g., when the Mirai malware turned many IoT devices into a DDoS botnet, to life-threatening, e.g., when pacemakers are not secure. Gartner predicted that worldwide spending on IoT security would increase from roughly USD 900M to USD 3.1B in 2021, out of a total IoT market of up to USD 745B.

The SIOT project will concretely contribute to the agility of the Danish IoT industry. By applying the risk analysis and secure design technologies developed in the project, these companies get a fast path to certification of secure IoT devices. Hence, this project will give Danish companies a head start for the near future, where the US and UK markets will demand security certification for IoT devices; the EU is already working on security regulation for IoT devices as well. Furthermore, it is well known that the earlier in the development process a security vulnerability or programming error is found, the cheaper it is to fix. This is even more important for IoT products that may not be updatable “over-the-air” and thus require a product recall or a physical update process. The methods and technologies developed in this project will help companies find and fix security vulnerabilities from the design and exploration phases onwards, thus reducing the long-term cost of maintenance.

Societal value
It is an academic duty to contribute to safer and more secure IoT systems, since they are permeating society. Security issues quickly become safety incidents, for instance when IoT systems monitor for dangerous physical conditions. In addition, compromised IoT devices can be detrimental to our privacy, since they measure all aspects of human life. DTU and the Alexandra Institute will disseminate the knowledge and expertise through the network built in the joint CIDI project (Cybersecure IoT in Danish Industry, ending in 2021), in particular a network of Danish IoT companies interested in security, with a clear understanding of the companies’ security needs and concerns.

We will strengthen the cybersecurity level of Danish companies in relation to Industry 4.0 and Internet of Things (IoT) security, which are key technological pillars of digital transformation. We will do this by means of research and lectures on several aspects of IoT security, with emphasis on security-by-design, risk analysis, and remote attestation techniques as countermeasures.

Capacity building
The education of PhD students itself already contributes to “capacity building”. We will organize a PhD summer school towards the end of the project to disseminate the results across the PhD students from DIREC and students from abroad.

We will also prepare learning materials to be integrated in existing course offerings (e.g., existing university courses, and the PhD and Master training networks of DIREC) to ensure that the findings of the project are injected into the current capacity building processes.

Through this education, we will also attract more students to the Danish labor market. The shortage of skilled people is even greater in the security area than in other parts of computer science and engineering.

Value

The project will give Danish companies a head start in the near future, when the EU, US, and UK markets will all demand security certification of IoT devices.

By applying the risk analysis and secure design technologies developed in the project, Danish companies get a fast path to certification of secure IoT devices.

News / coverage

Participants

Project Manager

Jaco van de Pol

Professor

Aarhus University
Department of Computer Science

E: jaco@cs.au.dk

Torkil Clemmensen

Professor

Copenhagen Business School
Department of Digitalization

Qiqi Jiang

Associate Professor

Copenhagen Business School
Department of Digitalization

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

René Rydhof Hansen

Associate Professor

Aalborg University
Department of Computer Science

Flemming Nielson

Professor

Technical University of Denmark
DTU Compute

Alberto Lluch Lafuente

Associate Professor

Technical University of Denmark
DTU Compute

Nicola Dragoni

Professor

Technical University of Denmark
DTU Compute

Sean Kauffman

Assistant Professor (Tenure Track)

Aalborg University

Mikael Bisgaard Dahlsen-Jensen

PhD Student

Aarhus University
Department of Computer Science

Alyzia-Maria Konsta

PhD Student

Technical University of Denmark
DTU Compute

Gert Læssøe Mikkelsen

Head of Security Lab

The Alexandra Institute

Laura Lynggaard Nielsen

Senior Anthropologist

The Alexandra Institute

Zaruhi Aslanyan

Security Architect

The Alexandra Institute

Marcia ShiTing Wang

PhD Student

Copenhagen Business School
Department of Digitalization

Anders Qvistgaard Sørensen

R&D Manager

Micro Technic

Jørgen Hartig

CEO & Strategic Advisor

SecuriOT

Claus Riber

Senior Manager
Software Cybersecurity

Beumer Group

Morten Granum

Software Director

Beumer Group

Kristian Baasch Thomsen

Lead Digital Compliance Specialist

Grundfos

Karsten Ries

CEO

Develco Products

Daniel Lux

Chief Technology Officer

Seluxit

Samant Khajuria

Chief Specialist Cybersecurity

Terma

Tobias Worm Bøgedal

PhD student

Aalborg University

Partners

Categories
Bridge project

Embedded AI

DIREC project

Embedded AI

Summary

AI currently depends on large data centres and centralized systems, which requires moving data to the algorithms. To overcome this limitation, AI is evolving towards a decentralized network of devices that brings the algorithms directly to the data. This shift, enabled by algorithmic agility and autonomous data discovery, will reduce the need for high-bandwidth connectivity and improve data security and privacy, facilitating real-time learning at the edge. The transformation is driven by the merging of AI and IoT into the “Artificial Intelligence of Things” (AIoT) and the emergence of Embedded AI (eAI), which processes data on edge devices rather than in the cloud.

Embedded AI offers increased responsiveness, functionality, security, and privacy, but it requires engineers to develop new skills in embedded systems. Companies are hiring data specialists to exploit AI for optimizing products and services across industries. This project aims to develop tools and methods for migrating AI from the cloud to edge devices, demonstrated through industrial use cases.

Project period: 2022-2024
Budget: DKK 16.2 million

AI is currently limited by the need for massive data centres and centralized architectures, as well as the need to move data to the algorithms. To overcome this key limitation, AI will evolve from today’s highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices. This transformation will bring algorithms to the data, made possible by algorithmic agility and autonomous data discovery. It will drastically reduce the need for the high-bandwidth connectivity required to transport massive data sets, eliminate any potential sacrifice of the data’s security and privacy, and eventually allow true real-time learning at the edge.

This transformation is enabled by the merging of AI and IoT into “Artificial Intelligence of Things” (AIoT), and has created an emerging sector of Embedded AI (eAI), where all or parts of the AI processing are done on the sensor devices at the edge, rather than sent to the cloud. The major drivers for Embedded AI are increased responsiveness and functionality, reduced data transfer, and increased resilience, security, and privacy. To deliver these benefits, development engineers need to acquire new skills in embedded development and systems design.

To enter and compete in the AI era, companies are hiring data scientists to build expertise in AI and create value from data. This is true for many companies developing embedded systems, for instance, to control water, heat, and air flow in large facilities, large ship engines, or industrial robots, all with the aim of optimizing their products and services. However, there is a challenging gap between programming AI in the cloud using tools like Tensorflow, and programming at the edge, where resources are extremely constrained. This project will develop methods and tools to migrate AI algorithms from the cloud to a distributed network of AI-enabled edge-devices. The methods will be demonstrated on several use cases from the industrial partners.

Research problems and aims
In a traditional, centralized AI architecture, all the technology blocks are combined in the cloud or at a single cluster (edge computing) to enable AI. Data collected by IoT, i.e., by individual edge-devices, is sent towards the cloud. To limit the amount of data that needs to be sent, data aggregation may be performed along the way to the cloud. The AI stack, the training, and the later inference are performed in the cloud, and results for actions are transferred back to the relevant edge-devices. While the cloud provides complex AI algorithms that can analyse huge datasets fast and efficiently, it cannot deliver true real-time response, and data security and privacy may be challenged.

When it comes to Embedded AI, where AI algorithms are moved to the edge, the foundation of the AI stack must be transformed: algorithmic agility and distributed processing will enable AI to perceive and learn in real time by mirroring critical AI functions across multiple disparate systems, platforms, sensors, and devices operating at the edge. We propose to address these challenges in the following steps, starting with single edge-devices.

  1. Tiny inference engines – Algorithmic agility of the inference engines will require new AI algorithms as well as new processing architectures and connectivity. We will explore suitable microcontroller architectures and reconfigurable platform technologies, such as Microchip’s low-power FPGAs, for implementing optimized inference engines. Focus will be on achieving real-time performance and robustness. This will be tested on cases from the industry partners.
  2. µBrains – Extending the edge-devices from pure inference engines to devices that also provide local learning. This will allow local devices to deliver continuous improvements. We will explore suitable reconfigurable platform technologies with ultra-low power consumption, such as Renesas’ DRPs, which use a tenth of the power budget of current solutions, and Microchip’s low-power FPGAs for optimizing neural networks. Focus will be on ensuring the performance, scheduling, and resource allocation of the new AI algorithms running on very resource-constrained edge-devices.
  3. Collective intelligence – The full potential of Embedded AI will require distributed algorithmic processing of the AI algorithms. This will be based on federated learning and computing (microelectronics) optimized for neural networks, but new models of distributed systems and stochastic analysis are necessary to ensure the performance, prioritization, scheduling, resource allocation, and security of the new AI algorithms, especially given the very dynamic and opportunistic communications associated with IoT (see the sketch after this list).
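
To make the federated learning pattern behind step 3 concrete, here is a minimal sketch of federated averaging over a toy linear model: each edge device trains on its own local data, and a server averages the resulting weights in proportion to how much data each device holds. The model, function names, and hyperparameters are illustrative assumptions, not the project's algorithms.

    import numpy as np

    def local_update(weights, X, y, lr=0.05, epochs=5):
        """One device's local training: plain gradient descent on its own
        least-squares loss; no raw data ever leaves the device."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def federated_average(global_w, clients):
        """One FedAvg round: collect locally trained weights and average
        them, weighted by the number of samples on each device."""
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        return np.average(updates, axis=0, weights=sizes)

    # Toy setup: three edge devices with unevenly sized local datasets.
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])
    clients = [(X, X @ w_true + 0.1 * rng.normal(size=n))
               for n in (40, 60, 100)
               for X in [rng.normal(size=(n, 2))]]
    w = np.zeros(2)
    for _ in range(20):          # 20 communication rounds
        w = federated_average(w, clients)
    print(w)                     # approaches w_true without pooling the data

In a real deployment the averaging step would itself have to respect the dynamic, opportunistic communication patterns mentioned above, which is what the new distributed-systems models are for.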

The expected outcome is an AI framework which supports autonomous discovery and processing of disparate data from a distributed collection of AI-enabled edge-devices. All three presented steps will be tested on cases from the industry partners.

Value Creation
Deep neural networks have changed the capabilities of machine learning, reaching higher accuracy than hitherto possible; for learning from unstructured data they are now the de facto standard. These networks often include millions of parameters and may take months to train on dedicated hardware in the form of GPUs in the cloud. This has resulted in a high demand for data scientists with AI skills and, hence, an increased demand for educating such profiles. However, the increased use of IoT to collect data at the edge has created a wish for training and executing deep neural networks at the edge rather than transferring all data to the cloud for processing. As IoT end- or edge devices are characterized by low memory, low processing power, and low energy (powered by battery or energy harvesting), training or executing deep neural networks on them is generally considered infeasible. However, developing dedicated accelerators, novel hardware circuits and architectures, or executing smaller discretized networks may provide feasible solutions for the edge.
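
As one concrete example of the “smaller discretized networks” mentioned above, the sketch below shows symmetric post-training quantization of a layer’s weights to int8, which shrinks the memory footprint fourfold and allows integer arithmetic on a microcontroller. The function names and the specific scheme are illustrative assumptions, not a description of the project’s toolchain.

    import numpy as np

    def quantize_int8(w):
        """Symmetric post-training quantization: map float32 weights onto
        the integer range [-127, 127] with a single per-tensor scale."""
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 64).astype(np.float32)   # one layer's weights
    q, s = quantize_int8(w)
    print(q.nbytes / w.nbytes)                        # 0.25: 4x smaller model
    print(np.abs(w - dequantize(q, s)).max())         # worst-case rounding error

The accuracy loss from such discretization is often small, which is why it is a standard first step when moving inference onto constrained edge devices.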

The academic partners DTU, KU, AU, and CBS will not only create scientific value from the results disseminated through the four PhDs, but will also create important knowledge, experience, and real-life cases to be included in education, and hence create capacity building in the important emerging field of embedded AI, or AIoT.

The industry partners Indesmatech, Grundfos, MAN ES, and VELUX are all strong examples of companies that will benefit from mastering embedded AI, i.e., being able to select the right tools and execution platforms for implementing and deploying embedded AI in their products.

  • Indesmatech expects to gain leading-edge knowledge about how AI can be implemented on various chip processing platforms, with a focus on finding the best and most efficient path to building cost- and performance-effective industrial solutions across industries, as its customers come from most industries.
  • Grundfos will create value in applications like condition monitoring of pumps and pump systems, predictive maintenance, heat energy optimization in buildings, and wastewater treatment, where very complex tasks can be optimized significantly by AI. The possibility of deploying embedded AI directly on low-cost, low-power end and edge devices instead of large cloud platforms will give Grundfos a significant competitive advantage by reducing total energy consumption, data traffic, and product cost, while at the same time increasing real-time application performance and securing customers’ data privacy.
  • MAN ES will create value from using embedded AI to predict problems faster than is possible today. Features such as condition monitoring and dynamic engine optimization will give MAN ES competitive advantages, and the exploitation of embedded AI together with the large amount of data collected in the cloud will in the long run create market advantages for MAN ES.

  • VELUX will increase their competitive edge by attaining a better understanding of how to implement the right level of embedded AI in their products. The design of new digital smart products with embedded intelligence will create value by driving the digital product transformation of VELUX.

The four companies represent a general trend in which several industries depend on their ability to develop, design, and engineer high-tech products with software, sensors, and electronics embedded in their core products. This notably includes firms in the machine sub-industry manufacturing pumps, windmills, and motors, and companies in the electronics industry manufacturing computer, communication, and other electronic equipment. These industries are highly export-oriented, with an 80 percent export share of total sales.

Digital and electronics solutions account for a very high share of the value added. The machine sub-industry’s more than 250 companies and the electronics industry’s more than 500 companies together exported equipment worth DKK 100 billion in 2020 and had more than 38,000 employees.[1] The majority of those educated in electronics hold a master’s or bachelor’s degree in engineering, and the share of engineers has risen since 2008.[2]

Digitalization, IoT, and AI are data-driven, and the large volumes of data involved have economic and environmental impact. AI will increase the demand for computing, which today depends on major providers of cloud services and on the transfer of data. The related energy costs will increase and, according to the EU’s Joint Research Centre (JRC), will account for 3-4 percent of Europe’s total energy consumption.[3] Thus, less energy-consuming and less costly solutions are needed. The EU Commission finds that fundamentally new data processing technologies encompassing the edge are required; embedded AI will make this possible by moving computing to the sensors where data is generated, instead of moving data to the computing.[4] All in all, the rising demand for these new high-tech solutions calls for the development of embedded AI capabilities and will have a positive impact on Danish industries in terms of growth and job creation.

[1] Calculations on data from Statistics Denmark, Statistikbanken, tables FIKS33 and GF2.
[2] “Elektronik giver beskæftigelse i mange brancher”, DI Digital, 2021.
[3] “Artificial Intelligence, A European Perspective”, JRC, EUR 29425, 2018.
[4] “2030 Digital Compass, The European way for the Digital Decade”, EU Commission, 2021.

Value

The project not only creates scientific value through the results disseminated by the four PhD students, but will also generate important knowledge, experience, and real-life cases that can be included in education, thereby building capacity in the important emerging field of embedded AI, or AIoT.

News / coverage

Reports

Participants

Project Manager

Xenofon Fafoutis

Professor

Technical University of Denmark
DTU Compute

E: xefa@dtu.dk

Peter Gorm Larsen

Professor

Aarhus University
Dept. of Electrical and Computer Engineering

Jalil Boudjadar

Associate Professor

Aarhus University
Dept. of Electrical and Computer Engineering

Jan Damsgaard

Professor

Copenhagen Business School
Department of Digitalization

Ben Eaton

Associate Professor

Copenhagen Business School
Department of Digitalization

Thorkild Kvisgaard

Head of Electronics

Grundfos

Thomas S. Toftegaard

Director, Smart Product Technology

Velux

Rune Domsten

Co-founder & CEO

Indesmatech

Jan Madsen

Professor

Technical University of Denmark
DTU Compute

Henrik R. Olesen

Senior Manager

MAN Energy Solutions

Reza Toorajipour

PhD Student

Copenhagen Business School
Department of Digitalization

Iman Sharifirad

PhD Student

Aarhus University
Dept. of Electrical and Computer Engineering

Amin Hasanpour

PhD Student

Technical University of Denmark
DTU Compute

Partners

Categories
Bridge project

HERD: Human-AI Collaboration: Engaging and Controlling Swarms of Robots and Drones

DIREC project

HERD: Human-AI Collaboration

- Engaging and Controlling Swarms of Robots and Drones

Summary

Today, robots and drones take on an ever broader set of tasks, yet such robots remain limited in their capacity to cooperate with one another and with humans. How can we exploit the potential benefits of having multiple robots working in parallel to reduce time to completion? If robots are given a task collectively, as a swarm, they can potentially coordinate their operation on the fly and adapt to local conditions to achieve optimal or near-optimal task performance.

Together with industrial partners, this project aims to address multi-robot collaboration and to design and evaluate technological solutions that enable users to engage and control autonomous multi-robot systems.

Project period: 2021-2025
Budget: DKK 17.08 million

Robots and drones take on an increasingly broad set of tasks, such as AgroIntelli’s autonomous farming robot and the drone-based emergency response systems from Robotto. Currently, however, such robots are limited in their capacity to cooperate with one another and with humans. In the case of AgroIntelli, for instance, only one robot can currently be deployed on a field at any time and is unable to respond effectively to the presence of a human-driven tractor or even another farming robot working in the same field. In the future, AgroIntelli wants to leverage the potential benefits of having multiple robots working in parallel on the same field to reduce time to completion. A straightforward way to achieve this is to partition the field into several distinct areas corresponding to the number of robots available and then assign each robot its own area. However, such an approach is inflexible and requires detailed a priori planning. If, instead, the robots were given the task collectively as a swarm, they could potentially coordinate their operation on the fly and adapt based on local conditions to achieve optimal or near-optimal task performance.

Similarly, Robotto’s system architecture currently requires one control unit to manage each deployed drone. In large-area search scenarios and operations in complex terrain, the coverage provided by a single drone is insufficient. Multiple drones can provide real-time data on a larger surface area and from multiple perspectives, thereby aiding emergency response teams in their time-critical operations. In the current system, however, each additional drone requires a dedicated operator and control unit. Coordination between operators introduces overhead, and it can become a struggle to maintain a shared understanding of a rapidly evolving situation. There is thus a need to develop control algorithms for drone-to-drone coordination and interfaces that enable high-level management of the swarm from a single control console. The complexity requires advanced interactions to keep the data actionable and simple, and yet support the critical demands of the operation. This challenge is relevant to search & rescue (SAR) as well as other service offerings in the roadmap, including firefighting, inspections, and first responder missions.

For both of our industrial partners, AgroIntelli and Robotto, and for similar companies that are pushing robotics technology toward real-world application, there is a clear unmet need for approaches that enable human operators to effectively engage and control systems composed of multiple autonomous robots. This raises a whole new set of challenges compared to the current paradigm, where there is a one-to-one mapping between operator and robot. The operator must be able to interact with the system at the swarm level as a single entity to set mission priorities and constraints, and at the same time be able to intervene and take control of a single robot or a subset of robots. An emergency responder may, for instance, want to take control of a drone to follow a civilian or a group of personnel close to a search area, while a farmer may wish to reassign one or more of her farming robots to another field.

HERD will build an understanding of the challenges in multi-robot collaboration, and design and evaluate technological solutions that enable end-users to engage and control autonomous multi-robot systems. The project will build on use cases in agriculture and search & rescue supported by the industrial partners’ domain knowledge and robotic hardware. Through the research problems and aims outlined below, we seek to enable the next generation of human-swarm collaboration.

Pre-operation and on-the-fly mission planning for robot swarms: An increase in the number of robots under the user’s control has the potential to lead to faster task completion and/or higher quality. However, the increase in unit count significantly increases the complexity of both end-user-to-robot communication and coordination between robots. As such, it is critical to support the user in efficient and effective task allocation between robots (see the sketch below). We will answer the following research questions: (i) What are the functionalities required for humans to effectively define mission priorities and constraints at the swarm level? (ii) How can robotic systems autonomously divide tasks based on location, context, and capability, and under the constraints defined by the end-user? (iii) How does the use of autonomous multi-robot technologies change existing organizational routines, and which new ones are required?
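
As a minimal illustration of autonomous task division (question ii), the toy allocator below greedily assigns the cheapest (robot, task) pair by travel distance, with each robot’s position advancing to the task it wins. It is a simple stand-in for the market-based and swarm-level methods the project will study; the robot names, positions, and single-cost criterion are invented for the example.

    import math

    def allocate(robots, tasks):
        """Greedy market-style allocation: repeatedly assign the (robot, task)
        pair with the lowest travel cost; each robot's position advances to
        the task it won, so routes emerge on the fly."""
        pos = dict(robots)              # robot -> current position
        plan = {r: [] for r in robots}  # robot -> ordered task list
        todo = dict(tasks)              # task  -> location
        while todo:
            r, t = min(((r, t) for r in pos for t in todo),
                       key=lambda rt: math.dist(pos[rt[0]], todo[rt[1]]))
            plan[r].append(t)
            pos[r] = todo.pop(t)
        return plan

    robots = {"r1": (0.0, 0.0), "r2": (100.0, 0.0)}
    tasks = {"a": (10.0, 5.0), "b": (90.0, 10.0), "c": (50.0, 50.0)}
    print(allocate(robots, tasks))  # tasks split by proximity, no partitioning

Unlike a priori field partitioning, such on-the-fly assignment naturally absorbs new tasks, removed robots, or user-imposed constraints as they arrive.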

Situational awareness under uncertainty in multi-robot tasks: Users of AI-driven (multi-)robot systems often wish to simulate robot behaviour across multiple options to determine the best possible approach to the task at hand. Given the context-dependent and algorithm-driven nature of these robots, simulation accuracy can only be achieved up to a limited degree. This inherent uncertainty negatively impacts the user’s ability to make an informed decision on the best approach to task completion. We will support situational awareness in the control of multi-robot systems by studying: (i) How to determine and visualise levels of uncertainty in robot navigation scenarios to optimise user understanding and control? (ii) What are the implications of the digital representation of the operational environment for organizational sensemaking? (iii) How can live, predictive visualisations of multi-robot trajectories and task performance support the steering and directing of robot swarms from afar?

User intervention and control of swarm subsets: Given the potentially (rapidly) changing contexts in which the robots operate, human operators will have to regularly adapt from a predetermined plan for a subset of robots. This raises novel research questions both in terms of robot control, in which the swarm might depend on a sufficient number of nearby robots to maintain communication, and in terms of user interaction, in which accurate robot selection and information overload can quickly raise issues. We will therefore answer the following research questions:

(i) When a user takes low-level control of a single robot or subset of a robot swarm, how should that be done, and how should the rest of the system respond?

(ii) How can the user interfaces help the user to understand the potential impact when they wish to intervene or deviate from the mission plans?

Validation of solutions in real-world applications: Based on the real-world applications of adaptive herbicide spraying by farming robots and search & rescue as provided by our industrial partners, we will validate the solutions developed in the project. While both industrial partners deal with robotic systems, their difference in both application area and technical solution (in-the-air vs. on land) allows us to assess the generalisability and efficiency of our solutions in real-world applications. We will answer the following research questions:

(i) What common solutions should be validated in both scenarios and which domain-specific solutions are relevant in the respective types of scenarios?

(ii) What business and organisational adaptation and innovation are necessary for swarm robotics technology to be successfully adopted in the public and private sectors?

Advances in AI, computer science, and mechatronics mean that robots can be applied to an increasingly broad set of domains. To build world-class computer science research and innovation centres, as per the long-term goal of DIREC, this project focuses on building the competencies necessary to address the complex relationship between humans, artificial intelligence, and autonomous robots.

Scientific value
The project’s scientific value is the development of new methods and techniques to facilitate effective interaction between humans and complex AI systems and the empirical validation in two distinct use cases. The use cases provide opportunities to engage with swarm interactions across varying demands, including domains where careful a priori planning is possible (agricultural context) and chaotic and fast-paced domains (search & rescue with drones). HERD will thus lead to significant contributions in the areas of autonomous multi-robot coordination and human-robot interaction. We expect to publish at least ten rank A research articles and to demonstrate the potential of the developed technologies in concrete real-world applications. This project also gears up the partners to participate in project proposals to the EU Framework Programme on specific topics in agricultural robotics, nature conservation, emergency response, security, and so on, and in general topics related to developing key enabling technologies.

Capacity building
HERD will build and strengthen the research capacity in Denmark directly through the education of three PhDs, and through the collaboration between researchers, domain experts, and end-users that will lead to industrial R&D growth. Denmark has been a thought leader in robotics, innovating how humans collaborate with robots in manufacturing and architecture, e.g. Universal Robots, MiR, Odico, among others. Through HERD, we support not only the named partners in developing and improving their products and services, but the novel collaboration between the academic partners, who have not previously worked together, helps to ensure that the Danish institutes of higher education build the competencies and the workforce that are needed to ensure continued growth in the sectors of robotics and artificial intelligence. HERD will thus contribute to building the capacity required to facilitate effective interaction between end-users and complex AI systems.

Business value
HERD will create business value through the development of technologies that enable end-users to effectively engage and control systems composed of multiple robots. These technologies will significantly increase the value of the industrial partners’ products, since current tasks can be done faster and at a lower cost, and entirely new tasks that require multiple coordinated robots can be addressed. The value increase will, in turn, increase sales and exports. Furthermore, multi-robot systems have numerous potential application domains in addition to those addressed in this project, such as infrastructure inspection, construction, environmental monitoring, and logistics. The inclusion of DTI as a partner will directly help explore these opportunities through a broader range of anticipated tech transfer, future markets, and project possibilities.

Societal value
HERD will create significant societal value and directly contribute to SDGs 1 (no poverty), 2 (zero hunger), 13 (climate action), and 15 (life on land). Increased use of agricultural robots can, for instance, lead to less soil compaction and enable the adoption of precision agriculture techniques, such as mechanical weeding that eliminates the need for pesticides. Similarly, increased use of drones in search & rescue can reduce the time needed to save people in critical situations.

Value

The project will develop technologies that enable end-users to engage and control systems composed of multiple robots. Such multi-robot systems will significantly increase the value of industrial products, since current tasks can be completed faster and at lower cost, and entirely new tasks requiring multiple coordinated robots can be solved.

News / coverage

Participants

Project Manager

Anders Lyhne Christensen

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

E: andc@mmmi.sdu.dk

Ulrik Pagh Schultz

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Mikael B. Skov

Professor

Aalborg University
Department of Computer Science

Timothy Robert Merritt

Associate Professor

Aalborg University
Department of Computer Science

Niels van Berkel

Associate Professor

Aalborg University
Department of Computer Science

Ioanna Constantiou

Professor

Copenhagen Business School
Department of Digitalization

Kenneth Richard Geipel

Chief Executive Officer

Robotto

Christine Thagaard

Marketing Manager

Robotto

Lars Dalgaard

Head of Section

Danish Technological Institute
Robot Technology

Gareth Edwards

R&D Team Manager

AGROINTELLI A/S

Hans Carstensen

CPO

AGROINTELLI A/S

Maria-Theresa Oanh Hoang

PhD Student

Aalborg University
Department of Computer Science

Alexandra Hettich

PhD Student

Copenhagen Business School
Department of Digitalization

Kasper Grøntved

PhD Student

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Partners

Categories
Bridge project

EXPLAIN-ME: Learning to Collaborate via Explainable AI in Medical Education

DIREC project

Explain me

- Learning to Collaborate via Explainable AI in Medical Education

Summary

In the Western world, roughly one in ten medical diagnoses is estimated to be wrong, with the result that patients do not receive the right treatment. A contributing cause is lack of experience and training among medical staff.

Together with clinicians, this project aims to develop explainable AI that can help medical staff make qualified decisions by acting as a mentor that gives feedback and advice while the staff train. It is essential that the explainable AI provides good explanations that are easy to understand and to use within the medical staff’s workflow.

Project period: 2021-2025
Budget: DKK 28.44 million

AI is widely deployed in assistive medical technologies, such as image-based diagnosis, to solve highly specific tasks with feasible model optimization. However, AI is rarely designed as a collaborator for the healthcare professionals, but rather as a mechanical substitute for part of a diagnostic workflow. From the AI researcher’s point of view, the goal of development is to beat state-of-the-art on narrow performance parameters, which the AI may solve with superhuman accuracy.

However, for more general problems such as full diagnosis, treatment execution, or explaining the background for a diagnosis, the AI is still not to be trusted. Hence, clinicians do not always perceive AI solutions as helpful in solving their clinical tasks, as they only solve part of the problem sufficiently well. The EXPLAIN-ME initiative seeks to create AIs that help solve the overall general tasks in collaboration with the human health care professional.

To do so, we need not only to provide interpretability in the form of explainable AI models — we need to provide models whose explanations are easy to understand and utilize during the clinician’s workflow. Put simply, we need to provide good explanations.

Unmet technical needs
It is not hard to agree that good explanations are better than bad explanations. In this project, however, we aim to establish methods and collect data that allow us to train and validate the quality of clinical AI explanations in terms of how understandable and useful they are.

AI support should neither distract from nor hinder ongoing tasks, and the need for support fluctuates, e.g., throughout a surgical procedure. As such, the relevance and utility of AI explanations are highly context- and task-dependent. Through collaboration with Zealand University Hospital, we will develop explainable AI (XAI) feedback for human-AI collaboration in static clinical procedures, where data is collected and analyzed independently, e.g., when diagnosing cancer from scans collected beforehand in a different unit.

In collaboration with CAMES and NordSim, we will implement human-AI collaboration in simulation centers used to train clinicians in dynamic clinical procedures, where data is collected on the fly — e.g. for ultrasound scanning of pregnant women, or robotic surgery. We will monitor the clinicians’ behavior and performance as a function of feedback provided by the AI. As there are no actual patients involved in medical simulation, we are also free to provide clinicians with potentially bad explanations, and we may use the clinicians’ responses to freely train and evaluate the AI’s ability to explain.

Unmet clinical needs
In the Western World, medical errors are only exceeded by cancer and heart diseases in the number of fatalities caused. About one in ten diagnoses is estimated to be wrong, resulting in inadequate and even harmful care. Errors occur during clinical practice for several reasons, but most importantly, because clinicians often work alone with minimal expert supervision and support. The EXPLAIN-ME initiative aims to create AI decision support systems that take the role of an experienced mentor providing advice and feedback.

This initiative seeks to optimize the utility of feedback provided by healthcare explainable AI (XAI). We will approach this problem both in static healthcare applications, where clinical decisions are based on data already collected, and in dynamic applications, where data is collected on the fly to continually improve confidence in the clinical decision. Via an interdisciplinary effort between XAI, medical simulation, participatory design and HCI, we aim to optimize the explanations provided by the XAI to be of maximal utility for clinicians, supporting technology utility and acceptance in the clinic.

Case 1: Renal tumor classification
Classification of a renal tumor as malignant or benign is an example of a decision that must be taken under time pressure. If malignant, the patient should be operated on immediately to prevent the cancer from spreading to the rest of the body; consequently, a false positive diagnosis may lead to the unnecessary removal of a kidney and other complications. While AI methods can be shown statistically to be more precise than an expert physician, they need to be extended with explanations for their decisions, and only the physicians know what “a good explanation” is. This motivates a collaborative design and development process to find the best balance between what is technically possible and what is clinically needed.

Case 2: Ultrasound Screening
Even before birth, patients suffer from erroneous decisions made by healthcare workers. In Denmark, 95% of all pregnant women participate in the national ultrasound screening program aimed at detecting severe maternal-fetal disease. Correct diagnosis is directly linked to the skills of the clinicians, and only about half of all serious conditions are detected before birth. AI feedback, therefore, comes with the potential to standardize care across clinicians and hospitals. At DTU, KU and CAMES, ultrasound imaging will be the main case for development, as data access and management, as well as manual annotations, are already in place. We seek to give the clinician feedback during scanning, such as whether the current image is a standard ultrasound plane; whether it has sufficient quality; whether the image can be used to predict clinical outcomes; or how to move the probe to improve image quality.
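
A common XAI building block for this kind of scanning feedback is a gradient-based saliency map, which highlights the pixels that drive the model’s standard-plane score so the clinician can see what the model is reacting to. The sketch below is a minimal PyTorch version; the function name and the choice of plain input gradients are illustrative assumptions, not the project’s actual pipeline.

    import torch

    def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
        """Per-pixel saliency: magnitude of d(top-class score)/d(pixel).
        `image` is a (C, H, W) tensor; the returned map is (H, W)."""
        model.eval()
        x = image.clone().requires_grad_(True)
        score = model(x.unsqueeze(0)).squeeze(0).max()  # top-class logit
        score.backward()
        return x.grad.abs().amax(dim=0)  # collapse channels into one heat map

Overlaid on the live image, such a map would let the clinician judge whether the model attends to the correct anatomical landmarks when it accepts or rejects a view, which is exactly the kind of explanation quality this project aims to measure and optimize.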

Case 3: Robotic Surgery
AAU and NordSim will collaborate on the assessment and development of robotic surgeons’ skills, associated with an existing clinical PhD project. Robotic surgery allows surgeons to do their work with more precision and control than traditional surgical tools, thereby reducing errors and increasing efficiency. AI-based decision support is expected to have a further positive effect on outcomes. The usability of AI decision support is critical, and this project will study temporal aspects of the human-AI collaboration, such as how to present AI suggestions in a timely manner without interrupting the clinician; how to hand over tasks between a member of the medical team and an AI system; and how to handle disagreement between the medical expert and the AI system.

In current healthcare AI research and development, there is often a gap between the needs of clinicians and the developed solutions. This comes with a lost opportunity for added value: We miss out on the potential clinical value of creating standardized, high-quality care across demographic groups. Just as importantly, we miss out on added business value: If the first, research-based step in the development chain is unsuccessful, then there will also be fewer spin-offs and start-ups, less knowledge dissemination to industry, and overall less innovation in healthcare AI.

The EXPLAIN-ME initiative will address this problem:

  • We will improve clinical interpretability of healthcare AI by developing XAI methods and workflows that allow us to optimize XAI feedback for clinical utility, measured both on clinical performance and clinical outcomes.
  • We will improve clinical technology acceptance by introducing these XAI models in clinical training via simulation-laboratories.
  • We will improve business value by creating a prototype for collaborative, simulation-based deployment of healthcare AI. This comes with great potential for speeding up industrial development of healthcare AI: Simulation-based testing of algorithms can begin while algorithms still make mistakes, because there is no risk of harming patients. This, in particular, can speed up the timeline from idea to clinical implementation, as the simulation-based testing is realistic while not requiring the usual ethical approvals.

This comes with great potential value: While AI has transformed many aspects of society, its impact on the healthcare sector is so far limited. Diagnostic AI is a key topic in healthcare research, but only marginally deployed in clinical care. This is partly explained by the low interpretability of state-of-the-art AI, which negatively affects both patient safety and clinicians’ technology acceptance. This is also explained by the typical workflow in healthcare AI research and development, which is often structured as parallel tracks where AI researchers independently develop technical solutions to a predefined clinical problem, while only occasionally interacting with the clinical end-users.

This often results in a gap between the clinicians’ needs and the developed solution. The EXPLAIN-ME initiative aims to close this gap by developing AI solutions that are designed to interact with clinicians in every step of the design-, training-, and implementation process.

Value

The project will develop explainable AI that can help medical staff make qualified decisions by taking on the role of a mentor.

News / coverage

Participants

Project Manager

Aasa Feragen

Professor

Technical University of Denmark
DTU Compute

E: afhar@dtu.dk

Anders Nymark Christensen

Associate Professor

Technical University of Denmark
DTU Compute

Mads Nielsen

Professor

University of Copenhagen
Department of Computer Science

Mikael B. Skov

Professor

Aalborg University
Department of Computer Science

Niels van Berkel

Associate Professor

Aalborg University
Department of Computer Science

Henning Christiansen

Professor

Roskilde University
Department of People and Technology

Jesper Simonsen

Professor

Roskilde University
Department of People and Technology

Henrik Bulskov Styltsvig

Associate Professor

Roskilde University
Department of People and Technology

Martin Tolsgaard

Associate Professor

CAMES Rigshospitalet

Morten Bo Svendsen

Chief Engineer

CAMES Rigshospitalet

Sten Rasmussen

Professor, Head

Dept. of Clinical Medicine
Aalborg University

Mikkel Lønborg Friis

Director

NordSim
Aalborg University

Nessn Htum Azawi

Associate Professor,
Head of Research Unit & Renal Cancer team

Department of Urology
Zealand University Hospital

Manxi Lin

PhD Student

Technical University of Denmark
DTU Compute

Naja Kathrine Kollerup

PhD Student

Aalborg University
Department of Computer Science

Jakob Ambsdorf

PhD Student

University of Copenhagen
Department of Computer Science

Daniel van Dijk Jacobsen

PhD Student

Roskilde University
Department of People and Technology

Partners