
Explainable AI to increase hospitals’ use of AI

26 November 2021


In a new DIREC project, AI researchers are collaborating with hospitals to create more useful AI and AI algorithms that are easier to understand.

AI (artificial intelligence) is gradually gaining ground in assistive medical technologies such as image-based diagnosis, where artificial intelligence analyzes CT scans with superhuman precision. However, AI is rarely designed as a collaborator for healthcare professionals.

In EXPLAIN-ME, a new human-AI project supported by the national research centre DIREC, AI researchers and medical staff will develop explainable artificial intelligence (Explainable AI – XAI) that can give clinicians feedback when they train in hospitals’ training clinics.

“In the Western world, about one in ten diagnoses is judged to be incorrect, so patients do not get the right treatment. One explanation may be a lack of experience and training. Our XAI model will help the medical staff make decisions, acting a bit like a mentor who gives advice and feedback while they train,” explains Aasa Feragen, professor at DTU Compute and project manager.

In the project, DTU, the University of Copenhagen, Aalborg University, and Roskilde University collaborate with doctors at the training and simulation center CAMES at Rigshospitalet, NordSim at Aalborg University Hospital, and oncologists at the Department of Urology at Zealand University Hospital in Roskilde.

Ultrasound scans of pregnant women


At CAMES, DTU and the University of Copenhagen will develop an XAI model that looks over the shoulders of doctors and midwives while they perform ultrasound scans on ‘pregnant’ training dolls at the training clinic.

In ultrasound scanning, clinicians work from specific ‘standard planes’, which show different parts of the fetus’s anatomy so that complications are easier to see and react to. These rules are implemented in the XAI model, which is integrated into a simulator that gives the doctor ongoing feedback.
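
As a purely illustrative sketch of such rule-based feedback (the plane names, required structures, and messages below are invented for this example, not taken from the project's actual model):

```python
# Illustrative sketch only: check which anatomical structures required for a
# given ultrasound 'standard plane' are visible, and phrase mentor-style
# feedback. Plane definitions and structure names are hypothetical.

REQUIRED_STRUCTURES = {
    "femur_plane": {"femur", "thigh_outline"},
    "abdominal_plane": {"stomach", "umbilical_vein", "spine"},
}

def feedback(plane: str, detected: set[str]) -> list[str]:
    """Return one feedback message per required structure missing from view."""
    missing = REQUIRED_STRUCTURES[plane] - detected
    return [f"Adjust the probe: '{m}' is not visible." for m in sorted(missing)]

print(feedback("abdominal_plane", {"stomach", "spine"}))
```

A real system would of course detect the structures with a trained model rather than receive them as a set; the point is only that plane-completeness rules translate directly into actionable messages.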

“It would be great if XAI could help less trained doctors to do scans that are on a par with the highly trained doctors.”
Professor and Project Manager Aasa Feragen

The researchers train the artificial intelligence on real data from Rigshospitalet’s ultrasound scans from 2009 to 2018 – primarily images from the routine nuchal translucency and malformation scans offered to all Danish pregnant women approximately 12 and 20 weeks into pregnancy. Before the XAI models are ready to use at the training clinic, the researchers first have to check whether the models also work in the simulator, since they are trained on real data while the training doll produces artificial data.

According to doctors, the quality of ultrasound scans and the ability to make accurate diagnoses depend on how much training the doctors have received.

“If our model can tell the doctor during the scan that a foot is missing in the picture, the doctor may be able to learn faster. If we get the XAI model to tell us that the probe on the ultrasound device needs to be moved a bit to get everything in the picture, then maybe it can be used in medical practice as well. It would be great if XAI could help less trained doctors to do scans that are on a par with the highly trained doctors,” says Aasa Feragen.

Martin Grønnebæk Tolsgaard, research associate professor and head of CAMES’ research team for artificial intelligence, emphasizes that many doctors are interested in getting help from AI technology to find the best treatment for patients. In his view, explainable AI is the way to go.

“Many of the AI models that exist today do not provide much insight into why they reach a particular decision. It is important for us to understand this better. If the model does not explain why it reaches a given decision, clinicians do not trust that decision. So if you want to use AI to make clinicians better, we need good explanations – that is, Explainable AI.”

Ongoing feedback on robotic surgery


Robotic surgery allows surgeons to perform their work with more precision and control than traditional surgical tools. It reduces errors and increases efficiency, and the expectation is that AI will be able to improve the results further.

In Aalborg, the researchers will develop an XAI model that supports the doctors at the training centre NordSim, where both Danish and foreign doctors can practice surgical procedures in simulators, e.g. on pig hearts. The model must provide ongoing feedback to the clinicians while they practice an operation, without interfering, says Mikael B. Skov, professor at the Department of Computer Science at Aalborg University:

“Today, you typically only learn that you should have done something differently once you have finished practicing an operation. We would like to look at how this feedback can be given more continuously, so trainees better understand whether they have done something right or wrong. The feedback should be designed so that people learn faster and, at the same time, make fewer mistakes before they have to go out and perform real operations. We therefore need to look at how to develop different types of feedback, such as warnings that do not interrupt too much.”

Image analysis in kidney cancer


Doctors often have to make decisions under time pressure, e.g. in connection with cancer diagnoses, to prevent cancer from spreading. A false-positive diagnosis could therefore cause a healthy kidney to be removed and inflict other complications. Although experience shows that AI methods are more accurate in their assessments, clinicians need a good explanation of why the mathematical models classify a tumor as benign or malignant.

In the DIREC project, researchers from Roskilde University will develop methods in which artificial intelligence analyzes medical images for use in diagnosing kidney cancer. Clinicians will help them understand what feedback is needed from the AI models to balance what is technically possible and what is clinically necessary.

“It is important that the technology can be included in the hospitals’ practice, and therefore we focus in particular on designing these methods within ‘Explainable AI’ in direct collaboration with the doctors who actually use it in their decision-making. Here we draw in particular on our expertise in Participatory Design, which is a systematic approach to achieve the best synergy between what the AI researchers come up with in terms of technological innovations and what doctors need,” says Henning Christiansen, professor in computer science at the Department of People and Technology at Roskilde University.

About DIREC – Digital Research Centre Denmark

The purpose of the national research centre DIREC is to bring Denmark to the forefront of the latest digital technologies through world-class digital research. To meet the great demand for highly educated IT specialists, DIREC also works to expand the capacity within both research and education of computer scientists. The centre has a total budget of DKK 275 million and is supported by the Innovation Fund Denmark with DKK 100 million. The partnership consists of a unique collaboration across the computer science departments at Denmark’s eight universities and the Alexandra Institute.

The activities in DIREC are based on societal needs, where research is continuously translated into value-creating solutions in collaboration with the business community and the public sector. The projects operate across industries with a focus on artificial intelligence, the Internet of Things, algorithms and cybersecurity, among others.

Read more at direc.dk

EXPLAIN-ME

Partners in the project EXPLAIN-ME: Learning to Collaborate via Explainable AI in Medical Education

  • DTU (DTU Compute – Department of Applied Mathematics and Computer Science)
  • University of Copenhagen
  • Aalborg University
  • Roskilde University
  • CAMES – Copenhagen Academy for Medical Education and Simulation at Rigshospitalet in Copenhagen
  • NordSim – Center for skills training and simulation at Aalborg University Hospital
  • Department of Urology at Zealand University Hospital in Roskilde

Project period: 1 October 2021 to 30 April 2025

Contact: 
Aasa Feragen
DTU Compute
M: +45 26 22 04 98
afhar@dtu.dk

Anders Nymark Christensen
DTU Compute
+45 45 25 52 58
anym@dtu.dk


Verifiable and Robust AI

Project type: Explore Project


The challenge to the research community is how to extend existing verification technologies to cope with software systems comprising AI components (see the report of the Dagstuhl Seminar “Machine Learning and Model Checking Join Forces”, 2018). This is uncharted territory and one of the most pressing research challenges in AI. The industrial importance of this topic is closely related to the question of liability in case of malfunctioning products. Over a four-month period, the explore project will provide a state-of-the-art survey and identify research directions to be followed.

Participants

Project Manager

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

E: kgl@cs.aau.dk

Thomas Dyhre Nielsen

Professor

Aalborg University
Department of Computer Science

Manfred Jaeger

Associate Professor

Aalborg University
Department of Computer Science

Andrzej Wasowski

Professor

IT University of Copenhagen
Department of Computer Science

Rune Møller Jensen

Associate Professor

IT University of Copenhagen
Department of Computer Science

Peter Schneider-Kamp

Professor

University of Southern Denmark
Department of Mathematics and Computer Science

Jaco van de Pol

Professor

Aarhus University
Department of Computer Science

Thomas Hildebrandt

Professor

University of Copenhagen
Department of Computer Science

Alberto Lluch Lafuente

Associate Professor

Technical University of Denmark
DTU Compute

Flemming Nielson

Professor

Technical University of Denmark
DTU Compute

Thomas Bolander

Professor

Technical University of Denmark
DTU Compute

Thomas Asger Hansen

Head of Analytics and AI

Grundfos

Christian Rasmussen

Senior Manager Data Analytics

Grundfos

Malte Skovby Ahm

Research and business lead

Aarhus Vand

Partners


Privacy and Machine Learning

Project type: SCITECH Project


There is an unmet need for decentralised privacy-preserving machine learning. Cloud computing has great potential; however, there is a lack of trust in the service providers and a risk of data breaches. A lot of data are private and stored locally for good reasons, but combining the information in a global machine learning (ML) system could lead to services that benefit all. For instance, consider a consortium of banks that want to improve fraud detection by pooling their customers’ payment data and merging these with data from, e.g., Statistics Denmark. However, for competitive reasons the banks want to keep their customers’ data secret, and Statistics Denmark is not allowed to share the required sensitive data. As another example, consider patient information (e.g., medical images) stored at hospitals. It would be valuable to build diagnostic and prognostic tools using ML based on these data; however, the data can typically not be shared.
The research aim of the project is the development of AI methods and tools that enable industry to develop new solutions for automated image-based quality assessment. End-to-end learning of features and representations for object classification by deep neural networks can lead to significant performance improvements. Several recent mechanisms have been developed for further improving performance and reducing the need for manual annotation work (labelling), including semi-supervised learning strategies and data augmentation. Semi-supervised learning combines generative models that are trained without labels (unsupervised learning) and pre-trained networks (transfer learning) with supervised learning on small sets of labelled data. Data augmentation employs both knowledge-based transformations, such as translations and rotations, and more general learned transformations, like parameterised “warps”, to increase variability in the training data and increase robustness to natural variation.
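
Knowledge-based augmentation of the kind described can be sketched in a few lines; the toy "image" and shift set below are invented for illustration:

```python
# Minimal sketch of knowledge-based data augmentation: shift a tiny grayscale
# "image" (a list of rows) by (dy, dx) with zero padding, producing extra
# training examples that encode translation invariance. Purely illustrative.

def translate(img, dy, dx):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def augment(img, shifts=((0, 1), (1, 0), (0, -1), (-1, 0))):
    """One augmented copy per shift, plus the original image."""
    return [img] + [translate(img, dy, dx) for dy, dx in shifts]

img = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]]
print(len(augment(img)))  # 5
```

Learned transformations such as parameterised warps follow the same pattern, with the transformation itself fitted to the data rather than fixed in advance.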
Researching secure use of sensitive data will benefit society at large. ML based on CoED (computation on encrypted data) solves the fundamental problem of keeping private input data private while still enabling the use of the most applied analytical tools. The CoED privacy-preserving technology reduces the risk of data breaches. It allows for secure use of cloud computing, with no single point of failure, and removes the fundamental cloud security problem of missing trust in service providers. The project will bring together leading experts in CoED and ML. It may serve as a starting point for attracting additional national and international funding, and it will build up competences highly relevant for Danish industry. The concepts developed in the project may change how organisations collaborate and allow for innovative ways of using data, which can increase the competitiveness of Danish companies relative to large international players.
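
The idea of computing on data that no single party may reveal can be illustrated with additive secret sharing, a textbook building block of secure multiparty computation (this sketch is illustrative and not the project's actual protocol):

```python
# Additive secret sharing over a prime field: each party splits its private
# value into random shares; combining all parties' shares reconstructs only
# the aggregate, never any individual input. Textbook sketch.
import secrets

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(value, n_parties):
    """Split `value` into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_sum(all_shares):
    """Each column of shares is held by one party; only the total is revealed."""
    return sum(sum(col) for col in zip(*all_shares)) % P

inputs = [120, 45, 300]  # e.g. each bank's private fraud count (invented numbers)
shares = [share(v, 3) for v in inputs]
print(secure_sum(shares))  # 465
```

Each individual share is uniformly random, so a party learns nothing about the others' inputs; only the sum, which all parties agree to reveal, emerges.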

October 1, 2020 – September 30, 2024 – 3.5 years.

Total budget DKK 4.7 million / DIREC investment DKK 3.22 million

Participants

Project Manager

Peter Scholl

Assistant Professor

Aarhus University
Department of Computer Science

E: peter.scholl@cs.au.dk

Ivan Bjerre Damgaard

Professor

Aarhus University
Department of Computer Science

Christian Igel

Professor

University of Copenhagen
Department of Computer Science

Kurt Nielsen

Associate Professor

University of Copenhagen
Department of Food and Resource Economics

Partners


Machine Learning Algorithms Generalisation

Project type: SCITECH Project


AI is radically changing society, and the main driver behind new AI methods and systems is machine learning. Machine learning focuses on finding solutions for, or patterns in, new data by learning from relevant existing data. Machine learning algorithms are thus often applied to large datasets, where they more or less autonomously find good solutions by uncovering relevant information or patterns hidden in the data. However, it is often not well understood why machine learning algorithms work so well in practice on completely new data – their performance often surpasses what current theory would suggest by a wide margin.

Being able to understand and predict when, why and how well machine learning algorithms work on a given problem is critical for knowing when they may be applied and trusted, in particular in more critical systems. Understanding why the algorithms work is also important in order to be able to drive the machine learning field forward in the right direction, improving upon existing algorithms and designing new ones.

The goal of this project is to research and develop a better understanding of the generalisation capability of the most used machine learning algorithms, including boosting algorithms, support vector machines and deep learning algorithms. The result will be new generalisation bounds – both positive bounds showing what can be achieved and negative bounds showing what cannot.
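
A classical example of the kind of bound in question, quoted here purely for illustration (a standard textbook uniform-convergence result, not a result of this project): for a loss bounded in $[0,1]$, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, every hypothesis $h$ in the class $\mathcal{H}$ satisfies

```latex
R(h) \;\le\; \widehat{R}(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}}
```

where $R(h)$ is the true risk, $\widehat{R}(h)$ the empirical risk on the sample, and $\mathfrak{R}_n(\mathcal{H})$ the Rademacher complexity of the class. Bounds of this form are "positive" results; matching lower bounds show what no algorithm in the class can guarantee.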

This will allow us to more fully understand the current possibilities and limits, and thus drive the development of new and better methods. Ultimately, this will provide better guarantees for the quality of the output of machine learning algorithms in a variety of domains.

Researching the theoretical foundation for machine learning (and thus essentially all AI based systems) will benefit society at large, since a solid theory will allow us to formally argue and understand when and under which conditions machine learning algorithms can deliver the required quality.

As an added value, the project will bring together leading experts in Denmark in the theory of algorithms to (further) develop the fundamental theoretical basis of machine learning. Thus, it may serve as a starting point for additional national and international collaboration and projects, and it will build up competences highly relevant for Danish industry.

October 1, 2020 – September 30, 2024 – 3.5 years.

Total budget DKK 2.41 million / DIREC investment DKK 1.55 million

Participants

Project Manager

Kasper Green Larsen

Associate Professor

Aarhus University
Department of Computer Science

E: larsen@cs.au.dk

Allan Grønlund

Postdoc

Aarhus University
Department of Computer Science

Mikkel Thorup

Professor

University of Copenhagen
Department of Computer Science

Martin Ritzert

Postdoc

Aarhus University
Department of Computer Science

Partners



18 November 2021

The future of hybrid work, collaborative robots and AI in hospitals:

Launch of five new digital research and innovation projects totalling DKK 115 million

Zoom and Teams meetings have become common during the COVID-19 pandemic, but what should future work practices look like, and how can they support future remote and hybrid work? Researchers and companies will explore these questions in one of the five new projects recently launched by the national research centre for advanced digital technologies, DIREC.

The centre, which is funded by Innovation Fund Denmark, is a collaboration between the computer science departments at the eight Danish universities and the Alexandra Institute. In total, we have initiated projects worth DKK 115 million, of which DKK 31.7 million is funded by DIREC. Other projects focus on how to control and program multiple robots simultaneously, how to secure IoT devices, how to accelerate the use of artificial intelligence in hospitals and how to support artificial intelligence on very small devices, such as smart thermostats, windows and garage doors.

The power of these projects is that they are carried out across university and industry boundaries, Thomas Riisgaard Hansen, director of DIREC explains:

“The most innovative solutions always emerge from cross-disciplinary collaboration. It may be across universities, academic disciplines and across research and industry. Thus, it is a requirement for the projects we launch that they include several partners with different competences”.

The research and innovations projects are:

EXPLAIN-ME: Learning to collaborate via explainable AI in medical education

HERD: Human-AI collaboration: Engaging and controlling swarms of robots and drones

REWORK: The futures of hybrid work

Embedded AI

SIOT: Secure Internet of Things – Risk analysis in design and operation

 


Explainable AI

Project type: Explore Project


Artificial intelligence brings the promise of technological means to solve problems that were previously assumed to require human intelligence – and, ultimately, of human-centered solutions that, through a synergy between the human and the AI system, are both more effective and of higher quality than solutions provided by humans or by an AI system alone.

However, compared to traditional problem solving based on logical rules and procedures, some artificial intelligence systems, in particular systems based on neural networks (e.g. as in deep learning), do not offer a human-understandable explanation of the answers given. Lack of explanation is not necessarily a problem, e.g. if the correctness of an answer can easily be validated, such as automatic character recognition subsequently validated by a human. However, in some situations, a lack of explanation may pose severe problems, and may even be illegal, as is the case for governmental decisions.
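
To make the notion of a human-understandable explanation concrete, here is a minimal, purely illustrative sketch of perturbation-based attribution, one common model-agnostic XAI technique. The toy model, feature names, and weights are invented for this example:

```python
# Perturbation-based attribution: the importance of a feature is the drop in
# the model's score when that feature is blanked out. Works for any black-box
# scorer; the toy linear model below stands in for a trained network.

def toy_model(features):
    weights = {"income": 0.5, "age": 0.1, "debt": -0.8}  # invented weights
    return sum(weights[k] * v for k, v in features.items())

def attributions(model, features, baseline=0.0):
    """Score drop per feature when it is replaced by the baseline value."""
    full = model(features)
    return {k: full - model({**features, k: baseline}) for k in features}

x = {"income": 2.0, "age": 1.0, "debt": 1.5}
attr = attributions(toy_model, x)
# 'debt' receives the largest-magnitude attribution for this input.
```

An explanation of this kind ("the decision was driven mostly by debt") is exactly what a purely numeric score withholds.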

Participants

Project Manager

Thomas Hildebrandt

Professor

University of Copenhagen
Department of Computer Science

E: hilde@di.ku.dk

Irina Shklovski

Professor

University of Copenhagen
Department of Computer Science

Naja Holten Møller

Assistant Professor

University of Copenhagen
Department of Computer Science

Hugo Lopez

Assistant Professor

University of Copenhagen
Department of Computer Science

Boris Düdder

Associate Professor

University of Copenhagen
Department of Computer Science

Tijs Slaats

Associate Professor

University of Copenhagen
Department of Computer Science

Henrik Korsgaard

Assistant Professor

Aarhus University
Department of Computer Science

Susanne Bødker

Professor

Aarhus University
Department of Computer Science

Lars Kai Hansen

Professor

Technical University of Denmark
DTU Compute

Thomas Bolander

Professor

Technical University of Denmark
DTU Compute

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

Thomas Dyhre Nielsen

Professor

Aalborg University
Department of Computer Science

Alessandro Tibo

Assistant professor

Aalborg University
Department of Computer Science

Manfred Jaeger

Associate Professor

Aalborg University
Department of Computer Science

Anders Lyhne Christensen

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Sebastian Risi

Professor

IT University of Copenhagen
Digital Design Department

Lars Rune Christensen

Assistant professor

IT University of Copenhagen
Department of Business IT

Arisa Shollo

Associate Professor

Copenhagen Business School
Department of Digitalization

Rasmus Larsen

AI Specialist

The Alexandra Institute

Peter C. Damm

Applied Research Director

KMD

Mathias Niepert

Manager & Chief Research Scientist

NEC Labs Europe
Heidelberg

Tobias Jacobs

Senior Researcher

NEC Labs Europe
Department of Computer Science

Morten Marquard

Founder & CEO

DCR Solutions

Partners


REWORK – The future of hybrid work

Project type: Bridge Project


The recent COVID-19 pandemic, and the attendant lockdown, have demonstrated the potential benefits and possibilities of remote work practices, as well as the glaring deficiencies such practices bring. Zoom fatigue, resulting from high cognitive loads and intense amounts of eye contact, is just the tip of an uncomfortable iceberg where the problem of embodied presence remains a stubborn limitation. Remote and hybrid work will certainly be part of the future of most work practices, but what should these future work practices look like? Should we merely attempt to fix what we already have, or can we be bolder and speculate about different kinds of workplace futures? We seek a vision of the future that integrates hybrid work experiences with grace and decency. This project will focus on the following research question: what are the possible futures of embodied presence in hybrid and remote work conditions?

There are a multitude of reasons to embrace remote and hybrid work. Climate concerns are increasing, borders are difficult to cross, work/life balance may be easier to attain, power distributions in society could potentially be redressed, to name a few. This means that the demand for Computer Supported Cooperative Work (CSCW) systems that support hybrid work will increase significantly. At the same time, we consistently observe and collectively experience that current digital technologies struggle to mediate the intricacies of collaborative work of many kinds. Even when everything works, from network connectivity to people being present and willing to engage, there are aspects of embodied co-presence that are almost impossible to achieve digitally.

We argue that one major weakness in current remote work technologies is the lack of support for relation work and articulation work, caused by limited embodiment. The concept of relation work denotes the fundamental activities of creating socio-technical connections between people and artefacts during collaborative activities, enabling actors in a global collaborative setting to engage each other in activities such as articulation work. We know that articulation work cannot be handled in the same way in hybrid remote environments. The fundamental difference is that strategies of awareness, coordination mechanisms, and the use of artefacts are embedded in the physical surroundings; they cannot simply be transferred to the hybrid setting but instead require translation.

Actors in hybrid settings must create and connect the foundational network of globally distributed people and artefacts in a multitude of ways.

In REWORK, we focus on enriching digital technologies for hybrid work. We will investigate ways to strengthen relation work and articulation work through explorations of embodiment and presence. To imagine futures and technologies that can be otherwise, we look to artistic interventions, getting at the core of engagement and reflection on the future of remote and hybrid work by imagining and making alternatives through aesthetic speculations and prototyping of novel multimodal interactions (using the audio, haptic, visual, and even olfactory modalities). We will explore the limits of embodiment in remote settings by uncovering the challenges and limitations of existing technical solutions, following a similar approach as some of our previous research.

Scientific value
REWORK will develop speculative techniques and ideas that can help rethink the practices and infrastructures of remote work and its future. REWORK focuses on more than just the efficiency of task completion in hybrid work. Rather, we seek to foreground and productively support the invisible relation and articulation work that is necessary to ensure overall wellbeing and productivity.

Specifically, REWORK will contribute:

  1. Speculative techniques for thinking about the future of remote work;
  2. Multimodal prototypes to inspire a rethink of remote work;
  3. Design Fictions anchoring future visions in practice;
  4. Socio-technical framework for the future of hybrid remote work practices;
  5. Toolkits for industry.

The research conducted as part of REWORK will produce substantial scientific contributions disseminated through scientific publications in top international journals and conferences relevant to the topic. The scientific contributions will constitute both substantive insights and methodological innovations. These will be targeting venues such as the Journal of Human-Computer Interaction, ACM TOCHI, Journal of Computer Supported Cooperative Work, the ACM CHI conference, NordiCHI, UIST, DIS, Ubicomp, ICMI, CSCW, and others of a similar level.

The project will also engage directly and closely with industries of different kinds, from startups that are actively envisioning new technology to support different types of hybrid work (Cadpeople, Synergy XR, and Studio Koh) to organizations that are trying to find new solutions to accommodate changes in work practices (Arla, Bankdata, Keyloop, BEC).

Part of the intent of engagement with the artistic collaboratory is to create bridges between artistic explorations and practical needs articulated by relevant industry actors. REWORK will enable the creation of hybrid fora to enable such bridging. The artistic collaboratory will enable the project to engage with the general public through an art exhibit at Catch, public talks, and workshops. It is our goal to exhibit some of the artistic output at a venue, such as Ars Electronica, that crosses artistic and scientific audiences.

Societal value
The results of REWORK have the potential to change everybody’s work life broadly. We all know that “returning to work after COVID-19” will not mean returning to the same workplace – and the combined situation of hybrid work will be a challenge. Through the research conducted in REWORK, individuals who must navigate the demands of hybrid work, and the organizations that must develop policies and practices to support such work, will benefit from an improved sense of embodiment and awareness, leading to more effective collaboration.

REWORK will take broadening participation and public engagement seriously, by offering online and in-person workshops/events through a close collaboration with the arts organization Catch (catch.dk). The workshops will be oriented towards particular stakeholder groups – artists interested in exploring the future of hybrid work, industry organizations interested in reconfiguring their existing practices – and open public events.

Capacity building
There are several ways in which REWORK contributes to capacity building. Firstly, by collaborating with the Alexandra Institute, we will create a multimodal toolbox/demonstrator facility that can be used in education and in industry.

REWORK will work closely with both industry partners (through the Alexandra Institute) and cultural and public institutions (e.g. catch.dk) for collaboration and knowledge dissemination, in the general spirit of DIREC.

We will include the findings from REWORK in our research-based teaching at all three universities. Furthermore, we plan to host a PhD course, or a summer school, on the topic in Year 2 or Year 3. Participants will be recruited nationally and internationally.

Lastly, in terms of public engagement, HCI and collaborative technologies are disciplines that can be attractive to the public at large, so there will be at least one REWORK Open Day where we will invite interested participants and the DIREC industrial collaborators.

January 1, 2022 – December 31, 2024 – 3 years.

Participants

Project Manager

Eve Hoggan

Professor

Aarhus University
Department of Computer Science

E: eve.hoggan@cs.au.dk

Susanne Bødker

Professor

Aarhus University
Department of Computer Science

Irina Shklovski

Professor

University of Copenhagen
Department of Computer Science

Pernille Bjørn

Professor

University of Copenhagen
Department of Computer Science

Louise Barkhuus

Professor

IT University of Copenhagen
Department of Computer Science

Naja Holten Møller

Assistant Professor

University of Copenhagen
Department of Computer Science

Nina Boulus-Rødje

Associate Professor

Roskilde University
Department of People and Technology

Allan Hansen

Head of Digital Experience and Solutions Lab

The Alexandra Institute

Mads Darø Kristensen

Principal Application Architect

The Alexandra Institute

Partners


SIOT – Secure Internet of Things – Risk analysis in design and operation

Project type: Bridge Project


When developing novel IoT services or products today, it is essential to consider the potential security implications of the system and to take those into account before deployment. Due to the criticality and widespread deployment of many IoT systems, the need for security in these systems has even been recognised at the government and legislative level, e.g., in the US and the UK, resulting in proposed legislation to enforce at least a minimum of security consideration in deployed IoT products.

However, developing secure IoT systems is notoriously difficult, not least due to the characteristics of many such systems: they often operate in unknown and frequently in privacy-sensitive environments, engage in communication using a wide variety of protocols and technologies, and must perform essential tasks such as monitoring and controlling (physical) entities. In addition, IoT systems must often perform within real-time bounds on limited computing platforms and at times even with a limited energy budget. Moreover, with the increasing number of safety-critical IoT devices (such as medical devices and industrial IoT devices), IoT security has become a public safety issue. To develop a secure IoT system, one should take into account all of the factors and characteristics mentioned above and balance them against functionality and performance requirements. Such a risk analysis must be performed not only at the design stage, but also throughout the lifetime of the product. Besides technical aspects, the analysis should also take into account human and organizational aspects. This type of analysis will form an essential activity for standardization and certification purposes.

In this project, we will develop a modelling formalism with automated tool support for performing such risk assessments and allowing for extensive “what-if” scenario analysis. The starting point will be the well-known and widely used formalism of attack-defense trees, extended to include various quantities, e.g., cost or energy consumption, as well as game features for modelling collaboration and competition between systems and between a system and its environment.
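The core idea of quantitative attack-defense trees can be illustrated with a small sketch. This is plain Python for illustration only, not the project's formalism or tool: leaves carry attacker costs, OR nodes pick the cheapest sub-attack, AND nodes require all sub-attacks, and a deployed countermeasure makes a leaf infeasible.

```python
# Illustrative attack-defence tree with per-leaf attacker costs (a sketch,
# not the project's extended formalism). OR = cheapest sub-attack,
# AND = all sub-attacks needed, defended leaf = infeasible.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                       # "leaf", "or", or "and"
    cost: float = 0.0               # attacker cost (leaves only)
    defended: bool = False          # a countermeasure is in place
    children: List["Node"] = field(default_factory=list)

def min_attack_cost(n: Node) -> float:
    """Minimal total cost for the attacker to achieve the root goal."""
    if n.kind == "leaf":
        return float("inf") if n.defended else n.cost
    costs = [min_attack_cost(c) for c in n.children]
    return min(costs) if n.kind == "or" else sum(costs)

# Toy scenario: compromise an IoT device either via a default password
# or by combining a firmware exploit with physical access.
tree = Node("or", children=[
    Node("leaf", cost=10.0, defended=True),          # default password (patched)
    Node("and", children=[
        Node("leaf", cost=50.0),                     # firmware exploit
        Node("leaf", cost=30.0),                     # physical access
    ]),
])
print(min_attack_cost(tree))  # -> 80.0
```

Extending such trees with further quantities (time, energy, probability) and game features is exactly where the project's "what-if" analyses come in: changing `defended` flags or costs and recomputing corresponds to exploring defense scenarios.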


In summary, the project will deliver:

  • a modeling method for a systematic description of the relevant IoT system/service aspects
  • a special focus on their security, interaction, performance, and cost aspects
  • a systematic approach, through a new concept of attack‐defense‐games
  • algorithms to compute optimal strategies and trade‐offs between performance, cost and security
  • a tool to carry out quantitative risk assessment of secure IoT systems
  • a tool to carry out “what‐if” scenario analysis, to harden a secure IoT system’s design and/or operation
  • usability studies and design for usability of the tools within organizations around IoT services
  • design of training material to enforce security policies for employees within these organizations.

The main research problems are:

  1. To identify safety and security requirements (including threats, attacker models and counter measures) for IoT systems, as well as the inherent design limitations in the IoT problem domain (e.g., limited computing resources and a limited energy budget).
  2. To organize the knowledge in a comprehensive model. We propose to extend attack‐defense trees with strategic game features and quantitative aspects (time, cost, energy, probability).
  3. To transform this new model into existing “computer models” (automata and games) that are amenable to automatic analysis algorithms. We consider stochastic priced timed games as an underlying framework for such models due to their generality and existing tool support.
  4. To develop/extend the algorithms needed to perform analysis and synthesis of optimal response strategies, which form the basis of quantitative risk assessment and decision‐making.
  5. To translate the findings into instruments and recommendations for the partner companies, addressing both technical and organizational needs.
  6. To design, evaluate, and assess the user interface of the IoT security tools, which serve as an important backbone for designing and certifying IoT security training programs for stakeholder organizations.

Throughout the project, we focus on the challenges and needs of the partner companies. The concrete results and outcomes of the project will also be evaluated in the contexts of these companies. The project will combine the expertise of five partners of DIREC (AAU, AU, Alexandra, CBS and DTU) and four Work Streams from DIREC (WS7: Verification, WS6: CPS and IoT systems, WS8: Cybersecurity and WS5: HCI, CSCW and InfoVis) in a synergistic and collaborative way.

Business value
While it is difficult to make a precise estimate of the number of IoT devices, most estimates are in the range of 7-15 billion connected devices, and this number is expected to increase dramatically over the next 5-10 years. The impact of a successful attack on IoT systems can range from a nuisance, e.g., when baby monitors or thermostats are hacked, over potentially expensive DDoS attacks, e.g., when the Mirai malware turned many IoT devices into a DDoS botnet, to life-threatening, e.g., when pacemakers are not secure. Gartner predicted that worldwide spending on IoT security would grow from roughly USD 900M to USD 3.1B by 2021, out of a total IoT market of up to USD 745B.

The SIOT project will concretely contribute to the agility of the Danish IoT industry. By applying the risk analysis and secure design technologies developed in the project, companies get a fast path to certification of secure IoT devices. Hence, this project will give Danish companies a head start in the near future, where the US and UK markets will demand security certification for IoT devices; the EU is already working on security regulation for IoT devices as well. Furthermore, it is well known that the earlier in the development process a security vulnerability or programming error is found, the cheaper it is to fix. This is even more important for IoT products, which may not be updatable “over-the-air” and thus require a product recall or a physical update process. The methods and technologies developed in this project will help companies find and fix security vulnerabilities from the design and exploration phases onward, thus reducing the long-term cost of maintenance.

Societal value
It is an academic duty to contribute to safer and more secure IoT systems, since they are permeating society. Security issues quickly become safety incidents, for instance because IoT systems monitor for dangerous physical conditions. In addition, compromised IoT devices can be detrimental to our privacy, since they measure all aspects of human life. DTU and the Alexandra Institute will disseminate the knowledge and expertise through the network built in the joint CIDI project (Cybersecure IoT in Danish Industry, ending in 2021), in particular a network of Danish IoT companies interested in security, with a clear understanding of companies’ security needs.

We will strengthen the cybersecurity level of Danish companies in relation to Industry 4.0 and Internet of Things (IoT) security, which are key technological pillars of digital transformation. We will do this by means of research and lectures on several aspects of IoT security, with emphasis on security‐by‐design, risk analysis, and remote attestation techniques as a counter measure.

Capacity building
The education of PhD students is itself a contribution to capacity building. We will organize a PhD summer school towards the end of the project to disseminate the results across PhD students from DIREC and students from abroad.

We will also prepare learning materials to be integrated in existing course offerings (e.g., existing university courses, and the PhD and Master training networks of DIREC) to ensure that the findings of the project are injected into the current capacity building processes.

Through this education, we will also attract more students to the Danish labor market. The shortage of skilled people is even greater in the security area than in other parts of computer science and engineering.

February 1, 2022 – January 31, 2025 – 3 years.

Total budget DKK 25.10 million / DIREC investment DKK 6.74 million

Participants

Project Manager

Jaco van de Pol

Professor

Aarhus University
Department of Computer Science

E: jaco@cs.au.dk

Torkil Clemmensen

Professor

Copenhagen Business School
Department of Digitalization

Qiqi Jiang

Associate Professor

Copenhagen Business School
Department of Digitalization

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

René Rydhof Hansen

Associate Professor

Aalborg University
Department of Computer Science

Flemming Nielson

Professor

Technical University of Denmark
DTU Compute

Alberto Lluch Lafuente

Associate Professor

Technical University of Denmark
DTU Compute

Nicola Dragoni

Professor

Technical University of Denmark
DTU Compute

Gert Læssøe Mikkelsen

Head of Security Lab

The Alexandra Institute

Laura Lynggaard Nielsen

Senior Anthropologist

The Alexandra Institute

Zaruhi Aslanyan

Security Architect

The Alexandra Institute

Claus Riber

Senior Manager, Software Cybersecurity

Beumer Group

Poul Møller Eriksen

CTO

Develco Products

Mike Aarup

Senior Quality Engineer

Grundfos

Mads Pii

Chief Technical Officer

Logos Payment Solutions

Anders Qvistgaard Sørensen

R&D Manager

Micro Technic

Jørgen Hartig

CEO & Strategic Advisor

SecuriOT

Daniel Lux

Chief Technology Officer

Seluxit

Samant Khajuria

Chief Specialist Cybersecurity

Terma

Alyzia-Maria Konsta

PhD Student

Technical University of Denmark
DTU Compute

Mikael Bisgaard Dahlsen-Jensen

PhD Student

Aarhus University
Department of Computer Science

Partners

Categories
DIREC TALKS

DIREC TALKS: Formal Verification and Machine Learning Joining Forces

Formal Verification and Machine Learning Joining Forces

The growing pervasiveness of computerised systems such as intelligent traffic control or energy supply makes our society vulnerable to faults or attacks on such systems. Rigorous software engineering methods and supporting efficient verification tools are crucial to counter this threat.

In this DIREC talk, Kim Guldstrand Larsen will present and discuss how to combine formal verification and AI in order to obtain optimal AND guaranteed safe strategies.

The ultimate goal of synthesis is to disrupt traditional software development. Rather than tedious manual programming with endless testing and revision effort, synthesis comes with the promise of automatic correct-by-construction control software.

In formal verification, synthesis has a long history for discrete systems, dating back to Church’s problem concerning the realization of logic specifications by automata. Within AI, the use of (deep) reinforcement learning (Q- and M-learning) has emerged as a popular method for learning optimal control strategies through training, e.g., as applied in autonomous driving.

The formal verification approach and the AI approach to synthesis are highly complementary: formal verification synthesis comes with absolute guarantees but is computationally expensive, with the resulting strategies being extremely large. In contrast, AI synthesis comes with no guarantees but is highly scalable, with neural networks providing compact strategy representations.

Kim Guldstrand Larsen will present the tool UPPAAL Stratego, which combines symbolic techniques with reinforcement learning to achieve (near-)optimality and safety for hybrid Markov decision processes, and highlight some of its applications, including water management, traffic-light control, and energy-aware buildings.

Emphasis will be on the challenges of implementing learning algorithms, arguing for their convergence, and designing data structures for compact and understandable strategy representation.
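The complementarity described above can be illustrated with a deliberately tiny sketch of "shielded" learning: a verification step first computes the set of provably safe actions per state, and ordinary Q-learning then optimizes only within that set. This is an illustration of the general idea, not UPPAAL Stratego's actual algorithms.

```python
# Illustrative "shielded" reinforcement learning on a toy 6-state model:
# verification supplies safe actions, Q-learning optimizes within them.
# Not UPPAAL Stratego's algorithm; states, rewards, and dynamics are made up.

import random

N_STATES, GOAL, HAZARD = 6, 4, 5
ACTIONS = (-1, +1)

def step(s, a):
    """Deterministic toy dynamics on a line of states."""
    return max(0, min(N_STATES - 1, s + a))

# "Verification" phase: an action is safe iff it cannot enter the hazard.
safe = {s: [a for a in ACTIONS if step(s, a) != HAZARD] for s in range(N_STATES)}

# Learning phase: epsilon-greedy Q-learning restricted to safe actions.
random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(2000):
    s = random.randrange(N_STATES - 1)           # never start in the hazard
    for _ in range(20):
        acts = safe[s]
        if random.random() < 0.2:
            a = random.choice(acts)              # explore, but only safely
        else:
            a = max(acts, key=lambda b: Q[(s, b)])
        s2 = step(s, a)
        reward = 1.0 if s2 == GOAL else 0.0
        target = reward + 0.9 * max(Q[(s2, b)] for b in safe[s2])
        Q[(s, a)] += 0.1 * (target - Q[(s, a)])
        s = s2

policy = {s: max(safe[s], key=lambda b: Q[(s, b)]) for s in range(N_STATES)}
# By construction, the learned policy can never enter the hazard state.
assert all(step(s, policy[s]) != HAZARD for s in range(N_STATES))
```

The shield here is computed by brute force over six states; the point of tools like UPPAAL Stratego is to obtain such guarantees symbolically for far richer timed, stochastic, and hybrid models.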

KIM GULDSTRAND LARSEN

PROFESSOR OF COMPUTER SCIENCE,
AALBORG UNIVERSITY
Speaker

KIM GULDSTRAND LARSEN

Kim Guldstrand Larsen has been a Professor of Computer Science at Aalborg University since 1993. He has received honorary doctorates from Uppsala University (1999) and ENS Cachan (2007), an International Chair at INRIA (2016), and a Distinguished Professorship at Northeastern University, Shenyang, China (2018). His research interests cover modeling, verification, and performance analysis of real-time and embedded systems, with applications to concurrency theory, model checking, and machine learning.

He is the prime investigator of the verification tool UPPAAL, for which he received the CAV Award in 2013. Other prizes include the Danish Citation Laureates Award (Thomson Scientific Award) as the most cited Danish computer scientist in the period 1990-2004 (2005), the Grundfos Prize (2016), and Ridder af Dannebrog (2007). He is a member of the Royal Danish Academy of Sciences and Letters and of the Danish Academy of Technical Sciences, where he is a Digital Wiseman. He is also a member of Academia Europaea.

In 2015 he received the prestigious ERC Advanced Grant (LASSO), and in 2021 he won a Villum Investigator Grant (S4OS). He has been PI and director of several large centers and initiatives, including CISS (Center for Embedded Software Systems, 2002-2008), MT-LAB (Villum Kann Rasmussen Centre of Excellence, 2009-2013), IDEA4CPS (Danish-Chinese Research Center, 2011-2017), INFINIT (National ICT Innovation Network, 2009-2020), and DiCyPS (Innovation Fund Center, 2015-2021). Finally, he is co-founder of the companies UP4ALL (2000), ATS (2017), and VeriAal (2020).

Categories
Bridge project

Embedded AI

Project type: Bridge Project

Embedded AI

AI is currently limited by the need for massive data centres and centralized architectures, and by the need to move data to algorithms. To overcome this key limitation, AI will evolve from today’s highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices. This transformation will bring algorithms to the data, made possible by algorithmic agility and autonomous data discovery. It will drastically reduce the need for the high-bandwidth connectivity required to transport massive data sets, eliminate potential sacrifices of data security and privacy, and eventually allow true real-time learning at the edge.

This transformation is enabled by the merging of AI and IoT into the “Artificial Intelligence of Things” (AIoT) and has created the emerging sector of Embedded AI (eAI), where all or parts of the AI processing is done on sensor devices at the edge rather than sent to the cloud. The major drivers for Embedded AI are increased responsiveness and functionality, reduced data transfer, and increased resilience, security, and privacy. To deliver these benefits, development engineers need to acquire new skills in embedded development and systems design.

To enter and compete in the AI era, companies are hiring data scientists to build expertise in AI and create value from data. This is true for many companies developing embedded systems, for instance to control water, heat, and air flow in large facilities, large ship engines, or industrial robots, all with the aim of optimizing their products and services.

However, there is a challenging gap between programming AI in the cloud using tools like TensorFlow and programming at the edge, where resources are extremely constrained. This project will develop methods and tools to migrate AI algorithms from the cloud to a distributed network of AI-enabled edge devices. The methods will be demonstrated on several use cases from the industrial partners.
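One common first step when migrating a cloud-trained model toward a constrained edge target is post-training quantization. The sketch below uses plain NumPy and is illustrative only; real toolchains such as TensorFlow Lite add calibration data, per-channel scales, and fused integer kernels.

```python
# Minimal sketch of post-training symmetric int8 quantization: map float32
# weights to 8-bit integers plus a single float scale factor, trading a
# small, bounded precision loss for a 4x smaller memory footprint.

import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus one float scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes, q.nbytes)   # 16384 vs 4096 bytes: 4x smaller
# Rounding error is at most half a quantization step, i.e. bounded by scale.
print(np.abs(w - dequantize(q, scale)).max() <= scale)
```

On a microcontroller, the int8 weights would then feed integer-arithmetic inference kernels, which is where most of the speed and energy savings come from.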

In a traditional, centralized AI architecture, all the technology blocks are combined in the cloud or at a single cluster (edge computing) to enable AI. Data collected by IoT devices, i.e., individual edge devices, is sent towards the cloud. To limit the amount of data that needs to be sent, data aggregation may be performed along the way to the cloud. The AI stack, i.e., the training and the later inference, is run in the cloud, and results for actions are transferred back to the relevant edge devices. While the cloud provides complex AI algorithms that can analyse huge datasets quickly and efficiently, it cannot deliver true real-time response, and data security and privacy may be challenged.

When it comes to Embedded AI, where AI algorithms are moved to the edge, the foundation of the AI stack must be transformed: algorithmic agility and distributed processing will enable AI to perceive and learn in real time by mirroring critical AI functions across multiple disparate systems, platforms, sensors, and devices operating at the edge. We propose to address these challenges in the following steps, starting with single edge devices.

  1. Tiny inference engines – Algorithmic agility of the inference engines will require new AI algorithms as well as new processing architectures and connectivity. We will explore suitable microcontroller architectures and reconfigurable platform technologies, such as Microchip’s low-power FPGAs, for implementing optimized inference engines. Focus will be on achieving real-time performance and robustness. This will be tested on cases from the industry partners.
  2. µBrains – Extending the edge devices from pure inference engines to also provide local learning. This will allow local devices to improve continuously. We will explore suitable reconfigurable platform technologies with ultra-low power consumption, such as Renesas’ DRPs, which use 1/10 of the power budget of current solutions, and Microchip’s low-power FPGAs for optimizing neural networks. Focus will be on ensuring the performance, scheduling, and resource allocation of the new AI algorithms running on very resource-constrained edge devices.
  3. Collective intelligence – The full potential of Embedded AI will require distributed algorithmic processing of the AI algorithms. This will be based on federated learning and computing (microelectronics) optimized for neural networks, but new models of distributed systems and stochastic analysis are necessary to ensure the performance, prioritization, scheduling, resource allocation, and security of the new AI algorithms, especially with the very dynamic and opportunistic communications associated with IoT.
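The federated-learning idea behind the collective-intelligence step can be sketched as follows. This is a minimal, illustrative example: plain NumPy, with a linear model standing in for a neural network, and three simulated devices; only model parameters, never raw data, leave each device.

```python
# Minimal federated-averaging sketch: each simulated edge device fits a
# local model on its own private data; a server averages the parameters.

import numpy as np

def local_update(w, X, y, lr=0.1, steps=50):
    """A few gradient-descent steps on one device's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Three devices, each holding its own local data set.
devices = []
for _ in range(3):
    X = rng.standard_normal((40, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(40)
    devices.append((X, y))

w_global = np.zeros(2)
for _round in range(20):
    local = [local_update(w_global, X, y) for X, y in devices]
    w_global = np.mean(local, axis=0)   # server averages parameters only

print(np.round(w_global, 2))  # close to [ 2. -1.]
```

Real deployments add the concerns the list above names: device scheduling, prioritization under opportunistic connectivity, and securing the parameter exchange itself.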

The expected outcome is an AI framework which supports autonomous discovery and processing of disparate data from a distributed collection of AI-enabled edge-devices. All three presented steps will be tested on cases from the industry partners.

 

Deep neural networks have changed the capabilities of machine learning, reaching higher accuracy than hitherto possible. For learning from unstructured data, they are now the de facto standard. These networks often include millions of parameters and may take months to train on dedicated hardware, i.e., GPUs in the cloud. This has resulted in a high demand for data scientists with AI skills and, hence, an increased demand for educating such profiles. However, the increased use of IoT to collect data at the edge has created a wish to train and execute deep neural networks at the edge rather than transferring all data to the cloud for processing. As IoT end- and edge-devices are characterized by low memory, low processing power, and low energy (powered by battery or energy harvesting), training or executing deep neural networks on them is generally considered infeasible. However, dedicated accelerators, novel hardware circuits and architectures, or executing smaller discretized networks may provide feasible solutions for the edge.
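Whether a given network fits such a device can be estimated with back-of-envelope arithmetic: weight memory is roughly the parameter count times the bytes per weight, so discretizing from 32-bit floats to 8-bit integers cuts the footprint by a factor of four. The layer sizes below are hypothetical.

```python
# Back-of-envelope feasibility check for a small fully connected network
# on a microcontroller (illustrative numbers, not from the project).

def dense_params(layers):
    """Parameter count of a dense net: weights (i*o) plus biases (o) per layer."""
    return sum(i * o + o for i, o in zip(layers, layers[1:]))

layers = [64, 32, 16, 10]        # hypothetical sensor-classification net
p = dense_params(layers)
print(p)                          # 2778 parameters
print(p * 4, p * 1)               # float32: 11112 bytes; int8: 2778 bytes
```

Against a microcontroller with tens of kilobytes of RAM, the int8 variant leaves far more headroom for activations and buffers, which is precisely why discretized networks make edge execution plausible.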

The academic partners DTU, KU, AU, and CBS will not only create scientific value through the results disseminated by the four PhD students, but will also create important knowledge, experience, and real-life cases to be included in education, and hence create capacity building in the important emerging field of embedded AI, or AIoT.

The industry partners Indesmatech, Grundfos, MAN ES, and VELUX are all strong examples of companies that will benefit from mastering embedded AI, i.e., being able to select the right tools and execution platforms for implementing and deploying embedded AI in their products.

  • Indesmatech expects to gain leading-edge knowledge about how AI can be implemented on various chip processing platforms, with a focus on finding the best and most efficient path to building cost- and performance-effective industrial solutions across industries, as their customers come from most industries.
  • Grundfos will create value in applications such as condition monitoring of pumps and pump systems, predictive maintenance, heat-energy optimization in buildings, and waste-water treatment, where very complex tasks can be optimized significantly by AI. The possibility of deploying embedded AI directly on low-cost, low-power end and edge devices instead of large cloud platforms will give Grundfos a significant competitive advantage by reducing total energy consumption, data traffic, and product cost, while at the same time increasing real-time application performance and securing customers’ data privacy.
  • MAN ES will create value from using embedded AI to predict problems faster than today. Features such as condition monitoring and dynamic engine optimization will give MAN ES competitive advantages, and the exploitation of embedded AI together with the large amounts of data collected in the cloud will in the long run create market advantages for MAN ES.
  • VELUX will increase its competitive edge by attaining a better understanding of how to implement the right level of embedded AI in its products. The design of new digital smart products with embedded intelligence will create value by driving the digital product transformation of VELUX.

January 1, 2022 – December 31, 2024 – 3 years.

Total budget DKK 22.5 million / DIREC investment DKK 6.54 million.

Participants

Project Manager

Jan Madsen

Professor

Technical University of Denmark
DTU Compute

E: jama@dtu.dk

Peter Gorm Larsen

Professor

Aarhus University
Dept. of Electrical and Computer Engineering

Mads Nielsen

Professor

University of Copenhagen
Department of Computer Science

Jan Damsgaard

Professor

Copenhagen Business School
Department of Digitalization

Thorkild Kvisgaard

Head of Electronics

Grundfos

Henrik R. Olesen

Senior Manager

MAN Energy Solutions

Thomas S. Toftegaard

Director, Smart Product Technology

Velux

Rune Domsten

Co-founder & CEO

Indesmatech

Partners