
DIREC project

Verifiable and Safe AI for Autonomous Systems

Summary

The rapidly growing application of machine learning techniques in cyber-physical systems leads to better solutions and products in terms of adaptability, performance, efficiency, functionality and usability.

However, cyber-physical systems are often safety-critical, e.g., self-driving cars or medical devices, and verification against potentially fatal accidents is therefore of key importance.

Together with industrial partners, this project aims to develop methods and tools that will enable industry to automatically synthesize correct-by-construction and near-optimal controllers for safety-critical systems within a variety of domains.

Project period: 2021-2024
Budget: DKK 9.12 million

AI technologies may present new safety risks for users when they are embedded in products and services. For example, as a result of a flaw in its object recognition technology, an autonomous car may wrongly identify an object on the road and cause an accident involving injuries and material damage. This in turn makes it difficult to place liability in case of malfunction:
Under the Product Liability Directive, a manufacturer is liable for damage caused by a defective product. However, in the case of an AI-based system such as an autonomous car, it may be difficult to prove the existence of a defect in the product, the damage that has occurred, and the causal link between the two.

What is needed are new methods in which machine learning is integrated with model-based techniques, so that machine-learned solutions, which typically optimise expected performance, are guaranteed not to violate crucial safety constraints and can be certified not to do so. Relevant domains include all types of autonomous systems in which machine learning is applied to control safety-critical systems.
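To make this integration concrete, here is a minimal, purely illustrative sketch of a runtime "shield": the learned controller proposes an action, and a model-based safety check overrides it with a safe fallback whenever the proposal would violate a constraint. The state encoding, the toy braking-distance rule and all names below are assumptions made for this sketch, not the project's actual methods or tools.

```python
# Illustrative runtime shield: a learned policy proposes actions; a
# model-based check blocks unsafe ones. All details here are toy assumptions.

def learned_policy(state):
    """Stand-in for a machine-learned controller (e.g., a neural network)."""
    return "accelerate"

def is_safe(state, action):
    """Model-based safety check: a toy rule forbidding acceleration without
    a braking-distance margin. In practice this would be derived from a
    verified model of the system."""
    distance, speed = state
    if action == "accelerate":
        return distance > 2.0 * speed
    return True

def shielded_policy(state, fallback="brake"):
    """Use the learned action when it is safe; otherwise fall back."""
    action = learned_policy(state)
    return action if is_safe(state, action) else fallback

print(shielded_policy((10.0, 3.0)))  # margin respected -> "accelerate"
print(shielded_policy((4.0, 3.0)))   # too close -> overridden to "brake"
```

The design point is the separation of concerns: the performance-optimising component stays learned, while the safety-enforcing component stays small and model-based, so the latter can be verified independently.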

The research aim of the project is to develop methods and tools that will enable industry to automatically synthesize correct-by-construction and near-optimal controllers for safety-critical systems within a variety of domains. The project involves a number of scientific challenges, including the representation of strategies as neural networks (for compactness) or decision trees (for explainability); a sketch of the latter follows below. The development of strategy learning methods with statistical guarantees is also crucial.
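To illustrate the explainability side of this trade-off, the following sketch distils a strategy, given as state-action samples, into a small decision tree using scikit-learn. The toy strategy, feature names and action labels are assumptions made for the example; they are not an output of the project's tool chain.

```python
# Illustrative strategy distillation: fit a shallow decision tree to
# state-action samples and print it as human-readable rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical samples: states (distance, speed) labelled with the action the
# strategy chose (1 = accelerate, 0 = brake). Here the "strategy" is a toy rule.
rng = np.random.default_rng(0)
states = rng.uniform(0.0, 20.0, size=(500, 2))
actions = (states[:, 0] > 2.0 * states[:, 1]).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(states, actions)
print(export_text(tree, feature_names=["distance", "speed"]))
```

The printed tree consists of rules of the form "if distance <= ... then class 0", which is exactly what makes the representation explainable, at the possible cost of the compactness a neural network offers.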

A key challenge is understanding and specifying what safety and risk mean for model-free controllers based on neural networks. Once formal specifications are in place, we aim to combine existing knowledge about property-based testing, Bayesian probabilistic programming, and model checking.
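Of these ingredients, property-based testing is the easiest to sketch: a generator produces many states, and for each one the test asserts that the controller's action satisfies the formalised safety property. The sketch below uses the Hypothesis library for Python; the controller and the safety property are toy assumptions consistent with the shield sketch above.

```python
# Illustrative property-based test with Hypothesis: for every generated state,
# the controller's action must satisfy the safety property.
from hypothesis import given, strategies as st

def is_safe(distance, speed, action):
    # Toy safety property: never accelerate without a braking-distance margin.
    return action != "accelerate" or distance > 2.0 * speed

def controller(distance, speed):
    # Stand-in for a learned controller wrapped in a shield (see above).
    return "accelerate" if distance > 2.0 * speed else "brake"

@given(st.floats(min_value=0.0, max_value=100.0, allow_nan=False),
       st.floats(min_value=0.0, max_value=30.0, allow_nan=False))
def test_controller_is_safe(distance, speed):
    assert is_safe(distance, speed, controller(distance, speed))

if __name__ == "__main__":
    test_controller_is_safe()  # Hypothesis runs the property over many inputs
```

Such a test does not prove the property, which is where model checking comes in, but it is cheap to run and effective at finding counterexamples early.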

Value creation
The scientific value of the project consists of new fundamental theories, algorithmic methods and tools, together with an evaluation of their performance and adequacy in industrial settings. These are important contributions bridging the core research themes of AI and Verification in DIREC.

For capacity building, the value of the project lies in educating PhD students and postdocs in close collaboration with industry. The profile of these PhD students will meet a demand in the companies for staff with competences in machine learning, data science and traditional software engineering. In addition, the project will offer a number of affiliated student projects at master's level.


Impact

For the growing number of companies relying on AI in their products, the ability to produce safety certification using approved processes and tools will be vital in order to bring safety-critical applications to the market.

At the societal level, the trustworthiness of AI-based systems is of prime concern within the EU. Here, methods and tools for providing safety guarantees can play a crucial role.

Participants

Project Manager

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

E: kgl@cs.aau.dk

Thomas Dyhre Nielsen

Professor

Aalborg University
Department of Computer Science

Andrzej Wasowski

Professor

IT University of Copenhagen
Department of Computer Science

Martijn Goorden

Postdoc

Aalborg University
Department of Computer Science

Esther Hahyeon Kim

PhD Student

Aalborg University
Department of Computer Science

Mohsen Ghaffari

PhD Student

IT University of Copenhagen
Department of Computer Science

Martin Zimmermann

Associate Professor

Aalborg University
Department of Computer Science

Christian Schilling

Assistant Professor

Aalborg University
Department of Computer Science

Thomas Asger Hansen

Head of Analytics and AI

Grundfos

Daniel Lux

CEO

Seluxit

Karsten Lumbye

Chief Innovation Officer

Aarhus Vand

Kristoffer Tønder Nielsen

Project Manager

Aarhus Vand

Malte Skovby Ahm

Research and Business Lead

Aarhus Vand

Mathias Schandorff Arberg

Engineer

Aarhus Vand

Gitte Rosenkranz

Project Manager

HOFOR

Susanne Skov-Mikkelsen

Chief Consultant

HOFOR

Lone Bo Jørgensen

Senior Specialist

HOFOR
