Project type: Bridge Project

Verifiable and Safe AI for Autonomous Systems

The rapidly growing application of machine learning techniques in Cyber-Physical Systems leads to better solutions and products in terms of adaptability, performance, efficiency, functionality and usability. However, Cyber-Physical Systems are often safety-critical (e.g., self-driving cars or medical devices), and the resulting need for verification against potentially fatal accidents is self-evident and of key importance. Most recently, the EU White Paper “On Artificial Intelligence – A European approach to excellence and trust” (February 2020) stipulates the safety risks that come with the use of AI:
 
AI technologies may present new safety risks for users when they are embedded in products and services. For example, as a result of a flaw in the object recognition technology, an autonomous car can wrongly identify an object on the road and cause an accident involving injuries and material damage. This in turn makes it difficult to assign liability in case of malfunctioning:
Under the Product Liability Directive, a manufacturer is liable for damage caused by a defective product. However, in the case of an AI based system such as autonomous cars, it may be difficult to prove that there is a defect in the product, the damage that has occurred and the causal link between the two.
 
What is needed are new methods in which machine learning is integrated with model-based techniques, such that machine-learned solutions, which typically optimise expected performance, are ensured not to violate crucial safety constraints and can be certified not to do so. Relevant domains include all types of autonomous systems in which machine learning is applied to control safety-critical systems.

The research aim of the project is to develop methods and tools that will enable industry to automatically synthesise correct-by-construction and near-optimal controllers for safety-critical systems within a variety of domains. The project involves a number of scientific challenges, including the representation of strategies, e.g., as neural networks (for compactness) or as decision trees (for explainability). The development of strategy-learning methods with statistical guarantees is also crucial.
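To illustrate the correct-by-construction idea, the following minimal sketch (a toy 1-D system with hypothetical names, not the project's actual tooling) wraps a performance-optimising learned policy in a "shield" that clamps any proposed action whose successor state would violate a safety constraint:

```python
# Minimal sketch of "shielding" (all names illustrative): a model-based
# safety layer overrides a learned controller whenever its proposed action
# would violate a safety constraint. Toy system: a 1-D vehicle whose speed
# must never exceed a fixed limit.

SPEED_LIMIT = 10.0  # safety constraint: speed must stay <= 10


def learned_controller(speed: float) -> float:
    """Stand-in for a neural-network policy optimising performance:
    it always proposes an aggressive acceleration."""
    return 3.0


def shield(speed: float, action: float) -> float:
    """Correct-by-construction layer: clamp the action so that the
    successor state provably satisfies the safety constraint."""
    max_safe = SPEED_LIMIT - speed  # largest acceleration keeping speed <= limit
    return min(action, max_safe)


def step(speed: float) -> float:
    """One closed-loop step: learned proposal, then shield, then dynamics."""
    return speed + shield(speed, learned_controller(speed))


speed = 0.0
for _ in range(20):
    speed = step(speed)
assert speed <= SPEED_LIMIT  # safety holds along the whole trajectory
```

In the project's setting, the shield would be synthesised automatically from a formal model of the system rather than hand-written as in this sketch.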

A key challenge is understanding and specifying what safety and risk mean for model-free controllers based on neural networks. Once formal specifications are in place, we aim to combine existing knowledge about property-based testing, Bayesian probabilistic programming, and model checking.
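To illustrate the property-based testing ingredient mentioned above, here is a minimal hand-rolled sketch (hypothetical controller and property; no external testing library assumed) that samples random initial states and checks a formal safety property along each resulting trajectory. Passing trials give a statistical, not an absolute, guarantee:

```python
# Hand-rolled property-based testing sketch (illustrative names): sample
# random initial states, simulate the controller, and check that a formal
# safety property holds on every sampled execution.
import random

SPEED_LIMIT = 10.0


def safe_controller(speed: float) -> float:
    """Hypothetical controller under test: accelerate, but never past the limit."""
    return min(3.0, SPEED_LIMIT - speed)


def safety_property(initial_speed: float, horizon: int = 50) -> bool:
    """Property: from any initial speed in [0, SPEED_LIMIT], the speed
    never exceeds the limit at any point along the trajectory."""
    speed = initial_speed
    for _ in range(horizon):
        speed += safe_controller(speed)
        if speed > SPEED_LIMIT:
            return False  # counterexample found
    return True


random.seed(0)  # reproducible sampling
trials = [random.uniform(0.0, SPEED_LIMIT) for _ in range(1000)]
assert all(safety_property(s) for s in trials)
```

Dedicated property-based testing libraries additionally shrink failing inputs to minimal counterexamples; this sketch only shows the core sample-and-check loop.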

The scientific value of the project consists of new fundamental theories, algorithmic methods and tools, together with an evaluation of their performance and adequacy in industrial settings. These are important contributions bridging the core research themes of AI and Verification in DIREC.

For capacity building, the value of the project lies in educating PhD students and postdocs in close collaboration with industry. The profile of these PhD students will meet a demand in the companies for staff with competences in machine learning, data science, and traditional software engineering. In addition, the project will offer a number of affiliated master's-level student projects.

For the growing number of companies relying on AI in their products, the ability to produce safety certification using approved processes and tools will be vital in order to bring safety-critical applications to the market. At the societal level, the trustworthiness of AI-based systems is of prime concern within the EU. Here, methods and tools for providing safety guarantees can play a crucial role.

The project involves the research themes of Verification (WS7), AI (WS2), and CyPhys (WS6).

March 1, 2021 – March 1, 2024 (3 years)

Total budget DKK 9.12 million / DIREC investment DKK 3.73 million

Participants

Project Manager

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

E: kgl@cs.aau.dk

Thomas Dyhre Nielsen

Professor

Aalborg University
Department of Computer Science

Andrzej Wasowski

Professor

IT University of Copenhagen
Department of Computer Science

Martijn Goorden

PostDoc

Aalborg University
Department of Computer Science

Esther Hahyeon Kim

PhD Fellow

Aalborg University
Department of Computer Science

Mohsen Ghaffari

PhD Fellow

IT University of Copenhagen
Department of Computer Science

Thomas Asger Hansen

Head of Analytics and AI

Grundfos

Ole Fritz Adeler

Constituted CTO

HOFOR

Brian Boyles

Marketing and Pre-Sales

Seluxit

Malte Skovby Ahm

Research and Business Lead

Aarhus Vand

Thor Danielsen

Project Manager

HOFOR A/S

Daniel Lux

CEO

Seluxit

Karsten Lumbye

Chief Innovations Officer

Aarhus Vand

Kristoffer Tønder Nielsen

Project Manager

Aarhus Vand

Christian Schilling

Assistant Professor

Aalborg University
Department of Computer Science

Martin Zimmermann

Associate Professor

Aalborg University
Department of Computer Science

Partners