The rapidly growing use of machine-learning techniques in cyber-physical systems leads to better solutions and products in terms of adaptability, performance, efficiency, functionality, and usability.
However, cyber-physical systems are often safety-critical, e.g. self-driving cars or medical devices, and the need for verification against potentially fatal accidents is of paramount importance.
Together with the participating companies, this project aims to develop methods and tools that will enable industry to automatically synthesise correct-by-construction and near-optimal controllers for safety-critical systems within a variety of domains.
AI technologies may present new safety risks for users when they are embedded in products and services. For example, as a result of a flaw in its object recognition technology, an autonomous car can wrongly identify an object on the road and cause an accident involving injuries and material damage. This in turn makes it difficult to assign liability in case of malfunction: under the Product Liability Directive, a manufacturer is liable for damage caused by a defective product. However, in the case of an AI-based system such as an autonomous car, it may be difficult to prove the existence of a defect in the product, the damage that has occurred, and the causal link between the two.
What is needed are new methods in which machine learning is integrated with model-based techniques, such that machine-learned solutions, which typically optimise expected performance, are guaranteed not to violate crucial safety constraints and can be certified not to do so. Relevant domains include all types of autonomous systems where machine learning is applied to control safety-critical systems.
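One common way to integrate learned controllers with model-based safety guarantees is a "shield" that intercepts the learned controller's action and overrides it whenever the model predicts a safety violation. The sketch below illustrates the idea on a toy water-tank system; the dynamics, bounds, and policy are illustrative assumptions, not part of the project's actual tooling.

```python
import random

# Toy illustration (all names and dynamics are assumptions): a shield wraps
# a possibly-unsafe learned controller and overrides any action that the
# model predicts would leave the safe region.

LEVEL_MIN, LEVEL_MAX = 0.0, 10.0   # safe water-level bounds

def next_level(level, pump_on):
    """Simple tank model: inflow when the pump is on, constant outflow."""
    return level + (1.5 if pump_on else 0.0) - 0.8

def learned_policy(level):
    """Stand-in for a machine-learned controller (may propose unsafe actions)."""
    return random.random() < 0.5   # random pump decision

def shielded_policy(level):
    """Use the learned action unless the model predicts it is unsafe."""
    action = learned_policy(level)
    if not (LEVEL_MIN <= next_level(level, action) <= LEVEL_MAX):
        action = not action        # fall back to the safe alternative
    return action

# The shield keeps every trajectory inside the safe region, regardless
# of what the learned policy proposes.
level = 5.0
for _ in range(100):
    level = next_level(level, shielded_policy(level))
    assert LEVEL_MIN <= level <= LEVEL_MAX
```

The key design point is that safety rests only on the model-based check, so the learned component can be optimised freely for performance without being trusted for safety.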
The research aim of the project is to develop methods and tools that will enable industry to automatically synthesise correct-by-construction and near-optimal controllers for safety-critical systems within a variety of domains. The project involves a number of scientific challenges, including the representation of strategies as neural networks (for compactness) or decision trees (for explainability). The development of strategy-learning methods with statistical guarantees is also crucial.
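To illustrate the decision-tree representation, the following toy sketch distils a synthesised strategy (here a hand-written state-to-action rule standing in for a synthesised one) into a single explainable threshold split. Everything here, including the strategy and state space, is an illustrative assumption; real tools would build deeper trees over larger state spaces.

```python
# Toy sketch (all names are assumptions): distil a synthesised strategy
# into a one-level decision tree, trading some precision for explainability.

def strategy(speed, distance):
    """Stand-in for a synthesised strategy: brake when too close for the speed."""
    return "brake" if distance < 2 * speed else "cruise"

# Sample the strategy on a grid of states.
samples = [((s, d), strategy(s, d)) for s in range(1, 6) for d in range(0, 13)]

def majority(actions):
    return max(set(actions), key=actions.count) if actions else None

def learn_stump(samples, feature_names=("speed", "distance")):
    """Greedy one-level tree: the single threshold split with fewest errors."""
    best = None
    for f in range(len(feature_names)):
        for t in sorted({x[f] for x, _ in samples}):
            left = [a for x, a in samples if x[f] < t]
            right = [a for x, a in samples if x[f] >= t]
            errs = sum(a != majority(left) for a in left) \
                 + sum(a != majority(right) for a in right)
            if best is None or errs < best[0]:
                best = (errs, feature_names[f], t, majority(left), majority(right))
    return best

errs, feat, thresh, lo, hi = learn_stump(samples)
print(f"if {feat} < {thresh}: {lo} else: {hi}  ({errs} errors on the grid)")
```

The resulting one-line rule is readable by a domain expert, which is exactly the explainability argument for decision-tree strategy representations; deeper trees reduce the residual error at the cost of readability.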
A key challenge is understanding and specifying what safety and risk mean for model-free controllers based on neural networks. Once formal specifications are in place, we aim to combine existing knowledge about property-based testing, Bayesian probabilistic programming, and model checking.
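Property-based testing of a controller can be sketched in a few lines: generate many random initial states, run the closed-loop system for a bounded horizon, and check that a formal safety property holds along every trajectory. The sketch below uses only the standard library (real tools such as Hypothesis offer richer generators and shrinking); the controller, dynamics, and safe region are toy assumptions.

```python
import random

# Illustrative property-based test (toy system, all names are assumptions):
# randomly sample safe initial states and check that the controller never
# drives the system out of its safe region within a bounded horizon.

def controller(velocity):
    """Toy cruise controller: accelerate below target speed, brake above."""
    return 1.0 if velocity < 20.0 else -1.0

def step(velocity):
    return velocity + controller(velocity)

def safe(velocity):
    return 0.0 <= velocity <= 25.0

def check_safety_property(trials=1000, horizon=50, seed=0):
    """Return (True, None) if no violation is found, else (False, counterexample)."""
    rng = random.Random(seed)
    for _ in range(trials):
        v = rng.uniform(0.0, 25.0)   # random safe initial state
        for _ in range(horizon):
            v = step(v)
            if not safe(v):
                return False, v       # counterexample found
    return True, None

ok, counterexample = check_safety_property()
assert ok, f"safety violated at velocity {counterexample}"
```

Unlike model checking, such testing gives only statistical rather than exhaustive evidence, which is why the project combines it with model checking and probabilistic reasoning to obtain quantified guarantees.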
Value creation
The scientific value of the project lies in new fundamental theories, algorithmic methods, and tools, together with the evaluation of their performance and adequacy in industrial settings. These are important contributions bridging the core research themes on AI and Verification in DIREC.
For capacity building, the value of the project is to educate PhD students and postdocs in close collaboration with industry. The profile of these PhD students will meet a demand in the companies for staff with competences in machine learning, data science, and traditional software engineering. In addition, the project will offer a number of affiliated student projects at master's level.
For the growing number of companies relying on AI in their products, the ability to produce safety certification using approved processes and tools will be vital in order to bring safety-critical applications to the market. At the societal level, the trustworthiness of AI-based systems is of prime concern within the EU. Here, methods and tools for providing safety guarantees can play a crucial role.
Aalborg University, Department of Computer Science
IT University of Copenhagen, Department of Computer Science
Grundfos
Seluxit
Aarhus Vand
HOFOR