Zimmermann is part of the DIREC project Verifiable and Safe AI for Autonomous Systems. The aim of the project is to develop methods and tools for safety-critical systems within a variety of domains. Here, he works on understanding the foundations of correct and secure systems.
Can you tell about your research area?
Software and embedded systems are everywhere in our daily lives, from medical devices to aircraft and the airbags in our cars. These software systems are often very complex, and it is challenging to develop correct systems. Therefore, we need verification software that can check such systems for errors.
The news is full of stories of potential vulnerabilities in software and embedded systems. Some of these vulnerabilities have been present for years and are very hard to find. They may never surface in daily use – only when someone tries to exploit the system.
It is even more pronounced when you look at distributed systems made up of several components interacting with each other. Take a cinema's seat-reservation website: you click on the seat you want to book while others do the same at the same time. The system must be able to deal with many concurrent requests. Verification automates this reasoning and proves automatically that the system is correct and safe.
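The kind of concurrency bug hinted at here can be sketched in a few lines. The class and names below are my own toy illustration, not a system from the project: two clients race to book the same seat, and only a lock makes the check-then-reserve step atomic, so exactly one booking can succeed.

```python
import threading

class SeatReservation:
    """Toy seat-reservation service (hypothetical, for illustration)."""

    def __init__(self, seats):
        self._free = set(seats)
        self._lock = threading.Lock()

    def book(self, seat):
        # The lock makes "check if free, then reserve" one atomic step.
        # Without it, two concurrent requests could both see the seat
        # as free and both believe they booked it.
        with self._lock:
            if seat in self._free:
                self._free.remove(seat)
                return True
            return False

cinema = SeatReservation({"A1", "A2"})
results = []
threads = [threading.Thread(target=lambda: results.append(cinema.book("A1")))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results.count(True) == 1  # exactly one of the two requests wins
```

Verification tools aim to prove properties like "no seat is ever double-booked" for all possible interleavings, rather than relying on tests that happen to hit the bad schedule.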
How can we make these systems more secure?
Personally, I am interested in viewing this as a kind of game. I want to design a system that lives in an environment, so I understand this as a game between the system and the environment: the system wants to satisfy a certain property, and the environment wants to break the system. With that game view, you can obtain very strong guarantees.
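One standard instance of this game view is a safety game: the system wins if play stays inside a set of safe states forever. The arena and function below are a minimal sketch of my own (a simple fixed-point computation, not code from the project): states are owned either by the system, which needs some safe successor, or by the environment, where every successor must be safe.

```python
# Minimal safety-game solver (illustrative sketch, assumed toy arena).
# A state survives if the player who owns it can keep play in the
# winning set: the system needs SOME safe successor, the adversarial
# environment forces us to require ALL successors to be safe.
def winning_region(system_owned, edges, safe):
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            succs = [t for (u, t) in edges if u == s]
            if s in system_owned:
                ok = any(t in win for t in succs)
            else:
                ok = all(t in win for t in succs)
            if not ok:
                win.remove(s)   # safety cannot be enforced from s
                changed = True
    return win

# Toy arena: from s0 the system may move to s1 (safe loop) or s2 (bad sink).
edges = {("s0", "s1"), ("s0", "s2"), ("s1", "s1"), ("s2", "s2")}
print(winning_region(system_owned={"s0", "s1"}, edges=edges,
                     safe={"s0", "s1"}))  # {'s0', 's1'}
```

From every state in the computed region, the system has a strategy that keeps the environment from ever forcing play into an unsafe state, which is exactly the kind of strong guarantee the game view provides.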
It’s very hard to get complex systems correct. And if you have a safety-critical system you need those guarantees to be obtained by verification software. If you employ software that controls an airbag, then you want to be sure that it works correctly. It’s easy to miss errors – so you cannot rely on humans to check the code.
What is the potential of verification?
Verification is a very challenging task. It is challenging for a human to argue that a system is correct, and it is also hard for a computer, so unfortunately, it is not applicable universally. Verification is used for systems that are safety-critical, but even here there is a tradeoff between verification cost and development cost.
One of our goals is to develop techniques that are easy to use in practice. We work on the foundations of verification languages and try to understand how far we can push their expressiveness before it becomes infeasible to verify something. It can take hours or days to verify something, so it is a computationally expensive task. We try to understand what is possible and try to find problems and application areas where you can solve this task faster.
Another important thing is that we need precise specification languages for verification. You cannot use natural language. The verification algorithm needs a precise specification with precise semantics, so we are developing different logics to see if they can be used by engineers to actually write specifications. If it is too complicated for the practitioner, e.g., the engineer, it will not be used. You must find the sweet spot between expressiveness and usability.
Did you know Aalborg University before you were employed?
I have had a connection to Aalborg since my PhD where I worked on a European project with partners from all over Europe including the DEIS group in Aalborg. I was in Aalborg a few times during my PhD and knew people here. Aalborg is central in Europe when it comes to verification and design of systems. There are many collaborators and there is a good connection to the industry compared to other places. It is a very good location.
About Martin Zimmermann
- PhD from RWTH Aachen University.
- Postdoc at the University of Warsaw and Saarland University in Saarbrücken.
- Lecturer at the University of Liverpool.
- Associate Professor at Aalborg University.