New technologies can help banks, insurance companies and authorities fight fraud

26 April 2022

Blockchain-based technologies can be used for more than cryptocurrencies. The technology eliminates the need for an intermediary in transactions between two parties and can ensure that data cannot be modified.

Combining this feature with cryptographic techniques will enable banks and authorities to share sensitive personal data securely and fight fraud. This is exactly the purpose of a new project involving researchers from Aarhus University, the IT University of Copenhagen and the Alexandra Institute, supported by DIREC – Digital Research Centre Denmark.

Read more (in Danish)


Meet Miao Zhang, who works on the black-box problem with automated deep learning

8 April 2022

The 31-year-old Miao Zhang from China focuses on areas such as automated machine learning and deep learning. These areas are still in their early stages, but automated deep learning has great potential, as a system builds itself without human intervention.

Miao works as Assistant Professor at the Department of Computer Science at Aalborg University. He is also part of the DIREC workstream Advanced and Efficient Big Data Management and Analysis. The project focuses on how we can develop new efficient prototypes that can enable the use of big data in industry. Miao focuses especially on building efficient and explainable prototypes for different tasks and data in an automated manner.

Can you tell us about your background and why you settled down in Denmark as a computer scientist?
I am interested in machine learning, automated deep learning and explainable AI. I hope that I can introduce automated deep learning and explainable AI to the Danish data science community, since research about this topic is rare.

Besides that, I chose to come to Aalborg because it is a young and very active university, which provides a lot of opportunities for young researchers. I have several friends who are working here, and they recommended that I join their group, the Center for Data-Intensive Systems (DAISY), which has an international reputation. I believe I can learn a lot here.

I think the working environment in Denmark and Aalborg is very good. We have a lot of flexibility, so I can focus on my research. In addition, Aalborg is an environmentally friendly city, and I really enjoy life here.

Can you tell us about your research area?
I have broad research interests in machine learning and artificial intelligence – especially automated deep learning and explainable AI. I am interested in the automatic development of efficient, scalable and robust algorithms for machine learning, data mining, data management and deep learning applications with formal theoretical guarantees and explanations. I expect to work on these problems for the foreseeable future.

What are the scientific challenges and perspectives of your project?
Although deep learning techniques have been applied in many areas, such as computer vision, face recognition, medical imaging, natural language processing, data mining and data management, designing deep learning systems is time-consuming – and explaining why a developed deep learning system works is still a black-box problem.

Automated deep learning is the process of building deep learning systems for different problems without human intervention. Explainable AI seeks to explain why the developed system works – and it can also assist the design of deep learning systems. Both fields are in their early stages, and we still need to define research problems, improve efficiency, and explain why an automatically designed system works.
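
To make the idea concrete, here is a minimal, hypothetical sketch of the simplest form of automated architecture search – random search over a tiny configuration space. Everything here (the search space, the scoring) is an illustrative assumption, not a system from Miao Zhang's research; in particular, `evaluate` is a mock stand-in for actually training and validating each candidate network.

```python
import random

# Toy search space for a feed-forward network: depth, width, activation.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Mock stand-in: a real AutoML system would train the candidate
    network here and return its validation accuracy."""
    return arch["width"] / 128 - abs(arch["depth"] - 4) / 8

def random_search(trials=20, seed=0):
    """Keep the best-scoring architecture seen over `trials` samples."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

More sophisticated approaches (evolutionary search, gradient-based methods) replace the random sampling, but the loop is the same: propose a candidate, estimate its quality, keep the best.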

How can your research make a difference for companies and communities?
Automated deep learning aims to build a better deep learning system in a data-driven automated manner, so that most practitioners in deep learning can build a high-performance machine learning model without being an expert in the field of deep learning.

Automated deep learning can provide end-to-end deep learning solutions, and these solutions are often better than hand-designed systems. Such automated systems lower the barrier to entry and make it easy for everyone to use deep learning to solve their own problems.

About Miao Zhang
  • Master's degree from the University of Science and Technology Beijing
  • PhD in information technology from the University of Technology Sydney, Australia
  • Postdoc in the Machine Learning Group at Monash University, Australia
  • Assistant Professor at Aalborg University.

Read more


Meet Martin Zimmermann whose research focus is on verification tools

31 March 2022

39-year-old Martin Zimmermann from Germany works with correct and secure systems. Since the summer of 2021, he has worked as Associate Professor at the Distributed, Embedded and Intelligent Systems research group (DEIS) at the Department of Computer Science, Aalborg University.

Zimmermann is part of the DIREC project Verifiable and Safe AI for Autonomous Systems. The aim of the project is to develop methods and tools for safety critical systems within a variety of domains. Here, he works on understanding the foundations of correct and secure systems.

Can you tell about your research area?
Software and embedded systems are everywhere in our daily lives, from medical devices to aircraft and the airbags in our cars. These software systems are often very complex, and it is challenging to develop them correctly. Therefore, we need verification software that can check such systems for errors.

The news is full of stories of potential vulnerabilities in software and embedded systems. Some of these vulnerabilities have been there for several years and are very hard to find. They might not be seen in daily use – only when you try to exploit a system.

It is even more pronounced when you look at distributed systems made up of several components interacting with each other. Take a cinema's seat reservation website: you click on the seat you want to book while others do the same at the same time. The system must be able to deal with many concurrent requests. Verification tries to automate the reasoning and automatically prove that the system is correct and safe.

How can we make these systems more secure?
Personally, I am interested in viewing this as a kind of game. I want to design a system that lives in an environment, so I understand this as a game between the system and the environment: the system wants to satisfy a certain property, and the environment wants to break the system. That game view yields very strong guarantees.

It’s very hard to get complex systems correct. And if you have a safety-critical system you need those guarantees to be obtained by verification software. If you employ software that controls an airbag, then you want to be sure that it works correctly. It’s easy to miss errors – so you cannot rely on humans to check the code.

What is the potential of verification?
Verification is a very challenging task. It is challenging for a human to argue that a system is correct, and it is also hard for a computer, so unfortunately, it is not applicable universally. Verification is used for systems that are safety-critical, but even here there is a tradeoff between verification cost and development cost. 

One of our goals is to develop techniques that are easy to use in practice. We work on the foundations of verification languages and try to understand how far we can push their expressiveness before it becomes infeasible to verify something. It can take hours or days to verify something, so it is a computationally expensive task. We try to understand what is possible and try to find problems and application areas where you can solve this task faster.

Another important thing is that we need precise specification languages for verification. You cannot use natural language. The verification algorithm needs a precise specification with precise semantics, so we are developing different logics to see if they can be used by engineers to actually write specifications. If it is too complicated for the practitioner, e.g., the engineer, it will not be used. You must find the sweet spot between expressiveness and usability.
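
As a toy illustration of what a precise specification buys you, the sketch below (an assumption for illustration, not a tool from the DEIS group) checks a finite event trace against the common response pattern "every request is eventually followed by a grant" – similar in spirit to the LTL formula G(request → F grant), interpreted over finite traces.

```python
def always_eventually(trace, trigger, response):
    """Return True iff every `trigger` event in the finite trace is
    eventually followed by a `response` event."""
    pending = False  # is there a trigger still waiting for a response?
    for event in trace:
        if event == trigger:
            pending = True
        elif event == response:
            pending = False
    return not pending
```

For example, `always_eventually(["req", "grant", "req"], "req", "grant")` returns `False`, because the last request is never granted. A verification tool does this kind of check not on one trace but on all possible behaviors of a model.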

Did you know Aalborg University before you were employed?
I have had a connection to Aalborg since my PhD where I worked on a European project with partners from all over Europe including the DEIS group in Aalborg. I was in Aalborg a few times during my PhD and knew people here. Aalborg is central in Europe when it comes to verification and design of systems. There are many collaborators and there is a good connection to the industry compared to other places. It is a very good location.

About Martin Zimmermann

  • PhD from RWTH Aachen University.
  • Postdoc at the University of Warsaw and Saarland University in Saarbrücken.
  • Lecturer at the University of Liverpool.
  • Associate Professor at Aalborg University.

Read more


International Workshop on Re-Using Robot Data

We will discuss the needs of industry, current solutions and the scientific and technical challenges that are connected to the problem of an efficient re-use of robot data.

  • Do you want to make more out of your robot applications?
  • Do you want to predict failures of your system before they occur?
  • Do you want to use data in your production to improve quality inspection?
  • Are you tired of always starting from scratch when you establish new robot solutions?
  • Do you want to exploit information about trajectories, grippers and cameras from already existing robot set-ups to speed up the development of new robot solutions?
  • Do you want to apply already established control strategies by adapting those to a new problem?

Re-Using robot data is key to these challenges!

However, the potential of re-using robot data has not yet been realized due to scientific, technical and IPR issues. The ReRoPro project (Re-Use of Robotic-data in Production through search, simulation and learning) aims to address this problem.

We invite contributions on tasks such as predictive maintenance, speeding up the establishment of new assembly solutions, and fine-tuning critical components of your production.

Find out how to make efficient use of your robot data!

Speakers (among others) are:

  • Prof. Michael Beetz, University of Bremen
  • Representatives from Novo Nordisk, RockWool, NordBo Robotics and WellTec
  • Prof. Henrik G. Petersen, University of Southern Denmark
  • Prof. Mikkel Baun Kjærgaard, University of Southern Denmark
  • Anders Meister, Chief Commercial Officer, CIM.AS

The workshop is a joint event of DIREC and MADE.


DIREC Annual Report 2021

2 March 2022

We have gathered the first year’s experience in this annual report. Here you can read that DIREC initiated 11 bridge projects for a total of DKK 166 million in which researchers across universities are working together with a large number of companies and public organizations to develop new digital technologies.


Meet Tung Kieu, who has come to Denmark to detect anomalies in data

25 February 2022

Tung Kieu came to Denmark as a PhD student and today he works as Assistant Professor at the Department of Computer Science at Aalborg University. He is associated with DIREC’s workstream Advanced and Efficient Big Data Management and Analysis.

Data is found everywhere in our society today, from our smartphones and GPS navigation in cars to the sensors mounted on wind turbines. By analyzing these huge amounts of data, you can detect anomalies, which can help improve our state of health and optimize companies’ production.

The concept is called anomaly detection and the 31-year-old Tung Kieu has plunged into this topic.

Can you tell us a bit about your background and how you ended up working with big data and anomaly detection?

I have a Master’s degree in computer science from Vietnam National University, and I have been in Denmark for about five years. I came to Denmark because I received a PhD scholarship in the research group Daisy – Center for Data-Intensive Systems at Aalborg University, which is led by Professor Christian S. Jensen. When I finished my PhD, I became a research assistant, and after a few months, I got a position as Assistant Professor.

Aalborg has a great reputation in computer science and engineering and Christian S. Jensen is furthermore recognized for his outstanding research in databases and data mining. In Vietnam, my supervisor was affiliated with Christian S. Jensen and, in this way, I got in touch with him and received the scholarship.

In what way is research in Denmark different to research where you come from?

The environment in Denmark is very good and, furthermore, Aalborg is the happiest city in the EU, according to a study by the European Commission. We have a very good work-life balance, with more focus on efficiency than on working hours. Aalborg University is a young yet very active university. It ranks very high compared to other universities, and our lab, the Center for Data-Intensive Systems (DAISY), ranks second best among all research groups in Europe. It’s great to be part of that.

Can you tell us about your research?

I work with databases and data mining and, more specifically, the area called anomaly detection in data. Due to the extensive digitization of processes, new data are constantly created, and by being able to analyze and utilize data, we can optimize our everyday lives.

However, there are a number of challenges. We produce such large amounts of data all the time that very efficient algorithms are required to detect anomalies. In addition, data quality is a challenge, because much sensor data is subject to noise and potentially contains incorrect values. This means that data must be cleaned to achieve the required quality. But this is also what makes the area interesting.
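
As a minimal illustration of the idea (a textbook baseline, not a method from Tung Kieu's own papers), the sketch below flags points that lie far from the mean of a sensor series; the function name and threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean. A simple baseline; modern time-series
    anomaly detectors also model temporal structure."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # constant series: nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

For example, on the readings `[10.1, 9.9, 10.0, 10.2, 9.8, 42.0]` the function flags index 5. The threshold choice matters: a single large outlier inflates the standard deviation, which is one reason robust and learned detectors outperform this baseline on noisy sensor data.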

What do you expect to get out of your research and how can your research make a difference for companies and communities?

It may be easier to understand if I give a few examples. Anomaly detection can be used in many different places. For example, supermarkets collect data about their customers, and we can analyze these data and get an overview of people’s shopping patterns. The supermarkets can use this to customize their purchases so that they do not end up with a lot of products that they cannot sell.

Another example is data collected from sensors installed on wind turbines. Here we can use the algorithms to detect anomalies and thus predict whether components in a wind turbine are about to fail, which is of great benefit to the wind turbine manufacturers.

Today, smartphones are very common, and people use them to measure their health and how much exercise they get. We can use these data to analyze people’s state of health. When smartphone users record data about their heart rate, we can actually detect early warning signs of a potential heart attack. The possibilities are endless, which makes the research area interesting.

Read more about Tung Kieu


Meet Christian Schilling, who has come to Denmark to build software that can check other software for errors

21 February 2022

Today we have cyber-physical software systems everywhere in our society, from thermostats to intelligent traffic management and water supply systems. It is therefore crucial to develop verification software that can check these programs for errors before they are put into operation.  

Christian Schilling from Germany is interested in formal verification and modeling and has come to Aalborg University to be part of the DEIS group. He is also part of the DIREC project Verifiable and Safe AI for Autonomous Systems and explains how research in cyber-physical systems makes a difference for companies and society.

Can you tell a bit about your background and why you ended up in Denmark as a computer scientist?

I did my PhD at a German university (Freiburg) and was a postdoc at an Austrian research institute (IST Austria). Now I am a tenure-track Assistant Professor at Aalborg University. The DEIS group at Aalborg University has an international reputation and is a great fit for my interests. It is productive to work with people who “speak my language.” At the same time I can develop my own independent research directions.

What are you researching and what do you expect to get out of your research?

Broadly speaking, I am interested in the algorithmic analysis of systems. More precisely, I work on cyber-physical systems, which are systems consisting of a mix of digital (cyber) and analog (physical) components. Nowadays these systems are everywhere, from thermostats to aircraft. I want to answer the fundamental question of safety: Can a system end up in an error? My analysis is based on mathematical models, and I also work on the construction of such models from observational data.
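
The safety question can be made concrete on a finite-state model. The sketch below is a toy stand-in for the symbolic techniques used on real cyber-physical models: it exhaustively explores a small, hypothetical transition system to decide whether an error state is reachable.

```python
from collections import deque

def reachable_error(initial, transitions, error_states):
    """Breadth-first reachability check on a finite-state model:
    True iff some error state is reachable from `initial`."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in error_states:
            return True
        for successor in transitions.get(state, ()):
            if successor not in seen:
                seen.add(successor)
                frontier.append(successor)
    return False

# A hypothetical thermostat model with a faulty edge into "overheat".
THERMOSTAT = {"idle": ["heating"], "heating": ["idle", "overheat"]}
```

Here `reachable_error("idle", THERMOSTAT, {"overheat"})` returns `True`, exposing the unsafe behavior. Real cyber-physical systems have continuous state, so tools instead compute over-approximations of the reachable set rather than enumerating states one by one.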

We look at models of systems and try to find behaviors that might not be what you want. If you don’t find any errors, you get a mathematical proof that your model is correct. Of course, you could still make mistakes with the wiring when you implement the model in a practical system; we cannot cover that. That is why there are still more practical aspects to our work.

What are the scientific challenges and perspectives in your project?

One of the grand challenges is to find approaches that scale to industrial systems, which are often large and complex. In full generality this goal cannot be achieved, so researchers focus on identifying structure in practical systems that still allows us to analyze them. The challenge is to find that structure and develop techniques that exploit it.

Another recent relevant trend is the rise of artificial intelligence and how it can be safely integrated into systems without causing problems. Think about autonomous systems like vacuum cleaners, lawn mowers, and of course self-driving cars in the near future. 

It is certainly a challenge to analyze and verify systems that involve AI, because the way AI is used these days is really more like a black box where nobody understands what happens. It is very difficult to guarantee that a self-driving car will under no circumstances kill a person.

To make this kind of analysis you need a model, and of course you could say that an engineer could build this model, but at a certain size it becomes too complex and very difficult to do. So you want an automatic technique to do that. 

Another challenge is to go from academic models to real world systems, because usually you do some simplifications which you have to take into consideration and solve when you implement the models. 

How can your research make a difference for companies and communities?

Engineers design and build systems. Typically, they first develop a model and analyze that model. My research directly addresses this phase and helps engineers learn about the behavior only given a model. This means that they do not need to build a prototype to understand the system. This saves cost in the design phase, as changing a model is cheap but changing a prototype is expensive. On the level of a model you can actually have mathematical correctness guarantees. This is something you cannot achieve in the real world.

The DEIS group has a lot of industry collaboration, but so far I’ve been working with academic modeling. With these verification models you can make sure that intelligent traffic systems work as they should.


DIREC TALKS: Graph Models for Knowledge, Regulations, Rules and Processes

In this DIREC TALK Thomas Hildebrandt presents how graph models can be used for representation of machine-readable regulations, rules and distributed processes in a flexible and maintainable way supporting both human understanding and automated execution.

Computer Science deals with the theory and methods for designing, analyzing and engineering systems of data and processes used by and impacting people and the society in which they are embedded. An important ingredient is the development of formal languages and structures for describing data and processes that can at the same time capture the complexity of the problem domain and be subject for analysis and execution by computers.

As the technology and use of computers has evolved and changed over time, a plethora of different languages and structures have been introduced.

Mirroring the evolution from centralized computer systems used mainly for business processes and research to ubiquitous, distributed systems handling processes spanning both our professional and private lives, a key challenge has become the design, analysis and management of distributed and frequently changing structures of data and processes and the regulations and rules they are supposed to follow.

Concretely, Thomas Hildebrandt will present the theory and tools of Dynamic Condition Response (DCR) Graphs and give concrete examples of the modelling of legal regulations, rules and processes. The theory and tools are the result of more than 15 years of research and development with industry and public organizations, culminating in 2018 in the establishment of a company providing industrial-strength tools for the design, analysis and execution of decision and process models. These tools have so far been embedded in the widely used WorkZone enterprise information management system from KMD/NEC as well as in open-source case management systems used in Danish municipalities.

The talk will conclude with directions for current and future research, including the relation between explainable AI and DCR Graphs, the award-winning process mining based on DCR Graphs, and the representation of more general organisational knowledge – a topic currently peaking on several of Gartner’s hype curves (e.g. Emergent Technologies and Government Technologies and AI) under terms like human-centered AI, knowledge graphs, decision intelligence and digital twins of government.




Thomas Hildebrandt is Professor in Software Engineering and Head of the Software, Data, People & Society research section at the University of Copenhagen. With a background in formal process models, he has for more than 10 years been leading interdisciplinary research and innovation projects focusing on methods and technologies for developing reliable and flexible software systems suited to the people who use them, including the digitalisation of law, workflows and business process information systems.

The research carried out by Thomas has led to the development of the process technology Dynamic Condition Response (DCR) Graphs in collaboration with the company Exformatics. The technology has users all over the world and is available as a service. DCR is now owned by the company DCR Solutions and is used in Denmark to support flexible case management within the KMD WorkZone case management system, which is used by 65% of the employees in the Danish state, including administrative workers at several universities.


Eight new DIREC projects should contribute to accelerate the careers of young researchers

3 February 2022

Eight young researchers have just received grants for new research projects in digital technologies. The grants will ensure research in topics such as optimization of programming languages, bias in large data sets, verification of algorithms and energy optimization of hardware and software.

The national research centre DIREC has just granted DKK 5.1 million for 12 new research projects in digital technologies, eight of which are led and run by young, promising researchers. Young researchers can contribute creativity and energy that provide new research perspectives, and DIREC hopes that the grants will boost these researchers to drive their research even further forward.

One of the young researchers is Sophia Yakoubov, who is Assistant Professor at the Department of Computer Science at Aarhus University. Her project focuses on how to use technologies such as blockchain and multi-party computation to compute on data without revealing personal data.

“There are many use cases where there is a need to compute on data without compromising data privacy. This applies not least to the healthcare sector, and here technologies such as multi-party computation are an efficient tool that helps us move forward in these research areas,” says Sophia Yakoubov, who points out that it is important to give young researchers project management experience.

“I believe it is important to give young researchers the opportunity to learn to lead and project manage their own projects. I myself am a relatively new professor, and this is my first major grant, so I am very much looking forward to the exciting collaboration between different partners, which is provided by this project.”

Great impact on further career
Another grant recipient is Maja Hanne Kirkeby, who is Assistant Professor in computer science at the Department of Humanities and Technology at Roskilde University. She has received funding for a project investigating energy consumption and performance when implementing algorithms in hardware versus software. It is her first research project as project manager, and she is looking forward to expanding her experience in that role. As something new, she has chosen to hire both bachelor’s and master’s students for the project.

“I believe it is important to give students the opportunity to get in touch with research projects, because usually professors and PhD students participate in research projects, and that is it. This means that we have three levels involved, and I have not seen that before – and I have been involved in a couple of EU projects.”

According to Maja Hanne Kirkeby, it is of great importance for the students’ further careers to participate in research projects, whether they choose an academic or industrial career.

“Only a few students choose an academic career, so I find it important to demystify research. Typically, there are no unambiguous answers, and here the students get the opportunity to see how chaotic the process can be. When research results are presented, they may seem rigorous to the outside world, but a lot of questions have been investigated along the way, and the students get acquainted with this in these projects,” she explains.
Facts about the eight starter projects:

Hardware/Software Trade-off for the Reduction of Energy Consumption
The project works with the problem of chip implementation of software algorithms. Can we save power and energy by executing programs on an FPGA instead of on general-purpose computers? This project will explore classic sorting and path finding algorithms to see how much energy can be saved by implementing them directly on an FPGA.
Project manager: Assistant Professor Maja Hanne Kirkeby, Roskilde University

Ergonomic & Practical Effect Systems
The project works with effect systems, which are an extension of type systems in programming languages. Effect systems can be of great value in programming languages but are too complicated and slow to use. The project therefore aims to optimize them.
Project manager: Associate Professor Magnus Madsen, Aarhus University

Understanding Biases and Diversity of Big Data Used for Mobility Analysis
The project will investigate biases in large data sets and try to “debias” data with statistical approaches. The project already has access to large data sets and will in collaboration with UNICEF work on issues in connection with disease detection.
Project manager: Assistant Professor Vedran Sekara, IT University of Copenhagen

Automated Verification of Sensitivity Properties for Probabilistic Programs
The overall objective is to explore how automated verification of sensitivity properties of probabilistic programs can support developers in increasing trust in their software through formal assurances. The project continues research in the field and will, among other things, address problems with previous approaches and develop a supporting tool. Sensitivity is a key property for checking whether AI and security solutions work properly, which becomes important as AI solutions are widely deployed.
Project manager: Postdoc Alejandro Aguirre, Aarhus University

Accountable Privacy Preserving Computation via Blockchain
The project will investigate how to combine multi party computation and blockchain to ensure correct calculations of data categorized as personal data.
Project manager: Assistant Professor Sophia Yakoubov, Aarhus University
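
To illustrate the multi-party computation side of such a project (the blockchain component is out of scope here), the sketch below shows additive secret sharing, one of the simplest MPC building blocks. This is a pedagogical assumption, not the project's actual protocol; real MPC systems, such as those studied at Aarhus, are considerably richer.

```python
import random

PRIME = 2**61 - 1  # all shares live in the field Z_PRIME

def share(secret, n, rng=random):
    """Split `secret` into n additive shares; any n-1 of them look random."""
    shares = [rng.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Only the sum of all shares reveals the secret."""
    return sum(shares) % PRIME

def add_shared(a_shares, b_shares):
    """Each party adds its own shares locally, so a + b can be computed
    without anyone learning a or b individually."""
    return [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
```

For example, two hospitals could each share a patient count among three servers; the servers add their shares locally and only the combined total is ever reconstructed.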

Methodologies for scheduling and routing droplets in digital microfluidic biochips
In this project, a series of droplets is guided around a biochip using small electric charges. The droplets form a kind of minicomputer in which they can be routed and combined based on a program. At present, programming the chip can be complicated, as a number of factors, such as topological constraints and the surface on which the droplets move, must be taken into account. The project will investigate whether the chip can be made easier to program with the help of algorithms.
Project manager: Assistant Professor Luca Pezzarossa, Technical University of Denmark

Certifiable Controller Synthesis for Cyber-Physical Systems
Control systems for cyber-physical systems can in some cases be autogenerated, but how do you ensure that autogenerated control systems behave correctly and how can you certify them? The project will investigate this in relation to indoor climate control, adaptive cruise control, floods and floor heating.
Project manager: Postdoc Martijn Goorden, Aalborg University

Algorithms education via animation videos
The project will produce a series of interactive educational videos that explain in new and interesting ways how complex algorithms work. With these videos, the project seeks to make it easier for the students to acquire knowledge within the difficult technical disciplines.
Project manager: Assistant Professor Radu-Cristian Curticapean, IT University of Copenhagen

Explore project

Cyber-Physical Systems with Humans in the Loop

Project type: Explore Project


Constructing cyber-physical systems with humans in the loop is important in many application areas to enable close co-operation between humans and machines. However, there are also many challenges to overcome when constructing such systems with current software technologies and human-centered design approaches. To foster collaboration on the topic, the project will study the state of the art and map out challenges that are important for Danish industry to address in future work.

Value Creation

Scientific value: The project will provide better terminology and a common understanding of the state of the art across several areas of research within DIREC and disseminate this knowledge to the scientific community.

Capacity building: The project will establish new collaboration setups within DIREC and involve master’s students in the activities.

Business value: Through workshops, the project will disseminate knowledge to Danish industry and identify cases that could be relevant areas of collaboration between DIREC and Danish industry in future larger projects. Among others, the project will connect to the community involved in the Nordic IoT Center.


Project Manager

Mikkel Baun Kjærgaard


University of Southern Denmark
The Maersk Mc-Kinney Moller Institute


Jan Madsen


Technical University of Denmark
DTU Compute

Peter Gorm Larsen


Aarhus University
Department of Electrical and Computer Engineering

Torkild Clemmensen


Copenhagen Business School
Department of Digitalization

Kim Guldstrand Larsen


Aalborg University
Department of Computer Science