New technologies can help banks, insurance companies and authorities fight fraud

26 April 2022

Blockchain-based technologies can be used for more than cryptocurrencies. The technology eliminates the need for an intermediary in transactions between two parties and can ensure that data cannot be modified.

Combining this feature with cryptographic techniques will enable banks and authorities to share sensitive personal data securely and fight fraud. This is exactly the purpose of a new project between researchers from Aarhus University, the IT University of Copenhagen and the Alexandra Institute, which is supported by DIREC – Digital Research Centre Denmark.

Read more (in Danish)


Meet Miao Zhang, who works on the black-box problem with automated deep learning

8 April 2022

The 31-year-old Miao Zhang from China focuses on areas such as automated machine learning and deep learning. These areas are still at an early stage, but automated deep learning has great potential, since such a system builds itself without human intervention.

Miao works as Assistant Professor at the Department of Computer Science at Aalborg University. He is also part of the DIREC workstream Advanced and Efficient Big Data Management and Analysis. The project focuses on how we can develop new efficient prototypes that can enable the use of big data in industry. Miao focuses especially on building efficient and explainable prototypes for different tasks and data in an automated manner.

Can you tell us about your background and why you settled down in Denmark as a computer scientist?
I am interested in machine learning, automated deep learning and explainable AI. I hope that I can introduce automated deep learning and explainable AI to the Danish data science community, since research on these topics is still rare here.

Besides that, I chose to come to Aalborg because it is a young and very active university, which provides a lot of opportunities for young researchers. I have several friends working here, and they recommended that I join their group, the Center for Data-Intensive Systems (DAISY), which has an international reputation. I believe I can learn a lot here.

I think the working environment in Denmark and Aalborg is pretty good. We have a lot of flexible time, so I can focus on my research. In addition, I think Aalborg is an environmentally-friendly city, and I really enjoy life here.

Can you tell us about your research area?
I have broad research interests in machine learning and artificial intelligence – especially automated deep learning and explainable AI. I am interested in the automatic development of efficient, scalable and robust algorithms for machine learning, data mining, data management and deep learning applications with formal theoretical guarantees and explanations. I see myself working on these problems for the foreseeable future.

What are the scientific challenges and perspectives of your project?
Although the techniques of deep learning have been applied in different areas, such as computer vision, face recognition, medical imaging, natural language processing, data mining and data management, the design of deep learning systems is time-consuming – and it remains a black-box problem to explain why a developed deep learning system works.

Automated deep learning is the process of building deep learning systems for different problems without human intervention. Explainable AI aims to explain why the developed system works – and it can also assist the design of the deep learning system. Automated deep learning and explainable AI are in their early stages, and we still need to define some research problems, improve efficiency, and explain why the automatically designed system works.

How can your research make a difference for companies and communities?
Automated deep learning aims to build a better deep learning system in a data-driven automated manner, so that most practitioners in deep learning can build a high-performance machine learning model without being an expert in the field of deep learning.

Automated deep learning can provide end-to-end deep learning solutions, and these solutions are usually better than hand-designed deep learning systems. These automated systems can lower the barrier to entry for deep learning and make it easy for everyone to use these techniques to solve their own problems.
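The idea can be illustrated with a toy search loop. This is a minimal sketch, not Miao's actual method: it randomly samples architectures from a hypothetical search space and keeps the best according to a stand-in score, where a real system would train and validate each candidate network.

```python
import random

# Hypothetical search space: each candidate architecture is a choice of
# depth, width, and activation function. Real automated deep learning
# (e.g. neural architecture search) uses far richer spaces.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [16, 32, 64],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in score; a real system would train the candidate network
    and return its validation accuracy."""
    return arch["depth"] * arch["width"] / (1 + (arch["activation"] == "tanh"))

def random_search(trials=20, seed=0):
    """Sample architectures and keep the best-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(trials)]
    return max(candidates, key=evaluate)

print(random_search())
```

Random search is only the simplest baseline; the point is that the human never picks the architecture by hand.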

About Miao Zhang
  • Master's degree from the University of Science and Technology Beijing

  • PhD in information technology from University of Technology Sydney, Australia

  • PostDoc at the Machine Learning Group at Monash University, Australia

  • Assistant Professor at Aalborg University.

Read more


Meet Martin Zimmermann whose research focus is on verification tools

31 March 2022

39-year-old Martin Zimmermann from Germany works with correct and secure systems. Since the summer of 2021, he has worked as Associate Professor at the Distributed, Embedded and Intelligent Systems research group (DEIS) at the Department of Computer Science, Aalborg University.

Zimmermann is part of the DIREC project Verifiable and Safe AI for Autonomous Systems. The aim of the project is to develop methods and tools for safety-critical systems within a variety of domains. Here, he works on understanding the foundations of correct and secure systems.

Can you tell about your research area?
Software and embedded systems are everywhere in our daily lives, from medical devices to aircraft and the airbags in our cars. These software systems are often very complex, and it is challenging to develop correct systems. Therefore, we need verification software that can check such systems for errors.

The news is full of stories of potential vulnerabilities in software and embedded systems. Some of these vulnerabilities have been there for several years and are very hard to find. They might not be seen in daily use – only when you try to exploit a system.

It is even more pronounced when you look at distributed systems made up of several components interacting with each other. Consider a cinema's seat-reservation website: you click on the seat you want to book while others do the same at the same time. The system must be able to handle many concurrent requests. Verification tries to automate this reasoning and automatically prove that the system is correct and safe.

How can we make these systems more secure?
Personally, I am interested in viewing this as a kind of game. I want to design a system that lives in an environment, so I understand this as a game between the system and the environment. The system wants to satisfy a certain property, and the environment wants to break the system. And with that game view you can get very strong guarantees.

It’s very hard to get complex systems correct. And if you have a safety-critical system you need those guarantees to be obtained by verification software. If you employ software that controls an airbag, then you want to be sure that it works correctly. It’s easy to miss errors – so you cannot rely on humans to check the code.
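The game view can be made concrete on a toy example. The sketch below, with a four-state game invented for illustration, computes the set of states from which the environment can force an error (the "attractor" of the bad states); every state outside that set comes with the kind of strong guarantee Zimmermann mentions, since the system has a strategy to stay safe forever from there.

```python
# Toy safety game: each state is owned by the system ("sys") or the
# environment ("env"); the owner of the current state picks the next
# move. The environment wins if play ever reaches a state in BAD.
OWNER = {"a": "sys", "b": "env", "c": "sys", "bad": "env"}
EDGES = {"a": ["b", "c"], "b": ["a", "bad"], "c": ["a"], "bad": ["bad"]}
BAD = {"bad"}

def env_attractor(target):
    """States from which the environment can force play into `target`:
    an env-owned state with SOME edge into the set, or a sys-owned
    state with ALL edges into the set, is added until a fixed point."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for state, succs in EDGES.items():
            if state in attr:
                continue
            some = any(t in attr for t in succs)
            every = all(t in attr for t in succs)
            if (OWNER[state] == "env" and some) or (OWNER[state] == "sys" and every):
                attr.add(state)
                changed = True
    return attr

losing = env_attractor(BAD)
safe = set(OWNER) - losing
print(sorted(safe))  # ['a', 'c']: from these states the system can stay safe forever
```

Here state "b" is losing because the environment owns it and can move straight to "bad", so a correct system strategy from "a" always chooses "c" instead.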

What is the potential of verification?
Verification is a very challenging task. It is challenging for a human to argue that a system is correct, and it is also hard for a computer, so unfortunately, it is not applicable universally. Verification is used for systems that are safety-critical, but even here there is a tradeoff between verification cost and development cost. 

One of our goals is to develop techniques that are easy to use in practice. We work on the foundations of verification languages and try to understand how far we can push their expressiveness before it becomes infeasible to verify something. It can take hours or days to verify something, so it is a computationally expensive task. We try to understand what is possible and try to find problems and application areas where you can solve this task faster.

Another important thing is that we need precise specification languages for verification. You cannot use natural language. The verification algorithm needs a precise specification with precise semantics, so we are developing different logics to see if they can be used by engineers to actually write specifications. If it is too complicated for the practitioner, e.g., the engineer, it will not be used. You must find the sweet spot between expressiveness and usability.
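To illustrate what precise semantics buys you, here is a minimal sketch of turning an informal requirement into a checkable predicate over a finite trace. The state names and the property are invented for illustration; real specification logics (e.g. temporal logics) handle infinite behaviours and much richer properties.

```python
# The informal requirement "after a crash, the airbag eventually
# deploys" becomes an unambiguous, machine-checkable predicate.
def eventually_after(trace, trigger, goal):
    """Every state satisfying `trigger` is followed later in the
    trace by a state satisfying `goal`."""
    return all(
        any(goal(later) for later in trace[i + 1:])
        for i, state in enumerate(trace)
        if trigger(state)
    )

trace = ["idle", "crash", "deploying", "deployed", "idle"]
ok = eventually_after(trace, lambda s: s == "crash", lambda s: s == "deployed")
print(ok)  # True
```

The same English sentence could be read several ways ("immediately after"? "within 10 ms"?); the predicate pins down exactly one meaning, which is what a verification algorithm needs.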

Did you know Aalborg University before you were employed?
I have had a connection to Aalborg since my PhD where I worked on a European project with partners from all over Europe including the DEIS group in Aalborg. I was in Aalborg a few times during my PhD and knew people here. Aalborg is central in Europe when it comes to verification and design of systems. There are many collaborators and there is a good connection to the industry compared to other places. It is a very good location.

About Martin Zimmermann

  • PhD from RWTH Aachen University.
  • Postdoc at Warsaw University and University of Saarland in Saarbrücken.
  • Lecturer at the University of Liverpool.
  • Associate Professor at Aalborg University.

Read more


DIREC annual report 2021

2 March 2022

We have gathered the first year's experience in this annual report. Here you can read that DIREC initiated 11 bridge projects for a total of DKK 166 million, in which researchers across universities work together with a large number of companies and public organizations to develop new digital technologies.


Meet Tung Kieu, who has come to Denmark to detect anomalies in data

25 February 2022

Tung Kieu came to Denmark as a PhD student and today he works as Assistant Professor at the Department of Computer Science at Aalborg University. He is associated with DIREC’s workstream Advanced and Efficient Big Data Management and Analysis.

Data is found everywhere in our society today. This applies to everything from our smartphones and GPS navigation in cars to the sensors mounted on wind turbines. And by analyzing these huge amounts of data, you can detect anomalies, which can help improve our state of health and optimize companies' production.

The concept is called anomaly detection and the 31-year-old Tung Kieu has plunged into this topic.

Can you tell us a bit about your background and how you ended up working with big data and anomaly detection?

I have a Master’s degree in computer science from Vietnam National University, and I have been in Denmark for about five years. I came to Denmark because I received a PhD scholarship in the research group Daisy – Center for Data-Intensive Systems at Aalborg University, which is led by Professor Christian S. Jensen. When I finished my PhD, I became a research assistant, and after a few months, I got a position as Assistant Professor.

Aalborg has a great reputation in computer science and engineering and Christian S. Jensen is furthermore recognized for his outstanding research in databases and data mining. In Vietnam, my supervisor was affiliated with Christian S. Jensen and, in this way, I got in touch with him and received the scholarship.

In what way is research in Denmark different to research where you come from?

The environment in Denmark is very good and, furthermore, Aalborg is the happiest city in the EU, according to a study by the European Commission. We have a very good work-life balance, where we focus more on efficiency than on working time. Aalborg University is a young yet very active university. It ranks very high compared to other universities, and our lab, the Center for Data-Intensive Systems (DAISY), ranks second among research groups in Europe. It's great to be part of that.

Can you tell us about your research?

I work with databases and data mining and, more specifically, the area called anomaly detection in data. Due to the extensive digitization of processes, new data are constantly created, and by being able to analyze and utilize data, we can optimize our everyday lives.

However, there are a number of challenges. We produce such large amounts of data all the time that very efficient algorithms are required to analyze for anomalies. In addition, data quality is a challenge because much sensor data is subject to noise and potentially contains incorrect values. This means that you have to clean data to achieve the required quality. But this is also what makes the area interesting.

What do you expect to get out of your research and how can your research make a difference for companies and communities?

It may be easier to understand if I give a few examples. Anomaly detection can be used in many different places. For example, supermarkets collect data about their customers, and we can analyze these data and get an overview of people’s shopping patterns. The supermarkets can use this to customize their purchases so that they do not end up with a lot of products that they cannot sell.

Another example is data collected from sensors installed on wind turbines. Here we can use the algorithms to detect anomalies and thus predict if components in a wind turbine are about to fail, which is of great benefit to the wind turbine manufacturers.

Today, smartphones are very common and people use them to measure their health and how much exercise they get. We can use these data to analyze people’s health state. When smartphone users record data about their heart rate, we can actually analyze when people will potentially get a heart attack. The possibilities are endless, which makes the research area interesting.
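One classical baseline behind such analyses can be sketched in a few lines. This is not Tung Kieu's actual method, just an illustration of the idea: flag readings that deviate strongly from the series mean, as a faulty sensor value typically would. Real anomaly detectors for time series are far more robust to noise, trends, and seasonality; the readings below are invented.

```python
# Flag readings more than two standard deviations from the mean.
def zscore_anomalies(series, threshold=2.0):
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5 or 1.0
    return [i for i, x in enumerate(series) if abs(x - mean) / std > threshold]

readings = [20.1, 20.3, 19.9, 20.2, 95.0, 20.0, 20.1]  # one faulty value
print(zscore_anomalies(readings))  # [4]: only the spike is flagged
```

This also hints at the data-quality challenge mentioned above: a single noisy reading inflates the mean and standard deviation, which is one reason cleaning and more robust models matter.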

Read more about Tung Kieu


Meet Christian Schilling, who has come to Denmark to build software that can check other software for errors

21 February 2022

Today we have cyber-physical software systems everywhere in our society, from thermostats to intelligent traffic management and water supply systems. It is therefore crucial to develop verification software that can check these programs for errors before they are put into operation.  

Christian Schilling from Germany is interested in formal verification and modeling and has come to Aalborg University to be part of the DEIS group. He is also part of the DIREC project Verifiable and safe AI for Autonomous Systems and explains how research in cyber-physical systems makes a difference for companies and society.

Can you tell a bit about your background and why you ended up in Denmark as a computer scientist?

I did my PhD at a German university (Freiburg) and was a postdoc at an Austrian research institute (IST Austria). Now I am a tenure-track Assistant Professor at Aalborg University. The DEIS group at Aalborg University has an international reputation and is a great fit for my interests. It is productive to work with people who “speak my language.” At the same time I can develop my own independent research directions.

What are you researching and what do you expect to get out of your research?

Broadly speaking, I am interested in the algorithmic analysis of systems. More precisely, I work on cyber-physical systems, which are systems consisting of a mix of digital (cyber) and analog (physical) components. Nowadays these systems are everywhere, from thermostats to aircraft. I want to answer the fundamental question of safety: Can a system end up in an error? My analysis is based on mathematical models, and I also work on the construction of such models from observational data.

We look at models of systems and try to find behaviors of the system that might not be what you want. And if you don't find any errors, you get a mathematical proof that your model is correct. Of course, you could still make mistakes with the wiring when you implement the model in a practical system; we cannot cover that. That is why there are also more practical aspects of our work.
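The kind of exhaustive model analysis described here can be sketched on a toy transition system. The states, transitions, and error below are invented for illustration: explore every reachable state and check whether an error state is among them.

```python
from collections import deque

# Hypothetical model of an airbag controller as a finite transition
# system: each state lists the states it can move to next.
TRANSITIONS = {
    "idle": ["armed"],
    "armed": ["idle", "crash_detected"],
    "crash_detected": ["deployed"],
    "deployed": ["deployed"],
}
ERROR = "deployed_without_crash"  # the behavior we never want

def reachable(initial):
    """Breadth-first exploration of all reachable states."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = reachable("idle")
print(ERROR in states)  # False: this model cannot reach the error state
```

On a model this small the check is trivial; the scientific challenge is that industrial models have astronomically many states, which is why scalability dominates the field.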

What are the scientific challenges and perspectives in your project?

One of the grand challenges is to find approaches that scale to industrial systems, which are often large and complex. In full generality this goal cannot be achieved, so researchers focus on identifying structure in practical systems that still allows us to analyze them. The challenge is to find that structure and develop techniques that exploit it.

Another recent relevant trend is the rise of artificial intelligence and how it can be safely integrated into systems without causing problems. Think about autonomous systems like vacuum cleaners, lawn mowers, and of course self-driving cars in the near future. 

It is certainly a challenge to analyze and verify systems that involve AI, because the way AI is used these days is really more like a black box where nobody understands what happens. It is very difficult to say that a self-driving car will under no circumstances kill a person.

To make this kind of analysis you need a model, and of course you could say that an engineer could build this model, but at a certain size it becomes too complex and very difficult to do. So you want an automatic technique to do that. 

Another challenge is to go from academic models to real-world systems, because you usually make simplifications that you have to take into account and resolve when you implement the models.

How can your research make a difference for companies and communities?

Engineers design and build systems. Typically, they first develop a model and analyze that model. My research directly addresses this phase and helps engineers learn about the behavior only given a model. This means that they do not need to build a prototype to understand the system. This saves cost in the design phase, as changing a model is cheap but changing a prototype is expensive. On the level of a model you can actually have mathematical correctness guarantees. This is something you cannot achieve in the real world.

The DEIS group has a lot of industry collaboration, but so far I’ve been working with academic modeling. With these verification models you can make sure that intelligent traffic systems work as they should.


Eight new DIREC projects will help accelerate the careers of young researchers

3 February 2022

Eight young researchers have just received grants for new research projects in digital technologies. The grants will ensure research in topics such as optimization of programming languages, bias in large data sets, verification of algorithms and energy optimization of hardware and software.

The national research centre DIREC has just granted DKK 5.1 million for 12 new research projects in digital technologies, eight of which are led and run by young, promising researchers. Young researchers can contribute creativity and energy that bring new research perspectives, and DIREC hopes that the grants will help these researchers drive their research even further forward.

One of the young researchers is Sophia Yakoubov, who is assistant professor at the Department of Computer Science at Aarhus University. Her project focuses on how to use technologies such as blockchain and multi-party computation to compute on data without exposing personal data.

“There are many use cases in which there is a need to be able to compute on data without compromising data privacy. This applies not least to the healthcare sector, and here technologies such as multi-party computation are efficient tools that help us move forward in these research areas,” says Sophia Yakoubov, who points out that it is important to give young researchers project management experience.

“I believe it is important to give young researchers the opportunity to learn to lead and project manage their own projects. I myself am a relatively new professor, and this is my first major grant, so I am very much looking forward to the exciting collaboration between different partners, which is provided by this project.”

Great impact on further career
Another grant recipient is Maja Hanne Kirkeby, who is Assistant Professor in computer science at the Department of Humanities and Technology at Roskilde University. She has received funding for a project to investigate energy consumption and performance when implementing algorithms in hardware and software. It is also her first research project as a project manager, and she is also looking forward to expanding her experience in the role of a project manager. As something new, she has chosen to hire both bachelor and master students for the project.

“I believe it is important to give students the opportunity to get in touch with research projects, because usually professors and PhD students participate in research projects, and that is it. This means that we have three levels involved, and I have not seen that before – and I have been involved in a couple of EU projects.”

According to Maja Hanne Kirkeby, it is of great importance for the students’ further careers to participate in research projects, whether they choose an academic or industrial career.

“Only a few students choose an academic career, so I find it important to demystify research. Typically, there are no unambiguous answers, and here the students get the opportunity to see how chaotic this process can be. When research results are presented, they may seem rigorous to the outside world, but a lot of questions have been investigated along the way, and the students get acquainted with this in these projects,” she explains.
Facts about the eight starter projects:

Hardware/Software Trade-off for the Reduction of Energy Consumption
The project works with the problem of chip implementation of software algorithms. Can we save power and energy by executing programs on an FPGA instead of on general-purpose computers? This project will explore classic sorting and path finding algorithms to see how much energy can be saved by implementing them directly on an FPGA.
Project manager: Assistant Professor Maja Hanne Kirkeby, Roskilde University
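The intuition behind implementing sorting directly in hardware can be sketched in software. A sorting network is a fixed sequence of compare-and-swap operations with no data-dependent control flow, which is exactly what maps onto FPGA comparators. The network below is the standard 5-comparator network for four inputs, simulated in Python purely for illustration; the project's actual energy measurements concern real FPGA implementations.

```python
# Comparator wiring for a 4-input sorting network: each pair (i, j)
# compares positions i and j and swaps them if out of order. In
# hardware, each pair becomes a physical comparator circuit.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def sorting_network(values):
    v = list(values)
    for i, j in NETWORK_4:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

print(sorting_network([3, 1, 4, 2]))  # [1, 2, 3, 4]
```

Because the sequence of comparisons is fixed regardless of the input, the circuit needs no branching or instruction fetch, which is part of why a direct FPGA implementation can save energy.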

Ergonomic & Practical Effect Systems
The project works with effect systems, which are an extension of type systems in programming languages. Effect systems can be of great value in programming languages but are too complicated and slow to use. Therefore, the project wants to optimize them.
Project manager: Associate Professor Magnus Madsen, Aarhus University

Understanding Biases and Diversity of Big Data Used for Mobility Analysis
The project will investigate biases in large data sets and try to “debias” data with statistical approaches. The project already has access to large data sets and will, in collaboration with UNICEF, work on issues related to disease detection.
Project manager: Assistant Professor Vedran Sekara, IT University of Copenhagen

Automated Verification of Sensitivity Properties for Probabilistic Programs
The overall objective is to explore how automated verification of sensitivity properties of probabilistic programs can support developers in increasing the trust in their software through formal assurances. The project continues research in the field and will, among other things, solve problems with previous approaches and develop a tool to support this. Sensitivity is a key property for checking whether AI solutions and security solutions work properly, which becomes important when AI solutions are implemented widely.
Project manager: Postdoc Alejandro Aguirre, Aarhus University

Accountable Privacy Preserving Computation via Blockchain
The project will investigate how to combine multi party computation and blockchain to ensure correct calculations of data categorized as personal data.
Project manager: Assistant Professor Sophia Yakoubov, Aarhus University
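One building block the project combines, additive secret sharing from multi-party computation, can be sketched as follows. The modulus, values, and protocol shape are illustrative only; a real MPC protocol involves communication, malicious-security measures, and much more.

```python
import random

# Each party splits its private value into random shares modulo a
# prime P, so that the sum of all parties' inputs can be computed
# without any single party seeing another's input.
P = 2**61 - 1  # an illustrative prime modulus

def share(value, n_parties, rng):
    """Split `value` into n additive shares that sum to it modulo P."""
    shares = [rng.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

rng = random.Random(42)
incomes = [1200, 3400, 2600]  # each party's private input
all_shares = [share(v, 3, rng) for v in incomes]
# Party i sums the i-th share of every input; summing those partial
# sums reveals only the total, never the individual incomes.
column_sums = [sum(col) % P for col in zip(*all_shares)]
total = sum(column_sums) % P
print(total)  # 7200, computed without exposing any single income
```

Each share on its own is a uniformly random number, which is why holding fewer than all shares of a value reveals nothing about it.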

Methodologies for scheduling and routing droplets in digital microfluidic biochips
In this project, a series of droplets are guided around a biochip using small electric charges. The droplets form a kind of minicomputer where the droplets can be routed and combined based on a program. At present, it can be complicated to program the chip, as a number of factors, such as topological constraints and the surface on which the droplet moves, must be taken into account. The project will investigate whether the chip can be made easier to program by developing algorithms that can help.
Project manager: Assistant Professor Luca Pezzarossa, Technical University of Denmark

Certifiable Controller Synthesis for Cyber-Physical Systems
Control systems for cyber-physical systems can in some cases be autogenerated, but how do you ensure that autogenerated control systems behave correctly and how can you certify them? The project will investigate this in relation to indoor climate control, adaptive cruise control, floods and floor heating.
Project manager: Postdoc Martijn Goorden, Aalborg University

Algorithms education via animation videos
The project will produce a series of interactive educational videos that explain in new and interesting ways how complex algorithms work. With these videos, the project seeks to make it easier for the students to acquire knowledge within the difficult technical disciplines.
Project manager: Assistant Professor Radu-Cristian Curticapean, IT University of Copenhagen


Meet Tijs Slaats, who just won a prize for best process mining algorithm

Tijs is Associate Professor at the Department of Computer Science at the University of Copenhagen and Head of the Business Process Modeling and Intelligence research group. In DIREC, he works on the Bridge project AI and Blockchains for Complex Business Processes.

Tijs’ research interests include declarative and hybrid process technologies, blockchain technologies, process mining, and information systems development.  

He co-invented the flagship declarative Dynamic Condition Response (DCR) Graphs process notation and was a primary driver in its early commercial adoption. In addition, he led the invention and development of the DisCoveR process miner, which was recognized as the best process discovery algorithm in 2021. 

Can you tell us briefly about your research and what value you expect to get from it?
We try to describe processes. These can be basic things that we do as human beings. It could be assembling a car at a factory, but it could also be treating patients at a hospital. If a patient is admitted to a hospital, they need help and treatment.

What these examples have in common is that you need to go through a number of steps and activities to reach your goal, and those activities are related to each other. It may be medication that needs to be taken in a certain order.

In our research, we have developed a mathematical method for describing these processes. The reason for doing this is because it gives you the tools to ensure that the process goes the way you want it to.

In the new DIREC project, we go one step further. We have observed that many companies and organizations have large amounts of data on how they have performed their jobs. We can look at these data and analyze how people actually perform their work, because the way many people do their jobs does not necessarily match the way they expect to do them. Maybe they unintentionally take shortcuts.

Our idea is to find and analyze these data and, on that basis, derive a model.

It is important that such a model is also understandable to the users, so that they can understand how they perform their work. We call this process mining, and it is a reasonably large academic area. Two years ago, I developed an algorithm and entered it in a contest that compares which algorithm describes these “logs of behaviour” most accurately, and we won.
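The first step of many process-discovery algorithms can be sketched in a few lines. DisCoveR itself learns declarative DCR-graph constraints; the sketch below only illustrates the generic starting point of counting which activities directly follow which in an event log. The log is invented for illustration.

```python
from collections import Counter

# Each trace is the ordered list of activities for one case, e.g. one
# patient's journey through a hospital.
log = [
    ["register", "triage", "treat", "discharge"],
    ["register", "treat", "discharge"],
    ["register", "triage", "treat", "treat", "discharge"],
]

def directly_follows(traces):
    """Count how often activity a is immediately followed by b."""
    return Counter((a, b) for trace in traces for a, b in zip(trace, trace[1:]))

dfg = directly_follows(log)
print(dfg[("register", "triage")])  # 2
```

From counts like these, discovery algorithms infer a model of the process, including the shortcuts (here, one case skipping triage) that people may not realize they take.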


What results do you expect from your research?
Our cooperation with industry is particularly important. In the project, we collaborate with the company Gekkobrain, which works with DevOps, and they are interested in analyzing large ERP systems and in finding tools that can optimize a system and find abnormalities. These systems are quite complex, so it is important to be able to identify where things are going wrong.

Gekkobrain has a lot of data because they work with large companies that have huge amounts of log data, and these systems are so complex that it adds some extra challenges for our algorithms. To get access to such complex data is an important perspective.

How can your research make a difference to companies and society?
The biggest impact of our work and models is that you can gain insight into how you perform your work. It gives you an objective picture of what has been done.

Companies can use it to find out if there are places where work processes are performed in an inappropriate way and thus avoid the extra costs.

Can you tell us about your background and how you ended up working with this research area?
I initially got a Bachelor degree in Information & Communication Technology from Fontys University of Professional Education, then worked in industry, where I led the webshop development team of a Dutch e-commerce provider and acted as project leader on the implementation of our product for two major customers: Ferrari and Hewlett Packard.

I decided to move to Denmark after meeting my (Danish) wife. At the time I was already considering pursuing further education, while my wife was fairly settled in Denmark, so it made sense for me to be the one to move.

I got my MSc and PhD degrees at the IT University of Copenhagen. There I became interested in the field of business process modeling because it allows me to combine foundational theoretical research with very concrete industrial applications. Process mining in particular provides really interesting challenges because it requires learned models to be understandable for business users, something that has only recently come into focus in the more general field of AI. 

After a short postdoc at ITU I accepted a tenure-track assistant professorship at DIKU, which was a very good opportunity because it offers a (near) permanent position for relatively junior researchers. At the time this was uncommon in Denmark.


Project will transfer AI from the cloud to the IoT device

13 December 2021

In a new DIREC project, computer science researchers collaborate with industry to develop artificial intelligence to let the IoT devices handle more things.

Photo: Kaare Smith, DTU

Digitization of society is one of the prerequisites for achieving the climate goal of a 70 percent CO2 reduction by 2030. Here, small sensors (IoT devices) installed in e.g. buildings, heating systems, and treatment plants will play an important role in managing energy consumption, heat, indoor climate, etc.

In a new project, Embedded AI – supported by the national research center DIREC – researchers will work with industry to investigate how to develop AI (artificial intelligence) that can be implemented in IoT devices, so that the devices can do more themselves. Today, sensors depend on AI algorithms running on cloud platforms or in decentralized networks (edge computing), where data and commands are sent over the internet or wireless networks.

“It is quite obvious that you will not be able to do the same as with the cloud and edge, but it will cost less, use less energy and be able to react faster. It will also increase security and privacy because data can be kept where it is collected. So there are many benefits to embedded AI,” says the project manager, Jan Madsen, professor, section manager, and deputy director at DTU Compute.
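One common technique for fitting AI onto resource-limited devices can be sketched briefly. This illustrates post-training quantization in general, not the project's specific approach: 32-bit float weights are mapped to 8-bit integers, shrinking memory and arithmetic cost at a small accuracy loss. The weights are invented for illustration.

```python
# Quantize float weights to int8 with a single shared scale factor.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.82, -0.41, 0.07, -1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q)       # small integers in [-127, 127]
print(approx)  # close to the original weights
```

Each quantized weight fits in one byte instead of four, and integer arithmetic is cheaper than floating point on small chips, which is part of what makes embedded AI feasible.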

In the project, DTU, Aarhus University, the University of Copenhagen, and CBS collaborate with the pump manufacturer Grundfos Holding, the engine and machine manufacturer MAN Energy Solutions, the window manufacturer VELUX, and the technology company Indesmatech.

Move AI from large platforms to small ones
Over the project’s three years, the partners will work on specific issues at the four industry partners. These companies are strong representatives of businesses that can strengthen their competitiveness by knowing the right tools and platforms for leveraging embedded AI (eAI) in their products.

The project will examine the process of going from large platforms to small ones, explore suitable tool platforms, investigate what opportunities new types of chips provide for embedded AI, and map out how embedded AI can change companies’ business models.

Grundfos is experiencing a knowledge gap
The idea for the DIREC project came about through network meetings where research institutions and industry discuss future competencies and technology needs. Thorkild Kvisgaard, Head of Electronics, Director Technology Innovation at Grundfos, has taken part in these meetings.

He says the company sees a clear need to move some of its artificial intelligence from the large platforms that run on mainframe computers and similar hardware down to more embedded devices (AIoT), even though these are very resource-limited platforms to work on. Doing so saves energy, and you avoid sending data over the internet and depending on internet connections and cloud solutions that run outside your own control.

“It will, of course, turn out that you cannot do quite as much on platforms with limited resources, but we do not know those limits today. And maybe we can do a lot more than we think. If we work with something that is not time-critical, it does not matter that the embedded AI has to spend several minutes figuring something out, if it is a slow and complex process,” says Thorkild Kvisgaard.

“At Grundfos, we have experimented with the technology ourselves, but we are experiencing a gap between what data science experts work with on large cloud platforms and what IoT programmers work with. So we hope that the project will also create a better understanding of each other’s work areas.”

The choice of chip becomes crucial

The industry partner Indesmatech acts as the local office for chip manufacturers, facilitates various development projects with new technology, and helps companies develop technology.

The company looks forward to clarifying the possibilities of working with embedded AI algorithms, explains Rune Domsten, co-founder of Indesmatech:

“What is interesting about the Embedded AI project, in addition to the software used for AI, is to investigate which chip and hardware platforms to execute on and use in different situations. The battery consumption in sensors depends heavily on which chip you use, and that can determine whether the battery lasts for, say, five or ten years.”

Although the industry partners in the DIREC project are large companies that already work with AI, the project could also be of great significance to smaller companies that lag behind in artificial intelligence, says project manager Jan Madsen:

“While it may seem rather uninteresting from a research perspective to develop small AI algorithms, there are actually major research challenges in developing efficient architectures and methods that can be used in smaller, resource-limited sensors and IoT devices. It may also be what gets a small business started using AI for complex tasks and processes.”

About DIREC – Digital Research Centre Denmark

The purpose of the national research centre DIREC is to bring Denmark to the forefront of the latest digital technologies through world-class digital research. To meet the great demand for highly educated IT specialists, DIREC also works to expand capacity within both research and the education of computer scientists. The centre has a total budget of DKK 275 million and is supported by Innovation Fund Denmark with DKK 100 million. The partnership consists of a unique collaboration across the computer science departments at Denmark’s eight universities and the Alexandra Institute.

The activities in DIREC are based on societal needs, and research is continuously translated into value-creating solutions in collaboration with the business community and the public sector. The projects operate across industries with a focus on, among other areas, artificial intelligence, the Internet of Things, algorithms, and cybersecurity.


Embedded AI

Partners in Embedded AI:

  • DTU
  • Aarhus University
  • University of Copenhagen
  • CBS
  • Grundfos Holding A/S
  • MAN Energy Solutions
  • Indesmatech

Jan Madsen
DTU Compute
Technical University of Denmark


A new project will make it easier to design and certify IoT systems

8 December 2021


IoT devices are blending into the infrastructure of both society and our personal lives. Many of these devices run in uncontrolled, potentially hostile environments, which makes them vulnerable to security attacks. Moreover, with the increasing number of safety-critical IoT devices, such as medical and industrial IoT devices, IoT security is a public safety issue. The need for security in these systems has therefore been recognized even at the governmental and legislative level, e.g. in the EU, US and UK, resulting in proposed legislation to enforce at least a minimum of security considerations in deployed IoT products.

Photo by Søren Kjeldgaard

Professor Jaco van de Pol will lead the DIREC project Secure IoT systems (SIoT), which aims to model security threats and countermeasures for IoT systems and services, to develop secure solutions, and to analyze residual security risks.

“Our goal with the SIoT project is to make it easier to design and certify secure IoT devices. Security and privacy are very important to many people and organizations that use IoT devices for measurements in smart cities, natural environments, logistics chains, and in their private homes. Engineering IoT devices is challenging, since they are physically small and must run on low power. Yet they must perform accurate measurements and communicate with high efficiency. So how can one achieve security on top of that? We will provide new tools to model security threats, implement countermeasures, and analyze the final security risks.”

Jaco van de Pol continues: “I am happy to be able to work with a team that includes both academic researchers and industrial experts. This will ensure that the project addresses the right questions, and that we can find new solutions by combining the expertise from several disciplines. And we can evaluate the solutions in an industrial setting.”

The strategy is to use algorithms from automata theory and game theory to automate risk analysis and the synthesis of security strategies. The implementation of the security policies will consider both technical and social aspects, in particular usability in organizations and the training of people.

For TERMA A/S, who are part of the project, the motivation is to understand the landscape of IoT systems in order to make them more cyber-resilient. Samant Khajuria, Chief Specialist Cybersecurity at TERMA A/S, explains:

“When we integrate IoT systems in our line of business, our main purpose is to provide safety for critical systems. Our systems go to both the defense and civilian sectors, such as wind farms, airports, or harbors. We know that IoT devices will sooner or later become obvious pieces of the puzzle in providing good systems in the future, and before integrating them in systems like this we need to understand the threats and risks. Secondly, we would like to collaborate with universities in Denmark, because the researchers work with this every day. We are merely users of the technology.”

Jørgen Hartig is Managing Director and Partner at SecuriOT, which is also part of the project. He hopes the project will help create the needed awareness on both sides of the table about the Industry 4.0 environment. They often hear customers saying: “Why would the hackers go for us? We do not produce anything interesting…”, “the production has been running for 25 years, and we haven’t had an issue”, or “there are no connections between IT systems and OT systems.”

“The last statement will be challenged dramatically in the next 5-10 years. IoT and OT vendors will come out with new technology solutions that will utilize cloud-enabled applications and 5G connections to the factory floor, so there will be no “air-gap” in the future. I am not saying it is wrong, I am just saying that the consumers and IoT vendors need to work with the cyber threats and risks in a structured way.”

According to Gert Læssøe Mikkelsen, Head of Security Lab at the Alexandra Institute, there is a need for improved cyber security in IoT, which is also the reason why they participate in the project:

“We see a need for academic research in close collaboration with industry to deal with this. We hope that the tools and methodologies developed in this project will be deployed and will improve the cybersecurity of IoT, so we are all ready for a future where we expect both an increase in threats from cybercriminals and, as a consequence, an increase in the requirements and regulation that the industry must be ready to handle.”




In SIoT, the following parties will participate as collaborators:

  • Aarhus University
  • Aalborg University
  • DTU
  • Copenhagen Business School
  • Alexandra Institute
  • Terma
  • Grundfos
  • Develco Products
  • Beumer Group
  • Micro Technic
  • SecuriOT
  • Seluxit

Jaco van de Pol
Department of Computer Science
Aarhus University