
Thomas Hildebrandt advocates for reliable AI in the public sector – It’s time to end “probability guessing”

21 January 2025

Researcher Relay #1


For the past 15 years, Thomas Hildebrandt has been researching the use of AI in the public sector. Time and again, these projects have ended up on the state’s IT graveyard. After encountering numerous challenges, he now senses a shift towards models that may be less glamorous than language models but are far more reliable and energy-efficient. This is the first article in the Researcher Relay series.

It is easy to see how AI could benefit hospitals or municipalities. Automated processes and more time for citizen-facing tasks are just some of the potential advantages of the AI utopia the public sector has pursued for decades.

Early efforts began with profiling citizens through data registries, followed by chatbots powered by language models. But success has been limited, and many of these initiatives have ended up on the public sector’s graveyard of failed IT projects.

With 15 years of experience researching AI in the public sector, Thomas Hildebrandt, Professor at the Department of Computer Science at the University of Copenhagen, is one of the most prominent voices in the field. According to him, language models like ChatGPT have gained popularity due to their ability to produce responses that seem convincing and trustworthy. However, these models are not designed to operate in a context where answers must always be correct.

“These models don’t generate answers based on true logic. They are merely text generators that predict the most likely next sentence—not based on facts, but on statistical probabilities. This doesn’t work in a public context, where we must be able to explain why a citizen is being denied welfare benefits or why children are being forcibly removed from their homes,” he explains.

Hildebrandt emphasizes that public decision-making demands transparency. Citizens need to be able to trust that AI systems are not simply guessing but are in fact complying with the law. Moreover, these processes must be documentable.

“We must be able to explain how the system arrived at its decision. This is a fundamental requirement that we must demand in any society governed by law.”

From predictive models to hybrid solutions


The history of AI in the public sector is full of failed attempts to integrate hyped technology. Initially, there was a belief that linking data from various registries could predict everything from long-term unemployment to child benefits.

“The result was that caseworkers lost trust in the technology, and users began to complain about discrimination,” Thomas Hildebrandt notes.

A few years ago, language models emerged as a new ready-made solution promising significant changes. Public employees were trained to write prompts, and retrieval-augmented generation (RAG) solutions were developed to ground the models in databases of legal texts. But even these solutions have proven problematic.

“These systems are too complex to maintain, and what happens when a software update occurs? There are just too many unknown factors,” Hildebrandt warns.

In addition to being imprecise, language models are extremely energy-intensive.

“It takes billions of calculations to run a language model, making them extremely energy-inefficient. A rule-based chatbot uses 1,000 times less energy. It may be less exciting, but it is far more reliable and traceable,” he says.

Hybrid AI may be the answer


For Thomas Hildebrandt and his research team, the future lies in hybrid AI, which combines language models with rule-based systems. This approach allows citizens to interact with the language model in a familiar way, while the answers are derived from a rule-based system that encodes legal provisions.

“It’s about using the best of both worlds. Rule-based systems offer reliability and traceability, while language models provide flexibility and user interaction,” he explains, using a construction scenario as an example:

“If your application to build a carport is rejected, the system must be able to explain the specific rules behind the decision. This is the kind of AI we need—not probability guessing.”
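
To make the contrast concrete, here is a minimal sketch of the rule-based half of such a hybrid system. The rule names, thresholds, and carport criteria below are invented for illustration – they are not actual building regulations – but they show how every decision can carry an explicit trace of the rules behind it.

```python
# Minimal sketch of the rule-based component in a hybrid AI system.
# The rules and thresholds are illustrative assumptions, not real regulations.

def assess_carport_application(area_m2: float, distance_to_boundary_m: float) -> dict:
    """Apply explicit rules and record which ones fired, so the
    decision can be traced back to named provisions."""
    violated = []
    if area_m2 > 50:
        violated.append("Rule A1: carports over 50 m2 require a full building permit")
    if distance_to_boundary_m < 2.5:
        violated.append("Rule A2: structures must be at least 2.5 m from the boundary")
    return {
        "approved": not violated,
        "reasons": violated or ["All checked rules satisfied"],
    }

# A language model could translate the citizen's question into these inputs
# and phrase the traced decision in plain language, while the decision
# itself remains rule-based and explainable.
print(assess_carport_application(area_m2=30, distance_to_boundary_m=1.0))
```
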
As a result, his team has received support for new projects focused on developing rule-based systems.

“It’s encouraging that we are starting to see signs from the public sector that this is the direction they want to pursue. That’s why I’m optimistic about the future of AI in the public sector,” he concludes.


Researcher Relay

The interview with Thomas Hildebrandt is part of the Researcher Relay series, a collaboration where researchers from Danish universities pass the baton to one another.

Thomas Hildebrandt has selected Naja Holten Møller, Associate Professor at DIKU, to take over the relay. She works with AI in the healthcare sector on a daily basis.


For four decades, Susanne has been shaping the digital everyday life of Danes

4 October 2023


Susanne Bødker’s research in Human-Computer Interaction (HCI) has for decades had a profound influence on how we interact with technology in our daily lives and work. In September, she celebrated her 40th anniversary at Aarhus University.

Photo: Morten Koldby

IT solutions should be designed by humans for humans; digital tools should make a difference in everyday life and function like an extended arm, seamlessly integrated without us having to consciously think about it.

The research field of Human-Computer Interaction (HCI) plays a central role in the technology that surrounds us daily, and in the way the job market is evolving, ensuring that new IT solutions effectively support human use.

One of Denmark’s leading researchers in the HCI field is Susanne Bødker, a computer science professor at Aarhus University, honored multiple times for her research results in human-machine interaction.

Since the 1980s, this researcher from Aarhus has been involved in designing the digital everyday life and work of the Danes, ensuring that technology develops constructively and that its challenges and opportunities are addressed critically.

Currently, she is particularly focused on how hybrid work challenges companies and employees, and how it fundamentally alters the interaction and relationship between people in a workplace, for better or worse.

– Hybrid work is only becoming more prevalent, so we need to critically consider the possibilities and limitations of technology, as well as the way we organise and lead. When a workplace with several hundred employees, for example, decides that all activities should be hybrid going forward, it imposes new demands on personnel management. It changes the very nature of work and meetings when employees must always be able to participate remotely. This affects what can be shared, when and how – it essentially changes everything participants see, hear, and experience because on the screen, we are still only ‘flat people,’ says Susanne Bødker.

Examine your organisation critically and inquisitively

Advising organisations on how to adapt to being a modern hybrid workplace entails considering technology, physical environments, and the managerial aspects of hybrid work.

– Companies face vastly different challenges, and the technology must be integrated into the specific context. Are you a software company with employees all over the world, struggling with the issue that people are reluctant to move to Aarhus? Are you a bank looking to replace physical customer meetings with online ones? Do you simply want people to have the freedom to work from home and only physically come into the office a few days a week? In that case, it is necessary to organise differently so people come into the office on the same days. Every company needs to address its own reality and current challenges.

Her extensive research in user interfaces and user experiences has led to new methods and theories that have gained international attention. In 2017, she received an ERC Advanced Grant of over 2 million euros from the European Research Council for research in user interfaces for complex human use of computers and the research project “Common Interactive Objects.” The goal was to explore the possibility of building open and shareable platforms and communities based on the user’s – not the computer systems’ – terms.

Most recently, she has been participating in the REWORK project, funded by the Digital Research Centre Denmark. REWORK is a multidisciplinary research project in which researchers, various companies, and three recognised artists explore the future of the hybrid workplace, focusing particularly on new technologies that support human needs, relational and articulation work, and embodiment and presence.


Meet Miao Zhang, who works on the black-box problem with automated deep learning

8 April 2022


The 31-year-old Miao Zhang from China focuses on areas such as automated machine learning and deep learning. These areas are still at an early stage, but automated deep learning has great potential because the system builds itself without human intervention.

Miao works as Assistant Professor at the Department of Computer Science at Aalborg University. He is also part of the DIREC workstream Advanced and Efficient Big Data Management and Analysis. The project focuses on how we can develop new efficient prototypes that can enable the use of big data in industry. Miao focuses especially on building efficient and explainable prototypes for different tasks and data in an automated manner.

Can you tell us about your background and why you settled down in Denmark as a computer scientist?
I am interested in machine learning, automated deep learning and explainable AI. I hope that I can introduce automated deep learning and explainable AI to the Danish data science community, since research on these topics is still rare here.

Besides that, I chose to come to Aalborg because it is a young and very active university, which provides a lot of opportunities for young researchers. I have several friends working here, and they recommended that I join their group, the Center for Data-Intensive Systems (DAISY), which has an international reputation. I believe I can learn a lot here.

I think the working environment in Denmark and Aalborg is pretty good. We have a lot of flexible time, so I can focus on my research. In addition, I think Aalborg is an environmentally-friendly city, and I really enjoy life here.

Can you tell us about your research area?
I have broad research interests in machine learning and artificial intelligence – especially automated deep learning and explainable AI. I am interested in the automatic development of efficient, scalable and robust algorithms for machine learning, data mining, data management and deep learning applications with formal theoretical guarantees and explanations. I see myself working on these problems for the foreseeable future.

What are the scientific challenges and perspectives of your project?
Although the techniques of deep learning have been applied in many areas – such as computer vision, face recognition, medical imaging, natural language processing, data mining and data management – the design of deep learning systems is time-consuming, and it remains a black-box problem to explain why a given deep learning system works.

Automated deep learning is the process of building deep learning systems for different problems without human intervention. Explainable AI seeks to explain why the developed system works – and it can also assist the design of the deep learning system. Both fields are in their early stages, and we still need to define research problems, improve efficiency, and explain why an automatically designed system works.
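
As a rough illustration of what building a deep learning system “without human intervention” can mean, the sketch below automatically searches over a few candidate network architectures and keeps the best one. It is a toy random search written with scikit-learn – an invented example, not Miao Zhang’s own method or a state-of-the-art neural architecture search.

```python
# Toy automated architecture search: try candidate networks without
# human intervention and keep the best. Illustrative only.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Candidate hidden-layer layouts (the "search space").
search_space = [(16,), (64,), (32, 32), (64, 32), (128, 64, 32)]
random.seed(0)

best_arch, best_score = None, -1.0
for arch in random.sample(search_space, k=4):
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_arch, best_score = arch, score

print(f"Selected architecture {best_arch} with accuracy {best_score:.2f}")
```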

How can your research make a difference for companies and communities?
Automated deep learning aims to build a better deep learning system in a data-driven automated manner, so that most practitioners in deep learning can build a high-performance machine learning model without being an expert in the field of deep learning.

Automated deep learning can provide end-to-end deep learning solutions, and these solutions are usually better than hand-designed deep learning systems. Such automated systems lower the barrier to entry and make it easy for everyone to use these techniques to solve their own problems.

About Miao Zhang
  • Master’s degree from the University of Science and Technology Beijing

  • PhD in information technology from the University of Technology Sydney, Australia

  • Postdoc in the Machine Learning Group at Monash University, Australia

  • Assistant Professor at Aalborg University.



Meet Martin Zimmermann, whose research focuses on verification tools

31 March 2022


39-year-old Martin Zimmermann from Germany works with correct and secure systems. Since the summer of 2021, he has worked as Associate Professor at the Distributed, Embedded and Intelligent Systems research group (DEIS) at the Department of Computer Science, Aalborg University.

Zimmermann is part of the DIREC project Verifiable and Safe AI for Autonomous Systems. The aim of the project is to develop methods and tools for safety critical systems within a variety of domains. Here, he works on understanding the foundations of correct and secure systems.

Can you tell us about your research area?
Software and embedded systems are everywhere in our daily lives, from medical devices to aircraft and the airbags in our cars. These software systems are often very complex, and it is challenging to develop correct systems. Therefore, we need verification software that can check such systems for errors.

The news is full of stories of potential vulnerabilities in software and embedded systems. Some of these vulnerabilities have been there for several years and are very hard to find. They might not be seen in daily use – only when you try to exploit a system.

It is even more pronounced when you look at distributed systems made up of several components interacting with each other – like a cinema’s seat reservation website, where you click on the seat you want to book while others do the same at the same time. The system must be able to deal with many concurrent requests. Verification tries to automate the reasoning and automatically prove that the system is correct and safe.

How can we make these systems more secure?
Personally, I am interested in viewing this as a kind of game. I want to design a system that lives in an environment, so I understand this as a game between the system and the environment. The system wants to satisfy a certain property, and the environment wants to break the system. With that game view, you can obtain very strong guarantees.
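
A minimal sketch of that game view, on a toy state graph invented for illustration: compute the set of states from which the environment can force a visit to a bad state; from every remaining state, the system has a strategy that stays safe forever.

```python
# Safety game on a toy graph (all states and moves are invented).
# Environment states need only ONE successor leading to the bad region;
# system states are lost only if ALL their successors lead there.

system_moves = {"s0": ["s1", "s2"], "s2": ["s0"]}  # system chooses here
env_moves = {"s1": ["s0", "bad"]}                  # environment chooses here
bad = {"bad"}

def env_attractor(system_moves, env_moves, bad):
    """States from which the environment can force reaching a bad state."""
    attractor = set(bad)
    changed = True
    while changed:
        changed = False
        for s, succs in env_moves.items():
            if s not in attractor and any(t in attractor for t in succs):
                attractor.add(s)
                changed = True
        for s, succs in system_moves.items():
            if s not in attractor and succs and all(t in attractor for t in succs):
                attractor.add(s)
                changed = True
    return attractor

losing = env_attractor(system_moves, env_moves, bad)
print("System stays safe from:", {"s0", "s1", "s2"} - losing)
```

Here the system can avoid the bad state forever by always moving from s0 to s2 and back – exactly the kind of strong guarantee the game view delivers.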

It’s very hard to get complex systems correct. And if you have a safety-critical system you need those guarantees to be obtained by verification software. If you employ software that controls an airbag, then you want to be sure that it works correctly. It’s easy to miss errors – so you cannot rely on humans to check the code.

What is the potential of verification?
Verification is a very challenging task. It is challenging for a human to argue that a system is correct, and it is also hard for a computer, so unfortunately, it is not applicable universally. Verification is used for systems that are safety-critical, but even here there is a tradeoff between verification cost and development cost. 

One of our goals is to develop techniques that are easy to use in practice. We work on the foundations of verification languages and try to understand how far we can push their expressiveness before it becomes infeasible to verify something. It can take hours or days to verify something, so it is a computationally expensive task. We try to understand what is possible and try to find problems and application areas where you can solve this task faster.

Another important thing is that we need precise specification languages for verification. You cannot use natural language. The verification algorithm needs a precise specification with precise semantics, so we are developing different logics to see if they can be used by engineers to actually write specifications. If it is too complicated for the practitioner, e.g., the engineer, it will not be used. You must find the sweet spot between expressiveness and usability.
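
For a flavour of what such a precise specification looks like, here is a property written in the style of linear temporal logic (LTL), one of the classic formalisms in this field; the airbag example is invented:

```latex
% "At every point in time, if a crash is detected,
%  the airbag is eventually deployed."
\mathbf{G}\,(\mathit{crash\_detected} \rightarrow \mathbf{F}\,\mathit{airbag\_deployed})
```

Unlike the English sentence, the formula has a single precise meaning that a verification algorithm can check against every possible behavior of the system.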

Did you know Aalborg University before you were employed?
I have had a connection to Aalborg since my PhD where I worked on a European project with partners from all over Europe including the DEIS group in Aalborg. I was in Aalborg a few times during my PhD and knew people here. Aalborg is central in Europe when it comes to verification and design of systems. There are many collaborators and there is a good connection to the industry compared to other places. It is a very good location.

About Martin Zimmermann

  • PhD from RWTH Aachen University.
  • Postdoc at the University of Warsaw and Saarland University in Saarbrücken.
  • Lecturer at the University of Liverpool.
  • Associate Professor at Aalborg University.



Meet Tung Kieu, who has come to Denmark to detect anomalies in data

25 February 2022


Tung Kieu came to Denmark as a PhD student and today he works as Assistant Professor at the Department of Computer Science at Aalborg University. He is associated with DIREC’s workstream Advanced and Efficient Big Data Management and Analysis.

Data is found everywhere in our society today – in everything from our smartphones and GPS navigation in cars to the sensors mounted on wind turbines. By analyzing these huge amounts of data, you can detect anomalies, which can help improve our health and optimize companies’ production.

The concept is called anomaly detection and the 31-year-old Tung Kieu has plunged into this topic.

Can you tell us a bit about your background and how you ended up working with big data and anomaly detection?

I have a Master’s degree in computer science from Vietnam National University, and I have been in Denmark for about five years. I came to Denmark because I received a PhD scholarship in the research group Daisy – Center for Data-Intensive Systems at Aalborg University, which is led by Professor Christian S. Jensen. When I finished my PhD, I became a research assistant, and after a few months, I got a position as Assistant Professor.

Aalborg has a great reputation in computer science and engineering and Christian S. Jensen is furthermore recognized for his outstanding research in databases and data mining. In Vietnam, my supervisor was affiliated with Christian S. Jensen and, in this way, I got in touch with him and received the scholarship.

In what way is research in Denmark different to research where you come from?

– The environment in Denmark is very good and, furthermore, Aalborg is the happiest city in the EU, according to a study from the European Commission. We have a very good work-life balance, where we focus more on efficiency than on working hours. Aalborg University is a young yet very active university. It ranks very high compared to other universities, and our lab, the Center for Data-Intensive Systems (DAISY), ranks second among all research groups in Europe. It’s great to be part of that.

Can you tell us about your research?

I work with databases and data mining and, more specifically, the area called anomaly detection in data. Due to the extensive digitization of processes, new data are constantly created, and by being able to analyze and utilize data, we can optimize our everyday lives.

However, there are a number of challenges. We produce such large amounts of data all the time that very efficient algorithms are required to analyze for anomalies. In addition, data quality is a challenge because much sensor data is subject to noise and potentially contains incorrect values. This means that you have to clean data to achieve the required quality. But this is also what makes the area interesting.
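
As a minimal illustration of the basic idea – far simpler than the methods used in this research – the sketch below flags a spike in noisy sensor readings using a z-score. The data and threshold are invented:

```python
# Toy anomaly detection: flag readings that deviate strongly from the mean.
# Real methods also handle trends, seasonality, and noisy or missing values.
import statistics

readings = [20.1, 20.3, 19.9, 20.2, 35.7, 20.0, 20.4, 19.8]  # one spike at index 4

def zscore_anomalies(values, threshold=2.0):
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    return [(i, v) for i, v in enumerate(values) if abs(v - mean) / std > threshold]

print(zscore_anomalies(readings))  # -> [(4, 35.7)]
```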

What do you expect to get out of your research and how can your research make a difference for companies and communities?

It may be easier to understand if I give a few examples. Anomaly detection can be used in many different places. For example, supermarkets collect data about their customers, and we can analyze these data and get an overview of people’s shopping patterns. The supermarkets can use this to customize their purchases so that they do not end up with a lot of products that they cannot sell.

Another example is data collected from sensors installed on wind turbines. Here we can use the algorithms to detect anomalies and thus predict if components in a wind turbine are about to fail, which is of great benefit to the wind turbine manufacturers.

Today, smartphones are very common, and people use them to measure their health and how much exercise they get. We can use these data to analyze people’s state of health. When smartphone users record data about their heart rate, we can actually analyze whether someone is potentially at risk of a heart attack. The possibilities are endless, which makes the research area interesting.



Meet Christian Schilling, who has come to Denmark to build software that can check other software for errors

21 February 2022


Today we have cyber-physical software systems everywhere in our society, from thermostats to intelligent traffic management and water supply systems. It is therefore crucial to develop verification software that can check these programs for errors before they are put into operation.  

Christian Schilling from Germany is interested in formal verification and modeling and has come to Aalborg University to be part of the DEIS group. He is also part of the DIREC project Verifiable and Safe AI for Autonomous Systems and explains how research in cyber-physical systems makes a difference for companies and society.

Can you tell a bit about your background and why you ended up in Denmark as a computer scientist?

I did my PhD at a German university (Freiburg) and was a postdoc at an Austrian research institute (IST Austria). Now I am a tenure-track Assistant Professor at Aalborg University. The DEIS group at Aalborg University has an international reputation and is a great fit for my interests. It is productive to work with people who “speak my language.” At the same time I can develop my own independent research directions.

What are you researching and what do you expect to get out of your research?

Broadly speaking, I am interested in the algorithmic analysis of systems. More precisely, I work on cyber-physical systems, which consist of a mix of digital (cyber) and analog (physical) components. Nowadays these systems are everywhere, from thermostats to aircraft. I want to answer the fundamental question of safety: can the system end up in an error state? My analysis is based on mathematical models, and I also work on the construction of such models from observational data.

We look at models of systems and try to find behaviors of the system that might not be what you want. If you don’t find any errors, you get a mathematical proof that your model is correct. Of course, you could make mistakes with the wiring when you implement the model in a practical system – we cannot cover that. That’s why there are still more practical aspects to our work.
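
As a minimal sketch of that safety question, on a toy discrete model invented for illustration: exhaustively explore the states of a transition system and check whether an error state is reachable. Real cyber-physical models also contain continuous dynamics and require far richer techniques.

```python
# Toy reachability check: can the model reach an error state?
# The thermostat-like transition system below is invented.
from collections import deque

transitions = {
    "idle": ["heating"],
    "heating": ["idle", "overheat_check"],
    "overheat_check": ["idle", "error"],
}

def can_reach(start, target, transitions):
    """Breadth-first search over the state graph."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(can_reach("idle", "error", transitions))  # True: this model can fail
```

If the search never finds the error state, that exhaustiveness is exactly what turns the result into a proof rather than a test.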

What are the scientific challenges and perspectives in your project?

One of the grand challenges is to find approaches that scale to industrial systems, which are often large and complex. In full generality this goal cannot be achieved, so researchers focus on identifying structure in practical systems that still allows us to analyze them. The challenge is to find that structure and develop techniques that exploit it.

Another recent relevant trend is the rise of artificial intelligence and how it can be safely integrated into systems without causing problems. Think about autonomous systems like vacuum cleaners, lawn mowers, and of course self-driving cars in the near future. 

It is certainly a challenge to analyze and verify systems that involve AI, because the way AI is used these days is really more like a black box where nobody understands what happens. It is very difficult to guarantee that a self-driving car will under no circumstances kill a person.

To make this kind of analysis you need a model, and of course you could say that an engineer could build this model, but at a certain size it becomes too complex and very difficult to do. So you want an automatic technique to do that. 

Another challenge is to go from academic models to real world systems, because usually you do some simplifications which you have to take into consideration and solve when you implement the models. 

How can your research make a difference for companies and communities?

Engineers design and build systems. Typically, they first develop a model and analyze that model. My research directly addresses this phase and helps engineers learn about a system’s behavior given only a model. This means that they do not need to build a prototype to understand the system. This saves cost in the design phase, as changing a model is cheap but changing a prototype is expensive. At the level of a model you can actually have mathematical correctness guarantees – something you cannot achieve in the real world.

The DEIS group has a lot of industry collaboration, but so far I’ve been working with academic modeling. With these verification models you can make sure that intelligent traffic systems work as they should.


Meet Tijs Slaats, who just won a prize for best process mining algorithm


Tijs is Associate Professor at the Department of Computer Science at the University of Copenhagen and Head of the Business Process Modeling and Intelligence research group. In DIREC, he works on the Bridge project AI and Blockchains for Complex Business Processes.

Tijs’ research interests include declarative and hybrid process technologies, blockchain technologies, process mining, and information systems development.  

He co-invented the flagship declarative Dynamic Condition Response (DCR) Graphs process notation and was a primary driver in its early commercial adoption. In addition, he led the invention and development of the DisCoveR process miner, which was recognized as the best process discovery algorithm in 2021. 

Can you tell us briefly about your research and what value you expect to get from it?
We try to describe processes. These can be basic things that we do as human beings: it could be assembling a car at a factory, but it could also be treating patients at a hospital. If a patient is admitted to a hospital, they need help and treatment.

What these examples have in common is that you need to go through a number of steps and activities to reach your goal, and those activities are related to each other. It may be medication that needs to be taken in a certain order.

In our research, we have developed a mathematical method for describing these processes. The reason for doing this is that it gives you the tools to ensure that the process goes the way you want it to.

In the new DIREC project, we take it one step further. We have observed that many companies and organizations have large amounts of data on how they have performed their jobs. We can analyze these data to see how they actually perform their jobs, because the way many people do their jobs does not necessarily match the way they expect to do it. Maybe they take shortcuts unintentionally.

Our idea is to find and analyze these data, and on that basis derive a model.

It is important that such a model is also understandable to the users, so that they can see how they perform their work. We call this process mining, and it is a reasonably large academic area. Two years ago, I developed an algorithm and entered it in a contest that compares which algorithms most accurately describe these “logs of behaviour” – and we won.
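
As a minimal illustration of the core idea in process mining – not the DisCoveR algorithm itself – the sketch below reconstructs which activities directly follow which from an invented hospital event log:

```python
# Toy process discovery: build a directly-follows graph from an event log.
# Each trace lists the activities of one case in order; the log is invented.
from collections import Counter

event_log = [
    ["admit", "examine", "medicate", "discharge"],
    ["admit", "medicate", "examine", "discharge"],
    ["admit", "examine", "discharge"],
]

def directly_follows(log):
    """Count how often each activity is directly followed by another."""
    pairs = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

for (a, b), n in directly_follows(event_log).most_common():
    print(f"{a} -> {b}: {n} time(s)")
```

Discovery algorithms like DisCoveR go much further, producing models that users can read and compare against how they expect the work to be done.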


What results do you expect from your research?
Our cooperation with industry is particularly important. In the project, we collaborate with the company Gekkobrain (https://gekkobrain.com), which works with DevOps. They are interested in analyzing large ERP systems and in finding tools that can optimize a system and find abnormalities. These systems are quite complex, so it is important to be able to identify where things are going wrong.

Gekkobrain has a lot of data because they work with large companies that have huge amounts of log data, and these systems are so complex that it adds some extra challenges for our algorithms. To get access to such complex data is an important perspective.

How can your research make a difference to companies and society?
The biggest impact of our work and models is that you can gain insight into how you perform your work. It gives you an objective picture of what has been done.

Companies can use it to find out if there are places where work processes are performed in an inappropriate way and thus avoid the extra costs.

Can you tell us about your background and how you ended up working with this research area?
I initially got a Bachelor’s degree in Information & Communication Technology from Fontys University of Professional Education, then worked in industry, where I led the webshop development team of a Dutch e-commerce provider and acted as project leader on the implementation of our product for two major customers: Ferrari and Hewlett Packard.

I decided to move to Denmark after meeting my (Danish) wife. At the time, I was already considering pursuing further education, while my wife was fairly settled in Denmark, so it made sense for me to be the one to move.

I got my MSc and PhD degrees at the IT University of Copenhagen. There I became interested in the field of business process modeling because it allows me to combine foundational theoretical research with very concrete industrial applications. Process mining in particular provides really interesting challenges because it requires learned models to be understandable for business users, something that has only recently come into focus in the more general field of AI. 

After a short postdoc at ITU I accepted a tenure-track assistant professorship at DIKU, which was a very good opportunity because it offers a (near) permanent position for relatively junior researchers. At the time this was uncommon in Denmark.