Categories
Explore project

Algorithms education via animation videos

Project type: Explore Project

Summary

Lectures on algorithms traditionally consist of blackboard/slide talks and reading material. However, this mode need not be optimal for all students: several highly popular YouTube channels for mathematics and other scientific content (e.g., 3blue1brown, Numberphile, Veritasium), with millions of views, indicate that learners may respond very positively to professionally produced educational videos. This project aims at creating and evaluating an initial library of such videos to supplement teaching in algorithms.

Value Creation

This project primarily creates value in computer science education. It allows students to approach abstract concepts within CS through online videos, a familiar medium for digital natives. This medium gives educators and learners an alternative way of approaching material that is inherently abstract and generally considered hard to grasp. In alignment with the goals of workstream 12, this project also supports scaling the teaching of algorithms topics through digital technology. With a focus on clear and engaging communication and visual design, we also expect this alternative teaching mode to inspire students who are otherwise deterred by the classical technical image of CS. This may have beneficial effects on student diversity.

With the necessary infrastructure set up, the project serves as a basis for further research projects and BSc/MSc theses on algorithm visualization and education. This attracts more students to algorithms topics, and possibly also to research in algorithms as PhD students. Examples of previous theses supervised by the PI are shown at the bottom of this page.

With their educational value and production quality, coupled with our dissemination efforts, the videos indirectly also serve as outreach and publicity for CS education in Denmark, the DIREC project, and computer science in general. This may attract the general public and prospective students to computer science topics. Here, we also rely on the outreach expertise of project partner Thore Husfeldt. Formal publications in computer science education conferences/journals may also follow from this project.

Participants

Project Manager

Radu-Christian Curticapean

Assistant Professor

IT University of Copenhagen
Department of Computer Science

E: racu@itu.dk

Thore Husfeldt

Professor

IT University of Copenhagen
Department of Computer Science

Nutan Limaye

Associate Professor

IT University of Copenhagen
Department of Computer Science

Christian Wulff-Nilsen

Associate Professor

University of Copenhagen
Department of Computer Science

Mikkel Abrahamsen

Assistant Professor

University of Copenhagen
Department of Computer Science

Philip Bille

Professor

Technical University of Denmark
DTU Compute

Inge Li Gørtz

Professor

Technical University of Denmark
DTU Compute

Eva Rotenberg

Associate Professor

Technical University of Denmark
DTU Compute

Srikanth Srinivasan

Associate Professor

Aarhus University
Department of Computer Science

Accountability Privacy Preserving Computation via Blockchain

Project type: Explore Project

Summary

We will investigate how to combine secure multiparty computation and blockchain techniques to obtain more efficient privacy-preserving computation with accountability. Privacy-preserving computation with accountability allows computation on private data (without compromising data privacy) while producing an audit trail that allows third parties to verify that the computation succeeded or to identify bad actors who tried to cheat. Applications include data analysis (e.g., in the context of discrimination detection and benchmarking) and fraud detection (e.g., in the financial and insurance industries).

Value Creation

Using this kind of auditable continuous secure computation can help fight discrimination and catch unethical and fraudulent behaviour. Computations that advance these goals include aggregate statistics on salary information to help identify and eliminate wage gaps (e.g., as seen in the Boston wage gap study [4]), statistics on bids in an auction or bets on a gambling site to determine whether those bids or bets are fraudulent, and many others. Organizations would not be able to carry out such computations without the use of privacy-preserving technologies due to privacy regulations; thus, secure computation is necessary here. To be useful, these secure computations crucially require authenticity and consistency of the inputs. Organizations, which will not necessarily be driven by altruism, have several incentives to participate in these computations. First, by using secure computation to detect fraud, participants can guard against financial loss. Second, when participants are public organizations, honest participation (which anyone can verify) will generate positive publicity.
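
As a toy illustration of the privacy-preserving aggregation at the heart of such applications, the sketch below shows additive secret sharing, one basic building block of secure multiparty computation: each salary is split into random shares that individually reveal nothing, yet the shares can be combined to recover only the aggregate. This is a minimal sketch for intuition only; it omits the accountability layer, audit trail, and protection against malicious parties that the project targets, and all names and numbers are illustrative.

```python
import secrets

MOD = 2**61 - 1  # a large prime modulus for arithmetic secret sharing

def share(value, n_parties):
    """Split `value` into additive shares that sum to value mod MOD.
    Any proper subset of shares is uniformly random and reveals nothing."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def aggregate(all_shares):
    """Each party locally sums the shares it received; combining the
    per-party totals reveals only the aggregate, never individual inputs."""
    party_totals = [sum(col) % MOD for col in zip(*all_shares)]
    return sum(party_totals) % MOD

salaries = [52_000, 61_000, 48_500]       # private inputs
shared = [share(s, 3) for s in salaries]  # each input split across 3 parties
assert aggregate(shared) == sum(salaries) % MOD
```

A real deployment would additionally commit to the shares (e.g., on a blockchain) so that a third party can later audit that each participant's input was used consistently.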

Participants

Sophia Yakoubov

Assistant Professor

Aarhus University
Department of Computer Science

E: sophia.yakoubov@cs.au.dk

Tore Frederiksen

Senior Cryptography Engineer

The Alexandra Institute

E: tore.frederiksen@alexandra.dk

Bernardo David

Associate Professor

IT University of Copenhagen
Department of Computer Science

E: beda@itu.dk

Mads Schaarup Andersen

Senior Usable Security Expert

The Alexandra Institute

Laura Lynggaard Nielsen

Senior Anthropologist

The Alexandra Institute

Louise Barkhuus

Professor

IT University of Copenhagen
Department of Computer Science

Certifiable Controller Synthesis for Cyber-Physical Systems

Project type: Explore Project

Summary

As cyber-physical systems (CPSs) are becoming ever more ubiquitous, many of them are considered safety-critical. We want to help CPS manufacturers and regulators with establishing high levels of trust in automatically synthesized control software for safety-critical CPSs. To this end, we propose to extend the technique of formal certification towards controller synthesis: controllers are synthesized together with a safety certificate that can be verified by highly trusted theorem provers.
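
The core idea, that a possibly complex synthesis procedure emits a certificate which a small, independent checker can validate, can be sketched on a deliberately simplified, hypothetical example: a controller for a toy tank-level system, with the certificate being an inductive safe invariant. The project's actual setting (timed automata, highly trusted theorem provers) is far richer; everything below is illustrative only.

```python
# State: tank level 0..10. Each step: level += inflow (0..2, chosen by
# the environment) - outflow (0..2, chosen by the controller).
# Unsafe: level > 8.

def controller(level):
    """A (hypothetical) synthesized controller: drain harder when fuller."""
    return 2 if level >= 5 else 1 if level >= 3 else 0

# Certificate accompanying the controller: an inductive invariant,
# here simply a set of states claimed to be reachable and safe.
certificate = set(range(0, 7))  # claim: the level always stays in 0..6

def check_certificate(inv, ctrl, initial=0):
    """Independent checker: the invariant must contain the initial state,
    exclude unsafe states, and be closed under all environment moves."""
    if initial not in inv:
        return False
    for level in inv:
        if level > 8:                      # invariant must imply safety
            return False
        for inflow in (0, 1, 2):           # all environment choices
            nxt = max(0, level + inflow - ctrl(level))
            if nxt not in inv:             # invariant must be inductive
                return False
    return True

assert check_certificate(certificate, controller)
```

The point is that trust only needs to be placed in the small checker, not in the synthesis tool that produced the controller and certificate.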

Value Creation

Viewed from a distance, our project aims to increase confidence in safety-critical CPSs that interact with individuals and society at large. This is the main motivation for applying formal methods to the construction of CPSs. However, our project aims to give a unique spin to this. By cleverly combining the existing methods of controller synthesis, (timed automata) model checking, and interactive theorem proving by means of certificate extraction and checking, we aim to facilitate the construction of control software for CPSs that ticks all the boxes: high efficiency, a very high level of trust in the safety of the system, and the possibility to independently audit the software. Given that CPSs have already conquered every sector of life, with the bulk of the development still ahead of us, we believe such an approach could make an important contribution towards technology that benefits people.

Moreover, our approach aims to ease the interaction between the CPS industry and certification authorities. We believe it is an important duty of regulatory authorities to safeguard their citizens from failures of critical CPSs. Even so, regulation should not grind development to a halt. With our work, we hope to somewhat remedy this apparent conflict of interests. By providing a means to check the safety of synthesized controllers in a well-documented, reproducible, and efficient manner, we believe that the interaction between producers and certifying bodies could be sped up significantly, while increasing reliability at the same time. On top of that, controller synthesis has already been intensely studied and seems to be a rather mature technology from an academic perspective. However, it has barely set foot in industrial applications. We are confident that formal certificate extraction and checking can be an important stepping stone to help controller synthesis make this jump.

This project also contributes to the objective of DIREC to bring new academic partners together in the Danish ecosystem. The two principal investigators have their specialization backgrounds in two different fields (certification theory and control theory) and have not collaborated before. Thus, the project strengthens the collaboration between the two fields as well as between the two research groups at AU and AAU. This creates the opportunity for new scientific results benefiting both research fields.

Finally, we plan to generate tangible value for industry. There are many present-day use cases for control software of critical CPSs. During our project, we want to aid these use cases with controllers that tick all of the aforementioned “boxes”. This can be done by initiating several student projects and theses supporting theory development, tool implementation, and use case demonstration. The Problem Based Learning approach of Aalborg University facilitates this greatly. Furthermore, those students can use their experience in future positions after graduating.

Participants

Martijn Goorden

Postdoc

Aalborg University
Department of Computer Science

E: mgoorden@cs.aau.dk

Simon Wimmer

Postdoc

Aarhus University
Department of Computer Science

E: swimmer@cs.au.dk

Methodologies for scheduling and routing droplets in digital microfluidic biochips

Project type: Explore Project

Summary

The overall purpose of this project is to define, investigate, and provide preliminary methodologies for scheduling and routing microliter-sized liquid droplets on a planar surface in the context of digital microfluidics (DMF).

The main idea is to use a holistic approach in the design of scheduling and routing methodologies that takes into account real-world physical, topological, and behavioral constraints, thus producing solutions that can immediately find use in practical applications.
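
To make the routing side concrete, here is a minimal sketch (with hypothetical grid sizes and obstacles) of shortest-path droplet routing on a planar electrode grid using breadth-first search. Real DMF routing must additionally respect physical constraints such as droplet spacing and timing, which this toy example ignores.

```python
from collections import deque

def route_droplet(grid_w, grid_h, start, goal, blocked):
    """Shortest droplet route on a planar electrode grid via BFS,
    treating cells occupied by other droplets or obstacles as blocked."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no feasible route

# Route around a blocked column on a 5x5 grid.
path = route_droplet(5, 5, (0, 0), (4, 0), blocked={(2, 0), (2, 1)})
assert path[0] == (0, 0) and path[-1] == (4, 0)
```

Scheduling multiple droplets then amounts to coordinating many such routes over time without collisions, which is where the combinatorial difficulty lies.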

Value Creation

DMF biochips have been in the research spotlight for over a decade. However, the technology is still not mature enough to deliver the extensive automation needed in applied biochemistry processes or for research purposes. One of the main reasons is that, although rather simple in construction, DMF biochips lack a clear automated procedure for being programmed and used. The existing methodologies for programming DMF biochips require an advanced level of understanding of software programming and of the architecture of the biochip itself. These skills are not commonly found in potential target users of this technology, such as biologists and chemists.

A fully automated compilation pipeline able to translate biochemical protocols expressed in a high-level representation into low-level biochip control sequences would enable access to DMF technology by a larger number of researchers and professionals. The advanced scheduling and routing methodologies investigated by this project address one of the main obstacles towards broadly accessible DMF technology. This is particularly relevant for researchers and small businesses that cannot afford the large pipetting robots commonly used to automate biochemical industrial protocols. One or more DMF biochips can be programmed to execute ad-hoc repetitive and tedious laboratory tasks, thus freeing qualified working hours for more challenging laboratory work.

In addition, the scheduling and routing methodologies targeted by this project enable online decisions, such as controlling the flow of the biochemical protocols depending upon on-the-fly sensing results from the processes occurring on the biochip. This opens up a large set of possibilities in the biochemical research field. For instance, the behavior of complex biochemical protocols can be automatically adapted during execution using decisional constructs (if-then-else), allowing for real-time protocol optimizations and monitoring.

From a scientific perspective, this project would enable cross-field collaboration, develop new methodologies, and potentially re-purpose techniques that are well known in one research field to solve problems of another field. For the proposed project, interesting possibilities include adapting advanced routing and graph-related algorithms or applying well-known online algorithms techniques to manage the real-time flow control nature of the biochemical protocol. The cross-field nature of the project has the potential to provide a better understanding of how advanced scheduling and routing techniques can be applied in the context of a strongly constrained application such as DMF biochips, laying the ground for novel solutions, collaborations, and further research.

Finally, it should be mentioned that the outcome of this project, or of a future larger project based on the proposed explorative research, carries concrete business value. Currently, some players have entered the market with DMF biochips built to perform a specific biochemical functionality [12,13]. A software stack that includes compilation tools supporting programmability and enabling the same DMF biochip to perform different protocols largely expands the potential market of such technology. This is not the primary aim of this research project, but it is indeed a long-term possibility.

Participants

Project Manager

Luca Pezzarossa

Assistant Professor

Technical University of Denmark
DTU Compute

E: lpez@dtu.dk

Eva Rotenberg

Associate Professor

Technical University of Denmark
DTU Compute

Lene M. Favrholdt

Associate Professor

University of Southern Denmark
Department of Mathematics and Computer Science

Automated Verification of Sensitivity Properties for Probabilistic Programs

Project type: Explore Project

Sensitivity measures how much program outputs vary when changing inputs. We propose exploring novel methodologies for specifying and verifying sensitivity properties of probabilistic programs such that they (a) are comprehensible to everyday programmers, (b) can be verified using automated theorem provers, and (c) cover properties from the machine learning and security literature.

This work will bring together two junior researchers who recently arrived in Denmark and obtained their PhDs working on probabilistic verification.

Project description

Our overall objective is to explore how automated verification of sensitivity properties of probabilistic programs can support developers in increasing the trust in their software through formal assurances.

Probabilistic programs are programs with the ability to sample from probability distributions. Examples include randomized algorithms, where sampling is exploited to ensure that expensive executions have a low probability, cryptographic protocols, where randomness is essential for encoding secrets, and statistics, where programs are becoming a popular alternative to graphical models for describing complex distributions.

The sensitivity of a program determines how its outputs are affected by changes to its input; programs with low sensitivity are robust against fluctuations in their input – a key property for improving trust in software. Minor input changes should, for example, not affect the result of a classifier learned from training data. In the probabilistic setting, the output of a program depends not only on the input but also on the source of randomness. Hence, the notion of sensitivity – as well as techniques for reasoning about it – needs refinement.
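
The role of shared randomness can be illustrated with a toy probabilistic program: running it on two neighbouring inputs with the *same* source of randomness (the identity coupling) makes the output distance deterministic, witnessing a sensitivity bound. This is an informal illustration of the coupling idea, not the formal calculus discussed below.

```python
import random

def noisy_sum(xs, rng):
    """A toy probabilistic program: sums inputs, each perturbed
    by a random coin flip drawn from rng."""
    return sum(x + rng.choice([0, 1]) for x in xs)

def coupled_distance(xs, ys, seed):
    """Run the program on two inputs with identical randomness, so the
    sampled coins act as synchronization points between the two runs."""
    out_x = noisy_sum(xs, random.Random(seed))
    out_y = noisy_sum(ys, random.Random(seed))
    return abs(out_x - out_y)

xs = [3, 1, 4]
ys = [3, 2, 4]  # neighbouring input: one entry changed by 1
# Under the identity coupling, the output distance equals the input
# distance on every seed, witnessing sensitivity 1 for this program.
assert all(coupled_distance(xs, ys, seed) == 1 for seed in range(100))
```

Without the coupling, comparing two independent runs would mix the effect of the input change with random noise, and no such crisp bound would be visible.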

Automated verification takes a deductive approach to proving that a program satisfies its specification: users annotate their programs with logical assertions; a verifier then generates verification conditions (VCs) whose validity implies that the program’s specification holds. Deductive verifiers are more complete and more scalable than fully automatic techniques but require significant user interaction. The main challenge for users of automated verifiers lies in finding suitable intermediate assertions, particularly loop invariants, such that an automated theorem prover can discharge the generated VCs. A significant challenge for developers of automated verifiers is to keep the amount and complexity of necessary annotations as low as possible.

Previous work [1] co-authored by the applicants provides a theoretical framework for reasoning about the sensitivity of probabilistic programs: the paper presents a calculus to carry out “pen-and-paper” proofs of sensitivity in a principled and syntax-directed manner. The proposed technique deals with sampling instructions by requiring users to identify suitable probabilistic couplings, which act as synchronization points, on top of finding loop invariants. However, the technique is limited in the sense that it does not provide tight sensitivity bounds when changes to the input cause a program to take a different branch of a conditional.

Our project has four main goals. First, we will develop methodologies that do not suffer from the limitations of [1]. We believe that conditional branching can be treated by carefully tracking the possible divergence. Second, we will develop an automated verification tool for proving sensitivity properties of probabilistic programs. The tool will generate VCs based on the calculus from [1], which will be discharged using an SMT solver. In designing the specification language, we aim to achieve a balance so that (a) users can conveniently specify synchronization points for random samples (via so-called probabilistic couplings) and (b) existing solvers can prove the resulting VCs. Third, we aim to aid the verification process by assisting users in finding synchronization points. Invariant synthesis has been extensively studied in the case of deterministic programs. Similarly, coupling synthesis has been recently studied for the verification of probabilistic programs [2]. We believe these techniques can be adapted to the study of sensitivity. Finally, we will validate the overall verification system by applying it to case studies from machine learning, statistics, and randomized algorithms.

Participants

Alejandro Aguirre

Postdoc

Aarhus University
Department of Computer Science

Christoph Matheja

Assistant Professor

Technical University of Denmark
DTU Compute

Understanding Biases and Diversity of Big Data used for Mobility Analysis

Project type: Explore Project

Summary

Our capabilities to collect, store, and analyze vast amounts of data have greatly increased in the last two decades, and today big data plays a critical role in a large majority of statistical algorithms. Unfortunately, our understanding of biases in data has not kept up. While there has been a lot of progress in developing new models to analyze data, there has been much less focus on understanding the fundamental shortcomings of big data.

This project will quantify the biases and uncertainties associated with human mobility data collected through digital means, such as smartphone GPS traces, cell phone data, and social media data.

Ultimately, we want to ask the question: is it possible to fix big mobility data through a fundamental understanding of how biases manifest themselves?

Value Creation

We expect this project to have a long-lasting scientific and societal impact. The scientific impact of this work will allow us to explicitly model bias in algorithmic systems relying on human mobility data and provide insights into which populations are left out. For example, it will allow us to correct for gender, wealth, age, and other types of biases in data globally used for epidemic modeling, urban planning, and many other use cases. Further, having methods to debias data will allow us to understand what negative impacts results derived from biased data might have. Given the universal nature of bias, we expect our debiasing frameworks will also pave the way for quantitative studies of bias in other realms of data science.
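
As a small, purely illustrative sketch of one standard debiasing technique, the following post-stratification example reweights a smartphone-skewed sample so that group proportions match known census shares. All group names and numbers are made up for illustration.

```python
# Post-stratification: reweight a biased sample so that group
# proportions match known census shares (numbers are illustrative).
census_share = {"age_18_39": 0.35, "age_40_64": 0.40, "age_65_plus": 0.25}
sample_counts = {"age_18_39": 700, "age_40_64": 250, "age_65_plus": 50}
# Hypothetical trips/day observed per group in the skewed sample:
observed_mean = {"age_18_39": 3.4, "age_40_64": 2.9, "age_65_plus": 1.8}

n = sum(sample_counts.values())
# Weight = population share / sample share, per group.
weights = {g: census_share[g] / (sample_counts[g] / n) for g in census_share}

naive = sum(sample_counts[g] * observed_mean[g] for g in census_share) / n
debiased = sum(sample_counts[g] * weights[g] * observed_mean[g]
               for g in census_share) / n

# The raw sample over-represents younger, more mobile users, so the
# naive estimate overstates average mobility.
assert naive > debiased
```

Quantifying *how far* such corrected estimates move, and for which populations the correction fails, is exactly the kind of bias auditing this project proposes.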

The societal impact will be actionable recommendations provided to policy makers regarding: 1) guidelines for how to safely use mobility datasets in data-driven decision processes, 2) tools (including statistical and interactive visualizations) for quantifying the effects of bias in data, and 3) directions for building fairer and more equitable algorithms that rely on mobility data.

It is important to address these issues now, because in its “Proposal for a Regulation on a European approach for Artificial Intelligence” from April 2021, the European Commission (European Union) outlines potential future regulations for addressing the opacity, complexity, bias, and unpredictability of algorithmic systems. This document states that high-quality data is essential for algorithmic performance and suggests that any dataset should be subject to appropriate data governance and management practices, including examination for possible biases. This implies that, in the future, businesses and governmental agencies will need to have data-audit methods in place. Our project addresses this gap and provides value by developing methodologies to audit mobility data for different types of biases, producing tools from which Danish society and Danish businesses will benefit.

Participants

Project Manager

Vedran Sekara

Assistant Professor

IT University of Copenhagen
Department of Computer Science

E: vsek@itu.dk

Laura Alessandretti

Associate Professor

Technical University of Denmark
DTU Compute

Manuel Garcia-Herranz

Chief Scientist

UNICEF
New York

Elisa Omodei

Assistant Professor

Central European University

Ergonomic & Practical Effect Systems

Project type: Explore Project

Summary

Effect systems are currently a hot research topic in type theory. Yet many effect systems, whilst powerful, are very complicated to use, particularly for programmers who are not experts in type theory. Effect systems with inference can provide useful guarantees to programming languages while being simple enough to be used in practice by everyday programmers.

Building on the Boolean unification-based polymorphic effect system in the Flix programming language, we want to pursue two practical short-term objectives: (a) improve the quality of effect error messages, and (b) develop techniques to improve the performance of Boolean unification and effect inference, thus laying the foundation for a more ambitious objective: the Futhark programming language supports a form of referentially transparent in-place updates, controlled by a system of uniqueness types inspired by Clean, but this system is too limited in the presence of polymorphic higher-order functions. Recasting the type system in terms of effects, based on the one in Flix, might provide a more intuitive system.
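
To give a flavour of the Boolean-algebra view of effects (illustrative only; this is not Flix's implementation), the sketch below encodes effect formulas as Boolean functions and decides equivalence by naive enumeration of assignments. The exponential cost of such naive approaches is precisely why efficient Boolean unification and inference matter in practice.

```python
from itertools import product

# Effect formulas as Boolean functions of an assignment dict
# (a toy encoding of the Boolean-algebra view of effect sets).
def union(f, g):     return lambda a: f(a) or g(a)
def intersect(f, g): return lambda a: f(a) and g(a)
def comp(f):         return lambda a: not f(a)
def var(name):       return lambda a: a[name]

def equivalent(f, g, names):
    """Naive equivalence check: enumerate all 2^n variable assignments.
    Boolean unification (e.g., successive variable elimination) solves
    the harder problem of finding substitutions, and doing so efficiently
    is the performance question this objective targets."""
    return all(f(dict(zip(names, bits))) == g(dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))

e1, e2 = var("IO"), var("NonDet")
# Absorption law: e1 ∪ (e1 ∩ e2) ≡ e1
assert equivalent(union(e1, intersect(e1, e2)), e1, ["IO", "NonDet"])
# De Morgan: ¬(e1 ∪ e2) ≡ ¬e1 ∩ ¬e2
assert equivalent(comp(union(e1, e2)),
                  intersect(comp(e1), comp(e2)), ["IO", "NonDet"])
```

In an effect system, such equivalences underlie checking whether a function's inferred effect matches its declared signature.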

A unique aspect of this project is that it brings together two programming language researchers, one from Aarhus and one from Copenhagen, who are both working on full-blown programming language implementations.

Value Creation

We address value creation following the three outlined categories:

Scientific Value: We see two clear publishable scientific contributions: (a) new techniques to improve the performance of Boolean unification and (b) new applications of type and effect systems based on Boolean unification.

Capacity Building: Flix and Futhark are the two major academic efforts towards building new programming languages in Denmark. Bringing the two research groups together will facilitate knowledge sharing and technology transfer, enabling both projects to thrive and grow even further. This unique opportunity exists because both languages are based on similar technology and because they do not compete in the same space. Success for one is not at the expense of the other, and they can rise together.

Business and Societal Value: A significant amount of research effort has been expended on designing effect systems. Despite widespread belief that such systems can lead to safer programs, few have been implemented in real-world programming languages. By focusing on improving the ergonomics, we want to make these technologies more accessible. Being the designers of Flix and Futhark, we are in a great position to conduct such work. We can show the way for other mainstream programming languages by having real, full-blown implementations.

After decades of relative stagnation, programming languages are now rapidly absorbing features previously only seen in obscure or academic programming languages. Java and C# are prominent examples of originally very orthodox object-oriented languages that have been augmented with concepts from functional programming. We believe that effect systems and other advanced type system features are a logical next step, but before they can be added to mainstream languages, it must be shown that they can be designed and implemented in a form that is palatable to industrial users. Thus, while Flix and Futhark may or may not be the languages of the future, we believe that our research can help shape the direction of future programming languages by providing solid formal foundations and real-world implementations that others can build on directly or indirectly.

Participants

Project Manager

Magnus Madsen

Associate Professor

Aarhus University
Department of Computer Science

E: magnusm@cs.au.dk

Troels Henriksen

Assistant Professor

University of Copenhagen
Department of Computer Science

Hardware/software Trade-off for the Reduction of Energy Consumption

Project type: Explore Project

Summary

Computing devices consume a considerable amount of energy. In data centers, this has an impact on climate change; in small embedded systems, i.e., battery-powered devices, energy consumption influences battery life. Implementing an algorithm in hardware (in a chip) is more energy efficient than executing it in software on a processor. Until recently, processor performance and energy efficiency have been good enough to just use software on a standard processor or on a graphics processing unit. However, this performance increase has come to an end, and energy-efficient computing systems need domain-specific hardware accelerators.

However, the cost of producing a chip is very high. Between fixed hardware and software lies the technology of field-programmable gate arrays (FPGAs). FPGAs are programmable hardware: the algorithm can be changed at runtime. FPGAs are less energy efficient than chips, but we expect that for some algorithms an FPGA will be more energy efficient than a software implementation. The research question is whether and how it is possible to reduce the energy consumption of IT systems by moving algorithms from software into hardware (FPGAs). We will do this by investigating classic sorting and path-finding algorithms and comparing their energy efficiency and, in addition, their performance. Such results are essential to data centers as well as embedded systems.

However, the hardware design of these accelerators is often complex, and their development is time-consuming and error-prone. Therefore, we need a tool and methodology that enable software engineers to design efficient hardware implementations of their algorithms. We will explore a modern hardware construction language, Chisel. Chisel is a Scala-embedded hardware construction language that allows hardware to be described in a more software-like, high-level language. Chisel is the enabling technology to simplify the translation of a program from software into hardware. This project will furthermore investigate how efficiently the functional and object-oriented hardware description language Chisel can express algorithms for execution in FPGAs.
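
As a flavour of the kind of sorting algorithm that maps well onto an FPGA, the sketch below (in Python rather than Chisel, purely for illustration) shows odd-even transposition sort: its compare-and-swap pattern is data-independent, so all comparators within a phase can run in parallel as a hardware pipeline.

```python
# Odd-even transposition sort: a data-independent compare-and-swap
# network. Because the comparison pattern does not depend on the data,
# each phase's comparators are independent and can be laid out as
# parallel hardware stages (sketched here in software).
def odd_even_transposition_sort(xs):
    xs = list(xs)
    n = len(xs)
    for phase in range(n):               # n phases suffice for n elements
        start = phase % 2                # alternate odd/even pair offsets
        for i in range(start, n - 1, 2): # comparators within a phase are
            if xs[i] > xs[i + 1]:        # independent -> parallel in HW
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

assert odd_even_transposition_sort([5, 3, 8, 1, 9, 2]) == [1, 2, 3, 5, 8, 9]
```

In software this does more comparisons than quicksort, but in hardware the fixed, parallel structure is exactly what makes it cheap and fast, one example of the hardware/software trade-off the project studies.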

Programs running on a general-purpose computer consume a considerable amount of energy. Some programs can be translated into hardware and executed on an FPGA. This project will explore the trade-offs between executing a program in hardware and executing it in software relative to energy consumption.

Value Creation

Scientific Value
The FPGA and software implementations of path-finding algorithms have recently been evaluated through the lens of performance, e.g., [?], whereas sorting algorithms have also been evaluated on energy consumption, e.g., [2]. Here, FPGAs performed better than CPUs in many cases, with similar or reduced energy consumption. The languages used for implementation were Verilog and C, the latter translated to Verilog using Vivado HLS. In this project, we will implement the algorithms in hardware using Chisel and evaluate their performance and energy consumption. DTU and RUC will advance the research in the design and testing of digital systems for energy saving. Our proposed approach provides a general software engineering procedure that we plan to validate with standard algorithms used in cloud applications. This research will drive the adoption of modern tools and agile methods in the hardware design curriculum.

Capacity Building
The project establishes a new collaboration between two Danish universities and is a first step towards building a more energy-aware profile for FlexLab, the Computer Science laboratory at RUC. In return, FlexLab makes FPGAs available to the research assistants at RUC. Thus, this project will improve the visibility of energy-aware design of IT systems nationally and internationally. The cooperation between researchers at DTU and RUC will allow Denmark to take the lead in digital research and development for reduced energy consumption. The upcoming research positions at RUC will contribute to building RUC’s research capacity, and the project will also recruit new junior researchers, both directly and in subsequent projects.

Business Value
The changes in the hardware industry indicate that the use of FPGAs will increase: a few years ago, Intel bought Altera, one of the two largest FPGA manufacturers, to include FPGAs in future versions of its processors. Similarly, AMD is aiming to buy Xilinx, the other big FPGA vendor. In addition, one can already rent a server in the cloud from Amazon that includes an FPGA. These changes all point towards FPGAs entering mainstream computing. Many mainstream programming languages like C# or Java already include functional features such as lambda expressions or higher-order functions. The more common languages for programming FPGAs are Verilog, a C-inspired language, and VHDL, a Pascal-inspired language. Therefore, it may be attractive for mainstream software developers to use a functional language to efficiently implement algorithms in FPGAs, and thus both increase performance and reduce energy consumption.

Societal Value
Currently, ICT consumes approximately 10% of global electricity, and this is estimated to increase to 20% by 2030. Thus, reducing the energy consumption of ICT is critical. If successful, this project has the potential to reduce energy consumption by moving essential software programs onto FPGAs.

Participants

Project Manager

Maja Hanne Kirkeby

Assistant Professor

Roskilde University
Department of People and Technology

E: majaht@ruc.dk

Martin Schoeberl

Associate Professor

Technical University of Denmark
DTU Compute

Mads Rosendahl

Associate Professor

Roskilde University
Department of People and Technology

Thomas Krabben

FlexLab Manager

Roskilde University
Department of People and Technology


Meet Tijs Slaats, who has just won a prize for the best algorithm for process mining

Tijs is an Associate Professor at the Department of Computer Science, University of Copenhagen, and head of the research group for Process Modelling and Intelligence. In DIREC, he works on the Bridge project AI and Blockchains for Complex Business Processes.

Tijs' research interests include declarative and hybrid process models, blockchain technologies, process mining, and information systems development.

Can you tell us a bit about what you are researching and what you expect to get out of your research?
We try to describe processes. These can be basic things that we do as humans. It could be the assembly of a car in a factory, but it could also be the treatment of patients at a hospital. If a patient is admitted to a hospital, they need help and treatment.

What these have in common is that you go through a number of steps and activities that take you to your goal, and those activities are related to each other. It could be medication that has to be taken in a particular order.

In our research, we have arrived at a mathematical method for describing such processes. The reason we do this is that it gives you tools to ensure that the process takes place the way you want it to.

In the new project for DIREC, we take this a step further. We have observed that many companies and organisations hold large amounts of data about how they have carried out their work. We can look at that data and analyse how they actually perform their work, because the way many people carry out their work does not necessarily match the way they expect to do it. Perhaps they unconsciously take shortcuts.

Our idea is to find this data, analyse it, and derive a model from it.

Here it is important that this model is also understandable to the users, so that they can see how they carry out their work. We call this process mining, and it is a fairly large academic field. Two years ago, I developed an algorithm that took part in a competition comparing which algorithm most accurately describes such logs of behaviour, and we won.
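The prize-winning algorithm itself is not described in the interview, but the basic ingredient of most process-mining techniques can be sketched: counting a directly-follows relation from an event log, from which a process model is then derived. A minimal illustration (with a hypothetical toy log, not the project's data):

```python
from collections import Counter

def directly_follows(log):
    """Count how often activity a is immediately followed by
    activity b across all traces in the event log."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

# Toy event log: each trace is one patient's sequence of activities.
log = [
    ["admit", "examine", "medicate", "discharge"],
    ["admit", "examine", "operate", "medicate", "discharge"],
    ["admit", "examine", "medicate", "discharge"],
]

df = directly_follows(log)
assert df[("admit", "examine")] == 3   # every trace starts this way
assert df[("examine", "operate")] == 1  # only one patient was operated on
```

From such counts, a discovery algorithm can reconstruct the ordering constraints between activities and present them as an understandable, visual model.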

Read more here

What do you expect to get out of the research?
Our collaboration with companies is especially important. In the project, we collaborate with the company Gekkobrain, which works with DevOps. They are interested in analysing large ERP systems and in finding tools that can optimise a system and detect anomalies. These systems are quite complex, so it is important to be able to identify where things go wrong.

Gekkobrain holds large amounts of data because they collaborate with large companies that have very large amounts of log data, and these systems are so complex that they pose extra challenges for our algorithms. They are somewhat more complex than what we would otherwise train them on.

Getting access to such complex data is an important perspective.

How can your research make a difference for companies and society?
The biggest impact of our work and models is that you can gain insight into how you carry out your work. It gives you an objective picture of what has been done.

Companies can use it to find out whether there are places where they carry out their work in an inappropriate way, and in that way avoid the extra costs.

They can also use it to find places where, as a company, they lack an overview of how things are done; in that way, it provides an understandable and visual overview of how they normally carry out their work.

Can you tell us a bit about your background and how you ended up working in this research area?
I originally have a bachelor's degree in information and communication technology from Fontys University of Professional Education. I then worked in industry, where I led the webshop development team at a Dutch e-commerce provider and acted as project manager on the implementation of our product for two large customers, Ferrari and Hewlett Packard.

I decided to move to Denmark after meeting my (Danish) wife, at a time when I was already considering further education while she was permanently settled in Denmark. So it made good sense that I was the one to move.

I received my MSc (cand.scient.) and PhD degrees from the IT University of Copenhagen. There, I became interested in the field of business process modelling, because it allows me to combine fundamental theoretical research with very concrete industrial applications. Process mining in particular offers really interesting challenges, because it is about making traditional models understandable to companies, something that has only recently come into focus in the more general field of AI.

After a short period as a postdoc at ITU, I accepted a tenure-track assistant professorship at DIKU, which was a really good opportunity because it means an (almost) permanent position for relatively young researchers. At the time, this was unusual in Denmark.


"More and better education opportunities throughout Denmark" must not limit the number of IT specialists

We are sold out; we have no more digital specialists on the shelves, and we are not allowed to increase the number of students in IT degree programmes in the university cities. The universities must freeze admissions at the 2019 level.

By Thomas Riisgaard Hansen, CEO of DIREC

That is the consequence of the universities' current plans, which are based on the framework laid out in the agreement "More and better education opportunities throughout Denmark".

In other words, the universities will have to turn away young people who want to study IT in, for example, the capital, and offer them a study place in another city, far away from the jobs and the companies that would like to offer them student jobs and employ them once they graduate.

The shortage of IT specialists affects all of society
It is a huge problem for Denmark and our competitiveness that not enough IT specialists are being educated. Who will secure Mærsk's and Vestas' IT systems and ensure that they and other Danish companies are not hit by hacker attacks? Who will develop the next Danish unicorns?

In 2021, 11 Danish IT startups and growth companies (1 located in Central Jutland and 10 in the Capital Region) attracted over 200 million, and in total 7 billion was invested in these companies. But if they are to stay in Denmark, they must be able to recruit skilled IT employees. Likewise, IT specialists are needed to help all the companies undergoing a digital transformation, and in public-sector digitalisation. And the need for IT specialists will only grow as advanced AI algorithms and the handling of large amounts of data become a basic condition for being competitive.

Digital Research Centre Denmark (DIREC) was launched to help address the problem, with a goal of increasing the number of graduates in the digital field by 35% by 2025. And although a 35% increase is far from solving the problem, it will ensure a larger supply.

The challenge is greatest in the big cities
At present, the shortage of IT specialists is clearly greatest in the big cities. A search for all IT jobs on 10 January 2022 showed 1,930 vacant jobs across the country. Of those jobs, 78.8% are located in the four large university cities.

52.2% Greater Copenhagen (1,023 IT jobs)
16.0% Aarhus (327 IT jobs)
5.2% Aalborg (107 IT jobs)
4.6% Odense (93 IT jobs)

In all of the university cities, the supply of IT jobs is significantly larger than the number of IT specialists who graduate from the cities' IT degree programmes each year.

A high number of IT specialists in the big cities is necessary
DIREC has a strong desire to help build good IT workplaces throughout the country and to ensure that qualified labour is broadly distributed across the country, and we want to work actively for that agenda. But we believe it has to be a both-and mission: in parallel with educating everyone who wants an IT degree in the big cities, we must actively work on models that also make it possible to study IT without having to move to one of the big cities.

Limiting the number of digital specialists in the big cities is far too risky for Denmark's future and growth, and it should not be a consequence of pursuing an agenda of more and better education opportunities throughout Denmark.