Categories
Educational project

Initiatives to improve recruitment and retention of IT students

Project type: Explore Project


Denmark needs more IT specialists. But how do we get more young people to study computer science and become IT specialists? This project, consisting of two subprojects, focuses on initiatives that can improve both recruitment and retention of a larger but also more diverse group of young people, e.g., female students and students without prior programming experience.

Diversity or Not: Heterogeneous vs Homogeneous Student Groups
Summary

The first subproject Diversity or Not: Heterogeneous vs Homogeneous Student Groups? will study the effect of diversity on the formation of CS student groups. The intent is to uncover evidence to issue recommendations on how to best form project groups. We expect this knowledge to be beneficial for the recruitment and retention of students as well as for the diversity of students.

Value Creation

We expect the outcomes of this project to create significant value, primarily for the Danish universities but also for the Danish tech industry. The project intends to derive research-based recommendations on how to best form (student) project groups. Group work is widespread in Computer Science education throughout Denmark as a means of fostering communication and collaboration skills around a shared problem, so it is important to determine what works best. This will strengthen CS education in Denmark.

Studying the impact of diversity on project groups will also be important as a proxy for professional groups in a work context beyond university (with the obvious external threats to the validity of this generalization). We expect this knowledge to be beneficial for the recruitment and retention of students as well as for the diversity of the student body (e.g., female students and students without prior programming experience). Aside from the experiments themselves and their findings, we intend to create and publish (and seek independent ethical approval of) generic experimental protocols for how to ethically and responsibly conduct such group diversity-performance experiments, including how to quantify group diversity and group performance. We imagine these generic protocols would be relevant for other studies and for companies seeking to specialize them into their own more specific instances of the experiments. This also includes ethical considerations surrounding similar student experiments and how to make them ethically safe(r).

D-Pop – A Danish Annual Programming and Problem Solving Event

Summary

The second subproject D-Pop – A Danish Annual Programming and Problem Solving Event will plan, organize, and implement physical D-Pop events at Danish CS departments aimed at young people who are beginning programmers at all levels. Participants improve their programming skills and gain a fresh perspective on programming and problem solving, because the focus is on collaboration, creativity, and curiosity.

We expect the events to have a positive effect on the recruitment and retention of students as well as on the diversity of the student body.

Value Creation

The expected results of D-Pop are: 

1. Dramatically increased programming skills among participants. This is the expected outcome of participation itself, akin to training in any other skill, and includes improved programming language mastery, problem solving skills, resilience, collaboration skills, debugging, and computational problem solving (in particular, algorithmic thinking). This competence boost is independent of the rung of the competence ladder on which the participant starts. The difficulty of recruiting technically competent IT professionals in Denmark is well documented.

2. Increased exposure and recruitment. D-Pop complements the existing palette of outreach and recruitment activities currently used by Danish CS departments. Compared with similar events, D-Pop content is designed with a focus on immediate, satisfying, and positive feedback to beginning programmers, but in a way that is honest and values competence, agency, and collaboration. Scalability is built into D-Pop’s infrastructure (both technical and social) from the start.

3. Establishment of a national network of problem setters. The value of this extends beyond D-Pop and immediately includes teaching material for high schools and universities. For example, the Danish High School Informatics Olympiad (Dansk datalogidyst, of which Thore is a founding steering committee member) is in many respects the opposite of D-Pop: it is individual, highly competitive, and participation is restricted. However, the network of people needed to “make DDD work” is identical to the one D-Pop requires. Denmark is far behind its Nordic neighbours in this regard, not to mention countries where these activities are multi-million dollar industries.

Participants

Project Manager - project 1

Claus Brabrand

Associate Professor

IT University of Copenhagen
Department of Computer Science

E: brabrand@itu.dk

Project Manager - project 2

Thore Husfeldt

Professor

IT University of Copenhagen
Department of Computer Science

E: thore@itu.dk

Louise Barkhuus

Professor

IT University of Copenhagen
Department of Computer Science

Kim Normann Andersen

Professor

Copenhagen Business School
Department of Digitalization

Jacob Nørbjerg

Associate Professor

Copenhagen Business School
Department of Digitalization

Samuel Alberg Thrysøe

Associate Professor

Aarhus University
Department of Computer Science


Algorithms education via animation videos

Project type: Explore Project


Summary

Lectures on algorithms traditionally consist of blackboard/slide talks and reading material. However, this mode need not be optimal for all students: several highly popular YouTube channels for mathematics and other scientific content (e.g., 3blue1brown, Numberphile, Veritasium), with millions of views, indicate that learners may respond very positively to professionally produced educational videos. This project aims at creating and evaluating an initial library of such videos to supplement teaching in algorithms.

Value Creation

This project primarily creates value in computer science education. It allows students to approach abstract concepts within CS through online videos, a familiar medium for digital natives. This medium gives educators and learners an alternative way of approaching material that is inherently abstract and generally considered hard to grasp. In alignment with the goals of workstream 12, this project also supports scaling the teaching of algorithms topics through digital technology. With a focus on clear and engaging communication and visual design, we also expect this alternative teaching mode to appeal to students who are otherwise deterred by the classical technical image of CS. This may have beneficial effects on student diversity.

With the necessary infrastructure set up, the project serves as a basis for further research projects and BSc/MSc theses on algorithm visualization and education. This attracts more students to algorithms topics, and possibly also to research in algorithms as PhD students. Examples of previous theses supervised by the PI are shown at the bottom of this page.

With their educational value and production quality, coupled with our dissemination efforts, the videos also serve indirectly as outreach and publicity for CS education in Denmark, the DIREC project, and computer science in general. This may attract the general public and prospective students to computer science topics. Here, we also rely on the outreach expertise of project partner Thore Husfeldt. Formal publications in computer science education conferences/journals may also follow from this project.

Participants

Project Manager

Radu-Christian Curticapean

Assistant Professor

IT University of Copenhagen
Department of Computer Science

E: racu@itu.dk

Thore Husfeldt

Professor

IT University of Copenhagen
Department of Computer Science

Nutan Limaye

Associate Professor

IT University of Copenhagen
Department of Computer Science

Christian Wulff-Nilsen

Associate Professor

University of Copenhagen
Department of Computer Science

Mikkel Abrahamsen

Assistant Professor

University of Copenhagen
Department of Computer Science

Philip Bille

Professor

Technical University of Denmark
DTU Compute

Inge Li Gørtz

Professor

Technical University of Denmark
DTU Compute

Eva Rotenberg

Associate Professor

Technical University of Denmark
DTU Compute

Srikanth Srinivasan

Associate Professor

Aarhus University
Department of Computer Science


Accountability Privacy Preserving Computation via Blockchain

Project type: Explore Project


Summary

We will investigate how to combine secure multiparty computation and blockchain techniques to obtain more efficient privacy-preserving computation with accountability. Privacy-preserving computation with accountability allows computation on private data (without compromising data privacy) while producing an audit trail that allows third parties to verify that the computation succeeded or to identify bad actors who tried to cheat. Applications include data analysis (e.g., in the context of discrimination detection and benchmarking) and fraud detection (e.g., in the financial and insurance industries).

Value Creation

Using this kind of auditable continuous secure computation can help fight discrimination and catch unethical and fraudulent behaviour. Computations that advance these goals include aggregate statistics on salary information to help identify and eliminate wage gaps (e.g., as seen in the Boston wage gap study [4]), statistics on bids in an auction or bets on a gambling site to determine whether those bids or bets are fraudulent, and many others. Organizations would not be able to carry out such computations without the use of privacy-preserving technologies due to privacy regulations; so, secure computation is necessary here. To be useful, these secure computations crucially require authenticity and consistency of the inputs. Organizations, which will not necessarily be driven by altruism, have several incentives to participate in these computations. First, by using secure computation to detect fraud, participants can guard against financial loss. Second, when participants are public organizations, honest participation (which anyone can verify) will generate positive publicity.
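To make the idea concrete, here is a toy sketch (not the project's actual protocol) of how additive secret sharing, one of the basic building blocks of secure multiparty computation, lets three servers compute an aggregate salary sum without any single party seeing an individual value. All names and numbers are illustrative:

```python
import random

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(value, n_parties):
    """Split `value` into n additive shares that sum to value mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each organization secret-shares its salary figure among three servers.
salaries = [52_000, 61_000, 47_000]
n = 3
# server_inputs[s] collects the s-th share from every organization
server_inputs = [[] for _ in range(n)]
for v in salaries:
    for s, sh in enumerate(share(v, n)):
        server_inputs[s].append(sh)

# Each server adds the shares it holds; no server sees any salary.
partial_sums = [sum(col) % P for col in server_inputs]

# Recombining the partial sums reveals only the aggregate.
total = sum(partial_sums) % P
print(total)  # 160000
```

A real deployment would add the accountability layer described above, e.g., commitments published so that third parties can audit that the computation was carried out correctly.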

Participants

Sophia Yakoubov

Assistant Professor

Aarhus University
Department of Computer Science

E: sophia.yakoubov@cs.au.dk

Tore Frederiksen

Senior Cryptography Engineer

The Alexandra Institute

E: tore.frederiksen@alexandra.dk

Bernardo David

Associate Professor

IT University of Copenhagen
Department of Computer Science

E: beda@itu.dk

Mads Schaarup Andersen

Senior Usable Security Expert

The Alexandra Institute

Laura Lynggaard Nielsen

Senior Anthropologist

The Alexandra Institute

Louise Barkhuus

Professor

IT University of Copenhagen
Department of Computer Science


Certifiable Controller Synthesis for Cyber-Physical Systems

Project type: Explore Project


Summary

As cyber-physical systems (CPSs) become ever more ubiquitous, many of them are considered safety-critical. We want to help CPS manufacturers and regulators establish high levels of trust in automatically synthesized control software for safety-critical CPSs. To this end, we propose to extend the technique of formal certification to controller synthesis: controllers are synthesized together with a safety certificate that can be verified by highly trusted theorem provers.
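As a toy illustration of the certification idea (not the project's actual technique, which targets timed automata and interactive theorem provers), the sketch below checks a claimed inductive invariant for a synthesized controller over a small finite-state plant; the plant, controller, and certificate are all hypothetical:

```python
# Toy finite-state plant: temperature levels 0..10, safe band 2..8.
# Dynamics: heater on -> +1 level, heater off -> -1 (clamped to 0..10).
STATES = range(11)
SAFE = set(range(2, 9))

def step(x, heat_on):
    return min(10, x + 1) if heat_on else max(0, x - 1)

# A (hypothetical) synthesized bang-bang controller.
def controller(x):
    return x <= 4  # heat when temperature is low

# The certificate: a claimed inductive invariant, produced alongside
# the controller by the synthesis tool.
CERTIFICATE = set(range(3, 8))

def check_certificate(inv):
    """Independent checker: the invariant must lie inside the safe
    set and be closed under one step of the controlled dynamics."""
    inside_safe = inv <= SAFE
    inductive = all(step(x, controller(x)) in inv for x in inv)
    return inside_safe and inductive

print(check_certificate(CERTIFICATE))  # True
```

The point is that the checker is tiny and independent of the synthesis tool: trust rests on the checker (in the project, a theorem prover), not on the complex synthesizer.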

Value Creation

Viewed from a distance, our project aims to increase confidence in safety-critical CPSs that interact with individuals and society at large. This is the main motivation for applying formal methods to the construction of CPSs. However, our project aims to give a unique spin to this. By cleverly combining the existing methods of controller synthesis, (timed automata) model checking, and interactive theorem proving via certificate extraction and checking, we aim to facilitate the construction of control software for CPSs that ticks all the boxes: high efficiency, a very high level of trust in the safety of the system, and the possibility to independently audit the software. Given that CPSs have already reached every sector of life, with the bulk of the development still ahead of us, we believe such an approach could make an important contribution towards technology that benefits people.

Moreover, our approach aims to ease the interaction between the CPS industry and certification authorities. We believe it is an important duty of regulatory authorities to safeguard their citizens from failures of critical CPSs. Even so, regulation should not grind development to a halt. With our work, we hope to remedy this apparent conflict of interests. By providing a means to check the safety of synthesized controllers in a well-documented, reproducible, and efficient manner, we believe that the interaction between producers and certifying bodies could be sped up significantly, while increasing reliability at the same time. On top of that, controller synthesis has already been intensely studied and seems to be a rather mature technology from an academic perspective. However, it has barely set foot in industrial applications. We are confident that formal certificate extraction and checking can be an important stepping stone to help controller synthesis make this jump.

This project also contributes to the objective of DIREC to bring new academic partners together in the Danish ecosystem. The two principal investigators have their specialization backgrounds in two different fields (certification theory and control theory) and have not collaborated before. Thus, the project strengthens the collaboration between the two fields as well as between the two research groups at AU and AAU. This creates the opportunity for new scientific results benefiting both research fields.

Finally, we plan to generate tangible value for industry. There are many present-day use cases for control software of critical CPSs. During our project, we want to support these use cases with controllers that tick all of the aforementioned “boxes”. This can be done by initiating several student projects and theses supporting theory development, tool implementation, and use case demonstration. The Problem Based Learning approach of Aalborg University facilitates this greatly. Furthermore, those students can apply their experience in future positions after graduating.

Participants

Martijn Goorden

Postdoc

Aalborg University
Department of Computer Science

E: mgoorden@cs.aau.dk

Simon Wimmer

Postdoc

Aarhus University
Department of Computer Science

E: swimmer@cs.au.dk


Methodologies for scheduling and routing droplets in digital microfluidic biochips

Project type: Explore Project


Summary

The overall purpose of this project is to define, investigate, and provide preliminary methodologies for scheduling and routing microliter-sized liquid droplets on a planar surface in the context of digital microfluidics.

The main idea is to use a holistic approach in the design of scheduling and routing methodologies that takes into account real-world physical, topological, and behavioral constraints, thus producing solutions that can immediately find use in practical applications.
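As a minimal illustration of the routing subproblem (ignoring the physical, topological, and behavioral constraints that the project's holistic approach must handle), the following sketch routes a single droplet across the electrode grid with a breadth-first search; the grid size and blocked cells are made up:

```python
from collections import deque

def route_droplet(grid_w, grid_h, start, goal, blocked):
    """Breadth-first search for a shortest droplet path on the
    electrode grid, avoiding blocked cells. Returns a list of cells."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < grid_w and 0 <= ny < grid_h \
               and nxt not in blocked and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None  # unroutable

path = route_droplet(8, 8, (0, 0), (7, 0), blocked={(3, 0), (3, 1)})
print(len(path) - 1)  # 11 electrode activations (detour around the block)
```

Real droplet routing must additionally schedule multiple droplets over time, keep unintended droplets apart, and respect electrode actuation constraints, which is precisely what makes the problem interesting.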

Value Creation

DMF biochips have been in the research spotlight for over a decade. However, the technology is still not mature enough to deliver the extensive automation needed in applied biochemistry processes or for research purposes. One of the main reasons is that, although rather simple in construction, DMF biochips lack a clear automated procedure for being programmed and used. The existing methodologies for programming DMF biochips require an advanced understanding of software programming and of the architecture of the biochip itself. These skills are not commonly found among the potential target users of this technology, such as biologists and chemists.

A fully automated compilation pipeline able to translate biochemical protocols expressed in a high-level representation into low-level biochip control sequences would open the DMF technology to a larger number of researchers and professionals. The advanced scheduling and routing methodologies investigated by this project address one of the main obstacles to broadly accessible DMF technology. This is particularly relevant for researchers and small businesses that cannot afford the large pipetting robots commonly used to automate industrial biochemical protocols. One or more DMF biochips can be programmed to execute ad-hoc repetitive and tedious laboratory tasks, thus freeing qualified working hours for more challenging laboratory work.

In addition, the scheduling and routing methodologies targeted by this project enable online decisions, such as controlling the flow of the biochemical protocol depending on on-the-fly sensing results from the processes occurring on the biochip. This opens up a large set of possibilities in the biochemical research field. For instance, the behavior of complex biochemical protocols can be automatically adapted during execution using decisional constructs (if-then-else), allowing for real-time protocol optimization and monitoring.

From a scientific perspective, this project would enable cross-field collaboration, develop new methodologies, and potentially re-purpose techniques that are well known in one research field to solve problems in another. Interesting possibilities include adapting advanced routing and graph-related algorithms or applying well-known online algorithms techniques to manage the real-time flow-control nature of biochemical protocols. The cross-field nature of the project has the potential to provide a better understanding of how advanced scheduling and routing techniques can be applied in a strongly constrained application context such as DMF biochips, thus laying the groundwork for novel solutions, collaborations, and further research.

Finally, it should be mentioned that the outcome of this project, or of a future larger project based on the proposed explorative research, has concrete business value. Currently, some players have entered the market with DMF biochips built to perform a specific biochemical functionality [12,13]. A software stack that includes compilation tools supporting programmability, enabling the same DMF biochip to perform different protocols, largely expands the potential market of such technology. This is not the primary aim of this research project, but it is indeed a long-term possibility.

Participants

Project Manager

Luca Pezzarossa

Assistant Professor

Technical University of Denmark
DTU Compute

E: lpez@dtu.dk

Eva Rotenberg

Associate Professor

Technical University of Denmark
DTU Compute

Lene M. Favrholdt

Associate Professor

University of Southern Denmark
Department of Mathematics and Computer Science


Automated Verification of Sensitivity Properties for Probabilistic Programs

Project type: Explore Project


Sensitivity measures how much program outputs vary when changing inputs. We propose exploring novel methodologies for specifying and verifying sensitivity properties of probabilistic programs such that they (a) are comprehensible to everyday programmers, (b) can be verified using automated theorem provers, and (c) cover properties from the machine learning and security literature.

This work will bring together two junior researchers who recently arrived in Denmark and obtained their PhDs working on probabilistic verification.

Project description

Our overall objective is to explore how automated verification of sensitivity properties of probabilistic programs can support developers in increasing the trust in their software through formal assurances.

Probabilistic programs are programs with the ability to sample from probability distributions. Examples include randomized algorithms, where sampling is exploited to ensure that expensive executions have a low probability, cryptographic protocols, where randomness is essential for encoding secrets, and statistics, where programs are becoming a popular alternative to graphical models for describing complex distributions.

The sensitivity of a program determines how its outputs are affected by changes to its input; programs with low sensitivity are robust against fluctuations in their input – a key property for improving trust in software. Minor input changes should, for example, not affect the result of a classifier learned from training data. In the probabilistic setting, the output of a program depends not only on the input but also on the source of randomness. Hence, the notion of sensitivity – as well as techniques for reasoning about it – needs refinement.
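A toy sketch of this refined notion: if the two executions of a probabilistic program are coupled so that they consume the same random samples, the distance between their outputs can be bounded by the distance between their inputs. The program below is invented purely for illustration:

```python
import random

def noisy_counter(start, rounds, rng):
    """A toy probabilistic program: repeatedly add a random step."""
    x = start
    for _ in range(rounds):
        x += rng.choice([0, 1])
    return x

# Couple the two executions by feeding both the same random stream:
# under this coupling the outputs differ by exactly |start - start'|,
# witnessing that the program is 1-sensitive in its first argument.
seed = 1234
out_a = noisy_counter(10, 100, random.Random(seed))
out_b = noisy_counter(13, 100, random.Random(seed))
print(abs(out_a - out_b))  # 3 == |10 - 13|, for every seed
```

The shared seed plays the role of the probabilistic coupling: it synchronizes the random choices of the two runs so that a deterministic sensitivity argument applies.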

Automated verification takes a deductive approach to proving that a program satisfies its specification: users annotate their programs with logical assertions; a verifier then generates verification conditions (VCs) whose validity implies that the program’s specification holds. Deductive verifiers are more complete and more scalable than fully automatic techniques but require significant user interaction. The main challenge for users of automated verifiers lies in finding suitable intermediate assertions, particularly loop invariants, such that an automated theorem prover can discharge the generated VCs. A significant challenge for developers of automated verifiers is to keep the amount and complexity of necessary annotations as low as possible.

Previous work [1] co-authored by the applicants provides a theoretical framework for reasoning about the sensitivity of probabilistic programs: the above paper presents a calculus to carry out “pen-and-paper” proofs of sensitivity in a principled and syntax-directed manner. The proposed technique deals with sampling instructions by requiring users to identify suitable probabilistic couplings, which act as synchronization points, on top of finding loop invariants. However, the technique is limited in the sense that it does not provide tight sensitivity bounds when changes to the input cause a program to take a different branch of a conditional.

Our project has four main goals. First, we will develop methodologies that do not suffer from the limitations of [1]. We believe that conditional branching can be treated by carefully tracking the possible divergence. Second, we will develop an automated verification tool for proving sensitivity properties of probabilistic programs. The tool will generate VCs based on the calculus from [1], which will be discharged using an SMT solver. In designing the specification language, we aim to achieve a balance so that (a) users can conveniently specify synchronization points for random samples (via so-called probabilistic couplings) and (b) existing solvers can prove the resulting VCs. Third, we aim to aid the verification process by assisting users in finding synchronization points. Invariant synthesis has been extensively studied in the case of deterministic programs. Similarly, coupling synthesis has been recently studied for the verification of probabilistic programs [2]. We believe these techniques can be adapted to the study of sensitivity. Finally, we will validate the overall verification system by applying it to case studies from machine learning, statistics, and randomized algorithms.

 

Participants

Alejandro Aguirre

Postdoc

Aarhus University
Department of Computer Science

Christoph Matheja

Assistant Professor

Technical University of Denmark
DTU Compute


Understanding Biases and Diversity of Big Data used for Mobility Analysis

Project type: Explore Project


Summary

Our capabilities to collect, store, and analyze vast amounts of data have greatly increased in the last two decades, and today big data plays a critical role in a large majority of statistical algorithms. Unfortunately, our understanding of biases in data has not kept up. While there has been a lot of progress in developing new models to analyze data, there has been much less focus on understanding the fundamental shortcomings of big data.

This project will quantify the biases and uncertainties associated with human mobility data collected through digital means, such as smartphone GPS traces, cell phone data, and social media data.

Ultimately, we want to ask the question: is it possible to fix big mobility data through a fundamental understanding of how biases manifest themselves?

Value Creation

We expect this project to have a long-lasting scientific and societal impact. Scientifically, this work will allow us to explicitly model bias in algorithmic systems relying on human mobility data and provide insights into which populations are left out. For example, it will allow us to correct for gender, wealth, age, and other types of bias in data used globally for epidemic modeling, urban planning, and many other use cases. Further, having methods to debias data will allow us to understand what negative impacts results derived from biased data might have. Given the universal nature of bias, we expect our debiasing frameworks will also pave the way for quantitative studies of bias in other realms of data science.
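One standard correction of this kind, which the project's frameworks would go well beyond, is post-stratification: reweighting records so that the sample's demographic composition matches known population shares. The groups and numbers below are made up for illustration:

```python
# Hypothetical sample of mobility records, tagged with a demographic
# group, alongside known population (census) shares for those groups.
records = (["young"] * 70) + (["senior"] * 30)   # 70% / 30% sample
population_share = {"young": 0.55, "senior": 0.45}

# Sample shares, then one weight per group: population / sample.
n = len(records)
sample_share = {g: records.count(g) / n for g in population_share}
weights = {g: population_share[g] / sample_share[g]
           for g in population_share}

# Weighted estimates now reflect the population, not the sample.
# E.g. average trips per day, where seniors travel less (toy numbers):
trips = {"young": 4.0, "senior": 2.0}
naive = sum(trips[g] for g in records) / n
reweighted = sum(trips[g] * weights[g] for g in records) / n
print(naive, reweighted)  # naive 3.4 vs corrected 3.1
```

The naive estimate overstates mobility because the over-sampled group travels more; the weights shrink its contribution to match the census composition.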

The societal impact will be actionable recommendations provided to policy makers regarding: 1) guidelines for how to safely use mobility datasets in data-driven decision processes, 2) tools (including statistical and interactive visualizations) for quantifying the effects of bias in data, and 3) directions for building fairer and more equitable algorithms that rely on mobility data.

It is important to address these issues now: in its “Proposal for a Regulation on a European approach for Artificial Intelligence” from April 2021, the European Commission outlines potential future regulations for addressing the opacity, complexity, bias, and unpredictability of algorithmic systems. This document states that high-quality data is essential for algorithmic performance and suggests that any dataset should be subject to appropriate data governance and management practices, including examination for possible biases. This implies that, in the future, businesses and governmental agencies will need to have data-audit methods in place. Our project addresses this gap and provides value by developing methodologies to audit mobility data for different types of bias, producing tools from which Danish society and Danish businesses will benefit.

Participants

Project Manager

Vedran Sekara

Assistant Professor

IT University of Copenhagen
Department of Computer Science

E: vsek@itu.dk

Laura Alessandretti

Associate Professor

Technical University of Denmark
DTU Compute

Manuel Garcia-Herranz

Chief Scientist

UNICEF
New York

Elisa Omodei

Assistant Professor

Central European University


Ergonomic & Practical Effect Systems

Project type: Explore Project


Summary

Effect systems are currently a hot research topic in type theory. Yet many effect systems, whilst powerful, are very complicated to use, particularly for programmers who are not experts in type theory. Effect systems with inference can provide useful guarantees to programming languages while being simple enough to be used in practice by everyday programmers.

Building on the Boolean unification-based polymorphic effect system in the Flix programming language, we want to pursue two practical short-term objectives: (a) improve the quality of effect error messages, and (b) develop techniques to improve the performance of Boolean unification and effect inference. This lays the foundation for a more ambitious objective: the Futhark programming language supports a form of referentially transparent in-place updates, controlled by a system of uniqueness types inspired by Clean, which is however too limited in the presence of polymorphic higher-order functions. Recasting the type system in terms of effects, based on the one in Flix, might provide a more intuitive system.
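At the core of such effect systems lies Boolean unification. A compact sketch of the classical successive-variable-elimination algorithm (solving an equation f = 0 over Boolean formulas) is shown below; Flix's actual implementation is far more engineered, and the formula representation here is purely illustrative:

```python
# Formulas are functions env -> bool, where env maps variable names
# to booleans. `subst` applies a substitution {var: formula} to f.
def subst(f, sub):
    sub = dict(sub)  # snapshot so later additions don't interfere
    return lambda env: f({**env, **{v: g(env) for v, g in sub.items()}})

def unify_zero(f, variables):
    """Return a substitution making f identically False (a unifier
    of f = 0), or None if none exists."""
    if not variables:
        return None if f({}) else {}
    x, rest = variables[0], variables[1:]
    f0 = lambda env: f({**env, x: False})
    f1 = lambda env: f({**env, x: True})
    # f = 0 is unifiable iff (f0 AND f1) = 0 is, over the rest.
    sub = unify_zero(lambda env: f0(env) and f1(env), rest)
    if sub is None:
        return None
    f0s, f1s = subst(f0, sub), subst(f1, sub)
    # Most general unifier: x maps to f0 OR (x AND NOT f1),
    # with the recursive solution applied to f0 and f1.
    sub[x] = lambda env: f0s(env) or (env[x] and not f1s(env))
    return sub

# Unify the effect formulas  e1 OR e2  and  e1  by solving their XOR = 0.
phi = lambda env: (env["e1"] or env["e2"]) != env["e1"]
sub = unify_zero(phi, ["e1", "e2"])
residual = subst(phi, sub)
print(all(not residual({"e1": a, "e2": b})
          for a in (False, True) for b in (False, True)))  # True
```

The computed unifier maps e1 to e1 OR e2, matching the intuition that the two effect formulas agree exactly when e2 is subsumed by e1.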

A unique aspect of this project is that it brings together two programming language researchers, one from Aarhus and one from Copenhagen, who are both working on full-blown programming language implementations.

Value Creation

We address value creation following the three outlined categories:

Scientific Value: We see two clear publishable scientific contributions: (a) new techniques to improve the performance of Boolean unification and (b) new applications of type and effect systems based on Boolean unification.

Capacity Building: Flix and Futhark are the two major academic efforts towards building new programming languages in Denmark. Bringing the two research groups together will facilitate knowledge sharing and technology transfer, enabling both projects to thrive and grow even further. This unique opportunity exists because both languages are based on similar technology and because they do not compete in the same space. Success for one is not at the expense of the other, and they can rise together.

Business and Societal Value: A significant amount of research effort has been expended on designing effect systems. Despite widespread belief that such systems can lead to safer programs, few have been implemented in real-world programming languages. By focusing on improving the ergonomics, we want to make these technologies more accessible. As the designers of Flix and Futhark, we are in a great position to conduct such work. We can show the way for other mainstream programming languages by having real, full-blown implementations.

After decades of relative stagnation, programming languages are now rapidly absorbing features previously seen only in obscure or academic programming languages. Java and C# are prominent examples of originally very orthodox object-oriented languages that have been augmented with concepts from functional programming. We believe that effect systems and other advanced type-system features are a logical next step, but before they can be added to mainstream languages, it must be shown that they can be designed and implemented in a form that is palatable to industrial users. Thus, while Flix and Futhark may or may not be the languages of the future, we believe that our research can help shape the direction of future programming languages by providing solid formal foundations and real-world implementations that others can build on, directly or indirectly.

Participants

Project Manager

Magnus Madsen

Associate Professor

Aarhus University
Department of Computer Science

E: magnusm@cs.au.dk

Troels Henriksen

Assistant Professor

University of Copenhagen
Department of Computer Science


Hardware/software Trade-off for the Reduction of Energy Consumption

Project type: Explore Project


Summary

Computing devices consume a considerable amount of energy. In data centers this has an impact on climate change, and in small embedded systems, i.e., battery-powered devices, energy consumption determines battery life. Implementing an algorithm in hardware (in a chip) is more energy efficient than executing it in software on a processor. Until recently, processor performance and energy efficiency were good enough to just use software on a standard processor or on a graphics processing unit. However, this performance increase has come to an end, and energy-efficient computing systems need domain-specific hardware accelerators.

However, the cost of producing a chip is very high. Between fixed hardware and software sits the technology of field-programmable gate arrays (FPGAs). FPGAs are programmable hardware: the algorithm can be changed at runtime. However, FPGAs are less energy efficient than chips. We expect that for some algorithms an FPGA will nevertheless be more energy efficient than an implementation in software. The research question is whether and how it is possible to reduce the energy consumption of IT systems by moving algorithms from software into hardware (FPGAs). We will investigate classic sorting and path-finding algorithms and compare their energy efficiency and, in addition, their performance. Such results are essential to data centers as well as embedded systems. However, the hardware design of these accelerators is often complex, and their development is time-consuming and error-prone. Therefore, we need a tool and methodology that enable software engineers to design efficient hardware implementations of their algorithms. We will explore a modern hardware construction language, Chisel. Chisel is a Scala-embedded hardware construction language that allows hardware to be described in a more software-like, high-level language. Chisel is the enabling technology to simplify the translation of a program from software into hardware. This project will furthermore investigate how efficiently the functional and object-oriented hardware description language Chisel can express algorithms for execution on FPGAs.
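To illustrate the software-like style, here is a minimal sketch of a Chisel hardware description, assuming the open-source chisel3 library (the module name and bit widths are illustrative, not taken from the project):

```scala
import chisel3._

// A combinational 8-bit adder described as an ordinary Scala class.
// The Chisel toolchain elaborates this description into Verilog,
// which can then be synthesized for an FPGA.
class Adder extends Module {
  val io = IO(new Bundle {
    val a   = Input(UInt(8.W))
    val b   = Input(UInt(8.W))
    val sum = Output(UInt(9.W)) // one extra bit to hold the carry
  })
  io.sum := io.a +& io.b // +& is Chisel's width-expanding add
}
```

Because the description is ordinary Scala, familiar software abstractions such as functions, classes, and higher-order combinators can be used to generate hardware, which is what makes this approach attractive to software engineers.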

Programs running on a general-purpose computer consume a considerable amount of energy. Some programs can be translated into hardware and executed on an FPGA. This project will explore the trade-offs between executing a program in hardware and executing it in software relative to energy consumption.

Value Creation

Scientific Value
The FPGA and software implementations of path-finding algorithms have recently been evaluated through the lens of performance, e.g., [?], whereas sorting algorithms have also been evaluated on energy consumption, e.g., [2]. Here FPGAs performed better than CPUs in many cases, with similar or reduced energy consumption. The languages used for implementation are Verilog and C, the latter translated to Verilog using Vivado HLS. In this project, we will implement the algorithms in hardware using Chisel and evaluate their performance and energy consumption. DTU and RUC will advance the research in the design and testing of digital systems for energy saving. Our proposed approach provides a general software engineering procedure that we plan to validate with standard algorithms used in cloud applications. This research will drive the adoption of modern tools and agile methods in the hardware design curriculum.

Capacity Building
The project establishes a new collaboration between two Danish universities and is a first step towards building a more energy-aware profile of the Computer Science laboratory FlexLab at RUC. In return, FlexLab makes FPGAs available to the research assistants at RUC. Thus, this project will improve the visibility of energy-aware design of IT systems nationally and internationally. Through the cooperation between researchers at DTU and RUC, the project will allow Denmark to take the lead in digital research and development for reduced energy consumption. The upcoming research positions at RUC will contribute to building RUC's research capacity, and the project will also recruit new junior researchers, both directly and in subsequent projects.

Business Value
The changes in the hardware industry indicate that the use of FPGAs will increase: a few years ago Intel bought Altera, one of the two largest FPGA vendors, to include FPGAs in future versions of its processors. Similarly, AMD is aiming to buy Xilinx, the other big FPGA vendor. In addition, one can already rent a server in the cloud from Amazon that includes an FPGA. These changes all point towards FPGAs entering mainstream computing. Many mainstream programming languages like C# or Java already include functional features such as lambda expressions and higher-order functions. The more common languages for programming FPGAs are Verilog, a C-inspired language, and VHDL, a Pascal-inspired language. Therefore, it may be attractive for mainstream software developers to use a functional language to efficiently implement algorithms on FPGAs and thus both increase performance and reduce energy consumption.

Societal Value
Currently, ICT consumes approximately 10% of global electricity, and this is estimated to increase to 20% by 2030. Thus, reducing the energy consumption of ICT is critical. If successful, this project has the potential to reduce energy consumption by moving essential software programs onto FPGA units.

Participants

Project Manager

Maja Hanne Kirkeby

Assistant Professor

Roskilde University
Department of People and Technology

E: majaht@ruc.dk

Martin Schoeberl

Associate Professor

Technical University of Denmark
DTU Compute

Mads Rosendahl

Associate Professor

Roskilde University
Department of People and Technology

Thomas Krabben

FlexLab Manager

Roskilde University
Department of People and Technology

Categories
News

Meet Tijs Slaats, who just won a prize for best process mining algorithm

Meet Tijs Slaats, who just won a prize for best process mining algorithm

Tijs is Associate Professor at the Department of Computer Science at the University of Copenhagen and Head of the Business Process Modeling and Intelligence research group. In DIREC, he works on the Bridge project AI and Blockchains for Complex Business Processes.

Tijs’ research interests include declarative and hybrid process technologies, blockchain technologies, process mining, and information systems development.  

He co-invented the flagship declarative Dynamic Condition Response (DCR) Graphs process notation and was a primary driver in its early commercial adoption. In addition, he led the invention and development of the DisCoveR process miner, which was recognized as the best process discovery algorithm in 2021. 

Can you tell us briefly about your research and what value you expect to get from it?
We try to describe processes. It can be basic things that we do as human beings. It could be assembling a car at a factory, but it could also be treating patients at a hospital. If a patient is admitted to a hospital, they need help and treatment.

What these examples have in common is that you need to go through a number of steps and activities to reach your goal, and those activities are related to each other. It may be medication that needs to be taken in a certain order.

In our research, we have developed a mathematical method for describing these processes. The reason for doing this is that it gives you the tools to ensure that the process goes the way you want it to.

In the new DIREC project, we go one step further. We have observed that many companies and organizations have large amounts of data on how they have performed their jobs. We can look at these data and analyze them to see how they actually perform their jobs, because the way many people do their jobs does not necessarily match the way they expect to do them. Maybe they take shortcuts unintentionally.

Our idea is to find these data and analyze them, and on that basis derive a model.

It is important that such a model is also understandable to the users, so that they can understand how they perform their work. We call this process mining, and it is a reasonably large academic area. Two years ago, I developed an algorithm and entered it in a contest where you compare which algorithm describes these “logs of behaviour” most accurately, and we won the contest.


What results do you expect from your research?
Our cooperation with industry is particularly important. In the project, we collaborate with the company Gekkobrain (https://gekkobrain.com), which works with DevOps. They are interested in analyzing large ERP systems and in finding tools that can optimize a system and find abnormalities. These systems are quite complex, so it is important to be able to identify where things are going wrong.

Gekkobrain has a lot of data because they work with large companies that have huge amounts of log data, and these systems are so complex that it adds some extra challenges for our algorithms. To get access to such complex data is an important perspective.

How can your research make a difference to companies and society?
The biggest impact of our work and models is that you can gain insight into how you perform your work. It gives you an objective picture of what has been done.

Companies can use it to find out if there are places where work processes are performed in an inappropriate way and thus avoid the extra costs.

Can you tell us about your background and how you ended up working with this research area?
I initially got a Bachelor's degree in Information & Communication Technology from Fontys University of Professional Education, then worked in industry, where I led the webshop development team of a Dutch e-commerce provider and acted as project leader on the implementation of our product for two major customers: Ferrari and Hewlett Packard.

I decided to move to Denmark after meeting my (Danish) wife. At the time I was already considering pursuing further education, while my wife was fairly settled in Denmark, so it made sense for me to be the one to move.

I got my MSc and PhD degrees at the IT University of Copenhagen. There I became interested in the field of business process modeling because it allows me to combine foundational theoretical research with very concrete industrial applications. Process mining in particular provides really interesting challenges because it requires learned models to be understandable for business users, something that has only recently come into focus in the more general field of AI. 

After a short postdoc at ITU I accepted a tenure-track assistant professorship at DIKU, which was a very good opportunity because it offered a (near) permanent position to relatively junior researchers. At the time this was uncommon in Denmark.