There is an unmet need for decentralised privacy-preserving machine learning. Cloud computing has great potential; however, there is a lack of trust in the service providers and a risk of data breaches. A lot of data are private and stored locally for good reasons, but combining the information in a global machine learning system could lead to services that benefit all.
The centre operates with SciTech projects, which are strategic research projects aimed at building up research and education capacity at the universities, and Explore projects, which are small agile research projects aimed at quickly screening new ideas. We also operate with Bridge projects, which are joint research, development, and innovation projects between universities, companies, the public sector, and GTS institutes, with the aim of increasing the capacity for digitisation and innovation in companies.
AI is radically changing society, and the main driver behind new AI methods and systems is machine learning. Machine learning focuses on finding solutions for, or patterns in, new data by learning from relevant existing data. Machine learning algorithms are thus often applied to large datasets, where they more or less autonomously find good solutions by uncovering relevant information or patterns hidden in the data.
Constructing cyber-physical systems with humans in the loop is important in many application areas to enable a close co-operation between humans and machines. However, there are also many challenges to overcome when constructing such systems with current software technologies and human-centered design approaches.
In contrast to other fields of AI, the potential of exploiting large data collections has not yet been realized in robotics. We aim to analyze the underlying scientific and technical challenges, as well as associated legal and privacy issues, by means of three half-day meetings of university partners and companies, one public workshop, and the preparation of four deliverables.
A recurring problem of digitalised industries is to design and coordinate hybrid systems that include IoT (Internet of Things), edge, and cloud solutions. Currently adopted methods and tools are not effective to this end, because they rely too much on informal specifications that are manually written and interpreted by humans.
Several highly popular YouTube channels for mathematics and other scientific content (e.g., 3blue1brown, Numberphile, Veritasium) with millions of views indicate that learners may respond very positively to professionally produced educational videos. This project aims at creating and evaluating an initial library of such videos to supplement teaching in algorithms.
We will investigate how to combine secure multiparty computation and blockchain techniques to obtain more efficient privacy-preserving computation with accountability.
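To illustrate the kind of privacy-preserving computation involved, here is a minimal sketch of additive secret sharing, one standard building block of secure multiparty computation. This is a toy illustration only; the project's actual protocols, and how they interact with blockchain techniques, are not specified here.

```python
import random

P = 2**61 - 1  # a prime modulus; parties compute in the field Z_P

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

# Each party holds one share per input; no single share reveals anything.
a_shares = share(25, 3)
b_shares = share(17, 3)

# Parties add their local shares to obtain shares of the sum,
# without ever seeing the other party's input.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 42
```

Addition is "free" in this scheme because shares can be combined locally; multiplications and the accountability layer are where the real protocol complexity lies.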
As cyber-physical systems (CPSs) are becoming ever more ubiquitous, many of them are considered safety-critical. We want to help CPS manufacturers and regulators with establishing high levels of trust in automatically synthesized control software for safety-critical CPSs.
The overall purpose of this project is to define, investigate, and provide preliminary methodologies for scheduling and routing microliter-sized liquid droplets on a planar surface in the context of digital microfluidics.
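As an illustration of the routing side of the problem, the following sketch routes a single droplet across a grid of electrodes using breadth-first search. This is a toy model: real digital-microfluidic routing must additionally handle scheduling and fluidic constraints (e.g., keeping unrelated droplets from becoming adjacent), which are ignored here.

```python
from collections import deque

def route_droplet(grid, start, goal):
    """Shortest droplet path on a grid via BFS.
    grid[r][c] == 1 marks a blocked cell (e.g., reserved by another droplet)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path = []          # walk predecessors back to the start
            node = goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no feasible route

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = route_droplet(grid, (0, 0), (2, 0))
# The droplet detours right along the top row, down the free column,
# then left along the bottom row: 7 cells in total.
```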
Sensitivity measures how much program outputs vary when changing inputs. We propose exploring novel methodologies for specifying and verifying sensitivity properties of probabilistic programs.
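As a concrete (hypothetical) illustration of the idea, consider estimating how the expected output of a simple randomised program changes as its input changes; bounding that change for all nearby inputs is one form of sensitivity property. The program and constants below are invented for illustration.

```python
import random

def noisy_double(x):
    """A toy probabilistic program: returns 2*x plus uniform noise."""
    return 2 * x + random.uniform(-1, 1)

def expected_output(x, trials=100_000):
    """Monte Carlo estimate of the program's expected output at input x."""
    return sum(noisy_double(x) for _ in range(trials)) / trials

# Sensitivity asks: if two inputs differ by d, how far apart are the
# corresponding outputs?  Here the expected output is 2*x, so the program
# is 2-Lipschitz in expectation.
d = 0.5
gap = abs(expected_output(1.0 + d) - expected_output(1.0))
assert abs(gap - 2 * d) < 0.05  # empirically close to 2*d
```

Verification methodologies aim to establish such bounds statically, for all inputs and with respect to the full output distribution, rather than by sampling as done here.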
This project will quantify the biases and uncertainties associated with human mobility data collected through digital means, such as smartphone GPS traces, cell phone data, and social media data.
Effect systems are currently a hot research subject in type theory. Yet many effect systems, whilst powerful, are very complicated to use, particularly for programmers who are not experts in type theory. Effect systems with inference can provide useful guarantees to programming languages while being simple enough to be used in practice by everyday programmers.
Programs running on a general-purpose computer consume a considerable amount of energy. Some programs can be translated into hardware and executed on an FPGA. This project will explore the trade-offs, in terms of energy consumption, between executing a program in hardware and executing it in software.
The challenge to the research community is how to extend existing verification technologies to cope with software systems comprising AI components. This is uncharted territory and one of the most pressing research challenges in AI. The industrial importance of this topic is closely related to the question of liability in case of malfunctioning products. Over a 4-month period, the Explore project will provide a state-of-the-art survey and identify research directions to be followed.
Artificial Intelligence brings the promise of technological means to solve problems that were previously assumed to require human intelligence. Ultimately, it can provide human-centered solutions that, through a synergy between the human and the AI system, are both more effective and of higher quality than solutions provided by humans or by an AI system alone.