Categories
Bridge project

REWORK – The future of hybrid work

Project type: Bridge Project

The recent COVID-19 pandemic, and the attendant lockdowns, have demonstrated the potential benefits and possibilities of remote work practices, as well as the glaring deficiencies such practices bring. Zoom fatigue, resulting from high cognitive load and intense amounts of eye contact, is just the tip of an uncomfortable iceberg where the problem of embodied presence remains a stubborn limitation. Remote and hybrid work will certainly be part of the future of most work practices, but what should these future practices look like? Should we merely attempt to fix what we already have, or can we be bolder and speculate about different kinds of workplace futures? We seek a vision of the future that integrates hybrid work experiences with grace and decency. This project will focus on the following research question: what are the possible futures of embodied presence in hybrid and remote work conditions?

There are a multitude of reasons to embrace remote and hybrid work. Climate concerns are increasing, borders are difficult to cross, work/life balance may be easier to attain, power distributions in society could potentially be redressed, to name a few. This means that the demand for Computer Supported Cooperative Work (CSCW) systems that support hybrid work will increase significantly. At the same time, we consistently observe and collectively experience that current digital technologies struggle to mediate the intricacies of collaborative work of many kinds. Even when everything works, from network connectivity to people being present and willing to engage, there are aspects of embodied co-presence that are almost impossible to achieve digitally.

We argue that one major weakness of current remote work technologies is the lack of support for relation work and articulation work, caused by limited embodiment. The concept of relation work denotes the fundamental activities of creating socio-technical connections between people and artefacts during collaborative activities, enabling actors in a global collaborative setting to engage each other in activities such as articulation work. We know that articulation work cannot be handled in the same way in hybrid remote environments. The fundamental difference is that awareness strategies and coordination mechanisms embedded in the physical surroundings and in the use of artefacts cannot simply be transferred to the hybrid setting; they require translation.

Actors in hybrid settings must create and connect the foundational network of globally distributed people and artefacts in a multitude of ways.

In REWORK, we focus on enriching digital technologies for hybrid work. We will investigate ways to strengthen relation work and articulation work through explorations of embodiment and presence. To imagine futures and technologies that can be otherwise, we look to artistic interventions, getting at the core of engagement and reflection on the future of remote and hybrid work by imagining and making alternatives through aesthetic speculations and prototyping of novel multimodal interactions (using the audio, haptic, visual, and even olfactory modalities). We will explore the limits of embodiment in remote settings by uncovering the challenges and limitations of existing technical solutions, following an approach similar to that of our previous research.

Scientific value
REWORK will develop speculative techniques and ideas that can help rethink the practices and infrastructures of remote work and its future. REWORK focuses on more than just the efficiency of task completion in hybrid work. Rather, we seek to foreground and productively support the invisible relation and articulation work that is necessary to ensure overall wellbeing and productivity.

Specifically, REWORK will contribute:

  1. Speculative techniques for thinking about the future of remote work;
  2. Multimodal prototypes to inspire a rethink of remote work;
  3. Design Fictions anchoring future visions in practice;
  4. Socio-technical framework for the future of hybrid remote work practices;
  5. Toolkits for industry.

The research conducted as part of REWORK will produce substantial scientific contributions disseminated through scientific publications in top international journals and conferences relevant to the topic. The scientific contributions will constitute both substantive insights and methodological innovations. These will be targeting venues such as the Journal of Human-Computer Interaction, ACM TOCHI, Journal of Computer Supported Cooperative Work, the ACM CHI conference, NordiCHI, UIST, DIS, Ubicomp, ICMI, CSCW, and others of a similar level.

The project will also engage directly and closely with industries of different kinds, from startups that are actively envisioning new technology to support different types of hybrid work (Cadpeople, Synergy XR, and Studio Koh) to organizations that are trying to find new solutions to accommodate changes in work practices (Arla, Bankdata, Keyloop, BEC).

Part of the intent of engaging with the artistic collaboratory is to create bridges between artistic explorations and the practical needs articulated by relevant industry actors. REWORK will create hybrid fora to support such bridging. The artistic collaboratory will also enable the project to engage with the general public through an art exhibit at Catch, public talks, and workshops. It is our goal to exhibit some of the artistic output at a venue that crosses artistic and scientific audiences, such as Ars Electronica.

Societal value
The results of REWORK have the potential to change work life broadly. We all know that “returning to work after COVID-19” will not mean returning to work as it was – and the combined situation of hybrid work will be a challenge. Through the research conducted in REWORK, individuals who must navigate the demands of hybrid work, and the organizations that must develop policies and practices to support such work, will benefit from an improved sense of embodiment and awareness, leading to more effective collaboration.

REWORK will take broadening participation and public engagement seriously, by offering online and in-person workshops/events through a close collaboration with the arts organization Catch (catch.dk). The workshops will be oriented towards particular stakeholder groups – artists interested in exploring the future of hybrid work, industry organizations interested in reconfiguring their existing practices – and open public events.

Capacity building
There are several ways in which REWORK contributes to capacity building. Firstly, by collaborating with the Alexandra Institute, we will create a multimodal toolbox/demonstrator facility that can be used in education and in industry.

REWORK will work closely with both industry partners (through the Alexandra Institute) and cultural (e.g. catch.dk)/public institutions for collaboration and knowledge dissemination, in the general spirit of DIREC.

We will include the findings from REWORK in our research-based teaching at all three universities. Furthermore, we plan to host a PhD course, or a summer school, on the topic in Year 2 or Year 3. Participants will be recruited nationally and internationally.

Lastly, in terms of public engagement, HCI and collaborative technologies are disciplines that can be attractive to the public at large, so there will be at least one REWORK Open Day to which we will invite interested members of the public and the DIREC industrial collaborators.

January 1, 2022 – December 31, 2024 – 3 years.

Participants

Project Manager

Eve Hoggan

Professor

Aarhus University
Department of Computer Science

E: eve.hoggan@cs.au.dk

Susanne Bødker

Professor

Aarhus University
Department of Computer Science

Irina Shklovski

Professor

University of Copenhagen
Department of Computer Science

Pernille Bjørn

Professor

University of Copenhagen
Department of Computer Science

Louise Barkhuus

Professor

IT University of Copenhagen
Department of Computer Science

Naja Holten Møller

Assistant Professor

University of Copenhagen
Department of Computer Science

Nina Boulus-Rødje

Associate Professor

Roskilde University
Department of People and Technology

Allan Hansen

Head of Digital Experience and Solutions Lab

The Alexandra Institute

Mads Darø Kristensen

Principal Application Architect

The Alexandra Institute

SIOT – Secure Internet of Things – Risk analysis in design and operation

Project type: Bridge Project

When developing novel IoT services or products today, it is essential to consider the potential security implications of the system and to take those into account before deployment. Due to the criticality and widespread deployment of many IoT systems, the need for security in these systems has even been recognised at the government and legislative level, e.g., in the US and the UK, resulting in proposed legislation to enforce at least a minimum of security consideration in deployed IoT products.

However, developing secure IoT systems is notoriously difficult, not least due to the characteristics of many such systems: they often operate in unknown and frequently privacy-sensitive environments, engage in communication using a wide variety of protocols and technologies, and must perform essential tasks such as monitoring and controlling (physical) entities. In addition, IoT systems must often perform within real-time bounds on limited computing platforms and at times even with a limited energy budget. Moreover, with the increasing number of safety-critical IoT devices (such as medical devices and industrial IoT devices), IoT security has become a public safety issue. To develop a secure IoT system, one should take into account all of the factors and characteristics mentioned above and balance them against functionality and performance requirements. Such a risk analysis must be performed not only at the design stage, but also throughout the lifetime of the product. Besides technical aspects, the analysis should also take into account human and organizational aspects. This type of analysis will form an essential activity for standardization and certification purposes.

In this project, we will develop a modelling formalism with automated tool support for performing such risk assessments and allowing for extensive “what-if” scenario analysis. The starting point will be the well-known and widely used formalism of attack-defense trees, extended to include various quantities, e.g., cost or energy consumption, as well as game features for modelling collaboration and competition between systems and between a system and its environment.
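Attack-defense trees can be read as a game between an attacker and a defender over a goal hierarchy. As a rough illustration of the kind of quantitative analysis the project targets (a toy sketch in Python; the node names, costs, and the `min_attack_cost` helper are all invented for illustration and are not the project's tool or formalism):

```python
# Illustrative sketch: an attack-defense tree with per-leaf attacker costs,
# evaluated for the attacker's cheapest strategy. All values are invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str            # "attack-leaf", "defense-leaf", "and", "or"
    cost: float = 0.0    # attacker's cost (attack leaves only)
    children: list = field(default_factory=list)

def min_attack_cost(node):
    """Cheapest total cost for the attacker to achieve this node's goal.
    A defended leaf is treated as infeasible (infinite cost)."""
    if node.kind == "attack-leaf":
        return node.cost
    if node.kind == "defense-leaf":
        return float("inf")  # countermeasure in place: attack blocked
    costs = [min_attack_cost(c) for c in node.children]
    return sum(costs) if node.kind == "and" else min(costs)

# Toy IoT scenario: compromise a device either via a default password,
# or by combining a firmware exploit with physical access.
tree = Node("compromise device", "or", children=[
    Node("guess default password", "attack-leaf", cost=10),
    Node("firmware attack", "and", children=[
        Node("find firmware exploit", "attack-leaf", cost=500),
        Node("gain physical access", "attack-leaf", cost=200),
    ]),
])

print(min_attack_cost(tree))  # cheapest attack strategy costs 10
```

In the project, such trees are further extended with game features and additional quantities (e.g., energy, probability, time), which is where stochastic priced timed games and tool support come in; this sketch only shows the basic cost-minimization reading.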


In summary, the project will deliver:

  • a modeling method for a systematic description of the relevant IoT system/service aspects, with special focus on their security, interaction, performance, and cost aspects
  • a systematic approach to risk assessment, through a new concept of attack-defense games
  • algorithms to compute optimal strategies and trade-offs between performance, cost, and security
  • a tool to carry out quantitative risk assessment of secure IoT systems
  • a tool to carry out “what-if” scenario analysis, to harden a secure IoT system’s design and/or operation
  • usability studies and design for usability of the tools within organizations around IoT services
  • design of training material to help enforce security policies for employees within these organizations.

The main research problems are:

  1. To identify safety and security requirements (including threats, attacker models, and countermeasures) for IoT systems, as well as the inherent design limitations in the IoT problem domain (e.g., limited computing resources and a limited energy budget).
  2. To organize this knowledge in a comprehensive model. We propose to extend attack-defense trees with strategic game features and quantitative aspects (time, cost, energy, probability).
  3. To transform this new model into existing “computer models” (automata and games) that are amenable to automatic analysis algorithms. We consider stochastic priced timed games as an underlying framework for such models, due to their generality and existing tool support.
  4. To develop/extend the algorithms needed to perform analysis and synthesis of optimal response strategies, which form the basis of quantitative risk assessment and decision-making.
  5. To translate the findings into instruments and recommendations for the partner companies, addressing both technical and organizational needs.
  6. To design, evaluate, and assess the user interface of the IoT security tools, which serve as important backbones for designing and certifying IoT security training programs for stakeholder organizations.

Throughout the project, we focus on the challenges and needs of the partner companies. The concrete results and outcomes of the project will also be evaluated in the contexts of these companies. The project will combine the expertise of five partners of DIREC (AAU, AU, Alexandra, CBS and DTU) and four Work Streams from DIREC (WS7: Verification, WS6: CPS and IoT systems, WS8: Cybersecurity and WS5: HCI, CSCW and InfoVis) in a synergistic and collaborative way.

Business value
While it is difficult to make a precise estimate of the number of IoT devices, most estimates are in the range of 7-15 billion connected devices, expected to increase dramatically over the next 5-10 years. The impact of a successful attack on IoT systems can range from nuisance, e.g., when baby monitors or thermostats are hacked, over potentially expensive DDoS attacks, e.g., when the Mirai malware turned many IoT devices into a DDoS botnet, to life-threatening, e.g., when pacemakers are not secure. Gartner predicted that worldwide spending on IoT security would increase from roughly USD 900M to USD 3.1B by 2021, out of a total IoT market of up to USD 745B.

The SIOT project will concretely contribute to the agility of the Danish IoT industry. By applying the risk analysis and secure design technologies developed in the project, these companies get a fast path to certification of secure IoT devices. Hence, this project will give Danish companies a head start for the near future, where the US and UK markets will demand security certification for IoT devices; the EU is already working on security regulation for IoT devices as well. Furthermore, it is well known that the earlier in the development process a security vulnerability or programming error is found, the cheaper it is to fix. This is even more important for IoT products that may not be updatable “over-the-air” and thus require a product recall or physical update process. The methods and technologies developed in this project will help companies find and fix security vulnerabilities as early as the design and exploration phases, thus reducing the long-term cost of maintenance.

Societal value
It is an academic duty to contribute to safer and more secure IoT systems, since they permeate society. Security issues quickly become safety incidents, for instance when IoT systems monitor for dangerous physical conditions. In addition, compromised IoT devices can be detrimental to our privacy, since they measure many aspects of human life. DTU and the Alexandra Institute will disseminate knowledge and expertise through the network built in the joint CIDI project (Cybersecure IoT in Danish Industry, ending in 2021), in particular a network of Danish IoT companies interested in security, with a clear understanding of companies’ security needs and concerns.

We will strengthen the cybersecurity level of Danish companies in relation to Industry 4.0 and Internet of Things (IoT) security, which are key technological pillars of digital transformation. We will do this by means of research and lectures on several aspects of IoT security, with emphasis on security-by-design, risk analysis, and remote attestation techniques as a countermeasure.

Capacity building
The education of PhD students itself already contributes to capacity building. We will organize a PhD summer school towards the end of the project to disseminate the results among PhD students from DIREC and students from abroad.

We will also prepare learning materials to be integrated in existing course offerings (e.g., existing university courses, and the PhD and Master training networks of DIREC) to ensure that the findings of the project are injected into the current capacity building processes.

Through this education, we will also attract more students to the Danish labor market. The shortage of skilled people is even greater in the security area than in other parts of computer science and engineering.

February 1, 2022 – January 31, 2025 – 3 years.

Total budget DKK 25.10 million / DIREC investment DKK 6.74 million

Participants

Project Manager

Jaco van de Pol

Professor

Aarhus University
Department of Computer Science

E: jaco@cs.au.dk

Torkil Clemmensen

Professor

Copenhagen Business School
Department of Digitalization

Qiqi Jiang

Associate Professor

Copenhagen Business School
Department of Digitalization

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

René Rydhof Hansen

Associate Professor

Aalborg University
Department of Computer Science

Flemming Nielson

Professor

Technical University of Denmark
DTU Compute

Alberto Lluch Lafuente

Associate Professor

Technical University of Denmark
DTU Compute

Nicola Dragoni

Professor

Technical University of Denmark
DTU Compute

Gert Læssøe Mikkelsen

Head of Security Lab

The Alexandra Institute

Laura Lynggaard Nielsen

Senior Anthropologist

The Alexandra Institute

Zaruhi Aslanyan

Security Architect

The Alexandra Institute

Claus Riber

Senior Manager, Software Cybersecurity

Beumer Group

Poul Møller Eriksen

CTO

Develco Products

Mike Aarup

Senior Quality Engineer

Grundfos

Mads Pii

Chief Technical Officer

Logos Payment Solutions

Anders Qvistgaard Sørensen

R&D Manager

Micro Technic

Jørgen Hartig

CEO & Strategic Advisor

SecuriOT

Daniel Lux

Chief Technology Officer

Seluxit

Samant Khajuria

Chief Specialist Cybersecurity

Terma

Alyzia-Maria Konsta

PhD Student

Technical University of Denmark
DTU Compute

Mikael Bisgaard Dahlsen-Jensen

PhD Student

Aarhus University
Department of Computer Science

Embedded AI

Project type: Bridge Project

AI is currently limited by the need for massive data centres and centralized architectures, and by the need to move data to algorithms. To overcome this key limitation, AI will evolve from today’s highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices. This transformation will bring algorithms to the data, made possible by algorithmic agility and autonomous data discovery. It will drastically reduce the need for the high-bandwidth connectivity required to transport massive data sets, avoid compromising data security and privacy, and eventually allow true real-time learning at the edge.

This transformation is enabled by the merging of AI and IoT into the “Artificial Intelligence of Things” (AIoT), which has created an emerging sector of Embedded AI (eAI), where all or part of the AI processing is done on the sensor devices at the edge rather than sent to the cloud. The major drivers for Embedded AI are increased responsiveness and functionality, reduced data transfer, and increased resilience, security, and privacy. To deliver these benefits, development engineers need to acquire new skills in embedded development and systems design.

To enter and compete in the AI era, companies are hiring data scientists to build expertise in AI and create value from data. This is true for many companies developing embedded systems, for instance to control water, heat, and air flow in large facilities, large ship engines, or industrial robots, all with the aim of optimizing their products and services.
However, there is a challenging gap between programming AI in the cloud using tools like TensorFlow and programming at the edge, where resources are extremely constrained. This project will develop methods and tools to migrate AI algorithms from the cloud to a distributed network of AI-enabled edge-devices. The methods will be demonstrated on several use cases from the industrial partners.

In a traditional, centralized AI architecture, all the technology blocks are combined in the cloud or at a single cluster (edge computing) to enable AI. Data collected by IoT, i.e., individual edge-devices, is sent towards the cloud. To limit the amount of data that needs to be sent, data aggregation may be performed along the way to the cloud. The AI stack, the training, and the later inference are performed in the cloud, and results for actions are transferred back to the relevant edge-devices. While the cloud provides complex AI algorithms that can analyse huge datasets fast and efficiently, it cannot deliver true real-time responses, and data security and privacy may be compromised.

When it comes to Embedded AI, where AI algorithms are moved to the edge, there is a need to transform the foundation of the AI stack. Algorithmic agility and distributed processing will enable AI to perceive and learn in real time by mirroring critical AI functions across multiple disparate systems, platforms, sensors, and devices operating at the edge. We propose to address these challenges in the following steps, starting with single edge-devices.

  1. Tiny inference engines – Algorithmic agility of the inference engines will require new AI algorithms and new processing architectures and connectivity. We will explore suitable microcontroller architectures and reconfigurable platform technologies, such as Microchip’s low-power FPGAs, for implementing optimized inference engines. Focus will be on achieving real-time performance and robustness. This will be tested on cases from the industry partners.
  2. µBrains – Extending the edge-devices from pure inference engines to also provide local learning. This will allow local devices to provide continuous improvements. We will explore suitable reconfigurable platform technologies with ultra-low power consumption, such as Renesas’ DRPs, using 1/10 of the power budget of current solutions, and Microchip’s low-power FPGAs for optimizing neural networks. Focus will be on ensuring the performance, scheduling, and resource allocation of the new AI algorithms running on very resource-constrained edge-devices.
  3. Collective intelligence – The full potential of Embedded AI will require distributed algorithmic processing of the AI algorithms. This will be based on federated learning and computing (microelectronics) optimized for neural networks, but new models of distributed systems and stochastic analysis are necessary to ensure the performance, prioritization, scheduling, resource allocation, and security of the new AI algorithms – especially with the very dynamic and opportunistic communications associated with IoT.

The expected outcome is an AI framework which supports autonomous discovery and processing of disparate data from a distributed collection of AI-enabled edge-devices. All three presented steps will be tested on cases from the industry partners.
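The collective-intelligence step above builds on federated learning, where model parameters rather than raw data leave each device. A minimal sketch of federated averaging on a toy linear model (our own illustration under simplifying assumptions; `local_update`, the synthetic data, and all parameters are invented, not the project's framework):

```python
# Federated averaging sketch: each edge device trains locally on its own
# private data, and only model parameters -- never raw data -- are
# aggregated centrally. Toy linear model; all values are invented.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=20):
    """A few steps of local gradient descent on a linear least-squares model."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three edge devices, each holding private samples of the same process y = 3x.
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w_global = np.zeros(1)
for _ in range(10):                            # federation rounds
    local = [local_update(w_global, X, y) for X, y in devices]
    w_global = np.mean(local, axis=0)          # aggregate parameters only

print(w_global)  # converges near [3.] without any device sharing raw data
```

Real deployments add the concerns the text lists on top of this loop: scheduling and prioritizing rounds, coping with opportunistic connectivity, and securing the parameter exchange itself.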

 

Deep neural networks have changed the capabilities of machine learning, reaching higher accuracy than hitherto possible; for learning from unstructured data, they are now the de facto standard. These networks often include millions of parameters and may take months to train on dedicated hardware, i.e., GPUs in the cloud. This has resulted in high demand for data scientists with AI skills and, hence, an increased demand for educating such profiles. However, the increased use of IoT to collect data at the edge has created a demand for training and executing deep neural networks at the edge rather than transferring all data to the cloud for processing. As IoT end- or edge-devices are characterized by low memory, low processing power, and low energy (powered by battery or energy harvesting), training or executing deep neural networks on them is generally considered infeasible. However, developing dedicated accelerators, novel hardware circuits and architectures, or executing smaller discretized networks may provide feasible solutions for the edge.
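One concrete route to such smaller discretized networks is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats cuts memory fourfold at the price of a small, bounded rounding error. A toy sketch (not any partner's toolchain; the symmetric int8 scheme shown is one common choice among several, and all sizes are invented):

```python
# Post-training quantization sketch: symmetric linear mapping of float32
# weights to int8, plus the reverse mapping used at inference time.
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a weight tensor to int8."""
    scale = np.abs(w).max() / 127.0          # one step in real units
    q = np.round(w / scale).astype(np.int8)  # values land in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
w = rng.normal(scale=0.5, size=(128, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)                          # 0.25: a 4x smaller model
print(np.abs(w - w_hat).max() <= scale / 2 + 1e-6)  # True: error is at most half a step
```

Per-channel scales, asymmetric ranges, and quantization-aware training refine this basic idea, but the memory/accuracy trade-off it illustrates is the core reason discretized networks fit on edge devices.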

The academic partners DTU, KU, AU, and CBS will not only create scientific value from the results disseminated through the four PhDs, but will also create important knowledge, experience, and real-life cases to be included in their education and, hence, build capacity in the important emerging field of embedded AI, or AIoT.

The industry partners Indesmatech, Grundfos, MAN ES, and VELUX are all strong examples of companies who will benefit from mastering embedded AI, i.e., being able to select the right tools and execution platforms for implementing and deploying embedded AI in their products.

  • Indesmatech expects to gain leading-edge knowledge about how AI can be implemented on various chip processing platforms, with a focus on finding the best and most efficient path to building cost- and performance-effective industrial solutions across industries, as their customers come from most industries.
  • Grundfos will create value in applications like condition monitoring of pumps and pump systems, predictive maintenance, heat energy optimization in buildings, and waste-water treatment, where very complex tasks can be optimized significantly by AI. The possibility to deploy embedded AI directly on low-cost, low-power end and edge devices instead of large cloud platforms will give Grundfos a significant competitive advantage by reducing total energy consumption, data traffic, and product cost, while at the same time increasing real-time application performance and securing customers’ data privacy.
  • MAN ES will create value from using embedded AI to predict problems faster than today. Features such as condition monitoring and dynamic engine optimization will give MAN ES competitive advantages, and the exploitation of embedded AI together with the large amount of data collected in the cloud will in the long run create market advantages for MAN ES.
  • VELUX will increase their competitive edge by attaining a better understanding of how to implement the right level of embedded AI in their products. The design of new digital smart products with embedded intelligence will create value by driving the digital product transformation of VELUX.

January 1, 2022 – December 31, 2024 – 3 years.

Total budget DKK 22.5 million / DIREC investment DKK 6.54 million.

Participants

Project Manager

Jan Madsen

Professor

Technical University of Denmark
DTU Compute

E: jama@dtu.dk

Peter Gorm Larsen

Professor

Aarhus University
Dept. of Electrical and Computer Engineering

Mads Nielsen

Professor

University of Copenhagen
Department of Computer Science

Jan Damsgaard

Professor

Copenhagen Business School
Department of Digitalization

Thorkild Kvisgaard

Head of Electronics

Grundfos

Henrik R. Olesen

Senior Manager

MAN Energy Solutions

Thomas S. Toftegaard

Director, Smart Product Technology

Velux

Rune Domsten

Co-founder & CEO

Indesmatech

HERD: Human-AI collaboration: Engaging and controlling swarms of robots and drones

Project type: Bridge Project

Robots and drones take on an increasingly broad set of tasks, such as AgroIntelli’s autonomous farming robot and the drone-based emergency response systems from Robotto. Currently, however, such robots are limited in their capacity to cooperate with one another and with humans. In the case of AgroIntelli, for instance, only one robot can currently be deployed on a field at any time and is unable to respond effectively to the presence of a human-driven tractor or even another farming robot working in the same field. In the future, AgroIntelli wants to leverage the potential benefits of having multiple robots working in parallel on the same field to reduce time to completion. A straightforward way to achieve this is to partition the field into several distinct areas corresponding to the number of robots available and then assign each robot its own area. However, such an approach is inflexible and requires detailed a priori planning. If, instead, the robots were given the task collectively as a swarm, they could potentially coordinate their operation on the fly and adapt based on local conditions to achieve optimal or near-optimal task performance.

Similarly, Robotto’s system architecture currently requires one control unit to manage each deployed drone. In large-area search scenarios and operations with complex terrain, the coverage provided by a single drone is insufficient. Multiple drones can provide real-time data on a larger surface area and from multiple perspectives – thereby aiding emergency response teams in their time-critical operations. In the current system, however, each additional drone requires a dedicated operator and control unit. Coordination between operators introduces overhead, and it can become a struggle to maintain a shared understanding of a rapidly evolving situation. There is thus a need to develop control algorithms for drone-to-drone coordination and interfaces that enable high-level management of the swarm from a single control console. The complexity requires advanced interactions to keep the data actionable and simple, yet still support the critical demands of the operation. This challenge is relevant to search & rescue (SAR) as well as other service offerings in the roadmap, including firefighting, inspections, and first responder missions.

For both of our industrial partners, AgroIntelli and Robotto, and for similar companies that are pushing robotics technology toward real-world application, there is a clear unmet need for approaches that enable human operators to effectively engage and control systems composed of multiple autonomous robots. This raises a whole new set of challenges compared to the current paradigm where there is a one-to-one mapping between operator and robot. The operator must be able to interact with the system at the swarm level as a single entity to set mission priorities and constraints, and at the same time, be able to intervene and take control of a single robot or a subset of robots. An emergency responder may, for instance, want to take control over a drone to follow a civilian or a group of personnel close to a search area, while a farmer may wish to reassign one or more of her farming robots to another field.

HERD will build an understanding of the challenges in multi-robot collaboration, and design and evaluate technological solutions that enable end-users to engage and control autonomous multi-robot systems. The project will build on use cases in agriculture and search & rescue supported by the industrial partners’ domain knowledge and robotic hardware. Through the research problems and aims outlined below, we seek to enable the next generation of human-swarm collaboration.

Pre-operation and on-the-fly mission planning for robot swarms: An increase in the number of robots under the user’s control has the potential to lead to faster task completion and/or higher quality. However, the increase in unit count significantly increases the complexity of both end-user-to-robot communication and coordination between robots. As such, it is critical to support the user in efficient and effective task allocation between robots. We will answer the following research questions: (i) What are the functionalities required for humans to effectively define mission priorities and constraints at the swarm level? (ii) How can robotic systems autonomously divide tasks based on location, context, and capability, and under the constraints defined by the end-user? (iii) How does the use of autonomous multi-robot technologies change existing organizational routines, and which new ones are required?
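As a purely illustrative example of question (ii), autonomous task division under a user-defined constraint can be sketched as greedy, distance-based allocation (a toy Python sketch; the algorithm, robot positions, and the per-robot capacity limit are all invented here, not HERD's planned solution):

```python
# Greedy swarm task allocation sketch: robots repeatedly claim the globally
# nearest unassigned task, subject to a user-set cap on tasks per robot.
import math

def allocate(robots, tasks, max_tasks_per_robot=2):
    """Distance-based greedy allocation; returns {robot_name: [task, ...]}."""
    assignment = {name: [] for name, _ in robots}
    remaining = list(tasks)
    while remaining:
        best = None  # (distance, robot_name, task)
        for name, pos in robots:
            if len(assignment[name]) >= max_tasks_per_robot:
                continue  # robot at capacity: a user-defined constraint
            for task in remaining:
                d = math.dist(pos, task)
                if best is None or d < best[0]:
                    best = (d, name, task)
        if best is None:  # every robot is at capacity
            break
        _, name, task = best
        assignment[name].append(task)
        remaining.remove(task)
    return assignment

robots = [("r1", (0.0, 0.0)), ("r2", (10.0, 0.0))]
tasks = [(1.0, 0.0), (9.0, 0.0), (2.0, 1.0)]
print(allocate(robots, tasks))
# nearest-first: r1 claims (1, 0) then (2, 1); r2 claims (9, 0)
```

A real swarm would compute such allocations in a decentralized fashion and re-run them on the fly as conditions change, which is precisely where the research questions above go beyond this centralized toy.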

Situational awareness under uncertainty in multi-robot tasks: Users of AI-driven (multi-)robot systems often wish to simulate robot behaviour across multiple options to determine the best possible approach to the task at hand. Given the context-dependent and algorithm-driven nature of these robots, simulation accuracy can only be achieved to a limited degree. This inherent uncertainty negatively impacts the user’s ability to make an informed decision on the best approach to task completion. We will support situational awareness in the control of multi-robot systems by studying: (i) How to determine and visualise levels of uncertainty in robot navigation scenarios to optimise user understanding and control? (ii) What are the implications of the digital representation of the operational environment for organizational sensemaking? (iii) How can live, predictive visualisations of multi-robot trajectories and task performance support the steering and directing of robot swarms from afar?
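
One simple way to quantify the uncertainty described above is Monte Carlo simulation: repeatedly simulate a robot’s traversal under randomly perturbed conditions and report the spread of outcomes. The uniform speed perturbation below is an assumed stand-in for real disturbance models (wind, terrain, battery), not a model used by the project:

```python
import random
import statistics

def simulate_arrival_time(distance_m, nominal_speed_mps, n_runs=1000, seed=42):
    """Monte Carlo sketch of navigation uncertainty: each run perturbs
    the nominal speed to mimic disturbances, and the spread of the
    resulting arrival times quantifies the uncertainty."""
    rng = random.Random(seed)
    times = [distance_m / (nominal_speed_mps * rng.uniform(0.7, 1.1))
             for _ in range(n_runs)]
    return statistics.mean(times), statistics.stdev(times)
```

The returned mean and spread could then feed a visualisation, e.g. an arrival-time band drawn around the predicted trajectory.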

User intervention and control of swarm subsets: Given the potentially (rapidly) changing contexts in which the robots operate, human operators will regularly have to deviate from a predetermined plan for a subset of robots. This raises novel research questions both in terms of robot control, in which the swarm might depend on a sufficient number of nearby robots to maintain communication, and in terms of user interaction, in which accurate robot selection and information overload can quickly raise issues. We will therefore answer the following research questions: (i) When a user takes low-level control of a single robot or subset of a robot swarm, how should that be done, and how should the rest of the system respond? (ii) How can user interfaces help the user understand the potential impact when they wish to intervene or deviate from the mission plans?
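
A sketch of the takeover mechanics in question (i), under the assumption that each robot holds a queue of pending tasks: taking manual control removes the robot from the autonomous pool and releases its tasks so the rest of the swarm can respond by reallocating them. The data layout is purely illustrative:

```python
def take_control(swarm, robot_id):
    """Operator takeover sketch: switch one robot to manual mode and
    release its pending tasks for reallocation by the remaining swarm."""
    robot = swarm[robot_id]
    robot["mode"] = "manual"
    released, robot["tasks"] = robot["tasks"], []
    return released

swarm = {
    "drone1": {"mode": "auto", "tasks": ["scan_area_A"]},
    "drone2": {"mode": "auto", "tasks": ["scan_area_B"]},
}
released = take_control(swarm, "drone1")
# drone1 is now under manual control; its task is free for reallocation
```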

Validation of solutions in real-world applications: Based on the real-world applications of adaptive herbicide spraying by farming robots and search & rescue as provided by our industrial partners, we will validate the solutions developed in the project. While both industrial partners deal with robotic systems, their difference in both application area and technical solution (in-the-air vs. on land) allows us to assess the generalisability and efficiency of our solutions in real-world applications. We will answer the following research questions: (i) What common solutions should be validated in both scenarios and which domain-specific solutions are relevant in the respective types of scenarios? (ii) What business and organisational adaptation and innovation are necessary for swarm robotics technology to be successfully adopted in the public sector and in the private sector?

Advances in AI, computer science, and mechatronics mean that robots can be applied to an increasingly broad set of domains. To build world-class computer science research and innovation centres, in line with the long-term goal of DIREC, this project focuses on building the competencies necessary to address the complex relationship between humans, artificial intelligence, and autonomous robots.

The project’s scientific value is the development of new methods and techniques to facilitate effective interaction between humans and complex AI systems and the empirical validation in two distinct use cases. The use cases provide opportunities to engage with swarm interactions across varying demands, including domains where careful a priori planning is possible (agricultural context) and chaotic and fast-paced domains (search & rescue with drones). HERD will thus lead to significant contributions in the areas of autonomous multi-robot coordination and human-robot interaction. We expect to publish at least ten rank A research articles and to demonstrate the potential of the developed technologies in concrete real-world applications. This project also gears up the partners to participate in project proposals to the EU Framework Programme on specific topics in agricultural robotics, nature conservation, emergency response, security, and so on, and in general topics related to developing key enabling technologies.

HERD will build and strengthen the research capacity in Denmark directly through the education of three PhDs, and through the collaboration between researchers, domain experts, and end-users that will lead to industrial R&D growth. Denmark has been a thought leader in robotics, innovating how humans collaborate with robots in manufacturing and architecture, e.g. Universal Robots, MiR, and Odico, among others. Through HERD, we not only support the named partners in developing and improving their products and services; the novel collaboration between the academic partners, who have not previously worked together, also helps ensure that the Danish institutions of higher education build the competencies and the workforce needed for continued growth in the robotics and artificial intelligence sectors. HERD will thus contribute to building the capacity required to facilitate effective interaction between end-users and complex AI systems.

HERD will create business value through the development of technologies that enable end-users to effectively engage and control systems composed of multiple robots. These technologies will significantly increase the value of the industrial partners’ products, since current tasks can be done faster and at a lower cost, and entirely new tasks that require multiple coordinated robots can be addressed. The value increase will, in turn, increase sales and exports. Furthermore, multi-robot systems have numerous potential application domains in addition to those addressed in this project, such as infrastructure inspection, construction, environmental monitoring, and logistics. The inclusion of DTI as a partner will directly help explore these opportunities through anticipated technology transfer and future market and project possibilities.

HERD will create significant societal value and directly contribute to SDGs 1 (no poverty), 2 (zero hunger), 13 (climate action), and 15 (life on land). Increased use of agricultural robots can, for instance, lead to less soil compaction and enable the adoption of precision agriculture techniques, such as mechanical weeding that eliminates the need for pesticides. Similarly, increased use of drones in search & rescue can reduce the time needed to save people in critical situations.

November 1, 2021 – July 31, 2025 – 3.75 years.

Total budget DKK 17.08 million / DIREC investment DKK 4.59 million

Participants

Project Manager

Anders Lyhne Christensen

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

E: andc@mmmi.sdu.dk

Ulrik Pagh Schultz

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Mikael B. Skov

Professor

Aalborg University
Department of Computer Science

Timothy Robert Merritt

Associate Professor

Aalborg University
Department of Computer Science

Niels van Berkel

Associate Professor

Aalborg University
Department of Computer Science

Ioanna Constantiou

Professor

Copenhagen Business School
Department of Digitalization

Christiane Lehrer

Associate Professor

Copenhagen Business School
Department of Digitalization

Kenneth Richard Geipel

Chief Executive Officer

Robotto

Alea Scovill

Strategic Project Manager

Agro Intelligence ApS

Lars Dalgaard

Head of Section

Danish Technological Institute
Robot Technology

Partners

Categories
Bridge project

EXPLAIN-ME: Learning to collaborate via explainable AI in medical education

Project type: Bridge Project

EXPLAIN-ME: Learning to collaborate via explainable AI in medical education

AI is widely deployed in assistive medical technologies, such as image-based diagnosis, to solve highly specific tasks with feasible model optimization. However, AI is rarely designed as a collaborator for healthcare professionals, but rather as a mechanical substitute for part of a diagnostic workflow. From the AI researcher’s point of view, the goal of development is to beat the state of the art on narrow performance parameters, which the AI may solve with superhuman accuracy. However, for more general problems such as full diagnosis, treatment execution, or explaining the background for a diagnosis, the AI is still not to be trusted. Hence, clinicians do not always perceive AI solutions as helpful in solving their clinical tasks, as they only solve part of the problem sufficiently well. The EXPLAIN-ME initiative seeks to create AI that helps solve the overall general tasks in collaboration with human healthcare professionals. To do so, we need not only to provide interpretability in the form of explainable AI models — we need to provide models whose explanations are easy to understand and utilize during the clinician’s workflow. Put simply, we need to provide good explanations.

Unmet technical needs
It is not hard to agree that good explanations are better than bad explanations. In this project, however, we aim to establish methods and collect data that allow us to train and validate the quality of clinical AI explanations in terms of how understandable and useful they are. AI support should neither distract from nor hinder ongoing tasks, and the need for AI support fluctuates, e.g. over the course of a surgical procedure. As such, the relevance and utility of AI explanations are highly context- and task-dependent. Through collaboration with Zealand University Hospital, we will develop explainable AI (XAI) feedback for human-AI collaboration in static clinical procedures, where data is collected and analyzed independently — e.g. when diagnosing cancer from scans collected beforehand in a different unit. In collaboration with CAMES and NordSim, we will implement human-AI collaboration in simulation centers used to train clinicians in dynamic clinical procedures, where data is collected on the fly — e.g. for ultrasound scanning of pregnant women, or robotic surgery. We will monitor the clinicians’ behavior and performance as a function of the feedback provided by the AI. As there are no actual patients involved in medical simulation, we are also free to provide clinicians with potentially bad explanations, and we may use the clinicians’ responses to freely train and evaluate the AI’s ability to explain.
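
As one concrete example of the kind of XAI feedback whose quality clinicians could rate, occlusion-based saliency highlights which image regions drive a model’s prediction. This is a generic, well-known XAI technique used here for illustration, not necessarily the method the project will adopt:

```python
import numpy as np

def occlusion_saliency(image, predict, patch=4):
    """Occlusion-based saliency: slide a neutral patch over the image
    and record how much the model's score drops. Regions whose
    occlusion hurts the score most form the visual 'explanation'."""
    base = predict(image)
    h, w = image.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            sal[i // patch, j // patch] = base - predict(occluded)
    return sal
```

Whether such a saliency map is actually understandable and useful during a clinical workflow is exactly the kind of question the project’s data collection is meant to answer.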

Unmet clinical needs
In the Western world, only cancer and heart disease cause more fatalities than medical errors. About one in ten diagnoses is estimated to be wrong, resulting in inadequate and even harmful care. Errors occur during clinical practice for several reasons, but most importantly because clinicians often work alone with minimal expert supervision and support. The EXPLAIN-ME initiative aims to create AI decision support systems that take the role of an experienced mentor providing advice and feedback.

The EXPLAIN-ME initiative seeks to optimize the utility of feedback provided by healthcare explainable AI (XAI). We will approach this problem both in static healthcare applications, where clinical decisions are based on data already collected, and in dynamic applications, where data is collected on the fly to continually improve confidence in the clinical decision. Via an interdisciplinary effort between XAI, medical simulation, participatory design and HCI, we aim to optimize the explanations provided by the XAI to be of maximal utility for clinicians, supporting technology utility and acceptance in the clinic.

Case 1: Renal tumor classification
Classification of a renal tumor as malignant or benign is an example of a decision that needs to be taken under time pressure. If malignant, the patient should be operated on immediately to prevent the cancer from spreading to the rest of the body; a false positive diagnosis may thus lead to the unnecessary destruction of a kidney and other complications. While AI methods can be shown statistically to be more precise than an expert physician, there is a need to extend them with explanations for their decisions – and only the physicians know what “a good explanation” is. This motivates a collaborative design and development process to find the best balance between what is technically possible and what is clinically needed.

Case 2: Ultrasound Screening
Even before birth, patients suffer from erroneous decisions made by healthcare workers. In Denmark, 95% of all pregnant women participate in the national ultrasound screening program aimed at detecting severe maternal-fetal disease. Correct diagnosis is directly linked to the skills of the clinicians, and only about half of all serious conditions are detected before birth. AI feedback, therefore, comes with the potential to standardize care across clinicians and hospitals. At DTU, KU and CAMES, ultrasound imaging will be the main case for development, as data access and management, as well as manual annotations, are already in place. We seek to give the clinician feedback during scanning, such as whether the current image is a standard ultrasound plane (see figure); whether it has sufficient quality; whether the image can be used to predict clinical outcomes, or how to move the probe to improve image quality.
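
The scanning feedback described above can be thought of as a small decision policy layered on top of model outputs. The two model outputs and the thresholds below are illustrative assumptions only, not values from the project:

```python
def scan_feedback(plane_prob, quality_score,
                  plane_threshold=0.8, quality_threshold=0.6):
    """Map two hypothetical model outputs -- the probability that the
    current image is a standard plane, and an image-quality score --
    to a short clinician-facing message."""
    if plane_prob < plane_threshold:
        return "not a standard plane - keep searching"
    if quality_score < quality_threshold:
        return "standard plane - adjust probe to improve quality"
    return "image acceptable - capture measurement"
```

A real system would also have to decide when and how to present such messages without distracting the clinician, which is part of the project’s research agenda.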

Case 3: Robotic Surgery
AAU and NordSim will collaborate on the assessment and development of robotic surgeons’ skills, associated with an existing clinical PhD project. Robotic surgery allows surgeons to do their work with more precision and control than traditional surgical tools, thereby reducing errors and increasing efficiency. AI-based decision support is expected to have a further positive effect on outcomes. The usability of AI decision support is critical, and this project will study temporal aspects of the human-AI collaboration, such as how to present AI suggestions in a timely manner without interrupting the clinician; how to hand over tasks between a member of the medical team and an AI system; and how to handle disagreement between the medical expert and the AI system.

In current healthcare AI research and development, there is often a gap between the needs of clinicians and the developed solutions. This comes with a lost opportunity for added value: We miss out on potential clinical value for creating standardized, high-quality care across demographic groups. Just as importantly, we miss out on added business value: If the first, research-based step in the development chain is unsuccessful, then there will also be fewer spin-offs and start-ups, less knowledge dissemination to industry, and overall less innovation in healthcare AI.

The EXPLAIN-ME initiative will address this problem:

  • We will improve clinical interpretability of healthcare AI by developing XAI methods and workflows that allow us to optimize XAI feedback for clinical utility, measured both on clinical performance and clinical outcomes.
  • We will improve clinical technology acceptance by introducing these XAI models in clinical training via simulation-laboratories.
  • We will improve business value by creating a prototype for collaborative, simulation-based deployment of healthcare AI. This comes with great potential for speeding up industrial development of healthcare AI: Simulation-based testing of algorithms can begin while algorithms still make mistakes, because there is no risk of harming patients. This, in particular, can speed up the timeline from idea to clinical implementation, as the simulation-based testing is realistic while not requiring the usual ethical approvals.

This comes with great potential value: While AI has transformed many aspects of society, its impact on the healthcare sector is so far limited. Diagnostic AI is a key topic in healthcare research, but only marginally deployed in clinical care. This is partly explained by the low interpretability of state-of-the-art AI, which negatively affects both patient safety and clinicians’ technology acceptance. This is also explained by the typical workflow in healthcare AI research and development, which is often structured as parallel tracks where AI researchers independently develop technical solutions to a predefined clinical problem, while only occasionally interacting with the clinical end-users. This often results in a gap between the clinicians’ needs and the developed solution. The EXPLAIN-ME initiative aims to close this gap by developing AI solutions that are designed to interact with clinicians in every step of the design-, training-, and implementation process.

October 1, 2021 – April 30, 2025 – 3.5 years.

Participants

Project Manager

Aasa Feragen

Professor

Technical University of Denmark
DTU Compute

E: afhar@dtu.dk

Anders Nymark Christensen

Associate Professor

Technical University of Denmark
DTU Compute

Mads Nielsen

Professor

University of Copenhagen
Department of Computer Science

Mikael B. Skov

Professor

Aalborg University
Department of Computer Science

Niels van Berkel

Associate Professor

Aalborg University
Department of Computer Science

Henning Christiansen

Professor

Roskilde University
Department of People and Technology

Jesper Simonsen

Professor

Roskilde University
Department of People and Technology

Henrik Bulskov Styltsvig

Associate Professor

Roskilde University
Department of People and Technology

Martin Tolsgaard

Associate Professor

CAMES Rigshospitalet
University of Copenhagen

Morten Bo Svendsen

Chief Engineer

CAMES Rigshospitalet
University of Copenhagen

Sten Rasmussen

Professor, Head

Department of Clinical Medicine
Aalborg University

Mikkel Lønborg Friis

Director

NordSim
Aalborg University

Nessn Htum Azawi

Associate Professor, Head of Research Unit & Renal Cancer team

Department of Urology
Zealand University Hospital

Partners

Categories
Bridge project

Business Transformation and Organisational AI-based Decision Making

Project type: Bridge Project

Business Transformation and Organisational AI-based Decision Making

Business processes in private companies and public organisations are today widely supported by Enterprise Resource Planning (ERP), Business Process Management (BPM) and Electronic Case Management (ECM) systems, put into use with the aim of improving the efficiency of business processes.

The combined result is, however, often an increasingly elaborate information systems landscape, leading to ineffectiveness, limited understanding of business processes, inability to predict and find the root cause of losses, errors and fraud, and inability to adapt the business processes. This lack of understanding, agility and control over business processes places a major burden on the organisations. For instance, a recent report concludes that the Danish Ministry of Taxation’s control of the state’s annual revenue of one trillion DKK is so “deficient and weak” that there is a clear “increased risk” that employees can cheat and commit abuse for their own gain, in the same style as the recent Britta Nielsen and Armed Forces cases.

Enterprise systems generate a plethora of highly granular data recording their operation. Machine learning has great potential to aid in the analysis of this data in order to predict errors, detect fraud and improve the efficiency of business processes. Knowledge of business processes can also be used to support the needed transformation of old and heterogeneous IT landscapes to new platforms. Application areas include Anti-Money-Laundering (AML) and Know-Your-Customer (KYC) supervision of business processes in the financial sector, supply chain management in agriculture and foodstuff supply, and compliance and optimisation of workflow processes in the public sector.
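
As a hint of how such granular enterprise data can be analysed, a classic process-mining starting point is the directly-follows graph, which counts how often one activity immediately follows another across cases. The invoice-handling log below is invented purely for illustration:

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often one activity directly follows another across
    all cases -- the directly-follows graph that many process-mining
    analyses start from."""
    edges = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

# Invented toy log: each trace is the ordered activities of one case.
log = [
    ["receive", "check", "approve", "pay"],
    ["receive", "check", "reject"],
    ["receive", "check", "approve", "pay"],
]
dfg = directly_follows(log)
```

Deviations from the expected graph (e.g. a "pay" without an "approve") are the kind of signal that error and fraud detection can build on.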

The research aim of the project is to develop methods and tools that enable industry to develop new, efficient solutions for exploiting the huge amount of business data generated by enterprise systems, with a specific focus on tools and responsible methods for using process insights for business intelligence and transformation. Through field studies in organizations that use AI, BPM and process mining techniques, we will investigate how organizations implement these techniques and create value (both operational and strategic) through them. In particular, the project will focus on how organizational decision-making changes with the implementation of AI-based algorithms, in terms of the decision-making skills (intuitive and analytical) of the decision makers, their roles and responsibilities, their decision rights and authority, and the decision context.
The scientific value of the project is new methods and user interfaces for decision support and business transformation, together with knowledge of their performance and properties in case studies. These are important contributions that provide excellent knowledge to Danish companies and education programmes within AI for business innovation and processes.

For capacity building, the value of the project is the education of one industrial PhD in close collaboration between CBS, DIKU and the industrial partner DCR Solutions. The project will also provide online course material that can be used in existing and new courses for industry, MSc and PhD students.

Regarding business and societal value, the project has very broad applicability, targeting improvements in the effectiveness and control of process-aware information systems across the private and public sector. Concretely, the project considers cases from customers of the participating industry partner within the financial sector, the public sector, and energy and building management, all of which play a vital societal role. The industry partner expects to create business value of an estimated DKK 10–20 million in increased turnover and 2–3 new employees within 5–7 years through the generation of IP by the industrial researcher and the development of state-of-the-art proprietary process analysis and decision support tools.

July 1, 2021 – December 31, 2025 – 3.5 years

Total budget DKK 16.77 million / DIREC investment DKK 4.95 million

Participants

Project Manager

Arisa Shollo

Associate Professor

Copenhagen Business School
Department of Digitalization

E: ash.digi@cbs.dk

Thomas Hildebrandt

Professor

University of Copenhagen
Department of Computer Science

Raghava Mukkamala

Associate Professor

Copenhagen Business School
Department of Digitalization

Morten Marquard

Founder & CEO

DCR Solutions

Søren Debois

CTO

DCR Solutions

Partners

Categories
Bridge project

AI and Blockchains for Complex Business Processes

Project type: Bridge Project

AI and Blockchains for Complex Business Processes

Business processes in private companies and public organisations are today widely supported by Enterprise Resource Planning (ERP), Business Process Management (BPM) and Electronic Case Management (ECM) systems [1], put into use with the aim of improving the efficiency of business processes. Recently, blockchain technologies have also been proposed as a means to provide guarantees for security, computational integrity and pseudonymous agency.

The combined result is, however, often an increasingly elaborate information systems landscape, leading to ineffectiveness, limited understanding of business processes, inability to predict and find the root cause of losses, errors and fraud, and inability to adapt the business processes [1]. This lack of understanding, agility and control over business processes places a major burden on the organisations. For instance, a recent report concludes that the Danish Ministry of Taxation’s control of the state’s annual revenue of one trillion DKK is so “deficient and weak” that there is a clear “increased risk” that employees can cheat and commit abuse for their own gain, in the same style as the recent Britta Nielsen and Armed Forces cases.

Enterprise and blockchain systems generate a plethora of highly granular data recording their operation. Machine learning has great potential to aid in the analysis of this data in order to predict errors, detect fraud and improve the efficiency of business processes. Knowledge of business processes can also be used to support the needed transformation of old and heterogeneous IT landscapes to new platforms. Application areas include Anti-Money-Laundering (AML) and Know-Your-Customer (KYC) supervision of business processes in the financial sector, supply chain management in agriculture and foodstuff supply, and compliance and optimisation of workflow processes in the public sector.

The research aim of the AI and Blockchains for Complex Business Processes project is to develop methods and tools that enable industry to develop new, efficient solutions for exploiting the huge amount of business data generated by enterprise and blockchain systems – from techniques for automatic identification of business events, via the development of new rule-based process mining technologies, to tools for using process insights for business intelligence and transformation. The project will do this through a unique bridge between industry and academia, involving two innovative, complementary industrial partners and researchers across the disciplines of AI, software engineering and business intelligence from three DIREC partner universities. Open-source release (under the LGPL 3.0 license) of the rule-based mining algorithms developed by the PhD student assigned to task 2 will ensure future enhancement and development by the research community, while simultaneously providing businesses the opportunity to include them in proprietary software.
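
Rule-based (declarative) process mining, as mentioned above, discovers constraints rather than flow graphs. A minimal sketch for one common constraint type, the "response" rule (every occurrence of activity a is eventually followed by activity b in the same trace), might look as follows; this is an illustrative simplification, not the project’s actual mining algorithm:

```python
def holds_response(log, a, b):
    """Declarative 'response' rule: in every trace, each occurrence of
    activity a must eventually be followed by activity b."""
    for trace in log:
        pending = False
        for act in trace:
            if act == a:
                pending = True
            elif act == b:
                pending = False
        if pending:
            return False
    return True

def mine_responses(log):
    """Return every response(a, b) rule that holds on the whole log."""
    activities = {act for trace in log for act in trace}
    return {(a, b) for a in activities for b in activities
            if a != b and holds_response(log, a, b)}
```

A practical miner would also have to tolerate noise and scale to millions of events, but the rule-checking core is the same idea.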
The scientific value of the project is new methods and tools for process mining, decision support and business transformation, together with knowledge of their performance and properties in case studies. These are important contributions that provide excellent knowledge to Danish companies and education programmes within AI and blockchain technology for business innovation and processes.

For capacity building, the value of the project is the education of two PhDs and one industrial postdoc in close collaboration with industry. Open-source availability of general project outcomes and industry collaboration enable several exploitation paths. The project will also provide online course material for existing and new courses for industry, MSc and PhD students.

Regarding business and societal value, the project has very broad applicability, targeting improvements in the effectiveness and control of process-aware information systems across the private and public sector. Concretely, the project considers cases from customers of the participating industry partners within the financial sector, the public sector, and the operations and supply chains for agriculture and foodstuffs, all of which play a vital societal role. The industrial partners expect to create business value of an estimated DKK 155 million in increased turnover and 10–12 new employees within 5–7 years through the generation of IP by the two industrial researchers and the development of state-of-the-art proprietary process analysis and decision support tools.

July 1, 2021 – December 31, 2025 – 3.5 years

Participants

Project Manager

Tijs Slaats

Associate Professor

University of Copenhagen
Department of Computer Science

E: slaats@di.ku.dk

Jakob Grue Simonsen

Professor

University of Copenhagen
Department of Computer Science

Thomas Hildebrandt

Professor

University of Copenhagen
Department of Computer Science

Michel Avital

Professor

Copenhagen Business School
Department of Digitalization

Henrik Axelsen

PhD Fellow

University of Copenhagen
Department of Computer Science

Christoffer Olling Back

Industry Postdoc

Gekkobrain

Hugo López

Assistant Professor

University of Copenhagen
Department of Computer Science

Søren Debois

Associate Professor

IT University of Copenhagen
Department of Computer Science

Jens Strandbygaard

CEO and Cofounder

Gekkobrain

Omri Ross

Chief Blockchain Scientist

eToro

Axel Fjelrad Christfort

PhD Fellow

University of Copenhagen
Dept. of Computer Science

Partners

Categories
Bridge project

Mobility Analytics using Sparse Mobility Data and Open Spatial Data

Project type: Bridge Project

Mobility Analytics using Sparse Mobility Data and Open Spatial Data

The mobility of people and things is an important societal process that facilitates and affects the lives of most people. Thus, society, including industry, has a substantial interest in well-functioning outdoor and indoor mobility infrastructures that are efficient, predictable, environmentally friendly, and safe. For outdoor mobility, reduction of congestion is high on the political agenda – it is estimated that congestion costs Denmark 30 billion DKK per year. Similarly, the reduction of CO2 emissions from transportation is on the political agenda, as the transportation sector is the second largest in terms of greenhouse gas emissions. Danish municipalities are interested in understanding the potential of integrating various types of e-bikes in transportation planning. Increased use of such bicycles may contribute substantially to the greening of transportation and may also ease congestion and thus improve travel times. For indoor mobility, corridors and elevators represent bottlenecks in large building complexes (e.g. hospitals, factories and university campuses). With the addition of mobile robots, humans and robots will also be competing for the same space when moving indoors. Heavy use of corridors is also a source of noise that negatively impacts building occupants.

The ongoing, sweeping digitalisation has also reached outdoor and indoor mobility. Thus, increasingly massive volumes of mobility-related data, e.g. from sensors embedded in the road and building infrastructures, networked positioning (e.g. GPS or UWB) devices (e.g. smartphones and in-vehicle navigation devices) or indoor mobile robots, are becoming available. This enables an increasingly wide range of analyses related to mobility. When combined with digital representations of road networks and building interiors, this data holds the potential for enabling a more fine-grained understanding of mobility and for enabling more efficient, predictable, and environmentally friendly mobility. Long movement times equate with congestion and bad overall experiences.

The above data foundation offers a basis for understanding how well a road network or building performs across different days and across the duration of a day, and it offers the potential for decreased movement times by means of improved mobility flows and routing. However, there is an unmet need for low-cost tools, usable by municipalities and building providers (e.g. mobile robot manufacturers), that enable a wide range of analytics on top of mobility data. To meet this need, the project will:

  1. Build extract-transform-load (ETL) prototypes that are able to ingest high and low frequency spatial data (e.g. GPS and indoor positioning data). These prototypes must enable map-matching of spatial data to open road network and building representations and must enable privacy protection.
  2. Design effective data warehouse schemas that can be populated with ingested spatial data.
  3. Build mobility analytics warehouse systems that are able to support a broad range of analyses in interactive time.
  4. Build software systems that enable users to formulate analyses and visualise results in maps-based interfaces for both indoor and outdoor use. This includes infrastructure for the mapping of user input into database queries and the maps-based display of results returned by the data warehouse system.
  5. Develop a range of advanced analyses that address user needs. Possible analyses include congestion maps, isochrones, aggregate travel-path analyses, origin-destination travel time matrices, and what-if analyses where the effects of reconstruction are estimated (e.g. adding an additional lane to a stretch of road or changing corridors). For outdoors settings, CO2-emissions analyses based on vehicular environmental impact models and GPS data are also considered.
  6. Develop transfer learning techniques that make it possible to leverage spatial data from dense spatio-temporal “regions” for enabling analyses in sparse spatio-temporal regions.
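The map-matching step in item 1 above can be illustrated with a minimal sketch that snaps a positioning fix to the nearest segment of a toy road network. This is a deliberately naive nearest-segment approach on hypothetical data; production map-matching against OSM typically uses probabilistic (e.g. HMM-based) methods over whole trajectories.

```python
import math

def project_to_segment(p, a, b):
    """Project point p onto segment a-b; return (distance, projected point)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    # Clamp the projection parameter t to [0, 1] so we stay on the segment.
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return math.hypot(px - qx, py - qy), (qx, qy)

def map_match(point, segments):
    """Snap one point to the closest road segment (greedy, point-by-point)."""
    return min(
        ((seg_id, *project_to_segment(point, a, b)) for seg_id, (a, b) in segments.items()),
        key=lambda r: r[1],
    )

# Toy road network in a local planar coordinate system (hypothetical).
roads = {"main_st": ((0, 0), (10, 0)), "side_st": ((5, 0), (5, 8))}
seg_id, dist, snapped = map_match((4.0, 0.5), roads)  # fix slightly off main_st
```

A trajectory-level matcher would additionally penalise implausible jumps between consecutive fixes, which is where the real algorithmic work lies.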
The envisioned prototype software infrastructure aims to replace commercial road-network maps with the crowd-sourced OpenStreetMap (OSM) map and, for indoor settings, to enable new data sources describing the indoor geography. Because such open data may not be curated, new quality-control tools are required to ensure that computed travel times are correct. This will reduce cost.

Next, the project will provide means of leveraging available spatial data as efficiently and effectively as possible. Even as more and more data becomes available, the available data will remain sparse relative to important analyses, due to the cost of data that must be purchased and the outright lack of some desired data. It is therefore important to exploit the available data as well as possible. We will examine how to transfer data from locations and times with ample data to locations and times with insufficient data; in particular, we will study transfer learning techniques for this purpose and, as part of this, feature learning. This will reduce cost and will enable new analyses that were not possible previously due to a lack of data.

Rambøll will be able to in-source the software infrastructure and host analytics for municipalities, and Mobile Industrial Robots (MiR) will be able to do the same for building owners. Additional value will be created because the above studies will be conducted for multiple transportation modes, with a focus on cars and different kinds of e-bikes. We have access to a unique data foundation that will enable these studies.
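The dense-to-sparse transfer idea can be sketched with a deliberately simple ratio-based baseline: borrow a dense region's hourly speed profile and rescale it using the hours observed in both regions. The data and the scaling rule are hypothetical illustrations; the project's actual techniques involve learned transfer and feature learning, not this fixed heuristic.

```python
from statistics import mean

# Hypothetical observed speeds (km/h) per hour-of-day for two regions.
# The dense region has ample observations; the sparse region covers few hours.
dense = {7: [32, 30, 35], 8: [22, 25], 12: [48, 50], 17: [20, 24]}
sparse = {12: [52, 49]}  # only one hour observed in both regions

def transfer_speed_profile(dense_obs, sparse_obs):
    """Estimate hourly speeds for the sparse region by scaling the dense
    region's profile with the mean ratio over the overlapping hours."""
    overlap = set(dense_obs) & set(sparse_obs)
    ratio = mean(mean(sparse_obs[h]) / mean(dense_obs[h]) for h in overlap)
    estimate = {h: mean(v) * ratio for h, v in dense_obs.items()}
    estimate.update({h: mean(v) for h, v in sparse_obs.items()})  # keep real data
    return estimate

profile = transfer_speed_profile(dense, sparse)
```

The sketch preserves actually observed hours and fills the rest, which mirrors the intended use: analyses in sparse spatio-temporal regions backed by dense ones.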

The project involves the research workstreams of Data Management (WS3), AI (WS2), CyPhys (WS6), and Ethics (WS10) of DIREC.

May 1, 2021 – April 30, 2024 – 3 years

Total budget DKK 9.41 million / DIREC investment DKK 5.19 million

Participants

Project Manager

Christian S. Jensen

Professor

Aalborg University
Department of Computer Science

E: csj@cs.aau.dk

Ira Assent

Professor

Aarhus University
Department of Computer Science

Kristian Torp

Professor

Aalborg University
Department of Computer Science

Bin Yang

Professor

Aalborg University
Department of Computer Science

Martin Møller

Chief Innovation Officer

The Alexandra Institute

Mikkel Baun Kjærgaard

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Norbert Krüger

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Avgi Kollakidou

PhD Fellow

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Kasper Fromm Pedersen

Research Assistant

Aalborg University
Dept. of Computer Science

Partners


Deep Learning and Automation of Image-Based Quality of Seeds and Grains

Project type: Bridge Project

Deep Learning and Automation of Image-Based Quality of Seeds and Grains

Today, manual visual inspection of grain is still one of the most important quality-assurance procedures throughout the value chain that brings cereals from the field to the table. To improve the performance, robustness, and consistency of this inspection, automated imaging-based solutions are needed to replace subjective manual inspection. To meet this need, FOSS has developed a multispectral imaging system called EyeFoss™. With this system, user-independent multispectral images of more than 10,000 individual kernels can easily be collected within minutes, in real time and on site. The EyeFoss™ applications currently cover wheat and barley grading.

To derive maximum value from the data, there is a need for methods of training algorithms to automatically provide industry with the best possible feedback on the quality of incoming materials. The purpose is to develop a framework that replaces the current feature-based models with deep learning methods. These methods have the potential to significantly reduce the labour needed to expand EyeFoss™ to new applications (e.g. maize or coffee) while at the same time increasing the accuracy and reliability with which the algorithms describe cereal quality.

This project aims at developing and validating, with industrial partners, a method that uses deep neural networks to monitor the quality of seeds and grains from multispectral image data. The method has the potential to provide the grain industry with a disruptive new tool for ensuring quality and optimising the value of agricultural commodities. The ambition of the project is to end up with an operationally implemented deep learning framework for deploying EyeFoss™ to new applications in the industry. To achieve this, the project will team up with DTU Compute, a strong competence centre on deep learning, as well as a major player within the European grain industry (to be selected).

The research aim of the project is to develop AI methods and tools that enable industry to build new solutions for automated image-based quality assessment. End-to-end learning of features and representations for object classification by deep neural networks can lead to significant performance improvements. Several recent mechanisms have been developed for further improving performance and reducing the need for manual annotation work (labelling), including semi-supervised learning strategies and data augmentation. Semi-supervised learning combines generative models trained without labels (unsupervised learning) and pre-trained networks (transfer learning) with supervised learning on small sets of labelled data. Data augmentation employs both knowledge-based transformations, such as translations and rotations, and more general learned transformations, such as parameterised “warps”, to increase variability in the training data and robustness to natural variation.
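The knowledge-based side of data augmentation can be sketched in a few lines: generate label-preserving variants of one annotated example via random rotations and flips. The array stands in for a grain-kernel image and is purely illustrative; the learned “warps” mentioned above would require a trainable transformation model and are not shown here.

```python
import numpy as np

def augment(image, rng):
    """Apply one random knowledge-based transformation: a rotation by a
    multiple of 90 degrees, optionally followed by a horizontal flip."""
    image = np.rot90(image, k=rng.integers(0, 4))
    if rng.integers(0, 2):
        image = np.fliplr(image)
    return image

def augmented_batch(image, n, seed=0):
    """Generate n augmented variants of a single labelled example."""
    rng = np.random.default_rng(seed)
    return [augment(image, rng) for _ in range(n)]

kernel = np.arange(12).reshape(3, 4)  # stand-in for one kernel image
batch = augmented_batch(kernel, n=8)  # 8 label-preserving variants
```

Because rotations and flips only rearrange pixels, every variant keeps exactly the original pixel values, which is what makes this family of transformations safe to apply without re-labelling.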
The scientific value of the project will be new methods, open-source tools, and associated knowledge of their performance and properties in an industrial setup.

For capacity building, the value of the project is to educate one PhD student in close collaboration with FOSS. The aim is for the student to be present at FOSS at least 40% of the time to secure close integration and knowledge exchange with the FOSS development team that is introducing EyeFoss™ to the market. There will also be specific focus on exchange at the faculty level: the aim is to have faculty from DTU Compute present at FOSS and, vice versa, senior FOSS specialists supervising the PhD student. This will secure better networking, anchoring, and capacity building at the senior level as well. The PhD project will additionally be supported by a master-level programme already established between the universities and FOSS.

Specifically, the project aims to provide FOSS with new tools to assist in scaling the market potential of EyeFoss™ beyond its current potential of EUR 20 million per year. Adding, in a cost-efficient way, applications for visual inspection of products like maize, rice, or coffee has the potential to at least double that market potential. In addition, the contributions will be of generic relevance to companies developing image-based solutions for food quality and integrity assessment and will provide other Danish companies with excellent knowledge of the application and AI integration of commercial solutions already on the market.

The project involves the research themes of AI (WS2) and CyPhys (WS6) of DIREC.

October 1, 2020 – September 30, 2024 – 3.5 years

Total budget DKK 3.91 million / DIREC investment DKK 1.90 million

Participants

Project Manager

Lars Kai Hansen

Professor

Technical University of Denmark
DTU Compute

E: lkai@dtu.dk

Kim Steenstrup Pedersen

Associate Professor

University of Copenhagen
Department of Computer Science

Lenka Hýlová

PhD Fellow

Technical University of Denmark
DTU Compute

Partners


Edge-based AI Systems for Predictive Maintenance

Project type: Bridge Project

Edge-based AI Systems for Predictive Maintenance

Downtime of equipment is costly and a source of safety, security, and legal issues. Today, organisations adopt a conservative schedule of preventive maintenance, independent of the condition of the equipment. This results in unnecessary service costs and occasional interruptions of production due to unexpected failures. It is therefore imperative in many domains to transition from regular maintenance to predictive maintenance. A recent report, “An AI nation: Harnessing the opportunity of artificial intelligence in Denmark”, estimates that enabling predictive maintenance via AI has a 14-19 billion potential for the Danish private sector.

Relevant domains include medical production, where manufacturers (e.g. Novo Nordisk) want to introduce condition-based maintenance of the machines used in production. The data collected by the equipment manufacturers is often not available in real time, so accurate predictive models based on data from sensors under the producers' own control are needed. Robot manufacturers (e.g. Universal Robots) and their integrators likewise want to enable condition-based maintenance of robotic systems and need predictive models based on data from the robots. In both domains, reliability and safety requirements make it a prerequisite that data collection and processing are placed in the vicinity of the equipment.

The energy sector (e.g. Energinet) wants to incorporate predictive knowledge of equipment performance and needs accurate predictive models based on the available data. For wind turbines, for example, such models draw on local wind conditions as well as the state of the turbines. In this case, Energinet has to collect the data externally, as it does not have access to internal wind turbine data.

The research aim of the project is to develop methods and tools that enable industry to build new solutions with accurate AI-based maintenance predictions on edge-based software platforms.

The resulting applications must be deployed so that they collect and process large amounts of data locally. This data feeds high-accuracy predictive models that are deployed at the edge, adapted to changing local conditions, and maintained with minimum intervention from operators.

These models should provide abstractions that allow relevant dependencies in the data to be understood. The key research problem is to devise architectures and solutions that scale to entire fleets of equipment while maintaining accurate AI predictions.

This also requires that resources in terms of processing power, storage, and communication are optimised in order to obtain low-power and real-time performance, leading to a trade-off between resource efficiency and prediction accuracy. The project will establish a bridge that enables Danish companies to develop and use AI-based predictive maintenance within several domains.
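The resource-efficiency side of this trade-off can be illustrated with a minimal sketch of a condition monitor cheap enough for an edge device: constant memory, O(1) work per reading, adapting to local conditions via exponentially weighted statistics. This is a hypothetical baseline, not the project's AI models; it shows the kind of lightweight computation that richer models must be traded against.

```python
class EdgeConditionMonitor:
    """Constant-memory drift detector for an edge device: tracks an
    exponentially weighted mean/variance of a sensor reading and flags
    values far outside the learned normal band."""

    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha = alpha          # adaptation rate to changing local conditions
        self.threshold = threshold  # alarm if |z-score| exceeds this
        self.mean = None
        self.var = 1.0              # wide prior variance; decays as data arrives

    def update(self, x):
        """Ingest one reading; return True if it looks anomalous."""
        if self.mean is None:       # first sample initialises the model
            self.mean = x
            return False
        z = (x - self.mean) / (self.var ** 0.5)
        # O(1) statistics update per reading: no history is stored.
        diff = x - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return abs(z) > self.threshold

monitor = EdgeConditionMonitor()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95] * 10 + [5.0]  # vibration-like signal
alarms = [monitor.update(r) for r in readings]  # only the final spike alarms
```

A fleet-scale deployment would run one such (or a far richer) model per machine and ship only alarms and summaries upstream, which is precisely where the architecture and scaling questions above arise.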

The scientific value of the project is new methods and tools and associated knowledge of their performance and properties in field tests. These are important contributions that provide excellent knowledge to Danish companies and to education programmes within AI and IoT.

For capacity building, the value of the project is to educate 3 PhD students (including 1 industrial PhD) and 1 postdoc in close collaboration with industry. The open-source availability of general project outcomes and the industry collaboration enable several exploitation paths. In addition, at the master's level, the project will offer an industry programme to 15 students at 3 universities.

The business and societal value is, on a national level, estimated at a 14-19 billion potential for the Danish private sector. The project targets the medical, robotics, industrial, and energy sectors, which are Danish frontrunners in adopting the technology and will create inspiration for wider adoption by the Danish private sector. For the public sector, equipment with higher operational efficiency will positively impact the efficiency of the sector.

Within the area of AI, the project will research and develop AI models applicable to predictive maintenance. Furthermore, the research will consider methods for handling limitations on training data, among other things due to data-ownership restrictions and confidentiality. A final aspect is work on AI models that can adapt to different edge conditions, including the available processing power and timing deadlines.

Within the area of cyber-physical systems, the project will research and develop software architectures and platforms for edge-based execution of AI models. The outcomes should enable AI models to adapt to changing local conditions and to be maintained with minimum intervention from operators. The key research problem is to devise architectures and solutions that scale to entire fleets of equipment while maintaining accurate AI predictions. This requires methods that optimise resources in terms of processing power, storage, and communication in order to obtain low-power and real-time performance, leading to a trade-off between resource efficiency and prediction accuracy.

October 1, 2020 – September 30, 2024 – 3.5 years

Total budget DKK 12.24 million / DIREC investment DKK 6.3 million

Participants

Project Manager

Mikkel Baun Kjærgaard

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

E: mbkj@mmmi.sdu.dk

Philippe Bonnet

Professor

IT University of Copenhagen
Department of Computer Science

Xenofon Fafoutis

Associate Professor

Technical University of Denmark
Dept. of Applied Mathematics and Computer Science

Jan Madsen

Professor

Technical University of Denmark
DTU Compute

Martin Møller

Chief Innovation Officer

The Alexandra Institute

Alexandre Alapetite

Software Solutions Architect

The Alexandra Institute

Niels Ørbæk Chemnitz

PhD fellow

IT University of Copenhagen
Department of Computer Science

Kasper Hjort Bertelsen

PhD Fellow

IT University of Copenhagen
Department of Computer Science

Emil Stubbe Kolvig-Raun

PhD Fellow

Universal Robots

Ahmad Rzgar Hamid

Software Engineer

University of Southern Denmark

Emil Njor

PhD Fellow

Technical University of Denmark

Partners