Categories
News

Maja conducts research into green algorithms: All projects count

15 June 2023

Maja conducts research into green algorithms: All projects count

Maja Hanne Kirkeby is Associate Professor at Roskilde University (RUC) and works closely with companies and other researchers to develop more energy efficient software solutions.

A DIREC project on green algorithms last year became the starting point for a number of new research projects and, subsequently, a close collaboration with the IT company Nine A/S.

Read more in Danish

Categories
News

Intelligent technology must help prevent a repeat of the floods of 2011 and 2013

13 April 2023

Intelligent technology must help prevent a repeat of the floods of 2011 and 2013  

Denmark must prepare for more extreme weather in the future. By using machine learning and artificial intelligence, researchers aim to prevent floods more effectively.

January 2023 was the wettest month ever measured in Denmark. The development is a result of climate change, and in the future we must prepare to handle even greater amounts of rain and wastewater.

Read more (in Danish)

Visit to HOFOR on September 22, 2022, with participants from AAU, ITU, DHI, Biofos and HOFOR

Categories
News

Climate change: We need to act now, and we need help from digital technology

28 March 2023

Climate change: We need to act now, and we need help from digital technology  

A recent report from the Intergovernmental Panel on Climate Change makes for distressing reading. Digital development must speed up, and researchers can play a leading role in developing digital solutions to counteract climate change.

Of course, digitisation alone cannot eliminate CO2 emissions, but several reports have concluded that digital technology can make a difference when it comes to climate change. A report from the Royal Society estimates that digital technology can contribute one third of England’s CO2 reduction target for 2030, and the Boston Consulting Group estimates that digital solutions can reduce companies’ CO2 emissions by 5-10%, corresponding to a reduction of 2.6-5.3 gigatons of CO2 if the solutions are implemented globally.

Read more in Danish

Categories
Bridge project

Multimodal Data Processing of Earth Observation Data

DIREC project

Multimodal Data Processing of Earth Observation Data

Summary

Based on Earth observations, a number of Danish public organizations build and maintain important data foundations that are used for decision-making, e.g., for executing environmental law or making planning decisions in both private and public organizations in Denmark.  

Together with some of these public organizations, this project aims to support the digital acceleration of the green transition by strengthening the data foundation for environmental data. There is a need for public organizations to utilize new data sources and create a scalable data warehouse for Earth observation data. This will involve building processing pipelines for multimodal data processing and designing user-oriented data hubs and analytics. 

 

Project period: 2022-2025
Budget: DKK 12.27 million

The Danish partnership for digitalization has concluded that there is a need to support the digital acceleration of the green transition. This includes strengthening efforts to establish a stronger data foundation for environmental data. Based on observations of the Earth, a range of Danish public organizations build and maintain important data foundations. Such foundations are used for decision making, e.g., for executing environmental law or making planning decisions in both private and public organizations in Denmark.

The increasing possibilities for automated data collection and processing can decrease the cost of creating and maintaining such data foundations and can improve services by providing more accurate and richer information. To realize these benefits, public organizations need to be able to utilize the new data sources that become available, e.g., to automate manual data curation tasks and increase the accuracy and richness of data. However, the organizations are challenged by the ability of available methods to efficiently combine the different sources of data for their use cases. This is particularly the case when user-facing tools must be constructed on top of the data foundation. The availability of better data for end-users will, among other things, help users decrease the cost of executing environmental law and making planning decisions. In addition, the ability of public data sources to provide more value to end-users improves the societal return on investment for publishing these data, which is in the interest of the public data providers as well as their end-users and society at large.

The Danish Environmental Protection Agency (EPA) has the option to receive data from many data sources but does not utilize them today, because the lack of infrastructure makes it cost-prohibitive to take advantage of the data. The EPA therefore expresses a need for methods to enable a data hub that provides data products combining satellite, orthophoto and IoT data. The Danish Geodata Agency (GDA) collects very large quantities of Automatic Identification System (AIS) data from ships sailing in Denmark but uses this data only to a very limited degree today. The GDA needs methods to enable a data hub that combines multiple sources of ship-based data, including AIS data, ocean observation data (sea level and sea temperature) and meteorological data. There is a need for analytics on top that can provide services for estimating travel time at sea or finding the most fuel-efficient routes. This includes estimating the potential for lowering CO2 emissions at sea by following efficient routes.

Geo supports professional users in performing analyses of subsurface conditions based on the company’s own extensive data, gathered from tens of thousands of geotechnical and environmental drilling operations, and on public sources. Geo delivers a professional software tool that presents this multimodal data in novel ways and is actively working on an educational platform giving high school students access to the same data. Geo has an interest in and need for methods for adding live, multimodal data to its platform, to support both professional decision makers and students. Furthermore, it needs novel ways of querying and representing such data to make them accessible to professionals and students alike. Creating a testbed that combines Geo’s data with satellite feeds and automated processing to interpret this data will create new synergies and has the potential to greatly improve visualizations of the subsurface by building detailed regional and national 3D voxel models.

Therefore, the key challenges that this project will address are how to construct scalable data warehouses for Earth observation data, how to design systems for combining and enriching multimodal data at scale, and how to design user-oriented data interfaces and analytics to support domain experts. This will help the organizations produce better data for the benefit of the green transition of Danish society.

The aim of the project is to do use-inspired basic research on methods for multimodal processing of Earth observation data. The research will cover the areas of advanced and efficient big data management, software engineering, Internet of Things, and machine learning. The project will conduct research in these areas in the context of three domain cases: with GDA on sea data and with EPA and Geo on environmental data.

Scalable data warehousing is the key challenge that the work on advanced and efficient big data management will address. The primary research question is how to build a data warehouse holding billions of rows of all relevant domain data. AIS data from GDA will be studied, and in addition to storage, data cleaning will be addressed. On top of the data warehouse, machine learning algorithms must be enabled to compute the fastest and most fuel-efficient route between two arbitrary destinations.
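To make the ingestion-and-cleaning step concrete, here is a minimal sketch in Python, using the standard-library sqlite3 module as a stand-in for a production-scale warehouse; the table layout, field names and plausibility thresholds are illustrative assumptions, not the project’s actual design.

```python
import sqlite3

# Stand-in for a production data warehouse; the schema and thresholds below
# are illustrative assumptions, not the project's actual design.
con = sqlite3.connect("ais_warehouse.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS ais_position (
        mmsi       INTEGER NOT NULL,   -- ship identifier
        ts_utc     TEXT    NOT NULL,   -- ISO-8601 timestamp
        lat        REAL    NOT NULL,
        lon        REAL    NOT NULL,
        sog_knots  REAL                -- speed over ground
    )
""")

def is_plausible(msg):
    """Basic cleaning: drop messages with out-of-range coordinates or speeds."""
    return (-90.0 <= msg["lat"] <= 90.0
            and -180.0 <= msg["lon"] <= 180.0
            and (msg["sog"] is None or 0.0 <= msg["sog"] <= 60.0))

def ingest(messages):
    """Bulk-insert the AIS messages that pass the plausibility checks."""
    rows = [(m["mmsi"], m["ts"], m["lat"], m["lon"], m["sog"])
            for m in messages if is_plausible(m)]
    con.executemany("INSERT INTO ais_position VALUES (?, ?, ?, ?, ?)", rows)
    con.commit()

ingest([{"mmsi": 219000001, "ts": "2022-06-01T12:00:00Z",
         "lat": 55.7, "lon": 12.6, "sog": 11.5}])
```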

Processing pipelines for multimodal data processing are the key topic for the work on software engineering, Internet of Things, and machine learning. The primary research question is how to engineer data processing pipelines that allow data to be enriched through processes of transformation and combination. In the EPA case there is a need for enriching data by combining data across sources (e.g., satellite and drone) and modalities (e.g., the NDVI index for quantifying vegetation greenness is a function over a red and a near-infrared band). Furthermore, we will research methods for easing the process of bringing disparate data into a form that can be inspected by both human and AI users. For example, data sources are automatically cropped to a polygon representing a given area of interest (such as a city, municipality or country), normalized for comparability, and subjected to data augmentation in order to improve machine learning performance. We will leverage existing knowledge on graph databases. We aim to facilitate the combination of satellite data with other sources such as sensor recordings at specific geo-locations. This allows for advanced data analysis of a wide variety of phenomena, such as detection and quantification of objects and changes over time, which in turn allows for prediction of future occurrences.
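To illustrate the band arithmetic, the sketch below computes NDVI from two co-registered NumPy arrays assumed to hold the red and near-infrared bands, together with a simple min-max normalization step; reading, reprojecting and cropping the rasters to a polygon of interest are left out.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index from co-registered band arrays.

    `red` and `nir` are float arrays of identical shape (reflectance values);
    the result lies in [-1, 1], with higher values indicating greener vegetation.
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def normalize(band):
    """Min-max normalization to [0, 1], e.g. before feeding data to a model."""
    lo, hi = np.nanmin(band), np.nanmax(band)
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

# Toy 2x2 scene: NDVI close to 1 indicates dense vegetation.
red = np.array([[0.05, 0.30], [0.10, 0.25]])
nir = np.array([[0.60, 0.35], [0.55, 0.30]])
print(ndvi(red, nir))
print(normalize(nir))
```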

User-oriented data hubs and analytics is a cross-cutting topic with the aim to design interfaces and user-oriented analytics on top of the data warehouses and processing pipelines. In the EPA case the focus is on developing a Danish data hub with Earth observation data. The solution must provide a uniform interface for working with the data, offering a user-centric view of the data representation. This will enable decision support systems, which will be worked on in the Geo case, that may be augmented by artificial intelligence and made understandable to human users through explorative graph-based user interfaces and data visualizations. For the GDA case the focus is on a web frontend for querying AIS data as trajectories and heat maps and for estimating the travel time between two points in Danish waters. As part of the validation, the data warehouse and related services will be deployed at GDA and serve as the foundation for future GDA services.
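As a toy stand-in for the travel-time analytics such a frontend could expose, the sketch below estimates sailing time between two coordinates as great-circle distance divided by an assumed average speed; a real service would route along observed AIS trajectories rather than a straight line, and both the coordinates and the speed are illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 coordinates."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def travel_time_hours(origin, destination, avg_speed_knots=12.0):
    """Naive straight-line estimate; the assumed average speed is illustrative."""
    dist_km = haversine_km(*origin, *destination)
    return dist_km / (avg_speed_knots * 1.852)  # 1 knot = 1.852 km/h

# Roughly Copenhagen harbour to Aarhus harbour.
print(round(travel_time_hours((55.69, 12.60), (56.15, 10.22)), 1), "hours")
```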

Advancing the means to process, store and use Earth observation data has many potential domain applications. To build world-class computer science research and innovation centres, as per the long-term goal of DIREC, this project focuses on building the competencies necessary to address challenges with Earth observation data, drawing on advances in big data management, software engineering, Internet of Things, and machine learning.

Scientific value
The project’s scientific value is the development of new methods and techniques for scalable data warehousing, processing pipelines for multimodal data and user-oriented data hubs and analytics. We expect to publish at least seven rank A research articles and to demonstrate the potential of the developed technologies in concrete real-world applications.

Capacity building
The project will build and strengthen research capacity in Denmark directly through the education of two PhDs and through the collaboration between researchers, domain experts, and end-users, which will lead to R&D growth in the public and industrial sectors. Research competences that support a stronger digital foundation for the green transformation are important for Danish society and the associated industrial sectors.

Societal and business value
The project will create societal and business value by providing the foundation for the Blue Denmark to reduce environmental and climate impact in Danish and Greenlandic waters and help support the green transformation. With ever-increasing human activity at sea, growing transportation of goods (90% of which is transported by shipping), and the goal of a carbon-neutral European economy, there is a need to activate marine data to support this transformation. For the environmental protection sector, the project will provide the foundation for efforts to increase biodiversity in Denmark through better protection of fauna types and data-supported execution of environmental law. The project will provide significant societal value and directly contribute to SDGs 13 (climate action), 14 (life below water) and 15 (life on land).

In conclusion, the project will provide a strong contribution to the digital foundation for the green transition and support Denmark being a digital frontrunner in this area.

Impact

The project will provide the foundation for the Blue Denmark to reduce environmental and climate impact in Danish and Greenlandic waters to help support the green transformation.  

Participants

Project Manager

Aslak Johansen

Associate Professor

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

E: asjo@mmmi.sdu.dk

Christian S. Jensen

Professor

Aalborg University
Department of Computer Science

Thiago Rocha Silva

Associate Professor

University of Southern Denmark
Maersk Mc-Kinney Moller Institute

Kristian Torp

Professor

Aalborg University
Department of Computer Science

Kristian Tølbøl Rasmussen

Head of Visual Computing Lab

The Alexandra Institute

Mads Darø Kristensen

Principal Application Architect

The Alexandra Institute

Søren Krogh Sørensen

Software Developer

The Alexandra Institute

Oliver Hjermitslev

Visual Computing Specialist

The Alexandra Institute

Mads Robenhagen Mølgaard

Department Director

GEO
Geodata & Subsurface Models

Ove Andersen

Special Consultant

Danish Geodata Agency

Niels Tvilling Larsen

Head of Department

Danish Geodata Agency
Danish Hydrographic Office

Sarah Lønholt

Special Consultant

Danish Environmental Protection Agency

Partners

Categories
Explore project

Hardware/Software Trade-offs for the Reduction of Energy Consumption

DIREC project

Hardware/Software Trade-offs for the Reduction of Energy Consumption

Summary

Computing devices consume a considerable amount of energy. Implementing algorithms in hardware using field-programmable gate arrays (FPGAs) can be more energy efficient than executing them in software in a processor.

This project explores classic sorting and path-finding algorithms and compares their energy efficiency and performance when implemented in hardware versus software.

The use of FPGAs is increasing in mainstream computing, and the project may enable software developers to use a functional language to efficiently implement algorithms on FPGAs and reduce energy consumption.

Computing devices consume a considerable amount of energy. In data centers this has an impact on climate change, and in small embedded systems, i.e., battery-powered devices, energy consumption determines battery life. Implementing an algorithm in hardware (in a chip) is more energy efficient than executing it in software on a processor. Until recently, processor performance and energy efficiency have been good enough to simply run software on a standard processor or on a graphics processing unit. However, this performance increase is coming to an end, and energy-efficient computing systems need domain-specific hardware accelerators.

However, the cost of producing a chip is very high. Between fixed hardware and software lies the technology of field-programmable gate arrays (FPGAs). FPGAs are programmable hardware; the algorithm can be changed at runtime. However, FPGAs are less energy efficient than dedicated chips. We expect that for some algorithms an FPGA will still be more energy efficient than a software implementation. The research question is whether and how it is possible to reduce the energy consumption of IT systems by moving algorithms from software into hardware (FPGAs). We will do this by investigating classic sorting and path-finding algorithms and comparing their energy efficiency and, in addition, their performance.
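The comparison itself reduces to energy = power × time for each implementation, often summarised together with the energy-delay product. A small sketch with placeholder numbers (not measurements) shows the arithmetic:

```python
def energy_joules(power_watts, runtime_seconds):
    """Energy consumed by one implementation: E = P * t."""
    return power_watts * runtime_seconds

# Placeholder figures for sorting the same dataset; real values would come
# from measurements on the CPU and on the FPGA board.
implementations = {
    "CPU sort":  {"power_w": 65.0, "runtime_s": 0.40},
    "FPGA sort": {"power_w": 8.0,  "runtime_s": 1.10},
}

for name, m in implementations.items():
    e = energy_joules(m["power_w"], m["runtime_s"])
    edp = e * m["runtime_s"]  # energy-delay product weighs energy against speed
    print(f"{name}: {e:5.1f} J, EDP = {edp:5.2f} J*s")
```

With these placeholder numbers the FPGA implementation is slower but consumes roughly a third of the energy, which is exactly the kind of trade-off the project sets out to measure for real workloads.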

Such results are essential to both data centers and embedded systems. However, the hardware design of these accelerators is often complex, and their development is time-consuming and error-prone. Therefore, we need a tool and methodology that enable software engineers to design efficient hardware implementations of their algorithms. We will explore a modern hardware construction language, Chisel. Chisel is a Scala-embedded hardware construction language that allows hardware to be described in a more software-like, high-level language. Chisel is the enabling technology for simplifying the translation of a program from software into hardware. This project will furthermore investigate how efficiently the functional and object-oriented hardware description language Chisel can express algorithms for execution on FPGAs.

Programs running on a general-purpose computer consume a considerable amount of energy. Some programs can be translated into hardware and executed on an FPGA. This project will explore the trade-offs between executing a program in hardware and executing it in software with respect to energy consumption.

Scientific Value
The FPGA and software implementations of path-finding algorithms have recently been evaluated through the lens of performance, e.g., [?], whereas sorting algorithms have also been evaluated on energy consumption, e.g., [2]. Here, FPGAs performed better than CPUs in many cases, with similar or reduced energy consumption. The languages used for implementation were Verilog and C, the latter translated to Verilog using Vivado HLS. In this project, we will implement the algorithms in hardware using Chisel and evaluate their performance and energy consumption. DTU and RUC will advance research in the design and testing of digital systems for energy saving. Our proposed approach provides a general software engineering procedure that we plan to validate with standard algorithms used in cloud applications. This research will drive the adaptation of the hardware design curriculum towards modern tools and agile methods.

Capacity Building
The project establishes a new collaboration between two Danish universities and is a first step towards building a more energy-aware profile for the Computer Science laboratory FlexLab at RUC. In return, FlexLab makes FPGAs available to the research assistants at RUC. Thus, this project will improve the visibility of energy-aware design of IT systems nationally and internationally. The cooperation between researchers at DTU and RUC will allow Denmark to take the lead in digital research and development for reduced energy consumption. The upcoming research positions at RUC will contribute to building RUC’s research capacity, and the project will also recruit new junior researchers, both directly and in subsequent projects.

Business Value
The changes in the hardware industry indicate that the use of FPGAs will increase: a few years ago Intel bought Altera, one of the two largest FPGA vendors, to include FPGAs in future versions of its processors. Similarly, AMD is acquiring Xilinx, the other big FPGA vendor. In addition, one can already rent a cloud server from Amazon that includes an FPGA. These changes all point towards FPGAs entering mainstream computing. Many mainstream programming languages like C# or Java already include functional features such as lambda expressions and higher-order functions. The most common languages for programming FPGAs are Verilog, a C-inspired language, and VHDL, a Pascal-inspired language. Therefore, it may be attractive for mainstream software developers to use a functional language to efficiently implement algorithms on FPGAs and thus both increase performance and reduce energy consumption.

Societal Value
Currently, ICT consumes approximately 10% of global electricity, and this is estimated to increase to 20% by 2030. Thus, reducing the energy consumption of ICT is critical. If successful, this project has the potential to reduce energy consumption by moving essential software programs onto FPGA units.

Impact

Currently, ICT consumes approximately 10% of global electricity, and this is estimated to increase to 20% by 2030. Thus, reducing the energy consumption of ICT is critical. If successful, this project has the potential to reduce energy consumption by moving essential software programs onto FPGA units.
 

Participants

Project Manager

Maja Hanne Kirkeby

Assistant Professor

Roskilde University
Department of People and Technology

E: majaht@ruc.dk

Martin Schoeberl

Associate Professor

Technical University of Denmark
DTU Compute

Mads Rosendahl

Associate Professor

Roskilde University
Department of People and Technology

Thomas Krabben

FlexLab Manager

Roskilde University
Department of People and Technology

Categories
Bridge project

HERD: Human-AI Collaboration: Engaging and Controlling Swarms of Robots and Drones

DIREC project

HERD: Human-AI Collaboration

- Engaging and Controlling Swarms of Robots and Drones

Summary

Today, robots and drones take on an increasingly broad set of tasks. However, such robots are limited in their capacity to cooperate with one another and with humans. How can we leverage the potential benefits of having multiple robots working in parallel to reduce time to completion? If robots are given the task collectively as a swarm, they could potentially coordinate their operation on the fly and adapt based on local conditions to achieve optimal or near-optimal task performance.  

Together with industrial partners, this project aims to address multi-robot collaboration and design and evaluate technological solutions that enable users to engage and control autonomous multi-robot systems.

Project period: 2021-2025
Budget: DKK 17.08 million

Robots and drones take on an increasingly broad set of tasks; examples include AgroIntelli’s autonomous farming robot and Robotto’s drone-based emergency response systems. Currently, however, such robots are limited in their capacity to cooperate with one another and with humans. In the case of AgroIntelli, for instance, only one robot can be deployed on a field at a time, and it is unable to respond effectively to the presence of a human-driven tractor or even another farming robot working in the same field. In the future, AgroIntelli wants to leverage the potential benefits of having multiple robots working in parallel on the same field to reduce time to completion. A straightforward way to achieve this is to partition the field into several distinct areas corresponding to the number of robots available and then assign each robot its own area. However, such an approach is inflexible and requires detailed a priori planning. If, instead, the robots were given the task collectively as a swarm, they could potentially coordinate their operation on the fly and adapt based on local conditions to achieve optimal or near-optimal task performance.

Similarly, Robotto’s system architecture currently requires one control unit to manage each deployed drone. In large-area search scenarios and operations in complex terrain, the coverage provided by a single drone is insufficient. Multiple drones can provide real-time data on a larger surface area and from multiple perspectives, thereby aiding emergency response teams in their time-critical operations. In the current system, however, each additional drone requires a dedicated operator and control unit. Coordination between operators introduces overhead, and it can become a struggle to maintain a shared understanding of a rapidly evolving situation. There is thus a need to develop control algorithms for drone-to-drone coordination and interfaces that enable high-level management of the swarm from a single control console. This complexity requires advanced interactions that keep the data actionable and simple while still supporting the critical demands of the operation. This challenge is relevant to search & rescue (SAR) as well as other service offerings in the roadmap, including firefighting, inspections, and first responder missions.

For both of our industrial partners, AgroIntelli and Robotto, and for similar companies that are pushing robotics technology toward real-world application, there is a clear unmet need for approaches that enable human operators to effectively engage and control systems composed of multiple autonomous robots. This raises a whole new set of challenges compared to the current paradigm where there is a one-to-one mapping between operator and robot. The operator must be able to interact with the system at the swarm level as a single entity to set mission priorities and constraints, and at the same time, be able to intervene and take control of a single robot or a subset of robots. An emergency responder may, for instance, want to take control over a drone to follow a civilian or a group of personnel close to a search area, while a farmer may wish to reassign one or more of her farming robots to another field.

HERD will build an understanding of the challenges in multi-robot collaboration, and design and evaluate technological solutions that enable end-users to engage and control autonomous multi-robot systems. The project will build on use cases in agriculture and search & rescue supported by the industrial partners’ domain knowledge and robotic hardware. Through the research problems and aims outlined below, we seek to enable the next generation of human-swarm collaboration.

Pre-operation and on-the-fly mission planning for robot swarms: An increase in the number of robots under the user’s control has the potential to lead to faster task completion and/or higher-quality results. However, the increase in unit count significantly increases the complexity of both end-user-to-robot communication and coordination between robots. As such, it is critical to support the user in efficient and effective task allocation between robots. We will answer the following research questions: (i) What are the functionalities required for humans to effectively define mission priorities and constraints at the swarm level? (ii) How can robotic systems autonomously divide tasks based on location, context, and capability, and under the constraints defined by the end-user? (iii) How does the use of autonomous multi-robot technologies change existing organizational routines, and which new ones are required?
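As a minimal illustration of research question (ii), the sketch below assigns robots to field sections so that total travel distance is minimised, using the Hungarian algorithm from SciPy; the positions are made up, and a real planner would also respect capability and user-defined mission constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up 2D positions (in metres) of robots and of the sections to be covered.
robots = np.array([[0.0, 0.0], [50.0, 10.0], [100.0, 0.0]])
sections = np.array([[10.0, 40.0], [60.0, 45.0], [110.0, 35.0]])

# Cost matrix: Euclidean distance from each robot to each section.
cost = np.linalg.norm(robots[:, None, :] - sections[None, :, :], axis=2)

# Hungarian algorithm: one section per robot, minimising total distance.
robot_idx, section_idx = linear_sum_assignment(cost)
for r, s in zip(robot_idx, section_idx):
    print(f"robot {r} -> section {s} ({cost[r, s]:.1f} m)")
print(f"total travel distance: {cost[robot_idx, section_idx].sum():.1f} m")
```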

Situational awareness under uncertainty in multi-robot tasks: Users of AI-driven (multi-)robot systems often wish to simulate robot behaviour across multiple options to determine the best possible approach to the task at hand. Given the context-dependent and algorithm-driven nature of these robots, simulation accuracy can only be achieved up to a limited degree. This inherent uncertainty negatively impacts the user’s ability to make an informed decision on the best approach to task completion. We will support situational awareness in the control of multi-robot systems by studying: (i) How to determine and visualise levels of uncertainty in robot navigation scenarios to optimise user understanding and control? (ii) What are the implications of the digital representation of the operational environment for organizational sensemaking? (iii) How can live, predictive visualisations of multi-robot trajectories and task performance support the steering and directing of robot swarms from afar?
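One simple way to quantify such uncertainty is to roll a robot’s motion model forward many times under noise and summarise the spread of the predicted positions, which a user interface could then render as an uncertainty ellipse. The sketch below does this for a single robot with an assumed constant-speed model and Gaussian heading noise; the model and its parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

def predict_positions(start, heading, speed, steps, n_samples=1000, noise_std=0.05):
    """Monte Carlo rollout of a noisy constant-speed motion model.

    Returns an (n_samples, 2) array of predicted positions after `steps` time
    steps; `noise_std` is the assumed per-step heading noise in radians.
    """
    pos = np.tile(np.asarray(start, dtype=float), (n_samples, 1))
    theta = np.full(n_samples, heading, dtype=float)
    for _ in range(steps):
        theta += rng.normal(0.0, noise_std, n_samples)
        pos += speed * np.column_stack((np.cos(theta), np.sin(theta)))
    return pos

end = predict_positions(start=(0.0, 0.0), heading=0.0, speed=1.0, steps=30)
mean, spread = end.mean(axis=0), end.std(axis=0)  # spread could drive an uncertainty ellipse
print(f"predicted position ~ ({mean[0]:.1f}, {mean[1]:.1f}), std ({spread[0]:.1f}, {spread[1]:.1f})")
```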

User intervention and control of swarm subsets: Given the potentially rapidly changing contexts in which the robots operate, human operators will regularly have to deviate from a predetermined plan for a subset of robots. This raises novel research questions both in terms of robot control, in which the swarm might depend on a sufficient number of nearby robots to maintain communication, and in terms of user interaction, in which accurate robot selection and information overload can quickly raise issues. We will therefore answer the following research questions:

(i) When a user takes low-level control of a single robot or subset of a robot swarm, how should that be done, and how should the rest of the system respond?

(ii) How can the user interfaces help the user to understand the potential impact when they wish to intervene or deviate from the mission plans?

Validation of solutions in real-world applications: Based on the real-world applications of adaptive herbicide spraying by farming robots and search & rescue as provided by our industrial partners, we will validate the solutions developed in the project. While both industrial partners deal with robotic systems, their difference in both application area and technical solution (in-the-air vs. on land) allows us to assess the generalisability and efficiency of our solutions in real-world applications. We will answer the following research questions:

(i) What common solutions should be validated in both scenarios and which domain-specific solutions are relevant in the respective types of scenarios?

(ii) What business and organisational adaptation and innovation are necessary for swarm robotics technology to be successfully adopted in the public and private sectors?

Advances in AI, computer science, and mechatronics mean that robots can be applied to an increasingly broad set of domains. To build world-class computer science research and innovation centres, as per the long-term goal of DIREC, this project focuses on building the competencies necessary to address the complex relationship between humans, artificial intelligence, and autonomous robots.

Scientific value
The project’s scientific value is the development of new methods and techniques to facilitate effective interaction between humans and complex AI systems and the empirical validation in two distinct use cases. The use cases provide opportunities to engage with swarm interactions across varying demands, including domains where careful a priori planning is possible (agricultural context) and chaotic and fast-paced domains (search & rescue with drones). HERD will thus lead to significant contributions in the areas of autonomous multi-robot coordination and human-robot interaction. We expect to publish at least ten rank A research articles and to demonstrate the potential of the developed technologies in concrete real-world applications. This project also gears up the partners to participate in project proposals to the EU Framework Programme on specific topics in agricultural robotics, nature conservation, emergency response, security, and so on, and in general topics related to developing key enabling technologies.

Capacity building
HERD will build and strengthen the research capacity in Denmark directly through the education of three PhDs, and through the collaboration between researchers, domain experts, and end-users that will lead to industrial R&D growth. Denmark has been a thought leader in robotics, innovating how humans collaborate with robots in manufacturing and architecture, e.g. Universal Robots, MiR, Odico, among others. Through HERD, we support not only the named partners in developing and improving their products and services, but the novel collaboration between the academic partners, who have not previously worked together, helps to ensure that the Danish institutes of higher education build the competencies and the workforce that are needed to ensure continued growth in the sectors of robotics and artificial intelligence. HERD will thus contribute to building the capacity required to facilitate effective interaction between end-users and complex AI systems.

Business value
HERD will create business value through the development of technologies that enable end-users to effectively engage and control systems composed of multiple robots. These technologies will significantly increase the value of the industrial partners’ products, since current tasks can be done faster and at a lower cost, and entirely new tasks that require multiple coordinated robots can be addressed. The value increase will, in turn, increase sales and exports. Furthermore, multi-robot systems have numerous potential application domains in addition to those addressed in this project, such as infrastructure inspection, construction, environmental monitoring, and logistics. The inclusion of DTI as partner will directly help explore these opportunities through a broader range of anticipated tech transfer, future market and project possibilities.

Societal value
HERD will create significant societal value and directly contribute to SDGs 1 (no poverty), 2 (zero hunger), 13 (climate action), and 15 (life on land). Increased use of agricultural robots can, for instance, lead to less soil compaction and enable the adoption of precision agriculture techniques, such as mechanical weeding that eliminates the need for pesticides. Similarly, increased use of drones in search & rescue can reduce the time needed to save people in critical situations.

Impact

The project will develop technologies that enable end-users to effectively engage and control systems composed of multiple robots.

Systems composed of multiple robots will significantly increase the value of industrial products, since current tasks can be done faster and at a lower cost, and entirely new tasks that require multiple coordinated robots can be addressed. 

News / coverage

Participants

Project Manager

Anders Lyhne Christensen

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

E: andc@mmmi.sdu.dk

Ulrik Pagh Schultz

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Mikael B. Skov

Professor

Aalborg University
Department of Computer Science

Timothy Robert Merritt

Associate Professor

Aalborg University
Department of Computer Science

Niels van Berkel

Associate Professor

Aalborg University
Department of Computer Science

Ioanna Constantiou

Professor

Copenhagen Business School
Department of Digitalization

Kenneth Richard Geipel

Chief Executive Officer

Robotto

Christine Thagaard

Marketing Manager

Robotto

Lars Dalgaard

Head of Section

Danish Technological Institute
Robot Technology

Alea Scovill

Strategic Project Manager

Agro Intelligence ApS

Kasper Grøntved

PhD Student

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Maria-Theresa Oanh Hoang

PhD Student

Aalborg University
Department of Computer Science

Alexandra Hettich

PhD Student

Copenhagen Business School
Department of Digitalization

Partners

Categories
Bridge project

Mobility Analytics using Sparse Mobility Data and Open Spatial Data

DIREC project

Mobility Analytics using Sparse Mobility Data and Open Spatial Data

Summary

Both society and industry have a substantial interest in well-functioning outdoor and indoor mobility infrastructures that are efficient, predictable, environmentally friendly, and safe. For outdoor mobility, reduction of congestion is high on the political agenda as is the reduction of CO2 emissions, as the transportation sector is the second largest in terms of greenhouse gas emissions. For indoor mobility, corridors and elevators represent bottlenecks for mobility in large building complexes.  

The amount of mobility-related data has increased massively, enabling an increasingly wide range of analyses. When combined with digital representations of road networks and building interiors, this data holds the potential for a more fine-grained understanding of mobility and for more efficient, predictable, and environmentally friendly mobility.

Project period: 2021-2024
Budget: DKK 9.41 million

The mobility of people and things is an important societal process that facilitates and affects the lives of most people. Thus, society, including industry, has a substantial interest in well-functioning outdoor and indoor mobility infrastructures that are efficient, predictable, environmentally friendly, and safe. For outdoor mobility, reduction of congestion is high on the political agenda; it is estimated that congestion costs Denmark 30 billion DKK per year. Similarly, the reduction of CO2 emissions from transportation is on the political agenda, as the transportation sector is the second largest in terms of greenhouse gas emissions. Danish municipalities are interested in understanding the potential for integrating various types of e-bikes in transportation planning. Increased use of such bicycles may contribute substantially to the greening of transportation and may also ease congestion and thus improve travel times. For indoor mobility, corridors and elevators represent bottlenecks in large building complexes (e.g. hospitals, factories and university campuses). With the addition of mobile robots, humans and robots will also be competing for the same space when moving indoors. Heavy use of corridors is also a source of noise that negatively impacts building occupants.

The ongoing, sweeping digitalisation has also reached outdoor and indoor mobility. Thus, increasingly massive volumes of mobility-related data, e.g. from sensors embedded in the road and building infrastructures, networked positioning (e.g. GPS or UWB) devices (e.g. smartphones and in-vehicle navigation devices) or indoor mobile robots, are becoming available. This enables an increasingly wide range of analyses related to mobility. When combined with digital representations of road networks and building interiors, this data holds the potential for enabling a more fine-grained understanding of mobility and for enabling more efficient, predictable, and environmentally friendly mobility. Long movement times equate with congestion and bad overall experiences.

The above data foundation offers a basis for understanding how well a road network or building performs across different days and across the duration of a day, and it offers the potential for decreased movement times by means of improved mobility flows and routing. However, there is an unmet need for low-cost tools, usable by municipalities and building providers (e.g. mobile robot manufacturers), that enable a wide range of analytics on top of mobility data. To meet this need, the project will:

  1. Build extract-transform-load (ETL) prototypes that are able to ingest high- and low-frequency spatial data (e.g. GPS and indoor positioning data). These prototypes must enable map-matching of spatial data to open road network and building representations and must enable privacy protection (a minimal map-matching sketch follows this list).
  2. Design effective data warehouse schemas that can be populated with ingested spatial data.
  3. Build mobility analytics warehouse systems that are able to support a broad range of analyses in interactive time.
  4. Build software systems that enable users to formulate analyses and visualise results in maps-based interfaces for both indoor and outdoor use. This includes infrastructure for the mapping of user input into database queries and the maps-based display of results returned by the data warehouse system.
  5. Develop a range of advanced analyses that address user needs. Possible analyses include congestion maps, isochrones, aggregate travel-path analyses, origin-destination travel time matrices, and what-if analyses where the effects of reconstruction are estimated (e.g. adding an additional lane to a stretch of road or changing corridors). For outdoor settings, CO2-emission analyses based on vehicular environmental impact models and GPS data are also considered.
  6. Develop transfer learning techniques that make it possible to leverage spatial data from dense spatio-temporal “regions” for enabling analyses in sparse spatio-temporal regions.
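The map-matching step from item 1 can be sketched in its very simplest form: snap each GPS point to the nearest segment of a (here hard-coded) road network by projecting onto line segments. Production pipelines would instead run a proper map-matching algorithm, e.g. an HMM-based matcher, over OSM geometries.

```python
import numpy as np

# Tiny hard-coded "road network": each segment is (start_xy, end_xy) in a
# local metric coordinate system. A real pipeline would load OSM geometries.
segments = [
    (np.array([0.0, 0.0]), np.array([100.0, 0.0])),
    (np.array([100.0, 0.0]), np.array([100.0, 80.0])),
]

def snap_to_segment(p, a, b):
    """Project point p onto segment a-b; return (snapped_point, distance)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    q = a + t * ab
    return q, float(np.linalg.norm(p - q))

def map_match(points):
    """Naive map-matching: snap each point to its nearest road segment."""
    matched = []
    for p in points:
        best = min((snap_to_segment(p, a, b) for a, b in segments),
                   key=lambda qd: qd[1])
        matched.append(best[0])
    return np.array(matched)

gps = np.array([[10.0, 3.0], [60.0, -2.0], [98.0, 20.0]])
print(map_match(gps))
```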

Value creation
The envisioned prototype software infrastructure characterised above aims to replace commercial road network maps with the crowd-sourced OpenStreetMap (OSM) and, for indoor settings, to enable new data sources describing the indoor geography. The open data might not be curated, which means that new quality control tools are required to ensure that computed travel times are correct. This will reduce cost.

Next, the project will provide means of leveraging available spatial data as efficiently and effectively as possible. In particular, while more and more data becomes available, the available data will remain sparse in relation to important analyses. This is due to the cost of purchasing data and to the lack of desired data. Thus, it is important to exploit the available data as well as possible. We will examine how to transfer data from locations and times with ample data to locations and times with insufficient data. For example, we will study transfer learning techniques for this purpose, and as part of this, we will study feature learning. This will reduce cost and will enable new analyses that were not possible previously due to a lack of data.
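One simple instance of such transfer is to fit a model on a data-rich region and use its weights as a prior when fitting a data-sparse region; the ridge-style closed form below shrinks the sparse-region weights towards the dense-region ones. The features, data and regularisation strength are invented for illustration and are not the project’s method.

```python
import numpy as np

def ridge_towards_prior(X, y, w_prior, lam=10.0):
    """Solve min_w ||X w - y||^2 + lam * ||w - w_prior||^2 in closed form."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w_prior
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)

# Dense region: plenty of (feature, travel-time) observations.
X_dense = rng.normal(size=(500, 3))
w_true = np.array([1.5, -0.5, 0.8])
y_dense = X_dense @ w_true + rng.normal(scale=0.1, size=500)
w_dense = np.linalg.lstsq(X_dense, y_dense, rcond=None)[0]

# Sparse region: only a handful of observations, so shrink towards w_dense.
X_sparse = rng.normal(size=(8, 3))
y_sparse = X_sparse @ (w_true + 0.2) + rng.normal(scale=0.1, size=8)
w_sparse = ridge_towards_prior(X_sparse, y_sparse, w_dense)
print(w_dense, w_sparse)
```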

Rambøll will be able to in-source the software infrastructure and host analytics for municipalities. Mobile Industrial Robots (MiR) will be able to in-source the software infrastructure and host analytics for building owners. Additional value will be created because the above studies will be conducted for multiple transportation modes, with a focus on cars and different kinds of e-bikes. We have access to a unique data foundation that will enable these studies.

Impact

The project will provide a prototype software infrastructure that aims to replace commercial road network maps with the crowd-sourced OpenStreetMap (OSM) and, for indoor settings, to enable new data sources describing the indoor geography.

The open data might not be curated, which means that new quality control tools are required to ensure that computed travel times are correct. This will reduce cost.

News / coverage

Participants

Project Manager

Christian S. Jensen

Professor

Aalborg University
Department of Computer Science

E: csj@cs.aau.dk

Ira Assent

Professor

Aarhus University
Department of Computer Science

Kristian Torp

Professor

Aalborg University
Department of Computer Science

Bin Yang

Professor

Aalborg University
Department of Computer Science

Martin Møller

Chief Innovation Officer

The Alexandra Institute

Mikkel Baun Kjærgaard

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Norbert Krüger

Professor

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Avgi Kollakidou

PhD Student

University of Southern Denmark
The Maersk Mc-Kinney Moller Institute

Kasper Fromm Pedersen

Research Assistant

Aalborg University
Department of Computer Science

Helene Hauschultz

PhD Student

Aarhus University
Department of Mathematical Science

Stig Grønning Søbjærg

Engineer

Rambøll

Morten Steen Nørby

Software Manager

Mobile Industrial Robots

Hao Miao

PhD Student

Aalborg University
Department of Computer Science

Partners

Categories
Bridge project

Verifiable and Safe AI for Autonomous Systems

DIREC project

Verifiable and Safe AI for Autonomous Systems

Summary

The rapidly growing application of machine learning techniques in cyber-physical systems leads to better solutions and products in terms of adaptability, performance, efficiency, functionality and usability.

However, cyber-physical systems are often safety critical, e.g., self-driving cars or medical devices, and the need for verification against potentially fatal accidents is of key importance.

Together with industrial partners, this project aims to develop methods and tools that will enable industry to automatically synthesize correct-by-construction and near-optimal controllers for safety critical systems within a variety of domains.

Project period: 2021-2024
Budget: DKK 9.12 million

AI technologies may present new safety risks for users when they are embedded in products and services. For example, as a result of a flaw in the object recognition technology, an autonomous car can wrongly identify an object on the road and cause an accident involving injuries and material damage. This in turn makes it difficult to place liability in case of malfunction:
Under the Product Liability Directive, a manufacturer is liable for damage caused by a defective product. However, in the case of an AI-based system such as an autonomous car, it may be difficult to prove that there is a defect in the product, that damage has occurred, and that there is a causal link between the two.

What is needed are new methods where machine learning is integrated with model-based techniques such that machine-learned solutions, which typically optimise expected performance, are ensured not to violate crucial safety constraints and can be certified not to do so. Relevant domains include all types of autonomous systems where machine learning is applied to control safety-critical systems.

The research aim of the project is to develop methods and tools that will enable industry to automatically synthesise correct-by-construction and near-optimal controllers for safety-critical systems within a variety of domains. The project involves a number of scientific challenges, including the representation of strategies as neural networks (for compactness) or decision trees (for explainability). The development of strategy learning methods with statistical guarantees is also crucial.

A key challenge is understanding and specifying what safety and risk mean for model-free controllers based on neural networks. Once formal specifications are created, we aim to combine existing knowledge about property-based testing, Bayesian probabilistic programming, and model checking.
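A minimal version of that combination is sketched below: a decision-tree strategy is learned from labelled state-action examples, and a property-based-style check then samples many states from a critical region and counts how often the tree selects the unsafe action there. The toy domain (a distance/speed state with a brake-or-accelerate choice) and the safety property are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Toy controller domain: state = (distance_to_obstacle, speed),
# action 0 = brake, 1 = accelerate. Labels encode a simple demonstration policy.
states = rng.uniform(low=[0.0, 0.0], high=[100.0, 30.0], size=(2000, 2))
actions = (states[:, 0] > 2.0 * states[:, 1]).astype(int)  # accelerate only with headroom

tree = DecisionTreeClassifier(max_depth=4).fit(states, actions)

# Property-based-style check of an invented safety property:
# "if distance < 10 m and speed > 20 m/s, the strategy must brake".
critical = np.column_stack((rng.uniform(0.0, 10.0, 10000),
                            rng.uniform(20.0, 30.0, 10000)))
violations = int((tree.predict(critical) == 1).sum())
print(f"{violations} violations out of {len(critical)} sampled critical states")
```

Random sampling only gives statistical evidence; the project’s point is precisely to complement such testing with model checking so that the absence of violations can be certified rather than estimated.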

Value creation
The scientific value of the project is new fundamental theories, algorithmic methods and tools, together with the evaluation of their performance and adequacy in industrial settings. These are important contributions bridging the core research themes on AI and Verification in DIREC.

For capacity building, the value of the project is to educate PhD students and postdocs in close collaboration with industry. The profile of these PhD students will meet a demand in the companies for staff with competences in machine learning, data science, and traditional software engineering. In addition, the project will offer a number of affiliated student projects at master’s level.

For the growing number of companies relying on AI in their products, the ability to produce safety certification using approved processes and tools will be vital in order to bring safety-critical applications to the market. At the societal level, the trustworthiness of AI-based systems is of prime concern within the EU. Here, methods and tools for providing safety guarantees can play a crucial role.

Impact

For the growing number of companies relying on AI in their products, the ability to produce safety certification using approved processes and tools will be vital in order to bring safety-critical applications to the market.

At the societal level, the trustworthiness of AI-based systems is of prime concern within the EU. Here, methods and tools for providing safety guarantees can play a crucial role.

News / coverage

Participants

Project Manager

Kim Guldstrand Larsen

Professor

Aalborg University
Department of Computer Science

E: kgl@cs.aau.dk

Thomas Dyhre Nielsen

Professor

Aalborg University
Department of Computer Science

Andrzej Wasowski

Professor

IT University of Copenhagen
Department of Computer Science

Martijn Goorden

PostDoc

Aalborg University
Department of Computer Science

Esther Hahyeon Kim

PhD Student

Aalborg University
Department of Computer Science

Mohsen Ghaffari

PhD student

IT University of Copenhagen
Department of Computer Science

Martin Zimmermann

Associate Professor

Aalborg University
Department of Computer Science

Christian Schilling

Assistant Professor

Aalborg University
Department of Computer Science

Thomas Asger Hansen

Head of Analytics and AI

Grundfos

Daniel Lux

CEO

Seluxit

Karsten Lumbye

Chief Innovation Officer

Aarhus Vand

Kristoffer Tønder Nielsen

Project Manager

Aarhus Vand

Malte Skovby Ahm

Research and business lead

Aarhus Vand

Mathias Schandorff Arberg

Engineer

Aarhus Vand

Gitte Rosenkranz

Project Manager

HOFOR

Susanne Skov-Mikkelsen

Chief Consultant

HOFOR

Lone Bo Jørgensen

Senior Specialist

HOFOR

Partners