DIREC project

Online algorithms with predictions

Summary

The industrial sector faces optimization challenges with respect to resource efficiency that can be addressed using online algorithms, which must make decisions with only partial information. While machine learning can deliver effective solutions based on typical data, it may fail on unforeseen inputs. A hybrid approach is therefore needed, combining the guarantees of online algorithms with machine learning predictions. This ensures that safe decisions overrule risky predictions and quantifies solution quality as a function of prediction accuracy, in order to manage risk and prevent failures.

Project period: 2022-2025
Budget: DKK 3.5 million

All industrial sectors face optimization problems, and usually many of them, i.e., situations where one must optimize with respect to some resource. This could be minimizing material usage, or it could be optimizing time or space consumption. Examples include cutting shapes from expensive material, packing containers as efficiently as possible to minimize transportation costs, or scheduling routes or dependent tasks to finish as early as possible.

In some cases, all information is available when the processing of tasks commences, but in many situations, tasks arrive during the process, and decisions regarding their treatment must be made shortly after their arrival before further tasks appear. Such problems are referred to as “online”. Online problems lead to poorer solutions than one can obtain with their offline counterparts unless fairly precise, additional information about future tasks is available. In designing and analyzing algorithms, in general, the goal is to determine the quality of an algorithmic solution, preferably with guarantees on performance for all inputs, so that it is possible to promise delivery times or bounds on expenses, etc. Such an analysis also allows the designer to determine if it would be beneficial to search for other algorithmic solutions. Assessing the quality of the algorithms experimentally suffers from the difficulty of determining which inputs to test on and providing trustworthy worst-case bounds.

The area of online algorithms has existed for many years and provides analyses giving worst-case guarantees. However, since these guarantees hold for all inputs, even the most extreme and sometimes unrealistic ones, they are very pessimistic and often not suited for choosing good algorithms for the typical cases. Thus, in practice, companies often use techniques based on heuristic methods, machine learning, etc. Machine learning, especially, has proven very successful in many applications at providing solutions that are good in practice when presented with typical inputs. However, on inputs not captured by the training data, such an algorithm may fail dramatically.

We need to combine the desirable properties of the two worlds: the guarantees from online algorithms and the good behavior on typical inputs observed in, for instance, machine learning. That is, we need algorithms that follow predictions, provided for instance by a machine learning component, since that often gives good results; but they should not do so blindly, or the worst-case behavior will generally be even worse than the guarantees provided by standard online algorithms and their analyses. Thus, a controlling algorithmic unit should monitor the predictions that are given, so that safety decisions can overrule the predictions when things are progressing in a worrisome direction.
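
To make the controlling unit concrete, the following minimal Python sketch illustrates the general idea under simplifying assumptions. The class name, the hypothetical strategy objects with their decide and cost_so_far methods, and the threshold are our own illustrative choices, not interfaces or results from this project, and whether this kind of switching is sound for a given problem, and what guarantee it yields, must be established by analysis.

    class GuardedOnlineAlgorithm:
        """Illustrative sketch: follow predictions, but let a safety check overrule them.

        `predicted` and `robust` are hypothetical strategy objects exposing
        decide(request) and cost_so_far(); both are simulated on every request
        so that their hypothetical costs remain comparable.
        """

        def __init__(self, predicted, robust, threshold=2.0):
            self.predicted = predicted    # prediction-following strategy
            self.robust = robust          # classical algorithm with a worst-case guarantee
            self.threshold = threshold    # tolerated cost overhead before overruling
            self.following_prediction = True

        def decide(self, request):
            decision_p = self.predicted.decide(request)
            decision_r = self.robust.decide(request)
            if (self.following_prediction and
                    self.predicted.cost_so_far() > self.threshold * self.robust.cost_so_far()):
                # Safety overrules the predictions: from here on, act as the
                # classical online algorithm would.
                self.following_prediction = False
            return decision_p if self.following_prediction else decision_r

The point of simulating both strategies throughout is that the decision to overrule can be based on a direct cost comparison, rather than on trusting the predictions alone.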

We also need ways of quantifying the guaranteed quality of our solutions as a function of how closely an input resembles the predicted input (provided, for instance, by a machine learning component). This is a crucial part of risk management. We want reassurance that we do not “fall off the cliff” just because predictions are slightly off. This includes limiting the “damage” possible from machine learning adversarial attacks. As an integral part of a successful approach to this problem, we need measures of an input’s distance from the prediction (the prediction error), defined in such a manner that solution quality can be expressed as a function of this error. For online algorithm applications, such measures often need to differ from the standard loss functions used in machine learning.
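
For a cost-minimization problem, one common shape of such a guarantee in the literature (stated here purely as an illustration of the kind of statement we aim for, not as a result of this project) is that, on every input I with prediction error η(I),

    ALG(I) ≤ min{ (1 + ε) · OPT(I) + c · η(I),  r · OPT(I) },

where OPT(I) is the cost of an optimal offline solution. The first term says that the algorithm is near-optimal when the predictions are accurate (η(I) = 0) and degrades gracefully as the error grows, while the second term is a worst-case (robustness) bound that holds no matter how wrong the predictions are.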

Research problem and aims

Our main aim is to further the development of generally applicable techniques for utilizing usually good, but untrusted, predictions, while at the same time providing worst-case guarantees, in the realm of online optimization problems. We want to further establish this research topic at Danish universities and subsequently disseminate knowledge of it to industry via collaboration. Developments of this nature are, of course, pursued internationally. Progress is to a large extent made by considering carefully chosen concrete problems, their modeling and properties, extracting general techniques from those studies, and then testing the applicability of these techniques on new problems.

We are planning to initiate work on online call control and scheduling with precedence constraints. The rationale is that these problems are important in their own right and at the same time represent different types of challenges. Call control focuses on admitting as many requests as possible with limited bandwidth, whereas scheduling focuses on time, handling all requests as effectively as possible.

Call control can be seen as point-to-point requests in a network with limited capacity. The goal is to accept as profitable a collection of requests as possible. Scheduling deals with jobs of different durations that must be executed on some “machine” (not necessarily a computer), respecting precedence constraints stating that certain jobs cannot be started before certain other jobs are completed. In this problem, all jobs must be scheduled on some machine, and the target is to complete all jobs as fast as possible. To fully define these problems, more details are required about the structure of the resources and the precise optimization goals.
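
As a concrete point of reference for the scheduling problem, the following Python sketch implements a simple greedy heuristic in the spirit of list scheduling on m identical machines with precedence constraints. It uses no predictions and is only meant to make the problem statement concrete; the function name and input format are our own illustrative choices.

    from collections import defaultdict

    def list_schedule(jobs, precedences, m):
        """Greedy scheduling on m identical machines with precedence constraints.

        jobs: dict mapping job id -> processing time.
        precedences: pairs (a, b) meaning job a must finish before job b starts.
        Returns (makespan, finish_times). Assumes the precedence relation is acyclic.
        """
        preds = defaultdict(set)
        succs = defaultdict(set)
        for a, b in precedences:
            preds[b].add(a)
            succs[a].add(b)

        finish = {}                 # job -> finish time
        machine_free = [0] * m      # time at which each machine becomes free
        ready = [j for j in jobs if not preds[j]]

        while ready:
            j = ready.pop()
            # A job may start once all its predecessors are done and a machine is free.
            earliest = max((finish[p] for p in preds[j]), default=0)
            i = min(range(m), key=lambda k: machine_free[k])
            finish[j] = max(machine_free[i], earliest) + jobs[j]
            machine_free[i] = finish[j]
            for s in succs[j]:
                if all(p in finish for p in preds[s]):
                    ready.append(s)

        return max(finish.values()), finish

    # Example: jobs a and b can run in parallel; c must wait for both.
    # list_schedule({"a": 3, "b": 2, "c": 4}, [("a", "c"), ("b", "c")], m=2) gives makespan 7.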

One piece of generic insight we would like to gain, and which is currently sorely lacking in the community, is formalizable conditions for good predictions. We want the performance of algorithms to degrade gracefully with prediction errors. This is important for the explainability and trustworthiness of algorithms. Related to this, whereas some predictions may be easy to work with theoretically, it is important to focus on classes of predictions that are learnable in practice. To be useful, this also requires robustness, in the sense that minor, inconsequential changes in the input sequence compared with the prediction should not affect the result dramatically.

We are also interested in giving minor consideration to impossibility results, i.e., proving limits on how good solutions can be obtained. Whereas this is not directly constructive, it can tell us whether we are done or how close we are to an optimal algorithm, so we do not waste time trying to improve algorithms that cannot be improved, or can only be improved marginally.

Value creation

The project leads to value creation in a number of different directions.

Research-wise, with the developments in machine learning and related data science disciplines over the last years, the integration and utilization of these techniques in other areas of computer science is of great interest, and Danish research should be at the forefront of these endeavors. We facilitate this by bringing people with expertise in different topics together and consolidating knowledge of the primary techniques across institutions. Educating students in these topics is usually a nice side effect of running such a project. The primary focus, of course, is to educate the PhD student and train the research assistants, but this is accompanied by having MS students, working on their theses during the project period, solve related, well-defined subproblems.

We are advocating combined techniques that strive towards excellent typical-case performance while providing worst-case guarantees, and we believe that they should be adopted by industry to a larger extent. The project will lead to results on concrete problems, but our experience tells us that companies generally need variations of these or new solutions to somewhat different problems. Thus, the most important aspect in this regard is capacity building, so that we can assist with concrete developments for particular company-specific problems. Besides the fact that problems appear in many variations in different companies, a main reason why problem adaptation will often be necessary is that the added value of the combined algorithmic approaches is based on predictions, and it varies greatly what type of data is obtainable and which subset of the data can give useful predictions.

We have prior experience with industry consulting, the industrial PhD program, and co-advised MS students, and we maintain close relationships with local industry. After, and in principle also during, this project, we are open to subsequent joint projects with industry that take their challenges as the starting point, after which we utilize the know-how and experience gained from the current project. Such work could be done on a consultancy basis, through a joint student project, or, at a larger scale, with, for instance, the Innovation Foundation as a partner.

Finally, we see it as an advantage of our project that we include researchers who are relatively new to Denmark, so that they get to interact with more people at different institutions and expand their Danish network.

Value

The project aims to lead the integration of machine learning with other areas of computer science by bringing experts together, consolidating knowledge across institutions, and educating PhD students and master's students through collaborative projects.

Participants

Project Manager

Kim Skak Larsen

Professor

University of Southern Denmark
Department of Mathematics and Computer Science

E: kslarsen@imada.sdu.dk

Nutan Limaye

Associate Professor

IT University of Copenhagen
Department of Computer Science

Joan Boyar

Professor

University of Southern Denmark
Department of Mathematics and Computer Science

Melih Kandemir

Associate Professor

University of Southern Denmark
Department of Mathematics and Computer Science

Lene Monrad Favholdt

Associate Professor

University of Southern Denmark
Department of Mathematics and Computer Science

Magnus Berg Pedersen

PhD Student

University of Southern Denmark
Department of Mathematics and Computer Science

Tim Poulsen

Student Programmer

IT University of Copenhagen

Partners