
VRAI

Workshop on Verifiable and Robust AI

This interdisciplinary workshop will consist of a combination of invited talks, panel discussions, working groups, contributed talks, and poster presentations.

Artificial intelligence, driven to a large extent by rapid advances in deep learning technology, has produced exciting results over the last decade across a wide range of scientific disciplines and practical applications.

This success is accompanied by an increasing need for methods that explain the decisions of machine learning models, make their performance more robust under changing conditions, and can provide firm guarantees for their behavior with regard to aspects like safety, privacy preservation, and non-discrimination.

These emerging key issues for the further advancement of AI are being studied both in the AI/ML communities and by researchers from the areas traditionally concerned with the safety and verification of software systems by formal methods, such as model checking and theorem proving. However, although these communities are working towards the same goals, the interaction between them has been somewhat limited. This workshop aims to bring together researchers from the AI/ML and formal methods communities for an exchange of ideas and scientific approaches for tackling the challenge of building safe, trustworthy and robust AI systems.

The workshop is supported by the Digital Research Center Denmark (DIREC). It will facilitate the exploration of synergies between the two fields, fostering novel collaborations and the development of innovative techniques.

By uniting these scientific communities and promoting dialogue and collaboration, this workshop aims to pave the way for the development of AI systems that combine strong performance with safety, transparency, and accountability in their operation.

Selected contributions from the workshop will be published in a special issue of the International Journal on Software Tools for Technology Transfer (STTT).

Topics of interest include (but are not limited to):
  • Robustness against adversarial attacks
  • Robustness under domain distribution shifts
  • Fairness of machine learning models
  • Machine learning with humans-in-the-loop
  • Explaining predictions of machine learning models
  • Neuro-symbolic integration
  • Safety guarantees for machine learning models
  • Testing and evaluation protocols for machine learning models
  • Satisfiability modulo theories (SMT)
  • Integrating constraint satisfaction and machine learning
  • AI reasoning beyond prediction
  • Formal logic, domain logic, and machine learning
  • (Non-statistical) privacy-preserving methods
  • Guarantee preservation for evolving ML models
  • Scalable formal methods for AI/ML
  • Benchmarking and evaluating the performance of AI/ML systems in safety-critical contexts
  • Case studies demonstrating successful application of formal methods and AI/ML techniques in the development of robust, verifiable AI systems

Registration

Thanks to support from DIREC, the participation fee is reduced to DKK 1900 per person. This fee includes accommodation and full board for the duration of the workshop. Due to space and financial constraints, the number of participants is limited.

Once you have registered, you will receive an email within a couple of days with further information about payment and other practical details.

Invited speakers

  • Peter Flach, University of Bristol
  • José Hernández-Orallo, Universitat Politècnica de València
  • Antonio Vergari, University of Edinburgh
  • Jan Křetínský, Technical University of Munich
  • Nils Jansen, Radboud University Nijmegen
  • Bernhard Steffen, TU Dortmund University
  • Moshe Vardi, Rice University

Organizers

The workshop is organized by
  • Boris Düdder, University of Copenhagen
  • Jaco van de Pol, Aarhus University
  • Kim Guldstrand Larsen, Aalborg University
  • Thomas Hildebrandt, University of Copenhagen
  • Manfred Jaeger, Aalborg University