Artificial intelligence, driven to a large extent by rapid advances in deep learning, has produced exciting results over the last decade across many scientific disciplines and practical applications.
This success is accompanied by an increasing need for methods that explain the decisions of machine learning models, make their performance more robust under changing conditions, and can provide firm guarantees for their behavior with regard to aspects like safety, privacy preservation, and non-discrimination.
These emerging key issues for the further advancement of AI are being studied both in the AI/ML communities and by researchers from the areas traditionally concerned with the safety and verification of software systems by formal methods, such as model checking and theorem proving. However, while working towards the same goals, the interaction between these research communities has been somewhat limited. This workshop aims to bring together researchers from the AI/ML and formal methods communities for an exchange of ideas and scientific approaches for tackling the challenge of building safe, trustworthy, and robust AI systems.
The workshop is supported by the Digital Research Center Denmark (DIREC). It will consist of a combination of invited talks, panel discussions, working groups, contributed talks, and poster presentations. This interdisciplinary format will facilitate the exploration of synergies between the two fields, fostering novel collaborations and the development of innovative techniques.
By uniting these scientific communities and promoting dialogue and collaboration, this workshop aims to pave the way for AI systems that combine strong performance with safety, transparency, and accountability in their operation.
Selected contributions from the workshop will be published in a special issue of the International Journal on Software Tools for Technology Transfer (STTT).