
6 March 2024

Automated Sensitivity Analysis Enhances Trustworthiness of Probabilistic Programs  

Probabilistic programs, a fundamental concept in modern computing, have gained increasing attention due to their versatile applications across various domains, including machine learning, cryptography, and statistics. However, ensuring the reliability and trustworthiness of these programs poses significant challenges, particularly in understanding their probabilistic nature and analysing their sensitivity to input changes.

A recent study led by researchers Alejandro Aguirre from Aarhus University and Christoph Matheja from DTU has made strides in addressing these challenges. With funding from DIREC – Digital Research Centre Denmark, their project, titled “Automated Verification of Sensitivity Properties for Probabilistic Programs,” explores the feasibility of using automated program verification techniques to analyse the sensitivity and robustness of probabilistic programs.

 

Christoph Matheja, DTU (left) and Alejandro Aguirre, Aarhus University (right)

Robustness is paramount

At the heart of probabilistic programs lies the ability to sample from probability distributions and make decisions based on the outcomes, injecting randomisation into computations. While this randomness is essential for various applications, it also introduces complexities in understanding and quantifying the distributions produced by these programs. Moreover, ensuring the robustness of probabilistic programs against input changes and variations in sampling distributions is paramount for their reliability.

The project emphasises the importance of sensitivity analysis in assessing the trustworthiness of probabilistic programs. Sensitivity, defined as the degree to which a program’s output changes in response to small changes in its inputs, serves as a crucial metric for evaluating program robustness. Trustworthy programs exhibit low sensitivity: minor input fluctuations do not yield significantly different results.
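To make the notion concrete: one common way to formalise sensitivity is a Lipschitz-style bound, requiring that the expected distance between two outputs grows at most proportionally to the distance between the corresponding inputs. The sketch below, a minimal illustration rather than the project’s method, estimates such a bound empirically for a toy coin-flipping program; the function names and the perturbation size are hypothetical choices. It also couples the two runs on shared random samples, the same intuition that underlies relational reasoning about pairs of executions.

```python
import random

def noisy_count(p: float, coins: list) -> int:
    """Toy probabilistic program: count heads among coin flips with bias p,
    where each flip is decided by a pre-drawn uniform sample in coins."""
    return sum(u < p for u in coins)

def empirical_sensitivity(p: float, eps: float = 0.01,
                          n: int = 1000, trials: int = 2000) -> float:
    """Estimate E[|f(p) - f(p + eps)|]: the expected output change caused
    by a small input change eps, with both runs coupled on the same samples."""
    total = 0
    for _ in range(trials):
        coins = [random.random() for _ in range(n)]
        total += abs(noisy_count(p, coins) - noisy_count(p + eps, coins))
    return total / trials

# Under this coupling the two runs disagree only on samples that fall in
# [p, p + eps), so the expected difference is about n * eps = 10 flips:
# a small input change produces a proportionally small output change.
print(empirical_sensitivity(0.5))
```

A program would be considered non-robust if this quantity blew up as eps shrank; bounding it for all inputs at once is exactly what a formal sensitivity proof establishes.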

Bridging the gap between theoretical insights and automated analysis techniques

The research project leverages Caesar, an automated verification tool, together with a relational weakest preexpectation calculus developed by the researchers, to analyse sensitivity properties of probabilistic programs. By encoding this theoretical proof system into a practical verification tool, the researchers have bridged the gap between theoretical insights and automated analysis techniques.
As a case study, the researchers applied their methodology to analyse the rate of convergence of Q-learning, a popular reinforcement learning algorithm used in fields ranging from automated driving to text generation. By examining how the algorithm’s estimates converge over iterations, they demonstrated its theoretical convergence properties and validated its practical performance.
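The study establishes this convergence formally; as a rough empirical companion, the sketch below runs tabular Q-learning twice on a small hypothetical chain environment, starting from two different initial value tables, and checks that the learned estimates end up close together. In other words, over time the algorithm’s output becomes insensitive to its initialisation. The environment, names, and parameters are illustrative assumptions, not taken from the study.

```python
import random

def q_learning(q0: float, episodes: int = 2000, alpha: float = 0.1,
               gamma: float = 0.9, eps: float = 0.1) -> dict:
    """Tabular Q-learning on a toy 3-state chain (hypothetical example):
    states 0..2, actions 0 = stay and 1 = step right; reaching state 2
    ends the episode with reward 1. All Q-values start at q0."""
    Q = {(s, a): q0 for s in range(3) for a in range(2)}
    for _ in range(episodes):
        s = 0
        while s != 2:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max((0, 1), key=lambda x: Q[(s, x)])
            s2 = min(s + 1, 2) if a == 1 else s          # deterministic move
            r = 1.0 if s2 == 2 else 0.0                  # reward at the goal
            best_next = 0.0 if s2 == 2 else max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

# Two runs from very different initial estimates converge to (nearly)
# identical Q-values: sensitivity to the initialisation vanishes.
qa, qb = q_learning(q0=0.0), q_learning(q0=5.0)
print(max(abs(qa[k] - qb[k]) for k in qa))
```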

Future directions

The project’s findings were presented at VeriProp, an international workshop focusing on the verification of probabilistic programs, showcasing the potential impact of automated sensitivity analysis in enhancing the reliability of probabilistic programs.
Future applications include the analysis of more advanced machine learning and sampling algorithms, such as stochastic gradient descent and Markov chain Monte Carlo methods. Additionally, efforts will be directed toward making verification tools more accessible to engineers, thereby empowering a broader community to ensure the trustworthiness of probabilistic programs in diverse applications.

By advancing automated sensitivity analysis techniques, this research not only contributes to the theoretical understanding of probabilistic programs but also paves the way for their practical implementation with enhanced reliability and trustworthiness. 

What are probabilistic programs?

Probabilistic programs are computer programs that can sample from probability distributions, for example, by flipping a fair coin, and make decisions based on the outcome. Probabilistic programs thus inject randomisation into computations.
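For instance, the following few lines form a complete, if tiny, probabilistic program (a minimal sketch in Python, using the fair coin from the text; the function name is hypothetical):

```python
import random

def flip_and_decide() -> str:
    """Flip a fair coin and branch on the outcome."""
    heads = random.random() < 0.5   # sample from a Bernoulli(0.5) distribution
    return "accept" if heads else "retry"

# Each call is one run of the program; its output is itself a sample
# from a probability distribution over the two strings.
print(flip_and_decide())
```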

Randomisation is a ubiquitous concept in modern computing, with applications in machine learning, where the quality of the learned result heavily relies on the quality of samples; the design of randomised algorithms, which leverage sampling to be, on average, more efficient than their non-randomised counterparts; and cryptography, where randomness is needed for encoding secrets. In statistics, probabilistic programming languages have become popular modelling tools that are more expressive and accessible than classical graphical models.