The rapidly growing application of machine learning techniques in Cyber-Physical Systems leads to better solutions and products in terms of adaptability, performance, efficiency, functionality and usability. However, Cyber-Physical Systems are often safety critical (e.g., self-driving cars or medical devices), and the resulting need for verification against potentially fatal accidents is self-evident and of key importance. Most recently, the EU White Paper “On Artificial Intelligence – A European approach to excellence and trust” (February 2020) states the safety risks that come with the use of AI explicitly:
AI technologies may present new safety risks for users when they are embedded in products and services. For example, as a result of a flaw in the object recognition technology, an autonomous car can wrongly identify an object on the road and cause an accident involving injuries and material damage. This in turn makes it difficult to assign liability in case of malfunction:
Under the Product Liability Directive, a manufacturer is liable for damage caused by a defective product. However, in the case of an AI-based system such as an autonomous car, it may be difficult to prove the existence of a defect in the product, the damage that has occurred, and the causal link between the two.
What is needed are new methods in which machine learning is integrated with model-based techniques, such that machine-learned solutions, which typically optimise expected performance, are guaranteed not to violate crucial safety constraints and can be certified not to do so. Relevant domains include all types of autonomous systems in which machine learning is applied to control safety-critical systems.
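One common instance of such an integration is a safety "shield": a model-based monitor that checks every action proposed by a learned controller against a safety constraint and overrides it with a provably safe fallback when necessary. The sketch below illustrates the idea for a toy braking scenario; all names, dynamics, and parameters (1 s time step, 5 m/s² maximum deceleration) are illustrative assumptions, not taken from any specific framework.

```python
# Sketch of a safety shield wrapping a learned controller: the learned
# component optimises performance, while a simple model-based check
# enforces the safety constraint "the car can always stop before the
# obstacle". All dynamics and thresholds here are illustrative.

from dataclasses import dataclass

@dataclass
class State:
    distance_to_obstacle: float  # metres
    speed: float                 # metres per second

BRAKE, COAST, ACCELERATE = "brake", "coast", "accelerate"
MAX_DECEL = 5.0  # m/s^2, assumed braking capability

def learned_policy(state: State) -> str:
    """Stand-in for a machine-learned controller: it optimises speed
    and may therefore propose unsafe actions."""
    return ACCELERATE if state.speed < 30.0 else COAST

def is_safe(state: State, action: str) -> bool:
    """Model-based check (simplified kinematics, 1 s step): after
    taking `action`, is the remaining distance still larger than the
    stopping distance at the resulting speed?"""
    accel = {BRAKE: -MAX_DECEL, COAST: 0.0, ACCELERATE: 2.0}[action]
    next_speed = max(0.0, state.speed + accel)
    next_dist = state.distance_to_obstacle - state.speed
    stopping_dist = next_speed ** 2 / (2 * MAX_DECEL)
    return next_dist > stopping_dist

def shielded_policy(state: State) -> str:
    """Use the learned action whenever the check passes; otherwise
    fall back to the safe default action (braking)."""
    action = learned_policy(state)
    return action if is_safe(state, action) else BRAKE
```

Because the shield only intervenes when the check fails, the learned controller retains its optimised behaviour in all states it handles safely, while the safety argument rests solely on the (much simpler, verifiable) monitor and fallback.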