Artificial Intelligence promises technological means to solve problems that were previously assumed to require human intelligence, and ultimately to provide human-centered solutions in which the synergy between the human and the AI system yields results that are both more effective and of higher quality than those produced by humans or by an AI system alone.
However, compared with traditional problem solving based on explicit logical rules and procedures, some artificial intelligence systems, in particular those based on neural networks (e.g. in deep learning), do not offer a human-understandable explanation of the answers they give. A lack of explanation is not necessarily a problem when the correctness of an answer can easily be validated, as with automatic character recognition subsequently checked by a human. In other situations, however, a lack of explanation can pose severe problems and may even be illegal, as is the case for certain governmental decisions.
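The contrast above can be made concrete with a minimal sketch. The scenario, function names, and numbers below are hypothetical illustrations, not taken from the text: a rule-based decision returns the rule that produced it and is therefore self-explanatory, whereas a learned model (caricatured here by a fixed weighted sum standing in for a trained network) returns only a score, with no built-in justification.

```python
def rule_based_loan_decision(income, debt):
    """Transparent: the decision is returned together with the rule
    that produced it, so a human can inspect the justification."""
    if debt > 0.5 * income:
        return "reject", "rule: debt exceeds 50% of income"
    return "approve", "rule: debt within 50% of income"


def learned_loan_score(income, debt, weights=(0.001, -0.002)):
    """Opaque stand-in for a trained neural network: the weights were
    fitted rather than authored, so the resulting score carries no
    human-readable explanation of why it is high or low."""
    return weights[0] * income + weights[1] * debt


decision, reason = rule_based_loan_decision(40000, 25000)
score = learned_loan_score(40000, 25000)
# `decision` comes with `reason`; `score` is just a number.
```

In the rule-based case the justification can be quoted verbatim in, say, a rejection letter; in the learned case an additional explanation mechanism would be needed before the score could support a decision that must be accounted for.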