Pedro Larrañaga gave the presentation: “Interpretable Artificial Intelligence”

Pedro Larrañaga, one of the Directors of the ELLIS Unit Madrid, gave the presentation “Interpretable Artificial Intelligence” this past Wednesday, June 5th, at 6:00 PM at the Royal Academy of Exact, Physical, and Natural Sciences of Spain.

The growing spread and popularity of intelligent systems based on machine learning pose a dilemma: indiscriminate use versus the need to understand their internal workings. Post hoc explanations, commonly used for black-box models, are insufficient for modeling and decision-making in scientific domains and high-risk situations. In such cases, it is essential for humans to be able to interpret both the system’s output and the internal processes that lead to that result.

In this lecture, we will explore the interpretability potential of the paradigm known as Bayesian networks, highlighting their transparency and versatility in performing various types of reasoning, such as predictive, diagnostic, intercausal, counterfactual, and abductive. We will present several real-world use cases, encompassing both modeling and optimization, in domains such as computational neuroscience and industry.
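To give a flavor of the reasoning types mentioned in the abstract, below is a minimal, illustrative sketch (not taken from the talk) of a toy two-cause Bayesian network. It assumes the open-source pgmpy library and made-up variables and probabilities, and shows predictive, diagnostic, and intercausal ("explaining away") queries on the same model; import details may vary across pgmpy versions.

```python
# Illustrative sketch only: a toy Bayesian network with two causes of one effect.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: Rain -> WetGrass <- Sprinkler (all variables binary: 0 = no, 1 = yes).
model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])

# Conditional probability tables (hypothetical numbers for illustration).
cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])
cpd_sprinkler = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])
cpd_wet = TabularCPD(
    "WetGrass", 2,
    # Columns follow the evidence combinations (Rain, Sprinkler) = 00, 01, 10, 11.
    [[0.99, 0.10, 0.20, 0.01],   # P(WetGrass = 0 | Rain, Sprinkler)
     [0.01, 0.90, 0.80, 0.99]],  # P(WetGrass = 1 | Rain, Sprinkler)
    evidence=["Rain", "Sprinkler"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
assert model.check_model()

infer = VariableElimination(model)

# Predictive reasoning: from cause to effect, P(WetGrass | Rain = 1).
print(infer.query(["WetGrass"], evidence={"Rain": 1}))

# Diagnostic reasoning: from effect back to cause, P(Rain | WetGrass = 1).
print(infer.query(["Rain"], evidence={"WetGrass": 1}))

# Intercausal reasoning ("explaining away"): also observing the alternative cause
# lowers the probability of rain, P(Rain | WetGrass = 1, Sprinkler = 1).
print(infer.query(["Rain"], evidence={"WetGrass": 1, "Sprinkler": 1}))
```

Because every conditional probability table and edge in such a model can be inspected directly, the same structure that supports these queries is also what makes the model transparent, which is the interpretability argument the lecture develops.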

Take a look at the summary of the presentation and the shared graphics: PDF Presentation