Explainable AI

30.09.2020 | Language: English

Research and technology trends in Big Data and AI

We are in the process of preparing our fourth COMET application and are looking for partners to jointly explore the potential of these key technologies.
Learn more about the advantages and opportunities of a COMET partnership and secure your access to funded top-level research.

Target group:

Open to everyone

Abstract:

Deep learning and other AI models have demonstrated remarkable results in various application areas, such as image classification, machine translation, predictive reasoning, and decision making. Hence, they are increasingly applied to tasks where mispredictions can have serious consequences, ranging from industrial predictive maintenance and heartbeat anomaly detection to stock market forecasts. Because automatic predictions made by an AI system can substantially affect a person's well-being, or carry considerable financial and legal consequences for an individual or a company, all actions and decisions resulting from such models need to be accountable. This in turn means that a lack of interpretability directly hinders the adoption of AI models in high-stakes domains and thus prevents the potentially positive impact of their extraordinary accuracy.

Furthermore, one challenge we currently face in the application of AI is not of a technological or mathematical nature, but stems from the data our algorithms are built on. If the machine learns from historic user or event data, it will also adopt the biases represented in this data. While the algorithm might be completely correct, it will still reproduce undesired social constructs such as racism, sexism, or simply biases towards the popular preferences of certain user groups. Transparency with respect to the reasoning behind AI predictions can support a human actor in evaluating the quality and trustworthiness of such predictions, or ideally even in spotting such biases. Explanations are an increasingly used and scientifically explored tool for providing this transparency, supplying relevant information to the user.
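To make this effect concrete, here is a minimal sketch of how a model trained on biased historical decisions simply reproduces that bias. The synthetic data, feature names, and choice of a scikit-learn classifier are purely illustrative assumptions, not part of the academy's material:

    # Sketch: a model trained on biased historical decisions reproduces the bias.
    # All data here is synthetic and the setup is a hypothetical illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)        # sensitive attribute, e.g. two user groups
    skill = rng.normal(0.0, 1.0, n)      # legitimate, merit-related feature

    # Historic labels: past decision makers favoured group 1 regardless of skill.
    label = ((skill + 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.4).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, label)

    # The model fits its training data well, yet encodes the group bias:
    for g in (0, 1):
        candidates = np.column_stack([np.zeros(100), np.full(100, g)])  # equal skill
        rate = model.predict(candidates).mean()
        print(f"positive-decision rate for group {g} at average skill: {rate:.2f}")

Equally skilled candidates receive systematically different decisions depending only on their group, which is exactly the kind of effect that explanations can help surface.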

Should a right to explanations for AI applications become law, this topic will become even more prominent and pressing.

In this edition of the Know-Center's summer academy, we will introduce you to the rapidly developing research field known as eXplainable Artificial Intelligence (XAI). In addition to an introduction to the topic, we will highlight some of the most notable methods, as well as use cases showing where and how these methods can be applied.

After the event, you will know:

  • Why we need eXplainable Artificial Intelligence (XAI)
  • How data biases can cause undesired side effects (unfairness)
  • The main methods used to identify the parts of the input that most influenced a deep learning model's decision (a minimal sketch follows this list)
  • Methods used to visualize a deep learning model's internals
  • Use cases and applications for XAI
  • The state of XAI for time series data
  • XAI in recommender systems
  • How explanations can be presented to the user
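As a taste of the attribution methods mentioned above, the following sketch computes a simple "vanilla gradient" saliency map, one common way to identify the input regions that most influenced a model's decision. The tiny PyTorch network and random input are hypothetical stand-ins, not the specific methods covered at the event:

    # Sketch: vanilla-gradient saliency for a toy image classifier.
    # Model architecture and input are hypothetical stand-ins.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64),
                          nn.ReLU(), nn.Linear(64, 10))
    model.eval()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in for an input image
    score = model(x).max(dim=1).values                # score of the predicted class
    score.backward()                                  # gradient of score w.r.t. input

    # The gradient magnitude per pixel indicates its influence on the decision.
    saliency = x.grad.abs().squeeze()                 # shape: (28, 28)
    print(saliency.argmax())                          # most influential pixel (flattened index)

More robust variants of this idea, such as integrated gradients or SmoothGrad, follow the same basic pattern of differentiating the model's output with respect to its input.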

Speakers

Vedran Sabol
Research Area Manager – Knowledge Visualization

Ilija Šimić
Knowledge Visualization

Tomislav Đuričić
Researcher – Social Computing

Jörg Simon

Simone Kopeinik
Senior Researcher – Social Computing

Emanuel Lacić
Technical Lead – Social Computing