Contextualizing Explanations: An Interview with the Editors Philipp Cimiano, Benjamin Paassen, and Anna-Lisa Vollmer

May 3, 2026 · 5 min. reading time

As more and more important decisions are made using AI, how can we understand, question, and challenge those decisions? Philipp Cimiano, Benjamin Paassen, and Anna-Lisa Vollmer offer insights into current research in their latest publication, Contextualizing Explanations – Proceedings of the 3rd TRR 318 Conference. We spoke with the editors of the volume about their publication and the ideas behind it.

#Introduction

Could you provide a brief introduction to XAI?

eXplainable Artificial Intelligence (XAI) is a subfield of Artificial Intelligence concerned with developing methods that explain to people who are not experts in machine learning or AI how a given AI system reached a specific decision.
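One widely used family of XAI techniques attributes a decision to the input features that drove it. Below is a minimal, self-contained sketch of permutation feature importance; the toy scoring model, the feature roles, and all names are illustrative assumptions, not methods from the proceedings.

```python
# Hypothetical sketch: permutation feature importance, one common
# post-hoc XAI technique. The "model" below is a toy stand-in.
import numpy as np

def credit_model(X):
    # Toy stand-in for an opaque scoring model: income (column 0)
    # raises the score, existing debt (column 1) lowers it.
    return (0.7 * X[:, 0] - 0.5 * X[:, 1] > 0.2).astype(int)

def permutation_importance(model, X, y, rng):
    baseline = np.mean(model(X) == y)      # accuracy on intact data
    drops = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])          # sever feature j's link to y
        drops.append(baseline - np.mean(model(X_perm) == y))
    return drops                           # bigger drop = more important

rng = np.random.default_rng(0)
X = rng.random((500, 2))                   # fake applicant data
y = credit_model(X)                        # observed decisions
print(permutation_importance(credit_model, X, y, rng))
```

The larger the accuracy drop when a feature is scrambled, the more the model relied on it, which is exactly the kind of evidence an explanation for non-experts can build on.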

Why is it so important today that AI systems are able to explain their decisions?

It matters because algorithmic decisions can have serious consequences for human lives, for example when AI systems support decision making in areas such as credit scoring, job applications, or healthcare. Explanations enable affected people to understand, contest, and perhaps change an algorithmic decision.

You mention that people may feel ‘at the mercy’ of AI systems. Can you give an example where this becomes particularly clear?

Imagine you are searching for a new flat and your creditworthiness is automatically determined by an algorithm. This might have consequences for your ability to find a new flat. While this is not common practice at the moment, it could well become reality within a few years.

#Good Explanations

How can users tell whether an explanation is actually helpful?

An explanation is helpful when it provides them with sufficient and plausible evidence about how an algorithm reached its decision and gives them the ability to meaningfully challenge or contest a decision that is detrimental to them.

Why is there no single perfect explanation that works for everyone?

Because people’s circumstances, levels of expertise, and background knowledge can differ considerably. An explanation given to an expert in a particular domain will be very different from one given to a layperson.

You emphasize the importance of context in the co-construction of explainability. What elements make up the ‘context’ of an explanation?

Context comprises multiple factors, such as the level of expertise or knowledge of the explainee, that is, the person receiving an explanation. Other important factors are the reason why a person needs an explanation, the time the explainee has to process it, and the cognitive effort they can devote to it. In general, the situation the explainee is in is a crucial context factor. An explanation of an algorithmic suggestion about which TV program to watch, given to someone sitting at home on their couch, might be totally different from an explanation given in a time-critical context in which an algorithmic decision is a matter of life or death.
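To make this concrete, here is a hypothetical sketch of how such context factors could be encoded and used to select an explanation style. All field names, values, and selection rules are illustrative assumptions, not a model from the proceedings.

```python
# Hypothetical encoding of explanation context; every field and rule
# below is an illustrative assumption, not an established model.
from dataclasses import dataclass

@dataclass
class ExplanationContext:
    expertise: str      # "layperson", "domain_expert", or "ml_expert"
    purpose: str        # e.g. "contest_decision", "satisfy_curiosity"
    time_budget_s: int  # seconds the explainee can spend
    stakes: str         # "low" (TV suggestion) vs. "high" (triage)

def choose_explanation_style(ctx: ExplanationContext) -> str:
    # Time-critical, high-stakes settings call for one decisive factor.
    if ctx.stakes == "high" and ctx.time_budget_s < 60:
        return "single most decisive factor, in plain language"
    # Experts can handle full attributions and model internals.
    if ctx.expertise == "ml_expert":
        return "full feature attributions with model internals"
    # A counterfactual tells a layperson what would change the outcome.
    return "short counterfactual explanation"

ctx = ExplanationContext("layperson", "contest_decision", 300, "high")
print(choose_explanation_style(ctx))  # -> short counterfactual explanation
```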

#Research & Contributions

What does the diversity of contributions indicate about the current state of research?

The diversity of research contributions and disciplines involved reflects the fact that explanations have multiple dimensions. They have a communicative dimension, as they are typically conveyed verbally, so it is important to involve research from linguistics. They have a social dimension, as they can affect people’s lives, so sociology research is important as well. And finally, AI algorithms need to compute suitable explanations, which involves the discipline of computer science.

Which approaches seem particularly promising to you?

The most promising approaches are those in which the explainee is actively involved in the process of creating an explanation. This reduces the risk that the explanation turns out to be irrelevant to the explainee, since the explainee has some control over the process.
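As a toy illustration of such co-construction, the following sketch lets the explainee steer which part of an explanation is expanded next. The canned explanation content and all names are hypothetical.

```python
# Hypothetical sketch of co-constructing an explanation: the explainee
# chooses which aspect to expand next; the canned content is made up.
explanation = {
    "overview":       "Your application was declined, mainly due to debt ratio.",
    "debt ratio":     "Debt above 40% of income lowered the score by 12 points.",
    "what to change": "Reducing debt below 35% would flip the decision.",
}

def co_construct(follow_ups):
    # The system opens with an overview; the explainee steers from there.
    transcript = [explanation["overview"]]
    for topic in follow_ups:
        transcript.append(explanation.get(topic, "No further detail on that."))
    return transcript

# The explainee asks two follow-up questions of their own choosing.
for turn in co_construct(["debt ratio", "what to change"]):
    print(turn)
```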

Does the dialogical interaction with AI systems change our understanding of explanations?

Yes. People are used to providing explanations to one another, and they do so as part of a dialogue. A dialogue-based approach might therefore be a more intuitive way for people to request and shape the explanations given by AI algorithms.

#TRR 318 Context

How do the proceedings fit into the TRR 318?

The proceedings document important contributions from researchers of the TRR, but also from other research institutions, that further our understanding of how context influences the process of giving and adapting explanations.

#Outlook

Which open questions remain particularly important?

The most important overarching question is how exactly context can be encoded and modeled so that AI systems can deliver contextually relevant explanations.

What needs to change for AI to become truly understandable and controllable?

We need to understand that the opacity of certain models is a design choice, not a naturally given property. There is an inherent trade-off in AI between the expressiveness and complexity of models on the one hand and their explainability and transparency on the other. Where along this trade-off we position ourselves is a choice. Choosing simpler models may cost some predictive performance but comes with the benefit of better explainability. Conversely, choosing complex models means potentially sacrificing explainability. A challenge is to understand where in this space we need to be positioned, depending on the criticality of the decisions made by an AI system.
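A minimal sketch of this trade-off, assuming scikit-learn is available; the synthetic dataset and hyperparameters are illustrative, not a benchmark from the proceedings.

```python
# Compare a transparent, depth-limited tree with an opaque ensemble on
# the same synthetic task; dataset and settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)          # transparent
forest = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)   # opaque

print("depth-3 tree accuracy: ", tree.score(X_te, y_te))
print("random forest accuracy:", forest.score(X_te, y_te))
print(export_text(tree))  # the entire model fits on one screen
```

Typically the ensemble scores somewhat higher, while the shallow tree can be printed and inspected in full; deciding which matters more for a given application is exactly the positioning decision described above.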

What are the next steps for explainable AI?

For the TRR, the next step is clearly to understand the implications of different contextual factors for the explanations required, and to implement this dependence on context in AI systems.