A Framework for Context-aware XAI in Human-AI Collaboration
While humans outperform AI systems in many decision-making tasks, performance improves further in a human-AI partnership, as prior studies have shown. Explainable artificial intelligence (XAI) supports human experts and end users in solving complex problems by giving insights into the decisions made by supporting AI systems. Furthermore, XAI helps humans judge the reliability of the AI algorithm. However, current approaches primarily generate static and unimodal explanations, for example, in the form of highlighting. They contrast with the richness and flexibility of human explanations, particularly with the adaptation of explanations to the context of problem-solving. For instance, in industrial settings, the visual highlighting of casting defects such as blowholes in metal components does not necessarily imply overall defectivity and requires expert knowledge for classification. The criticality of such defects depends on factors such as their severity and their proximity to structurally significant areas, such as load-bearing regions. In the domain of ornithological education aimed at, for example, distinguishing the Tricolored Blackbird and the Red-winged Blackbird from Brewer's Blackbird and the Rusty Blackbird, it would be insufficient to highlight colored patches on feathers. A further conceptual and relational description is needed, for example that a red shoulder patch lies below a lighter one. The medical field offers a useful illustration of the importance of context as well: when classifying tissue, a CNN was criticized for incorrectly classifying healthy tissue as cancerous merely because it was surrounded by tumor tissue. These examples demonstrate the necessity of tailoring explanations to the specific task, domain, and user. In a collaborative setting, users may require varying degrees of detail. Further, it is essential that XAI aligns with the user's cognitive capabilities and expertise: the explanations should be neither overly technical nor too vague.
People have different preferences and strengths in how they understand information (e.g., visual, verbal, spatial). Multimodal explanations can cover a broader range of users' information needs. Furthermore, some explanations are more comprehensible when presented visually, while others are more effectively conveyed through auditory cues; in noisy environments, in turn, visual or textual explanations are preferable to audio ones. Therefore, methods are needed that treat explaining as the recipient's learning process and that produce adaptable, multi-scope, and multimodal explanations. The requirements that human-AI collaboration places on XAI are primarily driven by the need to be dynamic and tailored to the specific situation and the human user involved. The XAI framework in Figure 1 allows the user to enter into a dialogue with the system and ask for explanations on different levels.
Fig. 1. The framework for context-aware explanations in human-AI collaboration enables adaptable, multimodal explanations based on the user's goal and knowledge level.
In this talk, we will present explanation methods suitable for implementing such a multimodal explanation process, including contrastive, prototype-based, dialogical, and interactive approaches. Following Abowd et al.'s definition of context, we are mainly guided by the dimensions of location (where?), identity (who?), activity (what for?), and time (when?). One can consider explanations as entities that occupy locations in an explanatory space, from which appropriate representations are chosen. Furthermore, it is essential to consider to whom the explanation is given. The purpose of the explanation and the recipient's activity and goal must also be taken into account, and context-aware approaches should consider when the user needs the information. We show that adaptable and multimodal approaches are suitable for representing these dimensions of context in explanations and for improving human-AI problem-solving.
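As a rough illustration of how these four context dimensions could drive explanation selection, the following Python sketch maps a context record to a modality, detail level, and scope. All names and selection rules here are hypothetical illustrations under simple assumptions (e.g., that a noisy location rules out audio), not the framework's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ExplanationContext:
    """A context record along Abowd et al.'s four primary dimensions."""
    location: str   # where?    e.g. "noisy_factory_floor"
    identity: str   # who?      e.g. "expert" or "novice"
    activity: str   # what for? e.g. "defect_triage"
    time: str       # when?     e.g. "during_task" or "after_task"

def select_explanation(ctx: ExplanationContext) -> dict:
    """Choose presentation properties from the context record (illustrative rules)."""
    # In a noisy environment, prefer visual over audio cues.
    modality = "visual" if "noisy" in ctx.location else "audio+visual"
    # Experts get concise, technical explanations; novices get richer ones.
    detail = "concise" if ctx.identity == "expert" else "step_by_step"
    # During the task, keep the explanation local and short; afterwards, allow depth.
    scope = "local" if ctx.time == "during_task" else "global"
    return {"modality": modality, "detail": detail, "scope": scope}

ctx = ExplanationContext("noisy_factory_floor", "novice", "defect_triage", "during_task")
print(select_explanation(ctx))
```

A dialogical system would update such a record as the conversation unfolds, re-selecting the explanation form whenever the user's goal or situation changes.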
Presentation "A Framework for Context-aware XAI in Human-AI Collaboration", held at the 3rd TRR 318 Conference: Contextualizing Explanations, 17 June 2025, Bielefeld, Germany.