Contextualizing Explanations
As AI systems are increasingly used in high-stakes domains, it becomes ever more important to make them transparent in order to ensure meaningful human control and to empower users to contest or override AI-based decisions. Without sufficient transparency, increasingly complex and autonomous AI systems may leave users feeling overwhelmed and out of control, which is legally and ethically unacceptable, especially for high-stakes decisions. For users to feel empowered rather than out of control, explanations need to be relevant, providing sufficient information on the basis of which an output can be contested or challenged.
The XAI community has increasingly noted that no single explanation can fit all needs. Furthermore, recent work has advocated a more participatory approach to XAI, in which users are not only involved but can directly shape and guide the explanations given by an AI system.
The 3rd TRR 318 Conference: Contextualizing Explanations is an international and interdisciplinary conference focusing on the question of how explanations can be contextualized to increase their relevance and empower users.
Chapters
1 Context-Aware Explainability in AI-Powered Language Education: The CURIPOD
Introduction
The increasing integration of Artificial Intelligence (AI) systems into high-stakes domains necessitates a paradigm shift toward transparency and explainability to ensure meaningful human oversight and user empowerment. The…
2 The Principal’s Principles: Actionable (Personalized) AI Alignment as Underexplored XAI Application Context
Explainable Artificial Intelligence (XAI) has been proposed as a key element—or even a prerequisite—for addressing various challenges and fulfilling numerous societal desiderata. Yet, there is one topic that is frequently debated but…
3 Development of a Human Knowledge Integrated Workflow for Context-aware Machine Learning
Introduction
Visual analytics (VA) has the potential to aid in the acquisition of expert knowledge, both existing domain knowledge and new insights. However, there is a notable lack of VA systems specifically designed to externalize…
4 Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
The computational generation of natural language explanations has gained research interest due to its importance for explainable artificial intelligence (XAI), which aims to explain decisions made by AI systems. Recent XAI research focuses on…
5 Framing what and how to think: Lay people’s metaphors for algorithms
An informed public discussion about explainable artificial intelligence requires that laypeople and experts can negotiate in a language accessible to both. In our paper, we argue that this requires attention to metaphors. Metaphors – roughly speaking…
6 A BFO-based Ontology of Context for Social XAI
Miller, exploring explainable AI from a social science perspective, extracts four key desiderata for explanations from the literature: explanations are contrastive, selective, social, and causal rather than statistical. He argues that these…
7 The Context as Resource: Contextual Adaptations and Explanations in the Field
From a sociological perspective and on the basis of ethnographic research and semi-structured expert interviews, our paper presents and discusses contextual factors involved in the explanation of an ML prediction by a Predictive Policing research unit…
8 Emerging Categories in Scientific Explanations
Clear and effective explanations are essential for human understanding and knowledge dissemination. The scope of scientific research aiming to understand the essence of explanations has recently expanded from the social sciences to include the fields…
9 Stability of Model Explanations in Interpretable Prototype-based Classification Learning
Introduction
Learning vector quantization (LVQ), as originally proposed by Kohonen and mathematically justified as Generalized LVQ (GLVQ), constitutes a classifier for multi-class learning of vector data $\mathbf{x} \in \mathbb{R}^n$…
10 Inherently Explainable Hierarchical Generalized Learning Vector Quantization Models
Although deep learning models and transformers have demonstrated superior performance in identification and classification tasks across various domains, they often function as black-box models. Black-box AI models lack transparency, limiting…
11 Explainable Text Clustering in the Context of Psychological Research
In psychological research, free text responses are crucial to avoid biasing participants with a pre-defined set of answers. However, the analysis of free text responses is time-consuming, as it requires the development of a code-book, manual coding…
12 Cognitive and Interactive Adaptivity to the Explainee in an Explanatory Dialogue: An Experimental Study
Depending on the partners, a different dialogue evolves. When focusing on the dialogue as a product, the partners can be considered a context factor impacting this product. In the literature, this effect is explained by adaptation processes taking…
13 Contextualizing Counterfactuals: Gender Differences in Alignment with Biased (X)AI
Introduction. Counterfactual explanations (CEs) in explainable AI (XAI) illustrate how alternative model inputs lead to a change in outcomes, offering actionable insight by mirroring human reasoning. However, XAI’s impact on…
14 Investigating the Impact of Conceptual Metaphors on LLM-based NLI through Shapley Interactions
In everyday life, metaphorical language is frequently used in the flow of conversations. While metaphors can be observed in explicit cases such as “she was the light of my life”, the meaning manifestation of conventionalized metaphors such as…
15 Towards Symbolic XAI – Explanation Through Human Understandable Logical Relationships Between Features
Machine learning (ML) models are increasingly used in science, industry, and for solving everyday problems. However, their predictions — particularly those of neural networks — are generally not traceable by the user. The field of explainable…
16 Context-dependent Effects of Explanations on Multiple Layers of Trust
The rise of highly performant deep learning approaches results in a growing number of possible applications in many domains. In most fields, such as medicine, human agency and oversight are crucial; that is, complex decision-making processes should be…
17 Explaining to and being explained by a service robot: Four HRI studies revisited under a framework for explainability
Robotic butlers relieving humans of domestic chores are currently highly researched and debated, envisioning a near future where these systems will smoothly navigate our private spaces and interact with us naturally and transparently. This vision…
18 Assessing Intersectional Bias in Representations of Pre-Trained Image Recognition Models
Deep Learning (DL) has proven to be a highly effective tool across numerous fields of application. However, biases in training data can reinforce stereotypes, leading to predictions that are unfair, particularly toward marginalized…
19 Contextualizing Explainability of Learning-Path Recommendations through Knowledge Graphs and Graph-based MDP
Introduction
Human learning is a complex and multi-dimensional process, governed by a wide range of factors that describe the learner, the learning content, and the learning environment. Even in learning environments where learners are…
20 Contextualizing Explanations in Fluid Collaboration
Spontaneous collaboration in everyday tasks, like cooking together, requires quick adaptation to partners and situations. The resulting collaboration is fluid in that a dynamic and spontaneous mode of interaction emerges where participants…
21 Human-Centered and Contextual Assessment of Human-AI Decision-Making Interventions
Human-AI collaboration is increasingly promoted to improve high-stakes decision-making, like medical and mental health diagnosis, yet its potential remains unrealized, with human-AI collaboration resulting on average in worse…
22 Using SHAP for Feature Importance in Predicting Axelrod Tournament Winners
Introduction
The prisoner’s dilemma is the best-known and most investigated simple 2×2 game in game theory. Rationality demands that players defect, while common sense implies that wasting resources is bad for everyone. This gap between an…
23 The influence of individual traits and stress on expressions related to understanding and confusion
Introduction
As a fundamental component of communication, facial expressions can give information about the mental state of the listener and can indicate important feedback on an individual’s understanding of an explanation. For…
24 A Framework for Context-aware XAI in Human-AI Collaboration
While humans outperform AI systems in many decision-making tasks, performance improves even further in a human-AI partnership, as shown in. Explainable artificial intelligence (XAI) supports human experts and end-users in solving complex problems, as it…
25 Socially-Aware Robot Explanations: Inferring Needs from Human Facial Expressions
Introduction
Although AI systems can make decisions, their ability to explain them remains limited, particularly in error-prone situations. This work focuses on mechanistic interpretability in error detection and examines how different…
26 Context(s) for contextualizing explanations
One could argue that context is basically a setting or a situation that needs to be properly described. Recent discussions on future XAI systems call for taking context into account to provide more relevant explanations. According to Sanneman and Shah, “it…