I am a doctoral candidate at SFB TRR318 “Constructing Explainability”, a collaborative research center of Paderborn University and Bielefeld University. My work lies at the intersection of cognitive science and psycholinguistics, including statistical modeling and computational cognitive science.

I investigate multimodal processes in both humans and artificial agents to understand the desiderata of successful explanations. The core idea throughout my research is to integrate XAI techniques with interpretable cognitive/neurosymbolic models to enable holistic explanation generation.

This goes beyond applying ‘cognitively agnostic’ classical XAI techniques (such as LIME, SHAP, or contrastive explanations) in isolation; instead, it embeds XAI methods within theories and cognitive models of (language) processing.

Methodologically, I study human–human explanation using eye tracking and psycholinguistic experimentation.

For human–AI teaming, I work with foundation models (prompting, feature extraction, fine-tuning), human–robot interaction, and statistical modeling (mostly Bayesian) to quantify explanation practices.
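As a minimal, hypothetical sketch of the Bayesian modeling side of this work, the snippet below fits a simple logistic regression in PyMC relating an invented multimodal cue (a per-trial gaze–speech alignment score) to whether an explainee reports understanding; the variable names and simulated data are illustrative assumptions, not results from my studies.

```python
import numpy as np
import pymc as pm

# Simulated stand-in data (illustration only, not real study data)
rng = np.random.default_rng(0)
n_trials = 120
alignment = rng.normal(0.0, 1.0, size=n_trials)                   # hypothetical gaze-speech alignment score
true_logit = 0.3 + 0.8 * alignment                                 # assumed ground-truth relationship
understood = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))   # simulated "explainee understood" outcome

with pm.Model() as model:
    intercept = pm.Normal("intercept", mu=0.0, sigma=1.0)
    slope = pm.Normal("slope", mu=0.0, sigma=1.0)                  # effect of alignment on understanding
    p = pm.math.sigmoid(intercept + slope * alignment)
    pm.Bernoulli("understood", p=p, observed=understood)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(pm.summary(idata, var_names=["intercept", "slope"]))
```

In practice, repeated-measures data from explanation dialogues would call for a hierarchical extension with participant-level effects; this sketch only shows the basic shape of such a model.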

Updates

Recent talk at IEEE ICDL

Watch the talk on YouTube

Figure: An example scenario of multimodal interaction within HRI, from my multimodal interaction studies.