I am a doctoral researcher at SFB TRR318 “Constructing Explainability”, a Collaborative Research Center at Paderborn University and Bielefeld University. I am supervised by Prof. Dr. Katharina J. Rohlfing, with Prof. Dr.-Ing. Britta Wrede as co-PI in Project A05. My work lies at the interdisciplinary intersection of Cognitive Science and Psycholinguistics, drawing on statistical modeling and computational cognitive science. Before joining SFB TRR318, I completed an M.Sc. in Cognitive Science and a Bachelor’s degree in German Studies (Germanistik) at JNU.
I investigate multimodal processes in both humans and artificial agents to understand the desiderata of successful explanations. The core idea across my research is to integrate XAI techniques with interpretable cognitive/neurosymbolic models to enable holistic explanation generation.
This approach goes beyond applying ‘cognitively agnostic’ classical XAI techniques (such as LIME/SHAP or contrastive explanations) in isolation; instead, it embeds XAI methods within theories and cognitive models of (language) processing.
Methodologically, I study human–human explanation using eye-tracking and psycholinguistic experimentation.
For human–AI teaming, I work with foundation models (prompting, feature extraction, fine-tuning), human–robot interaction, and statistical modeling (mostly Bayesian) to quantify explanation practices.
Updates
Recent talk at IEEE ICDL
An example scenario of multimodal interaction within HRI

