Explainable knowledge graph embedding: Inference reconciliation for knowledge inferences supporting robot actions

Jan 1, 2022
Angel Daruna, Devleena Das, Sonia Chernova
Figure: Architecture Overview
Abstract
Learned knowledge graph representations supporting robots contain a wealth of domain knowledge that drives robot behavior. However, no inference reconciliation framework exists that expresses how a knowledge graph representation affects a robot's sequential decision making. We use a pedagogical approach to explain the inferences of a learned, black-box knowledge graph representation, a knowledge graph embedding. Our interpretable model uses a decision tree classifier to locally approximate the predictions of the black-box model, and provides natural language explanations interpretable by non-experts. Results from our algorithmic evaluation affirm our model design choices, and the results of our user studies with non-experts support the need for the proposed inference reconciliation framework. Critically, results from our simulated robot evaluation indicate that our explanations enable non-experts to correct erratic robot behaviors due to nonsensical beliefs within the black box.
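To make the surrogate idea concrete, the sketch below fits a decision tree to locally mimic a black-box knowledge graph embedding's link predictions. It is illustrative only, not the paper's implementation: the black box is assumed to be a toy TransE-style scorer, the sampling scheme and threshold are hypothetical, and raw embedding coordinates stand in for the human-interpretable features the paper would use to produce natural language explanations.

```python
# Minimal sketch, under the assumptions stated above: train an interpretable
# decision tree to locally approximate a black-box KG embedding's decisions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Toy black box: TransE-style score(h, r, t) = -||E[h] + R[r] - E[t]||.
n_entities, n_relations, dim = 50, 5, 16
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def blackbox_predicts_true(h, r, t, threshold=-6.0):
    """Hypothetical hard decision derived from the continuous score."""
    return int(-np.linalg.norm(E[h] + R[r] - E[t]) > threshold)

# Sample a local neighborhood of a query triple by perturbing the tail
# entity, label each sample with the black box, and fit the surrogate.
h, r = 3, 1  # query head entity and relation
X, y = [], []
for _ in range(500):
    t = rng.integers(n_entities)
    X.append(E[t])                       # surrogate features (simplified here)
    y.append(blackbox_predicts_true(h, r, t))
X, y = np.array(X), np.array(y)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("local fidelity:", tree.score(X, y))
# The learned rules along a decision path are the raw material that a
# template could verbalize into an explanation for non-experts.
print(export_text(tree))
```

The printed fidelity score is the usual sanity check for a pedagogical surrogate: it measures how closely the tree tracks the black box on the sampled neighborhood, and only a high-fidelity tree is worth verbalizing as an explanation.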
Type
Publication
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)