Explainable knowledge graph embedding: Inference reconciliation for knowledge inferences supporting robot actions
This work proposes an inference reconciliation framework that uses decision-tree-based local approximations and natural language explanations to interpret the black-box knowledge graph embeddings that drive robot behavior. Evaluations show that the approach helps non-experts understand and correct erroneous robot actions caused by the robot's flawed internal beliefs.
Jan 1, 2022
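
To illustrate the core idea, the sketch below fits a shallow decision tree as a local surrogate for a black-box, TransE-style link predictor and verbalizes the tree's decision path as a templated explanation. This is a minimal sketch of the general technique, not the paper's implementation: the entities, attribute features, embedding construction, and decision threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
dim = 8

# Illustrative entities with hand-coded interpretable attributes
# (names and features are assumptions, not the paper's feature set).
attrs = ["food", "tool", "container"]
entities = {
    "apple":  [1, 0, 0],
    "bread":  [1, 0, 0],
    "knife":  [0, 1, 0],
    "hammer": [0, 1, 0],
    "cup":    [0, 0, 1],
    "bowl":   [0, 0, 1],
}

# Toy stand-in for a trained KGE: embeddings constructed so food items
# land near "kitchen" under a TransE-style translation h + r ~ t.
rel_emb = rng.normal(size=dim)    # "locatedAt"
tail_emb = rng.normal(size=dim)   # "kitchen"
ent_emb = {
    e: (tail_emb - rel_emb if feats[0] else rng.normal(size=dim) * 3)
       + rng.normal(size=dim) * 0.1
    for e, feats in entities.items()
}

def black_box_infer(head):
    """Black-box link predictor: is (head, locatedAt, kitchen) plausible?"""
    return int(np.linalg.norm(ent_emb[head] + rel_emb - tail_emb) < 1.0)

# Local dataset: perturb the query's head entity, record the black box's
# verdicts, then fit a shallow decision tree as the interpretable surrogate.
X = np.array(list(entities.values()))
y = np.array([black_box_infer(e) for e in entities])
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(surrogate, feature_names=attrs))

def explain(head):
    """Verbalize the surrogate's decision path as a templated explanation."""
    x = np.array([entities[head]])
    verdict = "plausible" if surrogate.predict(x)[0] else "implausible"
    clauses = []
    for node in surrogate.decision_path(x).indices:
        feat = surrogate.tree_.feature[node]
        if feat >= 0:  # skip leaves (feature index is negative there)
            present = x[0, feat] > surrogate.tree_.threshold[node]
            clauses.append(("" if present else "not ") + attrs[feat])
    return (f"({head}, locatedAt, kitchen) is {verdict} "
            f"because {head} is {' and '.join(clauses)}.")

print(explain("apple"))   # ... plausible because apple is food.
print(explain("hammer"))  # ... implausible because hammer is not food.
```

Keeping the surrogate shallow trades fidelity for readability; a fuller treatment would also measure how faithfully the tree reproduces the black box's verdicts on the local sample before trusting its explanations.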