Explaining Link Prediction Systems based on Knowledge Graph Embeddings

Rossi, Andrea; Merialdo, Paolo; Teofili, Tommaso
2022-01-01

Abstract

Link Prediction (LP) aims at tackling Knowledge Graph incompleteness by inferring new, missing facts from the already known ones. The rise of novel Machine Learning techniques has led researchers to develop LP models that represent Knowledge Graph elements as vectors in an embedding space. These models can outperform traditional approaches and can be employed in multiple downstream tasks; nonetheless, they tend to be opaque and are mostly regarded as black boxes. Their lack of interpretability limits our understanding of their inner mechanisms and undermines the trust that users can place in them. In this paper, we propose the novel Kelpie explainability framework. Kelpie can be applied to any embedding-based LP model independently of its architecture, and it explains predictions by identifying the combinations of training facts that have enabled them. Kelpie can extract two complementary types of explanations, which we dub necessary and sufficient. We describe both the structure and the implementation of Kelpie in detail, and thoroughly analyze its performance through extensive experiments. Our results show that Kelpie significantly outperforms baselines across almost all scenarios.
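To make the necessary-explanation idea in the abstract concrete, the Python sketch below trains a toy TransE-style embedding model on a handful of made-up facts and then brute-forces which training facts mentioning the head entity, once removed, worsen the rank of the predicted tail. The knowledge graph, the training loop, and the exhaustive remove-and-retrain search are all illustrative assumptions introduced here; they do not reproduce Kelpie's actual, far more efficient explanation procedure.

```python
import numpy as np

# Toy TransE-style scorer: plausibility of (h, r, t) is -||E[h] + R[r] - E[t]||.
# The facts, dimensions, and training loop are illustrative assumptions only.
rng = np.random.default_rng(0)
ENTITIES = ["Barack_Obama", "USA", "Honolulu", "Michelle_Obama"]
RELATIONS = ["nationality", "born_in", "spouse", "located_in"]
TRAIN = [
    ("Barack_Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "USA"),
    ("Barack_Obama", "spouse", "Michelle_Obama"),
    ("Michelle_Obama", "nationality", "USA"),
]
PREDICTION = ("Barack_Obama", "nationality", "USA")  # prediction to explain

E = {e: rng.normal(size=16) for e in ENTITIES}
R = {r: rng.normal(size=16) for r in RELATIONS}

def train(facts, epochs=200, lr=0.05):
    """Naively fit embeddings so that E[h] + R[r] ~ E[t] for each training fact."""
    for _ in range(epochs):
        for h, r, t in facts:
            grad = E[h] + R[r] - E[t]          # gradient of 0.5 * ||h + r - t||^2
            E[h] -= lr * grad
            R[r] -= lr * grad
            E[t] += lr * grad

def tail_rank(h, r, t):
    """Rank of the true tail t among all entities under the current embeddings."""
    scores = {e: -np.linalg.norm(E[h] + R[r] - E[e]) for e in ENTITIES}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(t) + 1

train(TRAIN)
base_rank = tail_rank(*PREDICTION)

# Brute-force "necessary explanation" search: drop one training fact featuring the
# head entity, retrain from scratch, and keep the facts whose removal worsens the rank.
necessary = []
head = PREDICTION[0]
for fact in [f for f in TRAIN if head in (f[0], f[2])]:
    E = {e: rng.normal(size=16) for e in ENTITIES}   # reset embeddings
    R = {r: rng.normal(size=16) for r in RELATIONS}
    train([f for f in TRAIN if f != fact])
    if tail_rank(*PREDICTION) > base_rank:
        necessary.append(fact)

print("baseline rank of predicted tail:", base_rank)
print("candidate necessary facts:", necessary)
```

Sufficient explanations would, roughly speaking, be searched symmetrically: sets of facts that, when added to the training data of other entities, lead the model to make the same prediction for those entities as well.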
Year: 2022
ISBN: 9781450392495
Rossi, A., Firmani, D., Merialdo, P., Teofili, T. (2022). Explaining Link Prediction Systems based on Knowledge Graph Embeddings. In SIGMOD/PODS '22: Proceedings of the 2022 International Conference on Management of Data (pp. 2062-2075). New York: Association for Computing Machinery. doi: 10.1145/3514221.3517887.
Files in this record:
File: 2022-sigmod.pdf
Access: open access
Type: Publisher's version (PDF)
License: DRM not defined
Size: 4.16 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11590/410639
Citations
  • PMC: n/a
  • Scopus: 24
  • Web of Science: 14