CERTEM: Explaining and Debugging Black-box Entity Resolution Systems with CERTA

Teofili, T.; Firmani, D.; Koudas, N.; Merialdo, P.; Srivastava, D.
2022

Abstract

Entity resolution (ER) aims at identifying record pairs that refer to the same real-world entity. Recent works have focused on deep learning (DL) techniques to solve this problem. While such works have brought tremendous improvements in effectiveness, understanding their matching predictions remains a challenge because of the intrinsic opaqueness of DL-based solutions. Interpreting and trusting the predictions made by ER systems is crucial for humans to employ such methods in decision-making pipelines. We demonstrate CERTEM, an explanation system for ER based on CERTA, a recently introduced explainability framework for ER that is able to provide both saliency explanations, which associate each attribute with a saliency score, and counterfactual explanations, which provide examples of values that can flip a prediction. In this demonstration we showcase how CERTEM can be effectively employed to better understand and debug the behavior of state-of-the-art DL-based ER systems on data from publicly available ER benchmarks.
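To make the two explanation types concrete, the following is a minimal, hypothetical sketch (not the CERTA API): a toy black-box matcher over attribute-value records, a perturbation-based saliency score per attribute, and a brute-force search for a counterfactual value edit that flips the prediction. All function and attribute names here are illustrative assumptions.

```python
# Hypothetical sketch of attribute-level saliency and counterfactual
# explanations for an ER matcher; this is NOT the CERTA implementation.

def toy_matcher(left: dict, right: dict) -> float:
    """Toy black-box ER model: fraction of shared attributes with equal values."""
    keys = left.keys() & right.keys()
    return sum(left[k] == right[k] for k in keys) / len(keys)

def saliency(left: dict, right: dict) -> dict:
    """Score each attribute by how much blanking it changes the match score."""
    base = toy_matcher(left, right)
    scores = {}
    for k in left:
        perturbed = {**right, k: None}  # drop attribute k on the right record
        scores[k] = abs(base - toy_matcher(left, perturbed))
    return scores

def counterfactual(left: dict, right: dict, threshold: float = 0.5):
    """Find a single attribute copy that flips the prediction, if one exists."""
    base_label = toy_matcher(left, right) >= threshold
    for k in left:
        edited = {**right, k: left[k]}  # copy the value from the left record
        if (toy_matcher(left, edited) >= threshold) != base_label:
            return k, left[k]           # this value flips the prediction
    return None

# Example record pair (a non-match under the toy model: only 1 of 3 attributes agree)
l = {"title": "iphone 13", "brand": "apple", "price": "799"}
r = {"title": "iphone 13", "brand": "appl",  "price": "899"}
print(saliency(l, r))        # "title" carries all the saliency
print(counterfactual(l, r))  # fixing "brand" flips the non-match to a match
```

Real DL-based explainers replace the exhaustive perturbation loop with probabilistic or learned search over the (much larger) space of value edits, but the output shapes — a per-attribute score map and a flipping value — are the same as above.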
Teofili, T., Firmani, D., Koudas, N., Merialdo, P., Srivastava, D. (2022). CERTEM: Explaining and Debugging Black-box Entity Resolution Systems with CERTA. Proceedings of the VLDB Endowment, 15(12), 3642-3645. doi:10.14778/3554821.3554864

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11590/418219
Citations
  • Scopus: 0