XINA: Explainable Instance Alignment using Dominance Relationship
2018
Over the past few years, knowledge bases (KBs) such as DBpedia, Freebase, and YAGO have accumulated a massive amount of knowledge from web data. Despite their seemingly large size, however, individual KBs often lack comprehensive information on any given domain. For example, over 70% of people on Freebase lack information on their place of birth. The complementary nature of different KBs therefore motivates their integration through a process of aligning instances. Meanwhile, since application-level machine systems, such as medical diagnosis systems, rely heavily on KBs, it is necessary to provide users with trustworthy reasons why alignment decisions are made. To address this problem, we propose a new paradigm, explainable instance alignment (XINA), which provides user-understandable explanations for alignment decisions. Specifically, given an alignment candidate, XINA replaces the existing scalar representation of an aggregated score with decision- and explanation-vector spaces, for machine decisions and user understanding, respectively. To validate XINA, we perform extensive experiments on real-world KBs and show that XINA achieves performance comparable to state-of-the-art methods, with far less human effort.
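To make the abstract's core idea concrete, here is a minimal illustrative sketch of the contrast between a single aggregated similarity score and a per-attribute vector that supports both a machine decision and a human-readable explanation. All names (the attribute list, weights, and threshold) are assumptions for illustration only, not XINA's actual formulation.

```python
# Illustrative sketch only: contrasts a scalar aggregated score with a
# per-attribute similarity vector used for both decision and explanation.
# Attribute names, weights, and the threshold are hypothetical.

def similarity_vector(inst_a, inst_b, attributes):
    """Per-attribute similarity: 1.0 if values match, else 0.0."""
    return [1.0 if inst_a.get(attr) == inst_b.get(attr) else 0.0
            for attr in attributes]

def align_decision(vec, weights, threshold=0.5):
    """Machine decision: weighted aggregation of the similarity vector."""
    score = sum(w * v for w, v in zip(weights, vec))
    return score >= threshold

def explanation(vec, attributes):
    """User-facing explanation: attributes that support the alignment."""
    return [attr for attr, v in zip(attributes, vec) if v > 0]

attrs = ["name", "birth_place", "birth_year"]
a = {"name": "Ada Lovelace", "birth_place": "London", "birth_year": 1815}
b = {"name": "Ada Lovelace", "birth_place": "London"}

vec = similarity_vector(a, b, attrs)
print(align_decision(vec, [0.5, 0.3, 0.2]))  # True: weighted score 0.8
print(explanation(vec, attrs))               # ['name', 'birth_place']
```

The point of keeping the vector rather than only the scalar is that the same evidence feeds both outputs: the weighted sum drives the yes/no alignment decision, while the nonzero components double as the explanation shown to the user.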