A human-in-the-loop framework to handle implicit bias in crowdsourced KGs
2020
Crowd-sourced Knowledge Graphs (KGs) may be biased: some biases originate from factual errors, while others reflect differing points of view. How can biases in crowd-sourced KGs be identified and measured? How can factual errors then be told apart from legitimate differences of viewpoint? And how can these steps be combined into a human-in-the-loop framework?
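As a minimal illustration of the first step the abstract raises — surfacing candidate conflicts that a human reviewer could then triage as factual error or legitimate viewpoint difference — the sketch below groups KG triples by (subject, predicate) and flags groups with more than one distinct object. This is an assumption-laden toy, not the paper's method; all names and data are hypothetical.

```python
from collections import defaultdict

def find_conflicts(triples):
    """Group triples by (subject, predicate) and return the groups
    that carry more than one distinct object: candidate conflicts
    for a human reviewer to classify (factual error vs. viewpoint)."""
    groups = defaultdict(set)
    for s, p, o in triples:
        groups[(s, p)].add(o)
    return {sp: objs for sp, objs in groups.items() if len(objs) > 1}

# Hypothetical crowd-sourced triples: one entry is a plain error,
# another reflects a genuine difference of viewpoint.
triples = [
    ("Jerusalem", "capitalOf", "Israel"),
    ("Jerusalem", "capitalOf", "Palestine"),
    ("Paris", "capitalOf", "France"),
]
print(find_conflicts(triples))
# → {('Jerusalem', 'capitalOf'): {'Israel', 'Palestine'}}
```

Note that this detector is deliberately symmetric: it cannot tell which conflicts are errors and which are viewpoints — that distinction is exactly what the human-in-the-loop step would supply.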