
Crowdsourcing Ontology Verification

2013 
As the scale and complexity of ontologies increase, so too do errors and engineering challenges. It is frequently unclear, however, to what degree extralogical ontology errors negatively affect the application that the ontology underpins. For example, "Shoe SubClassOf Foot" may be logically consistent, yet wrong under a human interpretation. Such errors, which reasoning cannot catch, are likely to be domain-specific, so identifying salient ontology errors requires consideration of the domain. Both automated and manual methods exist for ontology quality assurance. However, these methods do not readily scale as ontology size increases, and they do not necessarily identify the most salient extralogical errors. Recently, crowdsourcing has enabled solutions to complex problems that computers alone cannot solve; for instance, human workers can identify objects in images quickly, at scale, and more accurately than machines. Crowdsourcing thus presents an opportunity to develop ontology quality assurance methods that overcome the current limitations of scalability and applicability. In this work, I aim (1) to determine the effect of extralogical ontology errors in an example domain, (2) to develop a scalable framework for crowdsourcing ontology verification that overcomes the limitations of current ontology QA methods, and (3) to apply this framework to ontologies in use. I will then evaluate both the method itself and its effect in the context of a specific domain. As an example domain, I will use biomedicine, which relies on many large-scale ontologies. This work will thus enable scalable quality assurance for extralogical errors in biomedical ontologies.
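To make the verification idea concrete, the following is a minimal sketch (not from the paper) of how a single axiom-verification microtask might work: a candidate SubClassOf axiom is phrased as a plain-English yes/no question, posed to several workers, and accepted or flagged by majority vote. The function `collect_votes` is a hypothetical stand-in for a call to a crowdsourcing platform such as Amazon Mechanical Turk; here it is stubbed with fixed answers so the example runs.

```python
# Sketch of crowdsourced verification for one ontology axiom.
# Workflow: phrase the axiom as a question, gather worker votes,
# accept or flag the axiom by majority vote.
from collections import Counter

def axiom_to_question(sub: str, obj: str) -> str:
    """Phrase a SubClassOf axiom as a plain-English yes/no question."""
    return f"Is every {sub} a kind of {obj}? (yes/no)"

def collect_votes(question: str, n_workers: int = 5) -> list[str]:
    """Hypothetical platform call, stubbed with fixed answers.

    In practice this would post `question` as a microtask and wait
    for `n_workers` responses.
    """
    return ["no", "no", "yes", "no", "no"][:n_workers]

def verify_axiom(sub: str, obj: str, n_workers: int = 5) -> bool:
    """Return True if the crowd's majority vote accepts the axiom."""
    votes = collect_votes(axiom_to_question(sub, obj), n_workers)
    majority, _count = Counter(votes).most_common(1)[0]
    return majority == "yes"

# The abstract's example: logically consistent but wrong to a human.
if not verify_axiom("Shoe", "Foot"):
    print("Flagged for review: Shoe SubClassOf Foot")
```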