Quality Measurements of Error Annotation - Ensuring Validity through Reliability

2015 
Major obstacles to achieving high levels of reliability (and, by extension, validity) in the error annotation of learner corpora range from the difficulty of defining errors in general, through the lack of an error taxonomy sufficiently applicable to corpus annotation and the insufficiency of any fixed linguistic norm as a background for tagging, to the lack of well-defined measures of annotation quality. The paper first examines the theoretical issues behind the definition of an error. It then expands the discussion with a more practically applicable account of errors aimed at error annotation. It goes on to offer a more robust error taxonomy that could address the issues of interpretability inherent in linguistic categorization and ensure greater consistency. Finally, the paper suggests an alternative definition of an error suitable for corpus annotation, based on inter-annotator agreement and intended to serve as the primary indicator of validity.
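
The abstract treats inter-annotator agreement as the primary indicator of validity but does not name a specific coefficient. As an illustration only, the sketch below computes Cohen's kappa, one standard chance-corrected agreement measure for two annotators; the error categories and both annotators' tag sequences are hypothetical, invented for the example.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators tag identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's marginal tag frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[t] * counts_b[t] for t in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical error tags assigned by two annotators to the same ten tokens.
ann_a = ["AGR", "TENSE", "OK", "OK", "PREP", "AGR", "OK", "TENSE", "OK", "PREP"]
ann_b = ["AGR", "OK",    "OK", "OK", "PREP", "AGR", "OK", "TENSE", "OK", "AGR"]
print(f"kappa = {cohens_kappa(ann_a, ann_b):.2f}")  # kappa = 0.71
```

Kappa discounts the raw agreement rate by the agreement expected from each annotator's tag distribution alone, so a high value signals agreement beyond coincidence, which is what makes it usable as a reliability-based quality measure.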