Validating Research-Abstract Writing Assessment Through Latent Regression Modeling and Raters' Lenses

2019 
This study validates the Research-Abstract Writing Assessment (RAWA), a measure with two rating scales for applied linguistics: a global-move scale of rhetorical purpose and a local-pattern scale of language use, each scored on levels ranging from 0 to 5. The study adopted an embedded mixed-methods design that combined a quantitative latent regression model (LRM), testing how the RAWA responses of 60 examinees (30 EFL doctoral students and 30 EFL master's students) can be explained by examinee-group competence, the scale-by-level difficulty of the two scales, and the expertise of five raters, with qualitative interviews on the five raters' perceptions. The LRM results revealed a scale-by-level difficulty effect: across the two scales, level 1 and level 5 of the global-move scale were the easiest and the most difficult, respectively. The expert raters assigned lower scores; they also adopted the advanced subscales (i.e., content elements and brevity) as criteria and monitored their own judgments while rating. The findings reveal the sub-competences of research-abstract writing: a global-move sub-competence comprising move and content elements, and a local-pattern sub-competence comprising language use and brevity. Pedagogically, EFL graduate students should further develop the sub-competences of content elements and brevity once they have mastered the basic sub-competences of move and language use.
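The abstract names the explanatory facets (examinee group, scale-by-level difficulty, rater) but not the model's functional form. A minimal sketch, assuming a many-facet Rasch-type latent regression with an adjacent-category (partial-credit) link, is given below; all symbols are illustrative and not taken from the paper:

    % Hypothetical specification, not the paper's own:
    %   theta_n   latent writing competence of examinee n
    %   gamma     regression effect of group membership G_n (doctoral vs. master's)
    %   delta_ik  difficulty of level k on scale i (global move, local pattern)
    %   rho_j     severity of rater j
    \[
    \log \frac{P(X_{nij} = k)}{P(X_{nij} = k - 1)}
      = \theta_n + \gamma\, G_n - \delta_{ik} - \rho_j
    \]

Under this reading, the reported scale-by-level difficulty effect corresponds to the ordering of the \(\delta_{ik}\) estimates (smallest at level 1 of the global-move scale, largest at level 5), and the expert raters' lower scores correspond to larger severity estimates \(\rho_j\).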