Making Sense of Online Discussions: Can Automated Reports Help?

2021 
Enabling healthier online deliberation around issues of public concern is an increasingly vital challenge in today’s society. Two fundamental components of healthier deliberation are: i. people’s capability to make sense of what they read, so that their contributions can be relevant; and ii. the improvement of the overall quality of the debate, so that noise is reduced and useful signals can inform collective decision making. Platform designers often resort to computational aids to improve these two processes. In this paper, we examine automated reporting as a promising means of improving sensemaking in discussion platforms. We compared three approaches to automated reporting: an abstractive summariser, a template report and an argumentation highlighting system. We then evaluated participants’ improvements in sensemaking and their perception of the overall quality of the debate. The study suggests that argument mining technologies are particularly promising computational aids for improving the sensemaking and perceived quality of online discussion, thanks to their capability to combine computational models for automated reasoning with users’ cognitive needs and expectations of automated reporting.