On the cost-effectiveness of neural and non-neural approaches and representations for text classification: A comprehensive comparative study

2021 
Abstract
This article makes two major contributions. First, we present the results of a critical analysis of recent scientific articles on neural and non-neural approaches and representations for automatic text classification (ATC). This analysis focuses on assessing the scientific rigor of such studies. It reveals a profusion of potential issues related to the experimental procedures, including: (i) use of inadequate experimental protocols, including the absence of repetitions to assess variability and generalization; (ii) lack of statistical treatment of the results; (iii) lack of details on hyperparameter tuning, especially for the baselines; and (iv) use of inadequate measures of classification effectiveness (e.g., accuracy with skewed class distributions). Second, we provide some organization and grounding to the field by performing a comprehensive and scientifically sound comparison of recent neural and non-neural ATC solutions. Our study provides a more complete picture by looking beyond classification effectiveness, taking the trade-off between effectiveness and model cost (i.e., training time) into account. Our evaluation is guided by scientific rigor, which, as our literature review shows, is missing in a large body of work. Our experimental results, based on more than 1500 measurements, reveal that on the smaller datasets, the simplest and cheapest non-neural methods are among the best performers. On the larger datasets, neural Transformers perform better in terms of classification effectiveness. However, when compared to the best (properly tuned) non-neural solutions, the gains in effectiveness are not substantial, especially considering the much longer training times (up to 23x slower). Our findings call for self-reflection on best practices in the field, from the way experiments are conducted and analyzed to the choice of proper baselines for each situation and scenario.
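As a hedged illustration of point (iv) above, the following minimal Python sketch (using scikit-learn; not part of the study's code, and with purely hypothetical labels and predictions) shows how plain accuracy can look strong on a skewed binary dataset while macro-averaged F1 exposes a classifier that ignores the minority class.

```python
# Minimal sketch: accuracy vs. macro-F1 on a skewed class distribution.
# The labels and predictions below are hypothetical, for illustration only.
from sklearn.metrics import accuracy_score, f1_score

# 95 negative examples, 5 positive; the classifier predicts only the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))                              # 0.95 -- looks strong
print(f1_score(y_true, y_pred, average="macro", zero_division=0))  # ~0.49 -- reveals the failure
```

Macro-F1 averages the per-class F1 scores, so the minority class contributes equally and the degenerate majority-class predictor is penalized, which is why the article flags accuracy alone as inadequate under skewed distributions.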