What Lies Beneath: A Note on the Explainability of Black-box Machine Learning Models for Road Traffic Forecasting

2019 
Traffic flow forecasting is widely regarded as an essential gear in the complex machinery underlying Intelligent Transport Systems, and a critical component of avant-garde Automated Traffic Management Systems. Research in this area has stimulated vibrant activity, yielding a plethora of new forecasting methods contributed to the community every year. Efforts in this domain are mainly oriented towards developing prediction models featuring ever-growing levels of performance and/or computational efficiency. After the shift towards Artificial Intelligence that gradually took place in the modeling sphere of traffic forecasting, predictive schemes have reaped the benefits of applied machine learning, but have also incurred some caveats. The adoption of highly complex, black-box models has reduced the comprehensibility of forecasts: even though such models perform better, they are more opaque to ITS practitioners, which hinders their practical adoption. In this paper we propose the adoption of explainable Artificial Intelligence (xAI) tools currently used in other domains in order to extract further knowledge from black-box traffic forecasting models. In particular, we showcase the utility of xAI to unveil the knowledge extracted by Random Forests and Recurrent Neural Networks when predicting real traffic. The obtained results are insightful and suggest that traffic forecasting models should be analyzed from viewpoints beyond prediction accuracy or any other similar regression score, due to the different treatment each algorithm gives to input variables: even with the same nominal score, some methods exploit inner knowledge that others disregard.
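The abstract does not commit to a specific xAI toolkit, so the sketch below is only a rough illustration of the kind of post-hoc analysis it advocates: fitting a black-box regressor (here a Random Forest) on lagged traffic readings and using a model-agnostic attribution method (permutation importance) to rank the input lags the model actually exploits. The synthetic data, lag window, and choice of importance method are illustrative assumptions, not the paper's experimental setup.

```python
# Hypothetical sketch: post-hoc explanation of a black-box traffic
# flow forecaster via permutation importance. Data and lag choices
# are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic "traffic flow" series with a daily cycle plus noise
# (288 samples/day at a 5-minute resolution).
t = np.arange(5000)
flow = 100 + 40 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

# Supervised framing: predict flow at time t from the previous 6 lags.
n_lags = 6
X = np.column_stack(
    [flow[lag : len(flow) - n_lags + lag] for lag in range(n_lags)]
)
y = flow[n_lags:]

split = int(0.8 * len(y))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

# Model-agnostic attribution: how much does shuffling each lag
# degrade held-out performance?
result = permutation_importance(
    model, X[split:], y[split:], n_repeats=10, random_state=0
)
for lag in range(n_lags):
    print(f"t-{n_lags - lag}: importance = {result.importances_mean[lag]:.3f}")
```

Ranking lags this way hints at which portions of the recent past the model genuinely relies on, which is the sort of insight the authors argue should complement raw regression scores when comparing forecasting algorithms.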