XAI-AV: Explainable Artificial Intelligence for Trust Management in Autonomous Vehicles

2021 
Artificial intelligence (AI) is among the most widely adopted technologies, with applications spanning many fields, including intelligent transportation systems (ITS), medicine, healthcare, and military operations. One such application is autonomous vehicles (AVs), which fall under AI in ITS. Vehicular ad hoc networks (VANETs) make communication between AVs in the system possible. The performance of each vehicle depends on the information exchanged between AVs, so false or malicious information can perturb the whole system and lead to severe consequences. Hence, the detection of malicious vehicles is of utmost importance. We use machine learning (ML) algorithms to predict flaws in the transmitted data. Recent work using a stacking ML approach reported an accuracy of 98.44%. In this paper, a decision-tree-based random forest is used to solve the problem, achieving an accuracy of 98.43% and an F1 score of 98.5% on the VeRiMi dataset. Explainable AI (XAI) comprises methods and techniques for making complex black-box ML and deep learning (DL) models more interpretable and understandable. We use a particular model interface together with evaluation metrics to explain and measure the model's performance. Applying XAI to these complex AI models can ensure the cautious use of AI for AVs.
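
As a minimal sketch of the classification pipeline the abstract describes (a random forest detector evaluated with accuracy and F1, plus a simple feature-importance explanation in the XAI spirit), assuming scikit-learn and a tabular export of VeRiMi; the file name, label column, and split parameters below are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of malicious-vehicle detection with a random forest.
# The dataset layout is hypothetical: assumes VeRiMi exported as a CSV
# with feature columns and a binary "malicious" label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("verimi.csv")          # hypothetical file name
X = df.drop(columns=["malicious"])      # hypothetical label column
y = df["malicious"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.4f}")
print(f"F1 score: {f1_score(y_test, pred):.4f}")

# A simple global explanation: rank features by the forest's
# impurity-based importances (one common XAI technique; the paper's
# exact explanation method is not specified in the abstract).
ranked = sorted(zip(X.columns, clf.feature_importances_),
                key=lambda t: -t[1])
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")
```

Impurity-based importances are only one way to interpret a random forest; per-prediction attribution methods such as SHAP or LIME would serve the same explanatory goal.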