An Explainable Intelligence Model for Security Event Analysis

2019 
Monitoring systems log a huge volume of events. Analysts typically do not audit or trace the log files, which record the most significant events, until an incident occurs. Human analysis is tedious and error-prone given the vast volume of log files, which are stored in a “machine-friendly” format. Analysts have to derive the context of an incident from prior knowledge, finding the events relevant to the incident in order to recognize why it happened. Although security tools have been developed to ease the analysis process by providing visualization techniques and minimizing human interaction, far too little attention has been paid to interpreting security incidents in a “human-friendly” format. In addition, current detection patterns and rules are not mature enough to recognize early breaches that have not yet caused any damage. In this paper, we present an Explainable AI model that assists analysts’ judgement in inferring what has happened from security event logs. The proposed Explainable AI model includes storytelling as a novel knowledge representation to present sequences of events that are automatically discovered from the log file. For the automated discovery of sequential events, an Apriori-like algorithm that mines temporal patterns is used. This effort focuses on security events that convey both short-lived and long-lived activities. The experimental results, validated on activities during security configuration compliance checking on a Windows system, demonstrate the potential and advantages of the proposed Explainable AI model on security logs.
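
As an illustration of the kind of Apriori-like temporal pattern mining the abstract refers to, the following is a minimal sketch, not the authors' implementation: it performs a level-wise search for frequent ordered event patterns over windowed event sequences. The event IDs, example windows, and support threshold are hypothetical.

```python
from collections import Counter


def is_subsequence(pattern, sequence):
    """Return True if `pattern` occurs in `sequence` preserving temporal order."""
    it = iter(sequence)
    return all(event in it for event in pattern)


def mine_frequent_sequences(sequences, min_support):
    """Apriori-style level-wise mining of frequent ordered event patterns."""
    # Level 1: frequent single events (counted once per window).
    counts = Counter(event for seq in sequences for event in set(seq))
    frequent = {(e,): c for e, c in counts.items() if c >= min_support}
    result = dict(frequent)
    while frequent:
        # Candidate generation: join frequent (k-1)-patterns that overlap on k-2 events.
        candidates = set()
        for a in frequent:
            for b in frequent:
                if a[1:] == b[:-1]:
                    candidates.add(a + (b[-1],))
        # Support counting: keep candidates that occur as an ordered subsequence
        # of at least `min_support` windows.
        frequent = {}
        for cand in candidates:
            support = sum(1 for seq in sequences if is_subsequence(cand, seq))
            if support >= min_support:
                frequent[cand] = support
        result.update(frequent)
    return result


if __name__ == "__main__":
    # Hypothetical windows of Windows security event IDs (logon, audit policy change,
    # privileged logon, logoff); real input would come from parsed event logs.
    windows = [
        ["4624", "4719", "4672", "4634"],
        ["4624", "4672", "4719", "4634"],
        ["4624", "4719", "4634"],
    ]
    for pattern, support in sorted(mine_frequent_sequences(windows, 2).items()):
        print(pattern, support)
```

The mined frequent sequences could then be ordered and rendered as a story-like chain of events for the analyst, in line with the storytelling representation described above.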