Model, Data and Reward Repair: Trusted Machine Learning for Markov Decision Processes

2018 
When machine learning (ML) models are used in safety-critical or mission-critical applications (e.g., self-driving cars, cybersecurity, surgical robotics), it is important to ensure that they provide some high-level guarantees (e.g., safety, liveness). We introduce a paradigm called Trusted Machine Learning (TML) for making ML models more trustworthy. We use Markov Decision Processes (MDPs) as the underlying dynamical model and outline three TML approaches: (1) Model Repair, wherein we modify the learned model directly; (2) Data Repair, wherein we modify the data so that re-learning from the modified data results in a trusted model; and (3) Reward Repair, wherein we modify the reward function of the MDP to satisfy the specified logical constraint. We show how these repairs can be done efficiently for probabilistic models (e.g., MDPs) when the desired properties are expressed in an appropriate fragment of logic, such as temporal logic (e.g., Probabilistic Computation Tree Logic, PCTL), first-order logic, or propositional logic. We illustrate our approaches on case studies from multiple domains, e.g., a car controller for obstacle avoidance and a query routing controller in a wireless sensor network.
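To make the Model Repair idea concrete, here is a minimal sketch (not the paper's algorithm) on a hypothetical 3-state Markov chain induced by a fixed policy on an MDP. The states, transition probabilities, the parameter `p_bad`, and the halving-based repair loop are all illustrative assumptions; the sketch only shows the shape of checking a PCTL-style reachability bound and adjusting a model parameter until the bound holds.

```python
import numpy as np

def reach_unsafe_prob(p_bad):
    # Hypothetical 3-state chain: state 0 (start), state 1 (safe goal,
    # absorbing), state 2 (unsafe, absorbing). From state 0 we stay with
    # probability 0.5, reach the unsafe state with probability p_bad, and
    # reach the goal otherwise.
    P = np.array([
        [0.5, 0.5 - p_bad, p_bad],
        [0.0, 1.0,         0.0],
        [0.0, 0.0,         1.0],
    ])
    # Probability x0 of eventually reaching state 2 from state 0 satisfies
    # x0 = P[0,0]*x0 + P[0,2]  (since x1 = 0 and x2 = 1), so in closed form:
    return P[0, 2] / (1.0 - P[0, 0])

# PCTL-style safety requirement: P_{<=0.1} [ F unsafe ].
threshold = 0.1

# Model Repair (illustrative): shrink the unsafe-transition parameter
# until the reachability bound is satisfied.
p = 0.2
while reach_unsafe_prob(p) > threshold:
    p *= 0.5

assert reach_unsafe_prob(p) <= threshold
```

In the paper's setting the repair is posed over the whole transition structure subject to a logical constraint; this sketch collapses that to a single scalar parameter purely for illustration.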