A stochastic theory for evidence aggregation

2008 
The problem of evidence aggregation arises when opinions are provided by multiple experts. Current evidence aggregation approaches treat fusion as a one-shot problem, disregarding how conditions evolve over time. In this work, we propose a new theory for evidence aggregation that formulates the aggregation problem as an estimation/filtering problem. The aggregation problem is viewed as a partially known Markov process: the overall belief is modeled as a hidden state, with known dynamics but not directly observable, evolving in a linear state space. Diagnostic algorithms provide noisy observations of the hidden states of the belief space. We demonstrate the accuracy and variability of the proposed approach under conditions of sensor noise and diagnostic-algorithm drop-out. Further, we provide empirical evidence of convergence and of management of the combinatorial complexity associated with handling multiple fault hypotheses.
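The filtering view described above can be illustrated with a minimal sketch. The following scalar Kalman filter is an assumption for illustration, not the authors' implementation: the overall belief is the hidden state, each diagnostic algorithm supplies a noisy observation of it, and a `None` measurement models diagnostic-algorithm drop-out (the filter then runs its prediction step only). All parameter names (`a`, `q`, `r`) and values are hypothetical.

```python
def kalman_fuse(observations, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Fuse a sequence of noisy belief observations with a scalar Kalman filter.

    observations: list where each item is a measurement or None (drop-out).
    a: state-transition coefficient of the linear belief dynamics.
    q: process-noise variance; r: measurement-noise variance.
    x0, p0: initial belief estimate and its variance.
    Returns the filtered belief estimate after each time step.
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        # Predict: propagate the belief through the linear state model.
        x = a * x
        p = a * p * a + q
        if z is not None:
            # Update: correct the prediction with the noisy observation.
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)
            p = (1.0 - k) * p
        estimates.append(x)
    return estimates


if __name__ == "__main__":
    # Noisy observations of a belief near 0.8, with one drop-out (None).
    obs = [0.9, 0.7, None, 0.85, 0.78]
    print([round(e, 3) for e in kalman_fuse(obs)])
```

During the drop-out step the filter simply carries its prediction forward with increased variance, which is how the approach tolerates missing diagnostic outputs.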