Knowing the Uncertainty in Human Behavior Classification via Variational Inference and Autoencoder

2019 
Deep learning techniques have been introduced to radar-based human behavior research in recent years. Unlike manual feature engineering methods, deep neural network models can learn features from the raw sensor input automatically, demonstrating competitive performance and scalability. However, most existing deep learning models are deterministic functions. Such models are forced to make a classification at prediction time even when the input is far different from anything in the training set. In this work, we propose a deep probabilistic model that explicitly represents the uncertainty of its classifications. Specifically, we extract features from micro-Doppler spectrograms with a convolutional autoencoder and then introduce uncertainty into the weights of the classification module via variational inference. In this way, every weight in the classifier is represented by a probability distribution over possible values rather than a single fixed value. The flatter the resulting predictive distribution, the more uncertain the classification result. We can therefore quantify the uncertainty of each classification and decide whether or not to trust the model's decision.
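
As a rough illustration of this idea (not the authors' implementation), the sketch below pairs a small convolutional encoder with a mean-field variational (Bayesian) linear classifier in PyTorch. The layer sizes, the softplus parameterization of the weight standard deviations, the KL term, and the helper names (ConvEncoder, BayesianLinear, predict_with_uncertainty) are illustrative assumptions. Averaging several stochastic forward passes yields a predictive distribution whose entropy indicates how much to trust the classification.

# Minimal sketch, assuming a PyTorch setup; not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Encoder half of a convolutional autoencoder: spectrogram -> feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return self.fc(h)

class BayesianLinear(nn.Module):
    """Linear layer whose weights are Gaussian distributions (mean-field VI)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.w_rho = nn.Parameter(torch.full((out_dim, in_dim), -4.0))
        self.b_mu = nn.Parameter(torch.zeros(out_dim))
        self.b_rho = nn.Parameter(torch.full((out_dim,), -4.0))

    def forward(self, x):
        # Reparameterization trick: sample one set of weights per forward pass.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

    def kl(self):
        # Closed-form KL divergence to a standard-normal prior (added to the training loss).
        def _kl(mu, sigma):
            return (sigma.pow(2) + mu.pow(2) - 1 - 2 * sigma.log()).sum() / 2
        return _kl(self.w_mu, F.softplus(self.w_rho)) + _kl(self.b_mu, F.softplus(self.b_rho))

def predict_with_uncertainty(encoder, classifier, x, n_samples=20):
    """Average several stochastic passes; high entropy of the mean signals low confidence."""
    feats = encoder(x)
    probs = torch.stack([F.softmax(classifier(feats), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(0)
    entropy = -(mean * mean.clamp_min(1e-9).log()).sum(-1)
    return mean, entropy

A practical use of the entropy value would be to set a threshold above which the classification is flagged as unreliable rather than acted upon, which is the "choose whether to trust the decision" behavior described above.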