Mitigating fooling with competitive overcomplete output layer neural networks

2017 
Although the introduction of deep learning has led to significant performance improvements in many machine learning applications, several recent studies have revealed that deep feedforward models are easily fooled. Fooling results, in effect, from neural networks overgeneralizing in regions far from the training data. To circumvent this problem, this paper proposes a novel elaboration of standard neural network architectures called the competitive overcomplete output layer (COOL) neural network. Experiments demonstrate the effectiveness of COOL by visualizing the behavior of COOL networks on a low-dimensional artificial classification problem and by applying the architecture to a high-dimensional vision domain (MNIST).
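The abstract names the COOL architecture but does not spell out its mechanics. Below is a minimal PyTorch sketch of one plausible reading of the idea: each class is represented by omega output units rather than one, a single softmax is taken over all units so the members of a class compete for probability mass, and a class score is recovered by aggregating its members. The class name COOLOutputLayer and the parameter names in_features and omega are illustrative choices, not identifiers from the paper, and the omega**omega rescaling is an assumption based on the fact that each member can contribute at most 1/omega under the shared softmax.

```python
import torch
import torch.nn as nn


class COOLOutputLayer(nn.Module):
    """Sketch of a competitive overcomplete output layer (COOL).

    Each of the `num_classes` classes is assigned `omega` output units
    instead of one. A single softmax over all num_classes * omega units
    makes the member units of a class compete with one another, so a
    class can only score highly when all of its members agree, which
    tends to happen near the training data rather than in regions far
    from it.
    """

    def __init__(self, in_features: int, num_classes: int, omega: int = 5):
        super().__init__()
        self.num_classes = num_classes
        self.omega = omega
        self.fc = nn.Linear(in_features, num_classes * omega)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One softmax over ALL member units at once (the "competition").
        z = torch.softmax(self.fc(x), dim=-1)
        # Group the member units by class: (batch, num_classes, omega).
        z = z.view(-1, self.num_classes, self.omega)
        # Aggregate each class's members by their product. The omega**omega
        # factor (an assumed normalization) rescales so the best attainable
        # per-class score is 1, reached when a class's members share the
        # probability mass equally at 1/omega each.
        return (self.omega ** self.omega) * z.prod(dim=-1)
```

As a usage sketch, `COOLOutputLayer(128, 10)` would replace a standard 10-way softmax head on 128-dimensional features, and `layer(torch.randn(4, 128))` yields a (4, 10) tensor of per-class scores in [0, 1].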