Denoising of Video Frames resulting from Video Interface Leakage using Deep Learning for efficient Optical Character Recognition

2021 
The present work demonstrates the application of Deep Neural Networks to the automatic recovery of information from unintended electromagnetic emanations emitted by video interfaces. A dataset of 18,194 captured frames is generated, which allows training two Convolutional Neural Networks for the denoising of captured video frames. After processing the noisy frames with the CNNs, a significant improvement is measured in the Peak Signal-to-Noise Ratio (PSNR). Consequently, text can be automatically extracted using Optical Character Recognition (OCR), allowing us to recover 68% of the text from our validation dataset. The proposed approach aims to evaluate the risk introduced by modern Deep Learning algorithms when applied to these captures, showing that compromising electromagnetic leakage represents a non-negligible threat to information security.
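
The PSNR improvement mentioned above follows the standard definition of the metric. As a minimal illustrative sketch (the function name, NumPy usage, and the 8-bit peak value of 255 are assumptions for illustration, not taken from the paper), a frame-level PSNR between a reference frame and a denoised capture could be computed as:

import numpy as np

def psnr(reference, denoised, max_value=255.0):
    # Mean squared error between the clean reference frame and the denoised capture
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        # Identical frames: PSNR is unbounded
        return float("inf")
    # PSNR in decibels: 10 * log10(MAX^2 / MSE)
    return 10.0 * np.log10((max_value ** 2) / mse)

Higher PSNR values after CNN denoising indicate frames closer to the original screen content, which in turn makes OCR-based text recovery more reliable.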