A blind deconvolution model for scene text detection and recognition in video

2016 
Text detection and recognition in poor-quality video is a challenging problem due to unpredictable blur and distortion caused by camera and text movement, which degrades the overall performance of text detection and recognition methods. This paper presents a combined quality metric for estimating the degree of blur in a video or image. The proposed method then introduces a blind deconvolution model that enhances edge intensity by suppressing blurred pixels. The proposed deblurring model is compared with other state-of-the-art models to demonstrate its superiority. In addition, to validate the usefulness and effectiveness of the proposed model, we conducted text detection and recognition experiments on blurred images classified by the proposed model from standard video databases, namely ICDAR 2013, ICDAR 2015, and YVT, and from standard natural scene image databases, namely ICDAR 2013, SVT, and MSER. Text detection and recognition results on both blurred and deblurred videos/images show that the proposed model improves performance significantly.

Highlights:
    • We explore quality metrics for classifying blurred text images/videos.
    • The proposed deblurring model explores a Gaussian-weighted L1 norm in different ways.
    • Kernel-based energy minimization enhances edge strength.
    • Experiments evaluating and validating the proposed deblurring model are presented.
    • Experimental results show that the proposed method is useful for text detection.
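The abstract describes classifying frames as blurred before deblurring them. The paper's combined quality metric is not reproduced here; as a generic illustration of blur estimation, a minimal sketch using the variance of the Laplacian response (a common sharpness proxy, where lower variance indicates stronger blur; the `threshold` value is an assumption for illustration) could look like:

```python
import numpy as np

def laplacian_variance(image):
    """Variance of the 3x3 Laplacian response: a common sharpness proxy.

    Lower values indicate stronger blur. This is a generic metric, not
    the combined quality metric proposed in the paper.
    """
    img = np.asarray(image, dtype=np.float64)
    # Laplacian over interior pixels, computed with array slicing.
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4.0 * img[1:-1, 1:-1])
    return lap.var()

def classify_blur(image, threshold=100.0):
    """Flag an image as blurred when sharpness falls below a threshold.

    The threshold is data-dependent and illustrative only.
    """
    return laplacian_variance(image) < threshold
```

In practice such a score would be computed per frame, and frames falling below the threshold would be routed to the deblurring stage.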
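The paper's deblurring model combines a Gaussian-weighted L1 prior with kernel-based energy minimization, which is not reproduced here. As a reference point for what deconvolution-based deblurring involves, a textbook Richardson-Lucy iteration (assuming a *known* Gaussian blur kernel, i.e. the non-blind case, unlike the paper's blind setting) can be sketched in NumPy:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2-D Gaussian blur kernel (illustrative parameters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def fft_convolve(img, kernel):
    """Circular 2-D convolution via FFT, with the kernel centered at the origin."""
    kh, kw = kernel.shape
    pad = np.zeros_like(img)
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(blurred, kernel, iterations=30, eps=1e-8):
    """Classic Richardson-Lucy deconvolution (non-blind, noiseless sketch)."""
    estimate = np.full_like(blurred, blurred.mean())
    flipped = kernel[::-1, ::-1]
    for _ in range(iterations):
        conv = fft_convolve(estimate, kernel)
        ratio = blurred / (conv + eps)          # data-fidelity ratio
        estimate = estimate * fft_convolve(ratio, flipped)  # multiplicative update
    return estimate
```

A blind model such as the paper's must additionally estimate the kernel itself, typically by alternating between kernel and image updates under an energy-minimization objective.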