Investigation of adaptive local threshold segmentation in context of 3D-handwriting forensics

2016 
Image segmentation plays an important role in digitized crime scene forensics. Particularly in the context of modern high-resolution, contact-less and non-destructive acquisition and analysis of handwriting impression traces by means of 3D sensors, one main challenge is the separation of writing trace areas from non-trace areas by image segmentation. In earlier work, the authors presented the general, yet only qualitative, feasibility of doing so with an initial processing pipeline based on data acquisition, pre-processing and a global segmentation approach. However, quantitative measurements of segmentation quality have not yet been studied, nor have alternative strategies for 3D image segmentation in this scenario been discussed. In this paper, we extend the earlier work by introducing a concept for benchmarking segmentation accuracy for 3D handwriting traces. Furthermore, we present results for the initial approach as well as for a new adaptive local threshold segmentation. The benchmarking is based on ground truth data derived from handwriting traces acquired with a high-quality flatbed scanner and segmented by means of an Otsu operator. This ground truth allows for the calculation of true positive, true negative, false positive and false negative error rates as quality measures. The practical impact of the suggested benchmarking is shown by comparing experimental results of the initial segmentation approach and the new adaptive approach. Experiments are based on ten handwriting traces from each of eleven persons. The comparison of results indicates that the best parameter set of the adaptive thresholding leads to a quality increase of 12.1% in terms of precision for the writing trace and a decrease of 1.4% in terms of precision for the background.
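To illustrate the benchmarking idea described in the abstract, the following is a minimal sketch, not the authors' implementation: a global Otsu threshold on the flatbed scan serves as ground truth, an adaptive local (mean) threshold segments the 3D-derived intensity map, and per-class precision is computed from the confusion counts. The use of OpenCV, the file names, the block size/offset values and the foreground polarity are all assumptions for illustration only.

```python
# Minimal sketch of the described benchmarking scheme.
# Assumptions (not from the paper): 8-bit grayscale images, pixel-wise alignment,
# OpenCV as the image library; parameter values are illustrative only.
import cv2
import numpy as np

def otsu_ground_truth(scan_img):
    """Ground truth from the flatbed scan via a global Otsu threshold."""
    _, binary = cv2.threshold(scan_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary == 0  # True = writing trace (dark ink on bright paper; assumed polarity)

def adaptive_segmentation(intensity_img, block_size=31, offset=5):
    """Adaptive local (mean) thresholding of the 3D-derived intensity map."""
    binary = cv2.adaptiveThreshold(intensity_img, 255,
                                   cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, block_size, offset)
    return binary == 0  # True = writing trace (assumed polarity)

def precision_per_class(segmentation, ground_truth):
    """Precision for the writing-trace class and the background class."""
    tp = np.sum(segmentation & ground_truth)    # trace correctly detected
    fp = np.sum(segmentation & ~ground_truth)   # background labeled as trace
    tn = np.sum(~segmentation & ~ground_truth)  # background correctly detected
    fn = np.sum(~segmentation & ground_truth)   # trace labeled as background
    prec_trace = tp / (tp + fp) if (tp + fp) else 0.0
    prec_background = tn / (tn + fn) if (tn + fn) else 0.0
    return prec_trace, prec_background

if __name__ == "__main__":
    scan = cv2.imread("flatbed_scan.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
    depth = cv2.imread("3d_intensity_map.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    gt = otsu_ground_truth(scan)
    seg = adaptive_segmentation(depth)
    print(precision_per_class(seg, gt))
```

In such a setup, the block size and offset of the adaptive threshold would be the parameters varied to find the best-performing configuration reported in the comparison.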