    A skeleton extracting algorithm for dorsal hand vein pattern
    Abstract:
Extracting a vein skeleton with little distortion from the vein image is very important for improving the identification rate. In this paper, an algorithm for segmenting the dorsal hand vein image and extracting the vein skeleton is presented. First, after gray-level and size normalization, a Gaussian lowpass filter and a median filter are used to eliminate speck noise and horizontal strip scanning noise, respectively. Then, an improved Niblack algorithm segments the vein pattern, and an area thresholding algorithm removes noise blocks from it. Subsequently, opening, closing, and median filtering are used to smooth the vein boundary. After that, the vein pattern is thinned by Kejun Wang's improved conditional thinning. Lastly, a pruning algorithm is presented to trim spurs, and the vein pattern is skeletonized. Experiments show that the algorithm recovers a more faithful skeleton.
    Keywords:
    Skeletonization
    Gaussian filter
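A minimal sketch of the kind of pipeline this abstract describes, using scikit-image stand-ins: threshold_niblack approximates the "improved NiBlack" step, and skeletonize stands in for the improved conditional thinning; the window size, k, sigma, and structuring-element radii are assumed values, not the paper's.

```python
import numpy as np
from skimage import filters, morphology, util

def extract_vein_skeleton(gray):
    """gray: 2-D uint8 dorsal hand vein image, already size-normalized."""
    img = util.img_as_float(gray)
    img = filters.gaussian(img, sigma=1.5)           # suppress speck noise
    img = filters.median(img, morphology.disk(3))    # suppress strip-scanning noise
    # Local (Niblack) threshold: veins are darker than the surrounding skin.
    t = filters.threshold_niblack(img, window_size=25, k=0.1)
    veins = img < t
    # Area thresholding: drop small noise blocks.
    veins = morphology.remove_small_objects(veins, min_size=100)
    # Opening and closing to smooth the vein boundary.
    veins = morphology.binary_opening(veins, morphology.disk(2))
    veins = morphology.binary_closing(veins, morphology.disk(2))
    # Thinning to one-pixel width (stand-in for the paper's
    # improved conditional thinning and spur pruning).
    return morphology.skeletonize(veins)
```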
In the clinical diagnosis and treatment of liver disease, how to effectively represent and analyze the vascular structure has long been a widely studied topic. In this paper, we propose a method for generating a three-dimensional skeletal graph of liver vessels using a 3D thinning algorithm and graph theory. First, the principal skeletonization methods are introduced, followed by a comparative analysis. Second, the 3D thinning-based skeletonization method, together with a hole-filling pre-processing step on the liver vessel image, is employed to form the liver skeleton. A graph-based technique is then applied to the skeleton image to efficiently form the liver vascular graph. The thinning-based liver vessel skeletonization method was evaluated on liver vessel images against two other skeletonization approaches to show its effectiveness and efficiency.
    Skeletonization
    Thinning
    Vascular network
    Citations (7)
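A minimal sketch of the pipeline described above, assuming a pre-segmented binary volume: hole filling, 3D thinning, then a voxel-adjacency graph built with networkx. The 26-neighborhood adjacency rule is an assumption for illustration, not necessarily the paper's graph construction.

```python
import numpy as np
import networkx as nx
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_graph(binary_volume):
    """binary_volume: 3-D boolean array of segmented liver vessels."""
    filled = ndimage.binary_fill_holes(binary_volume)  # hole-filling pre-processing
    skel = skeletonize(filled)                         # 3-D thinning (Lee's method for 3-D input)
    # Connect every pair of 26-adjacent skeleton voxels into a graph.
    g = nx.Graph()
    voxels = set(map(tuple, np.argwhere(skel)))
    for v in voxels:
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dz, dy, dx) == (0, 0, 0):
                        continue
                    n = (v[0] + dz, v[1] + dy, v[2] + dx)
                    if n in voxels:
                        g.add_edge(v, n)
    return skel, g
```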
Skeletonization as a tool for quantitative analysis of three-dimensional (3D) images is becoming more important as such images become more common in a number of application fields, especially biomedical tomographic images at different scales. Here we propose a method that computes both surface and curve skeletons of 3D binary images. A distance transform algorithm is applied in two phases to reduce a 3D object first to a 2D surface skeleton and then to a 1D curve skeleton. In surface skeletonization, 6-connectivity is used for the distance transform, while in curve skeletonization, 18-connectivity is used. Some examples are discussed to illustrate the algorithm.
    Skeletonization
    Distance transform
    Medial axis
    Citations (1)
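An illustrative sketch of the distance-transform idea above. SciPy's chamfer transform offers 'taxicab' (6-connected in 3D) and 'chessboard' (26-connected) metrics; the 18-connected transform used in the paper's curve phase would need a custom chamfer mask, so this is only an approximation of the technique.

```python
import numpy as np
from scipy import ndimage

def ridge_voxels(volume, metric='taxicab'):
    """Keep voxels that are local maxima of the distance transform;
    these approximate the medial (skeletal) set of a binary object."""
    dist = ndimage.distance_transform_cdt(volume, metric=metric)
    # A voxel lies on the ridge if no neighbor has a strictly larger distance.
    local_max = ndimage.maximum_filter(dist, size=3)
    return volume & (dist == local_max)
```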
Skeletonization is a crucial step in many digital image processing applications such as medical imaging, pattern recognition, and fingerprint classification. The skeleton expresses the structural connectivity of the main component of an object and is one pixel in width. The present paper covers the pixel deletion criteria that skeletonization algorithms need in order to preserve the connectivity, topology, and sensitivity of binary images. The performance of different skeletonization algorithms can be measured in terms of parameters such as thinning rate, number of connected components, and execution time. The paper evaluates thinning rate, number of connected components, and execution time for the Zhang-Suen and Guo-Hall algorithms.
    Skeletonization
    Connected component
    Connected-component labeling
    Medial axis
    Citations (2)
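A small harness in the spirit of the comparison above: scikit-image's skeletonize implements Zhang-Suen-style thinning for 2D input, and thin implements Guo and Hall's two-subiteration algorithm. "Thinning rate" is taken here as the fraction of object pixels removed, which is an assumption; the paper may define it differently.

```python
import time
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize, thin

def compare(binary):
    """binary: 2-D boolean image. Returns per-algorithm metrics."""
    results = {}
    for name, fn in [('Zhang-Suen', skeletonize), ('Guo-Hall', thin)]:
        t0 = time.perf_counter()
        skel = fn(binary)
        elapsed = time.perf_counter() - t0
        _, n_components = ndimage.label(skel)   # connected components
        results[name] = {
            'thinning_rate': 1.0 - skel.sum() / binary.sum(),
            'connected_components': n_components,
            'time_s': elapsed,
        }
    return results
```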
Morphological filtering is known for its flexibility in locally modifying geometric features of three-dimensional data, or image functions. This paper addresses adaptive thresholding of multilevel image functions to extract application-specific features from grayscale images. By adaptive thresholding, we mean that the process of binarizing grayscale images is locally adjusted. The geometric features to be extracted are specified by the application requirements; for example, a binary version of a photo may be needed to extract the letters of a car license plate, so that the binarized image represents the letter information around the plate while ignoring other background information. A contour function is used as the adaptive thresholding layer for the grayscale image. After the first thresholding, a binarized version is obtained, and local geometric parameters of the binary image are measured through a skeletonization process. The parameters from skeletonization are compared with the feature descriptions, and the contour function is redefined and used to adaptively re-threshold the grayscale image. A skeletonization process is then applied to the binarized image to extract local geometric parameters that meet the application-specific requirement. Applications of the developed adaptive thresholding algorithm include text image binarization, binarization of object features against a surrounding background, and glass flaw detection.
    Skeletonization
    Balanced histogram thresholding
    Citations (2)
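A much-simplified sketch of the feedback loop described above: a local-mean surface serves as the "contour function" threshold layer, and a skeleton-derived measurement (here, connected-component count against a target) is used to adjust it. The update rule and window size are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def adaptive_binarize(gray, target_components, window=31, iters=5):
    """gray: 2-D grayscale image with dark features on a light background."""
    gray = gray.astype(float)
    contour = ndimage.uniform_filter(gray, size=window)  # adaptive threshold layer
    offset = 0.0
    for _ in range(iters):
        binary = gray < contour + offset
        skel = skeletonize(binary)
        _, n = ndimage.label(skel)        # geometric parameter from the skeleton
        if n == target_components:
            break
        # Too many components (noise): tighten the threshold; too few: loosen it.
        offset += -2.0 if n > target_components else 2.0
    return binary
```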
Digital images from scanned documents are commonly used both for data backup and for further processing. However, the digital image obtained is often not optimal due to factors such as noise. One way to improve the quality of digital images is to filter them using a thresholding method. This study compares three thresholding methods: Simple Thresholding, Adaptive-Gaussian Thresholding, and Otsu Binarization. All three methods have advantages and disadvantages. Using the MSE and PSNR assessment parameters, the Simple Thresholding method shows better quality with an MSE value of 5,196.76, followed by Otsu Binarization with a value of 5,934.10, and Adaptive-Gaussian Thresholding with a value of 9,025.29. By PSNR, the value for Simple Thresholding is 13.37, followed by Otsu Binarization at 12.47 and Adaptive-Gaussian Thresholding at 10.31.
    Balanced histogram thresholding
    Otsu's method
    Gaussian filter
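The three methods compared above map directly onto OpenCV calls, as in the outline below; the fixed threshold (127) and the adaptive block size and offset are assumed values, and 'scan.png' is a hypothetical input.

```python
import cv2
import numpy as np

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b):
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(255.0 ** 2 / m)

gray = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input image

_, simple = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
gauss = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 11, 2)
_, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

for name, img in [('Simple', simple), ('Adaptive-Gaussian', gauss),
                  ('Otsu', otsu)]:
    print(name, 'MSE:', mse(gray, img), 'PSNR:', psnr(gray, img))
```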
Shape description is an important step in image analysis. Skeletonization methods are widely used in image analysis, since they are a powerful tool for describing a shape. This paper presents a new single-scan skeletonization method using different discrete distances. The method is applied to the extraction of characteristics from µCT images in order to estimate bone state.
    Skeletonization
    Citations (0)
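The key idea above, that the choice of discrete distance changes the resulting skeleton, can be seen quickly with the sketch below (this is not the paper's single-scan algorithm): ridge pixels of the chamfer distance transform are extracted under two different metrics.

```python
import numpy as np
from scipy import ndimage

def distance_ridge(binary, metric):
    """Ridge pixels of the chamfer distance transform of a 2-D binary shape."""
    dist = ndimage.distance_transform_cdt(binary, metric=metric)
    return binary & (dist == ndimage.maximum_filter(dist, size=3))

# 'taxicab' (4-connected) and 'chessboard' (8-connected) distances give
# noticeably different ridge sets on the same shape.
```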
Skeletonization is a widely used tool for running simplified simulations of water distribution systems without losing accuracy in model results. The level of skeletonization depends strongly on the model purpose and is rarely investigated on a general level. The aim of this paper is to highlight the influence of different network properties on the level of skeletonization, and to investigate a general relationship between network properties and level of skeletonization while taking different model purposes into account. To this end, 300 virtual water distribution systems with varying network properties were generated and then skeletonized to different levels, allowing a generic analysis. Simulation results from the skeletonized models were compared to those of the original models using the Nash-Sutcliffe coefficient and a percentage criterion. Results indicate that, for example, network size influences the accuracy of skeletonized models with respect to water quality.
    Skeletonization
    Citations (8)
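The comparison metric named above, the Nash-Sutcliffe coefficient, is straightforward to state: one minus the ratio of the model's error variance to the variance of the reference series. A minimal sketch:

```python
import numpy as np

def nash_sutcliffe(reference, skeletonized):
    """NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2).
    A value of 1.0 means the skeletonized model reproduces the
    original model's time series exactly."""
    obs = np.asarray(reference, dtype=float)
    sim = np.asarray(skeletonized, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```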
The problem of 2D skeletonization is discussed in this paper. Skeletonization methods fall into different categories: some are based on symmetric axis analysis, and others on thinning or shape decomposition. The basic ideas and developments behind these skeletonization methods are investigated. The main goal of this article is to provide a systematic and clear reference for researchers in pattern recognition, visualization, medical image processing, and related fields.
    Skeletonization
    Citations (2)
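Two of the method families the survey names can be contrasted directly in scikit-image, as a rough illustration: medial_axis follows the symmetric-axis (medial axis) formulation, while skeletonize is a thinning method.

```python
from skimage.morphology import medial_axis, skeletonize

def both_skeletons(binary):
    """Return the medial-axis skeleton (with its distance map) and a
    thinning-based skeleton of the same 2-D binary shape for comparison."""
    axis, dist = medial_axis(binary, return_distance=True)
    thinned = skeletonize(binary)
    return axis, dist, thinned
```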