Hand Segmentation In Complex Background Using UNet

2020 
Hand tracking and segmentation are essential steps in any hand gesture recognition system. Hand segmentation based on skin-color models performs poorly against complex backgrounds, because background regions with skin-like colors and non-uniform illumination are common. Recent advances in deep learning have opened new possibilities for object recognition and segmentation. This paper compares several deep learning models for hand segmentation in complex backgrounds with UNet. UNet was first applied to biomedical images; here it is applied to hand segmentation for the first time. UNet is chosen for segmentation because it can localize and distinguish borders by classifying every pixel, so the input and output share the same spatial size. The networks were trained on two datasets with complex backgrounds, EgoHands and GTEA. The models accept hand images as input and produce a segmentation map as output. The results for all models are presented and compared to determine a suitable model for hand segmentation. UNet obtained better overall performance than the other networks, achieving an accuracy of 98% on the EgoHands dataset and 90% on the GTEA dataset.
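To illustrate the property the abstract highlights, that UNet classifies every pixel and returns an output of the same size as the input, the following is a minimal sketch of a UNet-style encoder-decoder for binary hand segmentation. It assumes PyTorch; the depth, layer widths, and input resolution are illustrative choices, not the exact configuration used in the paper.

```python
# Minimal UNet-style sketch for binary hand segmentation (assumed PyTorch;
# sizes are illustrative, not the paper's exact architecture).
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; padding keeps the spatial size unchanged.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = double_conv(256, 128)   # 128 upsampled + 128 from skip
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = double_conv(128, 64)    # 64 upsampled + 64 from skip
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        # Contracting path: capture context at progressively lower resolution.
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Expanding path: recover resolution; skip connections restore fine
        # boundary detail so every pixel can be classified.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # same H x W as the input image

# Per-pixel prediction: logits -> binary hand / background mask.
model = MiniUNet()
frame = torch.randn(1, 3, 256, 256)          # e.g. one hand image
mask = torch.sigmoid(model(frame)) > 0.5     # (1, 1, 256, 256) boolean map
```

The skip connections concatenate encoder features onto the decoder at each resolution, which is what lets the network both localize borders and classify each pixel while keeping the output the same size as the input.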