Deep Features Representation for Automatic Targeting System of Gun Turret

2018 
Visual sensing has attracted considerable research across many applications, particularly for pointing and target-tracking platforms such as gun turrets. Existing works that use only visual information mostly rely on a number of handcrafted features, which can yield suboptimal parameters and usually require complex kinematic and dynamic models. An attempt using deep learning, fine-tuning only the last layer, has shown promising results for auto-targeting gun turrets. However, target localization can be improved further by involving not only the last-layer features but also the features of the first and second convolutional layers. In this paper, an auto-targeting gun turret system using a deep network is developed. The first, second, and last layer features are combined to produce a response map, with auxiliary layers developed to extract the first- and second-layer features. The first and second convolutional layers support precise localization, while the last-layer features capture target semantics. From the response map, a bounding box is formed using standard non-maximum suppression, which then actuates the pan-tilt motors via a PID algorithm. Experiments show an encouraging accuracy of 80.35% for the improved auto-targeting gun turret system.
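The pipeline the abstract describes (fusing early- and last-layer response maps, locating the target, and driving pan-tilt motors with a PID controller) can be sketched minimally as below. The fusion weights, map sizes, and PID gains are illustrative assumptions, not values from the paper, and the synthetic peak stands in for a real detector response.

```python
import numpy as np

# Hypothetical sketch of the fusion-and-control loop from the abstract.
# Weights, map sizes, and gains are assumptions for illustration only.

def fuse_response_maps(early1, early2, last, weights=(0.25, 0.25, 0.5)):
    # Weighted sum of per-layer response maps, assumed resized to a
    # common grid: early conv layers sharpen localization, the last
    # layer supplies target semantics.
    w1, w2, w3 = weights
    return w1 * early1 + w2 * early2 + w3 * last

class PID:
    # Minimal PID controller for one pan or tilt axis.
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Synthetic demo: the last-layer map carries a single target peak.
h, w = 60, 80
early1 = np.zeros((h, w))
early2 = np.zeros((h, w))
last = np.zeros((h, w))
last[20, 50] = 1.0  # synthetic target location

fused = fuse_response_maps(early1, early2, last)
ty, tx = np.unravel_index(np.argmax(fused), fused.shape)

# Drive the target toward the image center; errors are pixel offsets.
pan = PID(kp=0.1, ki=0.0, kd=0.0)
tilt = PID(kp=0.1, ki=0.0, kd=0.0)
pan_cmd = pan.step(tx - w / 2, dt=0.04)
tilt_cmd = tilt.step(ty - h / 2, dt=0.04)
print((ty, tx), pan_cmd, tilt_cmd)
```

In the full system, the fused peak would first pass through non-maximum suppression to form a bounding box, and its center (rather than the raw argmax) would feed the pan and tilt error terms.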