An automatic feature construction method for salient object detection: A genetic programming approach

2021 
Abstract Over the last two decades, salient object detection (SOD) has received increasing attention due to its ability to handle complex natural scenes and its wide range of real-world applications. The performance of an SOD method relies mainly on saliency features extracted at different levels of information. Low-level saliency features are often effective in simple scenarios, but they are not always robust in challenging ones. With the recent prevalence of high-level saliency features, such as those produced by deep convolutional neural networks (CNNs), remarkable progress has been made in the SOD field. However, CNN-constructed high-level features unavoidably lose the location information and low-level fine details (e.g., edges and corners) of salient objects, leading to unclear or blurry boundary predictions. In addition, deep CNN methods have difficulty generalizing and accurately detecting salient objects when trained on a limited number of images (e.g., small datasets). This paper proposes a new automatic feature construction method that uses Genetic Programming (GP) to construct informative high-level saliency features for SOD. The proposed method takes low-level, hand-crafted saliency features as input to construct high-level features. The constructed GP-based high-level features not only detect objects in general but also capture details and edges/boundaries well, and they are more interpretable than CNN-based features. The proposed GP-based method can potentially cope with a small number of training samples and still generalize well, as long as the training data carries enough information to represent the underlying distribution. Experiments on six datasets show that the new method achieves consistently high performance compared with twelve state-of-the-art SOD methods.
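To make the core idea concrete, the following is a minimal, hypothetical sketch of GP-style feature construction for saliency: a GP individual is an expression tree whose leaves are low-level saliency feature maps and whose internal nodes are elementary operators, and evaluating the tree yields a constructed high-level saliency map. The operator set, feature names, and tree-growing scheme here are illustrative assumptions, not the paper's actual function set or evolutionary setup (which would also include fitness evaluation, crossover, and mutation).

```python
import random
import numpy as np

# Assumed elementary operator set for internal tree nodes (illustrative only).
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "max": np.maximum,
}

def random_tree(feature_names, depth, rng):
    """Grow a random expression tree over named low-level feature maps."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(list(feature_names))  # leaf: a feature-map name
    op = rng.choice(list(OPS))
    return (op,
            random_tree(feature_names, depth - 1, rng),
            random_tree(feature_names, depth - 1, rng))

def evaluate(tree, features):
    """Evaluate a tree to a saliency map, rescaling each result to [0, 1]."""
    if isinstance(tree, str):
        return features[tree]  # leaf: look up the raw feature map
    op, left, right = tree
    out = OPS[op](evaluate(left, features), evaluate(right, features))
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo) if hi > lo else np.zeros_like(out)

# Toy low-level feature maps (e.g., color contrast, edge density,
# center prior) standing in for real hand-crafted saliency features.
feats = {name: np.random.default_rng(i).random((8, 8))
         for i, name in enumerate(["contrast", "edges", "center_prior"])}

rng = random.Random(0)
tree = random_tree(feats.keys(), depth=3, rng=rng)
saliency = evaluate(tree, feats)  # constructed high-level saliency map
print(saliency.shape)  # (8, 8)
```

In a full GP system, many such trees would be evolved, with a fitness function scoring each constructed map against ground-truth salient-object masks; the sketch above only shows the representation and evaluation step.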