Kernel-based learning of cast shadows from a physical model of light sources and surfaces for low-level segmentation

2008 
In background subtraction, cast shadows induce silhouette distortions and object fusions that hinder the performance of high-level algorithms in scene monitoring. We introduce a nonparametric framework to model the behavior of surfaces when shadows are cast on them. Based on the physical properties of light sources and surfaces, we identify a direction in RGB space along which background surface values under cast shadows are found. We then model the posterior distribution of lighting attenuation under cast shadows and foreground objects, which allows differentiation between foreground and cast-shadow values with similar chromaticity. The algorithms are completely unsupervised and take advantage of scene activity to learn the model parameters. Spatial gradient information is also used to reinforce the learning process. The contributions are twofold. First, with a better model describing cast shadows on surfaces, we achieve a higher success rate in segmenting moving cast shadows in complex scenes. Second, obtaining such models is a step toward a full scene parametrization in which light source properties, surface reflectance models, and scene 3D geometry are estimated for low-level segmentation.
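To make the two steps described above concrete, the following is a minimal Python sketch, not the paper's implementation: it projects an observed pixel onto the background color direction in RGB space to obtain an attenuation ratio and a chromatic distortion, then uses Gaussian kernel density estimates over attenuation samples to form a shadow-versus-foreground posterior. The function names, sample values, and priors are illustrative assumptions; the paper's nonparametric model and its learning from scene activity are more elaborate.

```python
# Minimal sketch (illustrative, not the authors' code) of the shadow cue the
# abstract describes: values under cast shadow lie along a direction in RGB
# space anchored at the background color, and the distribution of the
# attenuation along that direction is learned nonparametrically (here with a
# Gaussian kernel density estimate).
import numpy as np
from scipy.stats import gaussian_kde

def attenuation_and_distortion(pixel, bg_mean):
    """Project an observed RGB value onto the background color direction.

    Returns the attenuation ratio (1.0 = unshadowed background, < 1.0 =
    darker, as under a cast shadow) and the chromatic distortion, i.e. the
    residual distance from the attenuation line.
    """
    bg_mean = np.asarray(bg_mean, dtype=float)
    pixel = np.asarray(pixel, dtype=float)
    a = pixel @ bg_mean / (bg_mean @ bg_mean)          # scalar projection
    distortion = np.linalg.norm(pixel - a * bg_mean)   # off-line residual
    return a, distortion

# Hypothetical attenuation samples gathered from scene activity (moving
# regions later confirmed as shadow vs. foreground) drive two KDEs; their
# density ratio plays the role of the posterior used for classification.
shadow_samples = np.array([0.55, 0.60, 0.58, 0.62, 0.57, 0.64, 0.59])
foreground_samples = np.array([0.20, 0.95, 1.30, 0.45, 1.10, 0.85, 0.30])

kde_shadow = gaussian_kde(shadow_samples)
kde_foreground = gaussian_kde(foreground_samples)

def shadow_posterior(a, prior_shadow=0.5):
    """Posterior probability that attenuation `a` comes from a cast shadow."""
    ps = kde_shadow(a)[0] * prior_shadow
    pf = kde_foreground(a)[0] * (1.0 - prior_shadow)
    return ps / (ps + pf + 1e-12)

if __name__ == "__main__":
    bg = [120, 110, 100]   # background mean color for one pixel
    obs = [72, 66, 60]     # darker observation with similar chromaticity
    a, d = attenuation_and_distortion(obs, bg)
    print(f"attenuation={a:.2f}, distortion={d:.2f}, "
          f"P(shadow)={shadow_posterior(a):.2f}")
```

The key design point the sketch mirrors is that a cast shadow changes brightness far more than chromaticity, so the decision rests on the learned distribution of the attenuation ratio rather than on a fixed threshold; in the paper this distribution is learned in an unsupervised way from scene activity.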