Automatic parsing of lane and road boundaries in challenging traffic scenes

2015 
Automatic detection of road boundaries in traffic surveillance imagery can greatly aid subsequent traffic analysis tasks, such as measuring vehicle flow and detecting erratic driving or stranded vehicles. This paper develops an online technique for identifying the dominant road boundary in video sequences captured by traffic cameras under challenging environmental and lighting conditions, e.g., unlit highways recorded at night. The proposed method runs in real time at up to 20 frames/s and generates a ranked list of road regions that identify road and lane boundaries. Our method begins by segmenting each frame into a set of superpixels. An adaptive sampling step then approximates superpixel contours with a collection of edge segments. Next, we show how online hierarchical clustering can be used efficiently to organize these edges into clusters of collinear segments. Promising clusters are then paired to form candidate cluster pairs. We present and prove a statistical ranking measure that is used along with road-activity and perspective cues to find the dominant road boundaries. We evaluate the proposed approach on two real-world datasets covering camera viewpoint changes and extreme environmental and lighting conditions. Results show that our method outperforms two state-of-the-art techniques in precision, recall, and runtime.
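
The sketch below illustrates, under stated assumptions, the front end of the pipeline described above: segmenting a frame into superpixels, approximating superpixel contours with straight edge segments, and grouping segments whose supporting lines are roughly collinear. It uses off-the-shelf components (scikit-image SLIC, OpenCV polygon approximation, scikit-learn agglomerative clustering) as stand-ins for the paper's adaptive sampling and online hierarchical clustering; all thresholds, feature choices, and function names here are illustrative assumptions rather than the authors' actual parameters.

import cv2
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import AgglomerativeClustering

def edge_segments_from_superpixels(frame, n_segments=400, eps=3.0, min_len=10.0):
    """Segment a frame into superpixels and approximate each superpixel
    contour by straight edge segments (stand-in for the adaptive sampling step)."""
    labels = slic(frame, n_segments=n_segments, start_label=0)
    segments = []
    for label in np.unique(labels):
        mask = (labels == label).astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            # Polygonal approximation; eps controls how coarsely the
            # superpixel contour is reduced to edge segments.
            poly = cv2.approxPolyDP(contour, eps, True).reshape(-1, 2)
            for p, q in zip(poly, np.roll(poly, -1, axis=0)):
                if np.linalg.norm(q - p) > min_len:  # drop tiny edges
                    segments.append((p.astype(float), q.astype(float)))
    return segments

def cluster_collinear(segments, angle_weight=50.0, distance_threshold=40.0):
    """Group edge segments whose supporting lines are approximately collinear,
    a simple offline stand-in for the paper's online hierarchical clustering."""
    feats = []
    for p, q in segments:
        d = q - p
        theta = np.arctan2(d[1], d[0]) % np.pi          # undirected line angle
        mid = (p + q) / 2.0
        normal = np.array([-np.sin(theta), np.cos(theta)])
        rho = float(normal @ mid)                       # signed offset of the line
        feats.append([angle_weight * theta, rho])
    clustering = AgglomerativeClustering(n_clusters=None,
                                         distance_threshold=distance_threshold)
    return clustering.fit_predict(np.asarray(feats))

The resulting clusters would then be paired and scored; the paper's statistical ranking measure, road-activity cues, and perspective cues (not reproduced here) select the dominant road boundaries from those candidate pairs.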