A Comparison of Traffic Signs Detection Methods in 2D and 3D Images for the Benefit of the Navigation of Autonomous Vehicles

2018 
This paper compares our computer vision system, which fuses 2D and 3D data, with vision systems that use purely 2D data. We introduce an obstacle recognition system inspired by human visual attention that uses depth information to eliminate false positives and false negatives. To give the system greater robustness in data analysis, we apply a new 3D feature extraction method, called 3D-Contour Sample Distances, which is invariant to scale, translation and rotation. The system must be able to classify several different traffic signs (e.g. maximum speed allowed, stop, slow down, turn ahead, pedestrian), thus helping navigation comply with local traffic rules. The results obtained are promising: we achieve 98.3% test accuracy on a well-known traffic sign benchmark dataset (the INI German Traffic Sign Benchmark). In the detection and recognition of traffic signs, our 2D and 3D data fusion showed better results and greater robustness than systems working only with 2D data. Our system makes it possible to reduce or eliminate the false positives and false negatives that are a major problem for autonomous vehicle vision systems.
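The abstract does not give the details of the 3D-Contour Sample Distances descriptor, but the stated invariances can be illustrated with a minimal sketch: sampling points along a 3D contour, measuring their distances from the contour centroid, and normalizing by the maximum distance. This is an assumption-based illustration, not the authors' actual method; the function name and sampling scheme are hypothetical.

```python
import numpy as np

def contour_sample_distances(points, n_samples=32):
    """Illustrative contour descriptor for a 3D contour.

    Subtracting the centroid removes translation, dividing by the
    maximum distance removes scale, and Euclidean distances are
    unchanged by rotation, so the descriptor is invariant to all
    three transformations (a hypothetical sketch, not the paper's
    exact 3D-Contour Sample Distances algorithm).
    """
    points = np.asarray(points, dtype=float)
    # Pick n_samples evenly spaced points along the contour.
    idx = np.linspace(0, len(points) - 1, n_samples).astype(int)
    sampled = points[idx]
    centroid = sampled.mean(axis=0)
    # Distance of each sampled point from the centroid.
    d = np.linalg.norm(sampled - centroid, axis=1)
    return d / d.max()
```

Because the descriptor depends only on centroid-relative distances, the same contour rotated, translated, or uniformly scaled yields an identical feature vector, which is the property the paper exploits to match sign contours robustly.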