Dashcam based wildlife detection and classification using fused data sets of digital photographic and simulated imagery

2020 
In this paper, data from a simulated data set were fused with a significantly smaller measured data set to train a deep neural network to identify animals in dashcam footage. The trained networks were used to detect wildlife in an environment similar to game reserves in South Africa. To enable automatic data collection for the experiment, a simulated environment was created to model four classes of wildlife found in South Africa: buffalo, elephant, rhino and zebra. The detector network structure selected was an adapted version of the tiny YOLOv3 network. Using transfer learning and fine-tuning produced two models with higher accuracy (82.59% and 86.64% mAP@0.5, respectively) than models trained without transfer learning. These results were obtained on a test set of digital photographic images. The networks initialised with transfer learning were also faster and easier to train than networks trained from scratch on a combined data set of photographic and simulated images. The simulated environment cannot, however, replace real-life data: the model trained using only simulated data achieved an accuracy no better than chance.
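
The sketch below illustrates, in a hedged and simplified form, the two ideas the abstract describes: fusing a large simulated data set with a much smaller photographic one, and transfer learning followed by fine-tuning (freezing pretrained layers first, then unfreezing them at a lower learning rate). It is not the authors' code; the stand-in backbone, dataset tensors, class list, and hyperparameters are illustrative assumptions, and the paper itself uses an adapted tiny YOLOv3 detector rather than the toy classifier shown here.

```python
# Minimal sketch (assumptions noted above), using plain PyTorch.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

CLASSES = ["buffalo", "elephant", "rhino", "zebra"]  # the four classes from the paper

# Stand-in datasets: a large simulated set and a smaller photographic set.
# A real detection pipeline would yield (image, boxes, labels) instead.
simulated = TensorDataset(torch.randn(512, 3, 416, 416), torch.randint(0, 4, (512,)))
photographic = TensorDataset(torch.randn(64, 3, 416, 416), torch.randint(0, 4, (64,)))
fused = ConcatDataset([simulated, photographic])  # the fused training set
loader = DataLoader(fused, batch_size=8, shuffle=True)

# A tiny convolutional backbone standing in for the pretrained tiny-YOLOv3 body.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, len(CLASSES))  # task-specific head for the four classes

# Transfer learning: freeze the (pretend-pretrained) backbone, train only the head.
for p in backbone.parameters():
    p.requires_grad = False
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for images, labels in loader:  # one illustrative pass over the fused data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Fine-tuning: unfreeze the backbone and continue training at a lower learning rate.
for p in backbone.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```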