A Driving Scenario Representation for Scalable Real-Data Analytics with Neural Networks
2019
As development of Automated Driving Systems (ADS) advances, new methods for validation and verification (V&V) are needed. A promising approach for V&V is scenario-based testing. Recorded real-world driving data constitutes a useful source for the extraction of scenarios, as it ensures a high level of realism. However, since recorded data is typically unlabeled, the benefit drawn from the large amounts of available data is limited; for instance, interpreting the data with respect to the driving scenarios it contains is challenging. Manual data inspection and rule-based approaches scale poorly to both big datasets and numerous different scenario types. Hence, there is a need for automated data analysis tools, e.g. for labeling on a semantic level. Many current approaches attempt this with neural networks, which creates the need for a consistent, valid, and machine-readable representation of a driving scenario. In this paper, a scenario is represented as a top-view grid comprising the dynamic objects and the static environment, thereby allowing a consistent interpretation of all relevant aspects of a driving scenario. Since temporal scenario aspects have to be covered as well, a neural network architecture for the extraction of both spatial and temporal features is described. Using the proposed feature extractor, the recorded driving data is transformed into a reduced abstract feature space. With the autoencoder, a method for efficient training of the feature extractor is described. The combined approach enables the development of scalable and efficient methods for analyzing large quantities of real driving data, e.g. automated labeling of big datasets or scanning for rarely occurring corner cases.
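The abstract describes the approach only conceptually, so the following is a minimal, hypothetical sketch of how a spatio-temporal autoencoder over top-view grid sequences could be set up: a convolutional encoder compresses each grid frame, an LSTM aggregates the per-frame features over time, and a decoder reconstructs the grids so the feature extractor can be trained without labels. It assumes PyTorch; the grid size (64x64), channel layout, and all layer dimensions are illustrative placeholders and are not taken from the paper.

```python
# Hypothetical sketch of a spatio-temporal scenario autoencoder over
# top-view grid sequences. All hyperparameters are illustrative, not
# the authors' architecture.
import torch
import torch.nn as nn


class GridFrameEncoder(nn.Module):
    """Encodes one top-view grid frame (C x H x W) into a feature vector."""
    def __init__(self, in_channels: int = 3, latent_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))


class ScenarioAutoencoder(nn.Module):
    """CNN encoder + LSTM for temporal aggregation + CNN decoder."""
    def __init__(self, in_channels: int = 3, grid_size: int = 64,
                 latent_dim: int = 128):
        super().__init__()
        self.frame_encoder = GridFrameEncoder(in_channels, latent_dim)
        self.temporal = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.decoder_fc = nn.Linear(latent_dim, 64 * (grid_size // 8) ** 2)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )
        self.grid_size = grid_size

    def forward(self, grids: torch.Tensor):
        # grids: (B, T, C, H, W) -- one recorded scenario per batch entry,
        # rasterized into a sequence of top-view grids.
        b, t, c, h, w = grids.shape
        frame_feats = self.frame_encoder(grids.reshape(b * t, c, h, w))
        temporal_feats, _ = self.temporal(frame_feats.reshape(b, t, -1))
        dec_in = self.decoder_fc(temporal_feats.reshape(b * t, -1))
        dec_in = dec_in.reshape(b * t, 64, self.grid_size // 8,
                                self.grid_size // 8)
        recon = self.decoder(dec_in).reshape(b, t, c, h, w)
        return temporal_feats, recon


if __name__ == "__main__":
    model = ScenarioAutoencoder()
    dummy = torch.rand(2, 10, 3, 64, 64)  # 2 scenarios, 10 frames each
    features, reconstruction = model(dummy)
    # Reconstruction loss provides an unsupervised training signal,
    # so the feature extractor can be trained on unlabeled recordings.
    loss = nn.functional.mse_loss(reconstruction, dummy)
    print(features.shape, loss.item())
```

In such a setup, the LSTM outputs would serve as the reduced abstract feature space into which recorded scenarios are mapped, and downstream analytics (e.g. clustering, labeling, corner-case search) would operate on those features rather than on the raw grids.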