MAAD: A Model and Dataset for "Attended Awareness" in Driving
2021
We propose a computational model to estimate a person's attended awareness of
their environment. We define attended awareness to be those parts of a
potentially dynamic scene which a person has attended to in recent history and
which they are still likely to be physically aware of. Our model takes as input
scene information in the form of a video and noisy gaze estimates, and outputs
visual saliency, a refined gaze estimate, and an estimate of the person's
attended awareness. To test our model, we capture a new dataset with a
high-precision gaze tracker, comprising 24.5 hours of gaze sequences from 23
subjects attending to videos of driving scenes. The dataset also contains
third-party annotations of the subjects' attended awareness based on
observations of their scan paths. Our results show that our model can
reasonably estimate attended awareness in a controlled setting; in the future
it could be extended to real egocentric driving data to enable more effective
ahead-of-time warnings in safety systems and thereby augment driver
performance. We also demonstrate our model's effectiveness on the tasks of
saliency prediction, gaze calibration, and gaze denoising, using both our
dataset and an existing saliency dataset. We make our model and dataset
available at
https://github.com/ToyotaResearchInstitute/att-aware/.
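To make the model's interface concrete, the following is a minimal PyTorch-style sketch of the input/output structure the abstract describes: a video clip and noisy gaze estimates go in; a saliency map, a refined gaze estimate, and an attended-awareness map come out. This is not the authors' architecture or the API of the linked repository; the module names, tensor shapes, and toy encoder below are hypothetical placeholders.

```python
# Hypothetical sketch of the described interface, NOT the paper's model.
import torch
import torch.nn as nn

class AttendedAwarenessModel(nn.Module):
    """Maps a video clip plus noisy gaze to saliency, refined gaze,
    and attended-awareness estimates (placeholder architecture)."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # Toy spatiotemporal encoder standing in for the real backbone.
        self.encoder = nn.Conv3d(3, hidden_dim, kernel_size=3, padding=1)
        self.saliency_head = nn.Conv3d(hidden_dim, 1, kernel_size=1)
        self.awareness_head = nn.Conv3d(hidden_dim, 1, kernel_size=1)
        self.gaze_head = nn.Linear(hidden_dim, 2)

    def forward(self, video: torch.Tensor, noisy_gaze: torch.Tensor):
        # video:      (B, 3, T, H, W) clip of the driving scene
        # noisy_gaze: (B, T, 2) gaze coordinates from an imperfect tracker
        feats = torch.relu(self.encoder(video))                 # (B, C, T, H, W)
        saliency = torch.sigmoid(self.saliency_head(feats))     # per-pixel saliency
        awareness = torch.sigmoid(self.awareness_head(feats))   # per-pixel awareness
        # Refine gaze by adding a learned correction to the noisy estimate.
        pooled = feats.mean(dim=(3, 4)).permute(0, 2, 1)        # (B, T, C)
        refined_gaze = noisy_gaze + self.gaze_head(pooled)      # (B, T, 2)
        return saliency, refined_gaze, awareness

# Example usage with a toy 8-frame clip.
model = AttendedAwarenessModel()
video = torch.randn(1, 3, 8, 64, 64)   # (batch, channels, frames, H, W)
noisy_gaze = torch.rand(1, 8, 2)       # normalized (x, y) per frame
saliency, refined_gaze, awareness = model(video, noisy_gaze)
```

The three heads share one encoder in this sketch only to show that saliency, gaze refinement, and awareness are produced jointly from the same scene representation; see the repository above for the actual implementation.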