Temporal RIT-Eyes: From real infrared eye-images to synthetic sequences of gaze behavior
2022
Current methods for segmenting eye imagery into skin, sclera, pupil, and iris cannot leverage information about eye motion, because the datasets on which models are trained are limited to temporally non-contiguous frames. We present Temporal RIT-Eyes, a Blender pipeline that draws data from real eye videos to render synthetic imagery depicting natural gaze dynamics. These sequences are accompanied by ground-truth segmentation maps that may be used for training image-segmentation networks. Temporal RIT-Eyes relies on a novel method for extracting 3D eyelid pose (the top and bottom apex of the eyelid/eyeball boundary) from raw eye images, enabling the rendering of gaze-dependent eyelid pose and blink behavior. The pipeline is parameterized to vary in appearance, eye/head/camera/illuminant geometry, and environment settings (indoor/outdoor). We present two open-source datasets of synthetic eye imagery: sGiW is a set of synthetic-image sequences whose dynamics are modeled on those of the Gaze in Wild dataset, and sOpenEDS2 is a series of temporally non-contiguous eye images that approximate the OpenEDS-2019 dataset. We demonstrate the quality of the rendered datasets qualitatively and show significant overlap between latent-space representations of the source and rendered datasets.
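To make the eyelid-pose idea concrete, the snippet below is a minimal, hypothetical sketch (not the paper's actual method) of locating the top and bottom apex points of the eyelid/eyeball boundary from a binary segmentation mask of the exposed eye region, using only NumPy. The function name `eyelid_apices` and the mask convention are assumptions for illustration.

```python
import numpy as np

def eyelid_apices(eye_mask: np.ndarray):
    """Return (top_apex, bot_apex) as (row, col) tuples for a binary
    H x W mask of the exposed eyeball region.

    Hypothetical illustration: the uppermost mask pixel approximates
    the upper-eyelid apex and the lowermost pixel the lower-eyelid
    apex; a full pipeline would fit the lid contours and lift these
    2D points to 3D using the camera/eye geometry.
    """
    rows, cols = np.nonzero(eye_mask)
    if rows.size == 0:          # fully closed eye (blink): no apices
        return None, None
    top_i = np.argmin(rows)     # uppermost mask pixel
    bot_i = np.argmax(rows)     # lowermost mask pixel
    top = (int(rows[top_i]), int(cols[top_i]))
    bot = (int(rows[bot_i]), int(cols[bot_i]))
    return top, bot

# Usage: a synthetic circular "open eye" mask centered at (50, 60)
rr, cc = np.mgrid[0:100, 0:120]
mask = (rr - 50) ** 2 + (cc - 60) ** 2 <= 10 ** 2
top, bot = eyelid_apices(mask)  # → (40, 60), (60, 60)
```

Tracking how these apex points move as a function of gaze angle across video frames is what allows a renderer to reproduce gaze-dependent eyelid pose and blink dynamics.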