Humans use saccades to sample information from the world with foveal vision by fixating objects and areas of interest. The world, however, is not static, so representations of objects must be updated over time as changes occur. Foveal vision has higher acuity and reliability than peripheral vision, which is also more susceptible to phenomena such as change blindness. Given this inequality, how much does peripheral vision contribute to updating object representations across sequences of saccades? Is visual awareness based on potentially outdated information from the time of object fixation, or is it updated with more recent, but less reliable, peripheral information? This study tested whether the representation of a rotating object was updated based on peripheral information or based purely on the foveal view of the object, and whether the predictability of object rotation affected updating. We presented participants with four real-world objects at random orientations drawn from 360° of possible viewpoints. Participants were instructed to fixate each object in a set order, for a fixed duration. With each saccade, each object rotated either in a consecutive manner or to a random viewpoint. Participants were then asked to make a perceptual report by rotating a randomly selected object to match the viewpoint they remembered. We correlated perceptual reports with each of the shown orientations to determine the contributions of peripheral and foveal orientations. Results showed that when objects rotated to random, non-consecutive viewpoints, participants reported the foveally-viewed orientation; however, when objects rotated in a continuous manner, participants were more likely to report more recent, peripherally-viewed orientations, depending on object eccentricity. This suggests that peripheral information is used to update perceptual representations when peripherally-viewed changes are consistent with a systematic change in the world. Peripheral information may be processed, but filtered, and only accessed under specific circumstances.
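As an illustrative sketch only (the abstract does not specify the statistic used, so the choice of a circular correlation coefficient and all values below are assumptions), correlating angular reports with shown orientations could look like this in Python:

import numpy as np

def circular_correlation(a_deg, b_deg):
    # Jammalamadaka-SenGupta circular correlation between two sets of
    # angles in degrees (e.g. reported vs. shown object orientations).
    a = np.deg2rad(a_deg)
    b = np.deg2rad(b_deg)
    # Sine deviations from each sample's circular mean direction.
    da = np.sin(a - np.angle(np.mean(np.exp(1j * a))))
    db = np.sin(b - np.angle(np.mean(np.exp(1j * b))))
    return np.sum(da * db) / np.sqrt(np.sum(da ** 2) * np.sum(db ** 2))

# Hypothetical data: compare reports against the foveally-viewed and the
# most recent peripherally-viewed orientation of each object.
reports    = np.array([10.0, 95.0, 182.0, 270.0])
foveal     = np.array([12.0, 90.0, 185.0, 265.0])
peripheral = np.array([40.0, 120.0, 210.0, 300.0])
print(circular_correlation(reports, foveal))
print(circular_correlation(reports, peripheral))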
With every saccade, humans must reconcile the low-resolution peripheral information available before a saccade with the high-resolution foveal information acquired after the saccade. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms may underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well-documented phenomenon, whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impaired trans-saccadic integration, and to measure the time-course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset and during the saccade, in selected subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration.
Our environment contains an abundance of objects with which humans interact daily, gathering visual information using sequences of eye movements to choose which object is best suited for a particular task. This process is not trivial: it requires a complex strategy in which task affordance defines the search strategy, and the estimated precision of the visual information gathered from each object may be used to track perceptual confidence for object selection. This study addresses the fundamental problem of how such visual information is metacognitively represented and used for subsequent behaviour, and reveals a complex interplay between task affordance, visual information gathering, and metacognitive decision making. People fixate higher-utility objects and, most importantly, retain metaknowledge about how much information they have gathered about these objects, which is used to guide perceptual report choices. These findings suggest that such metacognitive knowledge is important in situations where decisions are based on information acquired in a temporal sequence.
As humans scan the surrounding world, each saccade brings an area of interest from low-resolution peripheral vision into high-resolution foveal vision. To maintain perceptual stability across saccades, these pre- and post-saccadic percepts must be integrated. Humans are able to achieve trans-saccadic integration in a near-optimal manner (Ganmor, Landy, & Simoncelli, 2015; Wolf & Schütz, 2015); however, it is unclear whether integration can happen as soon as the information from pre- and post-saccadic stimuli becomes available, or whether integration requires the longer time usually taken to plan and execute a saccade. We measured the time-course of integration both at the saccade target and at a location between the target and initial fixation, to determine how long a stimulus needs to be presented for integration to occur. Participants were presented with oriented Gabors either pre-saccadically, post-saccadically, or both. The Gabor was presented for a variable time before and/or after saccade onset, to reduce the amount of time the stimulus information was available. Participants responded whether the Gabor was tilted clockwise or counter-clockwise. Discrimination performance was calculated for stimulus presentation durations ranging from 10 to 100 ms, to create a continuous time-course of performance for pre-saccadic, post-saccadic and integration conditions. The results show that integration occurs even when the stimulus is only presented briefly. We compared integration performance with predicted performance for different cue combination models, showing that an integration model with early noise best describes integration performance for the majority of participants. The model comparison also shows that integration benefits are not due to increased exposure duration of either pre- or post-saccadic information alone. These findings suggest that integration can occur when only very little information is available before or after a saccade. Integration also seems to be accomplished by independent channels for pre- and post-saccadic information rather than a single, spatiotopic channel. Meeting abstract presented at VSS 2018.
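For reference, the optimal cue-combination benchmark against which such data are commonly compared predicts that integrated sensitivity is the quadratic sum of the single-cue sensitivities; this is a generic sketch with hypothetical d' values, not the authors' model code:

import numpy as np

def predicted_integration_dprime(dprime_pre, dprime_post):
    # Optimal (reliability-weighted) integration of two independent cues
    # predicts d'_int = sqrt(d'_pre**2 + d'_post**2).
    return np.sqrt(dprime_pre ** 2 + dprime_post ** 2)

# Hypothetical sensitivities at one presentation duration:
print(predicted_integration_dprime(0.8, 1.1))  # ~1.36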
In active vision, relevant objects are selected in the peripheral visual field and then brought to the central visual field by saccadic eye movements. Hence, there are usually two sources of visual information about an object: information from peripheral vision before a saccade and information from central vision after a saccade. The well-known differences in processing and perception between the peripheral and the central visual field raise the question of whether and how the two pieces of information are matched and combined. This talk will provide an overview of different mechanisms that may alleviate differences between peripheral and central representations and allow for seamless perception across saccades. Trans-saccadic integration results in a weighted combination of peripheral and central information according to their relative reliability, such that uncertainty is minimized. It is a resource-limited process that does not apply to the whole visual field, but only to attended objects. Nevertheless, it is not strictly limited to the saccade target, but can be flexibly directed to other relevant locations. Trans-saccadic prediction uses peripheral information to estimate the most likely appearance in the central visual field. This allows appearance to be calibrated in the peripheral and central visual field. Such a calibration is not only relevant to maintain perceptual stability across saccades, but also to match templates for visual search in peripheral and central vision.
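A minimal sketch of the reliability-weighted combination described above, assuming Gaussian peripheral and foveal estimates with known variances (the function name and all numbers are illustrative):

def combine_estimates(x_peripheral, var_peripheral, x_foveal, var_foveal):
    # Minimum-variance (reliability-weighted) combination: each estimate
    # is weighted by its inverse variance, so uncertainty is minimized.
    w_p = (1.0 / var_peripheral) / (1.0 / var_peripheral + 1.0 / var_foveal)
    w_f = 1.0 - w_p
    combined = w_p * x_peripheral + w_f * x_foveal
    combined_var = (var_peripheral * var_foveal) / (var_peripheral + var_foveal)
    return combined, combined_var

# Hypothetical example: a noisy peripheral estimate (variance 4) and a
# more reliable foveal estimate (variance 1):
print(combine_estimates(30.0, 4.0, 34.0, 1.0))  # (33.2, 0.8)

Note that the combined variance (0.8) is lower than that of either estimate alone, which captures the uncertainty minimization described in the abstract.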
This dataset contains the behavioural and eyetracking data for the paper: Stewart, E.E.M., Ludwig, C.J.H. & Schütz, A.C. Humans represent the precision and utility of information acquired across fixations. Sci Rep 12, 2411 (2022). https://doi.org/10.1038/s41598-022-06357-7 For results of the supplementary online experiment for this paper, as well as analysis of the images in the Amsterdam Library of Object Images (ALOI) dataset, please see the separate dataset: https://doi.org/10.5281/zenodo.6068096 This dataset also contains a copy of the ALOI images used in the experiment, originally sourced from https://aloi.science.uva.nl/.
Raw data for the publication: Stewart and Fleming (2023). The eyes anticipate where an object will move based on its shape. Current Biology, 33(17), R894-R895.
Dataset for the published paper: Stewart, E. E. M., & Schütz, A. C. (2018). Optimal trans-saccadic integration relies on visual working memory. Vision Research. DOI: 10.1016/j.visres.2018.10.002