Viewing behavior exhibits temporal and spatial structure that is independent of stimulus content and task goals. One example of such structure is horizontal biases, which are likely rooted in left-right asymmetries of the visual and attentional systems. Here, we studied the existence, extent, and mechanisms of this bias. Left- and right-handed subjects explored scenes from different image categories, presented in original and mirrored versions. We also varied the spatial spectral content of the images and the timing of stimulus onset. We found a marked leftward bias at the start of exploration that was independent of image category. This left bias was followed by a weak bias to the right that persisted for several seconds. This asymmetry was found in the majority of right-handers but not in left-handers. Neither low- nor high-pass filtering of the stimuli influenced the bias. This argues against mechanisms related to the hemispheric segregation of global versus local visual processing. Introducing a delay between the offset of a central fixation spot and stimulus onset also had no influence. The bias was present even when stimuli were presented continuously, without any requirement to fixate, and with image changes contingent on either fixations or saccades. This suggests the bias is not caused by structural asymmetries in fixation control. Instead, the pervasive horizontal bias is compatible with known asymmetries of higher-level attentional areas related to the detection of novel events.
The question of how self-driving cars should behave in dilemma situations has recently attracted a lot of attention in science, the media, and society. A growing number of publications amass insight into the factors underlying the choices we make in such situations, often using forced-choice paradigms closely linked to the trolley dilemma. The methodology used to address these questions, however, varies widely between studies, ranging from fully immersive virtual reality settings to completely text-based surveys. In this paper we compare virtual reality and text-based assessments, analyzing the effect that different methodological factors have on participants' decisions and emotional responses. We present two studies, comparing a total of six different conditions varying along three dimensions: the level of abstraction, the use of virtual reality, and time constraints. Our results show that the moral decisions made in this context are not strongly influenced by the assessment method, and the compared methods ultimately appear to measure very similar constructs. Furthermore, we add to the pool of evidence on the underlying factors of moral judgment in traffic dilemmas, both in terms of general preferences, i.e., features of the particular situation and of the potential victims, and in terms of individual differences between participants, such as their age and gender.
Many learning rules for neural networks derive from abstract objective functions. The weights in these networks are typically optimized by gradient ascent on the objective function. In such networks, each neuron needs to store two variables. One variable, the activity, carries the bottom-up, sensory-fugal information involved in the core signal processing. The other variable typically describes the derivative of the objective function with respect to the cell's activity and is used exclusively for learning. This second variable allows the derivative of the objective function with respect to each weight, and thus the weight update, to be calculated. Although this approach is widely used, the mapping of these two variables onto physiology is unclear, and such learning algorithms are often considered biologically unrealistic. However, recent research on the properties of cortical pyramidal neurons shows that these cells have at least two sites of synaptic integration, the basal and the apical dendrite, and are thus appropriately described by at least two variables. Here we discuss whether these results could constitute a physiological basis for the described abstract learning rules. As examples we demonstrate an implementation of the backpropagation of error algorithm and a specific self-supervised learning algorithm using these principles. Thus, compared to standard neurons with a single site of integration, such two-site neurons make it possible to incorporate physiologically inspired properties into neural networks at a modest increase in complexity.
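To make the two-variable view concrete, the following is a minimal sketch (not the authors' code) of error backpropagation in a two-layer network, written so that each unit explicitly keeps the two quantities described above: its activity and the derivative of the objective with respect to that activity. All names (W1, W2, lr, the tanh nonlinearity, the squared-error objective) are illustrative assumptions; minimizing the squared error corresponds to gradient ascent on the negative error.

```python
import numpy as np

# Sketch under assumed names: each unit stores
#   a - activity (basal, bottom-up signal used for core processing)
#   e - dL/da   (apical, top-down signal used only for learning)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))            # input pattern (assumed toy data)
t = rng.normal(size=(2, 1))            # target pattern
W1 = rng.normal(scale=0.1, size=(3, 4))
W2 = rng.normal(scale=0.1, size=(2, 3))
lr = 0.1                               # learning rate

def f(u):                              # activation function
    return np.tanh(u)

def df(u):                             # its derivative
    return 1.0 - np.tanh(u) ** 2

for step in range(100):
    # Forward pass: each unit computes its activity variable a
    u1 = W1 @ x
    a1 = f(u1)
    u2 = W2 @ a1
    a2 = f(u2)

    # Objective: squared error L = 0.5 * ||t - a2||^2
    # Backward pass: each unit computes its second variable e = dL/da
    e2 = a2 - t                        # dL/da for the output units
    e1 = W2.T @ (e2 * df(u2))          # dL/da for the hidden units

    # Weight updates follow directly from each unit's a and e variables
    W2 -= lr * (e2 * df(u2)) @ a1.T
    W1 -= lr * (e1 * df(u1)) @ x.T
```

In this reading, the forward activities play the role of basal integration, while the backward-propagated derivatives are the second, apical-like variable that drives plasticity without entering the feedforward computation.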
In daily life, humans are bombarded with visual input. Yet, their attentional capacities for processing this input are severely limited. Several studies have investigated factors that influence these attentional limitations and have identified methods to circumvent them. Here, we provide a review of these findings. We first review studies that have demonstrated limitations of visuospatial attention and investigated physiological correlates of these limitations. We then review studies in multisensory research that have explored whether limitations in visuospatial attention can be circumvented by distributing information processing across several sensory modalities. Finally, we discuss research from the field of joint action that has investigated how limitations of visuospatial attention can be circumvented by distributing task demands across people and providing them with multisensory input. We conclude that limitations of visuospatial attention can be circumvented by distributing attentional processing across sensory modalities when tasks involve spatial as well as object-based attentional processing. However, if only spatial attentional processing is required, limitations of visuospatial attention cannot be circumvented in this way. These findings from multisensory research are applicable to visuospatial tasks that are performed jointly by two individuals. That is, in a joint visuospatial task requiring object-based as well as spatial attentional processing, joint performance is facilitated when task demands are distributed across sensory modalities. Future research could further investigate how applying findings from multisensory research to joint action research may facilitate joint performance. Generally, these findings are applicable to real-world scenarios such as aviation or car driving, where they may help circumvent limitations of visuospatial attention.