Summary Interception of fast-moving targets is a demanding task that many animals solve. To handle it successfully, mammals employ both saccadic and smooth pursuit eye movements to confine the target to their area centralis. But how can non-mammalian vertebrates, which lack smooth pursuit, intercept moving targets? We studied this question by exploring the eye movement strategies employed by archer fish, an animal that possesses an area centralis, lacks smooth pursuit eye movements, but can intercept moving targets by shooting jets of water at them. We tracked the gaze direction of fish during interception of moving targets and found that they employ saccadic eye movements based on a prediction of the target's position when it is hit. The fish fixates on the target's initial position for ~0.2 s from the onset of its motion, a time period used to predict whether a shot can be made before the projection of the target exits the area centralis. If the prediction indicates otherwise, the fish performs a saccade that overshoots the center of gaze beyond the present target projection on the retina, such that after the saccade the moving target remains inside the area centralis long enough to prepare and perform a shot. These results add to the growing body of knowledge on biological target tracking and may shed light on the mechanism underlying this behavior in other animals with no neural system for the generation of smooth pursuit eye movements.
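The decision rule described above can be illustrated with a minimal sketch. The parameter values (area centralis half-width, fixation delay, shot preparation time) and the function names are hypothetical choices for illustration, not measurements or procedures from the study; only the overall logic — fixate, predict whether the target will leave the area centralis before a shot can be made, and if so saccade ahead of the target — follows the summary.

```python
# Illustrative parameters (hypothetical values, not measurements from the study)
AREA_CENTRALIS_HALF_WIDTH_DEG = 5.0   # assumed angular half-width of the area centralis
FIXATION_DELAY_S = 0.2                # initial fixation on the target's starting position
SHOT_PREP_TIME_S = 0.3                # assumed time needed to aim and release a shot


def needs_saccade(target_angular_speed_deg_s, initial_offset_deg=0.0):
    """Predict whether the target leaves the area centralis before a shot can be made."""
    # Angular distance the target covers during fixation plus shot preparation
    travel_deg = target_angular_speed_deg_s * (FIXATION_DELAY_S + SHOT_PREP_TIME_S)
    return initial_offset_deg + travel_deg > AREA_CENTRALIS_HALF_WIDTH_DEG


def saccade_landing_point(target_position_deg, target_angular_speed_deg_s):
    """Overshoot the current target position so the target stays foveated during shot preparation."""
    overshoot_deg = target_angular_speed_deg_s * SHOT_PREP_TIME_S
    return target_position_deg + overshoot_deg


# Example: a target sweeping across the visual field at 40 deg/s
speed = 40.0
position_after_fixation = speed * FIXATION_DELAY_S
if needs_saccade(speed):
    gaze = saccade_landing_point(position_after_fixation, speed)
    print(f"Saccade to {gaze:.1f} deg (target currently at {position_after_fixation:.1f} deg)")
else:
    print("Target stays inside the area centralis; no saccade needed")
```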
As we interact with our environment, the features of objects in the visual scene are not consistently present on the retina, and sensory cues used to guide visual behavior are not always available. Thus, active observers are faced with the ubiquitous task of comparing sensory stimuli across time and space. When monkeys compare the direction of motion of two foveally presented stimuli, S1 and S2, separated by a delay, neurons in the dorsolateral prefrontal cortex (DLPFC) show direction-selective (DS) responses suggestive of their origins in area MT (Zaksas & Pasternak, 2006). Furthermore, responses to S2 are often modulated by the preceding direction, reflecting the process of sensory comparison (Hussar & Pasternak, 2012). However, DLPFC neurons respond to motion not only at the fovea but also across the entire visual field, receiving direct bottom-up inputs from ipsilateral MT representing contralateral stimuli and indirect inputs from the opposite MT representing ipsilateral stimuli. Since MT is highly retinotopic, we examined whether DS in DLPFC retains the spatial specificity of its inputs by presenting stimuli in the contralateral and ipsilateral hemifields during the direction comparison task. We found that DLPFC responses often showed large differences in DS for stimuli presented at different visual field locations and that these differences were preserved on trials with S1 and S2 appearing in opposite hemifields. This suggests that these responses reflect the convergence of MT inputs representing different spatial locations. We also found that the comparison effects during S2 appeared only when both S1 and S2 were placed in the contralateral hemifield, suggesting that the direct input from ipsilateral MT may be necessary for producing comparison effects in DLPFC. These results demonstrate that the topography of sensory representation in DLPFC is governed by its connectivity with MT. Meeting abstract presented at VSS 2013
Abstract Archerfish are known for their remarkable behavior of shooting water jets at prey hanging on vegetation above water. Motivated by the fish's capacity to knock down small prey as high as two meters above water level, we studied the role of the retina in facilitating their excellent visual acuity. First, we show behaviorally that archerfish (Toxotes jaculatrix) can detect visual structures with a minimum angle of resolution in the range of 0.075°–0.15°. Then, combining eye movement measurements with a ray tracing method, we show that the image of a target on the retina coincides with the area centralis at the ventro-temporal retina. Moving down to retinal neural circuits, we then examined the ratio by which retinal ganglion cells multiplex visual information from the photoreceptors. Measuring the anatomical densities of both cell types in the area centralis, we found photoreceptor spacing to be 5.8 μm, which supports a minimum angle of resolution as low as 0.073°. Similarly, the average spacing of the ganglion cells was 5.7 μm. Based on electrophysiological measurements, we found the smallest receptive fields of ganglion cells in that area to be in the range of 8–16 μm, which translates to an angular width of 0.1°–0.2°. These findings indicate that retinal ganglion cells in the area centralis stream information to the brain at a resolution comparable to that at which it is sampled by the photoreceptors. Thus, the archerfish can be used as an animal model for studying how visual details are streamed to the brain by retinal output.
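The conversion from retinal distance to visual angle used in these estimates depends on the eye's posterior nodal distance, which is not quoted in the abstract. The sketch below assumes a value of about 4.55 mm, back-calculated so that the stated 5.8 μm photoreceptor spacing maps onto ~0.073°; this number is an assumption, not a reported measurement. With that assumption, the 8–16 μm receptive fields indeed map onto roughly 0.1°–0.2°.

```python
import math

# Assumed posterior nodal distance (effective focal length) of the archerfish eye, in micrometers.
# Back-calculated so that 5.8 um subtends ~0.073 deg; an assumption, not a value from the abstract.
POSTERIOR_NODAL_DISTANCE_UM = 4550.0


def retinal_distance_to_angle_deg(distance_um):
    """Convert a distance on the retina to the visual angle it subtends."""
    return math.degrees(math.atan(distance_um / POSTERIOR_NODAL_DISTANCE_UM))


# Photoreceptor spacing sets the sampling (Nyquist) limit: the minimum angle of
# resolution is roughly one inter-receptor angle.
print(retinal_distance_to_angle_deg(5.8))    # ~0.073 deg
# Smallest ganglion-cell receptive fields of 8-16 um map onto ~0.1-0.2 deg.
print(retinal_distance_to_angle_deg(8.0))    # ~0.10 deg
print(retinal_distance_to_angle_deg(16.0))   # ~0.20 deg
```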
The assembly of mixtures of nanoparticles with different properties into a binary nanoparticle superlattice (BNSL) provides a route to fabricate novel classes of materials with properties emerging from the choice of the building blocks. The common theoretical approach based on the hard-sphere model predicts crystallization of only a few metastable binary superstructures (NaCl, AlB2, or AB13). Recently [Shevchenko, E. V.; Talapin, D. V.; O'Brien, S.; Murray, C. B. Nature 2006, 439, 55], it has been demonstrated that with the use of a combination of semiconducting, metallic, and magnetic nanoparticles, a variety of novel BNSL structures can be formed, of which at least 10 were low-density structures that had not been previously reported. While some of the structures can be explained by the addition of electrostatic interactions, it is clear that at the nanometer scale one needs to consider other influences, such as van der Waals forces, steric effects, etc. Motivated by those experiments, we study, using Monte Carlo simulations, the phase behavior of binary mixtures of nanoparticles interacting via a combination of hard-core repulsion, electrostatics, and van der Waals forces. We include a tuning parameter that can be used to balance between electrostatic and dispersion interactions and study the phase behavior as a function of the different charges and size ratios of the nanoparticles. The results indicate that at the nanoscale, both electrostatic and dispersion interactions are necessary to explain the experimentally observed BNSL structures.
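A minimal sketch of the kind of pair interaction and Metropolis acceptance step such a Monte Carlo study might use: a hard core, a screened-Coulomb (Yukawa) electrostatic term, a simple inverse-power dispersion attraction, and a parameter lam that tunes the balance between the two. The functional forms, parameter names, and values here are assumptions for illustration, not the potentials actually used in the study.

```python
import math
import random

def pair_energy(r, sigma_i, sigma_j, q_i, q_j, lam=0.5, kappa=1.0, a_vdw=1.0):
    """Illustrative pair energy: hard core + screened Coulomb + dispersion attraction.

    lam balances electrostatics against dispersion; kappa is an assumed screening
    constant and a_vdw an assumed dispersion strength (all hypothetical).
    """
    contact = 0.5 * (sigma_i + sigma_j)                           # hard-core contact distance
    if r < contact:
        return float("inf")                                       # overlapping cores forbidden
    u_elec = q_i * q_j * math.exp(-kappa * (r - contact)) / r     # screened Coulomb (Yukawa)
    u_vdw = -a_vdw * (contact / r) ** 6                           # simple dispersion attraction
    return lam * u_elec + (1.0 - lam) * u_vdw


def metropolis_accept(delta_u, beta=1.0):
    """Standard Metropolis criterion for accepting a trial particle displacement."""
    return delta_u <= 0.0 or random.random() < math.exp(-beta * delta_u)


# Example: energy between a small positive particle and a larger, oppositely charged one
print(pair_energy(r=1.2, sigma_i=1.0, sigma_j=1.4, q_i=+1.0, q_j=-2.0, lam=0.7))
```

Sweeping lam between 0 and 1 at fixed charge and size ratios is one simple way to probe, within such a toy model, how the competition between electrostatic and dispersion contributions shifts which binary superstructures are favored.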