Recent advances in in vivo two-photon imaging have extended the technique to permit the detection of action potentials (APs) in populations of spatially resolved neurons in awake animals. Although experimentally demanding, the technique opens the way to experiments investigating perception, behavior, and other processes in the awake state. Here we outline experimental procedures for imaging neuronal populations in awake and anesthetized rodents. Details are provided on habituation to head fixation, surgery, head-plate design, and dye injection. Determination of AP detection accuracy through simultaneous optical and electrophysiological recordings is also discussed. Basic problems of data analysis are considered, such as correction of signal background and baseline drift, AP detection, and motion correction. As an application of the method, the comparison of neuronal activity across arousal states is considered in detail, and some future directions are discussed.
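The analysis steps listed above (baseline-drift correction and AP detection) can be illustrated compactly. The sketch below shows one common approach, a running-percentile baseline followed by ΔF/F and robust thresholding; it is a minimal illustration under assumed parameters (window length, percentile, threshold), not the pipeline described in the text.

```python
import numpy as np

def dff_with_baseline_correction(f, fs, win_s=30.0, percentile=8):
    """Correct slow baseline drift in a raw fluorescence trace and return
    dF/F. F0 is a running low percentile of the trace, which tracks drift
    while largely ignoring AP-evoked transients. Values are illustrative."""
    half = int(win_s * fs / 2)
    f0 = np.array([np.percentile(f[max(0, i - half):i + half + 1], percentile)
                   for i in range(len(f))])
    return (f - f0) / f0

def detect_transient_onsets(dff, fs, n_sd=3.0):
    """Flag candidate AP-associated transients as upward crossings of a
    threshold set at n_sd robust standard deviations (MAD-based)."""
    sd = 1.4826 * np.median(np.abs(dff - np.median(dff)))  # robust SD
    above = dff > n_sd * sd
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets / fs  # onset times in seconds
```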
Abstract

Mice have a large visual field that is constantly stabilized by vestibulo-ocular reflex (VOR) driven eye rotations that counter head rotations. While maintaining their extensive visual coverage is advantageous for predator detection, mice also track and capture prey using vision. However, quantifying object location in the field of view of a freely moving animal is challenging. Here, we developed a method to digitally reconstruct and quantify the visual scene of freely moving mice performing a visually based prey capture task. By isolating the visual sense and combining a mouse eye optic model with the measured head and eye rotations, we projected a detailed reconstruction of the digital environment, together with retinal features, onto the corneal surface and updated it throughout the behavior. By quantifying the spatial location of objects in the visual scene and their motion throughout the behavior, we show that the prey image consistently falls within a small area of the VOR-stabilized visual field. This functional focus coincides with the region of minimal optic flow within the visual field, and consequently the area of minimal motion-induced image blur, since during pursuit mice ran directly toward the prey. The functional focus lies in the upper-temporal part of the retina and coincides with the reported high-density region of Alpha-ON sustained retinal ganglion cells.

eLife digest

Mice have a lot to keep an eye on. To survive, they need to dodge predators looming on land and from the skies, while also hunting down the small insects that are part of their diet. To do this, they are helped by their large panoramic field of vision, which stretches from behind and over their heads to below their snouts. To stabilize their gaze when they are on the prowl, mice reflexively move their eyes to counter the movement of their head: in fact, they are unable to move their eyes independently. This raises the question: what part of their large visual field do these rodents use when tracking prey, and to what advantage? This is difficult to investigate, since it requires simultaneously measuring the eye and head movements of mice as they chase and capture insects. In response, Holmgren, Stahr et al. developed a new technique to record the precise eye positions, head rotations, and prey location of mice hunting crickets in surroundings that were fully digitized at high resolution. Combining this information allowed the team to mathematically recreate what mice would see as they chased the insects, and to assess what part of their large visual field they were using. This revealed that, once a cricket had entered any part of the mice's large field of view, the rodents shifted their head, but not their eyes, to bring the prey into both eye views, and then ran directly at it. If the insect escaped, the mice repeated that behavior. During the pursuit, the cricket's position was mainly held in a small area of the mouse's view that corresponds to a specialized region in the eye which is thought to help track objects. This region also allowed the least motion-induced image blur when the animals were running forward. The approach developed by Holmgren, Stahr et al. gives a direct insight into what animals see when they hunt, and how this constantly changing view ties to what happens in the eyes.
This method could be applied to other species, ushering in a new wave of tools to explore what freely moving animals see, and the relationship between behaviour and neural circuitry.

Introduction

The visual system of mice serves a variety of seemingly opposing functions, ranging from detection of predators to finding shelter and selecting food and mates, and is required to do so in a diverse set of environments (Boursot et al., 1993). For example, foraging in open areas where food is available involves object selection and, in the case of insect predation (Badan, 1986; Tann et al., 1991), prey tracking and capture (Hoy et al., 2016; Langley, 1983; Langley, 1984; Langley, 1988), but the visual system can also simultaneously be relied on for avoidance of predation, particularly from airborne predators (Hughes, 1977). As in many ground-dwelling rodents (Johnson and Gadow, 1901), predator detection in mice is served by a panoramic visual field, achieved by the lateral placement of the eyes in the head (Dräger, 1978; Hughes, 1979; Oommen and Stahl, 2008) combined with monocular visual fields of around 200 degrees (Dräger and Olsen, 1980; Hughes, 1979; Sterratt et al., 2013). In mice, the panoramic visual field extends to cover regions above the animal's head, below the animal's snout, and laterally from behind the animal's head to the contralateral side, with the overlapping visual fields from both eyes forming a large binocular region overhead and in front of the animal (Hughes, 1977; Sabbah et al., 2017). In addition, eye movements in freely moving mice constantly stabilize the animal's visual field by counteracting head rotations through the vestibulo-ocular reflex (VOR) (Meyer et al., 2020; Meyer et al., 2018; Michaiel et al., 2020; Payne and Raymond, 2017), maintaining the large panoramic overhead view (Wallace et al., 2013) critical for predator detection (Yilmaz and Meister, 2013). Given the VOR-stabilized panoramic field of view, it is not clear what part of the visual field mice use to detect and track prey (but see: Johnson et al., 2021). The mouse retina contains retinal ganglion cells (RGCs), the output cells of the retina, with a broad diversity of functional classes (Baden et al., 2016; Bleckert et al., 2014; Franke et al., 2017; Zhang et al., 2012). Given the lateral eye position, the region of highest overall RGC density faces laterally (Dräger and Olsen, 1981; Sabbah et al., 2017; Salinas-Navarro et al., 2009; Stabio et al., 2018). Further, as the functionally defined ganglion cell types (Baden et al., 2016; Bleckert et al., 2014; Franke et al., 2017; Zhang et al., 2012) and cone subtypes (Szél et al., 1992) are segregated into retinal subregions within the large stabilized field of view, recent studies suggest that retinal subregions are tuned for specific behavioral tasks depending on what part of the world they subtend (Baden et al., 2016; Bleckert et al., 2014; Hughes, 1977; Sabbah et al., 2017; Szatko et al., 2020; Zhang et al., 2012). The challenge is to measure what part of the visual field the mouse attends to during a visually based tracking task (Hoy et al., 2016), together with the location of all objects in the field of view during the behavior.
While recent studies have inferred the relationship between prey and retina by tracking head position (Johnson et al., 2021), or have measured horizontal and vertical eye rotations (Meyer et al., 2020; Meyer et al., 2018) during pursuit behavior (Michaiel et al., 2020) and uncovered a large proportion of stabilizing eye rotations, what is missing is the extent and location of the visual-field area used when detecting and pursuing prey, and its relationship to the retina (Bleckert et al., 2014). Here, we measured the position of a cricket in the visual fields of freely moving mice performing a prey pursuit behavior, using head and eye tracking in all three rotational axes, namely horizontal, vertical, and torsional. Eye tracking included an anatomical calibration to accurately account for the anatomical positions of both eyes. To quantify object location in the animal's field of view and generate optic flow fields, head and eye rotations were combined with a high-resolution digital reconstruction of the arena to form a detailed visual map from the animal's eye perspective. Given that mice use multisensory strategies during prey pursuit (Gire et al., 2016; Langley, 1983; Langley, 1988) and can track prey using auditory, visual, or olfactory cues (Langley, 1983; Langley, 1988), we developed a behavioral arena that isolated the visual aspect of the behavior by removing auditory and olfactory directional cues, ensuring that the behavior was visually guided. To transfer the retinal topography onto the corneal surface, we developed an eye model capturing the optical properties of the mouse eye. We show that during prey detection mice preferentially position prey objects in stable foci located in the binocular field and undertake direct pursuit. Prey objects remain in the functional foci through the stabilizing action of the VOR, and not through active prey-pursuit eye movements. The stabilized functional foci are spatially distinct from the regions of highest total retinal ganglion cell density, which are directed laterally, but coincide with the regions of the visual field where there is minimal optic flow, and therefore minimal motion-induced image disturbance during the behavior, as the mouse runs towards the cricket. Lastly, by building an optical model that allows corneal spatial locations to be projected onto the retina, we suggest that the functional foci correspond to retinal subregions containing a high density of Alpha-ON sustained RGCs, which have center-surround receptive fields, project to both superior colliculus and dLGN (Huberman et al., 2008), and possess properties consistent with the requirements for tracking small, mobile targets (Krieger et al., 2017).

Results

Forming a view from the animal's point of view

To measure what part of the visual field mice use during prey capture, while also considering that mice can use multisensory strategies during prey pursuit (Gire et al., 2016; Langley, 1983; Langley, 1988), we first developed an arena which isolated the visual component of prey pursuit by masking olfactory and auditory spatial cues (Figure 1A, see Materials and methods for details). Removing both olfactory and auditory cues approximately doubled the average time to capture a cricket compared to removal of auditory cues alone (time to capture, median ± SD: control, 24.92 ± 16.77 s; olfactory and auditory cues removed, 43.51 ± 27.82 s; p = 0.0471, Wilcoxon rank sum test; N = 13 control and 12 cue-removed trials from N = 5 mice).
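A recurring quantity in the analyses that follow is the cricket's position in each eye's corneal view, which requires composing the tracked head-in-world and eye-in-head rotations. A minimal sketch of this composition is given below (Python, NumPy/SciPy); the Euler-angle conventions, axis definitions, and function name are illustrative assumptions, not the exact conventions used in the study.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def cricket_in_eye_coords(cricket_xyz, eye_xyz, head_euler_deg, eye_euler_deg):
    """Express a tracked cricket position in an eye-fixed frame by composing
    head-in-world and eye-in-head rotations. Assumed (illustrative) axes:
    x forward, y left, z up; head angles (yaw, pitch, roll), eye angles
    (horizontal, vertical, torsion)."""
    r_head = R.from_euler('zyx', head_euler_deg, degrees=True)  # head in world
    r_eye = R.from_euler('zyx', eye_euler_deg, degrees=True)    # eye in head
    v_world = np.asarray(cricket_xyz) - np.asarray(eye_xyz)     # eye -> cricket
    v_eye = (r_head * r_eye).inv().apply(v_world)               # into eye frame
    # azimuth/elevation of the prey image relative to the optical axis
    az = np.degrees(np.arctan2(v_eye[1], v_eye[0]))
    el = np.degrees(np.arctan2(v_eye[2], np.hypot(v_eye[0], v_eye[1])))
    return az, el
```

With az and el in hand for every frame, the cricket can be accumulated into corneal-view maps like those shown later in Figure 2.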
To track mouse head and eye rotations during prey capture, we further developed a lightweight version of our head-mounted oculo-videography and camera-based pose and position tracking system (Wallace et al., 2013; Figure 1B and Materials and methods). This approach allowed quantification of head rotations in all three axes of rotation (pitch, roll, and yaw), as well as eye rotations in all three ocular rotation axes (torsion, horizontal, and vertical; Figure 1C, Figure 1—figure supplement 1A and B). The same camera-based system was used to track and triangulate the position of the cricket (see Materials and methods and Figure 1—figure supplement 1C). To quantify the position and motion of the environment and cricket in the mouse's field of view, we also developed a method that enabled a calibrated environment digitization to be projected onto the corneal surface. This approach combined laser scanning and photogrammetry, giving a 2 mm resolution reconstruction of the entire experimental room, together with detailed measurement of eye and head rotations (Figure 1D–E, and see Materials and methods). Mice, like rats (Wallace et al., 2013), have a large visual field which also extends over the animal's head (Figure 1F). To ensure that the entire visual fields of the mouse could be captured during behavior, we digitized the entire experimental room and its contents (Figure 1E, Figure 1—figure supplement 1D–F, Video 1). The coordinate systems of the environmental digitization and the mouse and cricket tracking systems were registered using 16–20 fiducial markers identified in both the overhead camera images and the digitized environment. The average differences in position of fiducial points between the two coordinate systems were less than 1 mm (mean ± SD: x position, 0.18 ± 3.1 mm; y position, 0.07 ± 1.6 mm; z position, 0.66 ± 1.8 mm; N = 54 fiducial points from three datasets). The next step was to re-create the view for each eye. First, and for each mouse, the positions of both eyes and nostrils were measured with respect to both the head-rotation tracking LEDs and head-mounted cameras, then calibrated into a common coordinate system (Figure 1B). Together, this enabled a rendered representation of the digitized field of view for each combination of head and eye rotations. This rendered image, from the animal's point of view, contained all the arena and lab objects (Figure 1G–H, Videos 2 and 3, Figure 1—figure supplement 1G). In addition to object position and distance (Figure 1I), the motion of the environment and of each object in the field of view could be quantified as the mouse performed prey capture behaviors (Figure 1J, Figure 1—figure supplement 1H).
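The fiducial-marker registration just described is a rigid-body alignment problem. Below is a minimal sketch of one standard solution, the Kabsch algorithm; the function name and the pure rotation-plus-translation model are assumptions for illustration, and the study's actual registration procedure is given in its Materials and methods.

```python
import numpy as np

def register_fiducials(p_cam, p_scan):
    """Rigid (rotation + translation) alignment of fiducial markers
    triangulated by the overhead cameras (p_cam, N x 3) to the same markers
    in the laser-scanned digitization (p_scan, N x 3) via the Kabsch
    algorithm. Returns R, t such that p_scan ~ p_cam @ R.T + t."""
    mu_c, mu_s = p_cam.mean(0), p_scan.mean(0)
    H = (p_cam - mu_c).T @ (p_scan - mu_s)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    Rmat = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - Rmat @ mu_c
    residuals = p_scan - (p_cam @ Rmat.T + t)    # per-marker error, as in text
    return Rmat, t, residuals
```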
Figure 1 (with 1 supplement). Reconstruction of experimental arena and surrounds from the animal's perspective. (A) Schematic of experimental arena with olfactory and auditory noise. (B) Schematic of tracking, anatomical, and eye camera calibration. Head position and orientation were tracked using seven IR-LEDs (colored circles). Nostrils (red, yellow filled circles), left (blue filled circle), and right (green filled circle) medial canthi were identified and triangulated in calibration images and used to define a common coordinate system (forward, blue arrow; right, green arrow; up, red arrow), into which the calibrated eye camera location and orientation could also be placed (eye camera vertical, cyan; horizontal, purple; camera optical axis, red). (C) Example left and right eye camera images with tracked pupil position (white dashed outlines). (D) Rendered digital reconstruction of the laboratory room and (E) experimental arena. (F) Schematic representation of the mouse's left (blue) and right (green) visual fields, also showing the region of binocular overlap (yellow) and the unseen region (white). (G) Reconstruction of the arena and room from the animal's left and right eye perspectives, with monocular and binocular regions colored as in (F). (H) Reconstruction of the animal's view of the prey (cricket, black) in the experimental arena. (I) Representation of left and right eye views of the arena and surrounding objects, grayscale-coded by distance from the eye. (J) Rendered animal's-eye views from the left and right eyes, with overlaid arrows representing optic flow during 10 ms of free motion.

Figure 1—source data 1 (related to Figure 1D): https://cdn.elifesciences.org/articles/70838/elife-70838-fig1-data1-v1.zip
Figure 1—source data 2 (related to Figure 1G): https://cdn.elifesciences.org/articles/70838/elife-70838-fig1-data2-v1.zip
Figure 1—source data 3 (related to Figure 1H): https://cdn.elifesciences.org/articles/70838/elife-70838-fig1-data3-v1.zip
Figure 1—source data 4 (related to Figure 1I): https://cdn.elifesciences.org/articles/70838/elife-70838-fig1-data4-v1.zip
Figure 1—source data 5 (related to Figure 1J): https://cdn.elifesciences.org/articles/70838/elife-70838-fig1-data5-v1.zip

Video 1. Digitized and rendered view of the experimental arena and surrounding environment. Laser-scanned and digitally reconstructed experimental environment, providing positional information for objects within the mouse's environment. When combined with the tracked 3D cricket positions and the tracked mouse head and eye positions and rotations, this allowed the generation of a frame-by-frame mouse-eye view of the prey and the surroundings.

Video 2. Reconstruction of the mouse's left and right eye fields of view during one example behavioral sequence. Real speed.

Video 3. Reconstruction of the mouse's left and right eye fields of view during one example behavioral sequence, as shown in Video 2, but slowed to 0.5x real speed.
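The optic-flow overlays in Figure 1J can be approximated by projecting static environment points into the eye frame at two consecutive poses and differencing their angular positions. The finite-difference sketch below is illustrative; frame conventions and the function name are assumptions, and angle wrap-around at ±180° is ignored for brevity.

```python
import numpy as np

def corneal_optic_flow(points_w, eye_pos, eye_rot, eye_pos_next, eye_rot_next):
    """Approximate the optic flow over one frame step. points_w is an N x 3
    array of static environment points; eye_rot / eye_rot_next are scipy
    Rotations (eye-to-world) at consecutive frames."""
    def project(pos, rot):
        v = rot.inv().apply(points_w - pos)          # world -> eye frame
        v = v / np.linalg.norm(v, axis=1, keepdims=True)
        az = np.arctan2(v[:, 1], v[:, 0])            # azimuth on view sphere
        el = np.arcsin(np.clip(v[:, 2], -1, 1))      # elevation
        return np.column_stack([az, el])
    a0 = project(eye_pos, eye_rot)
    a1 = project(eye_pos_next, eye_rot_next)
    return np.degrees(a1 - a0)   # flow vector (deg/frame) per point
```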
During pursuit the image of the prey consistently falls in a localized visual region

Crickets (Acheta domesticus), shown previously to be readily pursued and preyed upon by laboratory mice (Hoy et al., 2016), provided a prey target that could successfully evade capture for extended periods of time (total time for each cricket before capture: 64.4 ± 39.3 s, average time ± SD, N = 21 crickets and three mice, Videos 4 and 5). To ensure that only data where the mouse was actively engaged in the detection and tracking of the cricket were used, we identified occasions where the mouse either captured the cricket, or contacted the cricket but the cricket escaped (see Materials and methods for definitions), and then quantified the trajectories of both mouse and cricket leading up to the capture or capture-escape (Figure 2A). Within these chase sequences we defined three behavioral epochs (detect, track, and capture; Figure 2B, see Materials and methods for definition details) based on the behavior of mouse and cricket, similar to previous studies (Hoy et al., 2016).

Figure 2 (with 1 supplement). Mice use a focal region of their visual field to track prey. (A) Mouse (black) and cricket (orange) paths during a single pursuit sequence (left), and for all pursuit sequences in one session for one animal (right). Pursuit start denoted by filled circles and cricket capture by X. (B) Mouse (red and blue) and cricket (orange) paths during an individual pursuit sequence (left) and all pursuit sequences in one session from one animal (right), showing detect (red) and track (blue) epochs of the mouse path. Paths after a cricket escape are shown dashed. Pursuit sequence start shown as filled circles; cricket landing point after a jump shown as a filled triangle. (C) Euclidean distance between mouse and cricket during detect (red) and track (blue) epochs (n = 65 trajectories, n = 3 mice). (D) Mean and SD of bearing to cricket (angle between the mouse's forward direction and cricket location) during detect (red) and track (blue) epochs from all animals (detect: 57 epochs; track: 65 epochs; n = 3 animals; bin size = 5°). (E) Trajectory of the projected cricket position in the left and right corneal views during a single pursuit sequence. Color scheme as in D. The inner dashed circle is 45° from the optical axis. Dorsal (D), ventral (V), nasal (N), and temporal (T) directions indicated. (F) Average probability density maps for detect epochs (4628 frames from three animals). Orientation as in E. (G) Average probability density maps for track epochs (13641 frames from three animals). Orientation as in E. (H) Isodensity contours calculated from the average probability density maps for track epochs (note that the 50% contour contains 50% of the total density, and likewise for the other contours). Orientation as in E.

Figure 2—source data 1 (related to Figure 2A, B, C, D, H): https://cdn.elifesciences.org/articles/70838/elife-70838-fig2-data1-v1.xlsx
Figure 2—source data 2 (related to Figure 2E): https://cdn.elifesciences.org/articles/70838/elife-70838-fig2-data2-v1.zip
Figure 2—source data 3 (related to Figure 2F): https://cdn.elifesciences.org/articles/70838/elife-70838-fig2-data3-v1.zip
Figure 2—source data 4 (related to Figure 2G): https://cdn.elifesciences.org/articles/70838/elife-70838-fig2-data4-v1.zip

Video 4. Left and right eye camera images and one overhead camera view showing one complete cricket pursuit, from shortly after release of the cricket into the arena to cricket capture. Real speed.
Video 5. The same cricket pursuit as shown in Video 4, but slowed to 0.5x real speed.

Upon cricket detection, mice oriented and ran towards the cricket, resulting in a significant adjustment to their trajectory (Δ target bearing: 40.2 ± 35.1°, P = 6.20 × 10^−10; Δ speed: 10.2 ± 7.4 cm/s, P = 1.91 × 10^−10; N = 57 detect-track sequences, N = 3 mice; paired Wilcoxon signed-rank test for both) and a rapid reduction in the Euclidean distance to the cricket (Figure 2C). During tracking, the cricket was kept in front of the mouse, resulting in a significant reduction in the spread of target bearings compared to detect epochs (Figure 2D; target bearing: detect, 6.2 ± 62.1°; track, 2.5 ± 25.6°; mean ± SD; Brown-Forsythe test, p = 0, F statistic = 7.05 × 10^3; N = 4406 detect and 13624 track frames, N = 3 mice), consistent with previous findings (Hoy et al., 2016). To avoid the closing phase of the pursuit being associated with whisker strikes (Shang et al., 2019; Zhao et al., 2019), tracking periods were only analyzed when the mouse was more than 3 cm from the cricket, based on whisker length (Ibrahim and Wright, 1975). Using the detailed digitization of the behavioral arena and surrounding laboratory (Figure 1E, Video 1), an image of the cricket and objects in the environment was calculated for each head and eye position during the predator-prey interaction (Videos 2 and 3). Using this approach, we addressed the question of what area of the visual field the cricket occupied during the various behavioral epochs. In the example pursuit sequence in Figure 2E, the cricket was initially located in the peripheral visual field and then transitioned to the lower nasal binocular quadrant of the corneal view during pursuit and capture (red trace in left eye to blue trace in both eyes). Correspondingly, an average probability density map calculated for all animals during the detect epoch showed a very broad distribution of cricket positions across the visual field (Figure 2F, Figure 2—figure supplement 1A and B). Upon detection the mouse oriented toward the cricket, bringing it toward the lower nasal binocular visual field (Figure 2E, Video 6). When averaged over all pursuit sequences from all animals, projected cricket positions formed a dense cluster on the cornea of both eyes (Figure 2G and H, Figure 2—figure supplement 1A, C–D; 50% contour center for left and right eye, respectively: radial displacement from optical axis 64.3 ± 7.5° and 63.3 ± 9.9°, rotational angle 126.2 ± 8.9° and −115.7 ± 6.1°, mean ± SD, N = 3 mice), which was significantly different from the cluster in the detect epoch (average histogram of cricket image location during the track phase vs. the detect phase: left eye, P = 3.54 × 10^−46; right eye, P = 1.08 × 10^−81; differences calculated as the mean absolute difference with bootstrapping; N = 57 detect-track sequences, N = 3 mice). Thus, during tracking and pursuit the image of the prey consistently fell on a local and specific retinal area that we refer to from here on as the functional focus.
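The probability density maps and isodensity contours just described (Figure 2F–H) can be computed from the projected cricket positions in a few lines. The sketch below finds the density level whose super-level set contains a given fraction of the total probability; the bin count and coordinate conventions are illustrative assumptions, not the study's exact analysis parameters.

```python
import numpy as np

def isodensity_threshold(az, el, frac=0.5, bins=90):
    """Build a 2D probability density map of projected cricket positions
    (azimuth/elevation in the corneal view, degrees) and find the density
    level whose super-level set contains `frac` of the total probability,
    analogous to the 50% isodensity contours of Figure 2H."""
    hist, xe, ye = np.histogram2d(az, el, bins=bins, density=True)
    cell = np.diff(xe)[0] * np.diff(ye)[0]
    masses = np.sort(hist.ravel())[::-1] * cell   # cell masses, densest first
    k = np.searchsorted(np.cumsum(masses), frac)  # smallest set holding frac
    level = masses[k] / cell                      # density at that cutoff
    mask = hist >= level                          # cells inside the contour
    return level, mask, (xe, ye)
```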
The functional focus fell within the binocular field, while the region of elevated RGC density has been reported to lie near the optical axis (Dräger and Olsen, 1981), suggesting that the location of the retinal specialization may not overlap with the functional focus.

Video 6. Reconstructed left and right mouse-eye views for one example pursuit sequence, showing the trajectory of the cricket position in the eye views during the detect (red) and track (blue) segments of the behavior.

Relative locations of functional foci and ganglion cell density distributions

To establish the relationship between the identified functional focus and the density distribution of RGCs, we built a mouse eye model (Figure 3A), modified from previous models (Barathi et al., 2008). Using the eye model, retinal spatial locations could be projected through the optics of the mouse eye to the corneal surface. We first reconstructed the isodensity contours quantifying the distribution of all RGCs (Dräger and Olsen, 1981) to define the retinal location with the highest overall ganglion cell density (Figure 3—figure supplement 1A–C; note that these contours are also in agreement with other recently published maps of total RGC density [Bleckert et al., 2014; Zhang et al., 2012]). The lens optical properties were based on a GRIN lens (present in both rats [Hughes, 1979; Philipson, 1969] and mice [Chakraborty et al., 2014]). To determine the optical characteristics of this lens, we developed a method which combined models of the lens surface and refractive index gradient (Figure 3A, Figure 3—figure supplement 1D, Tables 1 and 2; see Materials and methods for details). Using this model, the contours representing the retinal specializations were projected through the eye model onto the corneal surface to determine equivalent corneal locations (Figure 3B, Figure 3—figure supplement 1E). Comparing these locations showed that the region with the highest overall RGC counts and the functional focus (Figure 3B) occupied distinct retinal locations (Figure 3C). Viewed from above the animal's head, the functional foci were directed at the region in front of the animal's nose, within the region of stable binocular overlap (azimuth: 1.4 ± 8.8° and −4.4 ± 9.3°; elevation: 5.7 ± 2.1° and 4.9 ± 1.4° for left and right eyes, respectively; N = 13641 frames, N = 3 mice), while the retinal specialization was directed laterally (azimuth: −66.2 ± 6.7° and 70.3 ± 4.7°; elevation: 30.8 ± 12.2° and 41.0 ± 13.5° for left and right eyes, respectively; N = 13641 frames, N = 3 mice; Figure 3D, Figure 3—figure supplement 1F–G). Given that density distributions for different subtypes of RGCs can be spatially heterogeneous, with density peaks in distinctly different retinal locations, and that the region of peak density for Alpha-ON sustained RGCs lies in the dorso-temporal retina (Bleckert et al., 2014), consistent with projecting to the front of the animal, we next quantified whether this region overlapped with the functional focus observed here (Figure 3E).
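The refractive-index-gradient (GRIN) lens modeling referenced above can be illustrated with a paraxial ray trace. Under an assumed parabolic index profile n(r) = n0·(1 − (g·r)²/2), paraxial rays obey d²r/dz² = −g²·r; the sketch below integrates this numerically so other measured profiles could be substituted. All parameter values are illustrative, not the fitted mouse-lens values of Tables 1 and 2.

```python
import numpy as np

def trace_grin_ray(r0, slope0, g, z_max, dz=1e-4):
    """Paraxial ray trace through a radial GRIN medium with index profile
    n(r) = n0 * (1 - (g * r)**2 / 2). Rays satisfy d2r/dz2 = -g**2 * r
    (sinusoidal paths); integrated numerically with a symplectic Euler
    step so arbitrary profiles could be swapped in. Illustrative only."""
    z = np.arange(0.0, z_max, dz)
    r, s = np.empty_like(z), np.empty_like(z)
    r[0], s[0] = r0, slope0
    for i in range(1, len(z)):
        s[i] = s[i - 1] - g**2 * r[i - 1] * dz   # update ray slope
        r[i] = r[i - 1] + s[i] * dz              # update ray height
    return z, r
```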
Figure 3 (with 1 supplement). Functional foci are not sampled by the highest-density retinal ganglion cell region. (A) Schematic of mouse eye model (upper left) with profile of all refractive indices (RI, lower left). Reconstructions of the optic disc (black) and the highest (>8000 cells/mm², beige) and second highest (>7000 cells/mm², brown) retinal ganglion cell (RGC) density regions, redrawn from Dräger and Olsen, 1981, shown at lower right. (B) Position in corneal views of the high RGC density regions (brown and beige filled regions) and isodensity contours from Figure 2H after projection through the eye model. Orientation as in Figure 2E. (C) Horizontal-axis histograms for the nasal half of the corneal view of the second highest RGC density region (brown) and the 50% isodensity contour for left (blue) and right (green) eyes. (D) Top-down view of the coverage regions for the right eye of the 50% isodensity contour (green, N = 7551 frames) and second highest RGC density region (brown, N = 51007 frames) for a single animal. Bars represent the probability density function for the respective regions at that azimuth angle. (E) Position in corneal views of Alpha-ON sustained RGC densities (redrawn from Bleckert et al., 2014) after projection through the eye model. Colored regions show the 95% (dark purple), 75% (medium purple), and 50% (light purple) contour regions of the peak Alpha-ON sustained RGC density. Isodensity contours from Figure 2H. (F) Top-down view of the coverage regions for the right eye of the 95% (dark purple), 75% (medium purple), and 50% (light purple) Alpha-ON sustained RGC contour regions (same as in E, N = 51007 frames) and the 50% isodensity contour from D (green) for a single animal. For the Alpha-ON sustained RGC contour regions, 50% means that the region contains all points with at least 50% of the peak RGC density.

Figure 3—source data 1 (related to Figure 3A): https://cdn.elifesciences.org/articles/70838/elife-70838-fig3-data1-v1.zip
Figure 3—source data 2 (related to Figure 3B): https://cdn.elifesciences.org/articles/70838/elife-70838-fig3-data2-v1.zip
Figure 3—source data 3 (related to Figure …)
Action potential (AP) patterns of sensory cortex neurons encode a variety of stimulus features, but how can a neuron change the feature to which it responds? Here, we show that in vivo a spike-timing-dependent plasticity (STDP) protocol, consisting of pairing a postsynaptic AP with visually driven presynaptic inputs, modifies a neuron's AP response in a bidirectional way that depends on the relative AP timing during pairing. Whereas postsynaptic APs repeatedly following presynaptic activation can convert subthreshold into suprathreshold responses, APs repeatedly preceding presynaptic activation reduce AP responses to visual stimulation. These changes were paralleled by restructuring of the neuron's responses to surrounding stimulus locations and of the membrane-potential time course. Computational simulations could reproduce the observed subthreshold voltage changes only when presynaptic temporal jitter was included. Together, this shows that STDP rules can modify the output patterns of sensory neurons and that the timing of single APs plays a crucial role in sensory coding and plasticity.
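The bidirectional timing dependence described above is commonly summarized by an exponential STDP window. The sketch below is a generic textbook form with illustrative amplitudes and time constant, not the fitted rule from the study's simulations.

```python
import numpy as np

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Classic exponential STDP window, dt_ms = t_post - t_pre.
    Post following pre (dt > 0) potentiates; post preceding pre (dt < 0)
    depresses, mirroring the bidirectional changes described above.
    Amplitudes and time constant are illustrative, not fitted values."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_ms),
                    -a_minus * np.exp(dt / tau_ms))
```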
We describe a miniaturized head-mounted multiphoton microscope and its use for recording Ca(2+) transients from the somata of layer 2/3 neurons in the visual cortex of awake, freely moving rats. Images contained up to 20 neurons and were stable enough to record continuously for >5 min per trial and 20 trials per imaging session, even as the animal was running at velocities of up to 0.6 m/s. Neuronal Ca(2+) transients were readily detected, and responses to various static visual stimuli were observed during free movement on a running track. Neuronal activity was sparse and increased when the animal swept its gaze across a visual stimulus. Neurons showing preferential activation by specific stimuli were observed in freely moving animals. These results demonstrate that the multiphoton fiberscope is suitable for functional imaging in awake and freely moving animals.
The visual callosal pathway, which reciprocally connects the primary visual cortices, is thought to play a pivotal role in cortical binocular processing. In rodents, the functional role of this pathway is largely unknown. Here, we measure visual cortex spiking responses to visual stimulation using population calcium imaging and functionally isolate visual pathways originating from either eye. We show that callosal pathway inhibition significantly reduced spiking responses in binocular and monocular neurons and abolished spiking in many cases. However, once isolated by blocking ipsilateral visual thalamus, callosal pathway activation alone is not sufficient to drive evoked cortical responses. We show that the visual callosal pathway relays activity from both eyes via both ipsilateral and contralateral visual pathways to monocular and binocular neurons and works in concert with ipsilateral thalamus in generating stimulus-evoked activity. This indicates a much greater role for the rodent callosal pathway in cortical processing than previously thought.