It is an integral function of the human brain to sample novel information from the environment and to integrate it into existing representations. Recent evidence suggests a specific role for the theta rhythm (4–8 Hz) in mnemonic processes and for the coupling between the theta and the gamma rhythm (40–120 Hz) in ordering and binding perceptual features during encoding. Furthermore, decreases in the alpha rhythm (8–12 Hz) are assumed to gate perceptual information processing in semantic networks. In the present study, we used an associative memory task (object-color combinations) with pictures versus words as stimuli (high versus low visual information) to separate associative memory from visual perceptual processes during memory formation. We found increased theta power for later remembered versus later forgotten items (independent of the color judgement) and an increase in phase-amplitude coupling between frontal theta and fronto-temporal gamma oscillations that was specific to the formation of picture-color associations. Furthermore, parietal alpha suppression and gamma power were higher for pictures than for words. These findings support the idea of a theta-gamma code in binding visual perceptual features during encoding. Alpha suppression likely reflects perceptual gating processes in semantic networks and is insensitive to mnemonic and associative binding processes. Gamma oscillations may promote visual perceptual information in visual cortical networks, which is integrated into existing representations by prefrontal control processes working at a theta pace.
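As a concrete illustration of the phase-amplitude coupling measure referred to above, the following is a minimal Python sketch that extracts theta phase and gamma amplitude with Hilbert transforms and computes a mean-vector-length modulation index. The filter settings, sampling rate, and synthetic signal are illustrative assumptions, not the analysis pipeline of the study.

```python
# Hypothetical example: theta-gamma phase-amplitude coupling via the mean vector
# length (modulation index). Filter bands, sampling rate, and the synthetic signal
# are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(40, 120)):
    """Mean vector length of gamma amplitude weighted by theta phase."""
    theta_phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    gamma_amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)

if __name__ == "__main__":
    fs = 500.0
    t = np.arange(0, 10, 1 / fs)
    theta = np.sin(2 * np.pi * 6 * t)                        # 6 Hz theta carrier
    gamma = 0.3 * (1 + theta) * np.sin(2 * np.pi * 80 * t)   # 80 Hz gamma, theta-modulated
    x = theta + gamma + 0.1 * np.random.randn(t.size)
    print("modulation index: %.3f" % modulation_index(x, fs))
```

A coupled signal yields a clearly positive index, whereas shuffling the gamma amplitude relative to the theta phase would drive it toward zero.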
The most widely used activation function in current deep feed-forward neural networks is the rectified linear unit (ReLU), although many alternatives have been applied successfully as well. However, none of these alternatives has managed to consistently outperform the rest, and there is no unified theory connecting properties of the task and the network with properties of the activation function for most efficient training. A possible solution is to let the network learn its preferred activation functions. In this work, we introduce Adaptive Blending Units (ABUs), a trainable linear combination of a set of activation functions. Since ABUs learn the shape as well as the overall scaling of the activation function, we also analyze the effects of adaptive scaling in common activation functions. We experimentally demonstrate advantages of both adaptive scaling and ABUs over common activation functions across a set of systematically varied network specifications. We further show that adaptive scaling works by mitigating covariate shifts during training, and that the observed advantages in performance of ABUs likewise rely largely on the activation function's ability to adapt over the course of training.
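A minimal sketch of an Adaptive Blending Unit as described above, written here in PyTorch: a set of candidate activation functions is blended by trainable weights, so the unit can adapt both the shape and the overall scaling of its activation. The particular candidate set and the uniform weight initialization are assumptions for illustration.

```python
# Hypothetical sketch of an Adaptive Blending Unit (ABU): a trainable linear
# combination of candidate activation functions. The candidate set and the
# uniform weight initialization are assumptions for illustration.
import torch
import torch.nn as nn

class AdaptiveBlendingUnit(nn.Module):
    def __init__(self, activations=None):
        super().__init__()
        # Candidate activation functions blended by learnable weights.
        self.activations = activations or [
            torch.tanh, torch.relu, torch.sigmoid, lambda x: x,  # identity
        ]
        # Unconstrained blending weights: learning them adapts the shape of the
        # activation function and, implicitly, its overall scaling.
        n = len(self.activations)
        self.alpha = nn.Parameter(torch.full((n,), 1.0 / n))

    def forward(self, x):
        return sum(a * f(x) for a, f in zip(self.alpha, self.activations))

if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(16, 32), AdaptiveBlendingUnit(), nn.Linear(32, 1))
    net(torch.randn(8, 16)).sum().backward()   # gradients reach the blending weights
    print(net[1].alpha.grad)
```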
A binaural spike-based sound localization system suitable for real-time attention systems of robots is presented in this work. The system uses the output spikes from a 64-channel binaural silicon cochlea. The localization algorithm implements a spike-based correlation of these output spikes that measures the interaural time differences (ITDs) between the arrival of a sound at the two microphones. The algorithm continuously updates a distribution of possible ITDs in the auditory scene whenever a spike arrives from either ear of the cochlea. Experimental results show that the system can estimate the azimuth angle with a resolution of better than 1 degree. The algorithm is computationally cheaper than conventional generalized cross-correlation methods and responds more quickly to changes in the scene.
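The following is a minimal Python sketch of the spike-based ITD idea: each spike from one ear is paired with recent spikes from the matching channel of the other ear, a histogram over possible ITDs is updated accordingly, and the peak is converted to an azimuth estimate. Channel matching, window length, microphone spacing, and the synthetic spike trains are assumptions, not the cochlea hardware or the exact algorithm of the system.

```python
# Hypothetical sketch: update a histogram over possible ITDs from paired left/right
# spike times of one frequency channel, then convert the peak ITD to an azimuth.
# Microphone spacing, window length, and the synthetic spike trains are assumptions.
import numpy as np

MAX_ITD = 0.7e-3          # s, roughly the physical limit for ~24 cm microphone spacing
N_BINS = 64
SPEED_OF_SOUND = 343.0    # m/s
MIC_DISTANCE = 0.24       # m (assumed)

def itd_histogram(left_spikes, right_spikes):
    """Accumulate an ITD histogram from two sorted spike-time arrays (one channel)."""
    hist = np.zeros(N_BINS)
    edges = np.linspace(-MAX_ITD, MAX_ITD, N_BINS + 1)
    j = 0
    for t_left in left_spikes:
        # skip right spikes that are too old to pair with this left spike
        while j < len(right_spikes) and right_spikes[j] < t_left - MAX_ITD:
            j += 1
        k = j
        while k < len(right_spikes) and right_spikes[k] < t_left + MAX_ITD:
            hist[np.searchsorted(edges, t_left - right_spikes[k]) - 1] += 1
            k += 1
    return hist, edges

def azimuth_from_itd(itd):
    """Convert an ITD estimate (s) to an azimuth angle (degrees)."""
    return np.degrees(np.arcsin(np.clip(itd * SPEED_OF_SOUND / MIC_DISTANCE, -1, 1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = np.sort(rng.uniform(0, 1, 200))
    right = np.sort(left + 0.2e-3 + 20e-6 * rng.standard_normal(200))  # ~0.2 ms lag
    hist, edges = itd_histogram(left, right)
    centers = 0.5 * (edges[:-1] + edges[1:])
    print("estimated azimuth: %.1f deg" % azimuth_from_itd(centers[np.argmax(hist)]))
```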
Sleep promotes the consolidation of newly acquired associative memories. Here we used neuronal oscillations in the human EEG to investigate sleep-dependent changes in the cortical memory trace. The retrieval activity for object-color associations was assessed immediately after encoding and after 3 hr of sleep or wakefulness. Sleep had beneficial effects on memory performance and led to reduced event-related theta and gamma power during the retrieval of associative memories. Furthermore, event-related alpha suppression was attenuated in the wake group for memorized and novel stimuli. There were no sleep-dependent changes in retrieval activity for missed items or items retrieved without color. Thus, the sleep-dependent reduction in theta and gamma oscillations was specific to the retrieval of associative memories. In line with theoretical accounts of sleep-dependent memory consolidation, decreased theta may indicate reduced mediotemporal activity because of a transfer of information into neocortical networks during sleep, whereas reduced parietal gamma may reflect effects of synaptic downscaling. Changes in alpha suppression in the wake group possibly index reduced attentional resources that may also contribute to the lower memory performance in this group. These findings indicate that the consolidation of associative memories during sleep is associated with profound changes in the cortical memory trace and relies on multiple neuronal processes working in concert.
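For readers unfamiliar with event-related band power, the following is a minimal Python sketch of one common way to compute it (band-pass filtering plus Hilbert envelope, expressed as percent change from a pre-stimulus baseline). The band edges, baseline window, and synthetic trials are assumptions; the study's actual time-frequency pipeline may differ.

```python
# Hypothetical sketch of event-related band power: band-pass filter single trials,
# take the Hilbert envelope, and express post-stimulus power as percent change from
# a pre-stimulus baseline. Band edges, baseline window, and synthetic trials are
# assumptions only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def event_related_power(trials, fs, band=(4, 8), baseline=(-0.5, 0.0), onset=1.0):
    """Percent power change vs. baseline, averaged over trials (trials x samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1)) ** 2
    times = np.arange(trials.shape[1]) / fs - onset          # stimulus onset at t = 0
    base = power[:, (times >= baseline[0]) & (times < baseline[1])].mean(axis=1, keepdims=True)
    return times, 100.0 * (power / base - 1.0).mean(axis=0)

if __name__ == "__main__":
    fs, n_trials, n_samples = 250, 30, 500                   # 2 s epochs, onset at 1 s
    rng = np.random.default_rng(0)
    t = np.arange(n_samples) / fs
    trials = rng.standard_normal((n_trials, n_samples))
    trials[:, t > 1.0] += 0.8 * np.sin(2 * np.pi * 6 * t[t > 1.0])  # post-stimulus theta
    times, erp = event_related_power(trials, fs)
    print("mean theta power change after onset: %.1f %%" % erp[times > 0.2].mean())
```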
In this study, we investigate whether phase-locking of fast oscillatory activity relies on the anatomical skeleton and whether simple computational models informed by structural connectivity can help explain missing links in the structure-function relationship. We use diffusion tensor imaging data and alpha band-limited EEG signals recorded in a group of healthy individuals. Our results show that about 23.4% of the variance in empirical networks of resting-state functional connectivity is explained by the underlying white matter architecture. Simulating functional connectivity using a simple computational model based on the structural connectivity increases the match to 45.4%. In a second step, we use our modeling framework to explore several technical alternatives along the modeling path. First, we find that an augmentation of homotopic connections in the structural connectivity matrix improves the link to functional connectivity, whereas a correction for fiber distance slightly decreases the performance of the model. Second, a more complex computational model based on Kuramoto oscillators leads to a slight improvement of the model fit. Third, we show that the comparison of modeled and empirical functional connectivity at the source level is much more specific for the underlying structural connectivity; however, different source reconstruction algorithms gave comparable results. Fourth, the model fit was much better if zero-phase-lag components were preserved in the empirical functional connectome, indicating that a considerable amount of functionally relevant synchrony takes place at zero or near-zero phase lag. The combination of the best-performing alternatives at each stage of the pipeline results in a model that explains 54.4% of the variance in the empirical EEG functional connectivity. Our study shows that large-scale brain circuits of fast neural network synchrony strongly rely on the structural connectome and that simple computational models of neural activity can explain missing links in the structure-function relationship.
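A minimal Python sketch of the modeling idea above: phase oscillators (here a Kuramoto model) are coupled through a structural connectivity matrix, a phase-locking matrix is computed from the simulated phases, and the variance explained in an empirical functional connectivity matrix is assessed via the squared correlation of the upper-triangular entries. Network size, coupling strength, noise level, and the stand-in "empirical" matrix are illustrative assumptions.

```python
# Hypothetical sketch: Kuramoto oscillators coupled by a structural connectivity
# matrix, a phase-locking matrix computed from the simulated phases, and the
# variance explained in an "empirical" FC matrix. Network size, coupling, noise,
# and the stand-in empirical matrix are illustrative assumptions.
import numpy as np

def simulate_kuramoto(C, f=10.0, k=5.0, dt=1e-3, steps=20000, noise=0.5, seed=0):
    """Euler-Maruyama integration of Kuramoto oscillators coupled by matrix C."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)
    omega = 2 * np.pi * f * np.ones(n)                       # ~10 Hz alpha-band oscillators
    phases = np.empty((steps, n))
    for step in range(steps):
        coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + k * coupling) + np.sqrt(dt) * noise * rng.standard_normal(n)
        phases[step] = theta
    return phases

def plv_matrix(phases):
    """Pairwise phase-locking values from a (time x nodes) phase array."""
    z = np.exp(1j * phases)
    return np.abs(z.conj().T @ z) / phases.shape[0]

def variance_explained(sim_fc, emp_fc):
    """Squared correlation of the upper-triangular entries of two FC matrices."""
    iu = np.triu_indices_from(sim_fc, k=1)
    return np.corrcoef(sim_fc[iu], emp_fc[iu])[0, 1] ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sc = rng.random((20, 20)); sc = (sc + sc.T) / 2; np.fill_diagonal(sc, 0)
    sc /= sc.sum(axis=1, keepdims=True)                      # row-normalize the coupling
    sim_fc = plv_matrix(simulate_kuramoto(sc))
    noise = 0.1 * rng.standard_normal((20, 20))
    emp_fc = sim_fc + (noise + noise.T) / 2                  # stand-in "empirical" FC
    print("variance explained: %.3f" % variance_explained(sim_fc, emp_fc))
```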
A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports on cue combination between a native and an augmented sense. Here, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For this purpose, we built a tactile augmentation device and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced-choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the larger rotation angle. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). We then compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χ²_red = 1.67) than the Bayesian integration model (χ²_red = 4.34). A non-Bayesian winner-takes-all model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (χ²_red = 1.64). However, the performance of the Bayesian alternation model could be improved substantially (χ²_red = 1.09) by using subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in untrained humans is combined via a subjective Bayesian alternation process. We therefore conclude that behavior in our bimodal condition is explained better by top-down subjective weighting than by bottom-up weighting based on objective cue reliability.
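A minimal Python sketch of the psychophysical analysis described above: responses are fitted with a probit (cumulative Gaussian) model, the JND is taken from the fitted slope, and the bimodal JND predicted by mandatory Bayesian integration is computed from the two unimodal JNDs. The synthetic responses and the definition of the JND as the probit sigma are assumptions.

```python
# Hypothetical sketch: fit a probit (cumulative Gaussian) psychometric function to
# 2IFC data, take the JND from the fitted sigma, and compute the bimodal JND
# predicted by reliability-weighted (Bayesian) integration. The synthetic responses
# and the JND definition are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit(delta, mu, sigma):
    """Probability of judging the comparison rotation as larger."""
    return norm.cdf(delta, loc=mu, scale=abs(sigma))

def fit_jnd(levels, proportions):
    """Fit the probit model and return the JND (sigma of the cumulative Gaussian)."""
    (mu, sigma), _ = curve_fit(probit, levels, proportions, p0=[0.0, 5.0])
    return abs(sigma)

def predicted_integration_jnd(jnd_native, jnd_augmented):
    """Bimodal JND under mandatory Bayesian integration of the two cues."""
    return np.sqrt((jnd_native ** 2 * jnd_augmented ** 2) /
                   (jnd_native ** 2 + jnd_augmented ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    levels = np.linspace(-20, 20, 9)                         # rotation differences (deg)

    def simulate_proportions(jnd, n_trials=30):
        # proportion of "comparison larger" responses per stimulus level
        return np.array([(d + jnd * rng.standard_normal(n_trials) > 0).mean() for d in levels])

    jnd_native = fit_jnd(levels, simulate_proportions(6.0))      # rotation-only condition
    jnd_augmented = fit_jnd(levels, simulate_proportions(9.0))   # tactile-only condition
    print("predicted bimodal JND under integration: %.2f deg"
          % predicted_integration_jnd(jnd_native, jnd_augmented))
```

Comparing such a predicted bimodal JND with the measured one is what distinguishes the integration account from the alternation and winner-takes-all accounts tested above.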
Synchronization has been suggested as a mechanism for binding distributed feature representations, facilitating the segmentation of visual stimuli. Here we investigate this concept based on unsupervised learning using natural visual stimuli. We simulate dual-variable neural oscillators with separate activation and phase variables. The binding of a set of neurons is coded by synchronized phase variables. The network of tangential synchronizing connections learned from the induced activations exhibits small-world properties and allows binding even over larger distances. We evaluate the resulting dynamic phase maps using segmentation masks labeled by human experts. Our simulation results show a continuously increasing phase synchrony between neurons within the labeled segmentation masks. The evaluation of the network dynamics shows that the synchrony between network nodes establishes a relational coding of the natural image inputs. This demonstrates that the concept of binding by synchrony is applicable in the context of unsupervised learning using natural visual stimuli.
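A minimal Python sketch of the dual-variable oscillator idea: each unit carries an activation and a phase, phases are pulled together through activation-weighted lateral connections, and binding is read out as pairwise phase synchrony within versus between two labeled segments. Network size, coupling constants, and the fixed activations are illustrative assumptions rather than the learned connectivity used in the study.

```python
# Hypothetical sketch of dual-variable units: fixed activations plus phase variables
# coupled through activation-weighted lateral connections; binding is read out as
# pairwise phase synchrony. Network size, coupling, and noise are assumptions, and
# the connectivity is hand-crafted here rather than learned from natural images.
import numpy as np

def update_phases(theta, a, W, k=2.0, dt=0.05, noise=0.1, rng=None):
    """One Euler step of activation-weighted phase coupling."""
    if rng is None:
        rng = np.random.default_rng()
    drive = (W * np.outer(a, a) * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * k * drive + np.sqrt(dt) * noise * rng.standard_normal(theta.size)

def pairwise_synchrony(theta):
    """|cos(phase difference)| between all unit pairs as a simple binding measure."""
    return np.abs(np.cos(theta[:, None] - theta[None, :]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 40
    labels = np.repeat([0, 1], n // 2)                        # two image "segments"
    W = (labels[:, None] == labels[None, :]).astype(float)    # links only within segments
    np.fill_diagonal(W, 0)
    W /= W.sum(axis=1, keepdims=True)
    a = np.ones(n)                                            # activations held constant here
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(500):
        theta = update_phases(theta, a, W, rng=rng)
    sync = pairwise_synchrony(theta)
    same = labels[:, None] == labels[None, :]
    print("within-segment synchrony %.2f, between-segment %.2f"
          % (sync[same].mean(), sync[~same].mean()))
```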
Here we use computational modeling of fast neural dynamics to explore the relationship between structural and functional coupling in a population of healthy subjects. We use DTI data to estimate structural connectivity and subsequently model phase couplings from band-limited oscillatory signals derived from multichannel EEG data. Our results show that about 23.4% of the variance in empirical networks of resting-state fast oscillations is explained by the underlying white matter architecture. By simulating functional connectivity using a simple reference model, the match between simulated and empirical functional connectivity further increases to 45.4%. In a second step, we use our modeling framework to explore several technical alternatives along the modeling path. First, we find that an augmentation of homotopic connections in the structural connectivity matrix improves the link to functional connectivity, whereas a correction for fiber distance slightly decreases the performance of the model. Second, a more complex computational model based on Kuramoto oscillators leads to a slight improvement of the model fit. Third, we show that the comparison of modeled and empirical functional connectivity at the source level is much more specific for the underlying structural connectivity; however, different source reconstruction algorithms gave comparable results. Fourth, the model fit was much better if zero-phase-lag components were preserved in the empirical functional connectome, indicating that a considerable amount of functionally relevant synchrony takes place at zero or near-zero phase lag. The combination of the best-performing alternatives at each stage of the pipeline results in a model that explains 54.4% of the variance in the empirical EEG functional connectivity. Our study shows that large-scale brain circuits of fast neural network synchrony strongly rely on the structural connectome and that simple computational models of neural activity can explain missing links in the structure-function relationship.

Author Summary: Brain imaging techniques are broadly divided into the two categories of structural and functional imaging. Structural imaging provides information about the static physical connectivity within the brain, while functional imaging provides data about the dynamic ongoing activation of brain areas. Computational models allow us to bridge the gap between these two modalities and to gain new insights. Specifically, in this study, we use structural data from diffusion tractography recordings to model functional brain connectivity obtained from fast EEG dynamics. First, we present a simple reference procedure that consists of several steps to link the structural to the functional empirical data. Second, we systematically compare several alternative methods along the modeling path in order to assess their impact on the overall fit between simulations and empirical data. We explore preprocessing steps of the structural connectivity and different levels of complexity of the computational model. We highlight the importance of source reconstruction and compare commonly used source reconstruction algorithms and metrics used to assess functional connectivity. Our results serve as an important orienting frame for the emerging field of brain network modeling.
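As a side note on the zero-phase-lag point made above, the following minimal Python sketch contrasts two common connectivity metrics: the phase-locking value, which retains zero-lag synchrony, and the imaginary part of coherency, which discards it. The two synthetic signals share a common source at (near) zero lag, so the PLV is high while the imaginary coherency stays close to zero. Signals and parameters are illustrative assumptions.

```python
# Hypothetical sketch: the phase-locking value (PLV) retains zero-phase-lag
# synchrony, whereas the imaginary part of coherency discards it. The two
# synthetic signals share a common source at (near) zero lag.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: magnitude of the mean phase-difference vector."""
    px, py = np.angle(hilbert(x)), np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (px - py))))

def imaginary_coherency(x, y):
    """Magnitude of the imaginary part of the Hilbert-based coherency."""
    zx, zy = hilbert(x), hilbert(y)
    coh = np.mean(zx * np.conj(zy)) / np.sqrt(np.mean(np.abs(zx) ** 2) * np.mean(np.abs(zy) ** 2))
    return np.abs(np.imag(coh))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 5, 1 / 250)
    source = np.sin(2 * np.pi * 10 * t)                      # shared alpha-band source
    x = source + 0.3 * rng.standard_normal(t.size)           # zero-lag mixture
    y = source + 0.3 * rng.standard_normal(t.size)           # zero-lag mixture
    print("PLV: %.2f   imaginary coherency: %.2f" % (plv(x, y), imaginary_coherency(x, y)))
```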
Dynamic communication and routing play important roles in the human brain in order to facilitate flexibility in task solving and thought processes. Here, we present a network perturbation methodology that allows us to investigate dynamic switching between different network pathways based on phase offsets between two external oscillatory drivers. We apply this method in a computational model of the human connectome with delay-coupled neural masses. To analyze dynamic switching of pathways, we define four new metrics that measure dynamic network response properties for pairs of stimulated nodes. Evaluating these metrics for all network pathways, we found a broad spectrum of pathways with distinct dynamic properties and switching behaviors. We show that network pathways can have characteristic timescales and thus specific preferences for the phase lag between the regions they connect. Specifically, we identified pairs of network nodes whose connecting paths can either be (1) insensitive to the phase relationship between the node pair, (2) turned on and off via changes in the phase relationship, or (3) switched between via changes in the phase relationship. Regarding the latter, we found that 33% of node pairs can switch their communication from one pathway to another depending on their phase offsets. This reveals a potential mechanistic role that phase offsets and coupling delays might play in the dynamic routing of information via communication pathways in the brain.
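A minimal Python sketch of why coupling delays give a pathway a preferred phase lag: a target node sums oscillatory input arriving over two pathways with different conduction delays from two driven sources, and the phase offset between the drivers determines whether the inputs arrive in or out of phase at the target, effectively turning the combined drive up or down. The frequencies and delays are illustrative assumptions, not the delay-coupled neural-mass model of the study.

```python
# Hypothetical sketch: a target node sums oscillatory input arriving over two
# pathways with different conduction delays; the phase offset between the two
# drivers determines whether the inputs add constructively or destructively at the
# target. Frequencies and delays are illustrative assumptions.
import numpy as np

def target_response(phase_offset, f=10.0, delay_a=0.012, delay_b=0.037,
                    dt=1e-4, duration=1.0):
    """RMS amplitude at the target when the two delayed pathway inputs are summed."""
    t = np.arange(0, duration, dt)
    omega = 2 * np.pi * f
    input_a = np.sin(omega * (t - delay_a))                   # pathway A, short delay
    input_b = np.sin(omega * (t - delay_b) + phase_offset)    # pathway B, long delay
    return np.sqrt(np.mean((input_a + input_b) ** 2))

if __name__ == "__main__":
    for offset in np.linspace(0, 2 * np.pi, 9):
        print("phase offset %.2f rad -> target RMS %.2f" % (offset, target_response(offset)))
```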