Non-linear regression models for behavioral and neural data analysis

2020 
Regression models are popular tools in the empirical sciences for inferring the influence of a set of variables on a dependent variable, given an experimental dataset. In neuroscience and cognitive psychology, Generalized Linear Models (GLMs), including linear regression, logistic regression, and the Poisson GLM, are the regression models of choice for studying the factors that drive participants' choices, reaction times, and neural activations. These methods are limited, however, in that they capture only linear contributions of each regressor. Here, we introduce an extension of GLMs called Generalized Unrestricted Models (GUMs), which allows inference of a much richer set of contributions of the regressors to the dependent variable, including possible interactions between regressors. In a GUM, each regressor is passed through a linear or nonlinear function, and the resulting transformed regressors can be summed or multiplied to generate a predictor for the dependent variable. We propose a Bayesian treatment of these models in which the functions are endowed with Gaussian Process priors, and we present two methods for computing a posterior over the functions given a dataset: the Laplace method and a sparse variational approach, which scales better to large datasets. For each method, we assess the quality of the model estimation and detail how the hyperparameters (defining, for example, the expected smoothness of a function) can be fitted. Finally, we illustrate the power of the method on a behavioral dataset in which subjects reported the average perceived orientation of a series of gratings. The method recovers, for each subject, the mapping of grating angle onto perceptual evidence, as well as the weighting of each grating based on its position. Overall, GUMs provide a rich and flexible framework for nonlinear regression analysis in neuroscience, psychology, and beyond.
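To make the additive case concrete, here is a minimal sketch (not the paper's implementation) of the core idea: a predictor built as a sum of unknown functions of individual regressors, f(x) = f1(x1) + f2(x2), each with a Gaussian Process prior. In GP terms this is equivalent to a single GP whose kernel is the sum of one kernel per regressor, so the exact posterior mean has a closed form. The kernel choice (squared exponential) and all numerical values below are illustrative assumptions; the paper's Laplace and sparse variational methods, multiplicative interactions, and non-Gaussian likelihoods are not shown.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel on 1-D inputs a, b.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def additive_gp_posterior_mean(X, y, Xstar, noise=0.1):
    # Additive GUM-style model: f(x) = f1(x[:, 0]) + f2(x[:, 1]).
    # Summing one kernel per regressor yields the prior of the summed GP.
    K = rbf(X[:, 0], X[:, 0]) + rbf(X[:, 1], X[:, 1])
    Ks = rbf(Xstar[:, 0], X[:, 0]) + rbf(Xstar[:, 1], X[:, 1])
    # Standard GP regression posterior mean: Ks (K + sigma^2 I)^-1 y.
    alpha = np.linalg.solve(K + noise ** 2 * np.eye(len(y)), y)
    return Ks @ alpha

# Synthetic data: two regressors with different nonlinear contributions.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

mu = additive_gp_posterior_mean(X, y, X)
rmse = np.sqrt(np.mean((mu - y) ** 2))  # in-sample fit, near the noise level
```

Note that this exact posterior costs O(n^3) in the number of data points, which is precisely why the abstract's sparse variational approach matters for large datasets.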