Moving-average model

In time series analysis, the moving-average model (MA model), also known as the moving-average process, is a common approach for modeling univariate time series. The moving-average model specifies that the output variable depends linearly on the current and various past values of a stochastic (imperfectly predictable) term. Together with the autoregressive (AR) model, the moving-average model is a special case and key component of the more general ARMA and ARIMA models of time series, which have a more complicated stochastic structure. The moving-average model should not be confused with the moving average, a distinct concept despite some similarities. Contrary to the AR model, the finite MA model is always stationary.

The notation MA(q) refers to the moving-average model of order q:

    X_t = μ + ε_t + θ_1 ε_{t−1} + ⋯ + θ_q ε_{t−q},

where μ is the mean of the series, θ_1, ..., θ_q are the parameters of the model, and ε_t, ε_{t−1}, ..., ε_{t−q} are white-noise error terms. The value of q is called the order of the MA model. This can be written equivalently in terms of the backshift operator B as

    X_t = μ + (1 + θ_1 B + θ_2 B^2 + ⋯ + θ_q B^q) ε_t.

Thus, a moving-average model is conceptually a linear regression of the current value of the series against current and previous (observed) white-noise error terms or random shocks. The random shocks at each point are assumed to be mutually independent and to come from the same distribution, typically a normal distribution, with location at zero and constant scale. The moving-average model is essentially a finite impulse response filter applied to white noise, with some additional interpretation placed on it.

The role of the random shocks in the MA model differs from their role in the autoregressive (AR) model in two ways. First, they are propagated to future values of the time series directly: for example, ε_{t−1} appears directly on the right side of the equation for X_t. In contrast, in an AR model ε_{t−1} does not appear on the right side of the X_t equation, but it does appear on the right side of the X_{t−1} equation, and X_{t−1} appears on the right side of the X_t equation, giving only an indirect effect of ε_{t−1} on X_t. Second, in the MA model a shock affects X values only for the current period and q periods into the future; in contrast, in the AR model a shock affects X values infinitely far into the future, because ε_t affects X_t, which affects X_{t+1}, which affects X_{t+2}, and so on forever (see Vector autoregression § Impulse response). This finite impulse response is illustrated in the simulation sketch below.

Fitting MA estimates is more complicated than fitting AR models, because the lagged error terms are not observable. This means that iterative non-linear fitting procedures need to be used in place of linear least squares; a fitting sketch follows the simulation below.
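To make the definition and the finite impulse response concrete, here is a minimal simulation sketch in NumPy. The parameter values (μ = 10, θ_1 = 0.6, θ_2 = −0.3) and all variable names are illustrative assumptions, not from the source. It generates an MA(2) series from Gaussian white noise and then passes a single unit shock through the same coefficients to show that the series responds only for the current period and q = 2 periods after.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative MA(2): X_t = mu + eps_t + 0.6*eps_{t-1} - 0.3*eps_{t-2}
mu, thetas = 10.0, np.array([0.6, -0.3])
n, q = 500, len(thetas)

eps = rng.standard_normal(n + q)            # white-noise shocks, N(0, 1)
coeffs = np.concatenate(([1.0], thetas))    # impulse response (1, theta_1, theta_2)

# Each X_t is the dot product of (eps_t, eps_{t-1}, eps_{t-2}) with coeffs, plus mu.
x = mu + np.convolve(eps, coeffs, mode="valid")   # length-n MA(2) series
print(x.mean())                                   # close to mu = 10.0

# Finite impulse response: a unit shock at time 5 moves the series only at
# times 5, 6, and 7; every later value is unaffected.
shock = np.zeros(20)
shock[5] = 1.0
response = np.convolve(shock, coeffs, mode="full")[:20]
print(response[3:10])   # [ 0.   0.   1.   0.6 -0.3  0.   0. ]
```

Running the same experiment with an AR coefficient instead would show a geometrically decaying, never exactly zero response, which is the contrast drawn in the text.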

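The fitting point can also be sketched in code. Because the shocks ε_t are unobserved, one common iterative approach is a conditional sum-of-squares fit: reconstruct the shocks recursively from candidate parameters, with presample shocks fixed at zero, and minimize the sum of squared reconstructed shocks with a numerical optimizer. The sketch below follows that approach under those assumptions; the function name `css` and the starting values are illustrative, and this is not a production estimator (library routines such as statsmodels' ARIMA use more refined likelihood-based methods).

```python
import numpy as np
from scipy.optimize import minimize

# Regenerate the same illustrative MA(2) series as in the simulation sketch.
rng = np.random.default_rng(0)
mu_true, thetas_true = 10.0, np.array([0.6, -0.3])
q = len(thetas_true)
eps = rng.standard_normal(500 + q)
x = mu_true + np.convolve(eps, np.concatenate(([1.0], thetas_true)), mode="valid")

def css(params, x, q):
    """Conditional sum of squares for an MA(q) model.

    The lagged shocks are unobservable, so for a candidate
    (mu, theta_1, ..., theta_q) they are reconstructed recursively:
        eps_t = X_t - mu - theta_1*eps_{t-1} - ... - theta_q*eps_{t-q},
    conditioning on presample shocks equal to zero.
    """
    mu, thetas = params[0], params[1:]
    eps_hat = np.zeros(len(x) + q)          # first q entries: presample shocks = 0
    for t in range(len(x)):
        # eps_hat[t:t+q][::-1] holds (eps_{t-1}, ..., eps_{t-q})
        eps_hat[t + q] = x[t] - mu - thetas @ eps_hat[t:t + q][::-1]
    return np.sum(eps_hat[q:] ** 2)

# The objective is non-linear in the thetas, so linear least squares does not
# apply; an iterative optimizer is used, starting from (sample mean, 0, ..., 0).
x0 = np.concatenate(([x.mean()], np.zeros(q)))
result = minimize(css, x0, args=(x, q), method="Nelder-Mead")
print(result.x)   # approximately (10.0, 0.6, -0.3)
```

The recursion inside `css` is exactly why the fit must be iterative: each candidate parameter vector implies a different sequence of reconstructed shocks, so the regressors themselves change with the parameters.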
Related topics: Autocorrelation, Autoregressive integrated moving average, Time series, Autoregressive model, Moving average