In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise, and the initial state, likewise, is assumed to be a Gaussian random vector.

Under these assumptions an optimal control scheme within the class of linear control laws can be derived by a completion-of-squares argument. This control law, known as the LQG controller, is unique and is simply a combination of a Kalman filter (a linear–quadratic state estimator, LQE) with a linear–quadratic regulator (LQR). The separation principle states that the state estimator and the state feedback can be designed independently. LQG control applies to both linear time-invariant and linear time-varying systems, and constitutes a linear dynamic feedback control law that is easily computed and implemented: the LQG controller itself is a dynamic system, like the system it controls, and both systems have the same state dimension.

A deeper statement of the separation principle is that the LQG controller remains optimal within a wider class of possibly nonlinear controllers; that is, utilizing a nonlinear control scheme will not improve the expected value of the cost functional.
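The separation principle described above can be sketched numerically: for a time-invariant system, the LQR gain and the Kalman gain are computed independently, each from its own algebraic Riccati equation. The following is a minimal illustration using SciPy; the system and weighting matrices are illustrative assumptions, not taken from the text.

```python
# Sketch of the separation principle for an infinite-horizon LQG design:
# the regulator gain K and the estimator gain L are obtained from two
# independent continuous algebraic Riccati equations.
# All matrices below are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

# System: dx/dt = A x + B u + v,   y = C x + w
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Q = np.eye(2)           # state cost weight
R = np.array([[1.0]])   # control cost weight
V = 0.1 * np.eye(2)     # system (process) noise intensity
W = np.array([[0.01]])  # measurement noise intensity

# LQR gain: solve the control Riccati equation for P, then K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # state feedback u = -K x_hat

# Kalman gain: solve the dual (filter) Riccati equation for S, then
# L = S C^T W^{-1}.
S = solve_continuous_are(A.T, C.T, V, W)
L = S @ C.T @ np.linalg.inv(W)

# The LQG controller is the Kalman filter driven by u = -K x_hat:
#   d(x_hat)/dt = A x_hat + B u + L (y - C x_hat)
print("K =", K)
print("L =", L.ravel())
```

Note that the two `solve_continuous_are` calls share no data: changing the noise intensities `V`, `W` leaves `K` untouched, and changing the cost weights `Q`, `R` leaves `L` untouched, which is exactly the independence the separation principle asserts.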
This version of the separation principle is a special case of the separation principle of stochastic control, which states that even when the process and output noise sources are possibly non-Gaussian martingales, as long as the system dynamics are linear, the optimal control separates into an optimal state estimator (which may no longer be a Kalman filter) and an LQR regulator.

In the classical LQG setting, implementation of the LQG controller may be problematic when the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing a priori the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable, and the solution is no longer unique. Despite these facts, numerical algorithms are available to solve the associated optimal projection equations, which constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller.

LQG optimality does not automatically ensure good robustness properties. The robust stability of the closed-loop system must be checked separately after the LQG controller has been designed. To promote robustness, some of the system parameters may be assumed stochastic instead of deterministic. The associated, more difficult control problem leads to a similar optimal controller of which only the controller parameters are different. Finally, the LQG controller is also used to control perturbed nonlinear systems.

Consider the continuous-time linear dynamic system

$$\dot{\mathbf{x}}(t) = A(t)\,\mathbf{x}(t) + B(t)\,\mathbf{u}(t) + \mathbf{v}(t),$$
$$\mathbf{y}(t) = C(t)\,\mathbf{x}(t) + \mathbf{w}(t),$$

where $\mathbf{x}$ represents the vector of state variables of the system, $\mathbf{u}$ the vector of control inputs, and $\mathbf{y}$ the vector of measured outputs available for feedback. Both additive white Gaussian system noise $\mathbf{v}(t)$ and additive white Gaussian measurement noise $\mathbf{w}(t)$ affect the system.
Given this system, the objective is to find the control input history $\mathbf{u}(t)$ which at every time $t$ may depend linearly only on the past measurements $\mathbf{y}(t'),\ 0 \leq t' < t$, such that the following cost function is minimized:

$$J = \mathbb{E}\!\left[\, \mathbf{x}^{\mathrm{T}}(T) F \mathbf{x}(T) + \int_{0}^{T} \left( \mathbf{x}^{\mathrm{T}}(t) Q \mathbf{x}(t) + \mathbf{u}^{\mathrm{T}}(t) R \mathbf{u}(t) \right) \mathrm{d}t \,\right],$$

where $\mathbb{E}$ denotes the expected value. The final time (horizon) $T$ may be either finite or infinite. If the horizon tends to infinity, the first term $\mathbf{x}^{\mathrm{T}}(T) F \mathbf{x}(T)$ of the cost function becomes negligible and irrelevant to the problem; also, to keep the cost finite, the cost function has to be taken to be $J/T$.
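The time-averaged cost $J/T$ for the infinite-horizon case can be approximated by simulating the closed loop and accumulating the running quadratic cost. The sketch below uses a simple Euler–Maruyama discretization of an illustrative time-invariant plant with its LQG controller; the matrices, step size, and horizon are assumptions chosen for the example, not quantities from the text.

```python
# Sketch: numerically approximating the time-averaged cost J/T by
# simulating the LQG closed loop (plant + Kalman filter + u = -K x_hat)
# with an Euler-Maruyama discretization. Illustrative matrices only.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])
V, W = 0.1 * np.eye(2), np.array([[0.01]])

# LQG gains from the two independent Riccati equations.
K = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))
L = solve_continuous_are(A.T, C.T, V, W) @ C.T @ np.linalg.inv(W)

dt, T = 1e-3, 20.0
n = int(T / dt)
# Discretized white noise: process increments ~ N(0, V dt),
# sampled measurement noise ~ N(0, W / dt).
vn = rng.multivariate_normal(np.zeros(2), V * dt, size=n)
wn = rng.multivariate_normal(np.zeros(1), W / dt, size=n)

x = rng.standard_normal(2)   # Gaussian initial state
xh = np.zeros(2)             # estimator (Kalman filter) state
cost = 0.0
for k in range(n):
    u = -K @ xh                            # control from the estimate only
    y = C @ x + wn[k]                      # noisy measurement
    cost += (x @ Q @ x + u @ R @ u) * dt   # running quadratic cost
    x = x + (A @ x + B @ u) * dt + vn[k]
    xh = xh + (A @ xh + B @ u + L @ (y - C @ xh)) * dt

print("J/T ~", cost / T)
```

Because the closed loop is stable, the running cost grows roughly linearly with time, so `cost / T` settles to a finite value as the horizon grows; this is why the infinite-horizon criterion is stated as $J/T$ rather than $J$ itself.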