Metadynamics (MTD; also abbreviated as METAD or MetaD) is a computer simulation method in computational physics, chemistry and biology. It is used to estimate the free energy and other state functions of a system in which ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Alessandro Laio and Michele Parrinello in 2002 and is usually applied within molecular dynamics simulations. MTD closely resembles a number of more recent methods such as adaptively biased molecular dynamics, adaptive reaction coordinate forces and local elevation umbrella sampling. More recently, both the original and the well-tempered metadynamics were derived in the context of importance sampling and shown to be a special case of the adaptive biasing potential setting. MTD is related to Wang-Landau sampling. The technique builds on a large number of related methods including (in chronological order) the deflation, tunneling, tabu search, local elevation, conformational flooding, Engkvist-Karlström and Adaptive Biasing Force methods.

Metadynamics has been informally described as "filling the free energy wells with computational sand". The algorithm assumes that the system can be described by a few collective variables (CVs). During the simulation, the location of the system in the space determined by the collective variables is computed, and a positive Gaussian potential is added to the real energy landscape of the system, discouraging the system from returning to that point. As the simulation evolves, more and more Gaussians sum up, increasingly discouraging the system from revisiting previously sampled configurations, until the system has explored the full energy landscape. At that point the biased free energy becomes approximately constant as a function of the collective variables, which is why the collective variables begin to fluctuate strongly, and the underlying energy landscape can be recovered as the negative of the sum of all Gaussians. The time interval between the addition of two Gaussian functions, as well as the Gaussian height and width, are tuned to balance accuracy against computational cost. By simply changing the size of the Gaussians, metadynamics can quickly yield a rough map of the energy landscape (large Gaussians) or a finer-grained description (smaller Gaussians). Well-tempered metadynamics is usually used to adapt the Gaussian height during the simulation, and the Gaussian width can be adapted with adaptive Gaussian metadynamics.
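As an illustration of the bias-deposition loop described above, the following minimal sketch (assuming a toy one-dimensional double-well landscape, a single collective variable, overdamped Langevin dynamics, and arbitrarily chosen deposition parameters, none of which come from the sources cited here) deposits Gaussians at a fixed stride, shrinks their height according to the well-tempered rule, and recovers a free energy estimate from the negative of the accumulated bias:

```python
import numpy as np

# Illustrative 1D double-well "free energy landscape" along a single CV.
def potential(s):
    return (s**2 - 1.0)**2

def force(s):
    return -4.0 * s * (s**2 - 1.0)

# Hypothetical parameters, chosen only to make the sketch run.
w0, sigma = 0.05, 0.1     # initial Gaussian height and width
kT, kT_delta = 0.1, 0.5   # thermal energy and well-tempered k_B*(Delta T)
stride = 500              # MD steps between Gaussian depositions
dt = 1e-3                 # integration time step (overdamped Langevin)

centers, heights = [], []  # deposited Gaussian kernels

def bias(s):
    """Bias potential: sum of all deposited Gaussians at CV value s."""
    if not centers:
        return 0.0
    c, h = np.asarray(centers), np.asarray(heights)
    return float(np.sum(h * np.exp(-(s - c) ** 2 / (2.0 * sigma**2))))

def bias_force(s):
    """Force from the bias, i.e. -dV_bias/ds."""
    if not centers:
        return 0.0
    c, h = np.asarray(centers), np.asarray(heights)
    g = h * np.exp(-(s - c) ** 2 / (2.0 * sigma**2))
    return float(np.sum(g * (s - c) / sigma**2))

rng = np.random.default_rng(0)
s = -1.0
for step in range(100_000):
    # Overdamped Langevin dynamics on the biased landscape.
    f = force(s) + bias_force(s)
    s += f * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()

    # Periodically deposit a Gaussian at the current CV value.
    # Well-tempered rule: the height shrinks where bias has already built up;
    # the original scheme simply keeps the height fixed at w0.
    if step % stride == 0:
        centers.append(s)
        heights.append(w0 * np.exp(-bias(s) / kT_delta))

# Free energy estimate: F(s) ~ -(T + dT)/dT * V_bias(s) for well-tempered MTD
# (or simply -V_bias(s) in the original, non-tempered scheme).
grid = np.linspace(-2.0, 2.0, 200)
F = -np.array([bias(x) for x in grid]) * (kT + kT_delta) / kT_delta
F -= F.min()
```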
Metadynamics has the advantage over methods such as adaptive umbrella sampling of not requiring an initial estimate of the energy landscape to explore. However, it is not trivial to choose proper collective variables for a complex simulation. Typically it takes several trials to find a good set of collective variables, although several automatic procedures have been proposed: essential coordinates, Sketch-Map, and non-linear data-driven collective variables. Independent metadynamics simulations (replicas) can be coupled together to improve usability and parallel performance. Several such methods have been proposed: multiple-walker MTD, parallel-tempering MTD, bias-exchange MTD, and collective-variable-tempering MTD. The last three are similar to the parallel tempering method and use replica exchanges to improve sampling. Typically, the Metropolis–Hastings algorithm is used for replica exchanges, but the infinite swapping and Suwa-Todo algorithms give better replica-exchange rates.

Typical (single-replica) MTD simulations can include up to 3 CVs; even with the multi-replica approach, it is hard to exceed 8 CVs in practice. This limitation comes from the bias potential, which is constructed by adding Gaussian functions (kernels) and is a special case of a kernel density estimator (KDE). The number of kernels required for a constant KDE accuracy increases exponentially with the number of dimensions, so the MTD simulation length has to increase exponentially with the number of CVs to maintain the same accuracy of the bias potential. In addition, for fast evaluation the bias potential is typically approximated on a regular grid, and the memory required to store the grid also increases exponentially with the number of dimensions (CVs).

A high-dimensional generalization of metadynamics is NN2B. It is based on two machine learning algorithms: the nearest-neighbor density estimator (NNDE) and the artificial neural network (ANN). NNDE replaces KDE to estimate the updates of the bias potential from short biased simulations, while the ANN is used to approximate the resulting bias potential. An ANN is a memory-efficient representation of high-dimensional functions, whose derivatives (the biasing forces) are computed efficiently with the backpropagation algorithm. An alternative method, also exploiting an ANN for the adaptive bias potential, estimates it from mean potential forces; it is likewise a high-dimensional generalization of the Adaptive Biasing Force (ABF) method. In this approach the ANN training is improved with Bayesian regularization, and the approximation error can be inferred by training an ensemble of ANNs.
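To illustrate why an ANN sidesteps the grid's memory problem, the sketch below (a hypothetical single-hidden-layer network with made-up sizes and random weights, not the specific architectures used in NN2B or the ANN-based ABF generalization) represents a bias potential over several CVs and obtains the biasing forces by backpropagating through the network with respect to its inputs:

```python
import numpy as np

# Hypothetical setup: 4 collective variables, 32 hidden units, tanh activation.
d, hidden = 4, 32
rng = np.random.default_rng(0)
W1 = rng.standard_normal((hidden, d)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.standard_normal(hidden) * 0.1
b2 = 0.0

def bias_and_force(s):
    """Return the bias potential V(s) and the biasing force -dV/ds
    for a vector s of collective variables."""
    z = W1 @ s + b1            # hidden pre-activations
    a = np.tanh(z)             # hidden layer
    V = W2 @ a + b2            # scalar bias potential
    # Backpropagation with respect to the inputs:
    # dV/ds = W1^T (W2 * (1 - a^2)) for a tanh hidden layer.
    dV_ds = W1.T @ (W2 * (1.0 - a**2))
    return V, -dV_ds

s = rng.standard_normal(d)     # current CV values (arbitrary test point)
V, F = bias_and_force(s)
```

For comparison, a regular grid with 100 bins per CV would already need 100^4 (one hundred million) entries for the four CVs assumed here, whereas this network stores fewer than 200 parameters.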