Metadynamics has been informally described as "filling the free energy wells with computational sand".[15] The algorithm assumes that the system can be described by a few collective variables (CVs). During the simulation, the location of the system in the space determined by the collective variables is computed, and a positive Gaussian potential is added to the real energy landscape of the system at that point. In this way, the system is discouraged from returning to points it has already visited. As the simulation evolves, more and more Gaussians accumulate, increasingly discouraging the system from revisiting earlier states, until the system has explored the full energy landscape; at this point the modified free energy becomes a constant as a function of the collective variables, which is why the collective variables then begin to fluctuate heavily. The energy landscape can then be recovered as the negative of the sum of all Gaussians.
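As an illustration of this bookkeeping, the following sketch (a hypothetical one-dimensional setup with made-up numbers, not taken from the references) deposits Gaussians at previously visited collective-variable values and recovers a free-energy estimate as the negative of their sum:

```python
import numpy as np

# Hypothetical history of visited CV values (in a real run these come
# from the simulation itself, recorded at each deposition time).
visited = [0.9, 1.0, 0.95, 0.2, -0.8, -1.0, -0.9]
W, sigma = 0.1, 0.25                      # Gaussian height and width

def v_bias(s, centers):
    """Metadynamics bias at CV value s: a sum of deposited Gaussians."""
    return sum(W * np.exp(-(s - c)**2 / (2 * sigma**2)) for c in centers)

grid  = np.linspace(-1.5, 1.5, 301)       # CV grid for analysis
bias  = np.array([v_bias(s, visited) for s in grid])
f_est = -bias                             # free energy = negative of bias sum
f_est -= f_est.min()                      # fix the arbitrary additive constant
```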

The time interval between the addition of two Gaussian functions, as well as the Gaussian height and Gaussian width, are tuned to optimize the ratio between accuracy and computational cost. By simply changing the size of the Gaussians, metadynamics can be tuned to quickly yield a rough map of the energy landscape (large Gaussians) or a finer-grained description (smaller Gaussians).[1] Usually, well-tempered metadynamics[5] is used to change the Gaussian height adaptively, and the Gaussian width can be adapted with adaptive Gaussian metadynamics.[16]
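A minimal sketch of the well-tempered height-scaling rule, assuming an initial height $W_0$ and a bias-factor temperature $\Delta T$ (the numerical values and units below are arbitrary examples):

```python
import numpy as np

kB = 0.0083145            # Boltzmann constant in kJ/(mol*K) (assumed units)
W0, dT = 1.2, 2400.0      # initial Gaussian height and Delta-T (example values)

def well_tempered_height(v_bias_at_s):
    """Well-tempered rule: deposited heights decay where bias has accumulated."""
    return W0 * np.exp(-v_bias_at_s / (kB * dT))
```

Each new Gaussian is thus shorter in regions that have already been filled, which damps the fluctuations of the bias as the simulation converges.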

Metadynamics has the advantage, over methods like adaptive umbrella sampling, of not requiring an initial estimate of the energy landscape to explore.[1] However, choosing proper collective variables for a complex simulation is not trivial. Typically, it requires several trials to find a good set of collective variables, although several automatic procedures have been proposed: essential coordinates,[17] Sketch-Map,[18] and non-linear data-driven collective variables.[19]

Independent metadynamics simulations (replicas) can be coupled together to improve usability and parallel performance. Several such methods have been proposed: multiple-walker MTD,[20] parallel-tempering MTD,[21] bias-exchange MTD,[22] and collective-variable tempering MTD.[23] The last three are similar to the parallel tempering method and use replica exchanges to improve sampling. Typically, the Metropolis–Hastings algorithm is used for replica exchanges, but the infinite swapping[24] and Suwa–Todo[25] algorithms give better replica exchange rates.[26]
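For the exchange step itself, a minimal Metropolis–Hastings acceptance test for swapping configurations between two bias-exchange replicas might look as follows (function and variable names are illustrative; `beta` is $1/k_B T$):

```python
import math, random

def accept_exchange(Va_xa, Vb_xb, Va_xb, Vb_xa, beta):
    """Metropolis test for swapping configurations x_a and x_b between two
    replicas biased by V_a and V_b at the same temperature."""
    delta = beta * (Va_xa + Vb_xb - Va_xb - Vb_xa)
    if delta >= 0.0:
        return True                        # downhill swap: always accept
    return random.random() < math.exp(delta)
```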

Typical (single-replica) MTD simulations can include up to 3 CVs; even with the multi-replica approach, it is hard in practice to exceed 8 CVs. This limitation comes from the bias potential, which is constructed by adding Gaussian functions (kernels) and is thus a special case of the kernel density estimator (KDE). The number of kernels required for a constant KDE accuracy increases exponentially with the number of dimensions, so the MTD simulation length has to increase exponentially with the number of CVs to maintain the same accuracy of the bias potential. Also, for fast evaluation, the bias potential is typically approximated on a regular grid,[27] and the memory required to store the grid likewise increases exponentially with the number of dimensions (CVs).
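A back-of-the-envelope calculation makes the grid scaling concrete; assuming 100 bins per CV and 8-byte floating-point storage (both assumptions chosen for illustration):

```python
# Memory for a regular-grid bias potential: bins**n_cv stored values.
bins, bytes_per_value = 100, 8            # assumed resolution and float64
for n_cv in (1, 2, 3, 8):
    n_points = bins ** n_cv
    gbytes = n_points * bytes_per_value / 1e9
    print(f"{n_cv} CV(s): {n_points:.1e} grid points, {gbytes:.1e} GB")
```

With these assumptions, 3 CVs need about 8 MB, while 8 CVs would already require tens of petabytes.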

A high-dimensional generalization of metadynamics is NN2B.[28] It is based on two machine learning algorithms: the nearest-neighbor density estimator (NNDE) and the artificial neural network (ANN). NNDE replaces KDE to estimate the updates of bias potential from short biased simulations, while ANN is used to approximate the resulting bias potential. ANN is a memory-efficient representation of high-dimensional functions, where derivatives (biasing forces) are effectively computed with the backpropagation algorithm.[28][29]
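To illustrate why an ANN representation yields cheap biasing forces, here is a minimal PyTorch sketch (the CV dimension and network architecture are arbitrary assumptions, not the NN2B setup itself):

```python
import torch

# Hypothetical ANN bias: maps a 4-dimensional CV vector to a scalar potential.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

s = torch.randn(1, 4, requires_grad=True)  # instantaneous CV values
v_bias = net(s).sum()                      # scalar bias V_bias(s)
v_bias.backward()                          # backpropagation
f_bias = -s.grad                           # biasing forces: -dV_bias/ds
```

Only the network weights are stored, so the memory cost does not grow exponentially with the number of CVs as a regular grid would.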

An alternative method that exploits an ANN for the adaptive bias potential estimates it from mean potential forces.[30] This method is also a high-dimensional generalization of the adaptive biasing force (ABF) method.[31] Additionally, the training of the ANN is improved using Bayesian regularization,[32] and the error of the approximation can be inferred by training an ensemble of ANNs.[30]

Assume we have a classical $N$-particle system with positions $\{\vec{r}_i\}$, $i \in 1 \ldots N$, in Cartesian coordinates, $\vec{r}_i \in \mathbb{R}^3$. The particle interactions are described by a potential function $V \equiv V(\{\vec{r}_i\})$. The form of the potential function (e.g. two local minima separated by a high-energy barrier) prevents ergodic sampling with molecular dynamics or Monte Carlo methods.
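A standard illustration of such a potential (an assumed example, not from the references) is the one-dimensional double well

$$ V(x) = V_0 \left( \left( \frac{x}{a} \right)^2 - 1 \right)^2 , $$

whose minima at $x = \pm a$ are separated by a barrier of height $V_0$; for $V_0 \gg k_B T$, spontaneous crossings become exponentially rare, so plain MD or Monte Carlo sampling is effectively non-ergodic on accessible simulation timescales.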

The general idea of MTD is to enhance the sampling of the system by discouraging it from revisiting already-sampled states. This is achieved by augmenting the system Hamiltonian $H$ with a bias potential $V_\text{bias}$:

$$ H = T + V + V_\text{bias} . $$

The bias potential is a function of the collective variables, $V_\text{bias} \equiv V_\text{bias}(\vec{s})$. A collective variable is a function of the particle positions, $\vec{s} \equiv \vec{s}(\{\vec{r}_i\})$. The bias potential is continuously updated by adding bias at rate $\omega$, where $\vec{s}_t$ is the instantaneous value of the collective variable at time $t$:

$$ \frac{\partial V_\text{bias}(\vec{s}, t)}{\partial t} = \omega \, \delta(\vec{s} - \vec{s}_t) . $$

Below is a pseudocode of MTD based on molecular dynamics (MD), where $\{\vec{r}\}$ and $\{\vec{v}\}$ are the positions and velocities of the $N$-particle system, respectively. The bias $V_\text{bias}$ is updated every $n = \tau/\Delta t$ MD steps, where $\tau$ is the deposition interval and $\Delta t$ is the MD time step, and its contribution to the system forces $\{\vec{F}\}$ is $\{\vec{F}_\text{bias}\}$.
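A minimal Python rendering of that loop, assuming a 1D double-well system with the collective variable $s(r) = r$ and a single Gaussian kernel of height $W$ and width $\sigma$ (all names and values here are illustrative, not the article's original pseudocode):

```python
import numpy as np

# Illustrative 1D system: one particle in a double well, CV s(r) = r.
def force(r):        return -4.0 * r * (r**2 - 1.0)   # physical force -dV/dr
def s_of_r(r):       return r                         # collective variable

dt, mass = 1e-3, 1.0
n        = 500                   # deposit every n = tau/dt MD steps
W, sigma = 0.1, 0.2              # Gaussian height and width
centers  = []                    # deposited Gaussian centers s_{t'}

def f_bias(s):                   # biasing force: -dV_bias/ds
    return sum(W * (s - c) / sigma**2 *
               np.exp(-(s - c)**2 / (2 * sigma**2)) for c in centers)

r, v = 1.0, 0.0                  # start in one of the wells
for step in range(200_000):
    if step % n == 0:
        centers.append(s_of_r(r))            # update V_bias every n steps
    f = force(r) + f_bias(s_of_r(r))         # total force incl. bias
    v += f / mass * dt                       # semi-implicit Euler integrator
    r += v * dt                              # (a real MD code would thermostat)
```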

The finite size of the kernel causes the bias potential to fluctuate around a mean value. A converged free energy can be obtained by averaging the bias potential. Averaging is started at $t_\text{diff}$, the time when the motion along the collective variable becomes diffusive:

$$ \bar{F}(\vec{s}) = -\frac{1}{t_\text{total} - t_\text{diff}} \int_{t_\text{diff}}^{t_\text{total}} V_\text{bias}(\vec{s}, t) \, dt , $$

where $t_\text{total}$ is the total simulation time.