Lattice field theory is an area of theoretical physics, specifically
quantum field theory, which deals with field theories defined on a spatial
or space-time lattice.

The theoretical description of the fundamental constituents of matter and
the interactions between them is based on quantum field theory. The basic
ingredients of field theory are fields. They are functions \(\phi\) which
associate to each point \(x\) of space-time a quantity \(\phi(x)\ .\)

In the case of classical field theories, \(\phi(x)\) usually is an element of
a finite dimensional real or complex manifold, which in many cases is a
linear space. A prominent example is

the Yang-Mills field \(A_{\mu}(x) = \sum_{b} A_{\mu}^b(x) T_b\ ,\) whose components are elements of the Lie algebra of a compact Lie group with generators \(T_b\ .\)

In contrast, in the operator formulation of quantum field theory the fields
are operators acting in a Hilbert space. (More precisely, quantum fields
\(\phi(x)\) are operator valued distributions, which means that integrals
\(\int f(x) \phi(x) dx\) with suitable test functions \(f(x)\) are operators.)

The physical content of a field theory depends essentially on the Lagrangian
\(\mathcal{L}(\phi(x), \partial^{n} \phi(x))\ ,\) which is a function of
\(\phi(x)\) and its derivatives. The Lagrangian determines the field
equations, which comprise the interactions. If the strength of an
interaction is given by a small parameter \(g\ ,\) it is possible to calculate
physical quantities approximately to a satisfactory accuracy by means of
perturbation theory, which amounts to a power series expansion in \(g\ .\) This is, for example, the case in quantum electrodynamics (QED), where the
interaction is proportional to the fine structure constant \(\alpha \approx
1/137\ ,\) and many interesting observables can be obtained as power series in
\(\alpha\ .\) There are, however, important cases, where it turned out that
perturbation theory is inadequate for the calculation of physical
quantities. The most prominent example is the low-energy regime of Quantum Chromodynamics (QCD), the theory of the strong interactions of elementary particles.

Not only Quantum Chromodynamics, but also other components of the Standard Model
of elementary particle physics
and moreover theories of physics beyond the Standard Model supply us with
non-perturbative problems. An important step towards answering such questions
was made by K. Wilson in 1974 (Wilson, 1974). He introduced a
formulation of Quantum Chromodynamics on a space-time lattice, which allows the
application of various non-perturbative techniques. This discretization will be
explained in detail below. It leads to mathematically well-defined problems,
which are (at least in principle) solvable. It should also be pointed out that the
introduction of a space-time lattice can be taken as a starting point for a
mathematically clean approach to quantum field theory, so-called
constructive quantum field theory.

In modern quantum field theory, the introduction of a space-time lattice is
part of an approach different from the operator formalism. This is lattice
field theory. Its main ingredients are

functional integrals,

Euclidean field theory and

the space-time discretization of fields.

Lattice field theory has turned out to be very successful for the
non-perturbative calculation of physical quantities. In this Wiki
an introduction to and overview of the foundations and methods of lattice
field theory is given.
The main concepts are here illustrated with a scalar field theory.

Quantum field theory with functional integrals

The functional integral formulation of quantum field theory is a
generalization of the quantum mechanical path integral.
In quantum mechanics of a point particle in one space dimension,
the transition amplitude is given by
\[
\langle x'|\mathrm{e}^{-\mathrm{i} HT}|x \rangle,
\]
where \(|x\rangle\) is an (improper) eigenstate of the position operator
and \(H\) is the Hamilton operator.

Figure 2: Path of a particle

The transition amplitude can be written
as a path integral
\[
\langle x'|\mathrm{e}^{-\mathrm{i} HT}|x \rangle
= \int\!\mathcal{D}x \ \mathrm{e}^{\mathrm{i} S},
\]
where the integration is over all possible paths \(x(t)\)
from \(x\) to \(x'\) during the time interval \(T\ ,\)
see Figure 2,
and
\[
S = \int_0^T\!dt\,L(x,\dot x)
\]
is the classical action for such a path.

Formally the path integral measure is written as
\[
\mathcal{D}x \equiv \prod_t dx(t)
\]
up to a normalization factor.
For a particle in 3 dimensional space this is generalized to
paths \(x_i(t)\ ,\) where \(i\)=1,2,3, and
\[
\mathcal{D}x = \prod_t \prod_i dx_i(t) .
\]

Perhaps this is the most intuitive picture of the quantum mechanical
transition amplitude. It can be written as an integral over contributions
from all possible paths from the starting point to the final point. Each
path is weighted by the classical action evaluated along this path.
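The time-sliced measure \(\prod_t dx(t)\) can be made concrete numerically. In the Euclidean version of the weight (introduced further below), each time slice contributes a transfer-matrix kernel on a spatial grid. A minimal sketch for the harmonic oscillator, with assumed units \(m = \omega = \hbar = 1\) and invented discretization parameters:

```python
import numpy as np

# Hypothetical discretization parameters (not from the text):
eps = 0.05                        # Euclidean time step
xs = np.linspace(-6.0, 6.0, 301)  # spatial grid
dx = xs[1] - xs[0]

# Symmetric Trotter kernel <x'|exp(-eps*H)|x> for H = p^2/2 + x^2/2
X, Xp = np.meshgrid(xs, xs, indexing="ij")
V = 0.5 * xs**2
K = (np.sqrt(1.0 / (2.0 * np.pi * eps))
     * np.exp(-(X - Xp)**2 / (2.0 * eps))
     * np.exp(-0.5 * eps * (V[:, None] + V[None, :])))

# The largest eigenvalue of the integral operator gives the ground state
# energy via lambda_0 ~ exp(-eps*E0).
lam = np.linalg.eigvalsh(K * dx)
E0 = -np.log(lam[-1]) / eps
print(E0)   # close to the exact value 1/2
```

Multiplying this kernel \(N\) times is exactly the \(N\)-slice version of the path integral; diagonalizing it instead extracts the spectrum directly.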

For a detailed and mathematically rigorous account of path integrals the
interested reader is referred to the textbook (Glimm and Jaffe, 1987).

The mass \(m_0\) and coupling constant \(g_0\) bear a subscript \(0\ ,\) since they
are bare, unrenormalized parameters. This theory plays a role in the context
of Higgs-Yukawa models, where \(\phi(x)\) is the Higgs field.

In analogy to the quantum mechanical path integral, a
representation of the Greens functions in terms of what one calls
functional integrals is written down as
\[
\langle 0|\varphi(x_1)\varphi(x_2)\dots\varphi(x_n)|0 \rangle
= \frac{1}{Z} \int\!\mathcal{D}\phi\ \phi(x_1)\phi(x_2)\dots\phi(x_n)
\ \mathrm{e}^{\mathrm{i} S}
\]
with
\(
Z = \int\!\mathcal{D}\phi \ \mathrm{e}^{\mathrm{i} S}.
\)
These expressions involve integrals over all classical field configurations.

As mentioned before, no derivation of functional integrals is attempted
here; their form is only motivated by analogy. Furthermore, in
the case of quantum mechanics the transition amplitude was considered,
whereas now the formula for Greens functions has been written down, which is
slightly different.

The formulae for functional integrals give rise to some questions. First of
all, how does the projection onto the ground state \(| 0 \rangle\) arise?
Secondly, these integrals contain oscillating integrands, due to the
imaginary exponents; what about their convergence? Moreover, is there a way
to evaluate them numerically?

In the following it will be discussed how the introduction of imaginary times
helps in answering these questions.

The analytic continuation has to be
done in such a way that all time arguments are rotated simultaneously
counter-clockwise in the complex \(t\)-plane. This is the so-called Wick rotation,
illustrated in Figure 3.

As can also be seen from the kinetic part contained in \(S_E\ ,\) the metric
of Minkowski space
\[
- ds^2 = -dt^2 + dx_1^2 + dx_2^2 + dx_3^2
\]
has changed into
\[
d\tau^2 + dx_1^2 + dx_2^2 + dx_3^2 ,
\]
which is the metric of a Euclidean space. Therefore one speaks of
Euclidean Greens functions \(G_E\) and of Euclidean functional integrals.
They are taken as starting point for non-perturbative
investigations of field theories and for constructive studies.

Whether it is possible to continue a specific field theory analytically from real to imaginary times and vice versa, depends on certain conditions to be satisfied. For a large class of field theories these conditions have been analyzed and formulated by Osterwalder and Schrader, see (Osterwalder and Schrader, 1973, 1975). In particular, a Euclidean field theory must satisfy the so-called reflection positivity in order to correspond to a proper field theory in Minkowski space.

As \(S_E\) is real, the integrals of interest are now real and no unpleasant
oscillations occur. Moreover, since \(S_E\) is bounded from below, the factor
\(\exp (-S_E)\) in the integrand is bounded. Strongly fluctuating fields have
a large Euclidean action \(S_E\) and are thus suppressed by the factor \(\exp
(-S_E)\ .\) (Strictly speaking, this statement does not make sense in field
theory unless renormalization is taken into account.) This makes Euclidean
functional integrals so attractive compared to their Minkowskian
counterparts.

One might think that in the Euclidean domain everything is unphysical and
there is no way to obtain physical results directly from the Euclidean
Greens functions. But this is not the case. For example, the spectrum of the
theory can be obtained in the following way. Consider a vacuum
expectation value of the form
\[
\langle 0| A_1 \mathrm{e}^{-H\tau} A_2 |0 \rangle,
\]
where the \(A_i\)'s are formed out of the field \(\varphi\ ,\) e.g.
\(A = \varphi(\vec x, 0)\) or \(A = \int\!d^3x\ \varphi(\vec x, 0)\ .\)
Now, with the familiar insertion of a complete set of energy
eigenstates, one has
\[
\langle 0| A_1 \mathrm{e}^{-H\tau} A_2 |0 \rangle =
\sum_n \langle 0|A_1|n \rangle \mathrm{e}^{-E_n\tau} \langle n|A_2|0 \rangle .
\]
In the case of a continuous spectrum the sum is to be read as an integral.
On the other hand, representing the expectation value as a functional
integral leads to
\[
\frac{1}{Z} \int\!\mathcal{D}\phi\ \mathrm{e}^{-S_E} A_1(\tau)A_2(0) =
\sum_n \langle 0|A_1|n \rangle \langle n|A_2|0 \rangle \mathrm{e}^{-E_n\tau}.
\]
This is similar to the ground state projection at the beginning of this
section. For large \(\tau\) the lowest energy eigenstates will dominate the sum
and one can thus obtain the low-lying spectrum from the asymptotic behaviour
of this expectation value. By choosing \(A_1, A_2\) suitably,
e.g. for
\[
A \equiv A_1 = A_2 = \int\!d^3x\ \varphi(\vec x,0),
\]
such that \(\langle 0 | A | 1 \rangle \neq 0\) for a one-particle state
\(| 1 \rangle\) with zero momentum \(\vec p = 0\) and mass \(m_1\ ,\) one
gets
\[
\frac{1}{Z} \int\! \mathcal{D}\phi\ \mathrm{e}^{-S_E} A(\tau) A(0)
= |\langle 0|A|1 \rangle|^2 \mathrm{e}^{-m_1\tau} + \dots,
\]
which means that one can extract the mass of the particle.
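In practice the mass is extracted from the large-\(\tau\) behaviour of such a correlator. A small illustration with synthetic data (masses and amplitudes invented for the example): the effective mass \(\log(C(\tau)/C(\tau+1))\) approaches \(m_1\) once the excited-state contributions have died out.

```python
import numpy as np

# Synthetic correlator mimicking the spectral sum: ground state m1 = 0.5
# plus one excited state m2 = 1.2 (all values assumed for illustration).
taus = np.arange(20)
C = 1.0 * np.exp(-0.5 * taus) + 0.3 * np.exp(-1.2 * taus)

# Effective mass from the ratio of neighbouring time slices
m_eff = np.log(C[:-1] / C[1:])
print(m_eff[0], m_eff[-1])   # starts above m1, approaches m1 = 0.5
```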

From now on we shall remain in Euclidean space and suppress the subscript
\(E\ ,\) so that \(S \equiv S_E\) means the Euclidean action.

Lattice discretization

One central question still remains: does the infinite dimensional
integration over all classical field configurations, i.e.
\[\tag{2}
\mathcal{D}\phi = \prod_x d\phi(x),
\]

make sense at all? How is it defined?

Figure 4: 3-dimensional lattice

In quantum mechanics the path integral representation can be derived
as a limit of a discretization in time. As
in field theory the fields depend on the four Euclidean coordinates instead
of a single time coordinate, we may now introduce a discretized space-time
in form of a lattice, for example a hypercubic lattice, specified by
\[
x_{\mu} = a n_{\mu}, \qquad n_{\mu} \in \mathbf{Z},
\]
see Figure 4.

The quantity \(a\) is called the lattice spacing for obvious reasons. It should be noted that the lattice spacing, being a dimensionful quantity, is not a parameter of the discretized theory, which could e.g. be inserted in a computer program for an evaluation of the path integral. The size of the lattice spacing in physical units is a derived quantity determined by the dynamics. This will be explained in Section "Continuum limit".

In the functional integrals the measure \( \mathcal{D}\phi\ ,\) Eq.(2),
involves the lattice points \(x\) only. So a discrete set of variables has to
be integrated. If the lattice is taken to be finite, one just
has finite dimensional integrals.

Discretization of space-time using lattices has one very important
consequence. Due to a non-zero lattice spacing, a cutoff in momentum space
arises. The cutoff can be observed by having a look at the Fourier
transformed field
\[
\tilde{\phi}(p) = \sum_x a^4\ \mathrm{e}^{-\mathrm{i} px}\ \phi(x).
\]
The Fourier transformed functions are periodic in momentum-space, so that one
can identify
\[
p_{\mu} \cong p_{\mu}+\frac{2\pi}{a}
\]
and restrict the momenta to the so-called first Brillouin zone
\[
-\frac{\pi}{a} \, < \, p_{\mu}\,\leq \frac{\pi}{a}.
\]
The inverse Fourier transformation, for example, is given by
\[
\phi(x) = \int_{-\pi/a}^{\pi/a} \frac{d^4 p}{(2\pi)^4}\ \mathrm{e}^{\mathrm{i} px}
\ \tilde{\phi}(p).
\]
One recognises an ultraviolet cutoff
\[
|p_{\mu}| \leq \frac{\pi}{a}.
\]
Therefore field theories on a lattice are regularized in a natural way.

In order to begin in a well-defined way one would start with a finite
lattice. Let us assume a hypercubic lattice with length \(L_1=L_2=L_3=L\) in
every spatial direction and length \(L_4=T\) in Euclidean time,
\[
x_{\mu} = an_{\mu},\qquad n_{\mu} = 0,1,2,\dots,L_{\mu}-1,
\]
with finite volume \(V = L^3T\ .\) In a finite volume one has to specify boundary
conditions. A popular choice is periodic boundary conditions
\[
\phi(x) = \phi(x+aL_{\mu}\,\hat{\mu}),
\]
where \(\hat{\mu}\) is the unit vector in the \(\mu\)-direction.
They imply that the momenta are also discretized,
\[
p_{\mu} = \frac{2\pi}{a}\,\frac{l_{\mu}}{L_{\mu}} \qquad \mbox{with}
\ l_{\mu} = 0,1,2,\dots,L_{\mu}-1,
\]
and therefore momentum-space integration is replaced by finite sums
\[
\int\!\frac{d^4p}{(2\pi)^4}\ \longrightarrow
\ \frac{1}{a^4 L^3 T}\sum_{l_{\mu}}.
\]
Now, all functional integrals have turned into regularized and finite
expressions.
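The discrete momentum set is easy to enumerate. A short sketch for one direction (lattice spacing and extent assumed for illustration), shifting the allowed momenta into the first Brillouin zone:

```python
import numpy as np

a, L = 0.1, 8                    # assumed lattice spacing and extent
l = np.arange(L)
p = (2.0 * np.pi / a) * l / L    # allowed momenta p = 2*pi*l/(a*L)

# map to the first Brillouin zone (-pi/a, pi/a]
p_bz = np.where(p > np.pi / a, p - 2.0 * np.pi / a, p)
print(p_bz)   # all momenta lie in (-pi/a, pi/a]
```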

Of course, one would like to recover physics in a continuous and infinite
space-time eventually. The task is therefore to take the infinite volume
limit,
\[
L,T \longrightarrow \infty,
\]
which is the easier part in general, and to take the
continuum limit,
\[
a \longrightarrow 0.
\]
Constructing the continuum limit of a lattice field theory is usually
highly nontrivial and most effort is often spent here.

The formulation of Euclidean quantum field theory on a lattice bears a
useful analogy to statistical mechanics. Functional integrals have the form
of partition functions and we can set up the following correspondence:

the generating functional \(\int\! \mathcal{D}\phi\ \mathrm{e}^{-S}\) corresponds to the partition function \(\sum \mathrm{e}^{-\beta \mathcal{H}}\ ,\)

the action \(S\) corresponds to the Hamilton function \(\beta \mathcal{H}\ ,\) and

a mass \(m\ ,\) appearing as \(G \sim \mathrm{e}^{-mt}\ ,\) corresponds to an inverse correlation length \(1/\xi\ ,\) appearing as \(G \sim \mathrm{e}^{-\frac{x}{\xi}}\ .\)

This formal analogy allows one to use well-established methods of statistical
mechanics in field theory and vice versa. Even the terminology of the two
fields is often identical. To mention some examples, in field theory one
employs high-temperature expansions and mean field approximations, and in
statistical mechanics one applies the renormalization group.

Hamiltonian lattice field theory

An alternative to Euclidean lattice field theory, as described before,
is Hamiltonian lattice field theory, introduced by Kogut and
Susskind (Kogut and Susskind, 1975).
In this formulation only three-dimensional space is discretized on a lattice,
whereas time remains continuous. Furthermore, time is kept real and is not
continued to the Euclidean domain. Hamiltonian lattice field theory allows the
application of some analytical methods like strong coupling expansions
and perturbation theory. Since it is not suitable for the application
of the numerical Monte Carlo method, it no longer enjoys as much
attention as in its early days, and is not covered in more detail here.

Lattice gauge theory

Theories of gauge fields can also be formulated on a space-time lattice. As details are explained in the Wiki on lattice gauge theories, we shall just indicate the basic elements of lattice gauge theory for gauge group SU(N).

The integral over all gauge field configurations on the lattice amounts to
an integral over all link variables \(U(b)\ .\) So, for the expectation value of
any observable \(A\) one writes
\[
\langle A \rangle = \frac{1}{Z}\int\!\prod_b dU(b)\ A\ \mathrm{e}^{-S_W},
\]
where the integration \(dU(b)\) for a given link \(b\) is to be understood
as the invariant integration over the group manifold, normalized to
\[
\int\!dU = 1.
\]

Fermions on the lattice

Grassmann variables

Classical bosonic fields are just ordinary functions and satisfy
\[
[\phi(x),\phi(y)] = 0,
\]
which can be considered as the limit \(\hbar \rightarrow 0\) of the quantum
commutation relations.

In fermionic field theories one has Grassmann fields, which associate
Grassmann variables with every space-time point. For example, a Dirac field
has anticommuting variables \(\psi_{\alpha}(x)\) and \(\bar{\psi}_{\alpha}(x)\ ,\)
where \(\alpha\)=1,2,3,4 is the Dirac index. The classical Dirac field obeys
\[
\{ \psi_{\alpha}(x), \psi_{\beta}(y) \} = 0, \quad \mbox{etc.}\,.
\]
In order to write down fermionic path integrals as integrals over
fermionic and anti-fermionic field configurations, we write
\[
\mathcal{D}\psi\, \mathcal{D}\bar{\psi} = \prod_x \prod_{\alpha}
d\psi_{\alpha}(x)\, d\bar{\psi}_{\alpha}(x).
\]
Then any fermionic Greens function is of the form
\[
\langle 0|A|0 \rangle = \frac{1}{Z}\int\! \mathcal{D}\psi\, \mathcal{D}\bar{\psi}
\ A\ \mathrm{e}^{-S_F},
\]
with an action \(S_F\) for the fermions. For a free Dirac field the action is
\[
S_F = \int\!d^4x\ \bar{\psi}(x) (\gamma_{\mu}\partial_{\mu}+m)\psi(x).
\]
In the context of the Standard Model, fermionic actions are always bilinear
in the fermionic fields. With the help of the Grassmann integration rules
above one can then show that the functional integrals are formally
remarkably simple to calculate:
\[\tag{4}
\int\! \mathcal{D}\psi\, \mathcal{D}\bar{\psi}\ \mathrm{e}^{-\int\!d^4x\,
\bar{\psi}(x) Q \psi(x)} = \det{Q}.
\]

This is the famous fermion determinant. The main problem, of course, remains:
to evaluate the determinant of the typically huge matrix \(Q\ .\)
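The Gaussian Grassmann integral (4) can be checked by brute force for a few generators. The following sketch uses our own encoding (\(\psi_i\) carries index \(2i\ ,\) \(\bar{\psi}_i\) index \(2i+1\)): it expands \(\exp\) of the bilinear exponent in a finite Grassmann algebra and reads off the coefficient of the top monomial, which reproduces \(\det Q\ .\)

```python
import numpy as np
from itertools import product
from math import factorial

def canon(idx):
    """Sort generator indices, tracking the sign from anticommutation."""
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                idx[i], idx[j] = idx[j], idx[i]
                sign = -sign
    return tuple(idx), sign

def gmul(x, y):
    """Multiply two Grassmann elements, stored as {index-tuple: coefficient}."""
    out = {}
    for kx, vx in x.items():
        for ky, vy in y.items():
            idx = kx + ky
            if len(set(idx)) < len(idx):
                continue            # a generator squared vanishes
            key, sign = canon(idx)
            out[key] = out.get(key, 0.0) + sign * vx * vy
    return out

n = 3
Q = np.random.default_rng(1).standard_normal((n, n))

# Exponent A = sum_ij Q_ij psi_j psibar_i (an even element, so terms commute)
A = {}
for i, j in product(range(n), repeat=2):
    key, sign = canon((2 * j, 2 * i + 1))
    A[key] = A.get(key, 0.0) + sign * Q[i, j]

# exp(A) = sum_k A^k / k!, terminating at k = n
expA, Ak = {(): 1.0}, {(): 1.0}
for k in range(1, n + 1):
    Ak = gmul(Ak, A)
    for key, v in Ak.items():
        expA[key] = expA.get(key, 0.0) + v / factorial(k)

# Coefficient of the full monomial psi_1 psibar_1 ... psi_n psibar_n
top = expA.get(tuple(range(2 * n)), 0.0)
print(top, np.linalg.det(Q))   # the two numbers agree
```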

In numerical simulations of lattice field theories with fermions
the calculation of \(\det{Q}\)
turns out to be very tedious.
Therefore one often uses the quenched approximation, which
treats the determinant \(\det{Q}\) as a constant. In recent years different unquenched
investigations of Quantum Chromodynamics have been made and have given
estimates for quenching errors.

Naive fermions

So far no difficulties for the implementation of fermions on the lattice
seem to arise: all one has to do is to discretise the field configurations
in the well-known way and to calculate the Greens functions with some of the
methods of the last section. There is a problem, however. To see this, consider
the propagator of a fermion with mass \(m\) as an example. The fermionic
lattice action is then given by
\[
S_F = \frac{1}{2} \sum_x \sum_{\mu} \bar{\psi}(x) (\gamma_{\mu}
\Delta_{\mu} + m)\psi(x) + h.c.
\]
and the resulting propagator is
\[
\tilde{\Delta}(k) = \frac{-\mathrm{i}\sum_{\mu}\gamma_{\mu}\sin{k_{\mu}}+m}
{\sum_{\mu}\sin^2{k_{\mu}}+m^2}.
\]
The propagator has a pole for small \(k_{\mu}\ ,\) representing the physical
particle, but there are additional poles near \(k_{\mu} = \pm \pi\) due to the
periodicity of the denominator. So \(S_F\) really describes 16 particles
instead of one. This problem, euphemistically called fermion doubling, is a
crucial obstacle for all lattice representations of quark fields.
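The origin of the doubling is easily visualised: in the massless case the denominator \(\sum_{\mu}\sin^2{k_{\mu}}\) vanishes whenever every \(k_{\mu}\) equals \(0\) or \(\pm\pi\ .\) A one-direction sketch:

```python
import numpy as np

# Massless naive denominator in one lattice direction: sin^2(k), k in [-pi, pi]
k = np.linspace(-np.pi, np.pi, 2001)
den = np.sin(k)**2
zeros = k[np.isclose(den, 0.0, atol=1e-12)]
print(zeros)   # -pi, 0 and pi; the edge zeros are identified by periodicity
```

With one extra zero per direction, a four-dimensional lattice therefore yields \(2^4 = 16\) poles.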

Wilson and Staggered fermions

Fermion doubling was already known to Wilson in the early days of lattice
Quantum Chromodynamics. He proposed a modified action for the fermions in order to damp out
the doubled fields in the continuum limit. To this end he added another term,
the Wilson term, to the naive action:
\[
\begin{align}
S_F \rightarrow S_F^{(W)} &= S_F - \frac{r}{2}\sum_x
\bar{\psi}(x) \Box \psi(x) \\
&= S_F - \frac{r}{2} \sum_{x,\mu} \bar{\psi}(x) \{
\psi(x+\hat{\mu}) + \psi(x-\hat{\mu}) - 2 \psi(x) \},
\end{align}
\]
where \(0< r \le 1\ .\) Calculating the propagator with this modified action,
one finds that the unwanted doubled fermions acquire masses \(\propto 1 / a\ ,\)
so that they become infinitely massive in the continuum limit and disappear
from the physical spectrum.
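This mechanism can be sketched by looking at the momentum-dependent mass term generated by the Wilson term, \(m + r\sum_{\mu}(1-\cos{k_{\mu}})\) in lattice units \(a = 1\) (one direction shown; the numerical values of \(m\) and \(r\) are assumed for illustration):

```python
import numpy as np

m, r = 0.1, 1.0   # assumed bare mass and Wilson parameter, lattice units a = 1

def wilson_mass(k):
    # momentum-dependent mass term generated by the Wilson term (one direction)
    return m + r * (1.0 - np.cos(k))

print(wilson_mass(0.0))     # physical mode keeps its mass m
print(wilson_mass(np.pi))   # doubler mode gets m + 2r, i.e. a mass of order 1/a
```

Restoring the lattice spacing, the doubler mass \(m + 2r/a\) diverges as \(a \rightarrow 0\ .\)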

Wilson fermions have a serious disadvantage: even at vanishing fermion
masses, chiral symmetry is broken explicitly by the Wilson term, and one
has problems with calculations for which chiral symmetry is of central
importance.

There are alternatives to Wilson's approach. One of them, due to Kogut and
Susskind, is the so-called staggered fermion formulation. The idea is to
distribute the components \(\psi_{\alpha}\) of the Dirac field over different
lattice points. This results in a reduction from 16 to 4 fermions. Moreover,
for massless fermions a remnant of chiral symmetry in the form of a chiral
U(1)\(\otimes\)U(1) symmetry remains.

Even better in view of chiral symmetry and other aspects are formulations
for fermions on the lattice, which obey the Ginsparg-Wilson relation.
More details can be found in the Wikis on lattice gauge theories and on
lattice chiral fermions.

Methods

In the previous sections the functional integrals for field theories on the
lattice have been defined. But it is another problem to evaluate these
high dimensional integrals. A calculation in closed form appears to be
impossible in general. In this section some of the methods
used to evaluate the functional integrals approximately are considered.

Perturbation theory

Although lattice field theory offers the possibility to study
non-perturbative aspects, perturbation theory is nevertheless a highly
valuable tool on the lattice, too. In particular, it can be used to match the
results of non-perturbative calculations to perturbative calculations
in regions where both methods are applicable.

Perturbation theory amounts to an expansion in powers of the coupling as in
the continuum. The lattice provides an intrinsic UV cutoff \(\pi / a\) for all
momenta. Apart from that one has to observe that the propagators and
vertices are different from the continuum ones, owing to the form of the
lattice action. In particular, gluon self interactions of all orders appear
and not only as three and four gluon vertices.

Strong coupling expansion

The analogies between Euclidean field theory and statistical mechanics have
already been pointed out.
In statistical mechanics a well-established technique
is the high-temperature expansion. For lattice gauge theory, this is an
expansion in powers of
\[
\beta \sim \frac{1}{g_0^2},
\]
which is a small quantity at large bare couplings \(g_0\ .\) Therefore it is
the same as a strong coupling expansion. Basically the Boltzmann factor
is expanded as
\[
\exp{\left( \beta\frac{1}{N} {\rm Re}({\rm Tr}(U(p)))\right)}
= 1 + \beta \frac{1}{N} {\rm Re}({\rm Tr}(U(p))) + \dots\ .
\]
The resulting expansion can be represented diagrammatically, similar to the
Feynman diagrams of perturbation theory. The diagram elements, however, are
plaquettes \(p\) on the lattice. Every power of \(\beta\) introduces one more
plaquette.

In the case of scalar fields, the corresponding method is the hopping
parameter expansion, an expansion in a parameter \(\kappa\) that is small
for large bare masses \(m_0\ .\)

Strong coupling and hopping parameter expansions have a finite radius of convergence, in contrast to perturbation theory, which usually is divergent and at best asymptotic.

Other analytic methods

Other analytical methods are available for approximative evaluations of the
functional integrals of lattice gauge theory. Some of them are:

Monte Carlo methods

On a finite lattice the calculation of expectation values requires the
evaluation of finite dimensional integrals. This immediately suggests the
application of numerical methods. The first thing one would naively propose
is some simple numerical quadrature. To see why this approach is hopeless,
consider a typical lattice as used in recent calculations. With 40 lattice
points in every direction there are \(4 \cdot 40^4\) link variables. For gauge
group SU(3) this amounts to 81,920,000 real variables. This is intractable
for conventional quadratures and will remain so for the foreseeable future.
Therefore some statistical method is
required. Producing lattice gauge configurations just randomly turns out to
be extremely inefficient. The crucial idea to handle this problem is the
concept of importance sampling: for a given lattice action \(S\)
quadrature points \(x_i\) are generated with a probability
\[
p(x_i) \sim \exp\{-S(x_i)\}.
\]
This provides us with a large number of points in the important regions
of the integral, improving the accuracy drastically.

The Monte Carlo method consists in producing a sequence of configurations
\(U^{(1)} \rightarrow U^{(2)} \rightarrow U^{(3)} \rightarrow \dots\) with the
appropriate probabilities in a statistical way. This is of course done on a
computer. An update is a step where a single link variable \(U_{x\mu}\)
is changed, whereas a sweep implies that one goes once through the
entire lattice, updating all link variables. A commonly used technique for
obtaining updates is the Metropolis algorithm.
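A minimal sketch of Metropolis importance sampling, reduced to a single degree of freedom with a toy action \(S(x) = x^2/2\) (all parameters are invented for illustration); a real simulation applies the same accept/reject step link by link:

```python
import numpy as np

rng = np.random.default_rng(42)

def S(x):
    return 0.5 * x**2   # toy one-variable "action": p(x) ~ exp(-x^2/2)

x, samples = 0.0, []
for sweep in range(20000):
    x_new = x + rng.uniform(-1.5, 1.5)            # proposed update
    if rng.random() < np.exp(S(x) - S(x_new)):    # Metropolis accept/reject
        x = x_new
    samples.append(x)

samples = np.array(samples[2000:])   # discard thermalization
print(samples.mean(), (samples**2).mean())   # roughly 0 and 1
```

The accept/reject rule guarantees that the chain samples configurations with probability \(\propto \exp(-S)\ ,\) exactly the importance sampling described above.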

An important feature of this statistical way of evaluation is the existence
of statistical errors. The result of such a calculation is usually
presented in the form
\[
\langle A \rangle = \bar{A} \pm \sigma_{\bar{A}},
\]
where the statistical error of \(\bar{A}\) decreases with the number \(n\) of
configurations as
\[
\sigma_{\bar{A}} \sim \frac{1}{n^{1/2}}.
\]
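This \(1/\sqrt{n}\) behaviour is easy to demonstrate with independent samples (a toy example that ignores autocorrelations, which in practice enlarge the error):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard error of the mean of n independent unit-variance samples ~ 1/sqrt(n)
errs = {}
for n in (100, 10000):
    means = [rng.standard_normal(n).mean() for _ in range(2000)]
    errs[n] = np.std(means)
    print(n, errs[n])   # about 0.1 for n = 100 and 0.01 for n = 10000
```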

Error sources

The results obtained by means of the Monte Carlo method differ from the
desired physical results by different sorts of errors. The most important
error sources are

statistical errors: due to the finite number of configurations in the Monte Carlo calculation, \(\sim 1 / n^{1/2}\ ,\)

Continuum limit

As one is only able to perform calculations at finite lattice spacing, it is
an important issue to get the extrapolation process to the continuum limit
under control. Since the lattice spacing is the regulator of the theory, it
should be useful to apply renormalization group techniques to this problem.
Knowing the functional dependence of the bare coupling \(g_0\) on the
regulator, in other words solving the renormalization group equation, we
should know how to vary the bare coupling of our theory in order to reach a
continuum limit. Let us discuss this idea in more detail.

In the continuum limit the lattice spacing \(a\) is supposed to go to zero,
while physical masses \(m\) should approach a finite limit. The lattice
spacing, however, is not a dimensionless quantity, therefore we have to fix
some mass scale \(m\ ,\) e.g. some particle mass, and consider the limit
\(a m \rightarrow 0\ .\) The inverse of that,
\[
\frac{1}{am} \equiv \xi,
\]
can be regarded as a correlation length. In the continuum limit \(\xi\)
has to go to infinity; a point in parameter space where this happens is
called a critical point of the theory.
In Figure 7 this is illustrated on a two-dimensional lattice
with different correlation lengths.

In pure gauge theory, there is a single, dimensionless bare coupling \(g_0\)
and \(am\) is clearly a function of \(g_0\ .\) In order to approach the
continuum limit, we have to vary \(g_0\) such that \(am \rightarrow 0\ .\) How
this is done is controlled by a renormalization group equation:
\[
-a \frac{\partial g_0}{\partial a} = \beta_{LAT}(g_0) =
-\beta_0 g_0^3 - \beta_1 g_0^5 + \dots,
\]
where the first term of the expansion is
\[
\beta_0 = \frac{11}{3}\, N\, \frac{1}{16\pi^2}.
\]
In the perturbative regime of \(g_0\) this equation implies that for
decreasing \(am\) the bare coupling \(g_0\) also decreases, coming ever
closer to zero. Hence the continuum limit is associated with the limit
\[
g_0 \rightarrow 0 \qquad \mbox{(continuum limit).}
\]
The solution of the renormalization group equation up to second order in
\(g_0\) is
\[
a = \Lambda^{-1}_{LAT}\ \exp \left(-\frac{1}{2\beta_0 g_0^2}\right)
\ (\beta_0 g_0^2)^{-\frac{\beta_1}{2\beta_0^2}}\ \{1+ \mathcal{O}(g_0^2) \},
\]
where the lattice \(\Lambda\)-parameter \(\Lambda_{LAT}\) appears.
Solving for \(g_0\) yields
\[
g_0^2 = \frac{-1}{\beta_0 \log{a^2\Lambda_{LAT}^2}} + \dots,
\]
which again reveals the vanishing of \(g_0\) in the continuum limit:
\[
g_0^2 \rightarrow 0 \quad \mbox{for} \ a \rightarrow 0.
\]
We can also observe that
\[\tag{5}
am = C\, \exp \left( -\frac{1}{2\beta_0 g_0^2}\right) \cdot (\dots),
\]

which shows the non-perturbative origin of the mass \(m\ .\)
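The asymptotic-scaling formula is easy to evaluate numerically. The sketch below uses the quoted \(\beta_0\) for SU(3) together with the standard two-loop coefficient \(\beta_1 = \frac{34}{3}\, N^2 (16\pi^2)^{-2}\) (not given in the text) and sets \(\Lambda_{LAT} = 1\ ,\) so the units are arbitrary:

```python
import numpy as np

N = 3
b0 = (11.0 / 3.0) * N / (16.0 * np.pi**2)
b1 = (34.0 / 3.0) * N**2 / (16.0 * np.pi**2)**2   # standard two-loop value

def a_of_g0(g0, Lambda=1.0):
    """Lattice spacing from the two-loop renormalization group solution."""
    return (np.exp(-1.0 / (2.0 * b0 * g0**2))
            * (b0 * g0**2)**(-b1 / (2.0 * b0**2)) / Lambda)

for g0 in (1.2, 1.0, 0.8):
    print(g0, a_of_g0(g0))   # the lattice spacing shrinks as g0 decreases
```

The exponential factor dominates: a modest decrease of \(g_0\) shrinks \(a\) by orders of magnitude, which is why approaching the continuum limit is numerically so demanding.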

These considerations, based on the perturbative \(\beta\)-function, motivate
the following hypothesis: the continuum limit of a gauge theory on a lattice
is to be taken at \(g_0 \rightarrow 0\ .\) Moreover, we expect that it involves
massive interacting glueballs and static quark confinement.

The scenario for approaching the continuum limit then is as follows.
Calculating masses in lattice units, i.e. numbers \(am\ ,\) and decreasing
\(g_0\ ,\) we should reach a region where dimensionless quantities \(am\)
follow a behaviour as given by Eq.(5),
which is called asymptotic scaling.

For mass ratios it can be shown that the exponential dependence on \(1/g_0^2\)
cancels out and it is thought that near the continuum limit
\[
\frac{m_1}{m_2} = \mbox{const.} \times (1+ \mathcal{O}(a^p))
\]
for some integer \(p\ .\)
Such a behaviour, \(m_1 / m_2 \approx\) const., is called scaling. In numerical simulations scaling of various physical quantities has been established for lattice gauge theories, lattice QCD and other models, whereas confirmation of asymptotic scaling is much more demanding.