The gist of the RG is this group property: as the scale μ varies, the theory exhibits a self-similar replica of itself, and any scale can be accessed from any other scale by group action, that is, by a formal conjugacy of couplings[3] in the mathematical sense (Schröder's equation).

On the basis of this (finite) group equation, Gell-Mann and Low then focussed on infinitesimal transformations, and invented a computational method based on a mathematical flow function ψ(g) = G d/(∂G/∂g) of the coupling parameter g, which they introduced. Like the function h(e) of Stueckelberg and Petermann, their function determines the differential change of the coupling g(μ) with respect to a small change in energy scale μ through a differential equation, the renormalization group equation:

∂g / ∂ln(μ) = ψ(g) = β(g) .

The modern name, the beta function, was introduced by C. Callan and K. Symanzik in the early 1970s. Since it is a mere function of g, integration in g of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function G in this perturbative approximation. The renormalization group prediction (cf. the Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine-structure "constant" of QED was measured to be about 1/127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1/137. (Early applications to quantum electrodynamics are discussed in the influential 1959 book of Nikolay Bogolyubov and Dmitry Shirkov.[4])
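
To make the running concrete, here is a minimal numerical sketch (ours, not taken from the cited works) that integrates the one-loop QED beta function dα/d ln μ = 2α²/(3π), keeping only the electron loop; reproducing the measured 1/127 would additionally require the loops of all charged fermions, so this toy calculation deliberately undershoots it:

```python
import math

# One-loop QED beta function for the fine-structure constant alpha,
# with only the electron loop kept: d(alpha)/d(ln mu) = 2*alpha^2/(3*pi).
def beta(alpha):
    return 2.0 * alpha**2 / (3.0 * math.pi)

alpha = 1.0 / 137.036             # low-energy value, set at the electron mass
mu, mu_target = 0.000511, 200.0   # GeV: from m_e up to LEP-era energies
steps = 100_000
dlnmu = math.log(mu_target / mu) / steps

for _ in range(steps):            # simple Euler integration of the RG equation
    alpha += beta(alpha) * dlnmu

print(f"1/alpha(200 GeV) ~ {1.0/alpha:.1f}")  # ~134 with the electron loop alone
```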

The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory (although the RG exists independently of the infinities). This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Sin-Itiro Tomonaga, who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator Λ (which could ultimately be taken to be infinite; the infinities reflect the pileup of contributions from an infinity of degrees of freedom at infinitely high energy scales). The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured; as a result, all observable quantities end up being finite, even for an infinite Λ. Gell-Mann and Low realized in these results that, while, infinitesimally, a tiny change in g is given by the above RG equation through ψ(g), the self-similarity is expressed by the fact that ψ(g) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ. Consequently, the above renormalization group equation may be solved for (G and thus) g(μ).
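
Since ψ(g) carries no explicit μ-dependence, the RG equation separates, and a standard one-line integration (a textbook step, not specific to any of the cited works) gives the trajectory implicitly:

ln(μ/μ₀) = ∫ dg′/ψ(g′) , with the integral running from g(μ₀) to g(μ),

so a perturbative estimate of ψ (equivalently β) fixes g(μ) up to a single initial condition g(μ₀).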

A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilatation group of conventional renormalizable theories, came from condensed matter physics. Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group[5]. The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances.

This approach covered the conceptual point and was given full computational substance[6] in the extensive and important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1974, as well as by the preceding seminal development of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.

Meanwhile, the RG in particle physics had been reformulated in more practical terms by C. G. Callan and K. Symanzik in 1970.[7] The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilatation) symmetry in a field theory. (Remarkably, quantum mechanics itself can induce mass through the trace anomaly and the running coupling.) Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model.

In 1973, it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function. This means that, starting from an initial high-energy value, the coupling grows as the energy is lowered and blows up (diverges) at a special value of μ. This special value is the scale of the strong interactions, μ = Λ_QCD, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
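
A minimal numerical sketch of both behaviours, using the standard one-loop formula α_s(Q) = 12π/((33 − 2n_f) ln(Q²/Λ²)); the values Λ_QCD = 0.2 GeV and n_f = 5 here are illustrative assumptions, not fitted parameters:

```python
import math

# One-loop QCD running coupling:
#   alpha_s(Q) = 12*pi / ((33 - 2*nf) * ln(Q^2 / Lambda^2))
LAMBDA_QCD = 0.2   # GeV, illustrative
NF = 5             # active quark flavours, illustrative

def alpha_s(Q):
    return 12.0 * math.pi / ((33 - 2 * NF) * math.log(Q**2 / LAMBDA_QCD**2))

for Q in (0.25, 1.0, 10.0, 91.2, 1000.0):   # GeV
    print(f"alpha_s({Q:7.2f} GeV) = {alpha_s(Q):.3f}")
# The coupling falls logarithmically at high Q (asymptotic freedom)
# and blows up as Q approaches LAMBDA_QCD from above (the Landau-type pole).
```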

Momentum-space RG also became a highly developed tool in solid state physics, but its reach was limited by its extensive use of perturbation theory, which prevented it from succeeding for strongly correlated systems. For studying such strongly correlated systems, variational approaches are a better alternative. During the 1980s, some real-space RG techniques were developed along these lines, the most successful being the density-matrix RG (DMRG), developed by S. R. White and R. M. Noack in 1992.

In string theory, conformal invariance of the string world-sheet is a fundamental symmetry: β = 0 is a requirement. Here, β is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry. The RG is of fundamental importance to string theory and theories of grand unification.

It is also the modern key idea underlying critical phenomena in condensed matter physics.[8] Indeed, the RG has become one of the most important tools of modern physics. It is often used[9] in combination with the Monte Carlo method.

This section pedagogically introduces a picture of the RG which may be easiest to grasp: the block-spin RG, devised by Leo P. Kadanoff in 1966.

Let us consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure. Let us assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature T. The strength of their interaction is measured by a certain coupling constant J. The physics of the system will be described by a certain formula, say H(T, J).

Now we proceed to divide the solid into blocks of 2×2 squares; we attempt to describe the system in terms of block variables, i.e., variables which describe the average behavior of the block. Also, let us assume that, due to a lucky coincidence, the physics of block variables is described by a formula of the same kind, but with different values for T and J: H(T′, J′). (This isn't exactly true, of course, but it is often approximately true in practice, and that is good enough, to a first approximation.)

Perhaps the initial problem was too hard to solve, since there were too many atoms. Now, in the renormalized problem we have only one fourth of them. But why should we stop now? Another iteration of the same kind leads to H(T″, J″), and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.

Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long-term behaviour of the RG transformation which took (T, J) → (T′, J′) and (T′, J′) → (T″, J″). Usually, when iterated many times, this RG transformation leads to a certain number of fixed points.

Let us be more concrete and consider a magnetic system (e.g., the Ising model), in which the coupling J measures the tendency of neighbouring spins to be parallel. The configuration of the system is the result of the tradeoff between the ordering J term and the disordering effect of temperature. For many models of this kind there are three fixed points:

T = 0 and J → ∞. This means that, at the largest size, temperature becomes unimportant, i.e., the disordering factor vanishes. Thus, at large scales, the system appears to be ordered. We are in a ferromagnetic phase.

T → ∞ and J → 0. Exactly the opposite: temperature dominates, and the system is disordered at large scales.

A nontrivial point between them, T = T_c and J = J_c. At this point, changing the scale does not change the physics, because the system is in a fractal state. It corresponds to the Curie phase transition, and is also called a critical point.

So, if we are given a certain material with given values of T and J, all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair (T, J) until we find the corresponding fixed point.
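
As a concrete illustration, the 1D Ising chain admits an exact block transformation: decimating every other spin maps the dimensionless coupling K = J/T to K′ = ½ ln cosh(2K). Unlike the 2D model above, the 1D chain has no nontrivial fixed point, so every finite coupling flows to the disordered fixed point K = 0; the sketch below (ours, for illustration) simply iterates that recursion:

```python
import math

def decimate(K):
    """Exact decimation RG step for the 1D Ising chain:
    summing over every other spin maps K -> 0.5*ln(cosh(2K))."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0                              # initial dimensionless coupling J/T
for step in range(12):
    print(f"step {step:2d}: K = {K:.6f}")
    K = decimate(K)                  # one coarse-graining (blocking) step
# K decreases monotonically toward the trivial fixed point K = 0:
# at any finite temperature the 1D chain looks disordered at large scales.
```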

In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {s_i} and a certain set of coupling constants {J_k}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.

Now we consider a certain blocking transformation of the state variables {s_i} → {s̃_i}; the number of s̃_i must be lower than the number of s_i. Now let us try to rewrite the Z function only in terms of the s̃_i. If this is achievable by a certain change in the parameters, {J_k} → {J̃_k}, then the theory is said to be renormalizable.

The change in the parameters is implemented by a certain beta function, {J̃_k} = β({J_k}), which is said to induce a renormalization flow (or RG flow) on the J-space. The values of J under the flow are called running couplings.

As was stated in the previous section, the most important information in the RG flow is its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points.

Since the RG transformations in such systems are lossy (i.e., the number of variables decreases; compare, in a different context, lossy data compression), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a semigroup.

Let us consider a certain observable A of a physical system undergoing an RG transformation. The magnitude of the observable as the length scale of the system goes from small to large may be (a) always increasing, (b) always decreasing, or (c) other. In the first case, the observable is said to be relevant; in the second, irrelevant; and in the third, marginal.

A relevant observable is needed to describe the macroscopic behaviour of the system; an irrelevant observable is not. Marginal observables may or may not need to be taken into account. A remarkable fact is that most observables are irrelevant, i.e., the macroscopic physics is dominated by only a few observables in most systems. In other terms: microscopic physics contains ≈ 10²³ (Avogadro's number) variables, and macroscopic physics only a few.
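
In the standard formulation (a conventional step, not specific to this article's references), this classification follows from linearizing the flow around a fixed point {J*_k}: writing J_k = J*_k + δJ_k, one RG step with scale factor b acts linearly,

δJ′_k = Σ_l T_kl δJ_l , with eigenvalues conventionally written as b^(y_i),

and an eigendirection is relevant if y_i > 0 (it grows under coarse-graining), irrelevant if y_i < 0 (it shrinks), and marginal if y_i = 0.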

Before the RG, there was an astonishing empirical fact to explain: the coincidence of the critical exponents (i.e., the behaviour near a second-order phase transition) in very different phenomena, such as magnetic systems, the superfluid transition (lambda transition), alloy physics, etc. This was called universality, and it is successfully explained by the RG, which shows that the differences between all these phenomena are related to irrelevant observables.

Thus, many macroscopic phenomena may be grouped into a small set of universality classes, described by the set of relevant observables.

RG, in practice, comes in two main flavours. The Kadanoff picture explained above refers mainly to so-called real-space RG. Momentum-space RG, on the other hand, has a longer history despite its relative subtlety.[citation needed] It can be used for systems where the degrees of freedom can be cast in terms of the Fourier modes of a given field. The RG transformation proceeds by integrating out a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short length scales, the momentum-space RG results in an essentially similar coarse-graining effect as with real-space RG.

Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the true physics of our system being close to that of a free field system. In this case, we may calculate observables by summing the leading terms in the expansion. This approach has proved very successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.

As an example of the physical meaning of RG in particle physics, we will give a short description of charge renormalization in quantum electrodynamics (QED). Let us suppose we have a point positive charge of a certain true (or bare) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some pairs of (e.g.) electrons and positrons, which are annihilated very quickly. But in their short lives, the electron will be attracted by the charge, and the positron will be repelled. Since this happens continuously, these pairs effectively screen the charge as seen from afar. Therefore, the measured strength of the charge will depend on how closely our probe can approach it: we have a dependence of a certain coupling constant (the electric charge) on distance.

Momentum and length scales are related inversely, according to the de Broglie relation: the higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, momentum-space RG practitioners sometimes speak of integrating out high momenta or high energies from their theories.

In fact, this transformation is transitive: if one computes S_Λ′ from S_Λ and then computes S_Λ″ from S_Λ′, one obtains the same Wilsonian action as computing S_Λ″ directly from S_Λ.
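
In symbols (notation ours), writing T_{Λ→Λ′} for the coarse-graining map between cutoffs, transitivity is the semigroup composition law

T_{Λ→Λ″} = T_{Λ′→Λ″} ∘ T_{Λ→Λ′} for Λ ≥ Λ′ ≥ Λ″ ,

which is the momentum-space counterpart of iterating block-spin steps.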

The Polchinski ERGE involves a smooth UV regulator (cutoff). Basically, the idea is an improvement over the Wilson ERGE: instead of a sharp momentum cutoff, it uses a smooth cutoff. Essentially, we suppress contributions from momenta greater than Λ heavily. The smoothness of the cutoff, however, allows us to derive a functional differential equation in the cutoff scale Λ. As in Wilson's approach, we have a different action functional for each cutoff energy scale Λ. Each of these actions is supposed to describe exactly the same model, which means that their partition functionals have to match exactly.

In other words (for a real scalar field; generalizations to other fields are straightforward),

Z_Λ[J] = ∫ Dφ exp(−½ φ⋅R_Λ⋅φ − S_int Λ[φ] + J⋅φ)

and Z_Λ is really independent of Λ! We have used the condensed deWitt notation here. We have also split the bare action S_Λ into a quadratic kinetic part and an interacting part S_int Λ. This split most certainly isn't clean: the "interacting" part can very well also contain quadratic kinetic terms; in fact, if there is any wave function renormalization, it most certainly will. This can be somewhat reduced by introducing field rescalings. R_Λ is a function of the momentum p, and the quadratic term in the exponent is

−½ ∫ d^d p/(2π)^d φ̃*(p) R_Λ(p) φ̃(p)

when expanded. When p ≪ Λ, R_Λ(p)/p² is essentially 1. When p ≫ Λ, R_Λ(p)/p² becomes very large and approaches infinity. R_Λ(p)/p² is always greater than or equal to 1 and is smooth. Basically, this leaves the fluctuations with momenta less than the cutoff Λ unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is obviously a huge improvement over a sharp cutoff.
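
A minimal numerical check of these limits, using one smooth profile with the stated properties (R_Λ(p) = p² exp(p²/Λ²) is our illustrative choice, not one mandated by the Polchinski ERGE):

```python
import math

LAMBDA = 1.0   # UV cutoff scale in arbitrary units; illustrative choice

def R(p, lam=LAMBDA):
    """Smooth UV regulator: R(p)/p^2 -> 1 for p << lam,
    and R(p)/p^2 -> infinity for p >> lam."""
    return p**2 * math.exp(p**2 / lam**2)

for p in (0.01, 0.1, 1.0, 3.0, 5.0):
    print(f"p = {p:5.2f}: R(p)/p^2 = {R(p) / p**2:.4g}")
# The ratio stays ~1 below the cutoff and grows explosively above it,
# so high-momentum fluctuations are heavily suppressed in the path integral.
```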

The effective average action ERGE involves a smooth IR regulator (cutoff). The idea is to take into account all fluctuations down to an IR momentum scale k. The effective average action will be accurate for fluctuations with momenta larger than k. As the parameter k is lowered, the effective average action approaches the effective action, which includes all quantum and classical fluctuations. In contrast, for large k the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the effective action.

This is implemented by adding an infrared cutoff term ½ φ⋅R_k⋅φ to the action S, where R_k is a function of both k and p such that for p ≫ k, R_k(p) is very tiny and approaches 0, while for p ≪ k, R_k(p) ≳ k². R_k is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function, which is effectively the same thing as neglecting large-scale fluctuations. Using the condensed deWitt notation, the regulated generating functional is

exp(W_k[J]) = ∫ Dφ exp(−S[φ] − ½ φ⋅R_k⋅φ + J⋅φ)

where J is the source field. The Legendre transform of W_k ordinarily gives the effective action. However, the action that we started off with is really S[φ] + ½ φ⋅R_k⋅φ, and so, to get the effective average action, we subtract off ½ φ⋅R_k⋅φ. In other words,

Γ_k[φ] = J⋅φ − W_k[J] − ½ φ⋅R_k⋅φ

where J[φ] is obtained by inverting φ = δW_k/δJ.

As there are infinitely many choices of Rk, there are also infinitely many different interpolating ERGEs. Generalization to other fields like spinorial fields is straightforward.
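
For instance, one widely used smooth choice (our illustrative assumption, the so-called exponential regulator) is R_k(p) = p²/(exp(p²/k²) − 1), which tends to k² for p ≪ k and vanishes exponentially for p ≫ k:

```python
import math

K_SCALE = 1.0   # IR scale k in arbitrary units; illustrative choice

def R_k(p, k=K_SCALE):
    """Exponential IR regulator: ~k^2 for p << k, decays to 0 for p >> k."""
    return p**2 / (math.exp(p**2 / k**2) - 1.0)

for p in (0.05, 0.5, 1.0, 2.0, 4.0):
    print(f"p = {p:4.2f}: R_k(p) = {R_k(p):.4g}")
# Long-wavelength modes (p << k) acquire an effective mass ~k^2 and are
# suppressed, while short-wavelength modes (p >> k) are left untouched.
```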

Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but we suppress the IR contributions to the effective action whereas in the Polchinski ERGE, we fix the QFT once and for all but vary the "bare action" at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.

D. V. Shirkov: Evolution of the Bogoliubov Renormalization Group, 1999. arXiv:hep-th/9909024. A mathematical introduction and historical overview with a stress on group theory and applications in high-energy physics.

T. D. Lee: Particle Physics and Introduction to Field Theory. Harwood Academic Publishers, 1981, ISBN 3-7186-0033-1. Contains a concise, simple, and trenchant summary of the group structure, in whose discovery he was also involved, as acknowledged in Gell-Mann and Low's paper.

L. Ts. Adzhemyan, N. V. Antonov and A. N. Vasiliev: The Field Theoretic Renormalization Group in Fully Developed Turbulence. Gordon and Breach, 1999, ISBN 90-5699-145-0.

The same author: Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories. In: de Witt-Morette C., Zuber J.-B. (eds.), Proceedings of the NATO ASI on Quantum Field Theory: Perspective and Prospective, June 15–26, 1998, Les Houches, France. Kluwer Academic Publishers, NATO ASI Series C 530, pp. 375–388 (1999). Full text available in PostScript.