I have no idea what principle people follow when, having a Lagrangian, say for QED, they write down Lagrangians in the to-be-renormalized stage. There seems to be a motivation to make them look similar to the old Lagrangian before introducing the coupling-constant expansion. And why expand in $g$, not in other variables like $m$? Hence they write things like $m_{old}=c\cdot m_{new}$, which seems fairly conservative, because it doesn't introduce new terms, beyond maybe counterterms that look structurally like the old ones. But as far as I can see, the theory really just starts with the Lagrangian that contains the to-be-determined $Z$-expressions. You don't use the Lagrangian before that, do you? At least not beyond tree graphs. Therefore I think you could just begin with a bunch of terms, with objects that have to be fitted by renormalization. The theory effectively seems to start with the non-bare objects.

From all the possible 'unphysical numbers' in the expansions of the (finite number of) $Z$-terms, why does only the 'scale' $\mu$ survive? Does every scheme leave one number open, and if so, why? I don't understand what this object $\mu$ is at all.

2 Answers

For most practical purposes, it doesn't matter precisely which Lagrangian you work with, because most of the physical values you're computing only depend on the large-distance asymptotics of the correlation functions, i.e., on the universality class of the Lagrangian. You can add any reasonably small non-renormalizable perturbation to your short-distance action, and you won't noticeably change the asymptotics you're computing.

It's a convenient ansatz though, because it means that you have a lot less stuff to keep track of. You're saying that the short distance physics has pretty much the same character as the long distance physics, up to rescalings. If you can satisfy the ansatz, you get nicer computations. But of course, you can't always satisfy the ansatz. If you're seeing non-renormalizable interactions in the long distance physics, you should expect to see new physics arising before you get to much smaller distance scales.
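The power counting behind ''reasonably small'' and ''new physics arising'' can be sketched as follows (a schematic effective-field-theory estimate, not spelled out above): an operator $\mathcal{O}_i$ of mass dimension $d_i>4$ enters with a coefficient suppressed by some scale $\Lambda$, and its effect on observables at energies $E\ll\Lambda$ is power-suppressed:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\text{renormalizable}}
  \;+\; \sum_i \frac{c_i}{\Lambda^{\,d_i-4}}\,\mathcal{O}_i ,
\qquad
\text{relative effect at energy } E \;\sim\; c_i \left(\frac{E}{\Lambda}\right)^{d_i-4}.
```

Such a term barely changes the long-distance asymptotics, but its effect grows as $E\to\Lambda$, which is where one expects new physics to appear.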

In 4 spacetime dimensions, an unrenormalized Lagrangian is meaningless, but serves as a template for the construction of a family of renormalizable theories. The reason is that the interactions are not relatively compact compared to the Hamiltonian of the free theory, so one can only add an infinitesimal amount of them.

Thus one never has, as in classical mechanics, ''a Lagrangian'', but a family of basic operators whose linear combinations give the bare Lagrangians.

Because the coupling constant(s) $g$ must be infinitesimal, the expansion must be in terms of $g$. To proceed, one makes all coefficients arbitrary functions of $g$, regularizes to get finite approximations, and then adjusts the functions in such a way that a limit in which the regularization disappears can be performed.
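Concretely, in QED with dimensional regularization ($d=4-2\epsilon$) this is conventionally written as follows (a standard sketch; the $Z$'s are the arbitrary functions being adjusted):

```latex
e_0 = \mu^{\epsilon} Z_e\, e, \qquad
m_0 = Z_m\, m, \qquad
\psi_0 = Z_2^{1/2}\,\psi, \qquad
A_0^{\mu} = Z_3^{1/2} A^{\mu},
\qquad
Z_i(e,\epsilon) = 1 + \sum_{k\ge 1} z_i^{(k)}(\epsilon)\, e^{2k}.
```

The coefficients $z_i^{(k)}(\epsilon)$ are then fixed, up to their finite parts (the choice of scheme), so that correlation functions of the renormalized fields stay finite as $\epsilon\to 0$.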

This determines the coefficient functions to a large extent, leaving a small vector space of coefficient functions.

In renormalizable theories, this space is finite-dimensional and can be parameterized by the renormalized masses and coupling constants. Thus one gets a few-parameter family of physical theories, and the parameters can then be adjusted to match experimental data and determine the ''correct'' theory.

But the space of parameters always has one more dimension than the space of theories, because of the particular way regularization and renormalization cooperate. Thus one gets 1-parameter families of equivalent theories, which are identified via an equivalence relation defined by the renormalization group equations.
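The surviving extra parameter is the scale $\mu$; the equivalence relation along each 1-parameter family is generated by the renormalization group equations, schematically:

```latex
\mu \frac{d g(\mu)}{d\mu} = \beta\big(g(\mu)\big),
\qquad
\mu \frac{d m(\mu)}{d\mu} = -\gamma_m\big(g(\mu)\big)\, m(\mu).
```

Two parameter points $(\mu_1, g_1, m_1)$ and $(\mu_2, g_2, m_2)$ connected by this flow describe the same physical theory, which is why $\mu$ itself is not an observable.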

In nonrenormalizable theories, this space is infinite-dimensional, but at any fixed energy scale, only finitely many coefficients must be taken into account, and again, the parameters can be adjusted to match experimental data.
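Fermi's theory of the weak interaction is the textbook example of this truncation: the leading non-renormalizable term is a dimension-6 four-fermion operator, and at energies far below the weak scale it is the only coefficient one needs to fit:

```latex
\mathcal{L}_{\text{Fermi}} \;\supset\;
-\frac{G_F}{\sqrt{2}}\,
\big(\bar\psi_1 \gamma^{\alpha}(1-\gamma_5)\psi_2\big)
\big(\bar\psi_3 \gamma_{\alpha}(1-\gamma_5)\psi_4\big),
\qquad
G_F \approx 1.166\times 10^{-5}\ \mathrm{GeV}^{-2},
```

with $G_F$ determined experimentally from muon decay; operators of still higher dimension only become relevant at higher energies or higher experimental accuracy.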

Thus there is nothing arbitrary in the procedure, except for the particular way the coefficients are represented in specific calculations, and the particular way the solution set is parameterized. How to do these steps is more or less accidental, and chosen partly for historical reasons, partly because it simplifies subsequent calculations.