-------------------------
S8a. Why renormalization?
-------------------------
Quantum field theory is whatever particle physicists define it to be,
and this includes many working interacting QFTs. But it is not a theory
in the mathematical sense. This is due to the freedom they take
when performing the renormalization needed to remove formal
infinities from their theories.
Finite renormalization just refers to the fact that the coefficients
in a Hamiltonian are not directly measurable but only computable as
functions of some key observables. It is simply a consequence of the
historical accident that these coefficients were given names (masses,
charges) that sound like real properties, while they are in fact only
indirectly related to them.
Thus in solid state physics one gets bare masses of quasiparticles
from the coefficients of a Hamiltonian, but these are just parameters,
related to the measurable masses by some transformation, which is
dubbed finite renormalization.
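This finite transformation can be sketched numerically. As a toy
illustration (the 1D tight-binding model, the coefficient names t and a,
and the units are my own choices here, not taken from the text above):
the hopping coefficient t in the Hamiltonian is a bare parameter, while
the measurable effective mass is the inverse curvature of the dispersion
at the band minimum; the two are related by a finite, invertible map.

```python
import math

# 1D tight-binding dispersion: E(k) = -2 t cos(k a), with hbar = 1.
# The Hamiltonian coefficient t is the "bare" parameter; the measurable
# effective mass m* is read off from the curvature at the band minimum:
#   m* = 1 / E''(0) = 1 / (2 t a^2).
# This is a finite renormalization: a finite map between a Hamiltonian
# coefficient and an observable, invertible without any limiting process.

def effective_mass(t, a):
    """Measurable effective mass from the bare hopping coefficient t."""
    return 1.0 / (2.0 * t * a**2)

def hopping_from_mass(m_star, a):
    """Invert the finite transformation: recover t from the observable."""
    return 1.0 / (2.0 * m_star * a**2)

t, a = 0.5, 1.0
m_star = effective_mass(t, a)
t_back = hopping_from_mass(m_star, a)   # recovers the bare coefficient

# Cross-check the analytic curvature formula numerically.
def E(k):
    return -2.0 * t * math.cos(k * a)

h = 1e-4
curvature = (E(h) - 2.0 * E(0.0) + E(-h)) / h**2   # approx. 2 t a^2
assert abs(1.0 / curvature - m_star) < 1e-6
assert abs(t_back - t) < 1e-12
print(f"bare t = {t}, measurable m* = {m_star}")
```

The point of the sketch is only that the map t -> m* is finite and
invertible; no cutoff or limit is involved.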
Infinite renormalization is needed already in ordinary QM when the
potential gets too singular, for example with delta-function potentials
that model contact interactions. This is hardly ever discussed in
textbooks but is important for understanding. See, e.g., hep-th/9710061,
or Chapter I.3 in
R. Jackiw,
Diverse topics in theoretical and mathematical physics,
World Scientific, Singapore 1995.
A paper by Dimock (Comm. Math. Phys. 57 (1977), 51-66) shows rigorously
that, at least in 2 dimensions, delta-function potentials define
the correct nonrelativistic limit of local scalar field theories.
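The mechanism can be sketched numerically for the 2D attractive
delta potential. Everything below is an illustration added here (units
hbar = m = 1); the formulas follow from the standard cutoff-regularized
bound-state condition 1 = (lambda/2 pi) ln(1 + Lambda^2/(2 E_B)).
Holding the measurable binding energy E_B fixed, the bare coupling must
run to zero logarithmically as the cutoff Lambda is removed.

```python
import math

E_B = 1.0  # fixed measurable binding energy (units: hbar = m = 1)

def bare_coupling(cutoff, E_B):
    """Bare coupling lambda(Lambda), tuned so the cutoff-regularized
    2D attractive delta potential has a bound state exactly at E_B.
    From 1 = (lambda / 2 pi) * ln(1 + Lambda^2 / (2 E_B))."""
    return 2.0 * math.pi / math.log(1.0 + cutoff**2 / (2.0 * E_B))

def binding_energy(lam, cutoff):
    """Invert: the binding energy predicted by the regularized theory."""
    return 0.5 * cutoff**2 / math.expm1(2.0 * math.pi / lam)

for cutoff in (1e2, 1e4, 1e8):
    lam = bare_coupling(cutoff, E_B)
    # The observable stays fixed while the bare coupling runs to zero:
    print(f"Lambda = {cutoff:8.0e}  lambda = {lam:.4f}  "
          f"E_B recovered = {binding_energy(lam, cutoff):.6f}")
```

This is exactly the pattern described below: at every finite cutoff
all quantities are finite, but the bare coupling has no limit other
than zero as the cutoff goes to infinity, while the measurable binding
energy is held fixed throughout.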
In mathematical terms, infinite renormalization means that the
interaction is a limit of regularized interactions related to fixed
measurable quantities by finite transformations which, however,
diverge when the regularization is removed. The limiting interaction
nevertheless remains well-defined as a densely defined operator on
Hilbert space.
For exactly the same reason, infinite renormalization is needed in
relativistic QFT, since local fields imply singular interactions. But
in 4 dimensions, the limiting process is not well understood
mathematically.
In 1+1 dimensions, everything is well-defined mathematically
in terms of rigorous renormalization theory, for arbitrary polynomial
interactions. (See the book by Glimm and Jaffe.)
The 1+2-dimensional case is significantly more difficult and needs
a restriction on the polynomial degree. There is a nontrivial
renormalization theory for Phi^4 theory, which is mathematically
well-understood.
Only the 1+3 dimensional case is at present completely open.
What is loosely called 'infinite' in traditional discussions of
renormalization means, strictly speaking, only that for the bare
quantities, the limit where a cutoff goes to infinity does not exist.
At any finite value of the cutoff, both the Hamiltonian and the
counterterms are finite. If it were not so, one couldn't do
renormalization and get something finite.
The problem solved by Tomonaga, Schwinger and Feynman, for which they
received the Nobel prize, was to discover how to
produce a well-defined limiting theory as the cutoff goes to infinity,
one that allows one to extract finite values for quantities comparable
with experiment.
All renormalization to this day follows the same pattern:
one does certain formal computations at finite cutoff and, at some
point where it no longer does harm, moves the cutoff to infinity,
being left with approximate formulas (at some fixed or variable
loop order) that no longer contain a cutoff and have finite values.
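This pattern can be sketched with a toy logarithmically divergent
integral (my own illustration, not taken from any particular QFT):
the cutoff integral of k/(k^2 + m^2) grows without bound as the cutoff
is removed, but once the answer is expressed relative to a reference
value at a scale mu, the subtracted quantity has a finite limit, and
the cutoff can be sent to infinity at that stage.

```python
import math

def I(cutoff, m):
    """Cutoff-regularized toy 'loop integral'
    int_0^Lambda k dk / (k^2 + m^2) = (1/2) ln(1 + (Lambda/m)^2),
    which diverges logarithmically as the cutoff is removed."""
    return 0.5 * math.log(1.0 + (cutoff / m)**2)

m, mu = 1.0, 2.0   # 'physical' mass and a reference (subtraction) scale

for cutoff in (1e2, 1e5, 1e10):
    bare = I(cutoff, m)                          # grows without bound
    subtracted = I(cutoff, m) - I(cutoff, mu)    # cutoff dependence cancels
    print(f"Lambda = {cutoff:7.0e}  bare = {bare:8.3f}  "
          f"subtracted = {subtracted:.6f}")

# The subtracted quantity converges to ln(mu/m) = ln 2 as the cutoff
# goes to infinity, so the limit can safely be taken at this stage.
```

At every finite cutoff both terms of the subtraction are finite, in
line with the remarks above; only the bare quantity fails to have a
limit.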