The thermal interpretation of quantum mechanics

Arnold Neumaier (Vienna)

The thermal interpretation of quantum physics allows a consistent and deterministic relativistic quantum description of the universe from the smallest to the largest levels of modeling, including its classical aspects, without having to introduce any change in the formal apparatus of quantum physics.

Essential use is made of the fact that everything physicists measure is measured in a thermal environment for which statistical thermodynamics is relevant. This is reflected in the characterizing adjective ''thermal'' for the interpretation.

For microscopic experiments, the thermal interpretation claims that particles (photons, electrons, alpha particles, etc.) are fictions, simplifications appropriate under special circumstances only. In reality one has instead beams (states of the electron field, an effective alpha particle field, etc., concentrated along a small neighborhood of a mathematical curve) with approximately known properties (charge densities, spin densities, energy densities, etc.). If one places a detector into the path of a beam, one measures some of these densities - accurately if the densities are high, erratically and inaccurately when they are very low.

When performing on a quantum system a measurement of an operator A with a physical meaning, one gets an approximation for its value. The thermal interpretation treats this value as an approximation not of an eigenvalue of A but of the q-expectation of A, the formal expectation value defined as the trace of the product of A with a density operator describing the state of the system. This deviation from the tradition has important theoretical implications.

The deterministic dynamics of the complete collection of q-expectations constructible from quantum fields, when restricted to the set of measurable ones, gives rise to all the stochastic features observed in practice.

The official description of the thermal interpretation of quantum physics can be found in my 2019 papers listed at the end of this page.

Below is some slightly older (possibly less accurate) expository material, and some links to even older stuff.

The thermal foundations are easily stated and motivated, since they are essentially the foundations used everywhere for uncertainty quantification, just slightly extended to accommodate quantum effects by not requiring that observables commute. The state of our solar system, when the latter is modeled by quantum fields, completely specifies what happens in any small space-time region within the planetary system - namely through the n-point correlation functions with arguments restricted to this region. There is nothing else in quantum field theory; what we can observe is contained in the least oscillating contributions to these correlation functions. The spatial and temporal high frequency part is unobservable due to the limited resolution of our instruments.

Unlike the thermal interpretation, statistical interpretations and interpretations of Copenhagen flavor apply only to the results of infinite-time few-particle scattering calculations derived from quantum field theory. They cannot apply to the finite time quantum field theory of our solar system since there is no external classical apparatus for measuring this system, and only a single realization of the solar system is experimentally accessible to us.

The thermal interpretation of quantum mechanics says that, consistent with statistical thermodynamics, an expectation (ensemble mean) should not be interpreted as a statistical average over many realizations (except when the statistical context is immediate). Instead, it should be interpreted as an in principle approximately measurable quantity. Therefore the notion of ensemble is to be understood not necessarily (and in the case of a quantum field theory of our solar system never) as an actual repetition by repeated preparation. It should be understood instead in the original sense used by Gibbs, who coined the notion of an ensemble as a collection of imagined copies of which only one is actually realized; this gave him an intuitive excuse to use the statistical formalism to describe a single thermodynamic system such as a single piece of metal. What is conventionally called an expectation becomes in the thermal interpretation simply the uncertain value.

According to the thermal interpretation of quantum mechanics, a description of the universe requires a mathematical framework consisting of a Hilbert space carrying a unitary representation of the Poincaré group to account for conservative dynamics and relativity, a representation of the standard model plus some form of gravity (not yet fully known) to describe the fundamental field content, density operators ρ encoding Heisenberg states, the formula Ã = <A> := tr ρA defining the uncertain value (generally called the expectation value) of the operator A, and for its interpretation the following simple rule generalizing statistical intuition:

Uncertainty principle: A Hermitian quantity A whose uncertainty σA, the square root of <(A-Ã)²>, is much less than |Ã| has the value à within an uncertainty of σA.
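In matrix form, the uncertain value and its uncertainty are directly computable from a density operator. The following is a minimal sketch for a single qubit; the operator and state are illustrative choices, not taken from the text:

```python
import numpy as np

# Hermitian quantity A (illustrative choice: the Pauli matrix sigma_z)
A = np.array([[1, 0], [0, -1]], dtype=complex)

# Density operator rho for a mostly spin-up mixed state (illustrative)
rho = np.array([[0.9, 0.0], [0.0, 0.1]], dtype=complex)

# Uncertain value (q-expectation)  A~ = <A> := tr(rho A)
A_tilde = np.trace(rho @ A).real

# Uncertainty  sigma_A = sqrt(<(A - A~)^2>)
dA = A - A_tilde * np.eye(2)
sigma_A = np.sqrt(np.trace(rho @ dA @ dA).real)

print(A_tilde)   # 0.8
print(sigma_A)   # 0.6
```

Here σA = 0.6 is not small compared to |Ã| = 0.8, so by the rule above this state does not assign sigma_z a sharp value; for macroscopic quantities the ratio σA/|Ã| is instead tiny.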

From this rule one can derive under appropriate conditions the following

Measurement rule: Upon measuring a Hermitian operator A, the measured result will be approximately Ã, with an uncertainty at least of the order of σA, the square root of <(A-Ã)²>. If the measurement can be sufficiently often repeated (on a system with the same or a sufficiently similar state) then σA will be a lower bound on the standard deviation of the measurement results.
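The repeatability claim can be checked numerically in the simplest case, a binary measurement where Born-rule sampling applies: the sample mean of the results approaches à and the sample standard deviation approaches σA. A minimal sketch, with an illustrative state:

```python
import numpy as np

rng = np.random.default_rng(0)

# A = sigma_z measured on a qubit with density operator rho = diag(0.9, 0.1);
# A is diagonal, so the eigenvalues +1 and -1 occur with probabilities
# 0.9 and 0.1 (Born-rule sampling).
p_up = 0.9
A_tilde = p_up - (1 - p_up)                       # q-expectation tr(rho A) = 0.8
sigma_A = np.sqrt(p_up * (1 - A_tilde)**2
                  + (1 - p_up) * (-1 - A_tilde)**2)  # uncertainty = 0.6

# Many repeated measurements on identically prepared systems
samples = rng.choice([1, -1], size=100_000, p=[p_up, 1 - p_up])

print(samples.mean())  # close to A_tilde = 0.8
print(samples.std())   # close to sigma_A = 0.6
```

Each single result (+1 or -1) deviates from à = 0.8 by at least 0.2, of the order of σA, while the statistics over many repetitions reproduce à and σA.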

Physicists doing quantum mechanics (even those adhering to the shut-up-and-calculate mode of working) use this rule routinely and usually without further justification. The rule applies universally. No probabilistic interpretation is needed, so it applies also to single systems. Born's famous rule turns out to be derivable under special circumstances only, namely those where the Born rule is indeed valid in practice. (Though usually invoked as universally valid, Born's rule has severe limitations. It neither applies to position measurements nor to photodetection, nor to measurement of energies, just to mention the most conspicuous misfits.)

Actually, the above measurement rule should be considered as a definition of what it means to have a device measuring A. As such, it creates the foundation of measurement theory. In order that a macroscopic quantum device qualifies for the description ''it measures A'', it must be either derivable from quantum mechanics or checkable by experiment that the property claimed in the above measurement rule is in fact valid. Thus there is no circularity in the foundations.

Most descriptions in physics are either very coarse-grained or of very small objects. The detailed state can be found with a good approximation only for fairly stationary sources of very small objects, that prepare sufficiently many of these in essentially the same quantum state. In this case, one can calculate sufficiently many expectations by averaging over the results of multiple experiments on these objects, and use these to determine the state via some version of quantum state tomography. Except in very simple situations, the result is a mixed state described by a density operator. Thus in the thermal interpretation, any realistic state is fully described by a density operator, not by a state vector as in conventional interpretations.
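For a single qubit, quantum state tomography takes a particularly simple form: the density operator is fully determined by the three Pauli expectations. A minimal sketch, with hypothetical measured values:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(ex, ey, ez):
    """Qubit tomography: rho = (I + <sx> sx + <sy> sy + <sz> sz) / 2."""
    return 0.5 * (I2 + ex * sx + ey * sy + ez * sz)

# Hypothetical expectations, obtained by averaging measurement results
# over many objects prepared in essentially the same state
rho = reconstruct(0.0, 0.0, 0.8)

# The reconstructed state reproduces the measured expectations ...
assert abs(np.trace(rho @ sz).real - 0.8) < 1e-12
# ... and is a mixed state: purity tr(rho^2) < 1
print(np.trace(rho @ rho).real)  # 0.82
```

The purity below 1 illustrates the point of the paragraph above: except in very simple situations, the reconstructed state is mixed and must be described by a density operator, not a state vector.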

For macroscopic systems, one must necessarily use a coarse-grained description in terms of a limited number of parameters. In the quantum field theory of macroscopic objects, the averaging is always done inside the definition of the macroscopic operator to be measured; this is sufficient to guarantee very small uncertainties of macroscopic observables. Thus one does not need an additional averaging in terms of multiple experiments on similarly prepared copies of the system. This is the deeper reason why quantum field theory can make accurate predictions for single macroscopic systems.

Everything deduced in quantum field theory about macroscopic properties follows, and one has a completely self-consistent setting. The transition to classicality is automatic and needs no deep investigations - the classical situation is simply the limit of a huge number of particles. On the microscopic level, by contrast, uncertainties of single events are large, so that state determination must be based on the statistics of multiple events with a similar preparation. (In this case, one can derive Born's traditional rule for perfect binary measurements in pure states; see Chapter 10.5 in my online book Classical and Quantum Mechanics via Lie algebras.)

Although only a coarse-grained description of a macroscopic system can be explicitly known, this doesn't mean that the detailed state doesn't exist. The existence of an exact state for large objects has always been a metaphysical but unquestioned assumption. Even in classical mechanics, it is impossible to know a highly accurate state of a many-particle system (not even of the solar system, with sun, planets, planetoids, and comets treated as rigid bodies). But its existence is never questioned.

In quantum optics experiments, both sources and beams are extended macroscopic objects describable by quantum field theory and statistical mechanics, and hence have (according to the thermal interpretation) associated nearly classical observables - densities, intensities, correlation functions - computable from quantum mechanics in terms of expectations.

Both the sources and the beams have properties independent of measurement; they are objects described by quantum field theory. For example, the output of a laser (before or after parametric down conversion or any other optical processing) is a laser beam, or an arrangement of highly correlated beams. These are in a well-defined state that can be probed by experiment. If this is done, they are always found to have the properties ascribed to them by the preparation procedure. One just needs sufficient time to collect the information needed for a quantum state tomography. The complete state is measurable in this way, reproducibly. Neither the state of the laser nor that of the beam is changed by a measurement at the end of the beam. Thus these properties exist independent of any measurement - just as the moon exists even when nobody is looking at it!

It is a historical accident that one continues to use the name particle in the many microscopic situations where it is grossly inappropriate to think of particles as tiny bullets moving through space. If one restricts the use of the particle concept to situations where it is appropriate, or if one does not think of particles as ''objects'', then in both cases all mystery is gone, and the foundations become fully rational and intelligible.

Unlike in conventional single-world interpretations of quantum mechanics, nothing in the thermal interpretation depends on the existence of measurement devices (which were not available in the very far past of the universe). Thus the thermal interpretation allows one to consider the single universe we live in as a quantum system, the smallest closed physical system containing us, hence strictly speaking the only system to which unitary quantum mechanics applies rigorously.

There is no longer a physical reason to question the existence of the state of the whole universe, even though all its details may be unknown for ever. Measuring all observables or finding its exact state is already out of the question for a small macroscopic quantum system such as a piece of metal. Thus, as for a metal, one must be content with describing the state of the universe approximately.

What matters for a successful physics of the universe is only that we can model (and then predict) the observables that are accessible to measurement. Since all quantities of interest in a study of the universe as a whole are macroscopic, they have a tiny uncertainty and are well-determined even by an approximate state. For example, one could compute from a proposed model of the universe the (expectation) values of the electromagnetic field at points where we can measure it, and (if the computations could be done) should get excellent agreement with the measurements.

Since every observable of a subsystem is also an observable of the whole system, the state of the universe must be compatible with everything we have ever empirically observed in the universe! This is a very stringent test of adequacy - the state of the universe is highly constrained, since knowing this state amounts to having represented all physics accessible to us by the study of its subsystems. Cosmology studies this state in a very coarse (and partly conjectured) approximation, where even details at the level of galaxies are averaged over. Only for observables localized in the solar system do we have much more detailed knowledge.

The official description of the thermal interpretation is given in my 2019 papers

  • Foundations of quantum physics I. A critique of the tradition,
  • Foundations of quantum physics II. The thermal interpretation,
  • Foundations of quantum physics III. Measurement.

Earlier (pre-2019) discussions of the thermal interpretation can be found in topics from my theoretical physics FAQ.

I have also written some technical documents about the thermal interpretation; see also Chapter 10 of my online book Classical and Quantum Mechanics via Lie algebras.

Themes related to the thermal interpretation can also be found in parts of several discussions on PhysicsForums.

In a preliminary German version, the thermal interpretation dates back to Spring 2004; see the beginnings of the thermal interpretation.

Older, partially outdated but in many respects more detailed discussions:

Happy Reading!
