-----------------------------------------------
Does decoherence solve the measurement problem?
-----------------------------------------------
Many physicists nowadays think that decoherence provides a fully
satisfying answer to the measurement problem. But this is an illusion.
Decoherence is the (experimentally verified) decay of
off-diagonal contributions in a density matrix (written in a
preferred basis), when information dissipates into unobservable
degrees of freedom in the environment of a system.
In particular, decoherence reduces a pure state to a _mixture_
of eigenstates. This is enough to induce classical features
in many large quantum systems, characterized by a lack of
interference terms.
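As a toy illustration (a hypothetical two-level example, not any specific physical system), one can watch the off-diagonal entries of a density matrix disappear and the purity tr(rho^2) drop from 1 (pure state) toward 1/2 (maximal mixture):

```python
import numpy as np

# Pure state (|up> + |down>)/sqrt(2) and its density matrix rho = psi psi^*
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(rho)                       # off-diagonal entries 0.5: interference present

# Decoherence in the preferred basis: the off-diagonal entries decay;
# here d is a damping factor standing in for exp(-t/t_dec) at late times
d = 1e-6
rho_dec = rho * np.array([[1.0, d], [d, 1.0]])

# Purity tr(rho^2): 1 for a pure state, 1/2 for the maximal 2x2 mixture
print(np.trace(rho @ rho).real)          # 1.0
print(np.trace(rho_dec @ rho_dec).real)  # ~0.5: effectively a mixture
```

The damped matrix is, for all practical purposes, the mixture diag(1/2, 1/2): no measurement on the system alone can detect the residual interference terms.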
Thus decoherence is very valuable in understanding the classical
features of a world that is fundamentally quantum.
On the other hand, the 'collapse of the wave function'
selects _one_ of the eigenstates as the observed one.
This 'problem of definite outcomes' is part of the measurement
problem. It is still a riddle, and not explained by decoherence.
More precisely, decoherence explains the dynamical decay of
off-diagonal entries in a density matrix rho, thus reducing a
nondiagonal density matrix (e.g., one corresponding to a pure state
psi via rho = psi psi^*) to a diagonal one, usually one with all
diagonal elements occupied. In particular, this turns pure states into
a mixture.
On the other hand, the collapse turns a pure state psi into another
pure state, obtained by projecting psi to the eigenspace corresponding
to a measurement result. In terms of density matrices, and assuming
that the eigenspace is 1-dimensional, a collapse turns a density
matrix rho into a diagonal matrix with a single diagonal entry.
This is not explained at all by decoherence.
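In matrix terms the difference can be made explicit with a hypothetical qubit example: decoherence delivers the diagonal mixture diag(1/2, 1/2), while collapse delivers a projection with a single diagonal entry:

```python
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # pure state (|up> + |down>)/sqrt(2)
rho = np.outer(psi, psi.conj())           # rho = psi psi^*

# Decoherence: remove the off-diagonal entries;
# ALL diagonal entries remain occupied
rho_decohered = np.diag(np.diag(rho))     # diag(0.5, 0.5)

# Collapse on the result 'up' (1-dimensional eigenspace):
# project onto the eigenspace and renormalize
P_up = np.diag([1.0, 0.0])
rho_collapsed = P_up @ rho @ P_up / np.trace(P_up @ rho @ P_up)

print(rho_decohered)   # diag(0.5, 0.5): the mixture decoherence accounts for
print(rho_collapsed)   # diag(1.0, 0.0): a SINGLE diagonal entry --
                       # the step decoherence leaves open
```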
A thorough discussion is given in the excellent survey article
M. Schlosshauer,
Decoherence, the measurement problem, and interpretations of quantum
mechanics,
Rev. Mod. Phys. 76 (2005), 1267-1305.
quant-ph/0312059
More recently, Schlosshauer published a book on decoherence; see his
home page http://www.nbi.dk/~schlossh/, where one can also find
four reviews of the content. All four recommend the book highly;
two are wholly favorable. The review by Zeilinger (in Nature)
explains why the arguments given there against the Copenhagen
interpretation are not convincing, and that by Landsman (in Stud. Hist.
Phil. Mod. Phys.) emphasizes conceptual shortcomings, and refers
(among others) to
http://plato.stanford.edu/archives/win2004/entries/qm-decoherence
for a more balanced discussion of the merits of decoherence.
The champions of the decoherence approach are (not always
but at least sometimes) quite careful to delineate what decoherence
can do and what it leaves open. For example, Erich Joos, coauthor
of the nice book 'Decoherence and the Appearance of a Classical World
in Quantum Theory',
http://www.iworld.de/~ej/book.html
explicitly states in the last paragraph of p.3 in quant-ph/9908008
that (and why) decoherence does not resolve the measurement problem.
The prize-winning book by J. Bub, Interpreting the Quantum World,
http://de.wikipedia.org/wiki/Lakatos_Award
also emphasizes this point.
If the big crowd holds a cruder point of view, this signifies
nothing but a lack of familiarity with the details.
If the quantum mechanical state is taken only as a description
of a large ensemble, as in the Statistical Interpretation
(see next question), there is no problem.
But the riddle is present if one insists that the quantum mechanical
state describes a single quantum system (as seems to be required for
today's experiments on single atoms in an ion trap), which makes the
collapse a necessity.
In spite of all results about decoherence,
Wigner's mathematically rigorous analysis of the incompatibility
of unrestricted unitarity, the unrestricted superposition principle
and collapse, Chapter II.2 in:
J.A. Wheeler and W. H. Zurek (eds.),
Quantum theory and measurement.
Princeton Univ. Press, Princeton 1983,
in particular pp. 285-288, is unassailable.
In a nutshell, Wigner's argument goes as follows:
If a measurement of 'up' turns the complete system
(including the measured system, the detector, and the environment)
into the state
psi_1 = |up> tensor |up-detected> tensor |env_1>
and a measurement of 'down' turns it into
psi_2 = |down> tensor |down-detected> tensor |env_2>
and the projections of these states are stable under repetition
of the measurement (but possibly with different |env> parts)
then, by linearity, measuring the state
|left> = (|up> + |down>)/sqrt(2)
necessarily places the whole system into the superposition
(psi_1 + psi_2)/sqrt(2)
of such states and _not_ (as would be needed to account for the
experimental observations) into a state of the form psi_1 or psi_2,
depending on the result of the measurement.
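The linearity step can be checked numerically in a toy model (an assumed setup, not Wigner's own): system, detector and environment are each a single qubit, and the 'measurement' is a unitary U that copies the system state into detector and environment (two CNOT gates, standing in for an arbitrary unitary measurement interaction):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*factors):
    """Tensor product of several state vectors."""
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

def cnot(control, target, n=3):
    """Permutation unitary flipping qubit `target` when qubit `control` is 1."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1.0
    return U

U = cnot(0, 2) @ cnot(0, 1)          # copy system into detector, then env

ready = up                           # detector and environment start in |0>
psi_1 = U @ kron(up, ready, ready)   # 'up' measured:   |up, up-detected, env_1>
psi_2 = U @ kron(down, ready, ready) # 'down' measured: |down, down-detected, env_2>

left = (up + down) / np.sqrt(2)
out = U @ kron(left, ready, ready)   # linearity of U does the rest
print(np.allclose(out, (psi_1 + psi_2) / np.sqrt(2)))   # True: a superposition,
                                                        # not psi_1 or psi_2 alone
```

No choice of unitary U can escape this conclusion, since the superposition of inputs is always mapped to the superposition of outputs.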
Wigner's reasoning implies that a
resolution of the measurement problem requires giving up one of
the two holy tenets of traditional quantum mechanics: unrestricted
unitarity or the unrestricted superposition principle.
Von Neumann and with him most textbook authors opted for giving up
unitarity by introducing collapse as a process independent of the
Schroedinger equation. This is no longer adequate since we now know
that there is no dividing line between classical and quantum, so
that a measurement can no longer be idealized in the traditional
fashion. But then there is no longer a clear point at which the
collapse happens, and more specific solutions are called for.
My paper
A. Neumaier,
Collapse challenge for interpretations of quantum mechanics
quant-ph/0505172
(see also http://www.mat.univie.ac.at/~neum/collapse.html)
contains a collapse challenge for interpretations of quantum mechanics
that brings to a focus the requirements for a good solution of the
measurement problem.
In my opinion, the collapse is no fundamental principle but
the result of _approximating_ the entangled dynamics of a system
with its environment by a Markovian dynamics for the system itself,
resulting in a dissipative master equation of Lindblad type.
The latter has a built-in collapse. The validity of the Markov
approximation is an _additional_ assumption beyond decoherence,
and it is this assumption that is responsible for the collapse.
Its nature is similar to that of the so-called Stosszahlansatz
in the derivation of the
Boltzmann equation.
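A minimal numerical sketch (an assumed pure-dephasing model with H = 0 and Lindblad operator L = sigma_z, chosen for simplicity) shows how Lindblad dynamics drives a pure state to the diagonal mixture:

```python
import numpy as np

# Lindblad master equation for a qubit with pure dephasing:
#   d rho/dt = -i[H, rho] + gamma (L rho L^+ - (1/2){L^+ L, rho})
# with H = 0 and L = sigma_z, so L^+ L = I and the dissipator
# reduces to gamma (sigma_z rho sigma_z - rho).  Euler integration.
sz = np.diag([1.0, -1.0])
gamma, dt, steps = 1.0, 1e-4, 20000        # integrate up to t = 2

psi = np.array([1.0, 1.0]) / np.sqrt(2)    # pure state (|up> + |down>)/sqrt(2)
rho = np.outer(psi, psi).astype(complex)

for _ in range(steps):
    D = sz @ rho @ sz - rho                # dissipator for L = sigma_z
    rho = rho + dt * gamma * D

# Off-diagonals decay as exp(-2 gamma t); diagonals stay at 1/2
print(abs(rho[0, 1]))                      # small: ~0.5 * exp(-4)
print(rho[0, 0].real, rho[1, 1].real)      # 0.5  0.5
```

The diagonal entries are exactly conserved by this dissipator, while the off-diagonal entries decay irreversibly; the irreversibility enters precisely through the Markov approximation, not through the underlying unitary dynamics.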
Quantum optics and hence all high quality experiments for
the foundations of quantum mechanics are unthinkable without
the Markov approximation.