\documentclass[12pt]{article}
\usepackage[scaled=1.25]{helvet} %use \sffamily Then text
\usepackage{courier} %use \ttfamily Then text
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphics}
\usepackage{endnotes}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{bm}
\usepackage{authordate1-4}
\renewcommand{\baselinestretch}{2}
\setlength{\oddsidemargin}{0.5in} %1 inch margin is automatic
\setlength{\evensidemargin}{0.5in}
\setlength{\topmargin}{-0.5in}
\setlength{\textheight}{8.9in}
\setlength{\textwidth}{5.5in}
\def\btt#1{{tt$\backslash$#1}}
\begin{document}
\def\jwline#1#2#3#4{%
\put(#1,#2){\special{em:moveto}}%
\put(#3,#4){\special{em:lineto}}}
\def\newpic#1{}
\begin{center}{\LARGE\bfseries The Consistent Histories Interpretation of
Quantum Mechanics}\end{center} \normalsize
\normalfont
\begin{abstract}
The consistent histories (CH) reformulation of quantum
mechanics (QM) was developed by Robert Griffiths, given a formal logical
systematization by Roland Omn\`{e}s, and, under the label `decoherent
histories', was independently developed by Murray Gell-Mann and James Hartle
and extended to quantum cosmology. Criticisms of CH involve issues of meaning,
truth, objectivity, and coherence, a mixture of philosophy and physics. We
will briefly consider the original formulation of CH and some basic
objections. The replies to these objections, like the objections themselves,
involve a mixture of physics and philosophy. These replies support an
evaluation of the CH formulation as a replacement for the measurement, or
orthodox, interpretation.
\end{abstract}
\section{The Consistent Histories Formulation}
The Griffiths
formulation of Consistent Histories broke with the orthodox interpretation by
treating closed systems, by not assigning measurement
a foundational role, and by insisting that quantum mechanics supply an
account
of all basic processes including measurements\endnote{This is based on
Griffiths 1984, 1996, 1997, 2002a, 2002b; on Griffiths and Hartle 1997 and on
Griffiths's helpful comments on earlier drafts of this material.}.
There are three basic
features. First, a closed system is specified at particular times by a
series of events. An event is the specification of the properties of a system
through a
projection operator on the Hilbert subspace representing the property.
Second, the time development is stochastic, involving many histories.
Though Griffiths relied on Schr\"{o}dinger dynamics, he
treated it
as going from event to event, rather than as a foundation for unitary
evolution
of a system
prior to
measurement and collapse. The events could be stages in a uniform evolution,
measurements, interaction with the environment, or a virtual interaction. At
this stage
there is no distinction between real and virtual processes. A history is a
time-ordered sequence of events. It is represented by projectors on a tensor
product
of the
Hilbert spaces of the events. Third, a consistency condition is imposed on
histories, or families of histories. Only those that may be assigned
probabilities are given a physical interpretation.
The developers of the CH interpretation are insistent
on presenting this as a replacement for `the measurement interpretation'. Why
does this need
replacement? For Griffiths (2002a, Preface) and
Omn\`{e}s (1994,
chap. 2, 1999, p. 80) the
basic reason is that a measurement-based interpretation does not accord with
what a fundamental theory should be. It subordinates the mathematical
formalism
to the language of experimental physics. A fundamental theory should supply a
basis for interpreting experiments. In evaluating this we will stress the
idea
that formulations and interpretations involve different criteria of
evaluation.
A comparison with classical physics clarifies the status accorded quantum
histories. In classical physics, stochastic dynamics is generally introduced
because of ignorance of precise values. Consider a fair coin flipped $n$
times. The $2^n$ possible outcomes represent a sample space with histories
of the form HHTHT\ldots. For a closer parallel, consider classical statistical
mechanics, where the state of a system is
represented by a point in phase space and the evolution of the system,
or its history, by the trajectory of this point. The phase space may be
coarse-grained by dividing it into a set of cells of arbitrary size that are
mutually
exclusive and jointly exhaustive. A cell is assigned the value $1$
if the point representing the system is in the
cell, and the value $0$ otherwise. We introduce a variable, $B_i$, for
these $0$ and $1$ values, where the subscript, $i$, indexes the cells. These
variables satisfy
\begin{displaymath}
\sum_i{B_i}\,=\,1 \hspace{1.3in}
B_iB_j\,=\, \delta_{ij} B_j
\end{displaymath}
This assignment of $0$ and $1$ values supports a Boolean algebra. To
represent a
history, construct a Cartesian product of copies of the phase space and let
them
represent
the system at times $t_0, t_1, \ldots, t_n$. Then the product of the
variables, $B_i$,
for these time slices represents a history. The relation to classical
probabilities can be
given an intuitive expression. The tensor product of the successive phase
spaces
has a
volume with an a priori probability of $1$. Each history is like a hole dug
by a phase-space worm
through this
volume. Its a priori probability is the ratio of the volume of the worm hole
to
the
total volume. The probability of two histories is additive provided the worm
holes don't
overlap. In the limit the total volume is the sum of a set of worm holes that
are
mutually exclusive and jointly exhaustive.
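The worm-hole picture of classical histories can be made concrete with a minimal sketch of the coin example above (the value n = 3 and all names are illustrative choices of mine):

```python
from itertools import product

# Illustrative sketch (names and n = 3 are my own choices): the classical
# sample space of histories for a fair coin flipped n times, as described
# above.  Each history's a priori probability is the ratio of its "volume"
# (one outcome) to the total volume (2**n outcomes).
n = 3
histories = ["".join(h) for h in product("HT", repeat=n)]
prob = {h: 1 / 2**n for h in histories}

# Probabilities are additive over disjoint ("non-overlapping") sets of
# histories ...
heads_first = [h for h in histories if h[0] == "H"]
p_heads_first = sum(prob[h] for h in heads_first)
print(len(histories))      # 8
print(p_heads_first)       # 0.5
# ... and the mutually exclusive, jointly exhaustive histories sum to 1.
print(sum(prob.values()))  # 1.0
```

The sum over all histories recovering 1 is the classical analogue of the quantum consistency requirement discussed below.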
Quantum mechanics uses Hilbert space, rather than phase space, and represents
properties
by subspaces. The correlate to dividing phase space into cells is a
decomposition of the
identity, dividing Hilbert space into mutually exclusive and jointly
exhaustive
subspaces
whose projectors satisfy:
\begin{equation}
\sum_i{B_i}\,=\,1 \hspace{0.8in} B_i^\dagger \,=\,B_i \hspace{0.8in}
B_iB_j\,=\, \delta_{ij} B_j \label{proj}
\end{equation}
Each history generates a
subspace wormhole through the tensor product of Hilbert
spaces.
The a priori probability of a particular history is the ratio of the volume
of
its
wormhole to the total volume. A
history might have incompatible quantities at different stages, e.g.
$\sigma_x$ at
$t_1$ and $\sigma_y$ at $t_2$, but has only projectors for compatible
properties
at each
time slice. Corresponding to the intuitive idea of a wormhole volume the
\textit{weight} for a history is
\begin {equation}
K(Y) \,=\, E_1T(t_1,t_2)E_2T(t_2,t_3) \cdots T(t_{n-1},t_n) E_n,
\label{Griffiths}
\end{equation}
where $E$ stands for an event or its orthogonal projection operator, and
$T(t_1,t_2)$ is the operator for the evolution of the system from $t_1$ to
$t_2$. Eq. (\ref{Griffiths}) can be simplified by using the Heisenberg
projection operators
\begin {equation}
\hat{E}_j\,=\,T(t_r,t_j)E_jT(t_j,t_r),
\end{equation}
where $t_r$ is a reference time independent of the value of $t_j$,
leading to
\begin{equation}
\hat{K}(Y)\,=\,\hat{E}_1\hat{E}_2 \cdots \hat{E}_n.
\end{equation}
Then the weight of a history may be defined in terms of an inner product
\begin{equation}
W(Y)\,=\,\langle K(Y),K(Y)\rangle \;=\; \langle \hat{K}(Y), \hat{K}(Y)\rangle.
\label{inner}
\end{equation}
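A toy numerical sketch of the chain operators and weights just defined, for a single spin-1/2 system; the trivial dynamics T = I and all matrix choices are my own illustrative assumptions, not Griffiths's example:

```python
import numpy as np

# Toy sketch (my own choices): events as projectors for a spin-1/2 system,
# with trivial time evolution T = I between events.
zp = np.array([[1, 0], [0, 0]], dtype=complex)          # |z+><z+|
zm = np.array([[0, 0], [0, 1]], dtype=complex)          # |z-><z-|
xp = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)    # |x+><x+|
T = np.eye(2, dtype=complex)                            # trivial dynamics

def chain(events):
    """Chain operator K(Y) = E_1 T E_2 T ... E_n."""
    K = np.eye(2, dtype=complex)
    for E in events:
        K = K @ E @ T
    return K

def weight(events):
    """W(Y) = <K(Y), K(Y)>, the Hilbert-Schmidt inner product Tr(K^dag K)."""
    K = chain(events)
    return np.trace(K.conj().T @ K).real

def overlap(ev1, ev2):
    """<K(Y), K(Y')> = Tr(K(Y)^dag K(Y'))."""
    return np.trace(chain(ev1).conj().T @ chain(ev2))

# History "z+ then x+": weight |<z+|x+>|^2 = 1/2; orthogonal events give 0.
print(weight([zp, xp]))                  # 0.5
print(weight([zp, zm]))                  # 0.0
# Histories beginning with orthogonal events have orthogonal chain
# operators, so their weights are additive.
print(abs(overlap([zp, xp], [zm, xp])))  # 0.0
```

The vanishing overlap in the last line is the consistency condition in miniature: only when such overlaps are negligible can weights be read as relative probabilities.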
The significance of this equation, defined on the space of operators, may
be seen by the phase-space comparison used earlier. Classical weights used to
assign probabilities are additive functions on the sample space. If $E$ and
$F$
are two disjoint collections of phase-space histories, then
$W(E \cup F)\;=\; W(E) + W(F)$. Quantum weights should also satisfy this
requirement, since
they
yield classical probabilities and must be non-negative. As Griffiths (2002a,
121-124) shows, Eq. (\ref{inner}) achieves this. Quantum histories behave
like
classical histories to the degree that mutual interference
is negligible. This is the key idea behind the varying formulations of a
consistency
condition. If two histories are sufficiently orthogonal, $\langle K(Y),
K(Y')\rangle\,\approx \,0$, then their weights are additive and can be
interpreted as relative
probabilities. This idea of mutual compatibility may be extended to a
\textit{family} of
histories. A family is a sample space of compatible histories. Such a family
is represented by a consistent Boolean algebra of
history
projectors. This may be extended from a family of projectors, $\mathfrak{F}$,
to a refinement, $\mathfrak{G}$, that contains every projector in
$\mathfrak{F}$.
Consistency considerations lead to the basic unit for interpretative
consistency, a \textit{framework},
a single Boolean algebra of commuting projectors based upon a particular
decomposition of the identity\endnote{This idea of a distinctive form of
quantum reasoning was developed in Omn\`{e}s, R.: (1994), Chaps. 9, 12, and
in Griffiths 1999, and 2002a chap. 10.}. A framework supplies the basis for
quantum reasoning
in CH. Almost all the objections to the CH interpretation are countered by
showing they violate the single framework rule, or by a straightforward
extension, the single family rule. Quantum claims that are meaningful in a
particular framework may be meaningless in a different framework. This
notion, accordingly, requires critical
analysis.
There are two aspects to consider: the relation between a framework and
quantum
reasoning, and whether the framework rule is an \textit{ad hoc} imposition.
The
first point is developed in different ways by Omn\`{e}s and Griffiths.
Omn\`{e}s
develops what he calls consistent (or sensible) logics. In the standard
philosophical application of logic to theories, one first develops a logic
system, or syntax, and then applies it. The content to which it is applied
does
not alter the logic. Omn\`{e}s (1994, sect. 5.2) uses `logic' for an
interpreted set of propositions. This terminology does not imply a non-standard logic.
Griffiths focuses on frameworks. He develops the logic of frameworks by
considering simple examples and using them as a springboard to general rules.
The distinctive features of this reasoning confined to a framework can be
seen
by contrast with more
familiar
reasoning. Consider a system that may be characterized by two or more
complete
sets of
compatible properties. The Hilbert space representing the system may be
decomposed into
different sets of subspaces corresponding to the different sets of compatible
properties. To simplify the issue, take $\sigma_x^+$ and $\sigma_z^+$ as the
properties. Can one attach a significance or assign a probability to
`$\sigma_x^+$ AND $\sigma_z^+$'? In CH propositions are represented by
projectors onto Hilbert subspaces. The representation of $\sigma_x$ requires a
two-dimensional subspace with states $\mid X^+ \rangle$ and $\mid X^-
\rangle$,
projectors $X^\pm \;=\; \mid X^\pm \rangle \langle X^\pm \mid$, and the
identity, $I \;=\; X^+ \,+\, X^-$. One cannot represent `$\sigma_x^+$ AND
$\sigma_z^+$' in any of the allowed subspaces. Accordingly it is dismissed as
`meaningless'.
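This dismissal can be checked directly; a minimal sketch, with matrices in the standard $S_z$ basis (my own illustrative choice):

```python
import numpy as np

# Minimal check (my own illustration): the product of the noncommuting
# projectors for sigma_z^+ and sigma_x^+ is neither Hermitian nor
# idempotent, so it is not a projector -- there is no subspace, hence no
# CH proposition, representing "sigma_x^+ AND sigma_z^+".
Zp = np.array([[1, 0], [0, 0]], dtype=complex)        # |Z+><Z+|
Xp = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # |X+><X+|

P = Xp @ Zp
print(np.allclose(P, P.conj().T))  # False: not Hermitian
print(np.allclose(P @ P, P))       # False: not idempotent
# Each projector alone, by contrast, is a legitimate proposition.
print(np.allclose(Zp @ Zp, Zp))    # True
```
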
The distinctive features and associated difficulties of this framework
reasoning
are illustrated by Griffiths's reworking of Wheeler's (1983) delayed choice
experiment.
Both Wheeler and Griffiths
(1998) consider a highly idealized Mach-Zehnder interferometer.
\setlength{\unitlength}{3pt}
\begin{picture}(100,70)
\jwline{30}{30}{30}{60}
\jwline{30}{60}{60}{60}
\jwline{60}{60}{60}{30}
\jwline{30}{30}{60}{30}
\jwline{17}{60}{30}{60}
\jwline{60}{30}{70}{30}
\jwline{60}{30}{60}{20}
\special{em:linewidth 3pt}
\jwline{25}{65}{35}{55}
\jwline{55}{65}{65}{55}
\jwline{55}{35}{65}{25}
\jwline{25}{35}{35}{25}
\put(10,60){\makebox(0,0)[c]{Laser}}
\special{em:linewidth 2pt}
\jwline{2}{55}{17}{55}
\jwline{2}{55}{2}{65}
\jwline{2}{65}{17}{65}
\jwline{17}{65}{17}{55}
\special{em:linewidth 1pt}
\bezier{250}(70,27)(68,30)(70,33)
\bezier{250}(70,27)(72,30)(70,33)
\jwline{70}{33}{76}{33}
\jwline{70}{27}{76}{27}
\bezier{250}(76,27)(78,30)(76,33)
\bezier{250}(57,20)(60,22)(63,20)
\bezier{250}(57,20)(60,18)(63,20)
\jwline{57}{20}{57}{14}
\jwline{63}{20}{63}{14}
\bezier{250}(57,14)(60,12)(63,14)
\bezier{250}(45,27)(43,30)(45,33)
\bezier{250}(45,27)(47,30)(45,33)
\jwline{45}{33}{51}{33}
\jwline{45}{27}{51}{27}
\bezier{250}(51,27)(53,30)(51,33)
\bezier{250}(57,42)(60,40)(63,42)
\bezier{250}(57,42)(60,44)(63,42)
\jwline{57}{42}{57}{36}
\jwline{63}{42}{63}{36}
\bezier{250}(57,36)(60,34)(63,36)
\put(30,66){\makebox(0,0)[c]{s}}
\put(45,62){\makebox(0,0)[c]{d}}
\put(23,62){\makebox(0,0)[c]{a}}
\put(28,45){\makebox(0,0)[c]{c}}
\put(62,65){\makebox(0,0)[c]{$M_1$}}
\put(26,26){\makebox(0,0)[c]{$M_2$}}
\put(74,24){\makebox(0,0)[c]{F}}
\put(64,32){\makebox(0,0)[c]{f}}
\put(62,24){\makebox(0,0)[c]{e}}
\put(49,24){\makebox(0,0)[c]{C}}
\put(66,38){\makebox(0,0)[c]{D}}
\put(67,16){\makebox(0,0)[c]{E}}
\put(58,27){\makebox(0,0)[c]{L}}
\put(50,2){\makebox(0,0)[c]{Figure 1: A Mach-Zehnder Interferometer}}
\end{picture}
The classical description in terms of the interference of light waves may be
extended to an idealized situation where the intensity of the laser is
reduced
so low that only one photon goes through at a time. Here $S$ and $L$ are
beam splitters, $M_1$ and $M_2$ are perfect mirrors, and $C$, $D$, $E$, and $F$
are
detectors. If $D$ registers, one infers path $d$; if $C$ registers, then the
path is $c$. If $C$ and $D$ are removed, then the detectors $E$ and $F$ can
be
used to determine whether the photon is in a superposition of states.
Wheeler's
delayed choice was based on the idealization that detectors $C$ and $D$ could
be
removed after the photon had passed through $S$. It is now possible to
implement
such delayed choice experiments, though not in the simplistic fashion
depicted.
To see the resulting paradox assume that detectors $C$ and $D$ are removed
and
that the first beam splitter leads to the superposition, which can be
symbolized
in abbreviated notation as
\begin{equation}
\mid a \rangle \mapsto \mid s \rangle \;=\; (\mid c\rangle \,+\, \mid d
\rangle)/\sqrt{2},
\label{one}
\end{equation}
where $\mid a \rangle$, $\mid c\rangle$, and $\mid d \rangle$ are
wave
packets at the entrance and in the indicated arms. Assume that the second
beam
splitter $L$ leads to a unitary transformation
\begin{equation}
\mid c \rangle \mapsto \mid u \rangle \;=\; (\mid e \rangle \,+\, \mid f
\rangle)/\sqrt{2},\;\;\; \mid d \rangle \mapsto \mid v \rangle \;=\;
(-\mid e \rangle \,+\, \mid f \rangle)/\sqrt{2},
\end{equation}
with the net result that
\begin{equation}
\mid a \rangle \mapsto \mid s \rangle \mapsto \mid f \rangle.
\label{two}
\end{equation}
Equations (\ref{one}) and (\ref{two}) bring out the paradox. If the detectors
$C$ and $D$ were in place, then the photon would have been detected by either
$C$ or $D$. If it is detected by $C$, then it must have been in the $c$ arm. If
the detectors are removed and the $F$ detector registers, then it is
reasonable
to assume that the photon passed through the interferometer in the
superposition
of states given by eq. (\ref{one}). The detectors were removed while the
photon
was already in the interferometer. It may seem reasonable to ask what state
the
photon was in before the detectors were removed. Here, however, intuition is
a misleading guide to the proper formulation of questions in a quantum
context.
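The unitary bookkeeping behind this paradox can be verified numerically. The sketch below (basis ordering and matrix forms are my own illustrative choices) tracks the amplitudes through both beam splitters:

```python
import numpy as np

# Sketch of the two beam-splitter maps (my own matrix representation):
# amplitudes in the basis {|c>, |d>} inside the interferometer and
# {|e>, |f>} after the second splitter L.
s2 = 1 / np.sqrt(2)
S = np.array([s2, s2])      # |a> -> (|c> + |d>)/sqrt(2)
L = np.array([[s2, -s2],    # |c> -> (|e> + |f>)/sqrt(2)  (first column)
              [s2,  s2]])   # |d> -> (-|e> + |f>)/sqrt(2) (second column)

out = L @ S                 # amplitudes (e, f) leaving the interferometer
probs = np.abs(out) ** 2
print(np.round(probs, 12))  # [0. 1.]: only detector F is ever triggered
```

The amplitude for detector $E$ cancels exactly, which is the numerical content of the net result $\mid a \rangle \mapsto \mid f \rangle$.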
Griffiths treats this paradox by considering different families of possible
histories. Using $C$ and $D$ for the ready states of the detectors, considered
as quantum systems, and $C^*$ and $D^*$ for the triggered states, one
consistent family for the combined photon-detector system is
\begin{equation}
{{\vert a \rangle \vert CD \rangle} \longrightarrow {
{\vert c \rangle \vert CD \rangle \longrightarrow \vert C^*D\rangle}\choose
{\vert d \rangle \vert CD \rangle \longrightarrow \vert CD^*\rangle}}}
\hspace{2in}
\label{C}
\end{equation}
Here $\vert a \rangle \vert CD \rangle$ represents a state in the tensor
product of the Hilbert spaces of the photon and the detectors. Eq. (\ref{C})
represents a
situation in which the photon enters the interferometer and then proceeds
either
along the $c$ arm, triggering $C^*$ or along the $d$ arm, triggering $D^*$.
These paths and outcomes are mutually exclusive.
For the superposition alternative, treated in eqs. (\ref{one})--(\ref{two}),
there is a different consistent family of histories,
\begin{equation}
{{\vert a \rangle \vert EF \rangle \longrightarrow
\vert s \rangle \vert EF \rangle} \longrightarrow {{\vert e \rangle \vert
EF
\rangle \longrightarrow \vert E^*F \rangle}\choose
{\vert f \rangle \vert EF \rangle \longrightarrow \vert EF^*\rangle}}}
\hspace{0.5in}
\label{D}
\end{equation}
Eq. (\ref{D}) represents superposition inside the interferometer and
exclusive
alternatives after the photon leaves the interferometer. In accord with eq.
(\ref{two}) the upper history in eq. (\ref{D}) has a probability of 0 and
$F^*$
is triggered.
Suppose that we replace the situation represented in eq. (\ref{D}) by one in
which the photon is in either the $c$ or $d$ arms. There is no superposition
within the interferometer, but there is when the photon leaves the
interferometer. This can be represented by another consistent family of
histories,
\begin{equation}
{{\vert a \rangle \vert EF \rangle} \longrightarrow {{\vert c \rangle \vert
EF
\rangle \longrightarrow \vert u \rangle \vert EF \rangle \longrightarrow
\vert U
\rangle}\choose
{\vert d \rangle \vert EF \rangle \longrightarrow \vert v \rangle \vert
EF\rangle \longrightarrow \vert V \rangle}}},
\hspace{0.5in}
\label{U}
\end{equation}
where
\begin{displaymath}
\vert U \rangle \;=\; (\vert E^*F \rangle \,+\, \vert EF^* \rangle )/\sqrt{2},
\end{displaymath}
\begin{displaymath}
\vert V \rangle \;=\; (-\vert E^*F \rangle \,+\, \vert EF^* \rangle)/\sqrt{2}.
\end{displaymath}
Both $\vert U \rangle$ and $\vert V \rangle$ are Macroscopic Quantum States
(MQS), or Schr\"{o}dinger cat states. The formalism
allows for such states. However, they are not observed and do not represent
measurement outcomes. This delayed choice example represents the way
traditional
quantum paradoxes are dissolved in CH. Reasoning is confined to a framework.
Truth is framework-relative. The framework is selected by the questions the
physicist imposes on nature. If a measurement has an outcome, then one must
choose a framework that includes the outcome. Within a particular framework,
there is no
contradiction. One is dealing with consistent histories. The traditional
paradoxes all involve combining elements drawn from incompatible histories.
`Measurement' is a catchall term for a grab bag of problems. For present
purposes we consider three aspects. The first is the traditional theory of
measurement stemming from von Neumann (1955, chap. 6) and Wigner\endnote{In a
conversation with Abner Shimony, Wigner claimed ``I have learned much about
quantum theory from Johnny, but the material in his Chapter Six Johnny
learned all from me'', cited from Aczel, p. 102.}. The object to be measured
and the measuring apparatus together can be represented by a state function,
whose evolution is given by the Schr\"{o}dinger equation. This is linear
dynamics leading from a superposition of states only to further
superpositions. Von Neumann's projection postulate, and similar collapse
postulates, were introduced to explain how a superposition becomes a mixture
in a measurement situation. Omn\`{e}s's treatment of this will be discussed
later. Revisionary interpretations of QM generally reject collapse postulates
as \textit{ad hoc} principles.
By `measurement situation' we refer to a laboratory situation of an
experimenter conducting an experiment, or, in the now fashionable jargon,
performing a measurement. Here the maxim is: Properly performed measurements
yield results. A measurement interpretation of QM has been treated elsewhere
(MacKinnon 2007). This differs from the von Neumann approach in taking the
distinctive results of quantum measurements as its point of departure for
developing the formalism.
Griffiths's development also tailors the formalism of QM to fit experimental
measurement situations. The Schr\"{o}dinger equation is treated as one
method of path development, not as an overall governing principle. This leads
to two general principles: 1) \textit{A
quantum mechanical description of a measurement with particular outcomes must
employ a framework in which these outcomes are represented.}
2) \textit{The
framework used to describe the measuring process must include the measured
properties at a time before the measurement took place.} This embodies the
experimental practice of interpreting
a pointer reading in the apparatus after the measurement as
recording a property value characterizing a system before the measurement.
\subsection{Extending the Formalism}
Gell-Mann and Hartle independently developed a consistent history formalism
as a
transformation of Feynman's sum-over-histories formulation\endnote{Gell-Mann,
M. and Hartle, J. 1990; 1993. The differences between this program and older
forms of reductionism are discussed in MacKinnon 2008a.}. Quantum cosmology,
their concern, requires a quantum mechanical
treatment of closed systems. The universe does not admit of an outside
observer.
The universe is the ultimate closed system. It is characterized by
formidable complexity, of which we have only a very fragmentary knowledge.
The
assumptions behind the big bang hypothesis confer plausibility on the further
assumption that in the instant of its origin the universe was a simple
unified
quantum system. If we sidestep the problem of a state function and boundary
conditions characterizing the earliest stages\endnote{This is treated in
Hartle, 2002a, 2002b}, we may skip to stages later than
the Planck
era, where space-time was effectively decoupled. Then the problem of quantum
gravity may be avoided. The universe branched into subsystems. Even when
the
background perspective recedes over the horizon, a methodological residue
remains, the treatment of closed, rather than open systems. To present the
basic
idea in the simplest form, consider a closed system characterized by a single
scalar field, $\phi(x)$. The dynamic evolution of the system through a
sequence
of spacelike surfaces is generated by a Hamiltonian labeled by the time at
each
surface. This Hamiltonian is a function of $\phi(\mathbf{x},t)$ and the
conjugate momentum, $\pi(\mathbf{x},t)$. On a spacelike surface these obey
the
commutation relations, $[\phi(\mathbf{x},t), \pi(\mathbf{x}',t)] = \imath
\delta(\mathbf{x} - \mathbf{x}')$ (with $\hbar, c = 1$). Various field
quantities (aka
observables) can be generated by $\phi$ and $\pi$. To simplify we consider
only
non-fuzzy `yes-no' observables. These can be represented by projection
operators, $P(t)$. In the Heisenberg representation, $P(t) = e^{\imath
Ht}\,P(t_0)\,e^{-\imath Ht}.$
The novel factor
introduced here is a coarse graining of histories.
Coarse graining begins by selecting only certain times and by collecting
chains of projectors into classes, represented by class operators $C_\alpha$.
The decoherence functional is defined as
\begin{equation}
D(\alpha',\alpha) \, =\, \mbox{Tr}[C_{\alpha'} \, \rho \,C^\dag_\alpha],
\label{Decohere}
\end{equation}
where $\rho$ is the density matrix representing the initial conditions. In
this
context `decoherence' has a special meaning. It refers to a complex
functional
defined over pairs of chains of historical projectors. The basic idea is the
one
we have already seen. Two coarse grained histories decohere if there is
negligible interference between them. Only decoherent histories can be
assigned
probabilities. Different decoherence conditions can be set. We will consider
two\endnote{Gell-Mann and Hartle 1994b}.
\begin{eqnarray}
\mbox{Weak:} \,& \, \mbox{Re} \, \mbox{Tr}[C_{\alpha'} \, \rho
\,C^\dag_\alpha] \,& =\, \delta_{\alpha' \alpha} P(\alpha) \\
\mbox{Medium:} \,& \mbox{Tr}[C_{\alpha'} \, \rho
\,C^\dag_\alpha]\,& =\, \delta_{\alpha' \alpha} P(\alpha)
\end{eqnarray}
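A toy check of the decoherence functional and the medium condition; the class operators, trivial dynamics, and maximally mixed initial state are all my own minimal choices:

```python
import numpy as np

# Toy model (my own choices): two one-event histories given by orthogonal
# projectors, trivial dynamics, and a maximally mixed initial state.
zp = np.array([[1, 0], [0, 0]], dtype=complex)
zm = np.array([[0, 0], [0, 1]], dtype=complex)
rho = 0.5 * np.eye(2, dtype=complex)   # initial density matrix

C = [zp, zm]                           # class operators C_alpha

def D(i, j):
    """Decoherence functional D(alpha', alpha) = Tr[C_alpha' rho C_alpha^dag]."""
    return np.trace(C[i] @ rho @ C[j].conj().T)

# Off-diagonal terms vanish (the histories decohere) and the diagonal
# terms are the probabilities P(alpha) = D(alpha, alpha).
print(abs(D(0, 1)))                 # 0.0
print(D(0, 0).real, D(1, 1).real)   # 0.5 0.5
```
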
Weak decoherence is the necessary condition for assigning probabilities to
histories. When it obtains, the probability of a history, abbreviated as
$\alpha$, is $P(\alpha) \,=\,D(\alpha, \alpha)$. Medium decoherence relates
to the possibility of generalized records. Here is the gist of the argument.
Consider a
Consider a
pure initial state, $ |\psi \rangle $ with $\rho \,=\, |\psi \rangle \langle
\psi |$. Alternative histories obeying exact medium decoherence can be
resolved
into mutually orthogonal branches, $|\psi \rangle \,=\, \sum_\alpha C_\alpha
|\psi \rangle$.
If the projectors do not form a complete set, as in weak decoherence,
then the past is not fixed. Other decompositions are possible. This relates
to
the more familiar notion of records when the wave function is split into two
parts, one representing a system and the other representing the environment,
$R_\alpha (t)$. These could not count as environmental records of the state
of a
system if the past could be changed by selecting a different decomposition.
Thus, medium decoherence, or a stricter condition such as strong decoherence,
is
a necessary condition for the emergence of a quasiclassical order.
It is far from a sufficient condition. The order represented in classical
physics presupposes deterministic laws obtaining over vast stretches of time
and
space. The GH program must show that it has the resources required to produce
a
quasiclassical order in which there are very high approximations to such
large
scale deterministic laws. At the present time the operative issue is the
possibility of deducing such quasi-deterministic laws. The deduction of
detailed
laws from first principles is much too complex. Zurek, Feynman and Vernon,
Caldeira and Leggett, and others initiated the process by considering
simplified
linear models. The GH program puts these efforts into a cosmological
framework
and develops methods for going beyond linear models. The standard
implementation
of a linear model represents the environment, or a thermal bath, by a
collection
of simple harmonic oscillators. In an appropriate model the action can be
split
into two parts: a distinguished observable, $q^i$, and the other variables,
$Q_i$, the ignored variables that are summed over.
The GH program extends this to non-linear models, at least in a programmatic
way. I will indicate the methods and the conclusions. As a first step we
introduce new variables for the average and difference of the arguments used
in
the decoherence function:
\begin{eqnarray}
X(t)\, &=& \, \tfrac{1}{2}(x'(t) \, + \, x(t)),\nonumber\\
\xi(t) \, &=& \, x'(t) \, - \, x(t), \nonumber\\
D(\alpha', \alpha) \, &=& \, f(X,\, \xi),
\label{alpha}
\end{eqnarray}
where $ x'(t)$ and $x(t)$ refer to events.
The rhs of eq. (\ref{alpha}) is small except when $\xi (t) \approx 0$. This
means that the histories with the largest probabilities are those whose
average
values are correlated with classical equations of motion. Classical behavior
requires sufficient coarse graining and interaction for decoherence, but
sufficient inertia to resist the deviations from predictability that the
coarse
graining and interactions provide. This is effectively handled by an analog
of
the classical equation of motion. In the simple linear models, and in the
first
step beyond these, it is possible to separate a distinguished variable, and
the
other variables that are summed over. In such cases, the analog of the
equation
of motion has a term corresponding to the classical equation of motion, and a
further series of terms corresponding to interference, noise and dissipation.
The factors that produce decoherence also produce noise and dissipation. This
is
handled, in the case of particular models, by tradeoffs between these
conflicting requirements. The goal is to produce an optimum characteristic
scale
for the emergence of classical action. In more realistic cases, where this
isolation of a distinguished variable is not possible, they develop a coarse
graining with respect to hydrodynamic variables, such as average values of
energy, momentum, and other conserved, or approximately conserved,
quantities. A
considerable amount of coarse graining is needed to approximate classical
deterministic laws. Further complications, such as the branching of a system
into subsystems, e.g. galaxies, stellar systems, planets, present problems not
yet explored in a detailed way.
Nevertheless the authors argue that they could be handled by further
extensions
of the methods just outlined.
Omn\`{e}s has recently offered a speculative extension of the CH formulation
that addresses the measurement problem\endnote{Omn\`{e}s 2008. I am grateful
to Professor Omn\`{e}s for an advance copy of this article.}. If QM is the
basic science
of reality, then it should somehow explain the fact that properly performed
quantum measurements yield unique results. In this context a quantum
measurement can be thought of as a two-stage process. The first stage is the
transformation of a superposition of states to a mixture, the traditional
measurement problem. When this is treated as a pure theoretical problem, then
it has no solution within the framework of QM applied to an isolated system.
Omn\`{e}s accepts the now common assumption that decoherence reduces a
superposition to a mixture FAPP (for all practical purposes). A mixture of
states assigns different
probabilities to different components. The actual measurement selects one of
these possibilities, effectively reducing all the other probabilities to 0.
This reduction also leads to distinctively classical patterns, e.g.,
ionization, bubbles, tracks. Standard treatments of QM do not attempt to
explain how this reduction happens. They rely on the fact that QM is
intrinsically probabilistic. These probabilities are considered objective,
rather than the subjective probabilities associated with guessing whether a
tossed coin is heads or tails.
Omn\`{e}s's attempt to explain measurement relies on a particular assumption
about the way the probabilities in a mixture evolve. One evolves to a value
of 1, while all the others evolve to a value of 0. This does not follow from
the Schr\"{o}dinger equation. Others have tried to deduce this reduction by
modifying the Schr\"{o}dinger equation (see Pearle 2007). Omn\`{e}s
effectively reverses the procedure. What follows from the assumption that the
probabilities do evolve in this way and evolve very quickly? The key
conclusion he draws is that Tr$(\rho^2) \; \approx \; 1$, where $\rho$ is the
density matrix of the measuring system. Standard physics leads to the
conclusion that Tr$(\rho^2) \, \ll \, 1$. Omn\`{e}s's conclusion entails that
the measuring system is in an almost pure state. This, he argues, would
obtain if the universe were in a pure state. Then reduction is interpreted as
the breaking and regeneration of classicality.
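The purity contrast Omnès's argument turns on is easy to illustrate numerically; the dimension and states below are arbitrary toy choices of mine:

```python
import numpy as np

# Toy illustration (my own states): Tr(rho^2) = 1 for a pure state, while
# Tr(rho^2) << 1 for a highly mixed state in a d-dimensional space.
d = 100
psi = np.zeros(d)
psi[0] = 1.0
rho_pure = np.outer(psi, psi)     # |psi><psi|
rho_mixed = np.eye(d) / d         # maximally mixed state

print(np.trace(rho_pure @ rho_pure))                      # 1.0
print(round(float(np.trace(rho_mixed @ rho_mixed)), 12))  # 0.01
```

A measuring system with Tr$(\rho^2) \approx 1$ is thus close to the pure-state end of this scale, which is the content of Omnès's claim.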
Omn\`{e}s presents a possible mechanism to explain this. ``Reduction is a universal process and therefore
its explanation must be universal.'' If this be so, then the development of
any particular case should illustrate the universal process. There is a pure
state of the universe that controls \textbf{everything}. One should use this,
rather than phenomenological physics as a basis. The Hawking-Hartle cosmology
assumes a pure state function for the universe. However, the formalism does
not relate to particular measurement processes. I do not find this argument convincing. On a phenomenological level,
reduction is ubiquitous, but involves different mechanisms like friction or
approach to equilibrium. These need not have a common solution. Regardless
of whether one finds the proposed solution plausible, the new challenge
remains. If QM is the basic science of reality, then one should attempt to
explain everything physical on the basis of a QM formulation that is not
parasitic on classical physics.
\section{Criticisms of Consistent Histories}
The objections brought against the CH interpretation cluster around the
border
separating physics from philosophy. The technical physical objections have
been
answered largely by showing that confining quantum reasoning to a framework
eliminates contradictions (See Griffiths 1997, 1998, and 2002a, chaps. 20-25). Here we
will
focus on the more philosophical aspects and group them under three headings:
Meaning, Truth, and Arbitrariness. The first two share a core objection. The
CH
interpretation makes meaning and truth framework relative. Critics take this as an
\textit{ad
hoc} restriction that violates accepted norms concerning truth and meaning.
The
issue of arbitrariness concerns the selection of histories. The formalism allows
in principle a very large number of histories. The CH interpretation selects a
few privileged histories. Critics object that the formalism supplies no basis for the
selection. The G-H project specifies the conditions for the emergence of
quasiclassicality. The formalism allows an indefinitely large number of
extensions of the quasiclassical framework. Only a minute fraction of them
preserve quasiclassicality. Again, critics object that the formalism supplies no basis for
selecting
only the members of this minute fraction.
Adrian Kent has brought the issue of meaning to the forefront\endnote{Kent 1996 was answered by Griffiths and Hartle 1997, which was answered by
Kent, 1998.}. Consider two histories with the same
initial and
final states and intermediate states $\sigma_x$ and $\sigma_z$, respectively.
In
each history one can infer the intermediate state with probability 1. A
simple
conjunction of two true propositions yields `$\sigma_x \,\mbox{AND}\,
\sigma_z$'.
Griffiths and Hartle contend, and Kent concedes, that there is no formal
contradiction since the intermediate states are in separate histories. Kent
finds this defense arbitrary and counter-intuitive. Our concepts of logical
contradiction and inference are established prior to, and independently of, their
application to quantum histories. If each intermediate state can be inferred,
then their conjunction is meaningful.
The issue of truth comes to the forefront when one considers the ontological
significance of assigning quantitative values to properties. In classical
physics assigning a
value to a property means that the property possesses the value. Copenhagen
quantum physics fudges this issue. The CH interpretation exacerbates the
difficulty. A realistic interpretation of projectors takes them as
representing
the properties a system possesses at a time. This does not fit the Griffiths
treatment of the delayed choice experiment when one asks what position the
photon \textit{really}
had at time $t_2$. Thus, d'Espagnat (1995, chap. 11)
argues that the CH interpretation involves inconsistent property assignments.
In
a similar vein Bub (1997, p. 236) expressed the objection that if
there are two quasiclassical histories of Schr\"{o}dinger's cat, then one
does
not really know whether the cat is alive or dead. Bassi and
Ghirardi (1999) make the issue
of truth explicit.
The attribution of properties to a system is true if and only if (iff) the system
actually possesses the properties. They find Griffiths's reasoning ``shifty
and
weak'', implying the coexistence of physically senseless decoherent families.
This criticism extends to probabilities. From an ontological perspective
probabilities of properties must refer to objective and intrinsic properties
of
physical systems. There is, they claim, no other reasonable alternative. If
they referred to the possibilities of measurement results, then this would be
a measurement interpretation, not a replacement for it.
Goldstein (1998) argues that the CH interpretation cannot be
true, since it contradicts established no-go theorems.
To treat the framework relativity of truth we should distinguish `truth' and `true'. In philosophical contexts `truth' inevitably conjures up theories of truth: correspondence theories, coherence theories, pragmatic theories, assertive-redundancy theories, and others. The most pertinent, the correspondence theory of truth, generates controversies concerning Aristotle's original doctrine, Tarski's specification of `true' for a formal language, and puzzles concerning the way a proposition corresponds to a state of affairs. The criticisms brought against the CH interpretation seem to presuppose only a minimal sense:
\begin{quote} ``The cat is on the mat'' is true iff the cat is on the mat.
\end{quote}
This looks unproblematic in the context of someone who sees the cat and understands the claim. It becomes highly problematic when one argues from the acceptance of a theory as true to what the world must be like to make it true. Thus Hughes (1989, p. 82) asks `Feynman's forbidden question': ``What must the world be like if quantum mechanics is true of it?''
In forbidding such questions Feynman was following the normal practice of physicists. Claims presented as true do not depend on a philosophical theory of truth,
but on the normal use of language in physics. This will be treated in much greater detail elsewhere (MacKinnon forthcoming). Here we will simply exploit Donald Davidson's truth semantics to indicate how `true' can be interpreted as a semantic primitive whose use is not dependent on theories of truth. Davidson's gradual abandonment of an extensional theory of `true' led to a critical rethinking of the interrelation of truth, language, interpretation, and ontology. I will summarize the overview presented in his (2001, Essay 14). Philosophers have been traditionally concerned with three different types of knowledge: of my own mind; of the world; and of other minds. The varied attempts to reduce some of these forms to the one taken as basic have all proved abortive. Davidson's method of interrelating them hinges on his notion of radical interpretation. My attempt to interpret the speech of another person relies on the functional assumption that she has a basic coherence in her intentions, beliefs, and utterances. Interpreting her speech on the most basic level involves assuming that she holds an utterance true and intends to be understood. The source of the concept `true' is interpersonal communication. Without a shared language there is no way to distinguish what is the case from what is thought to be the case. I also assume that by and large she responds to the same features of the world that I do. Without this sharing in common stimuli thought and speech have no real content. The three different types of knowledge are related by triangulation. I can draw a baseline between my mind and another mind only if we can both line up the same aspects of reality. Knowledge of other minds and knowledge of the world are mutually dependent. ``Communication, and the knowledge of other minds that it presupposes, is the basis of our concept of objectivity, our recognition of a distinction between false and true beliefs''. (\textit{Ibid.}, p. 217).
Our ordinary language picture of reality is not a theory. It is a shared vehicle of communication involving a representation of ourselves as agents in the world and members of a community of agents, and of tools and terms for identifying objects, events, and properties. Extensions and applications may be erroneous. There can be factual mistakes, false beliefs, incorrect usages, and various inconsistencies. But, the designation of some practice as anomalous is only meaningful against a background of established practices that set the norms. Our description of reality and reality as described are interrelated, not in a vicious circle, but in a developing spiral. The acceptance of any particular claim as true implicitly presupposes the acceptance of a vast but amorphous collection of presuppositions. (See Davidson 1984, esp. chap. 14.) These come into focal awareness only in specialized contexts, such as translating material from a primitive or ancient culture with quite different presuppositions or programming a robot to cope with a particular environment.
The acceptance as true of a scientific claim, whether an experimental report or a theoretical deduction, implicitly presupposes the acceptance of a vast, but not so amorphous, collection of claims as true, e.g., the reliability and calibration of instruments, established theories, basic physical facts, the validity of a deduction, the honesty of an experimental report, etc. Any particular claim may be called into question when there are grounds for doubting its truth or pertinence. However, it is not possible to call all the presuppositions into question and continue the practice of science. In the mid 1920s the normal function of implicit presuppositions began to cause serious difficulties in quantum contexts. The most striking example was the way the experimenters Davisson and Germer (1927, 1928) backed into the acceptance of truth claims as framework relative. Their experiments on the scattering of slowly moving electrons off a nickel surface were interrupted when the vacuum tube containing the nickel target burst. They heated the nickel target to remove impurities and then slowly cooled it. When they resumed their scattering experiments they were amazed to find that the earlier random scattering was replaced by a regular pattern very similar to wave reflection. The explanation that gradually emerged was that the heating and slow cooling of the target led to the formation of relatively large nickel crystals. They reluctantly accepted the then novel contention that electrons scattered off crystals behave like waves. The previous presupposition that electrons travel in trajectories is only true in particular experimental situations. Born's probabilistic interpretation of the $\psi$-function implicitly accommodated this framework relativity by according $\int \psi \psi^*$ a value of 1 only when the integration is carried out in the proper environment.
In the CH formulation `true' is interpreted as having a probability of 1 relative to a framework. This is in accord with the normal usage of `true' in quantum physics. It does not invoke any version of the correspondence theory of truth and does not support a context-independent attribution of possessed properties.
Truth is related to implication. In formal logic a contradiction implies anything. This relation was recognized informally long before the development of formal systems. The medieval adage was: ``\textit{Ex falso sequitur quodlibet}''. If anything follows then no implications are reliable. As Kent noted, the CH formulation should accord with the normal relation between implication and contradiction. Here, it is important to recognize the way this was modified in the normal language of quantum physics. The mid 1920s difficulties just noted led to a situation where normal reliance on implicit presuppositions led to fundamental contradictions in the context of quantum experiments.\\
\hspace*{.3in}1a. {\bf Electromagnetic radiation is continuously distributed
in
space.} The high precision optical instruments used in measurements depend on
interference, which depends on the physical reality of wavelengths. \\
\hspace*{.3in}1b. {\bf Electromagnetic radiation is not continuously
distributed
in space.} This is most clearly shown in the analysis of X-rays as needle
radiation and in Compton's interpretation of his eponymous effect as a
localized
collision between a photon and an electron.\\
\hspace*{.3in}2a. {\bf Electromagnetic radiation propagates in wave fronts.}
This is an immediate consequence of Maxwell's equations.\\
\hspace*{.3in}2b. {\bf Electromagnetic radiation travels in trajectories.}
Again, theory and observation support this. The theory is Einstein's account
of
directed radiation. The observations concern X-rays traveling from a point
source to a point target.\\
\hspace*{.3in}3a. {\bf Photons function as discrete individual units.} The
key
assumption used to explain the three effects treated in Einstein's original
paper is that an individual photon is either absorbed as a unit or not
absorbed
at all. Subsequent experiments supported this.\\
\hspace*{.3in}3b. {\bf Photons cannot be counted as discrete units.}
Physicists
backed into this by fudging Boltzmann statistics. It became explicit in
Bose-Einstein statistics.\\
These, and further contradictions concerning electronic orbits, were not
contradictions derived from a theory. The Bohr-Sommerfeld atomic theory
had become a thing of rags and patches. These contradictions were encountered
in
attempts to give a coherent framework for interpreting different experimental results.
We will distinguish the language of phenomena from the language of theories. Bohr's
resolution of these problems included a reformulation of the language of
phenomena. In resolving this crisis, Bohr introduced something of a Gestalt
shift, from analyzing the apparently contradictory \emph{properties}
attributed
to
objects and systems to analyzing the \emph{concepts} used. As Bohr saw it,
the
difficulties were rooted in ``\ldots an essential failure of the pictures in
space and time on which the description of natural phenomena has hitherto
been
based.''\endnote{This was from a talk given in August 1925, before Bohr was
familiar with Heisenberg's new formulation of QM. It is reproduced in Bohr 1934, p. 34. Bohr never intended or presented an interpretation of QM as a theory. See Gomatam 2007.} Bohr reinterpreted the
role of the language used to give space-time descriptions of sub-microscopic
objects and properties.
The description of experiments and the reporting of results must meet the
conditions of unambiguous communication of information. This requires
ordinary
language supplemented by the terms and usages developed through the progress
of
physics. Thus, the meanings of the crucial terms `particle' and `wave' were
set
by their use in classical physics. Each of these terms is at the center of a
cluster of concepts that play an inferential role in the interpretation of
experiments. From tracks on photographic plates experimenters infer that a
particle originated at a point, traveled in a trajectory, collided with
another
particle, penetrated an atom, and displaced an inner electron. Waves do not
travel in trajectories. They propagate in wave fronts, interfere with each
other, are diffracted or absorbed. A straightforward extension of both
concepts
to different contexts generated contradictions.
Bohr's new guidelines, centering on complementarity, resolved these
contradictions by restricting the meaningful use of classical concepts to
contexts where these concepts could be related to real or ideal measurements.
Concepts proper to one measurement context could not be meaningfully extended
to
a complementary measurement context. Bohr treated the mathematical formalism
as
a tool and regarded these analyses
of idealized experiments as the chief means of establishing the consistency
of
the language of quantum physics\endnote{``The physical content of quantum
mechanics is exhausted by its power to formulate statistical laws governing
observations obtained under conditions specified in plain language''. (Bohr
1958, p. 12)}. This
explains the
chiaroscuro nature of his analyses featuring detailed representations of
grossly
unrealistic experiments: diaphragms rigidly clamped to heavy wooden tables, clocks
with the primitive mechanism showing, a scale supported by a dime-store
spring.
These are belligerently classical tools used to illustrate the limits of
applicability of classical concepts in atomic and particle experiments. Bohr
thought he achieved an overall consistency only after 1937. Subsequently he
introduced an idiosyncratic use of `phenomenon' as a unit of explanation. The
object studied, together with the apparatus needed to study it, constitutes a
phenomenon, an epistemologically irreducible unit. Wheeler's analysis of the delayed choice experiment draws on Bohr's terminology: ``No elementary phenomenon is a phenomenon until it is a
registered (observed) phenomenon.'' Idealized thought
experiments
supplied the basic tool for testing consistency.
After these modifications were assimilated into normal linguistic usage in the quantum community the linguistic crisis that precipitated the Gestalt shift receded from collective memory. This forgetfulness allowed critics to couple normal physical language with incompatible extensions of a correspondence theory of truth. The CH formulation is in strict accord with the Bohrian semantics just summarized. We can make this explicit for the issues of meaning, implication, frameworks, and truth. CH relates to experiments through its analysis of measurement situations, i.e., the normal practice of experimenters. Thus, in the delayed choice experiments analyzed earlier, if the C or D detectors detected a particle one could infer the trajectory of the photon. If the C and D detectors are removed and the F detector is triggered, then one can infer that the photon was in a superposition of states. Bohr's use of `phenomenon' treats each experimental situation as an epistemologically irreducible unit.
Within a particular experimental analysis one can rely on classical logic and normal experimental inferences. These inferences cannot be extended to a complementary experimental analysis.
Griffiths's use of `framework' corresponds to Bohr's use of `phenomenon'. Within a framework one uses Boolean logic and relies on normal experimental inferences. However, one cannot juxtapose incompatible frameworks or detach inferences from the framework in which they function. These limitations on allowed inferences were introduced to avoid generating contradictions. Thus, in disallowing the meaningfulness of such juxtapositions as `$\sigma_x^+$ AND
$\sigma_z^+$', where these are intermediate states in different histories, the CH interpretation is in strict accord with the prior rules governing contradiction and implication in quantum contexts. Asserting that the photon traveled through the c arm is equivalent to\\ ``The photon traveled through the c arm'' is true (t).\\ Physicists do not invoke an assertive-redundancy account of truth. They do rely on the normal linguistic practice of assertion encapsulated in (t). When one switches from this normal reliance on `true' to `truth', based on some kind of correspondence theory, then it seems to make sense to ask where the photon really was before the detection. This use of `really' and its ontological implications are not allowed in either Bohrian semantics or the CH formulation. This is what Feynman forbids.
Dowker and Kent (1995, 1996)
criticized the CH interpretation as arbitrary and incomplete. We will separate this criticism from the problems related to quasiclassicality. Consider a
system whose initial density matrix, $\rho_i$, is given along with the normal complement of Hilbert-space observables. Events are specified by sets,
$\sigma_j$, of orthogonal Hermitian projectors, $P^{(i)}$, characterizing
projective decompositions of the identity at definite times. Thus,
\begin{displaymath}
\sigma_j(t_j)\;=\; \{P_j^{(i)}: \, i\,=\,1,2,\ldots,n_j\}
\end{displaymath}
defines a set of projectors obeying eq. (\ref{proj}) at time $t_j$. Consider a list of sets and time sequences. The histories given by choosing one
projection from each set in all possible ways are an exhaustive and exclusive set of
alternatives, $\mathcal{S}$. Dowker and Kent impose the Gell-Mann--Hartle medium decoherent
consistency conditions, restrict their considerations to exactly countable sets,
consider consistent extensions of $\mathcal{S}$, $\mathcal{S^{\prime}}$, and
then ask how many consistent sets a finite Hilbert space supports. The answer is a
very large number. This prompts two interrelated questions. How is one set
picked out as the physically relevant set? What sort of reality can be
attributed to the collection of sets?
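The decoherence condition at issue can be stated compactly. Writing a history $\alpha$ as a time-ordered chain of the projectors just introduced, $C_{\alpha} = P^{(\alpha_n)}(t_n)\cdots P^{(\alpha_1)}(t_1)$ in the Heisenberg picture (a sketch in the notation common in the decoherent-histories literature, not taken from Dowker and Kent's text), the Gell-Mann--Hartle medium decoherence condition requires
\begin{displaymath}
D(\alpha,\alpha^{\prime})\;=\;\mbox{Tr}\left(C_{\alpha}\,\rho_i\,C_{\alpha^{\prime}}^{\dagger}\right)\;=\;0 \hspace{.3in} \mbox{for } \alpha \neq \alpha^{\prime},
\end{displaymath}
so that the diagonal elements $p(\alpha) = D(\alpha,\alpha)$ can consistently be interpreted as probabilities obeying the classical sum rules.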
Griffiths (1998) countered that these extended sets are meaningless. Their construction leads to histories that could not be assigned probabilities. To make the difficulty more concrete consider the simplest idealized realization of the Dowker-Kent \textit{Ansatz}, a silver atom passing through a Stern-Gerlach (SG) magnet. We will use the simplified notation, X, Y, and Z, for spin in these directions. At $t_1$ there are three families:
\begin{displaymath}
X_+(t_1), X_-(t_1) \hspace{.5in} Y_+(t_1), Y_-(t_1) \hspace{.5in} Z_+(t_1), Z_-(t_1)
\end{displaymath}
The passage from $t_1$ to $t_n$ allows $6^{n}$ possible histories. For the simple point we wish to make we consider 6 of the 36 possible histories leading from $t_1$ to $t_2$:
\begin{eqnarray}
(a) X_+ (t_1) X_+ (t_2) & (c) X_+ (t_1) Y_+ (t_2) & (e) X_+ (t_1) Z_+ (t_2)\nonumber\\
(b) X_+ (t_1) X_- (t_2) & (d) X_+ (t_1) Y_- (t_2) & (f) X_+ (t_1) Z_- (t_2)\nonumber
\end{eqnarray}
The formalism does not assign probabilities to these histories. Here the appropriate experimental context would be successive SG magnets with various orientations. Suppose that the atom passes through an SG magnet with an X orientation at $t_1$ and one with a Z orientation at $t_2$; then only (e) and (f) can have non-zero probabilities. The selection of histories as meaningful is determined by the questions put to nature in the form of actual or idealized experimental setups. The fact that the formalism does not make the selection is not a shortcoming.
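Within that experimental context the Born rule fixes the probabilities. For a spin-$\frac{1}{2}$ atom leaving the first magnet in the $X_+$ state, the standard overlaps give
\begin{displaymath}
p(e)\;=\;\vert\langle Z_+ \vert X_+ \rangle\vert^2\;=\;\frac{1}{2}, \hspace{.5in} p(f)\;=\;\vert\langle Z_- \vert X_+ \rangle\vert^2\;=\;\frac{1}{2},
\end{displaymath}
while histories (a)-(d) correspond to no question actually put to nature by this arrangement of magnets. (The numerical values are the familiar textbook ones, not part of the Dowker-Kent exchange.)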
The final objection we will consider is the Dowker-Kent claim that the GH program cannot demonstrate the preservation of a quasiclassical order. Here again, it is misleading to expect the formalism to supply a selection principle. The GH program was set up more like a problem in reverse engineering than as
the interpretation of a formalism.
\begin{quotation}
In a universe governed at a fundamental level by quantum-mechanical laws,
characterized by indeterminacy and distributed probabilities, what is the
origin
of the phenomenological, deterministic laws that approximately govern the
quasiclassical domain of everyday experience? What features of classical laws
can be traced to their underlying quantum-mechanical origin?\endnote{Gell-Mann and Hartle 1993, p. 3345}
\end{quotation}
The G-H project was never presented as a deductive theory. The goal was to
see whether the acceptance of QM as the fundamental science of physical
reality allowed for an explanation of the large-scale deterministic laws
characterizing classical physics, a reverse engineering project that might
eventually lead to a more formal theory.
Consider a hacker trying to reverse engineer a computer game of shooting down
alien invaders and assume that he has developed a machine language
formulation that accommodates the distinctive features of the alien game at a certain stage of the action. Any
such machine language formulation admits of an indefinitely large number of
extensions, only a minute fraction of which would preserve `quasialienality'.
This is not an impediment. The hacker is guided by a goal, reproducing
a functioning game, rather than by the unlimited possibilities of extending machine-language code. The GH program has shown the possibility of programmatically
reproducing basic features of the deterministic laws of classical physics. To achieve this goal the program relies on decoherence and various approximations. It is misleading to treat the result as if it were an exact solution capable of indefinite extension.
When the consistent histories formulation and the Gell-Mann--Hartle project utilizing this formulation are put in the proper interpretative perspective, then they can adequately meet both the philosophical and the physical objections brought against them. Should the CH formulation be accepted as a replacement for the Copenhagen interpretation? My answer to this begins with the Landau-Lifshitz sense of `quasiclassical'. The CH analysis of actual and idealized experiments relies on quasiclassical state functions like $\vert C^*D\rangle$, indicating that the C detector has been triggered and the D detector has not. These are place holders for equivalence classes of state functions that will never be specified in purely quantum terms. In an actual measurement one does not rely on $\vert C^*D\rangle$, but on a description of the measurement situation in the standard language of physics. This puts us back in the realm where the Copenhagen interpretation has a well-established record of success. The CH formulation/interpretation is not a stand-alone interpretation in this practical sense. In the laboratory one carries on with physics as usual. Because of the way it is constructed, the CH formulation parallels the Copenhagen interpretation with a projection postulate, or the measurement interpretation, as explained elsewhere (MacKinnon 2008b).
However, it does serve as a replacement for the Copenhagen interpretation in certain theoretical contexts. We have effectively considered two such contexts in the present article. The first is the acceptance of quantum mechanics as the basic science of reality. It cannot be understood as a basic independent science while its formulation has a parasitic relation with classical physics. Some other proposed replacements for Copenhagen exclude the orthodox treatment of measurements, thus generating a measurement problem. The CH formulation does not have this difficulty. Accordingly, it supplies a consistent \textit{formulation} of QM as a foundational science. This does not imply that it can stand alone in the normal practice of physics. The second context is an application of QM that excludes the possibility of outside observers. Cosmology is the prime example. Here again, one needs a formulation that is independent of, but compatible with, the orthodox interpretation. The CH formulation meets this requirement. We leave open the issue of whether it is superior to other proposed replacements, such as some version of the many-worlds interpretation.
\nocite{*}
\newpage
\begingroup
\parindent 0pt
\parskip 2ex
\def\enotesize{\normalsize}
\theendnotes
\endgroup\normalfont
\newpage
\bibliographystyle{authordate1-4}
%\bibliography{C:/PhilofSC/ConHist/FinalCH}
\newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1}
\newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1}
\begin{thebibliography}{23}
\newcommand{\enquote}[1]{``#1''}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\bibitem[\protect\citename{Aczel, }2001]{Aczel2001}
Aczel, A.:
\newblock Entanglement: The Greatest Mystery in Physics.
Four Walls Eight Windows, New York (2001)
\bibitem[\protect\citename{Bassi and Ghirardi, }1999]{Bassi1999}
Bassi, A. and Ghirardi, G.:
\newblock Decoherent Histories and Realism, arXiv:quant-ph/9912031 (1999)
\bibitem[\protect\citename{Bohr, }1934]{Bohr1934}
Bohr, N.:
\newblock Atomic Theory and the Description of Nature.
Cambridge University Press, Cambridge (1934)
\bibitem[\protect\citename{Bub, }1997]{Bub1997}
Bub, J.:
\newblock Interpreting the Quantum World. Cambridge
University Press, Cambridge (1997)
\bibitem[\protect\citename{Davidson, }1984]{Davidson1983}
Davidson, D.:
\newblock Inquiries into Truth and Interpretation.
Clarendon Press, Oxford (1984)
\bibitem[\protect\citename{Davidson, }2001]{Davidson2001}
-----:
\newblock Subjective, Intersubjective, Objective.
Clarendon Press, Oxford (2001)
\bibitem[\protect\citename{Davisson, }1927]{Davisson1927}
Davisson, C. and Germer, L.:
\newblock Diffraction of Electrons by a Crystal of Nickel.
Phys. Rev. 30, 705-740 (1927)
\bibitem[\protect\citename{Davisson, }1928]{Davisson1928}
-----:
\newblock Are Electrons Waves?
Franklin Institute Journal 205, 597-623 (1928). Reproduced in Boorse, H. and Motz, L. (eds.), The World of the Atom, Vol. 2, pp. 1144-1165. Basic Books, New York (1966)
\bibitem[\protect\citename{D'Espagnat, }1995]{D'Espagnat1995}
D'Espagnat, B.:
\newblock Veiled Reality: An Analysis of Present-Day Quantum Mechanical Concepts.
Addison-Wesley, Reading, Mass. (1995)
\bibitem[\protect\citename{Dowker, }1995]{Dowker}
Dowker, F. and Kent, A.:
\newblock Properties of Consistent Histories. Phys. Rev. Lett. 75, 3038-3041 (1995)
\bibitem[\protect\citename{Dowker, }1996]{Dowker96}
-----:
\newblock On the Consistent Histories Approach to Quantum Mechanics. Journ. of Stat. Phys. 82, 1575-1646 (1996)
\bibitem[\protect\citename{Gell-Mann, }1994]{Gell1994}
Gell-Mann, M.:
\newblock The Quark and the Jaguar: Adventures in the Simple and
the
Complex. W. H. Freeman and Company, New York (1994)
\bibitem[\protect\citename{Gell-Mann and Hartle, }1990]{Gell1990}
----- and Hartle, J.:
\newblock Quantum Mechanics in the Light of Quantum
Cosmology. In Zurek, W. (ed) Complexity, Entropy, and
the Physics of Information, pp. 425-458
Addison-Wesley. Reading, Mass.(1990)
\bibitem[\protect\citename{Gell-Mann and Hartle, }1993]{Gel1994}
-----:
\newblock Classical Equations for Quantum Systems.
Phys. Rev. D 47, 3345-3358 (1993)
\bibitem[\protect\citename{Gell-Mann and Hartle, }1995]{Gel1995}
-----:
\newblock Strong Decoherence, arXiv:gr-qc/9509054 v4 (1995)
\bibitem[\protect\citename{Gell-Mann and Hartle, }1996]{Gel1996}
-----:
\newblock Equivalent Sets of Histories and Quasiclassical Realms, arXiv:gr-qc/9404013 v3 (1996)
\bibitem[\protect\citename{Goldstein, }1998]{Goldstein1998}
Goldstein, S.:
\newblock Quantum Theory Without Observers--Part One, Physics Today 51, 42-47 (1998)
\bibitem[\protect\citename{Gomatam, }2007]{Gomatam2007}
Gomatam, R.:
\newblock Bohr's Interpretation and the Copenhagen Interpretation--Are the Two Incompatible?
In PSA06, Part I, pp. 736-748. The University of Chicago Press, Chicago (2007)
\bibitem[\protect\citename{Griffiths, }1984]{Griffiths1984}
Griffiths, R.:
\newblock Consistent Histories and the Interpretation of
Quantum Mechanics.
Journ. of Stat. Phys. 36: 219-272 (1984)
\bibitem[\protect\citename{Griffiths, }1987]{Griffiths1986}
-----:
\newblock Correlations in Separated Quantum Systems.
Amer. Journ. of Phys. 55, 11-18 (1987)
\bibitem[\protect\citename{Griffiths, }1996]{Griffiths1996}
-----:
\newblock Consistent Histories and Quantum Reasoning.
Phys. Rev. A 54, 2759-2765 (1996). arXiv:quant-ph/9606004
\bibitem[\protect\citename{Griffiths, }1997]{Griffiths1997}
-----:
\newblock Choice of Consistent Histories and Quantum
Incompatibility. Phys. Rev. A 57, 1604, arXiv:quant-ph/9708028 (1997)
\bibitem[\protect\citename{Griffiths, }1998]{Griffiths1998}
-----:
\newblock Consistent Histories and Quantum Delayed Choice,
Fortschr. Phys. 46: 6-8,741-748 (1998)
\bibitem[\protect\citename{Griffiths, }2002]{Griffiths2002a}
-----:
\newblock The Nature and Location of Quantum Information. Phys. Rev. A 66, 012311 (2002{\natexlab{a}}).
arXiv:quant-ph/0203058
\bibitem[\protect\citename{Griffiths, }2002]{Griffiths2002b}
-----:
\newblock Consistent Quantum Theory. Cambridge
University Press, Cambridge (2002{\natexlab{b}})
\bibitem[\protect\citename{Griffiths, }2002]{Griffiths2002c}
-----:
\newblock Consistent Resolution of Some Relativistic Quantum
Paradoxes, arXiv:quant-ph/0207015v1 (2002{\natexlab{c}})
\bibitem[\protect\citename{Griffiths and Hartle, }1997]{Griff1997}
----- and Hartle, J.:
\newblock Comments on `Consistent Sets Yield Contrary
Inferences in Quantum Theory', arXiv:gr-qc/9710025 v1. (1997)
\bibitem[\protect\citename{Griffiths and Omn\`{e}s, }1999]{Griffiths1999}
----- and Omn\`{e}s, R.:
\newblock Consistent Histories and Quantum Measurements,
Physics Today 52, 26-31 (1999)
\bibitem[\protect\citename{Hartle, }2002{\natexlab{a}}]{Hartle2002a}
Hartle, J.:
\newblock The State of the Universe, arXiv:gr-qc/0209046v1 (2002{\natexlab{a}})
\bibitem[\protect\citename{Hartle, }2002{\natexlab{b}}]{Hartle2002b}
-----:
\newblock Theories of Everything and Hawking's Wave Function of
the Universe. In Gibbons, G., Shellard, E., and Rankin, S. (eds.) The Future of Theoretical Physics and Cosmology, arXiv:gr-qc/0209047v1 (2002{\natexlab{b}})
\bibitem[\protect\citename{Hughes, }1989]{Hughes}
Hughes, R.:
\newblock The Structure and Interpretation of Quantum Mechanics.
Harvard University Press, Cambridge (1989)
\bibitem[\protect\citename{Landau, }1965]{Landau }
Landau, L. and Lifshitz, E.:
\newblock Quantum Mechanics: Non-Relativistic Theory, 2nd rev. ed.
Addison-Wesley, Reading, Mass. (1965)
\bibitem[\protect\citename{MacKinnon, }2007]{MacKinnon2007}
MacKinnon, E.:
\newblock Schwinger and the Ontology of Quantum Field Theory,
Found. Sci. 12, 295-323 (2007)
\bibitem[\protect\citename{MacKinnon, }2008{\natexlab{a}}]{MacKinnon2008a}
-----:
\newblock The New Reductionism,
The Philosophical Forum 39, 439-461 (2008{\natexlab{a}})
\bibitem[\protect\citename{MacKinnon, }2008{\natexlab{b}}]{MacKinnon2008}
-----:
\newblock The Standard Model as a Philosophical Challenge,
Phil. of Sci. 75, 447-457 (2008{\natexlab{b}})
\bibitem[\protect\citename{Omnes, }1992]{Omnes1992}
Omn\`{e}s, R.:
\newblock Consistent Interpretations of Quantum Mechanics,
Rev. of Modern Phys. 64, 339-382 (1992)
\bibitem[\protect\citename{Omnes, }1994]{Omnes1994}
-----:
\newblock The Interpretation of Quantum Mechanics.
Princeton University Press, Princeton (1994)
\bibitem[\protect\citename{Omnes, }1999]{Omnes1999}
-----:
\newblock Understanding Quantum Mechanics.
Princeton University Press, Princeton (1999)
\bibitem[\protect\citename{Omnes, }2008]{Omnes2008}
-----:
\newblock Possible Agreement of Wave Function Reduction from the Basic Principles of Quantum Mechanics, arXiv:0712.0730v1 (2008)
\bibitem[\protect\citename{Pearle, }2007]{Pearle2007 }
Pearle, P.:
\newblock How Stands Collapse II,
arXiv:quant-ph/0611212v3 (2007)
\bibitem[\protect\citename{Von~Neumann, }1955]{VonNeumann1955}
Von~Neumann, J.:
\newblock Mathematical Foundations of Quantum Mechanics.
Princeton University Press, Princeton (1955) [1932]
\bibitem[\protect\citename{Wheeler, }1983]{Wheeler1983}
Wheeler, J.:
\newblock Law without Law. In Wheeler, J. and
Zurek, W. (eds), Quantum Theory and Measurement, pp. 182-213.
Princeton University Press, Princeton (1983)
\end{thebibliography}
\end{document}