In recent years there has been increasing interest in the ontological status and meaning of the wave function, and the research focus even seems to be shifting from the measurement problem to the problem of interpreting the wave function. This motivated us to organize an online workshop on the meaning of the wave function. This group aims to address the controversies surrounding the different viewpoints (Bayesian, epistemic, nomological, ontic, etc.).
The Quantum Wave Function as Property and as Preprobability
 This topic has 12 replies, 4 voices, and was last updated 6 years ago by Robert Griffiths.


October 25, 2014 at 8:45 am #719
Robert Griffiths (Participant)
In quantum theory a wave function can correspond to a physical property in the sense of a one-dimensional subspace (ray) in the quantum Hilbert space. But in addition to this ontological role it can also be used in an epistemic sense as a preprobability, a mathematical tool for assigning probabilities. Maintaining a clear distinction between these two uses is helpful in avoiding the trap of thinking of the collapse of the (epistemic) wave function as some sort of physical process. Full text
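The preprobability role can be made concrete with a minimal numerical sketch (not from the post itself; the choice of |x+> as the state is an illustrative assumption): the wave function is used only as a tool to assign Born-rule probabilities to S_z outcomes, with no collapse involved.

```python
import numpy as np

# Sketch: |x+> used purely as a preprobability for S_z outcomes.
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |x+> written in the S_z basis

P_z_plus = np.diag([1.0, 0.0])    # projector |z+><z+|
P_z_minus = np.diag([0.0, 1.0])   # projector |z-><z-|

# Born rule: p = <psi|P|psi>
p_up = x_plus @ P_z_plus @ x_plus
p_down = x_plus @ P_z_minus @ x_plus
# p_up and p_down are each 0.5; nothing physical "collapses" here
```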
October 29, 2014 at 7:43 pm #1008
Robert Griffiths (Participant)

Illustration.
Alice is a competent experimentalist who has built an apparatus M to measure S_z for a spin-half particle, and another apparatus L which can prepare a spin-half particle in a state S_x = +1/2. Let |z->, |x+>, etc. be kets with the obvious interpretation. Push a button on L, out goes the particle, to be measured a short time later by M.

If you model this with unitary time evolution following von Neumann, then at the end the apparatus will be in a superposition (forget the normalization) |Psi> = |M+> + |M-> of two pointer positions. Alice, who took my quantum course, treats |Psi> as a preprobability from which she can calculate probabilities that M will end up in states M+ or M-.

Suppose that the pointer is M-. What can Alice conclude about the situation just before the measurement took place? She can conclude |z-> for the particle. Measurements carried out by competent experimentalists can be interpreted that way: the final macroscopic state is correlated in a one-to-one fashion with the state of the measured particle just before the measurement took place. Alice's inference employs what I call the Z family.

But the particle was produced in |x+> and then went through a region free of magnetic field before getting to M. So why shouldn't Alice say that just before the measurement took place the particle state was |x+>? After all, she built L, and competent experimentalists are probably not speaking nonsense. Yes, Alice can assign |x+> using the X family of histories.

Now we have a contradiction, don't we? Using the Z family Alice concluded |z->, with the X family she concluded |x+>. But surely a spin-half particle cannot have both |z-> and |x+> at the same time. True. The histories approach deals with this using the SINGLE FRAMEWORK RULE (SFR): one cannot combine the descriptions provided by two incompatible families (projectors in one don't commute with projectors in the other).

Note, however, that the SFR does not restrict the theoretical physicist to using just one framework for a quantum description. It is a rule which prohibits COMBINING incompatible frameworks.

October 29, 2014 at 8:49 pm #1009
Ken Wharton (Member)

Hi Bob,
I really like the program of looking for a more detailed history of what’s really happening between measurements, and I think there are many examples of these consistent histories that are certainly a better description of reality than the standard QM formalism.
That said, I have a couple of biggish-picture questions for you:
1) If you're correct that QM is giving us the wrong dynamics, what makes you think that it happens to be giving us the right *state space*? After all, if there are stochastic deviations from unitary dynamics, then that's equivalent to some random input influencing the dynamics. And if there's some source of random input outside the purview of standard QM, why shouldn't we look for an expanded state space that would *include* such a (seemingly) random input as part of its makeup? The other approach would be to look at the arguments put forward by the psi-epistemic proponents and wonder if perhaps the QM state space might be the wrong starting point in the first place, whether or not you add things to it.
2) It bothers me that you're not able to offer a realistic story of what might be happening in cases where there *is* no consistent history. Maybe the issue is that (for single-particle preparations and outcomes) you tend to be looking for a history in which there is always one single particle, at every step of the way. Have you considered allowing field-type explanations of single-particle experiments? (Why couldn't a single photon pass through both arms of an interferometer, by splitting up and then joining back together before the final measurement?) I think such field-based ontologies might perhaps expand the set of experiments for which you could find a consistent history, without even needing to deviate from ordinary logic/probability (as Hartle and Gell-Mann claim is necessary. I'm not sure if you're going as far as they are, or just cautioning against asking the very questions that I'm asking…)
Best,
Ken
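The framework incompatibility at issue in this exchange (projectors of one family failing to commute with those of another) can be checked directly. A minimal sketch, with numpy assumed, for the Z and X spin families:

```python
import numpy as np

# Projectors from the Z and X families for a spin-half particle.
P_z_plus = np.diag([1.0, 0.0])                  # |z+><z+|
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)
P_x_plus = np.outer(x_plus, x_plus)             # |x+><x+|

# The families are incompatible because these projectors do not commute,
# which is exactly the situation the single framework rule addresses.
commutator = P_z_plus @ P_x_plus - P_x_plus @ P_z_plus
incompatible = not np.allclose(commutator, 0)   # True
```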
October 29, 2014 at 9:00 pm #1010
Matthew Pusey (Member)

Hi,
Thanks for the interesting remarks. Given that ontic and epistemic are fundamentally different categories, would you agree that, generally speaking, it would be surprising to find something ontic and something epistemic represented by the same mathematics?
If you do not agree: can you think of an example of this occurring outside of quantum theory?
If you agree: is there something specific to wave functions that mitigates this surprise in your approach?
Yours,
Matt

October 29, 2014 at 9:12 pm #1011
Robert Griffiths (Participant)

Dear Ken,
I think you are mistaken in saying that because the histories approach is stochastic it is assuming that "QM is giving us the wrong dynamics". Stochastic processes have been part of standard quantum theory ever since Born proposed that this was the way to understand Schrodinger's wave, which as I recall was some six weeks or so after Schrodinger's time-dependence paper. Von Neumann made stochastic dynamics part of his understanding of temporal quantum evolution. And it is implicit in the textbook treatments, though they have a confusing way of talking about it and conceal the lack of understanding by referring to "measurements", a problem which has not been solved by the people who insist that all quantum dynamics is unitary. Every time you tell your San Jose students to calculate a probability as <psi|projector|psi> you are employing the wave function to generate probabilities. It is GRW, not histories, that has a source of randomness outside standard QM.

Re your comment 2), could you give me an example of what you are thinking about as a situation where there is no consistent history?
Best! Bob Griffiths
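Bob's point that stochasticity is already implicit in textbook quantum theory can be illustrated with a small simulation (a hedged sketch; the seed, sample size, and choice of |x+> are arbitrary illustrative assumptions): the wave function supplies only the outcome probabilities via <psi|P|psi>, and individual outcomes are random.

```python
import numpy as np

# Born's stochastic reading: |psi> fixes probabilities, not outcomes.
rng = np.random.default_rng(0)
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)
P_z_plus = np.diag([1.0, 0.0])
p_up = x_plus @ P_z_plus @ x_plus            # <psi|P|psi> = 0.5

# Sample many individual (random) measurement outcomes.
samples = rng.random(100_000) < p_up         # True means "spin up"
freq_up = samples.mean()                     # close to 0.5 for large samples
```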
October 29, 2014 at 9:19 pm #1012
Robert Griffiths (Participant)

Dear Matt, in response to your #1010
Note that in the histories approach ontic and epistemic are not always represented by the same thing. Ontic includes properties represented by subspaces of the Hilbert space of dimension 2 or more, and epistemic includes density operators and the probabilities assigned to histories involving 3 or more times. It is certainly the case that they overlap in the case of a one-dimensional subspace or a ket used as a preprobability. The closest thing I can think of in ordinary probability theory is the case of propositions with probability 1, which is somewhat analogous, though there I think it is a good idea to maintain the ontological and epistemic distinction, just as in the quantum overlap discussed above.

October 29, 2014 at 9:32 pm #1013
Matthew Leifer (Member)

Matt #1010,
Perhaps I can help Bob out by offering an example. It also happens in Spekkens’ toy theory. There, the set of pure states is isomorphic to the set of outcomes of maximally informative measurements. The former are epistemic and the latter represent ontic properties of the system.
October 29, 2014 at 9:35 pm #1014
Matthew Pusey (Member)

Thanks, I think that helps. It's just occurred to me that a similar 'coincidence of ontic and epistemic' actually occurs in Spekkens' toy theory: the pure states and pure effects are the same set (just as in quantum theory), and yet there is a fact of the matter about whether a given system will result in a given pure effect (or if you like, whether the system "has that property") even though there isn't a fact of the matter about which pure state applies to the system.
P.S.: The “Thanks” was directed to Bob. Looks like great minds think alike, Matt.
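The Spekkens coincidence can be sketched concretely (details assumed from the published toy model: one elementary system has four ontic states, a pure epistemic state is a two-element subset, and a maximally informative question asks which half of a two-and-two partition the ontic state occupies):

```python
# Illustrative sketch of Spekkens' toy theory.
ontic_state = 1            # the actual state of the system
epistemic_state = {1, 3}   # an agent's pure state of knowledge

effect = {1, 2}            # a pure effect: "is the ontic state in {1, 2}?"

# There is a fact of the matter about the effect's outcome...
has_property = ontic_state in effect                          # True
# ...yet the epistemic state assigns it only probability 1/2:
prob = len(epistemic_state & effect) / len(epistemic_state)   # 0.5
```

This mirrors Matt's point: the answer to the pure-effect question is fixed by the ontic state, while no pure epistemic state is a fact about the system.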
October 29, 2014 at 9:45 pm #1016
Ken Wharton (Member)

Hi Bob,
Let’s see if I can tighten up my original questions, in light of your responses:
1) You say in your introduction that if you use the Schrodinger equation without stochastic terms, it can only be used to compute "probabilities, not the future state of the world". Since you have some ontic aspects of your original |psi>, then clearly the ordinary deterministic Schrodinger equation does not give us the "true" dynamics of whatever those ontic aspects are. (Right?) So it seems to me that from an ontic perspective, you're saying that the Schrodinger equation is not giving us the full story of what is really happening. Therefore I wonder why you think it's using the proper state space to begin with.
On #2, let’s talk about Stern-Gerlach measurements on a single spin-1/2 system. It’s prepared in spin-up via an S_z measurement, then passes through an S_x measurement device, but the two output beams (paths A and B) from the S_x magnet are brought back together without a which-path measurement. Finally, the particle is remeasured with S_z, and of course it’s still spin-up. I’m interested in the history between the two S_z measurements. Specifically, would you be happy with a history that says some ontological field travelled on *both* paths A and B? Or do you think I shouldn’t be asking such a question in the first place?
Best,
Ken
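Ken's interferometer can be worked through numerically (a sketch assuming perfectly unitary evolution and ideal beam recombination): coherent recombination of the two S_x paths restores spin-up with certainty, while a which-path measurement would reduce that probability to 1/2.

```python
import numpy as np

# Prepare |z+>, split into the S_x eigenstates (paths A and B),
# recombine, and remeasure S_z.
z_plus = np.array([1.0, 0.0])
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)    # path A
x_minus = np.array([1.0, -1.0]) / np.sqrt(2)  # path B

a = x_plus @ z_plus                            # amplitude on path A
b = x_minus @ z_plus                           # amplitude on path B

# Coherent recombination (no which-path measurement): |z+> is restored.
recombined = a * x_plus + b * x_minus
p_coherent = (recombined @ z_plus) ** 2        # 1.0

# A which-path measurement would destroy the coherence:
p_incoherent = a**2 * (x_plus @ z_plus)**2 + b**2 * (x_minus @ z_plus)**2   # 0.5
```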
October 29, 2014 at 9:46 pm #1017
Matthew Leifer (Member)

Bob,
This is not to do with the status of the wavefunction specifically, but what I have always disliked about your approach to consistent histories is a consequence of the single framework rule.
Different frameworks may contain some projectors in common, which is precisely what happens in Bell-Kochen-Specker experiments. Any inferences between different frameworks are viewed as meaningless in the consistent histories approach, so there is no way of reasoning about a projector as it appears in one framework given facts about it as it appears in a different framework. Nevertheless, we have the curious coincidence that the same projector always receives the same probability regardless of which framework it is viewed as being a part of. If we are not supposed to view the projector as representing the same property in the two different frameworks then why should this be the case?
Now, you might want to respond that this just follows from the mathematics of quantum theory. However, I think that part of the role of an interpretation should be to explain why that mathematics must hold, at least partly, rather than just putting interpretive structure on top of the mathematics. A quantum logician, who does view the projectors as representing the same property in the two different frameworks, can appeal to Gleason’s theorem to explain the Born rule probabilities, but it seems that you cannot.
I understand that you need to adopt the single framework rule in order to avoid Bell-Kochen-Specker contradictions, but it seems to me that in doing so you have also thrown out the explanation for why the Born rule holds in the first place.
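The coincidence Matt describes can be sketched numerically (an illustrative qutrit example with a randomly chosen state; the agreement is automatic from the Born formula, which is precisely what he is asking the interpretation to explain):

```python
import numpy as np

# A fixed projector P embedded in two incompatible frameworks.
rng = np.random.default_rng(1)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

P = np.diag([1.0, 0.0, 0.0])                   # the shared projector |0><0|

plus = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
minus = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)
framework1 = [P, np.diag([0.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])]
framework2 = [P, np.outer(plus, plus), np.outer(minus, minus)]

# Both are projective decompositions of the identity...
ok1 = np.allclose(sum(framework1), np.eye(3))
ok2 = np.allclose(sum(framework2), np.eye(3))

# ...and P receives the same probability in each framework:
p1 = (psi.conj() @ framework1[0] @ psi).real
p2 = (psi.conj() @ framework2[0] @ psi).real
```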
October 29, 2014 at 10:01 pm #1019
Robert Griffiths (Participant)

Ken, responding to your #1016
1) The events that occur at successive times in a history are ontic, i.e., a succession of properties. However, you cannot assign probabilities to a history involving three or more times (an initial state and two later times) by simply starting with the initial state and using Schrodinger's equation. That would give you the probabilities of the states at individual times conditioned on the initial state, but not the correlations between states at later times. However, these other probabilities are given by an extension of the Born rule which employs unitary time evolution (thus Schrodinger's equation), but not just the wave function emanating from the initial state. I hope this helps.

2) In cases in which interference occurs the histories approach can only assign probabilities to families of histories which satisfy consistency conditions, and these are restrictive. Let me discuss it in terms of the double slit, assuming an initial coherent state before the particle passes through the slit system. If you insist that the particle arrive at a definite point in the interference region you cannot specify a slit through which it passed: Feynman knew this intuitively; the histories approach reduces it to mathematics. However, if you are willing to allow a different state at a later time you can construct a family of histories in which the particle goes through a definite slit. Ch. 13 of my book has an extended discussion. In this respect the histories approach is restrictive, and you may not like that. On the other hand, if you don't abide by the restrictions you can construct paradoxes.
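The consistency conditions can be checked in a toy version of the double slit (a sketch under stated assumptions: two-time histories, trivial dynamics U = I, and a spin-half stand-in where the "slits" are the S_z projectors and the "interference point" is the |x+> projector):

```python
import numpy as np

x_plus = np.array([1.0, 1.0]) / np.sqrt(2)        # initial coherent state
Pz = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # which "slit" at t1
P_interf = np.outer(x_plus, x_plus)               # interference-point event at t2

def D(final, a, b, psi):
    # Decoherence functional D(a, b) = <psi| C_b^dag C_a |psi>,
    # with chain operators C_a = final @ Pz[a] (dynamics U = I).
    Ca, Cb = final @ Pz[a], final @ Pz[b]
    return psi.conj() @ Cb.conj().T @ Ca @ psi

# Insisting on a definite interference-point outcome: the off-diagonal
# term does not vanish, so definite-slit histories are inconsistent.
off_interf = D(P_interf, 0, 1, x_plus)            # 0.25, nonzero

# With a final S_z event instead, the off-diagonal term vanishes,
# and a definite-slit family is allowed.
off_z = D(Pz[0], 0, 1, x_plus)                    # 0.0
```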
October 29, 2014 at 10:11 pm #1020
Robert Griffiths (Participant)

Dear Matt, with respect to #1017
The way I prefer to think about things is found in Ch. 16 of my book, and works in the following way. Probabilities are always to be assigned on the basis of certain data assumed known, i.e., they are conditioned on these data. If you are interested in assigning a probability to some property or history on the basis of these data, the best strategy is to employ the smallest possible framework that contains the data and what interests you. That is, in any case, the minimum framework you can use. Now you can refine the framework by adding additional stuff, and construct various incompatible frameworks by in one case adding stuff_1 and in the other case stuff_2. You will get incompatible frameworks this way, but since they are refinements of the smallest framework, the probabilities assigned to what you were originally interested in remain the same. I find this perspective helpful, but of course would be happy if you could figure out how to improve on it.

I don't understand why you think I have thrown out the explanation for why the Born rule holds. In any case I consider the Born rule as a sort of axiom, not a derived quantity.

October 29, 2014 at 10:16 pm #1021
Robert Griffiths (Participant)

Dear All,
I need to catch a bus, so I'll finish at this point, and if additional things come in I will try to respond to them later. Thank you for some very interesting questions, and I hope you have found the exchange as helpful as I have.

Best! Bob Griffiths
