In recent years there has been increasing interest in the ontological status and meaning of the wave function, and it seems that the research focus is even shifting from the measurement problem to the problem of interpreting the wave function. This motivated us to organize an online workshop on the meaning of the wave function. The group aims to address the controversies surrounding the different viewpoints (Bayesian, epistemic, nomological, ontic, etc.).
Panel Discussion: How to make sense of the wave function?
 This topic has 28 replies, 9 voices, and was last updated 8 years, 1 month ago by Ulrich Mohrhoff.


October 29, 2014 at 8:57 am #989 editor (Keymaster)
I would like to invite all participants, on the last day of our first online workshop, to think about a few general questions:
How to make sense of the wave function?
Do we need new experimental observations to understand the wave function?
Can we fathom the true meaning of the wave function in the end? Or can we only have a best interpretation?
Will the solution of this problem have deep implications for solving the measurement problem?
etc.
October 31, 2014 at 7:06 am #1114 Quantum Speculations (Participant)
My answers to these questions are as follows:
1. We don’t need new experimental observations to understand the wave function. This is different from the case of the measurement problem. This is why I think we should first solve this problem.
2. Although we cannot exclude the antirealist view, I believe we can find the true meaning of the wave function when assuming a realist view. In particular, I think we can determine which of the three views (epistemic, nomological, or ontic) is right. In my opinion, protective measurement (PM) provides strong support for the ontic view.
3. However, I think an ontic wave function is not necessarily a continuous field in configuration space (as usually thought). There are good arguments for the view that the wave function describes the motion of particles in real space, and that the motion is random in nature. For instance, the modulus squared of the wave function may give the probability density that the particles appear in certain positions in space. In this sense, we may say that the wave function describes a statistical property of the motion of a single system. (At a deeper level, I think it may represent the dispositional property of the particles that determines their random discontinuous motion.)
4. I think the true meaning of the wave function will have deep implications for solving the measurement problem (this is also the reason why I think the meaning of the wave function is an extremely important problem). The epistemic view is a good example. My opinion is that even on the ontic view, making sense of the wave function will also help solve the measurement problem. For example, if the Born probabilities originate from the objective probabilities inherent in the random motion of particles described by the wave function, then all existing realistic alternatives to quantum mechanics need to be reformulated. The reformulation may be easier for some alternatives, but more difficult for others. For example, it is relatively easy to find a dynamical collapse model where the chooser or the noise source that collapses the wave function is the underlying random motion of particles. However, it seems difficult to find a new formulation of Bohmian mechanics in which the probabilities of measurement results are objective and come from the wave function. Moreover, it seems that the many-worlds interpretation and the many-minds interpretation cannot be reformulated in terms of the objective probabilities inherent in the random motion of particles either.
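A one-line formula may make the statistical reading in point 3 explicit (the notation here is an editorial sketch, not Shan's own wording):

```latex
% Born-rule reading of point 3: |psi|^2 as an objective position density.
% rho(x,t) is the probability density for the particle to appear at x at time t.
\rho(x,t) \;=\; \lvert \psi(x,t) \rvert^{2}
```

On this reading, \rho is a property of the particle's own random motion, not of any observer's knowledge.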
October 31, 2014 at 2:26 pm #1118 Lee Smolin (Member)
Dear Shan,
I agree with you strongly. But deciding that the wavefunction is ontic, while very important, is of course not the end of the discussion, it is the beginning. It opens up the floor for us to propose hypotheses as to what beables the wave function corresponds to. Quantum mechanics doesn’t tell us, because of the measurement problem, because what we observe is particles not the wavefunction, and because it doesn’t give a complete description of individual processes. To answer the question of what properties of reality the wavefunction corresponds to requires a completion of quantum mechanics.
So, I would like to suggest your four questions pose a challenge to all of us. What hypothesis do we propose as to what the reality of the wavefunction corresponds?
dBB offers such a hypothesis: the wave function is a beable, a potential which guides the particles, which are also beables. The real ensemble formulation (REF) offers another: the wave function corresponds to a real ensemble of similarly prepared systems present in the universe. These systems trade beables through nonlocal processes which reproduce the Schrödinger dynamics.
Since the wavefunction corresponds to an ensemble of real, existing systems, it can correspond to real objective properties of this ensemble. It turns out that the absolute value and the phase correspond to different aspects of the ensemble. The absolute value measures a relative frequency probability distribution within the real ensemble of a beable corresponding to configuration space. The phase corresponds to another beable.
Both the dBB and REF predict regimes where the correspondence with QM fails, which opens the door to experiments which would distinguish them from QM. For the REF this regime consists of entangled states of systems complex enough to have no natural counterparts.
But this is just one hypothesis. Can I close with the provocative statement that, with the PBR and other arguments for the reality of the wave function, it is time for research in quantum foundations to move beyond the phase of “interpretations” of QM to the task of discovering its completion. In other words, to answer your question, the way to make sense of the wave function is to replace it with a more precise description of the reality it corresponds to.
Many thanks for including me in this wonderful workshop.
Lee
October 31, 2014 at 6:35 pm #1121 Robert Griffiths (Participant)
Shan has posed several questions. Here are my answers.
It is clear that wavefunctions are among the standard tools of textbook quantum mechanics, and nobody at this workshop has suggested that that is going to change. Furthermore, they are used to calculate or predict probabilities, and there are plenty of experimental data that agree with those predictions. (He who really nails down a discrepancy gets a ticket to Stockholm.) So it seems likely (very high probability) that there is something “there”, in the microscopic world, that corresponds in some way to the mathematical psi in the theorist’s notebook. On this I think most of us would agree, with the exception of Ruediger and his fellow QBists. But even for those who think there is something there, there is disagreement. One way to characterize different possibilities is to ask the question: psi gives us probabilities OF WHAT? Here are some of the options.
Probabilities of measurement outcomes and nothing else; in particular, don’t talk about what is going on in the microscopic quantum system which has (supposedly) been measured. Along with QBism, include the quantum “orthodoxy” represented by de Muynck, which I characterize as the black box approach: there is a preparation and there is a measurement, and between these everything is inside a black box which cannot be opened. To be sure, there are differences between the strongly subjective Bayesian approach of the QBists and alternative, more objective Bayesian approaches, and still other options, but the point in common is that probabilities-from-psi do NOT refer to microscopic properties.
The alternative is to say that probabilities DO refer to something microscopic. Here there is an important divide between those who are looking for that “something” in some form of hidden variables of the sort which Bell and his successors denote by the symbol lambda, and those who, like me (not clear there are any others at this workshop), insist on a Hilbert subspace ontology, the ‘historians’. It is of some interest that representatives of the lambda school and the local historian are both unhappy with the idea that protective measurements of the wavefunction are measuring something real, rather than some sort of probability. A probability of what? Back to the disagreements.
Shan asks if more experiments are needed. In my opinion the answer is ‘NO’. There have been plenty of experiments, and the vast majority agree with the predictions of quantum theory obtained by calculations that take into account some description of the preparation and then supply probabilities for the macroscopic measurement outcomes (pointer positions). Do people still need to carry out experimental tests of special relativity? Not that I know of, though an occasional check might not hurt. (Most of us were not particularly surprised to learn that those neutrinos that seemed to be violating the speed limit a couple of years ago were actually abiding by it.)
Shan asks whether finding the true meaning of the wavefunction will solve the measurement problem. I don’t think so, although some breakthroughs might shine light on both the meaning of the wavefunction and the measurement problem. Of course I am the one convinced that the measurement problem, or as I would put it both measurement problems, have been adequately solved in the histories approach. If that approach is correct, hidden variables theories are not going to make much progress because they employ what is fundamentally a classical approach to microscopic systems. But the reason we have quantum mechanics is that classical models have failed to explain things.
Bob Griffiths
October 31, 2014 at 6:53 pm #1122 Richard Healey (Participant)
Before asking for the meaning of the wave function shouldn’t we ask which wave function we are talking about?
Too often talk of “the wave function” seems to arise from the perspective of a quantum model without regard to its applications (to an entire toy universe? to an actual system in the lab?).
For dBB, Everett and REF there is one “special” wave function—that of the actual universe/multiverse. Effective wave functions or branch wave functions derived from this may have a lesser claim to beable status.
But for other views (QBist, Bohrian, Rovelli’s(?), my pragmatist) there are many wave functions with equal status (for QBists that status is subjective, or at least personal; for me, that status is objective but nonbeable), and a single system may consistently be assigned more than one of these at once, relative to something else (an agent’s epistemic state for QBists; the physical situation of a hypothetical agent for my kind of pragmatist).
Presumably one can’t perform a protective measurement on the wave function of the universe.
Since there are views of the meaning of the wave function in which there is no measurement problem, it is clear that “the solution of this problem” has implications for (dis)solving the measurement problem.
The term ‘realist’ needs to be used with discretion here since it could mean so many different things. One who holds that the wave function of a system (relative to the physical situation of a hypothetical agent) is determined by the values of physical magnitudes (many on other systems) is to that extent a realist, even if (s)he denies that the wave function is a beable, or that it is ontic in the sense of specifying (directly or indirectly) properties of that system.
October 31, 2014 at 7:02 pm #1123 Ken Wharton (Member)
While I obviously don’t think psi is (always) ontic, I do feel that there are a lot of different perspectives that make psi seem more like an ordinary probability distribution than it normally does. These perspectives include many of the cases we’ve been talking about: protective measurements, consistent histories, weak measurements, and even to some extent PBR.
What all these perspectives have in common, as I see it, is that they all essentially specify the *next* strong measurement that will be made on the wavefunction, and then talk about probabilities before that next measurement. (In the adiabatic case of protective measurements, I suppose that “next” should be read “simultaneous”.)
For me, this just adds to the evidence that \psi is best viewed as a collection of classical probability distributions, most of which are wrong. The correct distribution one should use is conditional on the future measurement geometry (i.e. the choice of the next strong measurement). So \psi is isomorphic to something like P(m(x,t)|G), where m(x,t) are some spacetime-localized microstates, and G is the future measurement-geometry choice. For any given G, P is just a classical probability distribution, but you can’t use it until you know G.
It’s crucial to note that you can’t include G in some larger probability space; it’s incoherent to assign a probability to something you take to be a free choice, coming from outside the system that is being analyzed. So using \psi is necessarily a two-step process, in agreement with Bob’s view: you first have to pick out the correct classical probability distribution from \psi, and then you can use it to make predictions.
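Schematically, Ken's two-step recipe might be written as follows (editorial notation; the set \mathcal{G} of admissible future measurement geometries is an assumption, not something from his post):

```latex
% psi as a G-indexed family of classical conditional distributions:
\psi \;\cong\; \bigl\{\, P\bigl(m(x,t) \mid G\bigr) \;:\; G \in \mathcal{G} \,\bigr\}
% Step 1: fix the future measurement geometry G (the next strong measurement).
% Step 2: predict using the single classical distribution P(\,\cdot \mid G).
% Note: G itself is assigned no probability; it is a free external choice.
```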
This viewpoint explains lots of strange features of \psi, including the dimensionality; as the number of identical particles goes up, it’s not the space of “m” that increases, it’s the space of G. (There are more measurements you can make on 2 electrons than on one electron.) It goes up even faster for different types of particles, because then “m” increases as well… But the exponential (configuration space) growth is simply due to G. (As Ulrich points out, there’s no difference between electron 1 at A and electron 2 at B, and vice versa, but there *is* a difference if they are different particles, so even *more* G’s can be applied to the case where the particles are different vs. where they are the same.)
Now, the obvious retort to this position is that (to use Valia’s terminology) the primitive ontology (PO) of the theory (m(x,t)) can’t possibly depend on the future measurement geometry; as Lee says, we see evidence of causal order all around us, and such a story would be retrocausal. But if you don’t split up physics into an initial PO + dynamics, and instead look at more time-neutral ways to impose boundary constraints on the system (as in the Feynman path integral, a point raised by Ulrich), putting future boundary constraints on the system is perfectly natural. And once you do this, absent Cauchy dynamics, you actually *expect* different future measurements/boundaries to have different influences on the past system. After all, in stat mech, different spatial boundary constraints change the intermediate probabilities (finite size effects, etc.). I’ve been meaning to get back to the discussion on Lee’s topic concerning boundary constraints, and a point I’ll soon make there is that initial (and final) boundaries look a lot less strange if they’re treated on the same footing as spatial boundaries, and of course this same spacetime viewpoint is nicely compatible with Lagrangian general relativity.
For a more expansive take on the above points, http://www.mdpi.com/2078-2489/5/1/190 is a fairly easy read; it basically starts from the Ruediger/Chris QBist stance that the wavefunction is a state of knowledge, and notes that with such final-boundary-constraint effects, \psi can actually contain information about something that really exists in ordinary spacetime, without changing logic in any way. But to make \psi look like classical information, you have to take the final measurement geometry into account. And of course, this happens to be the case in protective measurement schemes. Maybe just a coincidence, but in Matt Pusey’s terms, it does seem clear that knowing what Charlie is up to can make a big difference.
Thanks, Shan, for putting this together!
Ken
October 31, 2014 at 7:26 pm #1124 Robert Griffiths (Participant)
Dear Lee,
In response to your #1118, I confess I don’t know a great deal about the real ensemble formulation (REF), as I have only looked at it recently. As I understand it, there seems to be some resemblance to de Broglie-Bohm (dBB). I note that dBB suffers from a serious problem which was pointed out many years ago by Englert, Scully, Süssmann and Walther, and which Aharonov and Vaidman took rather seriously. It has to do with retrodiction from later properties to earlier times. There are circumstances in which a Bohmian particle trajectory can pass a long way away from a detector which nonetheless detects passage of the particle. In other words, those nasty long range influences can do odd things. The defence by the Bohm school was to challenge the Englert et al. conclusion, because it was based on standard quantum mechanics plus the sorts of reasoning which experienced physicists tend to make when they are thinking about what experiments are telling us. It was not based on simple unitary evolution up to some time and then, bang, the wavefunction collapses.
My contribution [“Bohmian mechanics and consistent histories”, Phys. Lett. A 261, pp. 227-234, arXiv: quant-ph/9902059; it includes references to Englert et al. and other work] was to show that if one employs consistent histories one can analyze the situation and show that Englert et al. were indeed correct. The Bohm school has yet to publish a response. I leave you to draw your own conclusions about dBB; you will be in no doubt as to mine. I am not, of course, saying that REF falls into the same difficulty, but it might be worth taking a look.
Finally, with respect to your statement that it is time to discover a “completion” of quantum mechanics, I ask: what is incomplete about the one advocated by Gell-Mann, Hartle, Omnes and me, and more recently by Friedberg and Hohenberg [R. Friedberg, P. C. Hohenberg, “Compatible Quantum Theory”, Rep. Progr. Phys. 77, 092001; arXiv:1405.1961]? It resolves all of the paradoxes (at least those known to me), is demonstrably local, and is demonstrably noncontextual (using that term the way Bell employed it). Granted, it is much more fun to work things out for yourself, but sometimes taking a look at the literature gives one useful hints.
Bob G
October 31, 2014 at 7:32 pm #1125 Richard Healey (Participant)
To Ken at #1123:
For me, this just adds to the evidence that \psi is best viewed as a collection of classical probability distributions, most of which are wrong. The correct distribution one should use is conditional on the future measurement geometry (i.e. the choice of the next strong measurement). So \psi is isomorphic to something like P(m(x,t)|G), where m(x,t) are some spacetime-localized microstates, and G is the future measurement-geometry choice. For any given G, P is just a classical probability distribution, but you can’t use it until you know G.
I think this is a good way to look at \psi. But why not take the t in m(x,t) to refer to the future measurement time? Then the theory is not committed to any PO of its own, but helps itself to an independently specified ontology only when it needs one (as a first approximation, the magnitudes of classical physics defining the pointer basis for the decoherence involved in the “measurement”).
October 31, 2014 at 7:41 pm #1126 Robert Griffiths (Participant)
Dear Richard,
Relative to your #1122. I think for most of us ‘the wave function’ is something we can imagine arising from some sort of preparation: Alice has a device that produces a spin-half particle with S_x = 1/2, and we assign a wavefunction to the particle, and maybe also to the apparatus that produces it and to the measurement apparatus, but to include the whole cosmos is a bit too big. I tell all my students to begin with the simplest problems first; if you don’t understand them, you are unlikely to make progress on more complicated things.
But then you claim that “there are views of the meaning of the wave function in which there is no measurement problem”. Would you agree with me that the measurement problem is, fundamentally, giving a fully QUANTUM description of the measurement process? If so, then I know of at least one view of the wave function which fits in with an interpretation (mine, of course) that has no measurement problem. Which are the others?
Bob Griffiths
October 31, 2014 at 7:42 pm #1127 Ken Wharton (Member)
Richard,
I agree that one *could* do this, but you’d end up with an impoverished ontology, for no particular gain (except perhaps a wider acceptance from people who don’t like retrocausality…? 🙂 )
Also, when it comes to entanglement, that approach wouldn’t shine any new light on what was going on. In the classical case where you have two matching gloves in two boxes, the “classical entanglement” that links the two boxes is only explicable in terms of the past history of the boxes + gloves. If you were insistent on an ontology that only described the probabilities right before you opened the boxes, then this ‘classical entanglement’ (opening one box, you would suddenly know the other) might look just as strange, magical and nonlocal as Bell-inequality violations!
October 31, 2014 at 7:47 pm #1128 Matthew Pusey (Member)
How to make sense of the wave function?
I currently think the epistemic approach has the best hope of doing this. Even if one constructs a good psi-ontic interpretation, it seems unlikely to make sense of the wavefunction, if that means providing natural explanations for its key properties (living in configuration space, collapse, etc.).
Do we need new experimental observations to understand the wave function?
An experiment that falsified quantum theory would of course have profound effects on all foundational questions. In general I am sceptical that experimental results that are compatible with quantum theory will have big effects on how we interpret it.
Will the solution of this problem have deep implications for solving the measurement problem?
The epistemic approach dissolves the measurement problem. But the wider ‘reality problem’ remains open even if the wave function is epistemic.
October 31, 2014 at 7:55 pm #1129 Richard Healey (Participant)
To Robert at #1126,
Would you agree with me that the measurement problem is, fundamentally, giving a fully QUANTUM description of the measurement process?
Not quite. The measurement problem is a consistency problem: to reconcile a quantum model of a unitary interaction that entangles wave functions of system and apparatus with the interpretative assumption that a wave function completely describes the system to which it is assigned.
That problem is dissolved by noting that a wave function does not describe the system to which it is assigned—neither completely nor incompletely—but merely prescribes the correct degrees of belief one should hold in claims expressing alternative outcomes of the interaction, where the application of quantum theory presupposes that exactly one outcome will occur.
October 31, 2014 at 8:03 pm #1130 Ken Wharton (Member)
Bob; maybe I can amplify Richard’s point in 1129… Those asking for a “QUANTUM” account all the way down are assuming that, at bottom, the quantum description is the most fundamental. But in the psi-epistemic view (at least non-QBist versions of it), the quantum description is not fundamental at all.
A better way to frame your original question then, is how to treat the measured and the measurer on the same conceptual footing, whatever that footing happens to be. (In my view, for example, both look like classical fields, and I doubt you would want to call that a “quantum” description.)
October 31, 2014 at 8:04 pm #1131 Richard Healey (Participant)
To Ken at #1127,
I remain a retroskeptic, but keep trying!
The handwaving response to the classically entangled gloves problem is to appeal to massive environmental decoherence of their state ensuring that each glove may be truly claimed to have its own classically developing trajectory since they were together. Then we get back a factorizable common cause explanation of the kind Bell wanted. Lack of such decoherence in the quantum entangled photons’ state rules out an explanation in terms of a factorizable common cause, but not a common cause that is not factorizable. (See my submission on the IJQF web site to the Bell at 50 volume).
October 31, 2014 at 8:07 pm #1132 Richard Healey (Participant)
To Matt at #1128,
I agree.
October 31, 2014 at 8:08 pm #1133 Ken Wharton (Member)
Richard; but you would *lose* the common cause explanation of the gloves if you denied any ontology until the moment that the box was opened. A perfectly good explanation of a simple correlation would go missing, simply because you perhaps don’t want a similar (retrocausal) account in the Bell-violating case. And once you discard those prior beables, the gloves again look mysterious.
October 31, 2014 at 8:09 pm #1134 Robert Griffiths (Participant)
Dear Richard,
I think your #1129 truncates the measurement problem. Not only do we want the pointer to stop shaking, we want to learn something about the PREVIOUS state of the particle from the pointer’s later position. That’s how experimental physics is carried out, and most obviously so when the apparatus eats up the particle so it is no longer around. Solving the first measurement problem (in my notation) does not solve the second (again, my terminology), and I cannot see how any interpretation of quantum mechanics can be satisfactory without taking care of both.
Bob G
October 31, 2014 at 8:10 pm #1135 Maximilian Schlosshauer (Participant)
To Richard #1129:
… where the application of quantum theory presupposes that exactly one outcome will occur.
Exactly! This is why I tend to think of the measurement problem as a pseudoproblem.
October 31, 2014 at 8:19 pm #1136 Richard Healey (Participant)
To Ken at #1133,
I agree: but I wouldn’t want to do that. I don’t want to restrict claims about positions and such to contexts that we ordinarily think of as measurements (like opening the box). Claims about positions are both significant and (sometimes) true whenever a system’s state is environmentally decohered in position basis. Of course it’s not a coincidence that that is true when we open the box. (Though measuring the position of a photon doesn’t decohere the quantized EM field in “position basis”, so there we have to think of decohering the “pointer” position instead, perhaps when a photoelectron has initiated an avalanche in a photodiode.)
October 31, 2014 at 8:26 pm #1137 Ken Wharton (Member)
Richard… Okay, but then I don’t see why you would want to restrict m(x,t) to only being meaningful at certain t’s. After all, m(x,t) would include the entire past history of the entities (such as one of Bob’s consistent-history paths through an interferometer), and restricting this to m(x,t_m), where t_m is the measurement times, would throw away those very beables that would encode the prior positions.
Now, sure, in my stories m(x,t) is a field and often spreads out (perhaps passing through both slits in a single-particle double-slit experiment, before coming back together), but it’s still *position* information about where that field exists. Plus, sometimes the field is localized, and it sounds like you wouldn’t want to throw out m(x,t) in those special cases. So why throw any of it out at all? (In fact, there are plenty of Bell-inequality scenarios where the positions are always fairly classical, when photons are contained in fiber-optic cables, etc.)
October 31, 2014 at 8:28 pm #1138 Richard Healey (Participant)
To Bob at #1134,
I think my last remark about the position of a detected photon is a first step toward addressing your second aspect of the measurement problem. Some experimenters talk as if the results of their measurements simply reveal the value the measured magnitude already had (e.g. where it was). I think they can get away with this talk as long as they are careful in what inferences they draw from it, since on my pragmatist view the content of a claim is always a function of the role it plays in inferences. Perhaps you agree, since you want to confine reasoning to a frame (is that your word? I forget).
October 31, 2014 at 8:35 pm #1139 Richard Healey (Participant)
Ken,
I don’t want to throw stuff out: I just want to make sure there is a principled way of letting in enough stuff to answer Bob’s question as to what quantum probabilities are probabilities of: i.e. without using any of Bell’s proscribed words such as ‘measurement’. If there is a principled and consistent way of allowing more stuff in, I’d be very happy to do so. Indeed, I might be so happy that I would be prepared to swallow retrocausality (if clearly explained), which is one reason I encouraged you to keep trying!
October 31, 2014 at 8:47 pm #1140 Robert Griffiths (Participant)
Dear Richard, re your #1134
I would agree that statements about what the measurement actually measured, when they concern microscopic properties, need to be treated with caution and put in the proper framework or consistent family, and I think the experimentalists are correct in practice (without being able to supply a full explanation of why they are correct). It is we theorists who get mixed up.
Here is an example. Alice measures S_x and finds some value, and that is the value the particle had before it was measured. But she could have (counterfactually) measured S_z, and therefore S_z also must have had a value.
You will recognize EPR-Bohm with just one particle and no entangled state.
If EPR had had the consistent solution to the (double!) measurement problem we now possess, a lot less ink would have been needed. Bob G
October 31, 2014 at 8:57 pm #1141 Robert Griffiths (Participant)
Dear Richard, re #1139
I advocate as the principled way the Single Framework Rule; at least that seems to allow in enough things for the physicist to do physics, and not so many that one ends up with insoluble paradoxes. The result, by the way, is a satisfactory ability to do retrodiction, but without retrocausality, though some critics of the histories approach, including d’Espagnat, have thought that retrocausality was implied by the Single Framework Rule. Bob G
October 31, 2014 at 9:01 pm #1142 Robert Griffiths (Participant)
Dear Shan,
Before the final bell rings let me express what I am sure is the universal belief of the participants: we all appreciate the hard work you have done in organizing things for this workshop. Thanks a great deal!
October 31, 2014 at 9:21 pm #1143 Lee Smolin (Member)
Dear Bob,
Many thanks for your comments. My first response is that I don’t think that the REF has the issue with retrodiction you mention, but I’ll look into it.
I haven’t thought through your consistent histories framework carefully in many years, so I will take your invitation to do so again. At one time there was an argument by Dowker and Kent that consistent sets of histories did not distinguish the semiclassical domain from many other equally consistent sets of histories. Was that ever resolved?
For me the measurement problem is just one of several reasons why I believe that none of our present theories can be successfully extended to the universe as a whole. (The other reasons are detailed in the books and papers mentioned.) So even were that resolved there would be additional reasons to presume QM is an approximation to a yet to be defined cosmological theory, suitable for small subsystems.
This is to say that there is a need for careful foundational thinking of the kind that is so well demonstrated here in the cosmological domain.
Many thanks,
Lee
October 31, 2014 at 10:11 pm #1144 Robert Griffiths (Participant)
Dear Lee,
My answer to the Dowker and Kent criticism is that the fact that there are alternative frameworks in no way invalidates the quasiclassical framework of Gell-Mann and Hartle. This has not convinced either of them. A not-too-lengthy discussion of things from my present perspective is in “The New Quantum Logic,” Found. Phys. 44 (June, 2014) pp. 610-640. arXiv:1311.2619.
Then I have to add that I leave cosmology and quantum gravity to Jim Hartle. I know better than to tackle problems that are just too tough for me!
Bob Griffiths
November 1, 2014 at 6:58 am #1148Ulrich MohrhoffParticipantThe reason I’m late to the party is the oddity of my time zone (India), not that I want to have the last word!
To my mind, the wave function is a computing “machine” with inputs and outputs. (I’m happy to note that some of you would agree with this.) Pop in (a) the outcomes of the relevant measurements that were made, (b) the times when they were made, (c) a measurement to the possible outcomes of which we want to assign probabilities, and (d) the time of this measurement — and out pop the wanted probabilities.
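[Editorial note: the input/output picture described above can be sketched in code. This is a toy illustration only, assuming a finite-dimensional state vector and the standard Born rule; the function and variable names are hypothetical and not part of the original post.]

```python
def born_probabilities(psi, outcomes):
    """Toy 'computing machine': feed in a state vector psi (a list of
    complex amplitudes) and a dict of named measurement outcomes
    (orthonormal basis vectors); out pop the Born-rule probabilities
    |<outcome|psi>|^2 for each outcome."""
    norm = sum(abs(a) ** 2 for a in psi)  # normalize in case psi is not unit
    probs = {}
    for name, vec in outcomes.items():
        # Inner product <vec|psi>
        amp = sum(b.conjugate() * a for a, b in zip(psi, vec))
        probs[name] = abs(amp) ** 2 / norm
    return probs

# Equal superposition of 'up' and 'down': each outcome close to probability 1/2.
psi = [1 / 2 ** 0.5, 1 / 2 ** 0.5]
outcomes = {"up": [1, 0], "down": [0, 1]}
print(born_probabilities(psi, outcomes))
```

On this reading the wave function is exactly such a map from measurement data to probabilities, with no further ontic commitment.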
The measurement problem has two major components:
(i) How to account for the collapse of the wave function, which supersedes or disrupts its unitary evolution at the time of (or during) a measurement. This is a pseudo-problem, for the time on which \psi functionally depends is the time of the measurement to the outcomes of which it serves to assign probabilities, not the continuously advancing time on which an evolving physical state depends.
(ii) How to account for the absence of interference between macroscopically distinct states. In other words: establish what von Weizsäcker has called the semantic consistency of the theory, i.e., find a way of thinking in which the quantum-mechanical correlation laws are consistent with the existence of their correlata (measurement outcomes). Unlike Shan (#1114), who believes that the reification of the wave function can help solve this problem, I think that the transmogrification of a calculational tool into an evolving ontic state is what makes it impossible to solve this problem. Another reason solving problem (i) won’t help solve problem (ii) is that the former only arises in the wave-function formulation of quantum mechanics whereas the latter is a problem concerning quantum mechanics regardless of how it is formulated.
I do not think (as Lee at #1118 appears to) that quantum mechanics requires completion. What is incomplete is not the formal apparatus of quantum mechanics but the physical world, inasmuch as the latter is not spatiotemporally differentiated “all the way down”, as I tried to explain in my contribution to this workshop (and in some of the references therein). That the spatiotemporal differentiation of the physical world is incomplete is the reason \psi cannot be an ontic state existing at every instant of time. It is also what makes it possible to establish the theory’s semantic consistency.
When asking ourselves what (if anything) in the microworld corresponds to any of the mathematical ingredients of the theory’s formal apparatus — not just the wave function — we ought not lose sight of the fact that “to our present knowledge subatomic reality is not a microworld on its own but a part of empirical reality that exists relative to the macroscopic world, in given experimental arrangements and well-defined physical contexts outside the laboratory”. This quote is from B. Falkenburg’s excellent book Particle Metaphysics. There (most probably) is a deeper level of reality, but (most probably) we won’t be able to discern it by reifying our calculational tools. For this we need both well-defined physical contexts in the macroworld and the quantum-mechanical correlation laws. I like to say this by paraphrasing Kant’s famous statement that “thoughts without content are empty, intuitions without concepts are blind”: without measurements the formal apparatus of quantum mechanics is empty, measurements without the formal apparatus of quantum mechanics are blind.
The deeper reality I (for one) discern is too complex and unfamiliar to most quantum philosophers to even try and outline it here (that I attempted in my contribution), but I want to conclude by saying that reifying our calculational tools is more like erecting an opaque wall between ourselves and that deeper reality. First we must make the correlation laws consistent with the existence of their correlata, and then we must look through the correlation laws and their correlata at what lies beyond them.
Thanks to all and especially Shan for having me as a participant at this great workshop.
