It has been realized that the measurement problem in quantum mechanics is essentially the determinate-experience problem: the problem of explaining how the linear quantum dynamics can be compatible with the existence of our definite experience. This means that in order to finally solve the measurement problem it is necessary to analyze the observer, who is physically in a superposition of brain states with definite measurement records. Indeed, such quantum observers exist in all the main realistic solutions to the measurement problem, including Bohm's theory, Everett's theory, and even the dynamical collapse theories. Then, what does it feel like to be a quantum observer? Although these theories, as well as the bare theory, give their respective answers to this intriguing question, it is still unknown what the true answer is. It can be expected that the answer, once obtained, will have significant implications for solving the measurement problem.
In parallel with the Physics of the Observer program and RFP announced by FQXi, we will host an online Workshop on Quantum Observers from 9 January 2016 to 19 January 2016. The workshop will bring together leading experts in the field and address the most pressing issues in understanding quantum observers and solving the measurement problem.
Based on the successful experience of the First iWorkshop on the Meaning of the Wave Function, the John Bell Workshop 2014, and the Quantum Foundations Workshop 2015, this workshop will also be self-organized to a large extent. Every member may create a topic in the workshop forum on their own, giving a concise introduction to the ideas to be discussed and stating the date and time of their two-hour discussion. Other members can then leave comments beforehand or participate in the discussion by text chat in the forum during that two-hour window.
The list of participants and the schedule of the workshop will be announced soon. Selected presentations from the workshop will be published in the International Journal of Quantum Foundations.
Note: This is not a public workshop. Group content and activity will only be visible to members of the group, most of whom are invited. If you would like to participate in the workshop, please log in or contact us.
Does the psi-epistemic view really solve the measurement problem?
 This topic has 25 replies, 7 voices, and was last updated 7 years, 2 months ago by Jiri Soucek.


January 14, 2016 at 7:31 am | #3284 | Quantum Speculations (Participant)
It is widely thought that the psi-epistemic view provides a straightforward resolution or even a dissolution of the measurement problem. In this paper we argue that this is not true. In order to explain the collapse of the wave function merely as a process of updating information about the underlying physical state, this view requires that all observables defined for a quantum system have pre-existing values at all times, which are their eigenvalues. But this requirement contradicts the Kochen-Specker theorem. We also point out that the ontological model framework, on which the existing psi-epistemic models are based, needs to be extended to solve the measurement problem.
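For readers who want the obstruction spelled out, the Kochen-Specker theorem can be stated compactly as follows (a standard textbook formulation supplied here for illustration, not quoted from the paper):

```latex
\textbf{Kochen--Specker.} For a Hilbert space $\mathcal{H}$ with
$\dim \mathcal{H} \geq 3$, there is no valuation
$v : \{\text{projectors on } \mathcal{H}\} \to \{0,1\}$ satisfying
\[
  \sum_i v(\Pi_i) = 1
  \quad \text{for every resolution of the identity} \quad
  \sum_i \Pi_i = I ,
\]
where $v(\Pi)$ is assigned noncontextually, i.e.\ independently of which
resolution $\Pi$ appears in. Hence observables cannot all possess
noncontextual pre-existing eigenvalues that measurements merely reveal.
```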
January 14, 2016 at 4:52 pm | #3287 | Mark Stuckey (Participant)
As Price & Wharton point out, once you consider QM to be giving 4D distributions in spacetime (Lagrangian schema), rather than time-evolved distributions in configuration space (Newtonian schema), mysteries like the MP are resolved trivially. This is a psi-epistemic view.
January 14, 2016 at 10:02 pm | #3289 | Ken Wharton (Member)
Thanks for this, Shan.
I’m worried you’re setting up a bit of a straw man version of psi-epistemic models (you call it the “realist psi-epistemic” view, an interesting choice of words). I’d be surprised if there were many (or any!) quantum foundations people who take such a view.
Specifically, you are ascribing the following logic to anyone who takes the psi-epistemic viewpoint:
“Since this explanation of wavefunction collapse is supposed to hold true for the measurement of every observable at any time, it must assume that all observables defined for a quantum system have definite values at all times, which are their eigenvalues, independently of any measurement context, and moreover, measurements also reveal these pre-existing values. This means that the explanation is based on a hidden-variables theory of the most straightforward sort,”
A “straightforward sort” indeed! Far too straightforward, as you then correctly point out.
Let me try to rewrite those sentences using less straightforward (more general) assumptions that I think psi-epistemic proponents like Matt Leifer would be happier with:
KW: This explanation of wavefunction collapse is supposed to hold true for the measurement of *any* observable at any *one* time. It must therefore assume that the *particular* observable that will be measured has a definite value at the time of measurement. Basic quantum no-go theorems tell us that this behavior must be contextual; i.e. these definite values must correspond to (and probably must be explained by) the actual type of measurement and the actual time at which it occurs.
Then, concerning your three problems, I only see the first one as a true problem. Given the necessary contextuality, your first problem is a very real one, but might be better expressed as: Why is it that the type and time of measurement picks out a special definite value of that one special observable at that time? You might see my forum contribution for some ideas on this front. This is really the only outstanding question for this more general (hopefully workable) psiepistemic view.
The second problem is not really a problem at all; you simply note that measurements are invasive, and affect the system. Granted: this is true in every psi-epistemic model, and it’s also true classically. You can’t learn anything about a system without interacting with it, and that interaction will affect it. (My preferred solution to the first problem actually *requires* devices to constrain measured systems.) Note that this issue is completely and utterly distinct from the usual aspects of the “measurement problem”; the fact that interactions affect systems is not a problem whatsoever, just a logical consequence. (Granted, this looks a lot more normal in physical spacetime rather than in configuration space, but that’s another issue…)
The third problem is also not a problem so long as the unmeasured observables that don’t fall into nice eigenvalue categories are, in fact, unmeasured. The only observables that need to fall into such categories are the ones that are measured, so answering the first problem (above) is the only answer that is needed. If the actual measurement constrains which observables are forced to take a well-defined eigenvalue, there is no reason to worry about unmeasured properties that do not take such well-defined values. (In other words, under a momentum measurement, there is no reason why a photon cannot “really” be spatially spread out — so long as it is not spatially spread out under a position measurement.)
All the best,
Ken
January 15, 2016 at 1:05 pm | #3290 | Quantum Speculations (Participant)
Thanks Mark and Ken for your very helpful comments. I will need more time to think about them. For now, I think Ken’s explanation seems to require denying the existence of free will and essential randomness, and has to resort to superdeterminism. Best, Shan
January 15, 2016 at 3:45 pm | #3291 | Mark Stuckey (Participant)
Ken, why do you think “realist psi-epistemic view” is “an interesting choice of words”? The title of our last RBW paper in IJQF was “Relational blockworld: Providing a realist psi-epistemic account of quantum mechanics”, which we wrote after extensive correspondence with you.
January 15, 2016 at 4:25 pm | #3292 | Ken Wharton (Member)
Shan: I’m not sure what you mean by “essential randomness”, or whether that would be ‘good’ or ‘bad’. But you are correct to imply that if there is some ‘special observable’ A in the real system that is perfectly correlated with the measurement B that we utilize on that system, then this correlation needs to be explained causally.
One option is to say that A directly causes B. But since B is correlated with the experimenter’s seemingly-free decision, that means that “A” is constraining the decision. I assume that is what you mean by ‘denying free will’.
Another option is to say that the correlation is induced by some other entity C that causes both A and B. That’s what most people mean by “superdeterminism”, and again our free will seems to be gone.
But you didn’t mention the third option: B causes A. This restores some semblance of free will, because B (the experimenter) is now the source of the causation. My contribution in this forum notes that B causing A is far more natural from a statistical perspective: large systems tend to constrain small systems, not vice versa. Of course, in many cases this third option is retrocausal, so it’s unnatural from the typical dynamical perspective.
Another argument for the third option can be found in the recent piece that you published, near the end of the piece. When it comes to entanglement, the first two options are subject to the Wood-Spekkens fine-tuning argument, in a way that the third option is not. Also, Huw Price and I recently wrote an essay distinguishing retrocausality from superdeterminism that you might find entertaining.
January 15, 2016 at 4:30 pm | #3293 | Ken Wharton (Member)
Mark: concerning “realist psi-epistemic view”. There’s no problem with the basic idea, so long as “realist” clearly modifies “view”. But a different parsing might make it seem that “realist” modifies “psi”, which would imply exactly the opposite of psi-epistemic (psi-ontic). That’s all I was getting at… 🙂
January 15, 2016 at 5:14 pm | #3294 | Arthur Fine (Participant)
Shan, thanks for the post. Here are a few thoughts.
1. You catch out the ontological models view at a weak point. For what they call “psi-epistemic” has to do with possible overlap of ontic states in the preparation of distinct state functions. But that has nothing to do with explaining collapse on measurement. In the earlier literature, as in your citation from Einstein, knowledge was invoked to gesture at a possible explanation of collapse. (See, for example, Kemble, E. C., The Fundamental Principles of Quantum Mechanics, McGraw-Hill, New York, 1937.) But this current use of “epistemic” is a technical term of art in a quasi-operational approach to QM, with no special connection to collapse.
2. Re Einstein. Many commentators cite Einstein when it suits them, either to bolster their own attitude or to score a point over some alleged failing of Einstein’s. But Einstein’s writings are subtle, complex and diverse, and these sorts of citations do him a disservice. In particular you cite a passage where he seems to affirm a “state of knowledge” view of the state function. But I can give you many other passages where he affirms, just as surely, quite opposite views. I have an old paper on this, where I point to the diversity and try to explain it. (“Einstein’s Interpretations of the Quantum Theory”, Science in Context 6 (1993), 257-273. Also in M. Beller, R. S. Cohen and J. Renn (eds.), Einstein in Context, Cambridge: Cambridge University Press, 1993, pp. 257-273.) I am sure more recent scholarship has gone even further along those lines. One lesson: there really is no definite thing that can count as “Einstein’s interpretation” of QM. See the paper.
3. Your argument invokes the Bell-Kochen-Specker theorem. (Let’s be historically accurate here; Bell published first. It was a small point of pride with him, even if it was a somewhat sore point for Kochen & Specker, whose publication was delayed.) But you are cavalier in suggesting that value definiteness is all that theorem needs. Of course one needs more (as you sort of acknowledge in a footnote). The assignment of values has to be noncontextual. But more still: we need orthogonal additivity, or its equivalent (product rule, sum rule, etc.): that in any resolution of the identity by rank-one projectors exactly one projector is assigned the value 1.
In the ontological models framework this is ensured by a special additivity postulate; namely, that for any ontic state x the response-function probabilities at x for the eigenvalues of any observable add up to 1. In the deterministic case, this implies orthogonal additivity and generates a no-go. They tend to gloss over this additivity assumption as something surely obvious: that every measurement has a result. But additivity is a postulate that needs to be examined, since in fact not all measurements do have results (talk to your laboratory bench mate). Even as an idealization, is it reasonable to assume that the ontic state resulting from any state preparation whatsoever would be suitable for performing absolutely any measurement of anything at all? That constitutes a strong sort of noncontextuality jointly for preparations and measurements, one that goes beyond the usual Bell-Kochen-Specker noncontextuality. Also note that it is a critical assumption, not usually brought out, but necessary in the demonstration that there are no maximally epistemic (noncontextual) models, or no preparation-noncontextual ones. It is also critical for the Pusey-Barrett-Rudolph theorem.
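The additivity point can be written out explicitly. In the usual ontological-models bookkeeping (standard notation, with symbols chosen here for illustration), a preparation $P$ induces a distribution $\mu_P$ over ontic states $\lambda \in \Lambda$, and a measurement $M$ is represented by response functions $\xi_M$:

```latex
\Pr(k \mid M, P)
  \;=\; \int_\Lambda \xi_M(k \mid \lambda)\, \mu_P(\lambda)\, d\lambda
  \;=\; \operatorname{Tr}\!\left( \rho_P E_k \right),
\qquad \text{with} \qquad
\sum_k \xi_M(k \mid \lambda) = 1 \;\; \text{for all } \lambda \in \Lambda .
```

The final sum condition is the additivity postulate under discussion: it asserts that every ontic state, however prepared, yields some outcome under every measurement; in the deterministic case $\xi_M(k \mid \lambda) \in \{0,1\}$ it reduces to orthogonal additivity.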
4. Lastly, let me contrast another sort of picture of the wave function as representing incomplete knowledge with yours. Knowledge of what? You suppose it is knowledge of the exact values of observables. But it could be incomplete knowledge of “a real state of affairs”, just as Einstein says (in your Heitler citation, and elsewhere, sometimes). Moreover, as in the ontological models, in general that real state may only determine probabilities for measurement outcomes. The particular outcomes may be matters of chance, governed only by probabilistic laws. (Einstein says this too, sometimes.) Then, given a particular outcome, we update our (still partial) knowledge of the real state accordingly. This dissolves the specific problem of collapse, turning collapse into updating.
Other issues, however, may remain. Does the real state change under measurement? In the Heitler quote Einstein thinks not. But measurement is a physical interaction, so plausibly it might. In that case some might want a dynamics, which could be a probabilistic dynamics. And that could be an issue, as you say, for an epistemic view to address. But any view that postulates real change due to measurement would have the same issue.
January 15, 2016 at 7:29 pm | #3295 | Mark Stuckey (Participant)
Shan, there is no superdeterminism in retrocausality with global constraints. Superdeterminism entails a time-evolved story per the Newtonian schema (NS), i.e., invoking a physical mechanism that “causes” the experimentalist to make certain choices. In Wharton’s Lagrangian schema (LS), the explanation is spatiotemporally holistic, e.g., Fermat’s Principle. Time-evolved causal stories of the NS explanation are secondary to the 4D global constraints of the LS explanation in retrocausality. Price & Wharton address the issue of free will and “interventionalist causality” in retrocausality in Price, H., & Wharton, K.: Disentangling the Quantum World. Entropy 17, 7752-7767 (2015), http://arxiv.org/abs/1508.01140.
January 16, 2016 at 10:01 am | #3301 | Anonymous (Inactive)
Should we even be discussing the topic of ‘observers’ in the block universe?
A (classical) block universe is just spacetime filled with a certain locally conserved energy-momentum tensor (it is this local constraint, despite the global construction, that gives the illusion of deterministic evolution in SOME very specific systems). In this block universe, charges interact with the EM field, absorbing and radiating energy-momentum, and it is only reasonable to assume that, when sampled over the block universe, certain statistical regularities emerge. It is further not unthinkable that these regularities are described by QM, which can be shown to be consistent with this alleged interpretation as a statistical description of ensembles of ‘segments’ cut from a block universe constrained by local energy-momentum conservation.
Now, in some remote part of the block universe, an ‘observer’ is said to reside, along with his laboratory, containing polarimeters, detectors and sources (all of which are four-dimensional energy-momentum distributions in the block). Should we expect that the said statistical regularities prevailing in the rest of the universe should not apply to the laboratory of the observer? Note that the observer need only pre-program an experiment before switching jobs.
An explicit construction of such a block universe, consistent with the statistical predictions of QM, can be found in the following link:
https://drive.google.com/file/d/0B94tMBt9zxIHcEEzUm9UNkI2ZGs/view?usp=sharing

January 16, 2016 at 1:56 pm | #3303 | Quantum Speculations (Participant)
Thanks Mark and Ken for your further comments. I have not thought much about retrocausal models. I would like to know whether quantum randomness is also inherent in these models, and how these models account for the randomness of measurement results. Best, Shan
January 16, 2016 at 2:18 pm | #3304 | Quantum Speculations (Participant)
Thanks Arthur for your very detailed and helpful comments. I have learned much from these comments. I will improve my paper according to them. Best, Shan
January 16, 2016 at 4:32 pm | #3309 | Mark Stuckey (Participant)
Yehonatan, I read your paper. Is it published someplace, so we can reference it? Your approach shares many of the values found in the Relational Blockworld. See https://ijqf.org/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf.
January 16, 2016 at 7:42 pm | #3310 | Anonymous (Inactive)
Hi Mark,
My paper is currently (and for the past year or so…) under review, but there is an almost identical arXiv version: http://arxiv.org/abs/1201.5281

I’m still struggling with your paper. I’m new to the “quantum foundations business” but, to be honest, I think you are complicating things… I wasn’t trying to explain QM. I was only trying to solve the classical self-force problem, and discovered that a PROPER solution requires the existence of a complementary FUNDAMENTAL statistical theory, viz., one not derivable from electrodynamics. That theory, I argue, is likely to be QM. Note that I only need to demonstrate the consistency of this conjecture; I do not need to, nor can anyone, derive QM from (well defined…) electrodynamics. So the solution to the conceptual basis of QM is only a minor corollary of my approach. The real gain is in new physics of individual systems, described by that (well defined…) electrodynamics, encompassing virtually all branches of physics.
I will need more time to fully understand your paper, but as far as “divisions into camps” go, I’m indeed on your global (4D) side 🙂
January 18, 2016 at 1:44 am | #3313 | Matthew Leifer (Member)
Shan,
We have had so many discussions about this now that I am beginning to think that you are being deliberately obtuse, and trying to court controversy where there ought to be none.
Let me first note that not all ontological models solve the measurement problem, psi-epistemic or not. Some do and some don’t. Since Bohmian mechanics can be fit into the ontological models framework (more on this later), it is clear that some of them do. Since we are interested in investigating questions that are somewhat orthogonal to the measurement problem, such as the reality of the quantum state, we do in fact want to keep some models that do not solve the measurement problem within the framework, such as an account of the orthodox Dirac-von Neumann interpretation with its explicit collapse upon the undefined primitive of measurement.
Note that we (or at least those of us who are sufficiently careful) do not argue that collapse should be just an updating of information in the ontological models framework. It clearly cannot be. With sequential measurements, it is easy to prove, using a sequence of spin measurements, that the ontic state must get disturbed by the measurement. Due to Bell’s theorem, we cannot say that the remote collapse of Bob’s system, caused by Alice’s measurement on a system that is entangled with it, is a pure information update either. Chris Fuchs and other neo-Copenhagenists would make such an argument, but we don’t. It does not appear in my review article and it certainly does not appear in Rob Spekkens’ toy theory paper. Having the collapses be pure information updates is not required to solve the measurement problem in any case.
What is required to solve the measurement problem in the ontological models framework? No more and no less than in any other realist interpretation. Namely, in situations like Schroedinger’s cat, where there is a macroscopic superposition of distinct states like an alive and a dead cat, we require that the ontic state occupied by the system is one in which the cat is either definitely alive or definitely dead. This certainly does not entail that all observables need to have pre-existing values. For example, a preferred observable, such as position as in Bohmian mechanics, would do the job just fine.
Further, even if we are dealing with an ontological model in which every observable does have a definite value, there is nothing in the framework that requires these values to be noncontextual (contra Arthur Fine, additivity does not imply this). In a careful definition of the framework (such as Rob Spekkens’ contextuality paper http://arxiv.org/abs/quant-ph/0406166) we see that response functions are associated to the outcomes of measurement procedures (descriptions of what to do in the lab to perform the measurement). They are not associated to observables, operators, projection operators, basis states, or anything of that type. Thus, there is nothing preventing two different measurement procedures from being represented by different sets of response functions, while those measurement procedures correspond to the same observable/projective measurement/basis/etc. You can have as much contextuality as you like.
Now, it is true that in the literature on psi-epistemic models, people often gloss over this and do in fact assign response functions to bases (although they usually do allow the same vector to have a different response function in different bases). This is OK because you are proving results that would have to be true in all of the different contextual representations, and it simplifies matters to focus on just one representation. You can imagine, in the experiments considered, that we are only looking at one way of preparing each pure state and one way of measuring each basis. Now, strictly speaking, we should allow for full contextuality in these theorems. If you look in my review article, you will see that I went into explicit detail about how to do so (http://quanta.ws/ojs/index.php/quanta/article/view/22). There is no problem with this.
Finally, you seem to think that models like Bohmian mechanics cannot be fit into the ontological models framework because they are deterministic. Of course this is wrong. Although ontological models need not be deterministic, they can be, simply by allowing some of the probabilities to take the values 0 and 1. Bohmian mechanics would not actually be fully outcome-deterministic when written as an ontological model, because when you measure an observable other than position, the outcome depends on the ontic state of the measurement device, which is not included in our description of the ontic state of the system. Thus, you would get a probabilistic answer from averaging over the possible states of the measurement device for a given measurement procedure.
That people have these misconceptions about the ontological models framework continues to surprise me. The framework is essentially the exact same framework that Bell used to prove Bell’s theorem, with a few bits of terminology added (such as system, preparation, measurement, etc.). But pretty much every valid criticism against the ontological models framework could be levelled against Bell’s work instead, so if you think that the setup of Bell’s theorem is valid and interesting, then you should have the same opinion of ontological models in general. If not, you are very confused.
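As a concrete illustration of the point that deterministic theories sit inside the framework with 0/1 response functions, here is a sketch (not taken from the discussion itself) of Bell's well-known deterministic hidden-variable model for a single qubit, checked by Monte Carlo against the Born rule. The construction follows Bell's 1966 review: the ontic state supplements the Bloch vector m with a unit vector lambda drawn uniformly from the sphere, and a spin measurement along axis n deterministically returns +1 exactly when (m + lambda) . n >= 0. The function names below are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Uniform points on the unit sphere (normalized Gaussian trick)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def bell_model_outcomes(m, n_axis, num_samples):
    """Deterministic outcome: +1 iff (m + lambda) . n >= 0, lambda the ontic variable."""
    lam = random_unit_vectors(num_samples)
    return np.where((lam + m) @ n_axis >= 0.0, 1, -1)

# Pure state with Bloch vector m; measurement axis n at 60 degrees to m.
m = np.array([0.0, 0.0, 1.0])
theta = np.pi / 3
n_axis = np.array([np.sin(theta), 0.0, np.cos(theta)])

outcomes = bell_model_outcomes(m, n_axis, 200_000)
freq_plus = np.mean(outcomes == 1)
born_plus = (1 + m @ n_axis) / 2  # Born rule: (1 + cos theta)/2 = cos^2(theta/2)

print(f"model frequency of +1: {freq_plus:.4f}, Born rule: {born_plus:.4f}")
```

Averaged over lambda, the model reproduces P(+1) = (1 + m.n)/2, the Born probability for a measurement at angle theta to the state, even though each individual run is fully determined by the ontic state.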
January 18, 2016 at 10:05 am | #3314 | Quantum Speculations (Participant)
Hi Matt,
Thanks for your interesting comments, some of which I basically agree with.
But I think you misunderstood my paper. The paper does not aim to show that the (realist) psi-epistemic view cannot solve the measurement problem. Rather, it only shows that the psi-epistemic view does not provide a straightforward resolution or a dissolution of the measurement problem, as I clearly stated in the abstract of the paper. This result removes the main motivation to assume the psi-epistemic view.
In fact, the writing of this paper was mainly motivated by your words in your review paper. You said, “A straightforward resolution of the collapse of the wavefunction, the measurement problem, Schrodinger’s cat and friends is one of the main advantages of psi-epistemic interpretations…. The measurement problem is not so much resolved by psi-epistemic interpretations as it is dissolved by them. It is revealed as a pseudo-problem that we were wrong to have placed so much emphasis on in the first place.” It seems that the view expressed in your comments here is different from the view in your review paper.

Here are my concrete replies to some of your comments.
“Namely, in situations like Schroedinger’s cat, where there is a macroscopic superposition of distinct states like an alive and a dead cat, we require that the ontic state occupied by the system is one in which the cat is either definitely alive or definitely dead. This certainly does not entail that all observables need to have preexisting values. For example, a preferred observable, such as position as in Bohmian mechanics, would do the job just fine.”
*** I agree. But my point is that if not all observables have pre-existing values, then the psi-epistemic view will not provide a straightforward resolution or a dissolution of the measurement problem. In other words, it will have no advantage in solving the measurement problem compared with the psi-ontic view.
“Finally, you seem to think that models like Bohmian mechanics cannot be fit into the ontological models framework because they are deterministic. Of course this is wrong. Although ontological models need not be deterministic, they can be just by allowing some of the probabilities to have value 0 and 1. Bohmian mechanics would not actually be fully outcome deterministic when written as an ontological model, because when you measure an observable other than position, the outcome depends on the ontic state of the measurement device, which is not included in our description of the ontic state of the system. Thus, you would get a probabilistic answer from averaging over the possible states of the measurement device for a given measurement procedure.”
*** I cannot agree with you. First, when one measures position, the outcome is completely determined by the ontic state of the system, and the theory is deterministic. In particular, the probability of a measurement outcome is determined not by the ontic state of the system but by the initial position probability distribution. In some models without the quantum equilibrium hypothesis, the probability of the measurement outcome even differs from the Born probability. This is inconsistent with the second assumption of the ontological models framework. Next, even when one measures an observable other than position, the outcome’s dependence on the ontic state of the measurement device only reflects the contextuality of the measured property. Moreover, even if there is a probabilistic answer in this case, the probability is not the Born probability either.
“That people have these misconceptions about the ontological models framework continues to surprise me. The framework is essentially the exact same framework that Bell used to prove Bell’s theorem, with a few bits of terminology added (such as system, preparation, measurement, etc.). But pretty much every valid criticism against the ontological models framework could be levelled against Bell’s work instead, so if you think that the setup of Bell’s theorem is valid and interesting, then you should have the same opinion of ontological models in general.”
*** I think the ontological models framework is not the same framework that Bell used to prove his theorem. In the former, the ontic state of a system determines the probabilities of different outcomes for a projective measurement on the system (this is the second assumption of the framework). In the latter, the ontic state of a system determines the outcome of each projective measurement on the system (an assumption consistent with Bohm’s theory). Moreover, the proof of Bell’s theorem does not require the second assumption of the ontological models framework.
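Schematically, the contrast being drawn here can be put side by side (standard notation, not taken from the posts themselves); Leifer's reply, in this notation, is that the deterministic setting is just the special case $\xi_M \in \{0,1\}$:

```latex
% Ontological models framework: the ontic state fixes outcome probabilities
\Pr(k \mid M, \lambda) \;=\; \xi_M(k \mid \lambda) \;\in\; [0,1].

% Bell's deterministic setting: the ontic state fixes the outcomes themselves
A(a,\lambda),\, B(b,\lambda) \;\in\; \{-1,+1\},
\qquad
E(a,b) \;=\; \int A(a,\lambda)\, B(b,\lambda)\, \rho(\lambda)\, d\lambda .
```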
To sum up, maybe the view I objected to in my paper is a bit of a straw man, as Ken said. But the paper does emphasize a few unsolved problems of the psi-epistemic view (which may not be clearly recognized by some people who believe in the view), and it shows that the psi-epistemic view does not provide a straightforward resolution or a dissolution of the measurement problem.
January 18, 2016 at 5:59 pm | #3317 | Mark Stuckey (Participant)
Thnx for the reply, Yehonatan. You don’t need to concern yourself with the details of our approach; as you noted, it doesn’t bear directly on your specific motives. I just wanted you to be aware of the fact that your 4D global perspective has company 🙂
January 18, 2016 at 6:53 pm | #3318 | Jiri Soucek (Participant)
Dear Shan,
I am surprised that you consider the psi-epistemic view together with the idea that all observables have pre-existing values.
I understand the psi-ontic view as the standard assumption that each pure state represents a possible state of an individual system. For me, the psi-epistemic view means that not all pure states represent possible states of an individual system; possibly, no pure state represents an individual state.
In this situation the assumption that each observable has a pre-existing value is not a good idea.
The set of individual states can be quite small. One can expect that two individual states must be orthogonal, since a system in a certain individual state cannot be found in another individual state. This implies that the set of individual states should be a particular orthogonal basis.
These ideas are the basis of the modified quantum mechanics introduced in the attached paper. In this theory the collapse problem can be solved along the lines proposed in your paper, as an update procedure.
I would like only to remark that your argument for the impossibility of the psi-epistemic view, based on the Kochen-Specker theorem, is not clear.

Jiri Soucek
January 18, 2016 at 10:00 pm | #3320 | Jiri Soucek (Participant)
Dear Matthew,
I have some comments:
1) The very idea of the ontological models is based on the assumption that there exists only one probability theory, the standard Kolmogorov theory. But since 2008 there have existed two probability theories: the Kolmogorov (linear) probability theory and the new quadratic probability theory published in arXiv:1008.0295v2, where the probability distribution is a quadratic function f(x,y) of two elementary states x and y. The choice of the Kolmogorov probability theory in the definition of the ontological models is therefore unjustified.
2) I think that QM cannot be modelled in the linear probability theory, since this probability theory does not allow reversible time evolution (see the first attached paper).
3) The ontological model based on the linear probability theory seems to me completely unnatural. On the other hand, using the quadratic probability theory there is a completely natural model for “real” QM (i.e., QM based on real numbers instead of complex numbers). In fact, the quadratic probability theory with reversible evolution is almost identical to “real” QM. This is described in section 2 of the first attached paper. These results can be generalized, to some extent, to complex QM (see section 3 of the first attached paper).
4) You have mentioned the “Dirac-von Neumann interpretation with its explicit collapse upon the undefined primitive of measurement”. I agree with you that the “undefined primitive of measurement” is the basic problem of QM. I think that the first step in the solution of the measurement problem is to give an axiomatic formulation of QM which does not contain measurement among its axioms. Exactly this is done in the modified QM (see the second attached paper).
Jiri Soucek
January 19, 2016 at 8:52 am #3323
Matthew Leifer (Member)

Shan,
“In fact, the writing of this paper is mainly motivated by your words in your review paper. You said, ‘A straightforward resolution of the collapse of the wavefunction, the measurement problem, Schrodinger’s cat and friends is one of the main advantages of psi-epistemic interpretations…. The measurement problem is not so much resolved by psi-epistemic interpretations as it is dissolved by them. It is revealed as a pseudo-problem that we were wrong to have placed so much emphasis on in the first place.’ It seems that your view expressed in your comments here is different from your view in your review paper.”

My point here is that in order to even formulate the measurement problem, you have to assume that the quantum state is real (and indeed that it is the only thing that is real). Only then can you say that a macroscopic superposition represents a different physical state of affairs than a definite macroscopic state. In this sense, the problem is dissolved, because you cannot even formulate it.
On the other hand, one still has to deal with the problem of determining the ontology of the theory, and then explaining how that ontology can account for the definite classical states that we seem to experience. This is what I was referring to as the “measurement problem” in my previous post, but I now realize that I shouldn’t have called it that. I view this problem as more general than the measurement problem, as it can be formulated in a more interpretation-neutral way, as opposed to the measurement problem, which is usually formulated from an (often implicitly) psi-ontic view. I agree that psi-epistemic models do not immediately solve this more general problem, so maybe we are just arguing about terminology here.
“I agree. But my point is that if not all observables have preexisting values, then the psi-epistemic view will not provide a straightforward resolution or a dissolution of the measurement problem. In other words, it will have no advantage in solving the measurement problem compared with the psi-ontic view.”
I agree that a theory like Bohmian mechanics, for example, solves the ontology/definite classical appearances problem just as well as a psi-epistemic theory might do. But this was never the only or main reason for considering psi-epistemic theories. Hell, if all we care about is solving the measurement problem then we have at least half a dozen different ways of doing that now, so we might as well just stop doing foundations. Instead, we are interested in whether all of the huge variety of psi-epistemic explanations pointed out in Spekkens’ toy theory paper have merit. It is this vast array of seemingly natural explanations that makes the psi-epistemic view appealing, not just the potential explanation of one overly studied issue in the foundations of quantum theory.
“*** I cannot agree with you. First, when one measures position, the outcome is completely determined by the ontic state of the system, and the theory is deterministic. In particular, the probability of the measurement outcome is not determined by the ontic state of the system, but by the initial position probability distribution. In some models without the quantum equilibrium hypothesis, the probability of the measurement outcome is even different from the Born probability. This is inconsistent with the second assumption of the ontological models framework. Next, even when one measures an observable other than position, the outcome depends on the ontic state of the measurement device, but this only reflects the contextuality of the measured property. Moreover, even if there is a probabilistic answer in this case, the probability is not the Born probability either.”
OK, I was considering the case with a realistic pointer variable that has a narrow Gaussian quantum state or suchlike, which would induce probabilities, but that is an inexact measurement. If you use delta-function pointers then there would be no randomness, but still contextuality. Anyway, the point that the ontological models framework can handle 0/1 probabilities and contextuality perfectly well still holds.
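A toy numerical illustration of how 0/1 probabilities sit inside the framework (my own minimal sketch, not any specific model from the literature): the ontic state is a pair (psi, u) with u a hidden variable uniform on [0, 1], the measurement outcome is a deterministic 0/1 function of the ontic state, and the Born statistics emerge only from the distribution over u.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic ontological model for a qubit measured in the
# {|0>, |1>} basis.  Ontic state: lambda = (psi, u), u uniform on [0, 1].
# The response function assigns probability 0 or 1 to each outcome once
# lambda is fixed; randomness lives entirely in the epistemic state over u.
def outcome(psi, u):
    return 0 if u < abs(psi[0]) ** 2 else 1

psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # |<0|psi>|^2 = 0.3
samples = [outcome(psi, u) for u in rng.uniform(size=100_000)]
print(np.mean(np.array(samples) == 0))         # ~0.3, the Born probability
```

Given lambda, every probability in this model is 0 or 1, yet averaging over the hidden u reproduces the quantum statistics, which is exactly the sense in which the framework accommodates deterministic response functions.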
“*** I think the ontological models framework is not the same framework that Bell used to prove his theorem. In the former, the ontic state of a system determines the probability of different outcomes for a projective measurement on the system (which is the second assumption of the framework). While in the latter, the ontic state of a system determines the outcome of each projective measurement on the system (this assumption is consistent with Bohm’s theory). Moreover, the proof of Bell’s theorem does not require the second assumption of the ontological models framework.”
Bell does not assume that the outcome of each measurement is determined. He may have done so in 1964 (people are still arguing about whether he did), but by 1973 when he was using the CHSH version, probabilistic predictions are definitely allowed. Mathematically, you can always replace a probabilistic ontic state with a convex combination of deterministic ones in Bell’s theorem, but physically you can have nondeterministic outcomes. In any case, even if this criticism were valid, it would only make the ontological models framework *more* general than Bell’s framework, so clearly any criticism of ontological models could still be levelled at Bell.
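The mathematical point that probabilistic ontic states are convex combinations of deterministic ones can be checked by brute force for CHSH: enumerating every deterministic local assignment of outcomes shows the CHSH value never exceeds 2, so mixtures of them cannot exceed 2 either, while quantum mechanics reaches 2*sqrt(2). A minimal sketch:

```python
from itertools import product
import math

# CHSH: each side has two settings; a deterministic local strategy assigns
# an outcome in {+1, -1} to each setting.  The correlator factorizes as
# E(a, b) = A(a) * B(b), so we can enumerate all 16 strategies directly.
best = max(
    abs(A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1)
    for A0, A1, B0, B1 in product([+1, -1], repeat=4)
)
print(best)               # 2: the bound for any local hidden-variable model

# Quantum mechanics violates this, up to Tsirelson's bound:
print(2 * math.sqrt(2))   # ~2.828
```

Since any probabilistic local model is a convex mixture of these 16 deterministic strategies, the bound of 2 applies to it as well, which is why allowing nondeterministic outcomes does not weaken Bell's theorem.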
January 19, 2016 at 8:55 pm #3324
Ken Wharton (Member)

Hi Shan,
Turning back to your question in #3303, concerning randomness:
A central point of the paper I posted in this forum is that all of the probabilities in retrocausal quantum models can be classical/conventional in the sense that they all result from a lack of knowledge. But there are two different parts of this. First is the knowledge about the future measurement device/settings. If this is unknown, you need a huge configuration space to keep track of all the possibilities. Once you decide/learn about the settings, you mentally update/reduce this space to something much more manageable. The benefit of retrocausal models is that you no longer have to be consistent with counterfactual histories in which you measure something different. Once this first round of updating occurs, you no longer need these huge configuration spaces.
But even after this initial updating occurs, you still don’t know the outcome of the experiment, so you still have to assign probabilities to those outcomes. (Upon learning which outcome occurs, there will be further updating, but I think that’s not what you’re asking about.) As before, this probability distribution arises because there’s some hidden variables that you don’t know. (Without HVs you can’t have any retrocausality, because there would be nothing for the future to cause.) I take your question about randomness to be a question about the ultimate reason that we don’t know the HVs.
At this point, depending on the specific retrocausal model, you might get a different answer to your question. Some models might have the eventual outcome entirely dependent on the *initial* hidden variables, back at the prior preparation. (Like the source of uncertainty in Bohmian mechanics.) In these cases, the relevant HVs wouldn’t be “random” so much as “unmeasured” (although in my book these are essentially the same thing). Still, the *reason* that you don’t know the HVs would presumably be tied into issues of measurement precision and the uncertainty principle.
You can also put some of the relevant HVs in the future measurement device itself, but not all of them, because then there’s no room for retrocausality.
In the models I prefer, knowledge of the initial HVs is not sufficient: there are also additional relevant HVs in the entire history between preparation and measurement. In this case, the reason you don’t know the HVs is that 1) you’re not making any measurements at all in that intermediate spacetime region, and 2) the underlying rules are not dynamically deterministic (it’s not a Cauchy problem). I imagine that you might think such models were more “random” than others, but to me there’s not much difference: in all cases there are some unknown HVs that are the reason for imperfect predictions. The difference comes down to the reason why the relevant HVs are unknown, and this reason could certainly vary from model to model.
Best,
Ken
January 20, 2016 at 7:49 am #3325
Matthew Leifer (Member)

Jiri Soucek,
I am somewhat sympathetic to the view that quantum theory should be understood in terms of some sort of non-Kolmogorovian probability theory. However, there are a couple of problems with this.
First, it is not just since 2008 that this option has existed. There are currently a few dozen competing generalized probability theories that could account for quantum theory. To name just a few, we could allow negative probabilities, complex probabilities, or noncommutative operator-valued probabilities, and there are several ways of setting up each of these theories. We can add your quadratic probabilities to the list. Personally, I like noncommutative probability theory, but there is really no knock-down argument as to why we should prefer one of these theories over another. The whole idea seems vastly underdetermined.
This, however, is a minor worry compared to the second problem, which is the following. Usually, exotic probability theories are cashed out in operational terms. Briefly, such theories assign ordinary classical probabilities to every observable property. Things that aren’t directly observable may be assigned an exotic probability, but we don’t worry about the meaning of this because we can just view them as intermediate stages in the calculation of an observably meaningful probability.
The problem with this is that, if we are happy with operationalism, then we don’t need to introduce exotic probabilities to dispel the quantum mysteries. We can just state that only observable quantities are meaningful and leave it at that. So it seems that the exotic probability theory, whilst maybe being mathematically attractive, is doing no actual interpretive work.
On the other hand, if we want to be realist then we want our theory to be fundamentally about some stuff that exists independently of observers, the behaviour of which accounts for what we see in experiments. We may have some uncertainty about the exact state of this stuff. It is difficult to see how an exotic probability theory could apply to this stuff. Whilst it may not be directly observable in experiments that we can perform today, it exists in the same sense as the properties of a classical system. Its precise state is presumably knowable in principle, at least logically if not physically, even though it is not actually known to us. So, all the arguments used to establish that we should use ordinary classical probability theory to reason in the face of uncertainty (e.g. Dutch book if you are a Bayesian) apply to it.
Now, I am not opposed to the idea that a realist account of why we should use an exotic probability theory might be possible. But such an account is never given. Instead, there is usually just a fallback to operationalism. If exotic probability is supposed to be a response to no-go theorems like Bell, on a par with ideas like simply admitting nonlocality, then a realist account is required. Otherwise, as I said previously, the move is doing no interpretational work.
January 20, 2016 at 1:13 pm #3326
Quantum Speculations (Participant)

Hi Matt,
Thanks again for your further comments, most of which I agree with. There is only one minor point on which I disagree. I still think the ontological models framework is more specific than Bell’s framework due to its second assumption. Even in the proof of the CHSH version of Bell’s theorem, although the ontic state λ is a random variable, it still determines the outcome of a projective measurement on the system, not the probability of different outcomes for a projective measurement on the system.
Best,
Shan

January 20, 2016 at 1:18 pm #3327
Quantum Speculations (Participant)

Hi Ken,
Thanks again for your very detailed reply! I learned a lot about retrocausal quantum models from it. I feel that such models seem more complex than Bohm’s theory. Maybe the reason is that there are no wave functions in them?
Best,
Shan

January 20, 2016 at 1:22 pm #3328
Quantum Speculations (Participant)

Hi Jiri,
Thanks for your comments. I think you misunderstood the main idea of my paper. It does not aim to argue for the impossibility of the psi-epistemic view based on the Kochen-Specker theorem. It only argues that the psi-epistemic view does not provide a straightforward resolution or a dissolution of the measurement problem, as I clearly stated in the abstract of the paper.
Best,
Shan

January 24, 2016 at 3:32 am #3335
Jiri Soucek (Participant)

Dear Matthew Leifer,
Thank you very much for your comments. They are very helpful for me. In general I agree with your remarks, but some points remain to be discussed.
In general, it is clear that QM must be an applied probability theory and it is clear that this probability theory cannot be the Kolmogorovian probability theory.
1. The first point is the concept of a probability theory. I think that this concept requires the following two properties.
(i) A probability theory excludes the use of the concept of an observable and of the concept of a measurement, so that general probability theories (for example, the so-called quantum probability theory) are not probability theories in this sense. (General probability theories are something like general statistical schemes.) A probability theory is only about events and their probabilities, and about probability distributions interpreted as states of a system.
(ii) A probability theory requires the concept of an individual state, i.e. the state of an individual system (at a given time instant). Thus each system must be associated with a (finite) set of its possible individual states. Then the state of an ensemble of systems is the density matrix defined on the set of individual states.
In the end there are only two “true” probability theories: the Kolmogorov theory and the quadratic probability theory. At least I think so.

2. The relation to operationalism. I agree with you, but not completely. QM and the modified QM are empirically equivalent (i.e. at the operational level they are equal), but they are different at the theoretical level. For example, in the modified QM the individual superposition principle is false, Bell’s theorem cannot be proved, and Bell nonlocality cannot be proved. So empirically equivalent theories may be theoretically very different. (See my paper in the topic “The absolute and relative truth in quantum theory …” in this conference.)
3. Let us assume that we want to be realist. First we must consider the concept of an individual state of a measuring system, since this is exactly what we observe in an experiment: the individual state of an individual measuring system.
What is the set of individual states? (The idea that all pure states are individual states is excluded by the possible non-unique decomposition of a state into pure states; see the attached paper.) Thus the set of individual states is a proper subset of the set of pure states. There are clear reasons why different individual states must be orthogonal. For example: if a system is in an individual state, it cannot be found in another individual state.
Assuming this, we obtain that the maximal set of individual states must be identical to a certain orthogonal basis of the Hilbert space of the system.
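The orthogonality argument can be checked against the Born rule numerically. A minimal sketch with illustrative state vectors: demanding that a system in one individual state is never found in another (probability exactly zero) forces the inner product to vanish, so a maximal set of such states is an orthogonal basis.

```python
import numpy as np

# Born rule: the probability of finding |b> when the system was prepared
# in |a> is |<a|b>|^2.  If individual states must never be confused with
# one another, this probability must be exactly 0, i.e. <a|b> = 0.
prob = lambda x, y: abs(np.vdot(x, y)) ** 2

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])               # orthogonal to a
c = np.array([1.0, 1.0]) / np.sqrt(2)  # not orthogonal to a

print(prob(a, b))   # 0.0 -> a and b can coexist as individual states
print(prob(a, c))   # 0.5 -> a system in |a> can be "found" in |c>,
                    #        so a and c cannot both be individual states
```

In two dimensions the pair {a, b} is already maximal, matching the claim that the individual states form an orthogonal basis.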
But this is exactly the starting point of the modified QM: we assume that each system is associated with a particular basis containing all its individual states.
The idea that each system is associated with a particular orthogonal basis is completely compatible with the concept of a probability theory: this orthogonal basis is the set of all individual states.

As a conclusion, I would like to state that there is interpretational work inside the modified QM and also in the probability approach to QM. I think that this matter should be discussed in more detail later.
Yours, Jiri Soucek
