Forum Replies Created
January 22, 2015 at 10:58 pm #1884
I agree with your first paragraph.
“Local determinism” (LD) indeed encompasses everything you need for the 1964 Bell’s theorem [minimal]. I was attempting to reserve DL just for the formulation of locality once determinism is already in place. I think it makes phrases like “any reasonable “localist” notion that manifestly reduces to DL in the deterministic case” a bit easier to get your head round than the (formally equivalent) phrase using LD. But it’s not that important.
I guess 7 out of 9 isn’t bad. Perhaps I’ll have a go at writing out an actual statement and proof of the EPR+Bell theorem some time – I’m hopeful it would be a bit less painful than you imagine (the difference from your appendix treatment would probably be that I’d use a different choice of primitive concepts).
Matt
January 22, 2015 at 8:41 pm #1882
I can’t help wondering if a few tweaks to Howard’s nomenclature might help bridge much of the remaining divide. How about something like this:
Deterministic Locality (DL): A somewhat clunky name for Travis’ equation 3.
A 1964 Bell’s theorem: any theorem of the form “There exist quantum phenomena for which there is no theory satisfying [Localist Assumption] and determinism.” where [Localist Assumption] is any reasonable “localist” notion that manifestly reduces to DL in the deterministic case.
Perhaps the three most important flavours are:
The 1964 Bell’s theorem [minimal]: Take [Localist Assumption] = DL.
The 1964 Bell’s theorem [modern]: [Localist Assumption] = PI.
The 1964 Bell’s theorem [EPR]: [Localist Assumption] = EPR’s premises.
The EPR+Bell theorem: “There exist quantum phenomena for which there is no theory satisfying EPR’s premises” (Proof: Combine EPR’s argument for determinism with the 1964 Bell’s theorem [EPR].)
With this terminology in hand, I would hope that the following statements are fairly uncontroversial:
- When a typical “realist” says “Bell’s theorem”, they mean the 1976 Bell’s theorem and/or the EPR+Bell theorem.
- When a typical “operationalist” says “Bell’s theorem”, they mean a 1964 Bell’s theorem.
- Bell’s purpose in writing the 1964 paper was to establish the EPR+Bell theorem.
- Anybody should be able to reconstruct a rigorous version of the EPR+Bell theorem by reading the EPR paper and then Bell’s 1964 paper.
- Trying to reconstruct a rigorous version of the EPR+Bell theorem from Bell’s 1964 paper alone would, to put it mildly, require quite a lot of work.
- The EPR+Bell theorem is only of historical interest, having been superseded by the 1976 Bell’s theorem which serves essentially the same purpose but has both better-motivated premises and a more transparent derivation.
- The 1964 Bell’s theorem [modern] is what a modern physicist is most likely to take away from reading Bell’s 1964 paper in isolation, and is also the most “practically applicable” form of Bell’s theorem.
- The 1964 Bell’s theorem [minimal] is the sole theorem stated and proven in full rigour in Bell’s 1964 paper.
- Exactly which flavour of Bell’s 1964 theorem is the “official” one is a speculative or perhaps even meaningless question (but recall point 3).
In particular, I think the first two points give a fuller understanding of why the realists and operationalists tend to talk past each other – even if they agree to discuss only “the theorem proven in 1964”, one can mean the EPR+Bell theorem and the other a 1964 Bell’s theorem!
Matt
January 22, 2015 at 5:16 pm #1880
Of course nobody would have put it quite like I did in 1964. But the DAG is an attempt to formalise a fairly natural and long-standing way of thinking about causality (as evidence: two different formulations, based on functions and probabilities respectively, turn out to be completely equivalent), the basic ideas of which (e.g. correlations need causal explanation) were clearly in the minds of Einstein and Bell. So a somewhat more heuristic version of what I said would not have been completely beyond Bell’s reach in 1964.
I’m not claiming that any of this is an implicit assumption in Bell’s 1964 theorem, which I still largely agree with your formulation of. I was really just trying to show by counterexample that your four Bell quotes do not unambiguously mean PI (whereas once determinism is assumed they do easily translate into Travis’ equation 3). But it does also suggest an alternative to your theory (i.e. that Bell misunderstood the implications of PI) about why 1964 Bell thought that his remarks and quotations on locality were sufficient to, at least informally, capture the (most important?) assumptions needed to run the EPR argument for determinism.
Matt
January 21, 2015 at 11:23 pm #1871
I can’t deny that the “operationalist” in me jumps to the parameter independence conclusion when reading any of your four quotations. Indeed that is why I didn’t question your interpretation until I read Travis’ paper. But, outside of the deterministic case, that interpretation requires a certain style of thinking about causation in probabilistic theories that may not have been very common in 1964, and is still not universal today. I don’t think it’s fair to assume Bell would necessarily have thought in that way (especially given his general scepticism of operationalist thinking).
To give one example of an alternative interpretational path, the most fully developed framework for formal causal reasoning is the one described in Pearl’s book Causality, based on directed acyclic graphs. If we (rather naturally) interpret the requirement of the setting not influencing the remote outcome as being the absence of a directed path from the setting to the remote outcome, add in the usual free will assumption (the settings are parentless) and the fact that the setting does influence the local outcome, we are led to the DAG a -> A <- λ -> B <- b which implies local causality.
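To spell that last implication out (my notation, applying the standard causal Markov condition rather than anything in Pearl’s exact words):

```latex
% Causal Markov condition on the DAG a -> A <- λ -> B <- b:
% each variable is independent of its non-descendants given its parents.
% A's parents are (a, λ) and B's parents are (b, λ), so the joint factorizes as
P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda)
```

which is exactly the local causality (factorizability) condition.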
I think you agree that, rightly or wrongly, 1964 Bell took EPR to have already established the need for determinism. It therefore seems probable that 1964 Bell simply didn’t think much, if at all, about exactly what his definition of locality would be in a stochastic theory. If somebody had demanded that 1964 Bell explain exactly what the quotes i-iv mean in a stochastic theory, I find both of the following responses plausible:
1) “Well I guess it means the probability of the outcome, given any hidden variables, doesn’t depend on the remote setting.”
2) “Good question, let me think about it. [Disappears for a few days.] It means … [local causality]”.
Even if you judge one response vastly more likely than the other, do you not agree that at least some amount of speculation is involved?
Matt
January 21, 2015 at 8:11 pm #1866
I guess B’ was (and still is) usually what drives people to B, so that “refuting” B’ certainly undermines the case for B, which I think is what Einstein was getting at in your quote.
Of course I agree that the EPR paper contains a valid argument from their background assumptions + perfect correlations to determinism, and that Bell attempted to review that argument in his 1964 paper.
P.S.: I can’t resist putting on the record my strong objection to your parenthetical remark “Post-Bell — i.e., once it is established that locality is just false — one would no longer say that any EPR-ish argument actually proves that the completeness doctrine is false or that determinism is true.”. Whilst (translating “locality” into, say, “local causality”) this is certainly the right thing to say about any argument based on local causality, I personally find EPR-ish arguments based on “localistic” premises weaker than local causality to be among the most compelling reasons to reject the reality of the quantum state. But I know you don’t have much time for any contemporary use of “localistic” assumptions that doesn’t amount to local causality, so I’ll leave it there.
January 21, 2015 at 4:49 pm #1864
I agree that Bell was probably taking the Einstein quote to be the definition of locality, and that it is stronger than your equation (3), as it applies to any “real factual situations”, not just pre-determined measurement outcomes. However, to my mind the quote is not totally unambiguous in all cases (particularly when probabilities are involved), which still leaves us with “creative interpretation”.
Thanks for the comments on physicists A and B, I hadn’t read the autobiographical notes closely enough. You’re quite right that the fundamental distinction is between
A: q has a predetermined value
B: q does not have a predetermined value
which are self-evidently exhaustive options. However, the discussion of physicist B ends with a stronger statement
B’: The ψ-function is an exhaustive description of the real situation.
Einstein says that physicist B “will (or, at least, he may) state” B’. My physicist C was supposed to be somebody who believes in B but not B’, which shows that Einstein’s parenthetical proviso is essential.
I now see that Einstein is actually pretty careful to say that he is only ruling out B’: “B will have to give up his position that the ψ-function constitutes a complete description of a real factual situation.” There is no mention of B giving up his B-defining belief that q does not have a predetermined value.
Your interpretation of my C in terms of “tilting the balance” is fine. It is indeed the case that such a model, if locally causal, will not predict perfect correlations. But the autobiographical notes neither give an unambiguous general definition of local causality nor make any mention whatsoever of perfect correlations! Doesn’t it therefore make more sense to read Einstein as giving a perfectly clear and logically watertight refutation of B’ than as giving an uncharacteristically sloppy refutation of B that doesn’t so much as hint at a key premise (perfect correlation)?
Matt
January 20, 2015 at 10:48 pm #1862
This is a belated comment to say thanks for your thought-provoking paper.
In particular, your paper has changed my mind on one point: it is wrong to say, as Howard did, that the theorem Bell proved in 1964 uses [what is now often called] parameter independence. Bell’s locality assumption is more accurately captured by your equation 3, i.e. the 1964 locality assumption cannot even be stated without the determinism assumption.
Hence we can only speculate on how the 1964 Bell would have formally defined locality in a stochastic hidden variable theory. Since parameter independence and local causality both reduce to exactly your equation (3) in the deterministic case, the mathematically rigorous part of the 1964 paper does not distinguish between those two possibilities. Hence any arguments that pick out one of those will always involve “pretty creative interpretation”.
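To put the two candidates side by side (in the now-standard notation; this is my reconstruction, not a quotation of Travis’ equation 3):

```latex
% Parameter independence: the marginal for the local outcome A
% does not depend on the remote setting b (and symmetrically for B).
P(A \mid a, b, \lambda) = P(A \mid a, \lambda)
% Local causality: the joint distribution factorizes given λ.
P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda)
% Determinism: outcomes are functions of the local setting and λ,
A = A(a, \lambda), \qquad B = B(b, \lambda),
% in which case both conditions collapse to the same statement:
% the deterministic outcome functions take no remote-setting argument.
```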
You may not be surprised to hear that I don’t agree with everything you say. Perhaps my strongest disagreement is that I read the conclusion of the argument in Einstein’s autobiographical notes to be ψ-incompleteness (i.e. physicist B on your p7 is wrong) rather than determinism (i.e. physicist A is right). The root of this disagreement is your assumption that the A and B on page 7 are exhaustive possibilities. As far as I can see, Einstein doesn’t argue that they are exhaustive, and indeed they aren’t. For example, a third possibility is:
C – the real factual situation of the individual system allows us to predict measurement outcomes better than ψ, but still not with certainty.
Matt
October 31, 2014 at 7:47 pm #1128
How to make sense of the wave function?
I currently think the epistemic approach has the best hope of doing this. Even if one constructs a good psi-ontic interpretation, it seems unlikely to make sense of the wave function if that means providing natural explanations for its key properties (living in configuration space, collapse, etc).
Do we need new experimental observations to understand the wave function?
An experiment that falsified quantum theory would of course have profound effects on all foundational questions. In general I am sceptical that experimental results that are compatible with quantum theory will have big effects on how we interpret it.
Will the solution of this problem have deep implications for solving the measurement problem?
The epistemic approach dissolves the measurement problem. But the wider ‘reality problem’ remains open even if the wave function is epistemic.
October 31, 2014 at 5:00 pm #1120
I’ve agreed that more work is required to clarify exactly what Bob’s strategy is in the original scheme and whether or not this is equivalent to my “recap”. Thanks for your additional ideas on this matter.
But as I have tried to make clear, the operational argument does not depend in any way on what the protective measurement scheme tells Bob to do. The argument is based purely on what the scheme requires of Alice and Charlie. You have not disputed my characterization of what Alice and Charlie do, and so as far as I can see the operational argument stands. I think we are starting to go round in circles so perhaps we should agree to disagree for now. I’ll be happy to send you a draft of our paper whenever we finally write it.
Matt
October 31, 2014 at 4:46 pm #1119
I can tell you how I currently see this issue. As we’ve mentioned, the adiabatic scheme is present in the Gaussian toy theory. Recall that the ontology of that toy theory is just that of classical particle mechanics. The protection basically turns out to move the particle around in such a way that, for any observable allowed in the theory, the fraction of time the particle spends at a specific value is proportional to the probability of it being measured to have that value. Since the protective measurement scheme operates slowly, it ends up seeing the time average. So the role of the protection is again to provide access to the entire probability distributions, but instead of providing repeated random samples as in the Zeno case it provides a deterministic dynamics such that if we sample at a random time we see the distribution.
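Here is a minimal numerical sketch of that mechanism (my own toy construction, not the actual Gaussian-theory dynamics): a deterministic trajectory built so that its dwell-time distribution matches a target Gaussian, so sampling the position at a random time reproduces the measurement statistics.

```python
# Toy illustration (hypothetical construction): a particle whose position
# at time t is the inverse CDF of the phase t mod 1. The fraction of time
# it spends below x then equals the Gaussian CDF at x, so random-time
# samples are Gaussian-distributed even though the motion is deterministic.
from statistics import NormalDist
import random

nd = NormalDist(mu=0.0, sigma=1.0)

def position(t):
    """Deterministic position at time t (period-1 trajectory)."""
    phase = t % 1.0
    # Clip away the endpoints, where the inverse CDF diverges.
    phase = min(max(phase, 1e-9), 1 - 1e-9)
    return nd.inv_cdf(phase)

# Sample the trajectory at uniformly random times.
random.seed(0)
samples = [position(random.random()) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# mean is close to 0 and var close to 1, the target Gaussian's statistics.
```

A slow (time-averaging) probe of such a trajectory sees exactly the distribution, which is the role the protection plays in this picture.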
As it happens, the basic idea of this explanation was anticipated in one of the original papers. Two responses are provided (p4625); here’s my paraphrase of them and my thoughts on each:
- It is difficult to reconcile this idea with the existence of nodes in the wavefunction – does the particle need to go infinitely fast past these? This is an interesting point that I’d like to think about more. It doesn’t arise in the Gaussian theory because none of the wavefunctions have nodes. But we already know from things like Bell’s theorem that the toy theory ontology cannot be extended to all of quantum mechanics, so I don’t think this undermines the argument that phenomena present in the toy theory are compatible with the ψ-epistemic view.
- This is not how things work in Bohmian mechanics. That seems irrelevant since Bohmian mechanics is ψ-ontic.
Matt
October 31, 2014 at 1:45 am #1104
Getting inaccurate expectation values is something that may or may not happen to Bob. Charlie just sits there doing projective measurements in a fixed basis over and over again, right?
Matt
October 31, 2014 at 1:39 am #1101
As the other Matt has already mentioned, the existence of psi-epistemic models of protective measurement makes your argument difficult to swallow. But let me focus here on two more specific questions:
1) Why couldn’t somebody also run your argument using the tomography-of-protector then projective measurement-of-system scheme?
2) Are you saying the same protection can apply to two non-orthogonal states? Can you give an example of how that would work?
Matt
October 30, 2014 at 6:16 pm #1082
The best toy model for thinking about weak measurements is the Gaussian theory, because then you already have continuous variables to act as your pointer, the pointer can be prepared in a Gaussian state, and the “von-Neumann measurement” interaction is present in the theory.
A nice example for imaginary weak values is to prepare the system in a Gaussian centred at the origin, do a weak measurement of position, and then post-select on momentum p. The weak value goes like i p. If you look at this in the toy theory, there’s a natural explanation for why that value manifests itself in the way it does.
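For the record, the back-of-envelope calculation behind that “goes like i p” claim (my notation, with ħ = 1; the overall sign and constant depend on the Fourier and width conventions used):

```latex
% Weak value of position for |\psi\rangle a Gaussian of width \sigma centred
% at the origin, post-selected on momentum p:
\langle p \mid \psi \rangle \propto e^{-\sigma^2 p^2}, \qquad
\langle p \mid \hat{x} \mid \psi \rangle
  = i\,\partial_p \langle p \mid \psi \rangle
  \propto -2 i \sigma^2 p\, e^{-\sigma^2 p^2}
% Hence the weak value is purely imaginary and linear in p:
x_w = \frac{\langle p \mid \hat{x} \mid \psi \rangle}{\langle p \mid \psi \rangle}
    \propto i\, p
```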
Matt
October 30, 2014 at 4:22 pm #1080
I’m afraid I don’t quite get your question. Which contexts are you talking about?
Matt
October 30, 2014 at 4:20 pm #1079
We definitely need to think more about which schemes are or are not equivalent to the original ones. (There is also a question of what equivalence means exactly – for example if one scheme requires classical post-processing of the data whilst another does the same processing “as it goes along”, does that necessarily mean they are not equivalent in the sense that matters?)
But I still don’t see how this affects our operational argument, which is independent of the details of Bob’s strategy – we only need to be right about what Charlie (the protector) does.
Matt
October 30, 2014 at 4:12 pm #1078
What we do or don’t know has no bearing on which POVMs exist. Of course it may affect our choice of POVM – if we already know what basis the system was prepared in, we can measure it in that basis and determine the correct state. I don’t think anybody would argue that this establishes the reality of the wave-function. The claim is that, because of the limits on which POVMs exist, when you put Bob and Charlie into a black box their actions must amount to exactly the same measurement on the initial system (a projective measurement in the protected basis).
Matt
October 30, 2014 at 2:03 am #1052
I’m also done for today, but I’ll keep an eye on this tomorrow in case there are further thoughts.
Thanks to everyone for the stimulating responses that I think will, in the true spirit of a workshop, lead to improvements in the paper whenever it finally appears!
Matt
October 30, 2014 at 1:50 am #1048
The claims you refer to apply only when one considers the totality of Bob and Charlie’s actions as a measurement procedure on the system from Alice. (Imagine putting Bob and Charlie in a huge black box, that has an input for the quantum system and a classical output of Bob’s estimate of the state.)
Since an arbitrary sequence of quantum operations that takes in a quantum system and gives out a classical outcome can always be represented as a POVM, the facts about POVMs that I used are applicable regardless of how elaborate the implementation (the inside of the black box) may be.
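As a concrete numerical check of that POVM fact (an illustrative sketch with a randomly generated channel, not the protective-measurement scheme itself): any channel followed by a projective measurement yields elements that are positive and sum to the identity, i.e. a valid POVM on the input system.

```python
# Sketch (hypothetical random example): a quantum channel followed by a
# projective measurement, viewed from outside the "black box", is a POVM.
import numpy as np

rng = np.random.default_rng(0)
d, e = 2, 3  # system and environment dimensions (arbitrary for this sketch)

# Build a random channel: slice a random isometry into Kraus operators K_i,
# which automatically satisfy sum_i K_i^dag K_i = I.
G = rng.normal(size=(d * e, d)) + 1j * rng.normal(size=(d * e, d))
V, _ = np.linalg.qr(G)  # orthonormal columns: V^dag V = I_d
kraus = [V[i * d:(i + 1) * d, :] for i in range(e)]

# Follow the channel with a projective measurement {P_k} in the
# computational basis. The overall effect on the input system is the
# POVM with elements E_k = sum_i K_i^dag P_k K_i.
projectors = [np.outer(np.eye(d)[k], np.eye(d)[k]) for k in range(d)]
povm = [sum(K.conj().T @ P @ K for K in kraus) for P in projectors]

# POVM sanity checks: completeness and positive semidefiniteness.
complete = np.allclose(sum(povm), np.eye(d))
positive = all(np.linalg.eigvalsh(E).min() > -1e-10 for E in povm)
```

However elaborate the inside of the box, composing more operations just changes the Kraus decomposition; the outcome statistics are still those of some POVM.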
Matt
October 30, 2014 at 1:42 am #1047
You might be interested to know that your example is pretty much exactly what protective measurement amounts to when carried out within the Bartlett, Rudolph, Spekkens model.
Matt
October 30, 2014 at 1:30 am #1044
We should be able to reach agreement at least on this narrow point: Thinking of the protection-by-measurement (aka Zeno) scheme, does the protection amount to repeated applications of the channel given by eq. (1) in my notes?
(How this compares to the protection in the Hamiltonian-based scheme is probably a question for another day. If it provides even more, that would only strengthen my argument, since I’m saying that the information Bob obtains from Charlie’s repeated application of (1) is already sufficient to undermine the argument to the reality of the wave-function.)
Matt
October 30, 2014 at 1:11 am #1041
Thinking on the spot now, is there actually any difference between a small time-slice of a von-Neumann type strong measurement and a weak measurement? Weak measurements are normally obtained by reducing the interaction strength, but wouldn’t interacting for a shorter time amount to exactly the same thing?
If there is no difference, then my “recap” was just a different way of talking about the original scheme rather than a fundamentally different one.
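The hunch can be made precise for the idealised case (my notation, assuming a time-independent coupling): the von Neumann measurement unitary depends only on the product of the coupling strength and the interaction time.

```latex
% von Neumann coupling between system observable \hat{x} and pointer
% momentum \hat{p}, with strength g acting for time t:
U(g, t) = \exp\!\left(-i\, g\, t\; \hat{x} \otimes \hat{p}\right)
% Only the product g t appears, so weakening the coupling and shortening
% the interaction time are interchangeable:
U(g, t) = U(g/2,\; 2t)
```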
Matt
October 30, 2014 at 12:52 am #1034
Having returned to one of the original papers, I can see that you’re right that my “recap” of protective measurement does not quite agree with the original scheme, in which the “protecting” measurements are done during Bob’s measurement rather than only between them. Perhaps I heard about this scheme elsewhere and somehow I confused it with the original. My apologies for the confusion. Section 4 probably needs some revision in order to provide an analogy to the original scheme.
All is not lost, however. The argument in Section 3 only relies on the resources available to Bob (1 system prepared in the state + the protecting channel), which still match the initial scheme.
Furthermore, as the other Matt has already alluded to, a Section 4-style argument is actually easier and more complete in the case of the protection-by-time-evolution scheme. If the Hamiltonian is that of the simple harmonic oscillator, and the state is the ground state, then the entire protective measurement scheme can be carried out within Gaussian quantum mechanics, which admits a simple ψ-epistemic interpretation.
Matt
October 30, 2014 at 12:24 am #1025
Thanks for your comment. Just to be clear about something I didn’t make explicit in the notes, I’m certainly not trying to argue that protective measurement is incompatible with the reality of the wave-function, and indeed in interpretations in which the wave-function is (always or sometimes) real it may well be a perfectly good method for determining it. In some interpretations, and I take your point to be that yours is one such, the measurement could even be the thing that ‘brings about’ the reality of the wave-function.
The claim is simply that protective measurement itself (outside of a specific interpretation) does not provide support for the notion that a correct understanding of quantum mechanics must require the wave-function to be real.
Matt
October 29, 2014 at 9:35 pm #1014
Thanks, I think that helps. It’s just occurred to me that a similar ‘coincidence of ontic and epistemic’ actually occurs in Spekkens’ toy theory: the pure states and pure effects are the same set (just as in quantum theory), and yet there is a fact of the matter about whether a given system will result in a given pure effect (or if you like, whether the system “has that property”) even though there isn’t a fact of the matter about which pure state applies to the system.
P.S.: The “Thanks” was directed to Bob. Looks like great minds think alike, Matt.
October 29, 2014 at 9:00 pm #1010
Thanks for the interesting remarks. Given that ontic and epistemic are fundamentally different categories, would you agree that, generally speaking, it would be surprising to find something ontic and something epistemic represented by the same mathematics?
If you do not agree: can you think of an example of this occurring outside of quantum theory?
If you agree: is there something specific to wave functions that mitigates this surprise in your approach?
Matt
October 28, 2014 at 9:23 pm #929
Without wanting to put words in anyone’s mouth, I took Shan’s second point to be that if we can establish the reality of the expectation value of an arbitrary observable, then we have established the reality of the expectation values of all observables, and the latter is (more than) sufficient to reconstruct the wave function.
Matt
October 27, 2014 at 3:48 pm #890
Thanks for your comments so far, I think I now understand your position a bit better. Obviously I’m more than a little late to the party, and my question isn’t about the wave function per se, but if you get a chance to respond to the following at some point, I’d be interested in your thoughts.
You’ve said that “quantum theory does not introduce any new beables” and “quantum theory itself does not say anything about the world we could not say without using wave functions, vectors, operators or Hilbert space.”. If we’re talking about something like position or momentum then we are indeed using language that was present before quantum theory. But what about spin? Can “magnitude claims” be made about spins, and if so how can they be expressed without using quantum theory? Or am I simply reading too much into the statements I quoted?