2015 International Workshop on Quantum Foundations › Retrocausal theories › Are retrocausal accounts of entanglement unnaturally fine-tuned?
 This topic has 12 replies, 3 voices, and was last updated 5 years, 7 months ago by Nathan Argaman.


June 26, 2015 at 7:17 pm · #2427 · Ken Wharton (Member)
An explicit retrocausal model is used to analyze the general Wood-Spekkens argument that any causal explanation of Bell-inequality violations must be unnaturally fine-tuned to avoid signaling. The no-signaling aspects of the model turn out to be robust under variation of the only free parameter, even as the probabilities deviate from standard quantum theory. The ultimate reason for this robustness is then traced to a symmetry assumed by the original model. A broader conclusion is that symmetry-based restrictions seem a natural and acceptable form of fine-tuning, not an unnatural model-rigging. And if the Wood-Spekkens argument is indicating the presence of hidden symmetries, this might even be interpreted as supporting time-symmetric retrocausal models.
(Joint work with SJSU students. Online Discussion: TBA)
July 6, 2015 at 5:00 pm · #2502 · Nathan Argaman (Member)
Great work! In my mind, this is just the right sort of reply to Wood and Spekkens.
It is worth mentioning that our colleagues seeking foundational theories of nature in the field of high-energy physics generally consider “protection by symmetry” to be a legitimate, and in fact standard, form of fine-tuning.

July 9, 2015 at 12:56 am · #2544 · Ken Wharton (Member)
Thanks, Nathan… But I wonder what sort of “fine-tuning” high-energy physicists are trying to justify using symmetries. Not this same “no-signalling” issue, I assume? Or is it essentially the same: are they worried about how to maintain the perfect commutation of spacelike-separated operators, and is that what you’re talking about here?
July 9, 2015 at 5:36 am · #2551 · Nathan Argaman (Member)
No. In high-energy physics it is typically the mass of a particle which is protected. Because of renormalization, the parameters of the Lagrangian change their values (“running coupling constants”), so the observed mass of a particle should “naturally” be on the order of the energy scale of the theory. Some masses are much smaller. The prime example is the photon mass: it is zero because of gauge symmetry (a mass term for the photon would break gauge symmetry). Another typical example is the pion, which has a small mass because of an approximate symmetry.
July 9, 2015 at 9:37 pm · #2583 · Nathan Argaman (Member)
However, your planned attempt to generalize this to partially entangled states leads me to think that the symmetry principle may not always work: it appears that introducing asymmetric states will be easy.
More likely, the relevant physical principle is the increase of entropy, or the fact that the entropy was low in the past (I say this partly because of the relation between “information causality” and Tsirelson’s bound). After all, isn’t that always what prevents us from signalling into the past? If we didn’t have a “resource” of low entropy in the past, we wouldn’t be able to signal to the future either, in the following sense.
Think of a protocol where Alice sends a spin-1/2 particle to Bob (who is to her future), and they both perform measurements on it. The usual thing is that Alice can encode one bit per particle, by “preselecting” the output of her measurement. That means that she acts upon the result of her measurement, passing the particle to Bob only if its spin is in the direction which corresponds to the data she wants to send. But we can design a protocol where she is prohibited (by fiat) from doing so – the particle is passed to Bob regardless of her output. In that case all she can control is the measurement she makes. Thinking in terms of causation, it would appear that she can still signal to Bob, but in fact all Bob will be able to measure are EPRB-type correlations with her.
In fact, it is clear why: in order to convey information she has to pass to Bob a system or particle which has a large phase space (or Hilbert space), and to encode the information by restricting the state of the particle to part of that Hilbert space, with the different parts corresponding to different messages. By preselecting, she does just that. If you think of the entropy of the particle or system she is sending in order to convey one bit of information, it must be smaller by at least one bit relative to its entropy were she to send a random signal (or the overall log of the size of the space of its possible states). However, if she can’t act on the output of her measurement, she can’t reduce the entropy – it remains at least as large as it was prior to her measurement.
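The protocol above is easy to check with a quick Monte Carlo sketch. The function name and the maximally-mixed input state are my own illustrative assumptions, not part of the original discussion: with preselection, Bob's marginal statistics depend on Alice's setting, so she can signal; without it, his marginal stays flat no matter what she measures.

```python
import math
import random

def bob_marginal(alice_angle, bob_angle, preselect, trials=100_000):
    """Estimate P(Bob measures +1) in the Alice-to-Bob protocol.

    Illustrative assumption: each particle reaches Alice maximally mixed,
    so her measurement along `alice_angle` gives +1 or -1 with probability
    1/2 and collapses the spin along that axis.  Bob's probability of +1
    for a spin prepared at angle d is cos^2((bob_angle - d) / 2).
    """
    up = kept = 0
    for _ in range(trials):
        outcome = random.choice((+1, -1))
        if preselect and outcome == -1:
            continue  # Alice discards the -1 runs to encode her bit
        kept += 1
        spin_dir = alice_angle if outcome == +1 else alice_angle + math.pi
        if random.random() < math.cos((bob_angle - spin_dir) / 2) ** 2:
            up += 1
    return up / kept

# Preselection lets Alice's setting steer Bob's statistics (signaling):
print(bob_marginal(0.0, 0.0, preselect=True))       # 1.0
print(bob_marginal(math.pi, 0.0, preselect=True))   # 0.0
# Without it, Bob's marginal stays ~1/2 whatever Alice chooses:
print(bob_marginal(0.0, 0.0, preselect=False))      # ~0.5
print(bob_marginal(math.pi, 0.0, preselect=False))  # ~0.5
```

The half-half mixture of collapsed spins along +a and -a averages the two cos² terms to exactly 1/2, which is the entropy argument in miniature.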
This type of entropic analysis works in both classical and quantum descriptions of a system, and I guess it will have to work in any “good” retrocausal model as well. I don’t think that the retrocausal toy models we have worked with so far are “good” in this sense.
What do you think?
I want to make one more minor point, regarding your discussion of Tsirelson’s bound. You use the term in two distinct ways: one which I think accords with the usual usage, where the Tsirelson bound is fixed at $2\sqrt{2}$, and one where you imply that it may change when you vary $\gamma$. What changes is the maximum value that the relevant combination of correlators can take in your model, not the Tsirelson bound itself. In fact, I don’t think it is a coincidence that your model always conforms to the bound (by some finite margin for nonzero values of $\gamma$). But I’m no longer clear as to how one could best hope to demonstrate that that’s necessary for such models – by symmetry or by entropy considerations.
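For reference, the fixed bound of $2\sqrt{2}$ is easy to reproduce from the standard singlet correlator $E(a,b) = -\cos(a-b)$ at the usual optimal CHSH settings. This is a generic textbook check, not specific to the model under discussion:

```python
import math

def E(a, b):
    """Singlet-state correlator for spin measurements along angles a and b."""
    return -math.cos(a - b)

# Standard optimal CHSH settings (angles in radians):
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, -math.pi / 4

# CHSH combination of correlators, maximized by these settings:
S = abs(E(a, b) + E(a_prime, b) + E(a, b_prime) - E(a_prime, b_prime))
print(S)                 # 2.8284... = 2*sqrt(2)
print(2 * math.sqrt(2))  # the Tsirelson bound itself
```

Any local hidden-variable model is limited to S ≤ 2; the quantum settings above saturate 2√2 exactly.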
July 10, 2015 at 2:09 am · #2595 · Ken Wharton (Member)
Hi Nathan; thanks for the catch on the occasional misuse of the phrase “Tsirelson bound”… That’s an easy fix, at least!
On the more substantive issues, I do think that it will turn out that symmetry will play the main no-signaling role even for partially entangled states. You’re right that some of the symmetries will disappear in those cases, but some of the no-signaling disappears as well, in a certain sense. Namely, for maximally entangled states, there’s not even *self-signaling*; Alice can’t even signal to her own output, let alone Bob’s output. (Thanks to Pete Evans for bringing this to my attention.) Once the maximal symmetry is broken, by going to partially-entangled states, self-signaling reappears (although of course Alice-Bob signaling does not). I can’t quite prove that the remaining no-signaling is due to a symmetry, since I don’t have the partially-entangled model working quite yet, but stay tuned…
You’re absolutely right about the entropy issue being connected with not being able to signal to the past, of course. Take a look at section 4 of a piece I wrote with Huw Price, which is essentially the same argument you make above. (So maybe these models are “good” in the sense you mention after all.)
That said, I still haven’t properly sorted out the objective and subjective roles of entropy in signaling… Some of the signaling asymmetry is certainly due to the effect of entropy on consciousness (we don’t know the future), and some is due to the direct accessibility of low-entropy sources by experimenters. Right now I’m leaning towards putting most of the explanatory burden on the subjective (consciousness) side, and very little on the objective (source) side, but I need to set aside some time to think about all this more carefully and systematically. Your post has motivated me to do just that! 🙂
July 11, 2015 at 9:01 pm · #2640 · Nathan Argaman (Member)
Thanks, Ken. I have now read your work with Price, and indeed my point above largely overlaps with your discussion there.
Regarding consciousness, please don’t bring that into the discussion – I’m sure it won’t help, just as the introduction of the concept of “free will” led to much discussion, with only a fraction of it pertaining to the relevant quantum phenomena. These notions are too human, and therefore much harder to understand than physical phenomena, even quantum phenomena! And for the purposes of discussing and studying quantum phenomena, measuring devices which irreversibly register the results (in their memories) are completely sufficient (in the other case, “free variables” is a completely well-defined mathematical concept which plays the relevant role in the discussion, without leading one astray from quantum physics into human affairs).
That said, I completely agree with you that one needs to spend some time clarifying these issues for oneself. I did so recently, rereading parts of Price’s book in the process. The upshot is that the fact that we can only remember the past and not the future (or, to be more careful, our computer memories can only register information from their past…) is yet another instance of the Principle of Independent Incoming Influences, and is tightly related to the fact that there are sources of low entropy in the past. For example, if you want to store a bit A in memory, you can take a blank bit M=0 of memory and perform a controlled-NOT operation, with A as your control. After this, M will “remember” A. The controlled-NOT operation is reversible, but you can’t run the procedure in reverse because there’s no way to bring in a blank (low-entropy) bit of memory from the future.
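That memory-registration step can be made concrete with a purely classical, illustrative sketch (the function name is mine):

```python
def cnot(control, target):
    """Classical controlled-NOT: flips `target` iff `control` is 1 (reversible)."""
    return control, target ^ control

A, M = 1, 0          # bit to store, and a blank (low-entropy) memory bit
A, M = cnot(A, M)
print(M)             # 1 -- M now "remembers" A

# CNOT is its own inverse: applying it again restores the blank bit.
A, M = cnot(A, M)
print(M)             # 0
```

The gate itself is symmetric in time; the asymmetry enters only through the supply of the blank M=0 bit, which is exactly the low-entropy resource described above.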
I hope this helps. Nathan.
July 15, 2015 at 11:21 am · #2762 · Ken Wharton (Member)
Hi Nathan,
You’re precisely right that the issue of “not remembering the future” is entropy-related, and it’s just as true for machines as it is for humans. I’m fine with taking humans out of the equation.
And given this connection, I suppose it doesn’t make sense to imagine time-reversing the direction one remembers things without also time-reversing where the low-entropy sources lie. So I guess I shouldn’t be parsing the analysis into separate “subjective” and “objective” aspects: they should always go together. If low-entropy sources lay in both the future and the past, then we could remember them both.
That’s useful. One of the cases that was confusing me was signaling between agents with opposite arrows of time (even though, years ago, I published a science fiction story about this 🙂 ). But in such a case, any interaction would essentially provide both agents with low-entropy sources in both directions, so neither of them would be restricted to “remembering” in just one direction. I think that insight solves the biggest problems I was encountering, so thanks!
But I still wish I could make sense of signaling on a micro scale, fine-grained, below the level at which one could make entropy-related arguments. Sure, there are no agents at that level to send or receive signals, but this mismatch makes it hard to see how the no-signaling issue fits together with my low-level ontological models. Any further insight you have on this would be much appreciated…
July 16, 2015 at 9:07 pm · #2810 · Nathan Argaman (Member)
Hi Ken,
There’s one point which has been nagging at the back of my mind these last few days: when I said “good” retrocausal models, what I meant is that they should be clear about what the ontic variables and the epistemic variables are, and that there would be a natural way to take the log of the number of possible ontic states and associate it with an entropy. Intuitively, I think that in my model lambda does not represent an ontic variable – it is an angle which seems to divide the available phase space into parts which lead to different outcomes. When the parameter settings are the same, there are just two relevant parts. When Alice and Bob choose different settings, apparently the structure of the available phase space is different. You could think that it’s only the way the phase space is subdivided, but you can’t go too far in that direction, and here’s why: if the structure of the phase space is not changed, then there’s no apparent reason for the probability density to change. For such models Bell’s original analysis works (with lambda representing the phase-space variable), so they cannot violate the inequality and won’t explain anything.
Now, I’m not saying it’s going to be simple if the parameter settings affect the structure of the phase space, but I think that’s worth exploring. Also, in this sense your model is different, because you do have an explicit description of something rotating along the path, so it looks very much like an ontic variable. Again, you can think of my lambda as the value of that variable at the point along the path which corresponds to the source. I think that’s just one way that you can subdivide a phase space: the space of functions is clearly divided into classes which share the same value at a point. But intuitively I think that that’s not the relevant subdivision. I would think that the entropy should refer to the phase space of the values of the ontic variable at the source, at just one instant. So we have to keep looking.
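To attach numbers to the “log of the number of ontic states” idea, here is a minimal illustration; the state count of 16 and the half-space restriction are arbitrary assumptions of mine, not features of either model:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits; equals log2(N) for N equally likely states."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

N = 16                               # hypothetical count of discrete ontic states
print(entropy_bits([1 / N] * N))     # 4.0 bits = log2(16)

# Restricting lambda to half of the states (one "outcome class") costs one bit,
# which is exactly the resource needed to encode one bit of signal:
print(entropy_bits([2 / N] * (N // 2)))  # 3.0 bits
```

The open question above is which phase space this count should live on: the values of the ontic variable at the source at one instant, or something larger.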
Best, Nathan.
July 17, 2015 at 1:41 pm · #2834 · Ken Wharton (Member)
Hi Nathan,
I think you’re almost exactly right about what should be considered a “good” variable, but I’ll throw one suggested change at you: instead of taking the log of the number of possible ontic *states*, what about the log of the number of possible ontic *histories*? This associates “entropy” with regions of spacetime rather than instantaneous slices of some system, so it’s a bit different from what we’re used to entropy-wise, but I think this has to be the right way to think about block-universe retrocausality, as I outline in the toy models in http://www.mdpi.com/2078-2489/5/1/190 .
This viewpoint also makes it natural to change the structure of the phase space, exactly as you describe. (Again, see the linked paper.) The 3D analog is finite-edge effects in stat. mech., where the structure of the border of the region changes the state space for which one calculates the probabilities inside the bulk. Now, extend this analysis to 4D, and you have a clear-cut way to allow the structure of the 4D border (including the future!) to change the state space in the enclosed spacetime region.
As for how this might tie into my rotating-spin-vector model, you can try wading through http://arxiv.org/abs/1301.7012 for some ideas, but there are plenty of unresolved issues here, many of which I’ve placed on the back burner while I’m working on partially-entangled states.
July 19, 2015 at 1:22 pm · #2899 · Dustin Lazarovici (Participant)
Hi Ken,
I also wanted to give you some feedback on your paper. Unfortunately, I don’t have that much to add, because I think your discussion is very much on point. 🙂 The Schulman model is quite interesting (I didn’t know it before) and your arguments concerning symmetry are, of course, correct.
It’s not so much a factual critique as a personal feeling, but I think you’re still giving too much credit to the Wood-Spekkens argument. To me, it’s just one of those meta-results that seem deep but are actually quite irrelevant. In a toy model, where any postulation of probability distributions is ad hoc and where you might have to introduce some artificial variables, the “fine-tuning” objection seems to have some bearing. In any more serious theory, where the probability distribution is either part of – or, better, derivable from – the fundamental postulates/laws of the theory, the Wood-Spekkens argument amounts to the claim: if the theory were different, it’d be wrong. I mean: if the Boltzmann distribution were different, pigs might be able to fly. But who cares?
Still, the Schulman model is nice as an “intermediate step”, because it demonstrates how the “correct” (i.e. non-signalling) distributions can be justified by deeper principles (e.g. symmetry).
Best, Dustin
July 20, 2015 at 2:17 pm · #2913 · Ken Wharton (Member)
Hi Dustin; thanks for the kind words!
As for giving the Wood-Spekkens argument too much credit… If there were an existing retrocausal model they were attacking, one that already gave the right probabilities, I’d certainly agree with you. But they’re framing it as an argument against trying to develop such a model in the first place. And since most people’s instincts are aligned against retrocausality to start with, I think it’s an argument that many people would be inclined to accept. So I’m not inclined to dismiss it quite so readily.
And I do think it’s a reasonable argument… after all, I still can’t quite answer it definitively, because moving to partially-entangled states breaks some of the symmetries that make the two-particle version of Schulman’s model work so nicely. A closely related issue, which I’m having even more trouble with, is framing classes of retrocausal models in which you can’t signal into the past. (You can see some of the above discussion with Nathan relating to this point.)
Cheers!
Ken
July 27, 2015 at 3:38 pm · #2968 · Nathan Argaman (Member)
Hi Ken,
I’ve finally read not only your “information” article but also your 1307 arXiv preprint, which indeed required some “wading.” I must say I think you’re on the right path, with the most appropriate motivations I’ve seen yet (that is, of course, to the best of my judgement). And there’s a lot to do. I wonder why there aren’t more people working along such lines. For example, do you know what Rob Spekkens thinks about your line of argument? Does he accept now that retrocausation is worth pursuing?
Two things on the technical level: (a) I think you will need a measure for paths, as in Feynman path integrals; they’re not discrete, and you can’t just count them with integers; (b) Even accepting your idea of an entropy for 4D histories, I still think we’ll need the 3D concept in addition, e.g., so as to be able to identify lowentropy past boundary conditions, etc.
And one more thing: I think that by the time we’ve learned how to describe a measurement in a retrocausal theory of local beables, we will see that our ontic variables describe waves (with quantum noise, not just a solution of a PDE), but our epistemic variables are nevertheless “corpuscular,” in the sense that once something has been measured irreversibly, even just a single click in the detector, it’s either there or it isn’t. So the “it from bit” ideas will still apply in some sense, but only in the limited sense relevant to the epistemic variables. (And, of course, unitary evolution will be natural – you simply can’t change the information in your epistemic state between updates).
OK, I guess that’s it. Thank you very much for your efforts in organizing this, and especially for inviting me to join in. Please tell me if you’d like to look at the stochastic quantization ideas and discuss them as well.
Cheers, Nathan.
