Reply To: Are retrocausal accounts of entanglement unnaturally fine-tuned?



However, your planned attempt to generalize this to partially entangled states leads me to think that the symmetry principle may not always work: it appears that introducing asymmetric states will be easy.

More likely, the relevant physical principle is the increase of entropy, or the fact that the entropy was low in the past (I say this partly because of the relation between “information causality” and Tsirelson’s bound). After all, isn’t that always what prevents us from signalling into the past? If we didn’t have a “resource” of low entropy in the past, we wouldn’t be able to signal to the future either, in the following sense.

Think of a protocol in which Alice sends a spin-1/2 particle to Bob (who is to her future), and both perform measurements on it. Ordinarily, Alice can encode one bit per particle by “pre-selecting” the output of her measurement: she acts upon the result, passing the particle on to Bob only if its spin is in the direction corresponding to the data she wants to send. But we can design a protocol in which she is prohibited (by fiat) from doing so – the particle is passed to Bob regardless of her outcome. In that case all she can control is which measurement she makes. Thinking in terms of causation, it would appear that she can still signal to Bob, but in fact all Bob will be able to measure are EPRB-type correlations with her results.
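For concreteness, here is a minimal simulation of the no-post-selection version of this protocol (a sketch in Python; the helper names are my own, and I assume the particle reaches Alice in a maximally mixed state). Bob's marginal statistics come out independent of Alice's measurement choice, so she cannot signal, even though the conditional correlations are EPRB-like (up to sign convention):

```python
import numpy as np

rng = np.random.default_rng(0)

def spin_up(theta):
    # spin-up eigenstate along an axis at angle theta in the x-z plane
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def measure(state, theta):
    # projective spin measurement along theta: returns (+/-1, collapsed state)
    up = spin_up(theta)
    if rng.random() < abs(up @ state) ** 2:
        return +1, up
    return -1, spin_up(theta + np.pi)

def run(theta_a, theta_b, n=100_000):
    # Alice measures along theta_a but passes the particle on regardless
    # of her outcome; Bob then measures along theta_b.
    bob_plus, corr = 0, 0
    for _ in range(n):
        # maximally mixed input: a uniformly random pure state in the x-z plane
        psi = spin_up(rng.uniform(0, 2 * np.pi))
        a, collapsed = measure(psi, theta_a)
        b, _ = measure(collapsed, theta_b)
        bob_plus += (b == +1)
        corr += a * b
    return bob_plus / n, corr / n
```

Running this, Bob's marginal probability of “+” stays at 1/2 whatever `theta_a` Alice chooses, while the correlator comes out close to cos(theta_a − theta_b): correlations without signalling, exactly as described above.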

In fact, it is clear why: in order to convey information she has to pass Bob a system or particle with a large phase space (or Hilbert space), and to encode the information by restricting the state of the particle to part of that space, with different parts corresponding to different messages. Pre-selection does just that. The entropy of the particle or system she sends in order to convey one bit of information must be smaller, by at least one bit, than its entropy would be were she to send a random signal (or than the log of the size of the space of its possible states). However, if she can’t act on the output of her measurement, she can’t reduce the entropy – it remains at least as large as it was prior to her measurement.
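The bookkeeping here is elementary and can be made explicit (a sketch; the probability assignments are just the two-outcome spin-1/2 case described above):

```python
import numpy as np

def shannon_bits(p):
    # Shannon entropy of a probability distribution, in bits
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# No action on the outcome: the particle goes to Bob regardless, so the
# spin along Alice's axis is, from Bob's viewpoint, uniformly random.
h_random = shannon_bits([0.5, 0.5])       # 1 bit

# Pre-selection: Alice forwards the particle only when the outcome matches
# her intended message, confining it to half the state space.
h_preselected = shannon_bits([1.0, 0.0])  # 0 bits

# The entropy drop is exactly the one bit she can convey.
delta = h_random - h_preselected
```

The one-bit entropy reduction is precisely the capacity of the pre-selection channel; without the ability to act on her outcome, that reduction is unavailable.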

This type of entropic analysis works in both classical and quantum descriptions of a system, and I guess it will have to work in any “good” retrocausal model as well. I don’t think the retrocausal toy models we have worked with so far are “good” in this sense.

What do you think?

I want to make one more minor point, regarding your discussion of Tsirelson’s bound. You use the term in two distinct ways: one that I think accords with the usual usage, where the Tsirelson bound is fixed at $2\sqrt{2}$, and one where you imply that it may change when you vary $\gamma$. What changes is the maximum value that the relevant combination of correlators can take in your model, not the Tsirelson bound itself. In fact, I don’t think it is a coincidence that your model always conforms to the bound (by some finite margin for non-zero values of $\gamma$). But I’m no longer clear on how one could best hope to demonstrate that this is necessary for such models – by symmetry or by entropy considerations.
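To make the distinction concrete, here is a small check (a sketch; the $\gamma$-damped correlator below is my own illustrative stand-in, not your model's actual correlator). The CHSH combination for the quantum correlator saturates $2\sqrt{2}$ at the optimal settings, while a damped correlator peaks at a strictly lower value – the model's maximum changes, the Tsirelson bound does not:

```python
import numpy as np

def chsh(E, a, a2, b, b2):
    # CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Quantum correlator for a maximally entangled pair (sign convention
# chosen so the optimal S is positive)
E_qm = lambda x, y: np.cos(x - y)
settings = (0.0, np.pi / 2, np.pi / 4, -np.pi / 4)

S_qm = chsh(E_qm, *settings)  # saturates the Tsirelson bound 2*sqrt(2)

# Hypothetical gamma-damped correlator (an assumption, for illustration)
gamma = 0.2
E_damped = lambda x, y: (1 - gamma) * np.cos(x - y)
S_damped = chsh(E_damped, *settings)  # (1 - gamma) * 2*sqrt(2), below the bound
```

Here the damped model's maximal $S$ shrinks with $\gamma$, but $2\sqrt{2}$ itself is untouched: it is a ceiling fixed by quantum theory, not a model parameter.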
