Reply To: Are retrocausal accounts of nonlocality conspiratorial? A toy model.

Ken Wharton

Hi Travis,

There are many aspects of the measurement problem, but I think the one that is hardest for retrocausal theories is that there should be no difference between an *interaction* and a *measurement*, since one can’t draw a hard distinction between the two. Standard QM of course has this problem: it incorporates mere interactions into multi-particle unitary dynamics, while external measurements are handled by a separate part of the theory. The other, related problem in standard QM is that if one takes a bigger view, encompassing the measurement device as part of a bigger system, one gets a different story: now it’s back to unitary dynamics for the whole bigger system. These two accounts can’t both be correct, so your preferred solution to this logical mismatch is a QTWO (a quantum theory without observers), like Bohmian mechanics.

In small-scale versions of retrocausal accounts, for which we have simple toy models, it may seem like we’re heading towards the same problem. A few-particle system might have self-interactions, treated via one framework, but the retrocausality is almost always imposed at the very end of the experiment, at some final “measurement”. This final measurement is usually imposed as a special boundary condition on the system that precedes it, unlike the few-particle interactions that might come earlier. So interactions and measurements look different (the very same problem as in standard QM), and one might think that all the other, related measurement problems of standard QM would arise for these models as well.

But, at least for retrocausal models which are couched in spacetime (in a Block universe), the other problems don’t follow in anything close to the same way.

The ambiguity of how measurements work in standard QM, combined with the use of large configuration spaces to deal with multi-particle systems, means that there are always two different ways to apply QM to a measurement device (MD) interacting with a system S. One can either include MD+S in a large, unitarily-evolving configuration space, or apply the usual measurement rules to S alone.

But if everything is couched in terms of spacetime-local beables (the biggest selling point of retrocausal approaches), there is no ambiguity about how to represent MD+S. MD fills some region of spacetime, S fills another region of spacetime, and in these models there is no ontological configuration space into which one can combine the two. Sure, you can come up with a larger spacetime region, MD+S, and call this larger region a new system S’. But thanks to the Block universe, everything about S’ will exactly map to MD and S, and vice-versa. The two ontologies are guaranteed to be compatible, unlike the situation in standard QM, where a configuration space can’t generally be unpacked into its local pieces.

But that’s just the ontology; what about the *theory*? Any ultimate theory capable of analyzing a finite system must be applicable to both S and S’. Given that the theory is retrocausal, MD imposes a future boundary condition on S (at least, that’s how all the small-scale toy models work). But Block-Universe consistency means that when the theory is applied to S’, it must reveal an intermediate boundary constraint on S at the S/MD interface. And this interface isn’t an external boundary of S’; it’s just some internal “interaction”. Nevertheless, for consistency’s sake, it must be true that some internal interactions act as boundary constraints, of one subsystem on the other. That’s the only possible conclusion: effective boundary constraints cannot occur only at the beginning and end of the universe; they have to occur throughout. (Travis: perhaps this helps with your point #2 when discussing Rod’s model, just now?)
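
To make that consistency requirement concrete, here is a minimal boundary-value sketch (my own toy illustration, with an invented one-dimensional setup, not drawn from any particular model). Take a single history q(t) on the interval [0,T], obeying some second-order equation of motion, with solutions selected retrocausally by data at both ends: q(0) = a and q(T) = b. Call the sub-interval [0,t_m] the system S and [t_m,T] the device MD. The block-universe solution on S’ = [0,T] assigns some definite value q(t_m) = c at the interface. Restricted to S alone, that very same history solves the very same equation, now with data q(0) = a and q(t_m) = c. So the interior interface value functions as an effective final boundary condition on S, even though t_m is not an external boundary of S’.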

Still, this observation brings us right back to the initial problem: in these toy models, few-particle interactions and final measurements are *not* treated the same way; the latter are imposed as boundary conditions and the former are not.

But unlike in standard QM, where the ambiguity of configuration space generates other, irresolvable issues, this problem does not appear to be fatal in Block-Universe retrocausal models. There are several possible paths forward; here are at least two:

1) Rod Sutherland’s approach (I think?): *Every* interaction is effectively a measurement, and therefore a boundary condition. There are other issues that come up here, of course, but I’m going to set them aside for now because it’s not my favored solution. (But I do think it’s a viable path that should be explored.)

2) My suggestion: Effective boundary conditions are imposed on lower-entropy systems (such as S) where they interact with higher-entropy systems (such as MD). But when two comparable-entropy systems interact, there’s no broken symmetry, and instead of a directed boundary constraint (MD constrains S, not vice-versa) one simply finds mutual correlations.

This is actually what happens in stat mech. Infinite-entropy systems are just thermal reservoirs, and they act as boundary constraints on lower-entropy systems that come into contact with them. To the extent that there is a large asymmetry between the two systems’ numbers of possible internal states, the one with many more possible states effectively imposes a boundary condition on the other. But for comparable-entropy systems in stat mech, that boundary constraint disappears, and one is merely left with a correlation between the two systems.
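
For the record, here is the textbook version of that reservoir argument (ordinary stat mech, nothing retrocausal about it). If systems A and B share a fixed total energy E, the number of joint microstates with energy eps sitting in B is

Omega_A(E - eps) * Omega_B(eps).

If A is enormous, one can expand ln Omega_A(E - eps) ≈ ln Omega_A(E) - eps * d(ln Omega_A)/dE = ln Omega_A(E) - eps/(k T_A), so up to a constant the count factorizes into

Omega_B(eps) * exp(-eps / k T_A).

A simply hands B a fixed-temperature constraint, a one-way boundary condition. But if A and B are comparable, no such expansion is available, and all one is left with is the joint factor Omega_A(E - eps) * Omega_B(eps): a mutual correlation, with neither side dictating terms to the other.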

It is also what happens in my most explicit analysis of this, in arXiv:1301.7012. In a simple 0+1D history-counting framework, it falls out of the math that higher-energy-density systems constrain lower-energy-density systems, in just the right way to impose a measurement-like boundary constraint. But two comparable systems just get correlated, with no measurement-like boundary.
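
That paper’s framework is more involved than I can reproduce here, but the qualitative behavior survives in an almost embarrassingly simple counting toy (my own illustration, not the model from arXiv:1301.7012; the mode counts and quanta below are made up): split a fixed number of energy quanta between a small system and a partner, count microstates for each split, and watch what happens as the partner grows.

from math import comb

def joint_counts(E_total, n_small, n_big):
    # Number of joint microstates for each way of splitting E_total quanta
    # between a small system (n_small modes) and a partner (n_big modes).
    # Each side's count is the stars-and-bars number of ways to spread its
    # share of quanta over its modes.
    return {e: comb(e + n_small - 1, n_small - 1)
               * comb((E_total - e) + n_big - 1, n_big - 1)
            for e in range(E_total + 1)}

def summarize(counts):
    total = sum(counts.values())
    probs = {e: w / total for e, w in counts.items()}
    peak = max(probs, key=probs.get)
    mean = sum(e * p for e, p in probs.items())
    return "peak e=%d (p=%.2f), mean e=%.1f" % (peak, probs[peak], mean)

# Comparable partners: a broad distribution over the split -- the two
# systems are merely correlated, neither imposes a boundary on the other.
print("comparable:", summarize(joint_counts(20, n_small=3, n_big=3)))

# Huge partner: the count is sharply peaked, pinning down the small system's
# share -- the big system acts like an effective boundary constraint.
print("huge partner:", summarize(joint_counts(20, n_small=3, n_big=300)))

The counting is the whole story here: nothing dynamical picks a winner; the side with vastly more internal options simply dominates the tally.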

It seems to me that, given this account, the retrocausal toy models go through. The few-particle interactions aren’t imposed as boundary conditions, because they’re interactions (correlations) between comparable-entropy systems. Then, at the end of the experiment, some (presumably massive) MD interacts with these small systems, imposing a (future) boundary constraint. That’s exactly how the toy models work. And now you can see how it would scale up: if the MD were a buckyball absorbing a photon, it would impose a boundary on the photon. But the buckyball might itself be measured by a much larger detector, which would impose an external boundary condition on the MD, at some later time.

Instead of an infinite regress, you eventually hit a cosmological boundary condition, which is probably the best analog of an infinite-entropy system one could come up with in 4D stat mech. So that cosmological boundary might be an ultimate cause, but there would be plenty of other, smaller, proximate causes acting on low-entropy systems. And many of those would be imposed exactly at this microscopic-macroscopic interface, where “quantum measurements” are generally assumed to occur. All of these boundary constraints throughout the universe would of course have to be solved for “all at once” rather than dynamically, but the larger-entropy pieces would still constrain the smaller-entropy pieces, all the way down to the loneliest little particles passing through carefully-shielded vacuum chambers.
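
If it helps to see that “all at once” bookkeeping concretely, here is a deliberately crude sketch (again my own illustration, not any model from the literature; the grid size, clamp location, and values are arbitrary): a discrete one-dimensional history obeying a second-difference rule, constrained at both ends and at one interior “measurement” point, and solved as a single global linear system rather than stepped forward in time.

import numpy as np

N = 20                        # number of time steps in the toy history
A = np.zeros((N + 1, N + 1))  # global constraint matrix
b = np.zeros(N + 1)

# Interior points obey a discrete second-difference (free) equation:
# x[k-1] - 2*x[k] + x[k+1] = 0
for k in range(1, N):
    A[k, k - 1], A[k, k], A[k, k + 1] = 1.0, -2.0, 1.0

# Constraints: initial data, final data, and one interior clamp at k = 12,
# standing in for a measurement-like boundary at a micro/macro interface.
for k, value in [(0, 0.0), (N, 1.0), (12, 0.8)]:
    A[k, :] = 0.0
    A[k, k] = 1.0
    b[k] = value

# Solve the whole history in one shot, rather than evolving it dynamically.
x = np.linalg.solve(A, b)
print(np.round(x, 3))

The point of the toy is just the bookkeeping: the clamp at k = 12 is an interior fact about the full history, but from the point of view of the sub-history on [0, 12] it functions exactly like a final boundary condition.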

There’s a lot more to say here — a nice retrocausal resolution of delayed-choice and quantum-eraser scenarios, etc. — but I’ll leave it there and see what you and others think. Does my favored approach offer any hope for solving the biggest measurement problems in retrocausal theories?

Best, Ken
