2015 International Workshop on Quantum Foundations › Retrocausal theories › Reply To: Are retrocausal accounts of nonlocality conspiratorial? A toy model.
Hi Ken. I think I agree with your description of the situation and the problem and even your statement of what the solution must be:
“Nevertheless, for consistency’s sake, it must be true that some internal interactions are treated as boundary constraints, of one subsystem on the other. That’s the only possible conclusion: effective boundary constraints cannot just be at the beginning and end of the universe, they have to occur periodically throughout.”
But I would put it this way (and I’m not sure if this is just exactly what you meant to be saying, or something else entirely): it better turn out that the boundary conditions on S’ *imply*, via the application of the basic dynamical postulates of the theory, the same sort of “internal facts” that were instead imposed by hand, as boundary conditions, when you analyzed S. That’s, I think, the “block universe consistency” that’s needed.
But then, I think unlike you, I’m not at all optimistic that this can be achieved. You mention Rod’s theory as one example of how it could be achieved, but to me Rod’s theory is instead a very clear-cut example of this *not* working out, at all, in the desired way. But then I admit that I remain fuzzy on what his theory is supposed to be saying, exactly: none of the stuff about getting rid of config space wave functions in favor of “spacetime local beables” ever made any sense to me, and it is still not clear to me how he intends to resolve the interaction/measurement problem that you described so clearly. (My sense is that he wants to say that every interaction is a measurement, but that can’t be right: you can’t make literally everything in the block a measurement boundary at which you simply stipulate, by hand, what’s going on, because then there’d be literally nothing left for the theory to say.)
And then your other proposed solution (treating large-entropy systems as dynamically privileged w.r.t. small-entropy systems) also strikes me as not too promising. It seems like just another way of trying to hide the standard vagueness-and-ambiguity pea, from ordinary QM, under some macro/micro terminological shell. That is, I don’t see how your prose descriptions of how it might work out can possibly be realized in clean mathematical terms; they sound nice, but to me they are riddled with exactly the same sort of vagueness and ambiguity that one finds in standard QM talk about how quantum systems evolve one way when they’re not interacting with a measuring device, but another way when they are.
I mean, it’s not like I’m claiming to see with certainty that this’ll never work. It’s just that it doesn’t look promising to me. But I’ll of course be interested to hear about any progress that is made.