Nathan Argaman replied to the topic Retrocausal Bohm Model in the forum Retrocausal theories 7 years, 10 months ago
So far, I have only skimmed through your present article, but I did read the 2008 (or rather, the 2006) version thoroughly at the time. My understanding was that it gives a very interesting description of what happens between a preparation and a measurement, but does not give a definite prescription for calculating the probabilities of the different outcomes of that measurement (for this reason, I did not include it in the comparison table in my article on Bell’s theorem, as noted there). It seemed to me that, implicitly, one is to calculate these probabilities by the usual rules of QM. That is quite distinct from Bohmian mechanics, where there is a clear independent prescription for evaluating the probabilities of measurement results (and, in fact, if the original density distribution is taken to be non-standard, one may obtain non-QM results).
Let me ask: Have I understood correctly? Is the current updated version different in this sense?
Of course, even if the answer is negative, your work accomplishes a lot and is quite impressive. It is also in good company: the two-time formalism of Aharonov et al. shares this attribute, in that it provides no way of predicting the outcome probabilities of the final measurement other than via standard QM.
The short answer to your question is that my model is simply an “add-on” to quantum mechanics and so just assumes the Born rule for probabilities as part of the pre-existing formalism. Yes, I would certainly like to see a more fundamental derivation of this rule, but my personal opinion is that none of the interpretations of QM has succeeded in doing this in a way that is rigorous and generally accepted.
In the case of the standard Bohm model, all the maths seems to tell us is that if we start with the Born distribution then this distribution will persist through time (the property known as equivariance). My understanding is that attempts have been made to show that other distributions will relax over time to the Born distribution, but that these attempts have not been fully convincing. So it seems to me that the usual model is essentially just resorting to the rules of QM too. It is true that the Bohm theory of measurement is impressive and constitutes an advance (in my opinion), but a similar version can be formulated for my model (Sec. 13 of my 2008 paper). In particular, given the initial probability distribution for position provided by my model, the maths ensures that this distribution is maintained through time.
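[For readers following along: the equivariance property mentioned above can be checked numerically in the simplest textbook case, a free 1-D Gaussian packet, where the Bohmian guidance equation has the closed form dx/dt = x σ'(t)/σ(t) with σ(t)² = σ₀²(1 + (ħt/2mσ₀²)²). The sketch below is purely illustrative, not taken from either paper, and uses units with ħ = m = σ₀ = 1.]

```python
import numpy as np

# Bohmian trajectories for a free 1-D Gaussian packet (hbar = m = sigma0 = 1).
# The packet width spreads as sigma(t)^2 = 1 + (t/2)^2, so the guidance
# equation reduces to dx/dt = x * sigma'(t)/sigma(t) = x * (t/4) / (1 + t^2/4).

rng = np.random.default_rng(0)
n, t_final, dt = 100_000, 2.0, 1e-3

# Start the ensemble from the Born distribution |psi(x,0)|^2 = N(0, 1).
x = rng.normal(0.0, 1.0, size=n)

# Euler integration of the guidance equation for every trajectory.
t = 0.0
while t < t_final:
    x += x * (t / 4.0) / (1.0 + t**2 / 4.0) * dt
    t += dt

# Equivariance: the ensemble should now match |psi(x, t_final)|^2, i.e. a
# Gaussian of width sigma(t_final) = sqrt(1 + (t_final/2)^2) = sqrt(2).
print(f"sample std = {x.std():.3f}, Born prediction = {np.sqrt(2.0):.3f}")
```

The point, of course, is only the persistence part: if one instead seeded the ensemble with a non-Born distribution, the same dynamics would carry that distribution forward too, which is why equivariance alone does not single out the Born rule.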
Finally, concerning non-standard distributions, I would have thought that both models are on the same footing in being able to accommodate them.
Anyway, this time it’s my turn to ask if I’m understanding things correctly.