Yes, the best derivation I know is that of Schwinger, who assumes a full Hamiltonian H and a free Hamiltonian H_0. H generates transitions between the levels of H_0, and he estimates the time for moving an amount \Delta E in H_0 to be of order \hbar \over \Delta E. Certainly not rigorous. In relativistic quantum theory (e.g., my book with Springer, 2015) it is established rigorously since, along with x and p, t and E are observables in the covariant theory and have covariant commutation relations. Larry Horwitz

It appears clear that TI and RTI provide a physical account of measurement as well as a physical derivation of the Born Rule. Previous objections to TI (such as that of Maudlin) have been unambiguously resolved and/or nullified (e.g., https://arxiv.org/abs/1610.04609). Thus, TI is still perfectly viable and provides long-sought solutions to pressing problems in quantum theory regarding the need for a physically grounded definition of ‘measurement’ and the source of the Born Rule (as well as a solution to the consistency problems for QFT as reflected in Haag’s theorem, http://www.ijqf.org/archives/2004). Curiously, however, TI is still not generally recognized as among the ‘mainstream’ approaches. I look forward to discussing with other IJQF members why that might be. Is it the apparent “action at a distance” of the direct-action theory that is off-putting? Is it because both Wheeler and Feynman abandoned their theory (though Wheeler was later advocating it again in 2003)? Comments welcome.

http://rsta.royalsocietypublishing.org/content/375/2106/20160390

More light on this aspect of quantum mechanics and the Copenhagen interpretation in the named form can only be good for quantum foundations.

Consider another event C, perceived by another subject k’’, and suppose that the separation between A and C is space-like just like that between A and B, but that the separation between B and C is time-like. If A and B are considered simultaneous, as well as A and C, then so should B and C. This is inappropriate, of course.

Even though space-like separation isn’t a sufficient condition for simultaneity, we assume that the question whether two events are simultaneous or not always has a definite answer. This is necessary in order to construct the universal sequential time n, which is at the core of the present reconstruction of quantum mechanics.

(It should be noted that simultaneity with respect to n, which is considered here, is not the same thing as simultaneity with respect to the time variable t that appears in relativity. Even though I argue that simultaneity with respect to n can have a universal meaning, simultaneity with respect to t cannot, of course. The relation between n and t is discussed in a follow-up paper: arXiv:1801.03396v2)

First, he is still looking for empirical predictions from RTI that differ from standard quantum theory. But as I’ve repeatedly noted in an email exchange with him, RTI is empirically equivalent to standard QED (up to the non-unitary transition); **this is a theorem**, as noted in Kastner/Cramer 2017 (https://arxiv.org/abs/1711.04501).

The only sense in which RTI differs from standard QM/QED is in predicting collapse (i.e., predicting that we will get definite outcomes, which we DO in fact get). In contrast, the unitary-only theory fails to predict what we see; i.e., definite outcomes. Thus, to the extent that it differs from standard QM/QED (i.e., only in predicting the measurement transition), RTI is empirically corroborated; while the unitary-only theory is not.

Now for the next problem: Prof. Marchildon states that “the charge is not associated with the amplitude of a physical process”. But this assertion is exactly contradicted by Feynman, the founder of QED, who correctly noted that the charge is the amplitude for an electron (or positron) to emit a real photon. The fact that each Feynman diagram represents a term in a sum in no way refutes this interpretation of the coupling amplitude. Such sums express situations in which no real photon was in fact emitted (usually because the photons are off-shell and/or their emission would violate the conservation laws). But the amplitude still functions as Feynman stated.
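For readers wanting the standard QED numerology behind this point (a textbook illustration, not something specific to RTI): each electron–photon vertex in a Feynman diagram contributes a factor of the charge e, so the associated probability weight is the dimensionless fine-structure constant.

```latex
% Each QED vertex carries an emission *amplitude* proportional to the
% charge e; squaring gives the probability weight alpha ~ 1/137.
\[
  \mathcal{M}_{\text{vertex}} \;\propto\; e \;\sim\; \sqrt{\alpha},
  \qquad
  \alpha \;=\; \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \;\approx\; \frac{1}{137},
\]
```

so a single real-photon emission enters any probability at order \(\alpha \approx 1/137\), which is the sense in which the charge functions as an amplitude.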

Prof. Marchildon’s remaining objections are also off-target. Getting specific behavior for the mesoscopic realm (including buckyballs) obviously requires detailed calculations based on the detailed structure of whatever molecules are being used, and those calculations will be done with standard QM (with which RTI is **empirically equivalent**). A molecule that is, for example, subject to excitation by extraneous photons will be a source of loss of unitarity (leading to ‘which-way information’) even according to standard QM. It’s just that standard QM won’t be able to explain why.

Regarding hypothetical coupling constants that don’t exist: the idea that one could imagine a large electromagnetic coupling constant that does not in fact exist in our world, and that this should be a refutation of a physical theory about our world, leads to absurdities. I can imagine a world in which real photons have large finite rest mass, thus ‘refuting’ the theory of relativity as it applies to our world, since then photons will fail to travel on null cones. Does this mean that relativity is wrong?

In any case, as is explicitly shown in Kastner/Cramer 2017 (https://arxiv.org/abs/1711.04501), the basic coupling amplitude between fields is not the only arbiter of the non-unitary transition. Marchildon has overlooked transition amplitudes, which contribute to the probability that a measurement-type interaction will take place. This issue is explicitly discussed in the above paper, in the form of decay rates, which depend on both the coupling constant and specific transitions between atomic states. Thus, transition probabilities are crucial aspects of the (time-dependent) probability of a measurement transition, and contribute factors that greatly decrease the basic coupling probability of 1/137.
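A standard worked example makes the suppression concrete (textbook atomic physics, cited here only to illustrate how transition amplitudes reduce the bare coupling probability): the spontaneous decay rate of the hydrogen 2p state.

```latex
% Spontaneous emission rate for hydrogen 2p -> 1s (dipole approximation).
% The rate scales as alpha^5, not the bare alpha ~ 1/137: the extra
% powers of alpha come from the transition matrix element and the
% photon phase space.
\[
  \Gamma_{2p \to 1s}
  \;=\; \left(\tfrac{2}{3}\right)^{8} \alpha^{5}\, \frac{m_{e} c^{2}}{\hbar}
  \;\approx\; 6.3 \times 10^{8}\ \mathrm{s}^{-1}
  \qquad (\tau \approx 1.6\ \mathrm{ns}),
\]
```

i.e., the actual probability per unit time of the non-unitary transition is set by \(\alpha^{5}\) times matrix-element factors, far below what the per-vertex coupling \(\alpha \approx 1/137\) alone would suggest.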

The same observation applies to the strong force coupling, in which the probability of non-unitarity is always greatly decreased by the relevant transition probabilities. Finally, the suggestion that the strong coupling constant might exceed unity in no way refutes the interpretation of RTI, since that only occurs for extreme separation between quarks, and could be seen as expressing a critical transition zone, beyond the limit of quark confinement, in which enormous energies have to be injected. In this extreme zone, you have to put in so much energy that you create new quarks, which corresponds very nicely to exceeding what would be a coupling of unity for a single quark.
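For reference, the scale dependence invoked here is the standard one-loop running of the strong coupling (ordinary QCD, independent of RTI):

```latex
% One-loop running coupling of QCD, with n_f active quark flavors and
% Lambda the QCD scale (~200 MeV). The coupling grows without bound as
% the momentum transfer Q approaches Lambda, i.e. at large separations,
% which is the confinement-scale regime described in the text.
\[
  \alpha_{s}(Q^{2}) \;=\; \frac{12\pi}{(33 - 2 n_{f})\,\ln\!\left(Q^{2}/\Lambda^{2}\right)},
  \qquad \alpha_{s} \to \infty \ \text{as}\ Q \to \Lambda,
\]
```

so a coupling exceeding unity is reached only in the infrared (large-separation) regime, consistent with the critical-transition-zone reading given above.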

In conclusion, I can find no substantive objections presented in Marchildon’s discussion. I hope I’ve corrected the misunderstandings he expresses here.