Arthur Fine


Shan,

Thanks for the post. Here are a few thoughts.

1. You catch the ontological models view out at a weak point. For what they call “psi-epistemic” has to do with the possible overlap of ontic states in the preparation of distinct state functions. But that has nothing to do with explaining collapse on measurement. In the earlier literature, as in your citation from Einstein, knowledge was invoked to gesture at a possible explanation of collapse. (See, for example, Kemble, E. C., The Fundamental Principles of Quantum Mechanics, McGraw-Hill, New York, 1937.) But this current use of “epistemic” is a technical term of art in a quasi-operational approach to QM, with no special connection to collapse.
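(To fix the usage in symbols, with notation that is mine rather than anything in your post: in an ontological model a preparation of the quantum state $\psi$ yields an ontic state $x$ distributed according to some density $\mu_\psi(x)$. The model counts as psi-epistemic just in case there are distinct states $\psi \neq \phi$ whose distributions overlap,

$\int \min\{\mu_\psi(x), \mu_\phi(x)\}\, dx > 0,$

so that some ontic states are compatible with both preparations; otherwise it is psi-ontic. Nothing in this definition mentions measurement or collapse.)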

2. Re Einstein. Many commentators cite Einstein when it suits them, either to bolster their own attitude or to score a point over some alleged failing of Einstein’s. But Einstein’s writings are subtle, complex and diverse, and these sorts of citations do him a disservice. In particular, you cite a passage where he seems to affirm a “state of knowledge” view of the state function. But I can give you many other passages where he affirms, just as surely, quite opposite views. I have an old paper on this, where I point to the diversity and try to explain it. (“Einstein’s Interpretations of the Quantum Theory,” Science in Context 6 (1993): 257-73; also in M. Beller, R. S. Cohen and J. Renn (eds.), Einstein in Context, Cambridge: Cambridge University Press, 1993, pp. 257-73.) I am sure more recent scholarship has gone even further along those lines. One lesson: there really is no definite thing that can count as “Einstein’s interpretation” of QM. See the paper.

3. Your argument invokes the Bell-Kochen-Specker theorem. (Let’s be historically accurate here; Bell published first. It was a small point of pride with him, even if that was a somewhat sore point for Kochen & Specker, whose publication was delayed.) But you are cavalier in suggesting that value definiteness is all that theorem needs. Of course one needs more (as you sort of acknowledge in a footnote). The assignment of values has to be noncontextual. But more still: we need orthogonal additivity, or its equivalent (product and sum rules, etc.), namely that in any resolution of the identity by rank-one projectors exactly one projector is assigned the value 1.

In the ontological models framework this is ensured by a special additivity postulate; namely, that for any ontic state x the response-function probabilities at x for the eigenvalues of any observable add up to 1. In the deterministic case this implies orthogonal additivity and generates a no-go. (The postulate is spelled out in symbols below.) They tend to gloss over this additivity assumption as something surely obvious: that every measurement has a result. But additivity is a postulate that needs to be examined, since in fact not all measurements do have results (talk to your laboratory bench mate). Even as an idealization, is it reasonable to assume that the ontic state resulting from any state preparation whatsoever would be suitable for performing absolutely any measurement of anything at all? That constitutes a strong sort of noncontextuality jointly for preparations and measurements, one that goes beyond the usual Bell-Kochen-Specker noncontextuality. Note also that it is a critical assumption, not usually brought out, but necessary in the demonstration that there are no maximally epistemic (noncontextual) models, or no preparation-noncontextual ones. It is also critical for the Pusey-Barrett-Rudolph theorem.
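To make the additivity postulate explicit, again in notation I am supplying: the model assigns each measurement $M$ a response function $\xi(k \mid M, x)$, the probability of outcome $k$ when $M$ is performed on the ontic state $x$, and the postulate requires that

$\sum_k \xi(k \mid M, x) = 1$ for every ontic state $x$ and every measurement $M$.

In the deterministic case each $\xi(k \mid M, x)$ is 0 or 1, so for a measurement given by an orthogonal resolution of the identity $\{P_k\}$, with $\sum_k P_k = I$, exactly one $P_k$ receives the value 1 at each $x$; that is just the orthogonal additivity the Bell-Kochen-Specker argument needs. Allow instead that $\sum_k \xi(k \mid M, x) < 1$ for some pairs, i.e. that some preparations simply yield no result for some measurements, and the contradiction is no longer forced.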

4. Lastly, let me contrast with yours another sort of picture of the wave function as representing incomplete knowledge. Knowledge of what? You suppose it is knowledge of the exact values of observables. But it could be incomplete knowledge of “a real state of affairs”, just as Einstein says (in your Heitler citation, and elsewhere, sometimes). Moreover, as in the ontological models, in general that real state may only determine probabilities for measurement outcomes. The particular outcomes may be matters of chance, governed only by probabilistic laws. (Einstein says this too, sometimes.) Then, given a particular outcome, we update our (still partial) knowledge of the real state accordingly. This dissolves the specific problem of collapse, turning collapse into updating.
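A sketch of the updating, in the notation assumed above: if our knowledge of the real state is represented by a distribution $\mu(x)$, and a measurement $M$ yields outcome $k$ with probability $\xi(k \mid M, x)$ given the real state $x$, then on learning that the outcome is $k$ we condition,

$\mu(x) \;\longrightarrow\; \mu(x \mid k) = \xi(k \mid M, x)\, \mu(x) \Big/ \int \xi(k \mid M, x')\, \mu(x')\, dx',$

and the state function assigned after the measurement simply encodes this revised, still partial, knowledge. No physical jump is needed for that step.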

Other issues, however, may remain. Does the real state change under measurement? In the Heitler quote, Einstein thinks not. But measurement is a physical interaction, so plausibly it might. In that case some might want a dynamics, which could be a probabilistic dynamics. And that could be an issue, as you say, for an epistemic view to address. But any view that postulates real change due to measurement would face the same issue.
