Maximilian Schlosshauer

Forum Replies Created

Viewing 22 posts - 1 through 22 (of 22 total)
  • #2614

    Dear Roderich and Travis,

    Thanks a lot for your insightful replies. I’ll dig up some of the references you listed. And I certainly agree with Travis that no retrocausality is required to explain the delayed-choice experiments.

    Best,
    Max

    #1135

    To Richard #1129:

    … where the application of quantum theory presupposes that exactly one outcome will occur.

    Exactly! This is why I tend to think of the measurement problem as a pseudoproblem.

    #1109

    Hi Matt L.,

I hope I’m not joining this conversation too late. Regarding your comment in #1094:

    The issue is that the protection is repreparing the system in an independent copy of the initial state, so you effectively have an ensemble of independent copies which you are measuring.

    This way of describing the protection seems specific to the Zeno-type protective measurement. If the protective measurement is adiabatic (i.e., no intermediate projections back onto the initial state), then I don’t see the ensemble analogy you are drawing; at least it’s not obvious. Is your argument specific to the Zeno version?
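
For concreteness, here is the textbook Zeno estimate I have in mind (standard short-time expansion, nothing specific to protective measurement):

```latex
% Quantum Zeno protection: N projections back onto |\psi\rangle in total time T.
% Short-time expansion of the survival probability (\hbar = 1):
%   |\langle\psi| e^{-i H \tau} |\psi\rangle|^2 \approx 1 - (\Delta H)^2 \tau^2,
%   with \tau = T/N. After N such cycles:
P_{\mathrm{surv}} \approx \left[\, 1 - (\Delta H)^2 \frac{T^2}{N^2} \,\right]^{N}
  \approx 1 - \frac{(\Delta H)^2 T^2}{N} \;\longrightarrow\; 1
  \quad (N \to \infty).
```

In the adiabatic version there is no such sequence of projections, which is why it is not obvious to me that the "repreparation" picture carries over.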

    Best,
    Max

    #1084

    Hi Shan,

    Unfortunately your talk will be during a time slot when I will have my two young kids to look after. So I will not be able to chime in in real time. But I’ll be sure to check back later.

Also, I’m thinking that some of the points about the foundational implications of protective measurement that we discussed during Matt Pusey’s talk and my own might be pertinent to today’s discussion.

    Have fun!

    Best,
    Max

    #1058

    Hi Shan,

    Thank you for your reply, that’s helpful.

    I will need to take a closer look at Matt’s notes myself. Tomorrow!

    Best,
    Max

    #1056

    To Shan #1028:

    An example is a trapped atom, where the potential may not be known beforehand, but one does know that after a sufficiently long time the atom is to be found in the ground state.

    At the risk of beating this horse to death: I would rephrase this to read “one does know that after a sufficiently long time the atom is to be found in the ground state with high probability.”

    My point is that once again there is an irreducible indeterministic element. You can never know for sure if the system is actually in an eigenstate of the Hamiltonian (which is what you need for a PM to proceed). That is, of course, unless you have actively prepared it in such an eigenstate, but then you know the state from the outset and there’s no need for a protective measurement anyway. Or what am I missing here?

    Best,
    Max

    #1055

    Hi all,

    Just back from dinner, sorry to be joining so late.

Since you’ve been discussing the two different implementations of a PM (adiabatic and Zeno), here’s a question for Shan. Do you think a Zeno-type PM is as indicative of the reality of the wave function as an adiabatic PM? If yes, why? If not, what do you see as the differences?

    (I think there are conceptual differences that may have interpretive consequences — even though one might be able to show a formal equivalence of the two schemes, as Matt L has suggested.)

    Best,
    Max

    #1027

    Hi Matt,

Thanks for your intriguing contribution. I only managed a cursory read this afternoon and will still have to take a closer look; unfortunately, now is not a good time (the family is calling me to dinner). But I’ll get back to it later, and I look forward to seeing what the other participants will have to say.

    Best,
    Max

    #979

    Hi Shan,

    Thank you for your reply.

    As for your revised criterion, well, I think ultimately it’s a matter of taste which criterion each of us deems satisfactory for establishing the reality of the wave function. I would still say that “disturbing a system with probability arbitrarily close to zero” is not sufficient, because any nonzero disturbance whatsoever implies an irreducible indeterministic (statistical) element.

    The identification of (ideal) measurability with reality of course has its origin in classical physics, where we consider something a physical property (and hence “real”) if we can measure it. But since QM forbids reliable measurement of an unknown quantum state, this criterion of measurability cannot be applied to the wave function. Which is why I would agree with you in saying that equating measurability with reality (that is, deducing reality from measurability) is too simplistic an approach for quantum mechanics — and it is why, I suppose, PBR had to go a different route. (In my opinion, they still did not succeed in establishing the reality of the wave function, but their argument is nonetheless interesting for various other reasons.)

    But we should discuss more during your talk!

    Best,
    Max

    #960

    Dear Richard,

    The kind of view of the wave-function you were suggesting (which I think is basically right) is better expressed by calling the wave-function a source of probabilities rather than as a description of the statistical properties of ensembles.

I’m happy to go with that. But what exactly do you mean by “source of probabilities”? Is that source fed (determined) by something external, objective, physical? Or is “source” meant just in the sense of a collection of my personal beliefs, as a subjective Bayesian would have it? I think you’re on the latter bandwagon, but I’d like to make sure.

    I do concur that the ensemble conception is confusing (see my confused attempt to explain it in #955), and certainly the derivation of probabilities from relative frequencies is circular, as is well known.

    Best,
    Max

    #956

    Hi all,

Thanks so much for all your excellent comments. I have to dash and hold office hours now; my students are banging down the door with questions about Newtonian physics. But I’ll be back tomorrow to answer any further comments and queries.

    Also, look for Matt Pusey’s and Shan’s talks on the same topic. We can continue the discussion there.

    Thanks again,
    Max

    #955

    Dear Bob,

    Re your #949. Your question is essentially about how to understand probability assignments. I.e., what does a 75% chance of success mean if there’s just a single trial? And there are lots of different interpretations and ways to motivate that (see QBism for a radical and interesting take). I would say that for an ensemble person, the only meaningful interpretation of “75% chance of success” is based on relative frequencies of outcomes of repeated trials. And the quantum state of your quantum computer then reflects that property; you are welcome to assign a quantum state to an individual machine, but its operational meaning will only be cashed out in terms of statistics of many “trials.”

    Well, I may be leading myself down a rabbit hole here, so I better stop.

    Best,
    Max

    #950

    Dear Bob,

I certainly agree that there is some probability that you will kick the system into a different energy eigenstate (assuming energy provides the protection), and in that case you will draw a wrong conclusion.

    Sure, and from a practical point of view, if I can make that probability small enough I might well be perfectly happy. I have no problem with that. But I don’t think that’s enough to draw fundamental conclusions about the meaning of the wave function, which is the conclusion I’m challenging.

A physicist may be a bit more bold and rush the result off to Phys. Rev. Letters (which publishes plenty of mistakes).

    You are bold indeed — Aharonov, Anandan, and Vaidman only rushed it off to Phys. Rev. A!

    Thank you for all your comments. I’m glad Shan and I motivated you to think about protective measurements again.

    Best,
    Max

    #946

    Bob has raised an interesting question — namely, to what extent does protective measurement challenge the viewpoint that the wave function only describes ensembles, like Ballentine et al. once suggested?

    My own sense is that there is no real challenge, just as doing quantum experiments on single quantum systems does not amount to a challenge. The reason is that even when we deal with single systems, there always remains an irreducibly random (indeterministic) element in the quantum description, which is ultimately cashed out in statistical terms. And such statistical terms may always be interpreted as only meaningful in the sense of statistics of measurements on identically prepared systems, i.e., ensembles. At least this is how I imagine Ballentine might reply.

    What do you think?

    #945

    Dear Bob,

    Thank you for sharing these fascinating recollections about Jeeva.

Nonetheless, wouldn’t you allow that protective measurement is an interesting idea … ?

Absolutely! I think it’s a brilliant measurement scheme, just as brilliant as weak measurement (though the latter seems to have received more attention and gained more experimental traction). I very much admire Aharonov, Vaidman, and their co-workers for coming up with it. It has definitely enriched our conception of a quantum measurement.

So I’m not here to dismiss protective measurement itself; I’m just trying to point some cautioning fingers at the bolder foundational claims that have been associated with it. But even if none of those claims goes through, I think protective measurement is an important contribution that deserves attention and will hopefully, some day, be implemented experimentally as well.

    Best,
    Max

    #943

    Dear Bob,

    Responding to your #933, you do agree that part of the time, maybe 90% of the time, I will get some information about the initial state. Well, that suggests that you would agree that there was an initial state to get information about.

    Yes, certainly there “is” an initial state, in the sense that the system has been prepared in some (unknown) eigenstate of the Hamiltonian. So now I can go and try to extract information about that state using protective measurement. But in doing so, I will always disturb the state, in the sense that an entangled system-apparatus state will be created. Now, in the final readout of the pointer, I may happen to project the system back onto its initial state (and in the process get information about that state). But this, of course, is an indeterministic process, and in the end there’s no way for me to know whether whatever I have extracted is information about the initial state or information about some other state. Moreover, wouldn’t you agree that this uncontrollable back-projection is different from a situation in which the system has remained in its initial state at all times and I have learned something about it? The latter is what I would say we’d need to establish the reality of the wave function, but QM won’t let us have it, and protective measurement can’t change that fact, since it’s just a (clever) application of QM.

    Best,
    Max

    #942

    Hi Ken,

    Thanks! As far as my own view of the wave function is concerned, well, this may be opening a can of worms indeed. I’m happy to say, however, that I’m partial to what people call “epistemic” approaches. That is, I like to think of the wave function as a calculational tool for organizing information/knowledge/beliefs (I’m not exactly sure which notion is the least troublesome) about future measurement outcomes (or perhaps “experiences,” as QBism puts it). But I try not to be dogmatic about it.

    Ah, I should have never said anything.

    Best,
    Max

    #939

    Richard,

    Thanks, that’s a good question. In the usual formulation, protective measurement follows a von Neumann-style description of measurement. That is, Eq. (11) is taken to describe a situation in which information about the different energy eigenstates |n> is transferred to the mean position of the pointer wave packet, encoded in the corresponding quantum correlations. This, of course, calls for a second measurement stage, the actual readout of the pointer (which in turn is subject to the usual QM indeterminism, something that Dass and Qureshi discuss in their PRA).

    So protective measurement is essentially silent on its particular interpretation of the measurement process — once the correlations are established, it defers to the usual rules of QM. The e-e link would only come into play once we have made a secondary, projective measurement on the pointer, collapsing the entangled state on the rhs of Eq. (11). But protective measurement does not explicitly deal with that stage.
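
Schematically (my own notation; this is the standard von Neumann-type coupling and may not match the paper’s Eq. (11) exactly):

```latex
% Adiabatic protective measurement: weak coupling spread over a long time T.
H_{\mathrm{int}} = g(t)\, A \otimes P ,
  \qquad \int_0^T g(t)\, dt = 1 , \qquad g(t) \approx 1/T .
% With the system protected in an eigenstate |n\rangle of H, the adiabatic
% theorem keeps it (approximately) in |n\rangle, and the pointer wave packet
% \Phi(x) is shifted by the expectation value rather than by an eigenvalue:
|n\rangle\, \Phi(x) \;\longrightarrow\;
  |n\rangle\, \Phi\big( x - \langle n | A | n \rangle \big) .
```

The final readout of the pointer position is then an ordinary projective measurement, which is where the usual QM indeterminism re-enters.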

    I hope I understood your question correctly; if not, please follow up!

    Best,
    Max

    #935

    Bob,

    Now regarding your second comment. I agree with your drawing of an analogy between protective measurements on a single system (assumed to remain in the initial state throughout) and tomographic measurements on an ensemble of identically prepared systems. There’s certainly a formal similarity here: we’re trading impulsive single measurements on a collection of systems, as in tomography, for a collection of weak measurements on a single system.

There’s also another connection here. In protective measurement, we could reliably determine the expectation values if each measurement were infinitely long and weak. Similarly, in tomography, if we had infinitely many systems in the ensemble, then we could reliably determine the quantum state of each system. So why does nobody claim that the latter fact implies the reality of the wave function? Advocates of applying protective measurement to foundational problems would say, I think, that it’s because we’re dealing with an ensemble, not a single system. But as far as the limiting procedures go (infinite ensemble size vs. infinite measurement time), both strike me as similarly questionable, and therefore so does their ability to justify sweeping foundational conclusions about the wave function.
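
To put the two limiting procedures side by side (rough, standard estimates; the exact exponents depend on the scheme):

```latex
% Tomography on an ensemble of N identically prepared systems:
\delta \langle A \rangle_{\mathrm{est}} \sim \frac{\Delta A}{\sqrt{N}}
  \;\longrightarrow\; 0 \quad (N \to \infty).
% Adiabatic protective measurement of duration T: probability of being kicked
% out of the protected eigenstate (adiabatic theorem, generic scaling):
P_{\mathrm{transition}} \sim O(1/T^{2}) \;\longrightarrow\; 0 \quad (T \to \infty).
```

Either way, strict reliability lives only in an unattainable limit.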

    Best,
    Max

    #933

    Dear Bob,

    Many thanks for your comments.

    First, let me reply to your suggestion that a “FAPP” solution may well be acceptable. I’m happy with FAPP solutions to most problems, even some foundational ones. But I’m not sure that protective measurement falls into that category. As mentioned in my reply to Shan above, in protective measurement we’re not determining the initial quantum state with a certain degree of precision, in the sense that the measured state will always be close to the initial state. (In which case I’d happily endorse such a FAPP solution.)
    Rather, in some measurements we’re obtaining information about the initial state, and in others we’re obtaining information about an orthogonal state (with the system then projected onto that state). There is an irreducible randomness in this process, which is simply a reflection of the unavoidable randomness in the outcome of any quantum measurement of a state that is not already known. In this way, protective measurement is just like any other quantum measurement, including the trade-off between disturbance and information gain. So given this, how could we get any foundational mileage out of protective measurement — which is essentially just a long weak measurement?

    I’ll respond to your second point in a moment.

    Best,
    Max

    #930

    Hi Matt,

Thanks for the clarification. But I’m not sure the inference from the reality of one expectation value to the reality of all expectation values is so straightforward. Suppose I identify “reality” with “can be measured.” Then the first expectation value would be real because I was able to measure it (somehow). But if I cannot subsequently measure any other expectation values, why should I consider them real as well? I think QM tells us many cautionary tales of why counterfactual reasoning may fail. Unless I can actually (simultaneously or consecutively) measure all the required expectation values, I don’t see why measuring one value (and therefore deeming it real) should allow me to say that the expectation values of all other observables are real too. And in protective measurement we have precisely a situation where such multiple measurements may be fundamentally impossible (if we take the limit of an infinitely long measurement) or practically impossible (because the number of required measurements grows with the dimension of the Hilbert space, which is itself exponential in the system size — that’s the scaling problem I mentioned).
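
For concreteness, here is the standard parameter count behind that scaling (general QM, not specific to the protective-measurement literature):

```latex
% A pure state in a d-dimensional Hilbert space is specified by
%   2d real amplitudes, minus normalization, minus a global phase:
\#\,\mathrm{params} = 2d - 2 ,
\qquad d = 2^{n} \ \text{for } n \ \text{qubits},
% so on the order of d expectation values are needed to reconstruct
% |\psi\rangle, a count that is exponential in the system size.
```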

    Best,
    Max

    #925

    Hi Shan,

    Thank you for your comments. Here are my responses.

    First of all, in your paper you say that measurability of an unknown quantum state is a condition for the reality of the wave function that is “too stringent to be true.” But it was precisely this condition (albeit with reference to an unknown eigenstate of the Hamiltonian) that the architects of protective measurement have used to define the reality of the wave function. For example, Aharonov and Vaidman (1993) write: “We show that it is possible to measure the Schroedinger wave of a single quantum system. This provides a strong argument for associating physical reality with the quantum state of a single system.” They made no qualification of the term measurement: what can be measured is considered real, and protective measurement (they claim) allows us to measure the wave function, and so the wave function must be real.

    You also say that if the condition (measurability of an unknown quantum state) were true, “then no argument for the reality of the wave function including the PBR theorem could exist.” But you can have conditions that do not refer to measurability. Indeed, the PBR theorem is precisely of this kind — it does not infer the reality of the wave function from some notion of measurability, but rather from a clash between an “overlap” assumption (the possibility that a single real state is associated with two distinct nonorthogonal quantum states), a preparation independence assumption, and the predictions of QM.

In my paper, I have argued that protective measurement does not allow one to measure the wave function in a sense needed to establish its reality simply because you can never transcend the statistical (indeterministic) aspect. There’s always a nonzero probability for projecting the system into an orthogonal subspace, and moreover there’s no way of telling whether this has happened (i.e., whether your measurement has failed). You suggest circumventing this objection by introducing a new criterion for reality (“if, with an arbitrarily small disturbance on a system, we can predict with probability arbitrarily close to unity the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity”). I don’t find this terribly convincing, for two reasons:

First, the criterion seems ad hoc, tailored precisely to what protective measurements can accomplish, and just so that the objection mentioned above can be circumvented. There is a well-established and well-motivated notion of reality based on (reliable) measurability, and I just don’t see how your criterion could be motivated in a similar way.

Second, we are not talking about making an arbitrarily precise measurement of some physical constant, like adding more and more digits to improve numerical precision. What we are talking about in the context of protective measurement is an unpredictable, unavoidable disturbance of the system (no protective measurement leaves the initial state unchanged) that may lead to a completely new (indeed orthogonal) quantum state. These are two fundamentally different types of disturbance (or lack of “precision”). If protective measurement allowed me to measure the initial wave function every time, plus or minus an arbitrarily small deviation from its initial value (something like |psi> + delta|phi>, where delta can be made arbitrarily small and |psi> is the initial state), then your criterion might become sensible. But protective measurement doesn’t do that.
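
Symbolically (my notation), the contrast is between

```latex
% Continuous imprecision (what the proposed criterion would seem to require):
|\psi\rangle \;\longrightarrow\; |\psi\rangle + \delta\, |\phi\rangle ,
  \qquad |\delta| \ \text{arbitrarily small};
% versus the actual outcome statistics of a protective measurement:
|\psi_{\mathrm{final}}\rangle =
  \begin{cases}
    |\psi\rangle & \text{with probability } 1 - \epsilon, \\
    |\phi_k\rangle \perp |\psi\rangle & \text{with probability } \epsilon .
  \end{cases}
```

Here epsilon can be made small, but the disturbance, when it occurs, is not.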

    As a side note, to reconstruct the wave function, we need to carry out many protective measurements. If, to become a meaningful condition for reality, a single measurement needs to be made essentially infinitely long, then how can we ever carry out more than one such measurement?

Finally, I do not understand your reply to the “scaling problem” I pointed out. (The problem, to recap, concerns the extraordinarily large number of protective measurements required to reconstruct the wave function.) You say that “in order to argue for the reality of the wave function in terms of protective measurements, it is not necessary to directly measure the wave function of a single quantum system, and measuring the expectation value of an arbitrary observable on a single quantum system is enough.” Why? The expectation value is just a number and, on its own, doesn’t tell us anything about the state of the system.

    So far my thoughts. Perhaps they will also help spark discussion among the participants.
