Quantum Oblivion and Hesitation

Viewing 9 posts - 1 through 9 (of 9 total)
    Eliahu Cohen

    Alongside the work on the TSVF with Aharonov, I have spent the last two years investigating the foundations of Interaction-Free Measurement (IFM) together with A.C. Elitzur. We have shown that the phenomena underlying IFM, which we have termed Oblivion and Hesitation, are in fact ubiquitous in Nature. We suggest these phenomena are best understood within time-symmetric formulations of QM such as the TSVF and the transactional interpretation. This is still a work in progress; comments and criticism are most welcome.

    1. E. Cohen, A.C. Elitzur, “Voices of silence, novelties of noise: Oblivion and Hesitation as origins of quantum mysteries” (2015), arXiv:1504.06241. http://arxiv.org/abs/1504.06241.

    2. A.C. Elitzur, E. Cohen, “Quantum oblivion: A master key for many quantum riddles,” International Journal of Quantum Information 12 (2014): 1560024, arXiv:1411.2278. http://arxiv.org/abs/1411.2278.

    Ken Wharton

    Hi Eliahu and Avshalom,

    Thanks for linking to the interesting papers! I need to process this Oblivion stuff a bit more before I respond directly to that aspect. But I do have a question concerning something you wrote in section 5.3 of the “Voices” paper:

    “If, in contrast, one allows causal relations to somehow go back and forth between the past event and its successors, much of the mystery dissolves. What appears to be nonlocal in space, becomes perfectly local in spacetime.”

    I was quite happy to read this – that last sentence, which you emphasized in the text, seems to me to be right on the mark.

    But I don’t understand how this view meshes with the Two State Vector formalism. In that formalism, for entanglement experiments, both the pre-selected *and* the post-selected wavefunctions live in configuration space. They are *not* spacetime-local in the sense you describe here.

    I guess I’ll leave my question there. You highlighted these two seemingly opposing ideas: the TSV formalism and the importance of spacetime-locality. How do you see these ideas fitting together in the multi-particle sector, perhaps down the road in some future theory?



    Mark Stuckey

    I read both papers, Eliahu. Thanks for making them available for discussion in this workshop. Let me begin by inquiring about your view of the block universe implications of this work.

    On page 1 of “Voices,” you seem open to the block universe ontology when you write, “It reformulates Oblivion within time-symmetric interpretations of QM, mainly Aharonov’s Two-State-Vector Formalism (TSVF).” TSVF explains space-like correlated outcomes that violate Bell’s inequality by allowing information about experimental outcomes (or detector settings) to be available to the entire history of the experimental process. Of course, this implies the “co-reality” or “co-existence” of the past, present and future, i.e., block universe. Cramer notes that the backwards-causal elements of his transactional interpretation (TI), for example, are “only a pedagogical convention,” and that in fact “the process is atemporal” (1986, 661). You, on the other hand, seem to adopt a meta-time view of the block universe when you write (p 2 of “Voices”), “Such ‘unphysical’ values are assumed to evolve along both time directions, over the same spacetime trajectory, eventually making some interactions ‘unhappen’ while prompting a single one to ‘complete its happening,’ until all conservation laws are satisfied over the entire spacetime region.”

    What is your view of the block universe implications of TSVF? Do you subscribe to meta-time? Or, do you view these QM processes in a block universe “atemporally” a la Cramer? Or, are you considering other options?


    Eliahu Cohen

    Dear Ken and Mark,

    Many thanks for the constructive remarks and for highlighting these two key issues within the TSVF, and probably within any other time-symmetric formulation of QM. I know your questions are closely related to your contributions to this workshop, so I will try to study them. In the meantime, rather than decisive answers, I’d like to share with you a few reflections:

    With regards to Ken’s question:
    Relying primarily on http://arxiv.org/abs/1206.6224 (which analyzes a variant of an EPRB experiment in which a set of weak measurements is performed between the preparation and the final projective measurements), our claim here was that the entangled particles were in fact “aware” of their actual future (separately, in spacetime, rather than in configuration space), which determined their weak values in the past. At this point, Aharonov and Elitzur differ. The former believes that this provides a fully local description of experiments performed at intermediate times (please see “Measurements on EPR States” in http://arxiv.org/pdf/1305.1615v1.pdf), while the latter thinks that the transactional zigzag is essential, i.e., that the future of the first particle comes back to the joint past, through which it affects the future of the second particle (locally all the way).
    Another way to tackle this difficulty with configuration space could be to analyze the problem in a time-symmetric Heisenberg representation, where each particle has a set of time-dependent deterministic operators; but then kinematic nonlocality is merely exchanged for dynamic nonlocality.
    Which of the three accounts do you prefer?
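    For readers unfamiliar with the formalism, the weak values mentioned above are given by the standard TSVF expression (a textbook definition, not specific to the cited experiment), for a system pre-selected in |ψ⟩ and post-selected in |φ⟩:

```latex
% Weak value of an observable A for a pre-selection |\psi\rangle
% and post-selection |\phi\rangle (standard TSVF definition):
A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}
```

    Since the denominator can be arbitrarily small, A_w may lie far outside the eigenvalue spectrum of A – the “unphysical” values referred to elsewhere in this thread.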
    By the way, this problem was recently discussed in:

    With regards to Mark’s question:
    First, I admit that the block universe and the TSVF seem to walk hand in hand. However, my colleagues and I try to avoid the block universe approach (I guess for psychological rather than physical reasons): there are definite pre- and post-selected states of the universe, but also some freedom in the present resulting from quantum uncertainty. In the Oblivion and Hesitation papers we indeed endorsed the meta-time approach, which seems to resonate better with the idea of a self-cancelling set of events. We cannot really imagine this happening in an atemporal universe, whereas with meta-time this hesitant behavior becomes more intuitive.
    However, we did not provide a mechanism for meta-time. One reason for this is, again, the same divergence:
    Avshalom has held the “Becoming” approach for many years now:
    http://a-c-elitzur.co.il/uploads/articlesdocs/Becoming7-mod.pdf, on which I believe he will elaborate in reply to your questions about the “Too Late Experiment”.
    I, on the other hand, tend to accept Yakir’s approach in http://arxiv.org/pdf/1305.1615v1.pdf – all instants are linked via a long chain of pre- and post-selections, where the post-selection of the Nth instant is maximally correlated with the pre-selection of the (N+1)th instant.
    I believe that in both accounts the arrow of time originates from the direction in which correlations are created, but the true “emergence” is related to this elusive meta-time, which we still do not fully understand. What do you think?

    Sorry for the lengthy replies and thanks again,

    Mark Stuckey

    I do recall reading Avshalom’s paper. I’ll respond to him about his view. I hadn’t seen Yakir’s paper, thanks for sharing it. Since that represents your view, let’s discuss it.

    On pp 7-8, he writes:

    The standard way however is non-covariant as far as the state description is concerned. Indeed, the collapse occurs both at Alice and at Bob at time T, i.e. simultaneously in the reference frame in which we chose to work. Had we chosen a different reference frame, the moment at which the collapse occurs for Bob’s particle could have been different. On the other hand, in our description, nothing happens to Bob’s particle when Alice performs a measurement, so no covariance problems arise.

    When he says, “nothing happens to Bob’s particle when Alice performs a measurement” is he speaking subjectively from Alice’s perspective a la RQM? Or, is he thinking objectively as in the preferred frame of a single universe? I don’t see how the next paragraph answers this question:

    We would like to emphasize however that the relativistic covariance at the level of wave-functions does not necessarily require to consider each moment of time a new universe; it is already present in a simpler version of time evolution, with a “single universe” but with two wave-functions, one propagating forward and the other backward in time [4].

    I’m missing something, hopefully in an exchange or two you can fix that 🙂



    Avshalom Elitzur

    Thank you for your helpful comments and the important issues they raise.
    As Eli and I are now commenting separately, we can allow ourselves to diverge somewhat from the essential points of agreement on which our papers are based. So this is also a dialogue between the authors.
    Indeed, both TI and the TSVF seem to be more compatible with the Block Universe view. However, in both cases their originators explicitly distance themselves from this view, even sympathizing, to a greater or lesser degree, with the notion of Becoming, namely some Bergsonian/Whiteheadian picture within which the “now” moving from past to future is not a subjective illusion but rather an essential aspect of time which still lies outside of physical theory. I already mentioned a recent paper of Cramer’s, http://arxiv.org/pdf/1503.00039, which presents this shift in his thinking. And of course Ruth Kastner’s version of TI suggests a revision in this direction. As for Yakir Aharonov, with whom we are in close contact, he often says that his TSVF is equally compatible with Block Universe and Becoming, but his own sympathy lies strongly with the latter. He is actually planning a paper on this issue.
    I am adding a PPT file of my talk on these issues – see especially slides 9-10.
    More comments are to follow.

    Mark Stuckey

    Thanks for sending the slides, Avshalom. It was difficult to see exactly what they meant in the absence of your corresponding presentation, so my questions and comments may miss the point. I appreciate your willingness to engage on this issue.

    The slides you point to (9-10) indicate a “revision of history,” akin to Cramer’s TI “pseudo-time” process for forming a transaction. The questions in my response to Cramer’s 2015 paper (posted in a Reply to your paper) are therefore relevant. Specifically, where is the “pseudo-time” process taking place? Certainly not in spacetime or we wouldn’t be talking about “pseudo-time.”

    I ask again: why is there any “sequence of stages” in “pseudo-time” at all? Any wave coming from a future absorber (in TI or TSVF) knows whether or not it received the emitted photon. So, why are all the possible absorbers sending advanced waves into the past to the Source emission? Why not have the actual absorber of the received photon send an advanced wave back in time to the Source? Then the Source knows exactly what to emit, because it knows how the particles will be measured and, better yet, it knows what the outcome will be!

    It seems to me that Ruth Kastner’s Possibilist TI has an answer for these questions, yet Cramer dismisses PTI as “unnecessarily abstract” while not providing an alternative. Do you have an alternative? Have you considered PTI? If not, why not?

    Thanks again for the discussion 🙂

    P.S. I’m biased towards PTI because it strikes me as the “perspectival” (Price’s term) counterpart to RBW. That is, I believe you can choose to do physics in either the 4D view or the time-evolved, embedded “perspectival” view of the Block Universe. In the 4D view, there is no need for a “time-evolved” or “retro-time-evolved” explanation if you instead provide a “global constraint,” as in Price’s Helsinki (toy) model or the adynamical global constraint of RBW. See, for example, the Geroch quote and the paragraph thereafter on p 136 of https://ijqf.org/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf. If you don’t want to lose a preferred present moment (Now), then you must somehow incorporate the subjective uncertainty of the future. A “pseudo-time” is not necessary for doing this, since relativity admits spacetime foliation via proper time on a family of geodesics (in all spacetimes of known relevance, anyway). That’s what PTI does. As I point out in my recent General Block Universe Discussion post http://www.ijqf.org/forums/topic/general-block-universe-discussion, there is a compromise associated with either view. The 4D view loses completeness with its necessary omission of Now, while the perspectival view loses coherence per the relativity of simultaneity, i.e., you need a preferred frame for the globalization of Now from individual worldlines. In my opinion, the 4D and perspectival views are complementary views of one and the same reality a la the figure-ground illusion; neither contradicts nor refutes the other.


    Avshalom Elitzur

    Dear Mark,
    I am suggesting what most people try to avoid: Let there be a higher time parameter along which history may be rewritten.
    Let’s deal with the immediate concern first: wouldn’t that higher time necessitate yet another higher time, and so on ad infinitum? I suggest not worrying too much about this at this stage. Here is a soothing hint from cosmology: the big bang model says that spacetime was created, right? Here too you have a temporal notion ascribed to spacetime itself. People avoid it either by invoking some primitive “pre-geometry,” or by dismissing as meaningless questions like “what happened before time was created?”, “what lies outside of space?”, etc.
    An infinity of times can be similarly avoided when you invoke Becoming. Consider a moving universal “now”-front, proceeding from past to future. On one side you have fixed past events, world-lines, curvatures, etc., just as in the Block Universe. On the other side of this “now”-front, however, not only are there no events, there is no spacetime either! Recall Mach: where there are no events, there is no space and no time either. So you add this growth of spacetime into the future to the standard expansion of the universe according to the Big Bang.
    Now let’s deal with this “now”-front. Does it have to extend along some absolute simultaneity plane? Certainly not. Different regions may have various rates of Becoming. Very likely gravity plays an important role here, as do special-relativistic effects. How exactly? I don’t know, and again we don’t have to worry about it yet. Suffice it that SR and GR offer us new degrees of freedom that may be helpful not only for a better quantum theory but, no less important, for a better understanding of relativity itself!
    Moreover, this “now”-front is by no means smooth. At the microscopic level, there may be several narrow “cracks” extending backwards into the past. These are quantum particles which remain isolated and have not yet interacted with the rest of the universe. In other words, “superposition” is now defined as the state of a particle – or even something larger, like Schrödinger’s cat – that, not having yet interacted with its environment, has not yet undergone Becoming. “Collapse,” then, would be the interaction with the environment that makes Becoming go backwards and fill the “crack.”
    The transactional account of EPR now follows naturally. These are two connected cracks that remain empty while the rest of the environment undergoes normal Becoming. Upon measurement, Becoming proceeds backwards into these cracks, even a few times back and forth, until all boundary conditions are satisfied, a la Cramer.
    One more suggestion, which I have offered to both Cramer and Kastner, and which I hope they will consider. Consider a case in which a photon goes through a beam-splitter towards two detectors, one close and the other distant. The near one does not click – namely, IFM. There is still plenty of time until the other half of the wave-function reaches the distant detector. What can you say about the photon’s state during this time interval? According to Cramer’s initial Block Universe version, there are two confirmation waves coming from the future, somehow negotiating between them in some “meta-time,” agreeing which detector will remain silent and which is going to click. However, during this time interval you still seem to have the freedom to interfere with the future interaction, so you have to invoke “no free will” and other maneuvers to avoid that. My suggestion, in contrast, is much simpler: just as there is a confirmation wave, there should be a rejection wave which accounts for non-clicks. This makes TI compatible with Becoming.
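    The single-beam-splitter scenario just described can be summarized in standard notation (a minimal sketch; the arm labels |1⟩ and |2⟩ are mine):

```latex
% Photon after a 50/50 beam splitter; arm 1 leads to the near detector,
% arm 2 to the distant one:
|\psi\rangle = \tfrac{1}{\sqrt{2}} \left( |1\rangle + |2\rangle \right)

% A non-click of the near detector at the expected arrival time
% updates the state to the distant arm alone:
|\psi\rangle \longrightarrow |2\rangle
```

    It is precisely this null outcome – a detector that does not click – that the proposed “rejection wave” is meant to account for.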
    More will follow shortly.
    Yours, Avshalom

    Mark Stuckey

    Thanks for the detailed reply, Avshalom.

    It’s a statement of ignorance of course, but I don’t know how to think about “pseudo-time” processes relative to our experience. A meta-time notion of “change” strikes me as absolutely meaningless. In contrast, the individual proper time frames of PTI are quite apprehensible and have everything you want (maybe). Is there something you don’t like about PTI?


    P.S. In Cramer’s 2015 paper, he complains that PTI contains needless abstraction and ends up with what, I assume, he considers to be much simpler, i.e., his “pseudo-time” process. You occasionally offer insightful adages, so you may appreciate Murphy’s Law No. 15: Complex problems have simple, easy-to-understand wrong answers. Here is my corollary: If you want to capture a meaningful notion of Becoming/Now/change in a retrocausal account, you can’t go cheap.
