Mark Stuckey

Forum Replies Created

    #5655
    Mark Stuckey
    Participant

    Here is an 8-min video making the point. It’s the last episode in a 10-part video series for a general audience explaining the result: https://youtu.be/kV3zGgdLJuw (“Conclusion: Modern Physics is Comprehensive and Coherent”).

    #5588
    Mark Stuckey
    Participant

    Hi Jeff,

    On p. 7 you write, “This intrinsic randomness allows new sorts of nonlocal probabilistic correlations for ‘entangled’ quantum states of separated systems.” We offer an explanation (couched in spacetime) of this fact in our post, Mysteries of QM and SR Share a Common Origin: No Preferred Reference Frame.

    For example, in the simple case of the spin singlet state, Bob and Alice must both measure +/– 1 (in units of hbar/2) no matter their SG magnet orientation (no matter their reference frame). The fundamental constraint (explanans) at work is conservation per NPRF (angular momentum in this case, but whatever the +/– 1 outcomes represent physically). If Alice were to measure +1 and Bob measured –0.3, say, in some other reference frame (some other angle for his SG magnets), then Alice would be in the “right” frame (outcome “represents” the full value of the angular momentum) and Bob would only be measuring a component of the “right/hidden/underlying” angular momentum of his particle in this particular trial (“representationally”). There is no “intrinsic randomness” here, we can account for the outcomes of this particular trial (“how these elements evolve dynamically over time”) via a classical, dynamical mechanism (torque from magnetic interaction). But, NPRF says that Bob must also measure +/– 1, which uniquely distinguishes the quantum probability from the classical probability creating the “intrinsic randomness” (responsible for the Tsirelson bound in this case). In other words, this means conservation per NPRF can’t hold on a trial-by-trial basis (“intrinsic randomness”), but only on average (“probabilistically”).
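    The trial-by-trial contrast above can be sketched as a toy simulation (my own illustration, not the formalism of our paper; the function names are mine). In the "classical" model Bob reads a component of Alice's full ±1 value, while in the quantum case both always record ±1 and conservation holds only on average. Both reproduce the same correlation, −cos θ:

```python
import math
import random

def quantum_trial(theta, rng):
    # QM singlet per the post: both observers ALWAYS record +/-1 (NPRF),
    # with anti-correlation occurring with probability cos^2(theta/2),
    # i.e., conservation holds only on average across trials.
    a = rng.choice([+1, -1])
    b = -a if rng.random() < math.cos(theta / 2) ** 2 else a
    return a, b

def classical_trial(theta, rng):
    # Classical picture: Alice reads the full +/-1 value; Bob, at relative
    # angle theta, reads a component of that same hidden angular momentum
    # (a fractional outcome like -0.3, never required to be +/-1).
    a = rng.choice([+1, -1])
    return a, -a * math.cos(theta)

def correlation(trial, theta, n=200_000, seed=1):
    rng = random.Random(seed)
    return sum(a * b for a, b in (trial(theta, rng) for _ in range(n))) / n

# Both models give <ab> = -cos(theta); only the quantum one has
# trial-by-trial randomness in Bob's +/-1 outcomes.
theta = math.radians(60)
qc = correlation(quantum_trial, theta)
cc = correlation(classical_trial, theta)
```

Both averages come out at −cos 60° = −1/2, which is the sense in which conservation per NPRF holds "probabilistically" rather than trial-by-trial.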

    As viewed spatiotemporally, this becomes the origin of your view that “quantum mechanics should be understood probabilistically, as a new sort of non-Boolean probability theory, rather than representationally, as a theory about the elementary constituents of the physical world and how these elements evolve dynamically over time.” You might say that our view is “Bubism” in spacetime.

    Mark

    #5284
    Mark Stuckey
    Participant

    Everyone,
    Have a look at this Physics Forums Insight to see our take on Wigner’s friend.

    Richard,
    When you get a chance, let us know if we have correctly characterized your view in that PF Insight. I can make changes anytime.

    Mark

    #5277
    Mark Stuckey
    Participant

    Thnx for the detailed response, Richard. Let me see if I totally understand it.

    The state given by your Eq 13 applies to any of the three possibilities for the definite, single outcomes recorded by Xena and Yvonne in one world, i.e., heads- or tails- or tails+, respectively, prior to Zeus and Wigner making their measurements.

    Eq 13 says that if Zeus measures z he can obtain OK, which is compatible with any of the three single recorded values heads- or tails- or tails+. If that happens and Wigner subsequently measures y, Wigner will obtain + (due to quantum interference). But, that means Yvonne’s single recorded outcome is and always was + (no rewriting history, no retrocausality). But, that violates our assumption that Eq 13 and Zeus’s OK outcome apply to ANY of Xena/Yvonne’s three definite, single outcomes in one world because two of the three single recorded values heads- or tails- or tails+ contain – for Yvonne’s recorded outcome.

    You avoid this conclusion by saying Zeus could measure z and obtain OK followed by Wigner measuring y and obtaining + while Yvonne’s recorded outcome is actually -. In other words, it’s simply the case that Wigner’s measurement outcome of Yvonne’s y measurement record does not agree with Yvonne’s y measurement record.

    Is that your claim?

    #5272
    Mark Stuckey
    Participant

    Hi Richard,
    I much prefer your presentation of FR in Quantum Theory and the Limits of Objectivity (2018), so I will refer to that. Looking at your Eq (13) and understanding that there exists an objective fact of the matter about what Xena and Yvonne have recorded for their measurements (h or t and + or -, respectively), it seems unavoidable that what Wigner and Zeus decide to measure bears on Xena and Yvonne’s records (what you refer to as “retrocausality”).

    Suppose we prepare many states Psi per your Eq (13) and collect all of the same X and Y cases together for W and Z to measure. Eq (13) says we could have collected h- or t+ or t- and it’s up to W and Z to figure out which one we picked (of course, this could be self-selected by X and Y, too). Because it’s an objective fact of the matter, we can use this to replace counterfactual measurements. Now, suppose Z decides to measure z and obtains OK. Then W decides to measure y, so he has to obtain +. From Eq (13) we then know (as you point out) that our fact of the matter for this collection of trials is t+.

    They then make measurements on another Psi in the collection, just to make sure. This time W measures w and obtains OK, then Z decides to measure x, so he has to obtain h. From Eq (13) we then know (as you point out) that our fact of the matter for this collection of trials is h-, contrary to what W and Z determined by deciding to make different measurements.

    It looks like what Z and W decide to measure determines an otherwise objective fact established before Z and W started making measurements. Do you agree? If not, why not?

    #5258
    Mark Stuckey
    Participant

    Thnx, Richard. I figured that was the answer, but I wanted to make sure before I fashioned a response (forthcoming).

    #5247
    Mark Stuckey
    Participant

    No one in the topic on Frauchiger and Renner (FR) “Quantum theory cannot consistently describe the use of itself” (2018) answered this question, so I’ll post it here.

    FR talk about a measurement of |h> – |t> by Wbar on the isolated lab Lbar. What does this measurement mean? If Lbar is a quantum system for Wbar, then all possible Hilbert space bases obtained via rotation from the basis |h>,|t> correspond to some physical measurement and the eigenvalues correspond to the physical measurement outcomes. I understand what such rotated bases and outcomes for spin measurements mean in terms of up-down results for relatively rotated SG magnets. Would someone please describe the measurement process and outcomes corresponding to the Wbar measurement of |h> – |t> on Lbar? Clearly, it’s not merely “opening the door and peeking inside,” as that would simply be a measurement in the original |h>,|t> basis. Right?
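    To make the formal side of the question concrete, here is a minimal sketch (my own toy model of Lbar as a single qubit, not FR's full construction) of what the Born rule says for a measurement in the rotated basis (|h> − |t>)/√2, (|h> + |t>)/√2:

```python
import math

# Toy model: the lab Lbar as a two-level system with "record" basis |h>, |t>.
h = (1.0, 0.0)
t = (0.0, 1.0)

def superpose(a, psi, b, phi):
    # a|psi> + b|phi> for real amplitudes in the |h>, |t> basis
    return (a * psi[0] + b * phi[0], a * psi[1] + b * phi[1])

minus = superpose(1 / math.sqrt(2), h, -1 / math.sqrt(2), t)  # (|h> - |t>)/sqrt(2)
plus = superpose(1 / math.sqrt(2), h, 1 / math.sqrt(2), t)    # (|h> + |t>)/sqrt(2)

def born_prob(outcome, state):
    # |<outcome|state>|^2 for real amplitudes
    return (outcome[0] * state[0] + outcome[1] * state[1]) ** 2

# A definite record |h> gives 50/50 outcomes in the rotated basis, so the
# rotated measurement cannot be "opening the door," which is diagonal in |h>, |t>.
p_minus, p_plus = born_prob(minus, h), born_prob(plus, h)
```

The formal prediction is unambiguous (50/50 for a definite record); the open question in the post, namely what physical process realizes this basis on an entire lab, is untouched by this sketch.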

    Thnx in advance for the answer,
    Mark Stuckey

    #5239
    Mark Stuckey
    Participant

    I read Frauchiger and Renner (FR) “Quantum theory cannot consistently describe the use of itself” (2018) and I’ve read several responses in this workshop, but I have a question that has not been answered.

    FR talk about a measurement of |h> – |t> by Wbar on the isolated lab Lbar. What does this measurement mean? If Lbar is a quantum system for Wbar, then all possible Hilbert space bases obtained via rotation from the basis |h>,|t> correspond to some physical measurement and the eigenvalues correspond to the physical measurement outcomes. I understand what such rotated bases and outcomes for spin measurements mean in terms of up-down results for relatively rotated SG magnets. Would someone please describe the measurement process and outcomes corresponding to the Wbar measurement of |h> – |t> on Lbar? Clearly, it’s not merely “opening the door and peeking inside,” as that would simply be a measurement in the original |h>,|t> basis. Right?

    Thnx in advance for the answer,
    Mark Stuckey

    #3317
    Mark Stuckey
    Participant

    Thnx for the reply, Yehonatan. You don’t need to concern yourself with the details of our approach, as you noted it doesn’t bear directly on your specific motives. I just wanted you to be aware of the fact that your 4D global perspective has company 🙂

    #3309
    Mark Stuckey
    Participant

    Yehonatan, I read your paper. Is it published someplace, so we can reference it? Your approach shares many of the values found in the Relational Blockworld. See https://ijqf.org/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf.

    #3295
    Mark Stuckey
    Participant

    Shan, there is no superdeterminism in retrocausality with global constraints. Superdeterminism entails a time-evolved story per the Newtonian schema (NS), i.e., invoking a physical mechanism that “causes” the experimentalist to make certain choices. In Wharton’s Lagrangian schema (LS), the explanation is spatiotemporally holistic, e.g., Fermat’s Principle. Time-evolved causal stories of the NS explanation are secondary to the 4D global constraints of the LS explanation in retrocausality. Price & Wharton address the issue of free will and “interventionalist causality” in retrocausality in Price, H., & Wharton, K.: Disentangling the Quantum World. Entropy 17, 7752-7767 (2015) http://arxiv.org/abs/1508.01140.

    #3291
    Mark Stuckey
    Participant

    Ken, why do you think “realist psi-epistemic view” is “an interesting choice of words”? The title of our last RBW paper in IJQF was “Relational blockworld: Providing a realist psi-epistemic account of quantum mechanics” which we wrote after extensive correspondence with you.

    #3287
    Mark Stuckey
    Participant

    As Price & Wharton point out, once you consider QM to be giving 4D distributions in spacetime (Lagrangian schema), rather than time-evolved distributions in configuration space (Newtonian schema), mysteries like the MP are resolved trivially. This is a psi-epistemic view.

    #2934
    Mark Stuckey
    Participant

    Hi Ian,

    What’s wrong with HV’s? What do you find objectionable about the spacetimesource element and adynamical global constraint of RBW, for example? Or Ken’s classical fields and L = 0 constraint?

    Just curious 🙂

    #2900
    Mark Stuckey
    Participant

    Like Ken, I don’t understand why people use (tacitly or explicitly) a Block Universe (BW) for retrocausality then add a “pseudo-time” or meta-time to artificially create a dynamical notion of “causation.” If you want a robust Now/Becoming, it will cost you lots more formal machinery than a bare BW. But, if you’re willing to pay the price, you can have it, e.g., PTI.

    Ian, I don’t understand why you think randomness is difficult to account for in a BW. I suspect you see something I don’t. I picture identical subsets of the BW corresponding to the repeated trials of an experiment. The goal is then to account for the distribution of outcomes shown in those subsets. The BW distribution function is found using probability amplitudes computed with the path integral (a BW computation). Can you explain what my understanding lacks that makes the process mysterious/problematic for you?
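    As a toy version of that BW computation (my sketch, not RBW's actual formalism): sum probability amplitudes over paths, square, and read the result as the relative frequency of an outcome among the repeated-trial subsets of the BW.

```python
import cmath
import math

# Toy "BW computation": the distribution over repeated-trial subsets comes
# from summing path amplitudes and squaring; no time-evolved story is needed.
def outcome_probability(path_phases):
    amp = sum(cmath.exp(1j * phi) for phi in path_phases)
    return abs(amp) ** 2 / len(path_phases) ** 2  # normalized to [0, 1]

# Two interfering paths (double-slit-like): the relative phase fixes how
# often that outcome appears among the BW subsets.
p_constructive = outcome_probability([0.0, 0.0])     # paths in phase
p_destructive = outcome_probability([0.0, math.pi])  # paths out of phase
```

Constructive interference gives probability 1, destructive gives 0; the point is that the whole calculation is over 4D paths, not over states evolving in time.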

    #2831
    Mark Stuckey
    Participant

    Hi Ruth,

    I’ve seen Cramer’s “pseudo-time” process of TI mentioned in several topics of this forum being proposed as a way of introducing change and Becoming to the BW (blockworld). I have no idea what meta-time change means empirically and neither, it seems, do they. In my opinion, if you want an empirically/experientially meaningful notion of change and Becoming in a retrocausal account, you want PTI. I’ve said as much several times in other topics without once having it acknowledged. I can only assume there is something they dislike about PTI, but they’re being too polite to tell me. Do you know what that is?

    Sorry, I’m a bit autistic so I need people to be direct.

    Mark

    #2829
    Mark Stuckey
    Participant

    Thanks for the detailed reply, Avshalom.

    It’s a statement of ignorance of course, but I don’t know how to think about “pseudo-time” processes relative to our experience. A meta-time notion of “change” strikes me as absolutely meaningless. In contrast, the individual proper time frames of PTI are quite apprehensible and have everything you want (maybe). Is there something you don’t like about PTI?

    Best,
    Mark

    P.S. In Cramer’s 2015 paper, he complains that PTI contains needless abstraction and ends up with what, I assume, he considers to be much simpler, i.e., his “pseudo-time” process. You occasionally offer insightful adages, so you may appreciate Murphy’s Law No. 15: Complex problems have simple, easy-to-understand wrong answers. Here is my corollary: If you want to capture a meaningful notion of Becoming/Now/change in a retrocausal account, you can’t go cheap.

    #2824
    Mark Stuckey
    Participant

    Hi Bob,

    I read Ch 24 and I don’t see what you’re proposing for an ontology that accounts for the Mermin device outcomes. All I see are principles of quantum mechanical formalism, which don’t provide any ontology. What physically, not formally, explains the correlations?

    Thanks,
    Mark

    #2800
    Mark Stuckey
    Participant

    Hi Pete,

    I think I’m close to understanding faithfulness. Let me respond to your last paragraph so you can correct me as necessary.

    Retrocausality avoids the non-locality conclusion of Bell inequality violations by denying statistical independence (SI). It does this by providing a causal mechanism that hides a true statistical dependence (where we thought we had SI), thus Wood and Spekkens claim that such causal mechanisms are fine-tuned a la superdeterminism (and therefore, undesirable). Is this correct?

    #2799
    Mark Stuckey
    Participant

    Hi Pete,

    I agree completely with your premise that “since it is also the case that we occupy this reality and we have been able to provide rather successful causal and dynamical models representing the phenomena around us, then there must be some sort of story explaining how we can do this given that reality is actually a 4D block obeying said constraints.” This is essentially (and formally) the claim that correspondence with successful existing theories is required of all new theories. However, this is a bit tricky for a new theory (like RBW) that underwrites quantum physics (successful existing theory) because it’s precisely quantum physics that we want to interpret. So, if Bob is looking for a dynamical interpretation of quantum physics and is given an adynamical theory underwriting quantum physics, then Bob is not going to be satisfied.

    I also agree completely that “we don’t do this by giving some competing dynamical story in terms of time evolving laws. What we’re after is a dynamical story that arises as a result of a combination of the global constraints and our spatiotemporally embedded perspective,” unless of course you’re using the formalism of that competing dynamical story to make formal correspondence with existing theories. In that case, you might then also satisfy Bob.

    Finally, I agree completely with you about the epistemic (rather than ontic) view of retrocausality in a BW with a global constraint: “That this story is dynamical is a feature of how we tell it, not as a result of some objective ‘dynamicism’ in reality. And I think this is the most fruitful way to understand retrocausality.” But, I suspect Bob would disagree, as he seems to desire an ontic basis for retrocausality.

    Thus, there doesn’t seem to be any disagreement between us, but does Bob have to abandon his desideratum? I don’t think so. I think it may be possible to construct a corresponding (ontic) dynamical model that isn’t superfluous relative to the 4D adynamical global constraint model, e.g., PTI and RBW. These two models aren’t ontologically equivalent, but they are complementary. It just comes down to how you view consciousness and experience.

    For example, if you adopt an “Eastern” worldview a la Nisargadatta Maharaj (author of “I Am That”), the dynamical, time-evolved experience is not fundamental, as seen in this quote (442, 1973):

    ‘Who am I’. The identity is the witness of the person and sadhana consists in shifting the emphasis from the superficial and changeful person to the immutable and ever-present witness.

    In that view, accounts in 4D with adynamical global constraints are (probably) fundamental to 3D time-evolved accounts with dynamical laws. However, in the “Western” worldview, physics is done by 3D time-evolved beings who can imagine a 4D perspective (and alter their perceptions through meditation, for example). But, just because we can imagine it doesn’t make it ontic. Thus, in the “Western” worldview, the 3D time-evolved accounts with dynamical laws are fundamental to accounts in 4D with adynamical global constraints. Note: A mere dynamical dressing for a 4D account a la “pseudo-time” processes doesn’t constitute a fundamental dynamical account. I’m thinking, again, of something like the relationship between PTI and RBW where the formal mechanisms of PTI aren’t superfluous relative to those of RBW. That’s the sense in which I mean they’re complementary and not equivalent.

    Hopefully, I haven’t strayed too far from rigorous discourse 🙂

    #2784
    Mark Stuckey
    Participant

    The arXiv paper has been rewritten in the form it was just today submitted to New J Phys (where Aharonov published his quantum Cheshire Cat proposal and Correa et al. published their qCC paper). Yesterday, Nature Comm said they would not publish our Brief Communication Arising on the refutation because it’s based on an unpublished technical point (and so could not be conveyed in the 600-word limit of a BCA). Rather, they advised us to seek publication elsewhere of the full 15-page explanation of our claim — linear interaction is required to make the qCC inference from the weak values, i.e., quadratic interaction kills qCC. Here again is the link to the arXiv paper http://arxiv.org/abs/1410.1522

    #2771
    Mark Stuckey
    Participant

    Hi Pete,

    If you don’t see the connection between my footnote and faithfulness, it’s likely because I don’t properly understand faithfulness and there is no such connection. I was thinking faithfulness implies no ad hoc causal mechanisms, such as fine-tuned future boundary conditions. A time-like causal link that isn’t directed constitutes fine tuning. That’s what I was thinking, but I’m clearly confused 🙂

    Concerning your comments directed at the footnote per se, I agree that an adynamical explanation in 4D doesn’t preclude a corresponding 3D time-evolved explanation in general. However, in the case of EPRB correlations, it’s not time-evolved but retro-time-evolved explanation that is invoked to “save the appearances” (of dynamism). Given the co-reality of the present and future required to render retrocausal explanation in, for example, TI and TSVF, one must invoke “pseudo-time” processes to create a dynamical story. Where is this process taking place? If you instead have a “global constraint” (Huw’s language and ours) that explains the distribution of the 4D ontological entities in the BW (e.g., TI’s completed transactions), then the “pseudo-time” process is certainly superfluous from a physics standpoint. That is, a dynamical explanation motivated by our dynamical perspective is superfluous when it requires extraneous mechanisms relative to an empirically equivalent adynamical explanation. As Ken points out in his essay, and you acknowledge in your post, the desire for dynamical explanation is based on our biased dynamical perspective. Nature doesn’t seem to care about our biases, e.g., our Earth-bound perspective clearly indicates Earth is the center of the universe and our low-velocity perspective clearly indicates that velocities add without limit. I haven’t seen TI or TSVF mention a corresponding “global constraint,” but RBW provides one (cf. the RBW and the TSVF explanations of the Danan et al. experiment starting on p 131, section 2, of https://ijqf.org/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf).

    That’s not to say one can’t construct a robust time-evolved counterpart to RBW, I think PTI might be exactly that. And, you do actually gain something explanatory in PTI that you don’t have in RBW, so its additional mechanisms aren’t superfluous, i.e., PTI contains a robust model of Now (experience of a preferred present moment). Some (most?) physicists argue that physics needn’t bother attempting to model Now, but I think the lack of a robust Now in the BW creates a serious objection to BW that is arguably justified, as I posted in the General Block Universe Discussion.

    I’ll stop prattling on and let you respond 🙂

    #2728
    Mark Stuckey
    Participant

    Ken,
    On p 8 of http://arxiv.org/pdf/1503.00039.pdf Cramer writes, “The transaction that forms after the emitter-absorber offer-confirmation exchange process goes to completion is the real object, what we would call the ‘particle’ that has been transferred from emitter to absorber.” Now look at Figures 3 and 5, and you’ll see that he’s using a BW.

    The way I see it, he’s just like us in that he assumes a fundamental ontological entity that spans space and time then seeks an explanation for its distribution in the BW. In your case, that fundamental ontological entity is the classical field and in RBW it’s the spacetimesource element. In your case, the explanatory mechanism is L = 0, for RBW it’s the adynamical global constraint, and for TI it’s the “pseudo-time” process. [In the classification scheme I used above, I’d say you’re in the “global constraint” camp.] So, we have different fundamental ontological entities and different explanatory mechanisms, but we’re all playing in the BW.

    #2716
    Mark Stuckey
    Participant

    Ken,
    I was trying to be exhaustive concerning the view of QM in a BW. To do that I have to include TI with its “pseudo-time” processes. You agree TI is in a BW, right? And having mentioned TI with its “pseudo-time” processes, I had to mention PTI with its proper time evolving BW (of sorts). This is precisely the point of PTI’s departure from TI.

    Bob,
    To follow up on Ken’s post, rather than think about multiple BW’s each corresponding to a different experimental outcome, I just imagine subsets of one BW, each subset representing a different trial (with outcome) in an experiment that’s been repeated many times. Thus, the probability of QM is just giving a spatiotemporal frequency of occurrence for these subsets per normal physics.

    Hope I’m not confusing the issue.

    #2714
    Mark Stuckey
    Participant

    Hi Bob,

    Retrocausal approaches were designed to explain (among other things) space-like separated correlated experimental outcomes that violate Bell’s inequality without resorting to superluminal mechanisms. I don’t see how such outcomes would be explained by CH from reading your attachment. For example, how does CH explain the outcomes of the Mermin device attached?

    Thanks,
    Mark

    #2697
    Mark Stuckey
    Participant

    Hi Bob,

    First let me say that I’ve been using the term “blockworld” (BW) for years and was only this year told by my philosopher of science colleague that it’s now “block universe.” Since I’m speaking with a fellow physicist, I’m going to revert to BW 🙂

    I received my PhD in general relativity (GR) and have taught it many times, so I perhaps forget that many (most?) physicists have no formal training in GR. But, what you said is right on the money: GR assumes a spacetime manifold with a certain topological structure upon which one associates a metric and stress-energy tensor (SET). Einstein’s equations then provide a “consistency criterion” that the metric and SET must jointly satisfy. Accordingly, there are many such “self-consistent” combinations of metric and SET, while physics only uses a few. Special relativity (SR) then applies in the locally flat (M4) regions of the curved spacetime manifold of GR (so that’s where you can apply your Lorentz transformations).

    SR is where one encounters the relativity of simultaneity (RoS), although it can also be introduced to curved GR spacetimes that allow for global foliations (not all do). [In the case of curved spacetime, however, it may be that observers on a surface of simultaneity are moving with respect to each other, e.g., surfaces of homogeneity in big bang cosmology models. And, such observers occupy different M4 frames, so SR can’t be used between them.] It’s RoS that implies (but does not entail) BW, as I explain in my handout attached above (SR-Example-Phy200.pdf). That example is written for intro physics students, so if you read it carefully, I’m sure you’ll understand how RoS implies BW. If you want a layperson’s intro to BW, just watch the 11-min segment from 17:55 to 28:55 of https://www.youtube.com/watch?v=NcOBtnU-zSA
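    For readers without the handout, RoS can be shown with one line of arithmetic: the Lorentz transform of a time separation. This is a minimal numeric sketch (units with c = 1; the function name is mine):

```python
import math

# Relativity of simultaneity in units where c = 1
# (times in seconds, distances in light-seconds).
def boosted_dt(dt, dx, v):
    # Time separation of two events in a frame moving at speed v along x:
    # dt' = gamma * (dt - v * dx)
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return gamma * (dt - v * dx)

# Two events simultaneous in the lab (dt = 0) but 1 light-second apart
# are separated by -0.75 s for an observer moving at v = 0.6:
dt_prime = boosted_dt(0.0, 1.0, 0.6)
```

Because the events are simultaneous in one frame and not in another, no frame-independent "Now" can be carved out of the spacetime events, which is the step from RoS toward BW.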

    There are at least three general ways people accommodate the stochastic nature of QM in a BW: two are retro-time-evolved stories told from the perspective of an observer inside the BW (“dynamical” approach, Price’s “perspectival view”) and the other is an adynamical approach per a view “outside” of 4D spacetime using a “global constraint” of some sort. Examples of the two dynamical approaches are Cramer’s Transactional Interpretation http://arxiv.org/pdf/1503.00039 that invokes “pseudo-time” processes (processes in a time that isn’t housed in spacetime, whatever that means) and Kastner’s Possibilist TI http://www.ijqf.org/forums/topic/possibilist-transactional-interpretation which modifies TI so that the processes refer to proper time along observers’ worldlines, i.e., a time that actually resides in spacetime. RBW with its adynamical global constraint is an example of the adynamical approach, as is Price’s Helsinki (toy) model. I’m not sure how to classify the Two State Vector Formalism; they talk as if they’re using a “pseudo-time” a la Cramer, but Aharonov has proposed a different notion of time that he’s hoping is more along the lines of Becoming rather than BW http://arxiv.org/pdf/1305.1615v1.pdf.

    Let me know if you require further clarification!

    Mark

    #2686
    Mark Stuckey
    Participant

    Thanks for sending the slides, Avshalom. It was difficult to see exactly what they mean in the absence of your corresponding presentation, so my questions and comments may miss the point. I appreciate your willingness to engage on this issue.

    The slides you point to (9-10) indicate a “revision of history,” akin to Cramer’s TI “pseudo-time” process for forming a transaction. The questions in my response to Cramer’s 2015 paper (posted in a Reply to your paper) are therefore relevant. Specifically, where is the “pseudo-time” process taking place? Certainly not in spacetime or we wouldn’t be talking about “pseudo-time.”

    I ask again, Why is there any “sequence of stages” in “pseudo-time” at all? Any wave coming from a future absorber (in TI or TSVF) knows whether or not it received the emitted photon. So, why are all the possible absorbers sending advanced waves into the past to the Source emission? Why not have the actual absorber of the received photon send an advanced wave back in time to the Source? Then, the Source knows exactly what to emit, because it knows how the particles will be measured and better yet, it knows what the outcome will be!

    It seems to me that Ruth Kastner’s Possibilist TI has an answer for these questions, yet Cramer dismisses PTI as “unnecessarily abstract” while not providing an alternative. Do you have an alternative? Have you considered PTI? If not, why not?

    Thanks again for the discussion 🙂

    P.S. I’m biased towards PTI because it strikes me as the “perspectival” (Price’s term) counterpart to RBW. That is, I believe you can choose to do physics in either the 4D view or the time-evolved, embedded “perspectival view” of the Block Universe. In the 4D view, there is no need for “time-evolved” or “retro-time-evolved” explanation if you instead provide a “global constraint” as in Price’s Helsinki (toy) model or the adynamical global constraint of RBW. See, for example, the Geroch quote and paragraph thereafter on p 136 of https://ijqf.org/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf. If you don’t want to lose a preferred present moment (Now), then you must somehow incorporate the subjective uncertainty of the future. A “pseudo-time” is not necessary for doing this, since relativity admits spacetime foliation via proper time on a family of geodesics (in all spacetimes of known relevance anyway). That’s what PTI does. As I point out in my recent General Block Universe Discussion post http://www.ijqf.org/forums/topic/general-block-universe-discussion, there is a compromise associated with either view. The 4D view loses completeness with its necessary omission of Now, while the perspectival view loses coherence per relativity of simultaneity, i.e., you need a preferred frame for the globalization of Now from individual worldlines. In my opinion, the 4D and perspectival views are complementary views of one and the same reality a la the figure-ground illusion; neither contradicts nor refutes the other.

    #2671
    Mark Stuckey
    Participant

    I see physics as trying to construct a single model (called reality) to account coherently for disparate subjective experiences; not all experiences of course, just those that are (assumed) common, (approximately) repeatable and can be represented by laws (rules of regularity). What does special relativity (SR) have to tell us about this endeavor?

    Consider the two boys and three girls in the example attached (you don’t have to read it, just look at Figures 1, 4, 5, & 7). The boys say they are twins, but must conclude the girls are different ages, while the girls say they are triplets and must conclude the boys are different ages. Likewise, the boys and girls disagree as to how far apart they are in space. If you constructed a model to include a meaningful Now per each of these M4 foliations, the two foliated sets of experiences would be incongruous. The lesson of SR is: if you want to include Now in a meaningful way, you have to give up on the goal of doing so with a single, coherent model (of reality). If you don’t want to give up on that goal, you must abandon a meaningful notion of Now to accommodate both sets of foliated experience (relativity of simultaneity, Block Universe). So, SR forces you to choose between coherence and completeness. Most physicists avoid this dilemma because they don’t believe Now is of concern to physics. I think it’s one basis for people’s objections to the Block Universe. Those who recognize Now as germane to subjective experience and believe physics is in the business of modeling such common elements of experience are going to object to Block Universe as incomplete.

    #2669
    Mark Stuckey
    Participant

    I hadn’t seen Cramer’s latest paper, thanks for sending it. If that paper reflects your current view on time and the Block Universe, maybe we could discuss it here? My response is rather protracted, so I’ve attached it as a pdf file here.

    #2668
    Mark Stuckey
    Participant

    I do recall reading Avshalom’s paper. I’ll respond to him about his view. I hadn’t seen Yakir’s paper, thanks for sharing it. Since that represents your view, let’s discuss it.

    On pp 7-8, he writes:

    The standard way however is non-covariant as far as the state description is concerned. Indeed, the collapse occurs both at Alice and at Bob at time T, i.e. simultaneously in the reference frame in which we chose to work. Had we chosen a different reference frame, the moment at which the collapse occurs for Bob’s particle could have been different. On the other hand, in our description, nothing happens to Bob’s particle when Alice performs a measurement, so no covariance problems arise.

    When he says, “nothing happens to Bob’s particle when Alice performs a measurement” is he speaking subjectively from Alice’s perspective a la RQM? Or, is he thinking objectively as in the preferred frame of a single universe? I don’t see how the next paragraph answers this question:

    We would like to emphasize however that the relativistic covariance at the level of wave-functions does not necessarily require to consider each moment of time a new universe; it is already present in a simpler version of time evolution, with a “single universe” but with two wave-functions, one propagating forward and the other backward in time [4].

    I’m missing something, hopefully in an exchange or two you can fix that 🙂

    Mark

    #2613
    Mark Stuckey
    Participant

    I read both papers, Eliahu. Thanks for making them available for discussion in this workshop. Let me begin by inquiring about your view of the block universe implications of this work.

    On page 1 of “Voices,” you seem open to the block universe ontology when you write, “It reformulates Oblivion within time-symmetric interpretations of QM, mainly Aharonov’s Two-State-Vector Formalism (TSVF).” TSVF explains space-like correlated outcomes that violate Bell’s inequality by allowing information about experimental outcomes (or detector settings) to be available to the entire history of the experimental process. Of course, this implies the “co-reality” or “co-existence” of the past, present and future, i.e., block universe. Cramer notes that the backwards-causal elements of his transactional interpretation (TI), for example, are “only a pedagogical convention,” and that in fact “the process is atemporal” (1986, 661). You, on the other hand, seem to adopt a meta-time view of the block universe when you write (p 2 of “Voices”), “Such ‘unphysical’ values are assumed to evolve along both time directions, over the same spacetime trajectory, eventually making some interactions ‘unhappen’ while prompting a single one to ‘complete its happening,’ until all conservation laws are satisfied over the entire spacetime region.”

    What is your view of the block universe implications of TSVF? Do you subscribe to meta-time? Or, do you view these QM processes in a block universe “atemporally” a la Cramer? Or, are you considering other options?

    Mark

    #2612
    Mark Stuckey
    Participant

    I love your twist on EPRB, Avshalom. Entangling the detector direction via beam splitters and post selection is interesting. [Aside: I think you have a typo on p 5, second paragraph. Don’t you mean to “Place detectors on the remaining three ‘up’ SGM exits” instead of “the remaining three ‘down’ SGM exits” in order to complete the measurement for each particle? Also, check “Sourc” in Figure 2.]

    Let me inquire concerning your view of the block universe implications of this work.

    On page 1, you seem open to the block universe ontology when you characterize TI and TSVF as “novel interpretations of quantum mechanics.” Both of these interpretations explain space-like correlated outcomes that violate Bell’s inequality by allowing information about experimental outcomes (or detector settings) to be available to the entire history of the experimental process (and in some cases, beyond!). Of course, this implies the “co-reality” or “co-existence” of the past, present and future, i.e., block universe.

    On page 1, you say these interpretations “render non-temporality the key for understanding QM’s other unique features.” That seems to agree with Cramer who notes that the backwards-causal elements of his theory are “only a pedagogical convention,” and that in fact “the process is atemporal” (1986, 661). But, immediately thereafter you write, “Indeed, once effects are allowed to go sometimes backwards in time, …” and on page 8 you speculate that “causal effects go on both time directions” in the quantum realm. These comments sound like you’re viewing the process in some meta-time.

    What is your view of the block universe implications of these interpretations? Do you subscribe to meta-time? Or, do you view these QM processes in a block universe “non-temporally?” Or, are you considering other options?

    Uncle Mark

    #2582
    Mark Stuckey
    Participant

    Glad to see you here, Ken!

    Of course, I believe 4D spacetime can be used to model classical causality. However, the phenomena under consideration here are space-like separated correlations that violate Bell’s inequality, and those phenomena violate classical causality (as articulated by Wood and Spekkens, for example). Obviously, I’m not “denying any role of future boundary conditions as a constraint on what’s happening now.” That is germane to the path integral approach which we use in RBW. And, I agree that if I change an equipment setting, I definitely acted “causally” in a 4D situation (experimental process). But, the causation in that case is simply to instantiate a particular trial. I don’t take that as causally related to the correlated outcomes per se. In other words, if someone asked me, “Why did you get agreement in 25% of the EPRB trials in which the SG magnets were 120 degrees apart?” I wouldn’t answer*, “Because I chose to set the SG magnets 120 degrees apart.” My decision to set up that particular configuration is just not relevant to explaining the outcomes. So, I agree with the term “deflationary intervention” in that case.
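    [For readers keeping score, that 25% figure is the standard quantum prediction in Mermin’s presentation of EPRB, where identical settings always yield identical outcomes: the agreement probability is cos²(θ/2). A one-line check:]

```python
import math

# Quantum prediction for agreement in a Mermin-style EPRB setup
# (identical settings => identical outcomes): P(agree) = cos^2(theta/2)
theta_deg = 120.0
p_agree = math.cos(math.radians(theta_deg / 2.0)) ** 2
print(round(p_agree, 6))  # 0.25
```

    [Any local hidden-variable account of that same setup predicts agreement at least 1/3 when the settings differ, which is Mermin’s version of the Bell inequality violation.]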

    That being said, I do acknowledge that the classification of RBW as “retrocausal” is not for me to decide. That is decided by the “retrocausal school” where you are a leader. So, if you say RBW is retrocausal, then it’s retrocausal. The bottom line is, we should definitely acknowledge we’re in the same camp and avoid destructive infighting. We’re in a small group who believes that future boundary conditions are explanatory for violations of Bell’s inequality. Exactly how the use of future boundary conditions is viewed should not keep us from supporting each other. Constructive criticism is of course acceptable 🙂

    Hopefully my voicing a difference of opinion is taken as constructive criticism, not destructive infighting.

    *The answer to that question would be the adynamic global constraint for the relevant spacetimesource element. Since RBW’s fundamental ontological entities for modeling QM phenomena are 4D spacetimesource elements, I don’t see RBW as containing any robust sense of causation at the fundamental level.

    #2558
    Mark Stuckey
    Participant

    I understand you, Silberstein and Wharton discussed this point and concluded that RBW is, in the sense you state, retrocausal. That’s why we say RBW is retrocausal (however deflationary) in the paper. Frankly, I agree with the referee. As we state in the paper, why bother with retro-time-evolved causal stories in a block universe when the entire 4D pattern is already explained by the adynamical global constraint? The “perspectival view from within the block universe” (Price’s term) is what motivates us to tell time-evolved causal stories. But, GR simply provides a spacetime metric and stress-energy tensor on the spacetime manifold that are “self consistent” per Einstein’s eqns. In most cases, you can read off several consistent time-evolved causal stories from a GR solution. But, in some cases, the time-evolved story is left wanting (or even seemingly inconsistent, e.g., as can arise due to the relativity of simultaneity). For example, someone who wants dynamical explanation might ask, “Where did the big bang come from?” when viewing GR cosmology solutions. It’s a faux mystery that arises simply because they want a time-evolved causal story. So, in my opinion, retro-time-evolved (or even time-evolved) stories are superfluous in the context of any 4D solution, as the referee states. But, I’ll leave it to the community to decide its semantics 🙂

    #2533
    Mark Stuckey
    Participant

    Thanks for your question, Peter. Causality and change can certainly be represented in 4D spacetime, so I didn’t mean to imply otherwise. The Geroch quote is, as you say, pointing out the fact that the 4D perspective *itself* doesn’t change. We use that “changeless” 4D perspective for two reasons. First, 4D spacetime lacks a Now in the sense that there is no “movie projector for creating preferred moments in M4” per Brian Greene’s analogy https://www.youtube.com/watch?v=GpgGJaQfrgE. [At the 29:43 mark, just after David Albert says, “Physics does radical violence to this everyday experience of time” at the 29:05 mark.] If you want to see how one might account formally (and robustly) for a Now, see Ruth Kastner’s PTI. Second, QM is not compatible with classical causality per Wood and Spekkens (perhaps because QM requires directionless links in its causal diagrams, see Evans’ paper in this forum). So, the notion of retrocausality in a block universe is, as an anonymous referee stated (footnote 3 of our paper in this forum), “superfluous at best, and inconsistent at worst.” If you want an example of non-deflationary retrocausality, again see Ruth’s PTI.

    Since RBW’s fundamental ontological entities for modeling QM phenomena are 4D spacetimesource elements, I don’t see RBW as containing any robust sense of change and causation at the fundamental level. Perhaps Silberstein will have more to say.

    I look forward to your further comments and questions!

    #2531
    Mark Stuckey
    Participant

    I imagine local properties existing on p — B and q — A of Figure 3. So, on S — p and S — q you have holism and thereafter you have retrocausality. The two situations (holism and retrocausality) are, as you point out, distinct concerning local properties, but can be mixed (Figure 3) and transformed smoothly and continuously from one to another. So, in considering a term for the general situation I chose “spatiotemporal holism” with limits of synchronic and diachronic holism because, as you say, you have “a temporal whole” in retrocausality. And, I have an adynamical bias 😉

    #2527
    Mark Stuckey
    Participant

    Let me start by saying I believe the main claim of your paper is correct and perfectly in keeping with the conventional understanding of holism. What I’m proposing is a change to the conventional understanding of holism based on the example in your paper. Sorry if you thought I was trying to refute your main claim. Attached is a candidate for the taxonomy you requested. [I’m thinking out loud here, so correct me as necessary.]

    #2513
    Mark Stuckey
    Participant

    Thanks for your reply, Peter. Please see attached a clarification of my point.

    #2503
    Mark Stuckey
    Participant

    Hi Bob,

    Thanks for your input; it’s much appreciated and not the least bit “offensive.”
    We were concerned about just this sort of reaction; that’s why we tried the outline. Since that didn’t work, let me attach the slides for a short talk I had planned to give in Vaxjo last month (I instead gave the quantum Cheshire Cat talk posted in the “Other Topics” forum). In these slides, we provide a Relational Blockworld explanation of the standard EPR-Bell experiment without a mathematical articulation of the adynamical global constraint. Instead, we focus on conveying the basic ontological concepts of RBW. Hopefully, this talk is clear enough (conceptually) that you can now understand section 4 of our posted paper, where we provide RBW explanations of the standard QM foundational issues. If we still haven’t succeeded in answering your questions, please let us know and we’ll gladly try something else 🙂

    Thanks again for your interest,
    Mark

    #2497
    Mark Stuckey
    Participant

    Peter,
    Your understanding of SEPRB agrees with mine, so I’m hoping you’ve got it right 😉

    In the context of Evans’ paper in this forum, we might depict the holism of EPRB you describe as an undirected space-like link between its space-like separated outcomes (a la the undirected links in Evans’ Fig 4). One could attribute a direction to that link perspectivally/subjectively from within the block universe, depending on whether Alice’s detection or Bob’s detection occurred first per one’s particular Lorentz frame. Since some observers would have that link directed from Alice to Bob, while others would have it reversed, the “objective” version of the link would be undirected. This is analogous to the subjectively ambiguous direction of the retrocausal links in Evans’ Fig 4, which are likewise undirected in their objective form. With this depiction of retrocausality in EPRB, one might say the retrocausal links of Evans’ Fig 4 depict “diachronic holism” (as Silberstein suggested).

    In either case, the key to explaining EPRB is that classical causality as characterized by Wood and Spekkens is violated per undirected links in the relevant explanatory graph. That’s why RBW (https://ijqf.org/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf) employs fundamental explanation via an adynamical global constraint over spatiotemporally extended fundamental ontological entities we call “spacetimesource elements.” In that sense, RBW is “4D holism.”

    #2461
    Mark Stuckey
    Participant

    The argument that retrocausality in a block universe violates faithfulness is essentially reflected by an anonymous referee in footnote 3 of our paper https://ijqf.org/wp-content/uploads/2015/06/IJQF2015v1n3p2.pdf:

    “I do not see how anything truly ‘retrocausal,’ in a dynamical sense, can occur given global time-symmetric constraints on spacetime. The authors seem to me to be too charitable here, a future boundary condition implies an adynamical block world, in which talk of dynamics or intervention is superfluous at best, and inconsistent at worst.”

    We really need a direction for the undirected link in Evans’ Fig 4 to have “objective” causality. I think it’s best to view causality as a matter of perspective within the block universe (“subjective” causality per Price, as Evans explains). Given the 4D perspective of the block universe, an “objective” explanatory mechanism need not involve causation in “retro-time-evolved” form or otherwise, as we argue on pp 128-130 of our paper.

    Thus, in RBW, we have proposed a formal counterpart to the “nomological dependence” of Hausman and the “non-causal dependency constraints” of Woodward that we call the adynamical global constraint (AGC). A paragraph explaining the AGC conceptually is on p 130 and a mathematical explanation is provided on pp 144-145. An application to the twin-slit experiment is on pp 146-154. The AGC brings Price’s Helsinki toy model with its “global constraints” to fruition.

    #2381
    Mark Stuckey
    Participant

    Raul told me today that he doesn’t see any connection between Aharonov qCC and Denkmayr (alleged) qCC because Denkmayr doesn’t have a displaced pointer state. A weak values theorist (who asked not to be cited) agreed with our analysis, saying “weakly enough” in Denkmayr should have read “linearly” because weak measurement requires linear interaction. Therefore, he said the quadratic interaction of Denkmayr “would seem fatal.” That’s why I think the parallel between Aharonov qCC and Denkmayr qCC would be, as I stated above, Correa’s Eq 5 giving his Eq 7.

    This is an interesting issue for weak values in general, because in Denkmayr <SzP1> = 1, <P1> = 0, <SzP2> = 0, and <P2> = 1. If the Bz interaction had been linear, these “observables” would account directly for a reduction in the intensity at detector O (Io) for the absorber in path 2 (<P2> = 1), but no change in Io for the absorber in path 1 (<P1> = 0). And, there would have been an increase in Io for Bz in path 1 (<SzP1> = 1), but no change in Io for Bz in path 2 (<SzP2> = 0). As it turns out, the quadratic Bz interaction leads to a decrease in Io for Bz in path 2, which is accounted for by <P2> = 1. This is where we claim qCC is violated; it’s what Correa refers to when he writes, “the quadratic term of the Bz interaction is connected to the ‘where the neutron is’ weak value [<P2> = 1], and hence means they can’t look for the Cheshire Cat with this interaction.” However, these “observables” are used properly to account for what they *did* observe. Stephan Sponar (an experimentalist on Denkmayr et al) attended my presentation in Vaxjo two weeks ago and said he understood those four “observables” to entail qCC. But, if that were true, you would have two very different empirical results associated with qCC, one of which makes sense (linear Bz interaction) and one of which doesn’t (quadratic Bz interaction). So, in our paper, we have a section where we explain what <SzP1> = 1, <P1> = 0, <SzP2> = 0, and <P2> = 1 mean in Denkmayr.

    Thus, while Denkmayr doesn’t give us qCC, it does show us empirically that weak value “observables” have different meanings in different weak scenarios, which is interesting. I would like to hear from more of the weak values community on this issue.
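    [For concreteness, here is a minimal numerical sketch of how a pre/post-selected pair of states can reproduce the four weak values quoted above. The states below are chosen purely for illustration; they are not claimed to be the exact Denkmayr pre/post-selection. The last line also checks Correa’s remark: (SzP2)² = P2, so a quadratic term is tied to the “where the neutron is” weak value <P2> = 1.]

```python
import numpy as np

# Path basis |1>, |2>; spin basis |up>, |dn> (sigma_z eigenstates)
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
px, mx = (up + dn) / np.sqrt(2), (up - dn) / np.sqrt(2)  # |+x>, |-x>
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Hypothetical pre/post-selection chosen to reproduce the quoted weak values
psi = (np.kron(p1, px) + np.kron(p2, px)) / np.sqrt(2)  # pre-selected
phi = (np.kron(p1, mx) + np.kron(p2, px)) / np.sqrt(2)  # post-selected

sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
P1, P2 = np.kron(np.outer(p1, p1), I2), np.kron(np.outer(p2, p2), I2)
SzP1, SzP2 = np.kron(np.outer(p1, p1), sz), np.kron(np.outer(p2, p2), sz)

def weak(A):
    """Weak value <phi|A|psi> / <phi|psi>."""
    return (phi.conj() @ A @ psi) / (phi.conj() @ psi)

print(weak(P1), weak(P2))      # 0.0 1.0  ("the particle is in path 2")
print(weak(SzP1), weak(SzP2))  # 1.0 0.0  ("its spin is in path 1")
print(weak(SzP2 @ SzP2))       # 1.0: (SzP2)^2 = P2, so quadratic terms track <P2>
```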

    #2366
    Mark Stuckey
    Participant

    I’ve had extensive contact with Raul Correa since last fall about his paper and ours. He failed to tell me that he got his paper accepted in New J Phys, but he apologized this week 🙂

    Correa et al explain how Aharonov’s proposed quantum Cheshire Cat (qCC) experiment with photons can be understood via interference. When they discuss the Denkmayr et al alleged qCC experiment with neutrons, Correa points out that Denkmayr’s so-called “qualitative result” is easily explicable via interference. We point out that Denkmayr’s “qualitative result” is not used to obtain their weak values and does not establish qCC, regardless of how you might explain it. As Correa wrote to me, “And surely you point different things. We don’t address the inconsistencies in their arguments, like the diminished counts even when the field is on the arm that has ‘no spin’. And of course the very interesting remark that the quadratic term of the Bz interaction is connected to the “where the neutron is” weak value, and hence means they can’t look for the Cheshire Cat with this interaction — very clever indeed!”

    To link Correa’s explanation of Aharonov’s original qCC proposal with photons to our work showing how Denkmayr failed to instantiate qCC with neutrons, you need to look at how Correa obtains the photon amplitude for the qCC experiment, i.e., the approximate photon amplitude at detector D1 when the transverse pointer displacements are much smaller than the beam width (a weak measurement). That approximate amplitude (his Eq 7) is obtained via expanding the exact photon amplitude (his Eq 5) to first order (linear terms). If you keep second order (quadratic terms) in the expansion of the exact amplitude, you won’t get the proper form for the qCC amplitude. Correa points out that this is also key in Danan et al’s experiment, “Asking photons where they have been.” A first-order (linear) interaction is crucial for doing a weak measurement.

    So, what we point out in our paper is that Denkmayr et al have a quadratic Bz interaction that cannot be avoided in their experimental approach with neutrons. It is this quadratic term that gives rise to a 3% decrease in the neutron intensity at detector O when Bz is in path II, which is just as pronounced as the 3% increase in the neutron intensity at detector O when Bz is in path I (that increase is due to the linear term in the Bz interaction). Thus, you can never make Bz weak enough to get rid of the effect in path II without also getting rid of the effect in path I (where you need to have it). Bottom line: it’s impossible to get the qCC effect with this experimental set-up.
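    [To illustrate why a quadratic term cannot be made weak relative to a vanishing linear one, here is a toy model of the coupling. This is my own sketch: a scalar g stands in for the pointer-momentum kernel, and the pre/post-selected states are illustrative choices, not the actual Denkmayr states. With A = Sz projected onto path II, A³ = A, so exp(−igA) closes in finitely many terms; A’s weak value is 0, so there is no first-order effect at all, and the leading behavior is second order, governed by (A²)’s weak value, i.e., by <P2> = 1.]

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
px, mx = (up + dn) / np.sqrt(2), (up - dn) / np.sqrt(2)
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Illustrative pre/post-selection (my own choice, not the exact Denkmayr states)
psi = (np.kron(p1, px) + np.kron(p2, px)) / np.sqrt(2)
phi = (np.kron(p1, mx) + np.kron(p2, px)) / np.sqrt(2)

A = np.kron(np.outer(p2, p2), np.diag([1.0, -1.0]))  # Sz on path II

def amp(g):
    # A^3 = A, so exp(-i g A) = I + (cos g - 1) A^2 - i sin g A exactly
    U = np.eye(4, dtype=complex) + (np.cos(g) - 1.0) * (A @ A) - 1j * np.sin(g) * A
    return (phi.conj() @ U @ psi) / (phi.conj() @ psi)

g = 0.1
a = amp(g)
# (A)_w = 0 kills the first-order term; what survives is
# 1 + (cos g - 1)(A^2)_w = cos g, a purely second-order intensity dip
print(round(abs(a) ** 2, 6))  # 0.990033, i.e. ~ 1 - g^2
```

    [The point of the sketch: shrinking g shrinks the path I and path II effects together, so in this toy setting there is no regime where the quadratic effect is negligible while a signal remains.]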

    Hope that answers your question, Miroljub!

    Thanks for asking,
    Mark

    #2254
    Mark Stuckey
    Participant

    The hidden variables in RBW are the (graphical) spacetimesource element and the adynamical global constraint. We take a God’s eye (4D) view, so our hidden variables don’t “bring about events.” Per Geroch,

    “There is no dynamics within space-time itself: nothing ever moves therein; nothing happens; nothing changes. In particular, one does not think of particles as moving through space-time, or as following along their world-lines. Rather, particles are just in space-time, once and for all, and the world-line represents, all at once, the complete life history of the particle.”

    So, in the God’s eye view, we’re just trying to explain the 4D patterns, e.g., the relative frequency of occurrence for spin up and down outcomes in the many trials of the 3-particle GHZ experiment. Thus, each spacetimesource element relating source (emission event) to sink (specific detector events) in the 4D experimental configuration has a probability amplitude providing its frequency of occurrence in the overall 4D pattern of outcomes for that particular experiment. The adynamical global constraint provides a rule for computing that probability amplitude in the context of the path integral formalism (think lattice gauge theory, since the spacetimesource element is graphical). I wouldn’t call this probability amplitude a “wave function,” since it’s computed via the path integral using future boundary conditions (specific outcomes), not the SE. But, that’s semantics. There is a conceptual overview in the Introduction of the paper (9 pp).

    #2250
    Mark Stuckey
    Participant

    We’d be interested in how you classify RBW per your taxonomy, Dieter. Along those lines, Silberstein and I will try to explain the principle of superposition per RBW, as an example of your 3b (hidden variable, psi-epistemic) classification. [Sorry, we’re not sure how to do this for a general 3b case.]

    In our view, the fundamental ontological entity is a 4-dimensional spacetimesource element that corresponds to a particular experimental configuration from beginning (emission event) to end (detection event). The game of physics is then to find the probability amplitude for the 4D distribution of spacetimesource elements. Naturally, the path integral is our choice for computing this probability amplitude and, obviously, there is no superposition of possible outcomes in this God’s eye (4D) view because the outcome is known. But, we could collect all possible outcomes with their amplitudes and write them collectively as a superposition state (called the wave function). This would be the natural way for 3D time-evolved beings to think about an experiment, since we don’t know which outcome will obtain in any given trial of the experiment. Further, the amplitudes might be time dependent, given that we might not know exactly when the outcomes will occur. Of course, one uses the SE to obtain this wave function, and it can be derived as a ‘time foliation’ of the path integral.

    Does this adequately explain the principle of superposition as it is understood in our particular 3b case? Or, have we missed your point?

    #2217
    Mark Stuckey
    Participant

    I agree with Zeh, as I posted elsewhere http://www.ijqf.org/archives/2144, that “It is even more unfortunate that this confusion seems to be accompanied by a certain amount of prejudice (for or against some kinds of proposals).” I also agree “that we cannot decide between all these possibilities without any novel empirical evidence.” So, my particular “prejudice” is that interpretations lead to new physics, e.g., new approaches to quantum gravity, which can then be tested. Otherwise, it is really just “a matter of [metaphysical] taste.”

    As for where to classify Relational Blockworld per Zeh’s taxonomy, I would start with 3b, as a realist psi-epistemic account http://www.ijqf.org/archives/2087. I’m not sure where its adynamical global constraint is located within his further subcategories. Perhaps Dieter would make that assessment for us?
