Are retrocausal accounts of nonlocality conspiratorial? A toy model.

  • #2493
    Dustin Lazarovici
    Participant

    I’d like to contribute a paper of mine in which I discuss a toy model exploring the possibility of accounting for nonlocal correlations violating Bell’s inequality by advanced (retrocausal) interactions on the microscopic scale. You can find the abstract below.

    To me, this model is quite interesting and instructive because

    a) it allows for a rather explicit statistical analysis, showing that nonlocal correlations arise very naturally from time-symmetric (advanced + retarded) interactions
    b) it shows that such an account need not be conspiratorial (in the sense of Bell) as is often assumed.

    One should note that this is really a toy model, whose scope is limited to the few points I’m trying to make. It is not a full-blown theory with claims to empirical accuracy. It is not even very “quantum” and, in particular, doesn’t actually explain entanglement. However, it’s a good starting point when thinking about the possibility of a retrocausal account of (quantum) nonlocality.

    Looking forward to your questions and comments.

    Abstract:
    We present a simple model demonstrating that time-symmetric (advanced + retarded) relativistic interactions can account for statistical correlations violating the Bell inequalities while avoiding conspiracies as well as the commitment to instantaneous (direct space-like) influences. We provide an explicit statistical analysis of the model while highlighting some of the difficulties arising from retrocausal effects. Finally, we discuss how this account fits into the framework of Bell’s theorem.
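
    (For reference, since the abstract invokes them: the CHSH form of the Bell inequalities – the standard benchmark for such correlations, quoted here from the general literature rather than from the paper itself – reads

        |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2,

    where E(a,b) is the expectation value of the product of the two measurement outcomes given settings a and b; quantum mechanics allows values up to 2\sqrt{2}.)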

    #2667
    Ken Wharton
    Member

    Hi Dustin,

    Very interesting paper! It was very useful to read for a fuller account of your views, after only seeing your presentation in Cambridge last summer.

    Some thoughts that occurred to me as I read through the paper:

    – After using time-symmetry as a motivation for this general style of approach, I was surprised to see equation (3) in your model, which looked quite time-asymmetric, essentially distinguishing past and future. Can you comment on whether you see any tension here?

    – Model step #5 also seemed strangely time-asymmetric. “Towards the orientation antipodal to its own” seems to require a preferred direction of time to parse properly. (When time-reversed, this is a rotation *away* from the antipodal orientation.) This leads to the problem you explicitly note at the beginning of section 2.1, and even though I strongly agree that going to second order equations is the way to go, the other option is just to flip one of the signs in Equation (5). Have you tried this? Does this not work properly?

    – Typically the concepts “retarded” and “advanced” have some subjectivity to them, at least when you’re talking about the fields. In E+M, it’s always possible to reframe any retarded field as free+advanced fields, and possible to reframe any advanced field as free+retarded fields. And yet you’re using these terms as if they have a clear objective meaning. Actually, since you’re just using retarded and advanced *times* in this work, I think you’re generally okay on this front, but something to be careful about.

    – Putting the interaction between the two particles on light-like lines is nice, but raises the possibility that such interactions can be *shielded*, destroying the entanglement via some intermediate blockage. Are you worried about this, or do you see this as such a general toy model that this isn’t a big concern?

    – In 4.1, you are (rightly!) worried about the state space measure… Have you considered looking at the whole history for a natural “history space” measure, rather than at instantaneous slices?

    – If you’d like to start actually *solving* such two-time-boundary problems, the best framework I’ve found is something called the Gerchberg-Saxton algorithm in E+M. Let me know if you want more details on how it might apply to such cases as these; a bare-bones sketch of the classic version appears just after this list.

    – I’ve found it useful, in separating retrocausal accounts from conspiracy/superdeterministic accounts, to outline the superdeterministic story on the ontological level, not just at the level of correlations. Sure, at the level of observable correlations, these two accounts look quite similar, but “under the hood” they are wildly and essentially different.
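
    – To flesh out the Gerchberg-Saxton item above: here is a bare-bones sketch of the classic phase-retrieval version, in Python. This is the textbook optics setting, with FFTs standing in for propagation between two planes; how to swap in genuine forward/backward time evolution between two *temporal* boundaries is exactly the part that would need real work, so take it as an illustration of the iteration pattern only.

        import numpy as np

        def gerchberg_saxton(near_amp, far_amp, n_iter=200, seed=0):
            # Classic phase-retrieval loop: find a phase such that a field with
            # amplitude near_amp maps, under an FFT (standing in for propagation),
            # to a field with amplitude far_amp.
            rng = np.random.default_rng(seed)
            field = near_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, near_amp.shape))
            for _ in range(n_iter):
                far = np.fft.fft(field)                         # "propagate" outward
                far = far_amp * np.exp(1j * np.angle(far))      # impose far-plane data
                near = np.fft.ifft(far)                         # "propagate" back
                field = near_amp * np.exp(1j * np.angle(near))  # impose near-plane data
            return field  # approximately consistent with both amplitude constraints

        # e.g.: near_amp = np.ones(64); far_amp = np.abs(np.fft.fft(np.hanning(64)))

    The analogy to two-time-boundary problems: the two amplitude constraints play the role of data at the initial and final times, and the algorithm alternates forward and backward propagation, re-imposing each constraint in turn until (with luck) it converges.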

    Thanks again for a useful and well-argued paper!

    #2681
    Dustin Lazarovici
    Participant

    Dear Ken,

    thank you very much for your comments. It’s nice to get feedback from someone who is so skilled in this area.

    Let me try to briefly respond to your points:

    1) You are right, the model – as it stands with the 1st order dynamics – is actually not time-symmetric! That’s why I also had to change the title btw. 😉

    I don’t think the (apparent) asymmetry in the measurement process is the problem, though, since I believe that, when properly analyzed, (quantum-)measurements turn out to be irreversible in a thermodynamic sense.

    Concerning the dynamics, the “solution” that I have in mind – rather than going 2nd order – is that, in the end, you will still need something like a wave-function or quantum state to manifest the structure of entanglement. And this object, whatever it is, may itself have a nontrivial transformation under time-reversal. E.g. in Bohmian mechanics, the guiding equation is first order, but the wave-function gets complex-conjugated under time-reversal, compensating for the sign.
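
    (To spell out the standard bookkeeping – nothing here is specific to my model: the Bohmian guiding equation

        dQ/dt = (\hbar/m) \, Im(\nabla\psi/\psi)(Q,t)

    is first order, but under t \to -t together with \psi(x,t) \to \psi^*(x,-t) the imaginary part flips sign, which exactly compensates the reversal of dt, so solutions are carried to solutions.)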

    Anyway, in order to make the points I was trying to make, it was more convenient to work with a toy model that involves advanced and retarded actions in an asymmetric way. But of course I believe that if one considers a more serious retrocausal theory, it should be motivated by time-symmetry.

    2) You’re right, if you have free fields, the distinction between the advanced and retarded part is somewhat arbitrary. However, I don’t believe in free fields. 😉

    3) Yes, absolutely! As I said, the toy model doesn’t actually explain or account for “entanglement”, i.e. why a pair of particles should be able to interact over arbitrary distances without being disturbed by others. The problem is not the light-cone structure, though. The light-cone structure is a good thing, as it makes the interactions intrinsically relativistic. However, I believe that any more serious theory will need additional ingredients to account for the structure of entanglement.

    4) A colleague of mine is working on such “history space” measures in a somewhat different context. I agree that this is probably the way to go for a statistical analysis of time-symmetric theories, but it’s not that easy. If you have more references on that I’d be very interested.

    5) I wasn’t familiar with Gerchberg-Saxton, I’ll look it up! References are very welcome.

    6) I agree. The ontological level matters most when assessing whether a theory is “conspiratorial”. In my brief discussion, I was trying to make the connection between the ontological level and the formal “no conspiracy assumption” which enters the derivation of Bell’s inequality. I’m not sure how well I succeeded, though.

    Thanks again for your comments!

    #2691
    Nathan Argaman
    Participant

    Hello Dustin,

    I read your contribution with much interest. Your motivations seem to overlap with mine to a very large extent (see my own contribution in this conference, also available at http://arxiv.org/abs/0807.2041). Nevertheless, the technical details are quite different. I like your use of the Wheeler-Feynman interaction along light-cones – where the proper distance between the particles vanishes.

    I was also dismayed by the fact that retrocausation had been characterized as conspiratorial (when I learned about it). Clearly, those who adopt this characterization are taking the causal arrow of time for granted. Apparently their intuition is so firmly grounded in the macroscopic world that they just can’t really think in any other way. Unfortunately, even in those discussions which aim to carefully lay out all the assumptions involved in Bell’s theorem, this simple fact – that they take the causal arrow of time for granted – is not pointed out. However, if you ask them – which I am bent on doing – whether or not they’re assuming the causal arrow, they’re usually happy to admit that they are.

    I would like to ask you the following: in what ways do you think that our contributions overlap, and in what ways do you think that they complement each other?

    Thanks, Nathan.

    #2727
    Dustin Lazarovici
    Participant

    Dear Nathan,

    thank you very much for your feedback and for pointing out your paper, which I’ve read with great interest. I realize that I should have referenced your paper – I just didn’t know about it before!

    Anyway, I think we’re definitely on the same page. Maybe your argument is more general, while my model can help to illustrate your point.

    However, I’m not sure to what extent it’s correct that the “causal arrow” is never spelled out as an assumption of Bell’s theorem. At least formally, it appears explicitly in what you called the “causality” assumption and I called the “no conspiracy” assumption. I think many (though not all) people realize that this assumption can be logically denied by assuming a retrocausal influence of the control parameters a and b on the lambda. However, most of them will immediately dismiss any such account as conspiratorial or even absurd.
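
    (To display the assumption explicitly, in the standard notation where \rho(\lambda) is the distribution of the hidden variables: it is the measurement-independence condition

        \rho(\lambda | a, b) = \rho(\lambda),

    and a retrocausal influence of a and b on lambda is precisely a way of denying it.)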

    If my contribution succeeded in adding anything to yours (which it had better, otherwise it would be quite superfluous), it’s to spell out and statistically analyze a specific retrocausal toy model rather than ‘postulating’ a particular statistical dependence between lambda and a and/or b. I think that this can be quite helpful as an intuition pump.

    Moreover, it then turns out that, in fact, lambda IS statistically independent of the control parameters (unless one also admits lambdas in the future light-cones of the measurement events). That’s why I hope to convince people that such a retrocausal account need not be conspiratorial (as they usually assume) in the sense that the retrocausal effects do not need to infringe on the freedom of the experimentalist to prepare a system as she likes.

    To be honest, though, I’m not sure if I convinced anyone. Usually, people who are open to retrocausality are somewhat sympathetic to my paper, while people who are hostile usually remain just as hostile even after reading it…

    Best, Dustin

    #2735
    Travis Norsen

    Hi Nathan and Dustin —

    Nathan, when I read this paragraph…

    “I was also dismayed by the fact that retrocausation had been characterized as conspiratorial (when I learned about it). Clearly, those who adopt this characterization are taking the causal arrow of time for granted. Apparently their intuition is so firmly grounded in the macroscopic world that they just can’t really think in any other way. Unfortunately, even in those discussions which aim to carefully lay out all the assumptions involved in Bell’s theorem, this simple fact – that they take the causal arrow of time for granted – is not pointed out. However, if you ask them – which I am bent on doing – whether or not they’re assuming the causal arrow, they’re usually happy to admit that they are.”

    …I couldn’t help but suspect you were thinking of the scholarpedia article on Bell’s theorem here! =) Maybe it will help Dustin and other readers if I explain the situation there a bit. So Shelly Goldstein, Nino Zanghi, Daniel Tausk, and I wrote, at some point a few years back now, a big review article on Bell’s theorem for the website scholarpedia. We were lucky enough to get Nathan as a referee, and he made (among other comments/suggestions) the point that we don’t acknowledge, as explicitly as we might, that we are assuming no retro-causation. One of the effects of this (un- or at least under-acknowledged) assumption is then that the types of retro-causal models you guys are discussing here would violate what we call the “no conspiracies” assumption in that article — even though, as I think you are both entirely correct to point out here, such models may in fact not involve anything “conspiratorial” in the everyday sense of that term.

    In any case, the sad truth (for which I can only apologize rather than offer any good explanation) is that going back and making a few small wording changes to that scholarpedia article, in response to Nathan’s good suggestions, has been on our joint to-do list for, well, about 5 years now. Somehow we just never got around to tweaking it (after some good in-person conversations about this stuff in Sesto a few years ago), and then it sort of fell completely off the back burner and is now hidden in a pile of dust and dead bugs behind the stove. Anyway, that little piece of history/sociology maybe explains what might otherwise appear as a slightly puzzling tone in Nathan’s paragraph that I quoted above. Is that fair, Nathan? =)

    Regarding the actual issues under discussion here, as I said, I agree with you guys (1) that there need really be nothing conspiratorial about a correlation between “lambda” and the “settings”, in the context of a retro-causal model and (2) that the assumption of no retro-causation should be made more explicit when it is being made, to avoid miscommunication and false impressions. That being said, I think I am among the “hostile” people that Dustin mentioned in his previous comment. That is, I just can’t really get myself to take retro-causality very seriously. Part of the stumbling block, for me, is that I can only even really understand what retro-causality *means* in the context of these sorts of toy models that treat different kinds of variables (like “settings”) in different ways. My sense is that somehow the very idea of retro-causation sort of crumbles away to dust in your hands as soon as you imagine instead a true “quantum theory without observers”, i.e., a theory that treats the whole universe in a consistent and uniform way (without, for example, any ad hoc “settings” that are treated as outside the system being described by the theory). I’d be happy to elaborate the thinking (half-baked though it remains) behind this sense, if it’s not at all clear why I’d say something like that.

    And if it is at least somewhat clear, but you disagree with it, I’d be interested in hearing arguments intended to persuade me out of my hostility…

    #2745
    Dustin Lazarovici
    Participant

    Hi Travis,

    thanks for shedding some light on this background story. I know your scholarpedia article on Bell’s theorem – which is a great article, btw – but I didn’t know about the discussion you had with Nathan, of course. I guess Nathan could cite many other sources that neglect the possibility of retrocausal explanations, but I wouldn’t be surprised if he had you in mind as well. 🙂

    Anyway, concerning your other points, I’d like to emphasize once again that the violation of the Bell inequality in my toy model does not simply come down to a violation of “no-conspiracy”, i.e. a correlation between “lambda” and the “settings”. The issue turns out to be somewhat more subtle (and, I think, somewhat more interesting).

    The relevant lambdas in the (causal) past of the measurement events are not sufficient to “screen off” the correlations – and they are not correlated with the parameter choices. Hence, if you consider only lambdas in the past, the no-conspiracy assumption is formally satisfied, but the locality assumption is violated. It is only when you admit “future common causes” that you can screen off the correlations while (formally) violating no-conspiracy.
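
    (In the standard notation, “screening off” means the factorization

        P(A, B | a, b, \lambda) = P(A | a, \lambda) \, P(B | b, \lambda).

    With past-only lambdas this factorization fails while \rho(\lambda | a, b) = \rho(\lambda) still holds; admitting lambdas in the future light-cones restores the factorization, but at the price of a formal dependence of lambda on the settings.)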

    Moreover, as primitive as my toy model may be, it is actually “ontological”. And while the parameter choices are somewhat “outside the system”, I don’t believe that the consistency of the account depends on it.

    So it would be very helpful (at least for me) if you could elaborate on your objection to retrocausal accounts of nonlocality. Then I’ll know if I can say anything to help you overcome your hostility. 🙂

    By the way: I’d like to emphasize that I’m not an advocate of retrocausation per se. I have some sympathy for it because it is suggested by time-symmetry. Mostly, though, I understand that there is certainly some price to pay if we want to reconcile nonlocality and relativity. And I think that, in the end, retrocausation may not be that much worse than the alternatives. That’s why we should stay open-minded.

    #2750
    Travis Norsen

    Hi Dustin. I read some earlier version of your paper about this a year or so ago (probably when it appeared on arxiv or something??). I found it interesting but now don’t remember the details. Your comments above motivate me to want to understand it better, so I’ll try to take a look at the paper you posted on this thread (which I haven’t looked at yet) in the next couple of days. And then I’ll be happy to try to elaborate some of the reasons for my hostility to retrocausation — unless your model refutes those reasons, in which case perhaps I’ll keep them to myself instead!

    #2761
    Dustin Lazarovici
    Participant

    Thanks, Travis. If you find the time to revisit my paper, I’d be honored to receive your feedback. The model is essentially the same as in the arxiv version, but the discussion has been corrected and refined in certain important aspects. That reminds me that I should probably update the arxiv version…

    #2842
    Travis Norsen

    Hi Dustin — I finally got around to reading through the new version of your paper. I again found it very clear and very thought-provoking. Here are some questions and half-baked thoughts, in no particular order:

    (1) Now I understand better why there was a little bit of confusion/disagreement in our earlier comments (above) about whether your model violates “locality” or rather “no conspiracies” (in the usual, anti-retro senses of these terms that, e.g., we elaborate in the scholarpedia article). I think it depends on whether one is thinking in terms of Bell’s 1976 formulation of “locality” (where the “lambda” lives in the overlapping past light cones of the measurement events) or instead in terms of Bell’s 1990 formulation (where the “lambda” lives in a slice across the past light cone(s) that shields off the measurement events from where the past light cones overlap). I think you were saying that the model violates “locality”, but respects “no conspiracies”, because you’re thinking of the lambda as the initial state, the anti-correlated spins you describe in your equation (3). That state is, obviously, independent of the settings a and b, so you are right. But one could also note that, in your model, the state of a given particle — not initially, but at some random intermediate time prior to its being measured — *is* correlated with the setting that is later used to measure it. So, from that point of view, (this other thing that one might quite reasonably mean by) “lambda” is indeed not independent of the settings, and the model would hence count as “conspiratorial” in that sense. (It is of course *also* nonlocal, in the 1990 sense!) I have some further half-baked thoughts / concerns about what’s going on here, but I’ll separate those into comment (3) below and end here by saying: does that seem right to you, or have I misunderstood something?

    (2) So one of my big (but admittedly slightly fuzzy) worries about retro-causation, generally, is that it kind of defeats itself, in the following sense. Basically the whole point is to avoid spooky/antirelativistic action at a distance, by confining causal influences to (inside?) the light cones, but allowing influences to go both ways in time. Of course everybody understands that, if you allow this, then you can string a few such influences together to get multi-step (“zig-zag”) influences across spacelike separations. So, in a rather obvious sense, one does not actually *avoid* the scary sort of non-local influences by introducing retro-causality — rather, the most one could possibly hope for is to *explain* those non-local influences in a less scary (more relativity-friendly) way. So far so good? The worry I have about all this is just that it seems to make the new, temporally symmetric notion of “locality” extremely, uh, fragile. This comes up in your article at the end when you say that “a more natural desideratum … would simply be the absence of *direct* influences between space-like separated events.” (Emphasis added.) The worry, then, is that you could always add stuff to a theory that had *direct* influences between spacelike separated points, to convert those direct influences into indirect (zig-zag) influences. And so it seems like — to ever actually reject a theory as “not consistent with this time-symmetric notion of locality” — you would have to interpret that theory as being in some sense “the final word”, i.e., not subject to additions/revisions. By comparison, what is to me so interesting about Bell’s theorem, at least as it is understood outside the context of discussions of retro-causality, is that it rules out local theories *period*. If Bell’s result were instead along the lines of “if you understand ordinary QM as the final word, then you are stuck with nonlocality” it would be far less interesting. It would just be: nonlocality — unless you add hidden variables in which case you can get rid of the nonlocality. (Indeed, then it would really just be equivalent to the old EPR argument.) So the worry is something like this: as soon as you allow retro-causality, you basically *guarantee* that it will be possible to explain anything you want in a “local” (meaning now time-symmetrically-local) way — just keep adding more hidden variables to convert any *direct* spacelike influences into acceptable, zig-zag/indirect influences. Now I admit it would be unfair to just abandon the whole program on the grounds that, in some sense, it is obvious from the beginning that it should be able to succeed. If, for example, somebody comes up with a really simple and elegant and relativistic retro-causal “quantum theory without observers”, I would definitely sit up and pay attention. But still perhaps you can see how at some level I feel like the whole exercise is slightly silly, on the grounds that basically the thing you are trying to achieve (namely, locality in the time-symmetric sense) can never really be defined in a way that actually rules something out: anything that appears to be in violation of that sort of locality can always be converted into something that respects it by adding more hidden variables and zigs and zags.

    (3) So in a sense what I meant to be saying in (2) is that the idea of saving locality by allowing retro-causality doesn’t really make sense to me, because retro-causality undermines (or at least seems to threaten to undermine) any clean distinction between locality and nonlocality. And then I have exactly that same worry about the other — “no conspiracies” — assumption in Bell’s theorem. That is, I worry that the idea of retrocausality undermines (or threatens to undermine) any clean distinction between a theory that is conspiratorial and one that isn’t. So that, as you can now see, is kind of what I was starting to get at in (1) above. Is your theory conspiratorial? Well, it depends… But just in general, stepping back, whereas there is some kind of strong reason to believe (if one excludes the possibility of retro-causation) that the states of systems which have been “causally isolated” (to some reasonable extent) in the past should be uncorrelated, there is no reason at all to think that systems which *have* had intimate causal contact should be uncorrelated. And if you try to time-symmetrize that notion, then obviously you end up saying that the states of systems which will interact in the future should be expected to be correlated. And isn’t that just why you say your model isn’t conspiratorial? Sure, the state of a particle (at some intermediate time) is correlated with the setting of the device that will measure it, but there is a clear and non-conspiratorial causal chain to explain that correlation: the post-measurement state of the particle is affected by the device, and affects the spins earlier in time. So again the worry is just: isn’t it obvious, from the beginning, that literally anything that would be diagnosed as conspiratorial (using the no-retro sense of the concept) could be given a non-conspiratorial explanation if one introduces retro-causation? For example, I just read an article about how the price of tea in China last year correlates perfectly with the mosquito population in Boston… Conspiracy? No! Both sets of events were causally influenced by the article I just read, so the correlation is explained in a happy local non-conspiratorial way. So, again, the worry is along these lines: it seems important to you to be able to say “this model is interesting because it respects a certain time-symmetric notion of locality and a certain time-symmetric notion of ‘no conspiracies’.” But I’m not really clear that either of those time-symmetric notions even means anything (i.e., even cleanly rules out anything). So (despite being quite interested and definitely not feeling certain about any of this) I remain somewhat unimpressed. Can you help convince me that the proposed time-symmetric notions of locality and “no conspiracies” really mean something?

    (4) And then finally, the thing I kind of mentioned in the earlier comment and which is, unfortunately, an even bigger and more sprawling and more philosophical thought than the previous ones: I am concerned at the extent to which this model (and every other retro-causal model I’ve ever seen) seems to suffer from a kind of measurement problem, just in the sense of introducing special dynamical rules that apply to the preparation and measurement ends of the process considered. Let me put it this way. One of the things that I appreciate most about Bohmian mechanics (in contrast to ordinary QM) is that it is possible (and indeed, in some sense, as has come up in the exchange with Prof. Werner, *mandatory*) to think of Bohmian mechanics as a theory of the whole universe, with observers included as part of what’s described by the theory. Basically I want to demand that theories should be understandable in that way — as “quantum theories without observers” (to use Shelly’s term) — or I won’t want to bother taking them seriously. Now of course in building theories it’s fine to start with toy models about single particles, but my point is we should always keep in the back of our minds the question: is it going to be possible to scale this up into a “quantum theory without observers”? And, after a kind of sprawling email discussion I’ve had in recent months with Ken Wharton and Rod Sutherland, I have really started to worry about retro-causal theories in general with respect to this issue. I don’t want to try to speak for them here (hopefully they will chime in!) but the kinds of things they kept saying kept hitting my ears like this: “well of course you’ll never be able to achieve that — the whole point of these sorts of models, the whole thing that’s going to make this maybe work, is that you have to impose these measurement/preparation-related boundary conditions on subsystems all the time”. So it just started to feel like what they were trying to tell me was that one could never scale up these toy retro models into a “qtwo” in the way I would be hoping for. Maybe they’ll tell me I misunderstood, but maybe I can just pose the question to you (Dustin) in a way that relates it to your model. So, for example, you assume that when your spinning particles get measured, they get annihilated, and this (as you acknowledge) plays an important role in your analysis: if the particles were still around after the measurement (and the same dynamical laws that apply in the middle continue to apply!) then their spins in the distant future would continue to influence their spins in the past (or whatever the right time-neutral way to say that is), and (probably??) all the predictions and analysis you do so nicely in the paper would come out completely different and, well, who knows what would happen. And then I think there are similar worries on the front end as well: you just impose a certain kind of t=0 condition on the state of the particle pair, but would this even be consistent with the dynamics if you allowed the particles to have existed prior to t=0? 
To me at least none of this is obvious, and it starts to feel like, as soon as you move even an inch in the direction of “pushing back the boundary conditions” (so that more stuff — here I’m focussing on “the same two particles but over a bigger spacetime region”, but I’m also similarly concerned about including “more particles, e.g., those composing the preparation/measurement equipment” — is included in “the system we analyze”) everything you thought you had established in the simpler case is totally out the window and you have to start over from scratch without any particular reason for optimism. So, uh, does this kind of worry make any sense to you and, if so, can you say anything that would give me hope that interesting little toy models (like yours) should be expected to be able to be scaled up into serious, viable “qtwo”s? Because at present I’m not convinced there’s any basis for hope there.

    OK, phew, I’ll stop there and look forward to any comments you (or others) have!

    #2868
    Dustin Lazarovici
    Participant

    Hi Travis,

    thank you very much for your very detailed feedback! That’s a lot to think about and I certainly won’t be able to give a satisfying answer to all of your questions/objections right away. For now, I can just add a few remarks.

    You’re right that the discussion in the final section would have been different with Bell’s 1990 picture in mind. That’s a good point! I guess I just used the definition of locality/conspiracy most suitable to the point I was trying to make. Anyway, in the end, the relevant question is not whether “no-conspiracy” is formally violated, but whether the physical account is morally conspiratorial in a way that undermines its explanatory power or even the whole idea of empirical tests. I believe that, as far as my toy model is concerned, this is not really the case – or at least it’s up for debate.

    My goal is not to save or restore locality, at least I wouldn’t put it that way. Let’s agree that nonlocality is a fact of nature. Period. Advanced interactions are merely one possible way to implement/understand/explain nonlocality. And it may be a way to implement nonlocality that is more compatible with relativity than other approaches. That’s one good reason to keep an open mind about it.

    Your second point is a very good one, too. You’re right: as a metatheoretical concept, nonlocality à la Bell is much more fruitful than the “absence of direct space-like interactions”. I didn’t mean to dispute that, though. People who are sympathetic to retro-causation rather like to point out that if you assume time-symmetry as an a priori, some form of nonlocality is to be expected. They/we then usually see that as one (very important) explanatory success of the hypothesis. But then again, not everyone is impressed.

    I agree that, in the end, “measurements” should be part of the theory. For a time-symmetric/retro-causal theory, it would thus be ok to require final boundary conditions for the entire universe. It would not be ok to assign a special status to individual measurement results. On this issue, I might stand closer to you than to Rod and Ken (if I understand their position correctly).

    I believe, however, that in the end, when you have a complete theory, the processes of measurement and preparation will turn out to be somewhat special in that they’re irreversible in a thermodynamic sense. And I believe (although my ideas on this are still very vague) that they will turn out to have a special role in defining “macro-causality”, because they have to do with our sense of agency. Under this premise, and under this premise alone, I think it’s legitimate to treat “measurement” and “observation” on a somewhat different footing when discussing simplified models (such as mine).

    In other words, to the degree that “measurements” have a special status in retrocausal models, this should not prevent the models from being – at least in principle – embedded into a complete qtwo. Rather, a complete qtwo should be able to explain (away) the specialness.

    In any case, I would never claim that, as of today, any retrocausal model is superior to or less mysterious than nonlocality in Bohmian mechanics (let’s say). I would only maintain that such an approach is not a priori absurd or doomed to fail. And unless we have a complete relativistic qtwo, we should stay open-minded towards different possibilities. There is no cheap way of reconciling quantum non-locality and relativity, that’s for sure.

    Best, Dustin

    #2887
    Ken Wharton
    Member

    Hi Travis and Dustin,

    Thanks for the interesting discussion! I thought I’d chime in with a few points of my own…

    Travis, on your point #2, you never mentioned another aspect of locality (really, a pre-requisite!): restricting models to those that have exclusively spacetime-local beables. To me, *that’s* the important thing that retrocausality offers, and I know this is important to you as well. Retrocausality is not the only path to such beables, as you well know, but one of the few paths still open. There are plenty of models that can be rejected on the grounds of having nonlocal beables, even if one considers retrocausality.

    Once you’ve restricted models to this subset, I see further distinctions between models with superluminal causation and zig-zag causality. You seem to be worried that the mere option of retrocausation blurs these accounts together in a way that can’t be properly distinguished. But differences include: 1) the different places in space-time where such a causal influence could in principle be blocked, 2) a different relationship with Lorentz invariance, 3) in an EPR geometry, the ability to treat Alice and Bob on an equal footing (rather than having one of them influencing the other, they could both mutually cause the past hidden variables).

    One last point: in that last sentence in (2) it almost sounded like you were asking retrocausal models to “rule out” certain *phenomena* as generally impossible. Obviously, a specific model can do this, but is it really so bad that the general concept of retrocausal theories doesn’t obviously (a priori) rule out any particular phenomena? (Sure, the “initial conditions + inside-lightcone-dynamics = future” framework rules out some phenomena, but this is not a plus, this is a terrible minus, because some of the ruled-out phenomena actually occur!)

    Moving to the conspiracy issue (3), I have pretty strong feelings about this because (as you’ve both already discussed) there’s a widespread and unfair confusion between superdeterministic and retrocausal explanations already. Here what you’re ignoring in your China/Boston example is the influence of intermediate and external confounding factors.

    Look at it from the forward-time perspective. Suppose I mailed identical letters to Boston and China, and then claimed that I am a past-common cause that might explain the correlation in *next year’s* Chinese tea price with *next year’s* Boston mosquitoes. This model could be ruled out on conspiratorial grounds having nothing to do with whether there was a possible past-common-cause. There are just too many confounding factors and too little past influence to explain any extensive or predictable correlation.

    In your retrocausal example, the confounding factors are even more severe. Sure, there might be a tiny future-common-cause, but no non-conspiratorial model could use it to explain such large correlations in the past, for basically the same reason as the previous paragraph.

    Now, sure, if there are *no* confounding factors, if two photons are beamed directly towards the same detector from different sources, retrocausal accounts generally call for correlations between the past hidden variables of those separate photons. But hopefully this is evidently more plausible (far less conspiratorial) than your example.

    Your biggest point, (4), is of course the real outstanding problem. (Well, unless you throw in the problem that hardly anyone is willing to take retrocausality seriously in the first place!) But maybe part of this latter problem is the problem you mention: if it doesn’t seem like there’s a way to explain the special status of future boundary conditions in a supposedly-universal theory, then fewer people are going to consider it as a live option.

    I think Dustin is right, though, that when analyzing simple few-particle toy models, it’s reasonable (in the framework of such models) to treat external interactions with much larger entities as effective boundary conditions on the smaller subsystem. But you’re worried about how such models scale up, and think that such stories smack of the usual measurement problem. And you’ve been writing long and detailed explications of this problem to me and Rod, while I haven’t properly been making the case that there might be an eventual solution. So I’ll try to write something up in the next day or so, to this effect, and see if it helps. At the very least, it might help refresh some of the key points in my own mind, since I’ve kind of put these issues on the back burner in my own research. So — more soon!

    #2920
    Nathan Argaman
    Participant

    Hi Travis and Dustin,

    I’ve been meaning to get around to responding to you for quite a few days now, and I feel that perhaps I should apologize for taking so long. But then one of the first items in my response is to say that I liked your description, Travis, of things “falling off the back burner.” So perhaps I’ll leave it at that.

    I agree essentially with everything you said, Dustin, in your reply to my previous post. The only exception is that I would like to follow Bell’s use of lambda, even though he assumed local causality and I am now discussing retrocausal models. And I would like to use retrocausality to save locality. Thus, I would say that lambda is a set of parameters or functions that describes everything that happens to the particles before they are detected, and not just their properties or attributes at the source. As Travis points out, that is not the only possibility (the way Bell used them, different possibilities may be equivalent, but now they aren’t).

    Next, let me respond to Travis’ post. Although I clearly found the discussion of the scholarpedia article quite frustrating (and I still think you should at the very least mention the fact that you’re assuming the causal arrow of time in the main text, and not just in a footnote), at least you know what you’re doing! The foremost example of dismay for me was when I read the original piece by Shimony, Horne and Clauser on this [Dialectica 39, 97 (1985)]. I had just given a talk at the Perimeter Institute on this (pirsa.org/06110017), the first time I had described my vague ideas to an audience with experts on the matter, and they directed me to this. The article briefly describes a situation where Alice and Bob do not care to decide for themselves which settings to use, and instead consult their secretaries, and somebody has prearranged with the manufacturer of their measurement devices to give the secretaries lists of parameter settings according to a data table of his choice, and to have the devices reproduce results from additional lists. Of course, you can then get anything you want! You can see the original reproduced at this link:
    https://books.google.co.il/books?id=Aee0XfBUDZQC&pg=PA163&lpg=PA163&dq=Shimony+Bell+secretary&source=bl&ots=ByiclUvzfl&sig=GlTX4igWLrMmBMpBITjPLUGEVPk&hl=iw&sa=X&ved=0CEEQ6AEwBGoVChMIi6i3gLPlxgIVRLwUCh1megCb#v=onepage&q=Shimony%20Bell%20secretary&f=false
    Now that is what earned this type of description the title “conspiratorial,” and for good reason. If you read through their discussion, you find that they were taking the causal arrow of time for granted all along, and in that context, indeed correlations between the device settings and the hidden variable must be conspiratorial. Bell responded, saying that such conspiracies would undermine any scientific inquiry. And that is absolutely right. I was very annoyed to find the idea of retrocausation essentially branded as unscientific in this way. The line of thought seems to be roughly this:
    (a) It is suggested that retrocausation could lead to certain unexpected correlations, but
    (b) Retrocausation itself is unexpected.
    (c) Assume, then, the causal arrow of time, i.e., the absence of retrocausation.
    (d) One finds that in this case the said correlations can only occur if something conspiratorial and unscientific occurs.
    (e) The upshot is that retrocausation is branded as conspiratorial.
    Am I wrong, Travis?

    I really think that the word “conspiratorial” should only be used to describe theories or descriptions in which it is denied that the device settings are free variables (free variables are a well-defined mathematical notion, and therefore this conforms with the principle of not bringing in human beings; unfortunately, it seems that most of the community has proceeded to bring in the notion of free will, which is philosophically complicated, and thus does not improve our chances of making progress in quantum theory). But, of course, every author is entitled to his choice of words, so what can I do?

    One more thing, Travis – if you want to take a look at another toy model which, although it doesn’t necessarily reproduce anything of QM, at least would seem to alleviate your fear that the very definition of things would “crumble away,” I suggest you take a look at Ken’s work:
    http://www.mdpi.com/2078-2489/5/1/190 , referenced in his post # 2834 within the discussion of his own contribution here.

    Now, you have gone on and continued to have a very interesting discussion to which I would also like to respond, but let me end this post here, and then begin another one soon.

    Best, Nathan.

    #2925
    Nathan Argaman
    Participant

    OK, friends,

    Now I’ve read the rest of the discussion again, and I see that Ken has already given good answers to your questions, Travis. I’d like to add just a few more words:

    (a) First, it seems that we will all remain unhappy unless we have a quantum theory without observers. That was the main point in my discussion back when I wrote my article: We had Bohmian mechanics, which was represented in Bell’s 1964 article by a simplistic nonlocal toy model; subsequently, simplistic retrocausal toy models were found to work as well, but we haven’t found a retrocausal analogue of Bohmian mechanics yet. We should strive to find one (or else, prove that it’s impossible, which isn’t going to be easy to do – as you say, there’s no obstacle in sight).

    (b) I think locality should be understood as synonymous with local causality (those are the words Bell himself used in his later publications), i.e., it is a notion which is to be defined only in the context of discussions in which the causal arrow of time has already been accepted.

    (c) For retrocausal models, in order to describe the physics of the world in which we live (i.e., in order to develop a qtwo) we will need to break time-reversal symmetry: we will need to somehow include the fact that the entropy was low in the past in our description. It is perhaps possible that this will be accomplished just by treating initial conditions differently from final conditions, as in classical mechanics. One could then hope to develop a theory in which “no signalling” and “information causality” are manifest. That would rule out the absurdities you allude to in your Boston/China example (I don’t call them conspiratorial, because you haven’t denied the free-variable status of any device settings). So here my reply is different from Ken’s – I would not rely on time symmetry in this case.

    Overall, I don’t think it’s going to be that easy to find a retrocausal qtwo with strictly local beables. Just constructing a description of a dissipative measurement would require a lot of work, perhaps akin to the well-known Caldeira-Leggett description. But I certainly see it as well worth pursuing.

    Travis, are you by any chance persuaded? Can you bring up further points, or do you just feel outnumbered?

    Best, Nathan.

    #2928
    Travis Norsen

    Responses to the excellent responses you all gave to my previous post…

    Dustin: I think I agree with everything you wrote about measurement being special in some thermodynamic sense, this being bound up with our sense of agency and macro-causality, etc. Obviously this is tricky stuff and nobody would claim to have a solid grasp on how it all fits together. In any case, I agree that in general it’s fine for a toy model to treat measurement/observation differently. My worry is that it “feels” to me (somewhat vaguely) as if it should be considerably harder to generalize from a toy model to a qtwo in a theory with retro-causation. I’m not sure I can say what that’s based on. Maybe it’s just the sheer mathematical issues associated with even defining appropriate initial data / boundary conditions.

    Ken: yes, constructing a theory of exclusively local beables is a goal we share, but I’m really not at all convinced that retro-causality is going to somehow help on this front. (In the discussion with Rod S, we never even really got into this part, but I’ll just say that I couldn’t understand at all how he thought his theory was going to achieve this.)

    Then re: your “one last point”, I guess I am pretty impressed by how Bell’s (time-asymmetric) notion of local causality rules out a certain class of phenomena, namely, those that violate Bell’s inequality. So you’re right to pick up on some element of that kind of hope in my comments. But really I didn’t mean to insist on something like that. I’m more just concerned that (if — like Nathan but apparently unlike Dustin? — you want to use retro-causality to preserve some notion of locality) you can’t use that notion of locality to cleanly and finally diagnose different candidate theories as either consistent, or inconsistent, with the notion in question. So it seems rather empty.

    Good point about confounding factors. So maybe my China/Boston example isn’t probative. But then it was just kind of a joke anyway. I’ll have to think more about whether your response here really undercuts my whole worry, or just shows that this silly example isn’t the best one for expressing it.

    I look forward to any further elaborations/comments you have on my point (4)!

    Nathan: thanks for chiming in and nice to hear from you. I am, as I said before, genuinely sorry that our final edit of the scholarpedia thing fell off the back burner. (I also quite liked that expression when it occurred to me!) And I was sorry to hear that you felt frustrated by all the discussions about this stuff. Probably you can appreciate that we also felt somewhat frustrated by them (and that this probably contributed to our non-excitement to make final changes afterwards). I guess it would be fair to summarize by saying that, for you, retro-causality is a really important and central issue that you thought should be addressed and acknowledged and made into a really important and central thread in the article… whereas, for all of us, really, it’s not that important and basically the kind of thing that, sure, ought to be explicitly acknowledged, at least once (but maybe just in a footnote), but needn’t be made a big deal of. In any case, given this basic disagreement over what is really crucial and important and what isn’t, there was bound to be some mutual frustration there. And I bet if you had asked for less you would have ended up getting more, if that makes sense. But, that’s all behind us (at least until we fish the thing out from behind the stove and put it back on the back burner).

    As to your actual comments here… I didn’t really understand your parsing of the old Bell vs Shimony et al thing. I think it’s clear that everybody was just taking for granted that there was no backwards-in-time causation. That’s a pretty standard and normal and reasonable assumption, in most contexts, you have to admit! And then, given that assumption, the discussion was about the assumption that Bell makes explicitly in deriving the inequality (the independence of settings and lambda). I think Shimony et al didn’t really understand at first that there was a separate assumption here (separate from local causality), so they thought they were refuting the claim that violation of the inequality proved violation of local causality, period. Whereas in response Bell had the opportunity to clarify that there is just an additional assumption here. Anyway, I think — assuming no retro-causation — calling this additional assumption “no conspiracies” is completely reasonable (certainly better than “free will”). So I guess I see that exchange as just a normal and perfectly reasonable and comprehensible working out of the fact that there’s an additional assumption here. You are obviously upset that none of these guys ever bothered to question the even-further-in-the-background assumption of no retro-causation. But your comments almost read as if you see this whole thing as something like a deliberate attempt to wrongly diagnose retro-causation as conspiratorial. I just don’t see that at all. Retro-causation (often, or maybe always) violates the assumption that gets called “no conspiracies”, and quite reasonably so, by people who don’t really take retro-causation too seriously. When you press us we’re happy to acknowledge that that classification is somewhat misleading, since there may not be anything “morally” conspiratorial (to use Dustin’s apt description) in such models. But that doesn’t necessarily make us take retro-causality any more seriously than we did before.

    I’m glad to hear we’re in agreement about the ultimate need for a qtwo.

    I was confused by your remark that “locality should be understood as … a notion which is to be defined only in the context of discussions in which the causal arrow of time has already been accepted.” Did you say, two posts up, that (unlike Dustin) you “would like to use retrocausality to save locality”? Did you mean a time-symmetric notion of “locality” in the earlier comment (which, for maximum confusion, I quoted second just now)? I’d be very interested to know how you might define/formulate this symmetric notion of “locality”.

    And then finally, you asked me if I’m persuaded? Of what, exactly? I certainly didn’t find anything in all these comments (nice and thought-provoking though they were) that fundamentally changes my attitude about retro-causality, if that’s what you meant.

    #2929
    Travis Norsen

    Since the issue of a retro-causal qtwo came up, I thought this might be a good place to mention Rod Sutherland’s model. I’m not at all convinced by a number of the claims that he’s made with that, but I still find his model interesting, in the sense that it provides a simple example of a retrocausal qtwo. Just have two universal wave functions, the usual one evolving forward in time from some big-bang-ish initial condition, and then the second one, evolving backwards in time from some ???- (maybe heat-death-) -ish final condition. And then I think the dynamics of the particle configuration can be defined to depend in a kind of symmetrical way on the two wave functions.

    (Note that, as I finally learned after lots of discussion and confusion, this is not actually how his model works. But I think you can generalize his one-particle toy model into a qtwo in this way.)
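
    (And just to make “depend in a kind of symmetrical way on the two wave functions” a bit concrete – this is my illustrative guess, emphatically not Rod’s actual equations – one could imagine a guidance law like

        dQ/dt = (\hbar/m) \, Im[ \phi^* \nabla\psi / (\phi^* \psi) ](Q,t),

    with \psi evolving forward from the initial condition and \phi backward from the final one; setting \phi = \psi gives back the usual Bohmian velocity.)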

    Two quick points about this idea:

    1. I’m pretty sure that the picture will give nonsense for all but the earliest and latest times. The particle configuration in the middle will be a big mess. Nothing like the macro-world we actually observe. So I think that, while interesting in so far as it’s maybe an example of a true retrocausal qtwo (but see 2.), it’s not an empirically viable theory.

    2. I’m not sure it’s even retro-causal. This is really the main point I wanted to get at. Despite how I described it above, I don’t see why one should think of this theory as any more retro-causal than ordinary BM. It’s just that there are two wave functions, which jointly influence the particle velocities, instead of one. But so what? Whatever differential equation the new, supposedly backwards-evolving wave function obeys could just as easily be read as normal, forward-in-time evolution. Just because you solve some problem by specifying a boundary condition at t_final, doesn’t make it retro-causal. You’d have, I think, exactly the same theory if you instead had two wave functions that jointly influence the particles, one evolving forward in time from some kind of thermodynamically low entropy initial condition, and the other evolving forward in time from some high entropy initial condition. So my point here is: I literally lose sight of what it’s even supposed to mean to call a theory like this retro-causal. And so part of the big worries that I meant to be expressing before, is the worry that, at the end of the day, and in the context of something like a qtwo where you don’t sneak things in by treating observation/measurement in a special way, I’m really not even sure what retro-causality means. At some very abstract level, the theory just says what happens “in the block” (or maybe gives some kind of probability distribution over possible histories of what happens “in the block”). It makes me a little uncomfortable that not only the idea of retro-causation, but also the idea of forward-causation, kind of crumbles to dust from this point of view. But it seems to me that it does. Or might. So that’s the other thing I’m worried about. Is there anything left for retro-causality to even mean, if we zoom out from the toy models to something like a qtwo, where there are no longer any “external interventions”??

    #2930
    Ken Wharton
    Member

    Hi Travis,

    There are many aspects of the measurement problem, but I think the one that is hardest for retrocausal theories is that there should be no difference between an *interaction* and a *measurement*, since one can’t define a hard distinction between the two. Standard QM of course has this problem. It incorporates mere interactions into multi-particle unitary dynamics, while external measurement is treated with a separate part of the theory. The other, related problem in standard QM is that if one takes a bigger view, and encompasses the measurement device as part of a bigger system, one gets a different story: now it’s back to unitary dynamics for the whole bigger system. These two accounts can’t both be correct, so your preferred solution to this logical mismatch is a QTWO, like Bohmian mechanics.

    In small-scale versions of retrocausal accounts, for which we have simple toy models, it may seem like we’re heading towards the same problem. A few-particle system might have self-interactions, treated via one framework, but the retrocausality is almost always imposed at the very end of the experiment, at some final “measurement”. This final measurement is usually imposed as a special boundary condition on the previous system, unlike the few-particle interactions that might precede it. So interactions and measurements look different, the very same problem from QM, and one might think that all the other related measurement problems of standard QM would arise for these models as well.

    But, at least for retrocausal models which are couched in spacetime (in a Block universe), the other problems don’t follow in anything close to the same way.

    The ambiguity of how measurements work in standard QM, combined with the use of large configuration spaces to deal with multi-particle systems, means that there are always two different ways to apply QM to a measurement device (MD) interacting with a system S. One can either include MD+S in a large, unitarily-evolving configuration space, or apply the usual measurement rules on S alone.

    But if everything is couched in terms of spacetime-local beables (the biggest selling point of retrocausal approaches), there is no ambiguity about how to represent MD+S. MD fills some region of spacetime, S fills another region of spacetime, and in these models there is no ontological configuration space into which one can combine the two. Sure, you can come up with a larger spacetime region, MD+S, and call this larger region a new system S’. But thanks to the Block universe, everything about S’ will exactly map to MD and S, and vice-versa. The two ontologies are guaranteed to be compatible, unlike the situation in standard QM, where a configuration space can’t generally be unpacked into its local pieces.

    But that’s just the ontology; what about the *theory*? Any ultimate theory capable of analyzing a finite system must be applicable to both S and S’. Given that the theory is retrocausal, MD imposes a future boundary condition on S (at least, that’s how all the small-scale toy models work). But Block-Universe consistency means that when the theory is applied to S’, it must reveal an intermediate boundary constraint on S at the S/MD interface. And this interface isn’t an external boundary of S’; it’s just some internal “interaction”. Nevertheless, for consistency’s sake, it must be true that some internal interactions are treated as boundary constraints, of one subsystem on the other. That’s the only possible conclusion: effective boundary constraints cannot just be at the beginning and end of the universe; they have to occur periodically throughout. (Travis: perhaps this helps with your point #2 when discussing Rod’s model, just now?)

    Still, this observation brings us right back to the initial problem: in these toy models, few-particle interactions and final measurements are *not* treated the same way; the latter are imposed as boundary conditions and the former are not.

    But unlike standard QM, which has other irresolvable issues due to the ambiguity of configuration space, this problem does not appear to be fatal in Block-Universe retrocausal models. There are several paths forward; here are at least two:

    1) Rod Sutherland’s approach (I think?): *Every* interaction is effectively a measurement, and therefore a boundary condition. There are other issues that come up here, of course, but I’m going to set them aside for now because it’s not my favored solution. (But I do think it’s a viable path that should be explored.)

    2) My suggestion: Effective boundary conditions are imposed on lower-entropy systems (such as S) where they interact with higher-entropy systems (such as MD). But when two comparable-entropy systems interact, there’s no broken symmetry, and instead of a directed boundary constraint (MD constrains S, not vice-versa) one simply finds mutual correlations.

    This is actually what happens in stat mech. Infinite-entropy systems are just thermal reservoirs, and they act as boundary constraints on low-entropy systems that come into contact with them. To the extent that there is a large asymmetry between their numbers of possible internal states, the one with many more possible states effectively imposes a boundary condition on the other. But for comparable-entropy systems in stat mech, that boundary constraint disappears, and one is merely left with a correlation between the two systems.
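    To make that state-counting claim concrete, here is a minimal Python sketch (my own generic illustration for this thread, not taken from any of the papers under discussion; it uses the textbook Einstein-solid model and assumes every joint microstate is equally likely):

        from math import comb

        def multiplicity(q, n):
            # Number of ways to distribute q energy quanta among n oscillators.
            return comb(q + n - 1, q)

        def small_system_distribution(n_small, n_big, q_total):
            # P(q) that the small solid holds q of the q_total shared quanta,
            # with every joint microstate of the two solids weighted equally.
            weights = [multiplicity(q, n_small) * multiplicity(q_total - q, n_big)
                       for q in range(q_total + 1)]
            total = sum(weights)
            return [w / total for w in weights]

        # Comparable solids: a broad distribution, i.e. mere mutual correlation.
        print(small_system_distribution(10, 10, 20))

        # Huge second solid (a "reservoir"): the small solid's distribution takes
        # the canonical form, proportional to multiplicity(q, n_small) * exp(-b*q),
        # which is the effective boundary condition described above.
        print(small_system_distribution(10, 10000, 20))

    With a comparable partner, the small solid’s energy distribution stays broad (mere correlation); against the enormous partner, it collapses toward the Boltzmann-weighted form, which is exactly the sense in which the bigger system “imposes a boundary condition” on the smaller one.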

    It is also what happens in my most explicit analysis of this, in arXiv:1301.7012. In a simple 0+1D history-counting framework, it falls out of the math that higher-energy-density systems constrain lower-energy-density systems, in just the right way to impose a measurement-like boundary constraint. But two comparable systems just get correlated, with no measurement-like boundary.

    It seems to me that, given this account, the retrocausal toy models go through. The few-particle interactions aren’t imposed as boundary conditions, because they’re interactions (correlations) between comparable-entropy systems. Then, at the end of the experiment, some (presumably massive) MD interacts with these small systems, imposing a (future) boundary constraint. That’s exactly how the toy models work. And now you can see how it would scale up: if the MD were a buckyball absorbing a photon, it would impose a boundary on the photon. But the buckyball might itself be measured by a much larger detector, which would impose an external boundary condition on the MD, at some later time.

    Instead of an infinite regress, you eventually hit a cosmological boundary condition, which is probably the best analog to an infinite-entropy system one could come up with in 4D-stat-mech. So that cosmological boundary might be an ultimate cause, but there would be plenty of other, smaller, proximate causes on low-entropy systems. And many of those would be imposed exactly at this microscopic-macroscopic interface, where “quantum measurements” are generally assumed to occur. All of these boundaries throughout the universe would of course have to be solved “all at once” rather than dynamically, but the larger-entropy pieces would still constrain the smaller-entropy pieces, all the way down to the loneliest little particles passing through carefully-shielded vacuum chambers.

    There’s a lot more to say here — a nice retrocausal resolution of delayed-choice and quantum-eraser scenarios, etc. — but I’ll leave it there and see what you and others think. Does my favored approach offer any hope for solving the biggest measurement problems in retrocausal theories?

    Best, Ken

    #2935
    Travis Norsen
    Participant

    Hi Ken. I think I agree with your description of the situation and the problem and even your statement of what the solution must be:

    “Nevertheless, for consistency’s sake, it must be true that some internal interactions are treated as boundary constraints, of one subsystem on the other. That’s the only possible conclusion: effective boundary constraints cannot just be at the beginning and end of the universe; they have to occur periodically throughout.”

    But I would put it this way (and I’m not sure if this is just exactly what you meant to be saying, or something else entirely): it better turn out that the boundary conditions on S’ *imply*, via the application of the basic dynamical postulates of the theory, the same sort of “internal facts” that were instead imposed by hand, as boundary conditions, when you analyzed S. That’s, I think, the “block universe consistency” that’s needed.

    But then, I think unlike you, I’m not at all optimistic that this can be achieved. You mention Rod’s theory as one example of how it could be achieved, but to me Rod’s theory is instead a very clear-cut example of this *not* working out, at all, in the desired way. But then I admit that I remain fuzzy on what his theory is supposed to be saying, exactly: none of the stuff about getting rid of config-space wave functions in favor of “spacetime local beables” ever made any sense to me, and it is still not clear to me how he intends to resolve the interaction/measurement problem that you described so clearly. (My sense is that he wants to say that every interaction is a measurement, but that can’t be right: you can’t make literally everything in the block a measurement boundary at which you simply stipulate, by hand, what’s going on… there’d be literally nothing left for the theory to say.)

    And then your other proposed solution (treating large-entropy systems as dynamically privileged w.r.t. small-entropy systems) also strikes me as not too promising; it seems like just another way of trying to hide the standard vagueness-and-ambiguity pea from ordinary QM under some macro/micro terminological shell. That is, I don’t see how all of your prose descriptions of how it might work out (which sound nice, but to me are riddled with exactly the same sort of vagueness and ambiguity that one finds in orthodox QM talk about how quantum systems evolve one way when they’re not interacting with a measuring device, but another way when they are) can possibly be realized in clean mathematical terms.

    I mean, it’s not like I’m claiming to see with certainty that this’ll never work. It’s just that it doesn’t look promising to me. But I’ll of course be interested to hear about any progress that is made.

    #2943
    Ken Wharton
    Member

    Hi Travis,

    Wow! It’s very nice to hear we’re in agreement about both the framing of the problem and what the solution must look like. Apart from a single word, I’m also in agreement with:

    “…it [had] better turn out that the boundary conditions on S’ *imply*, via the application of the basic dynamical postulates of the theory, the same sort of “internal facts” that were instead imposed by hand, as boundary conditions, when you analyzed S. That’s, I think, the “block universe consistency” that’s needed.”

    My one-word nitpick, unsurprisingly, is the word “dynamical”. The sort of account I’m discussing is only going to make sense when analyzed “all-at-once”, not when causation is “flowing in” via some differential equations. I know you’re skeptical that there’s an essential difference, but that just means you don’t really have anything against “all-at-once” accounts; you just don’t see that they’re needed.

    And the problem of achieving Block Universe consistency, in the way we both agree is necessary, is precisely where an all-at-once account *is* needed. If I could get you thinking along the lines of “history counting” as a way to determine effective dynamics, I think you’d be a lot more optimistic about a solution. In fact, given such a framework, solving your biggest concerns here would almost be trivial.

    Take the best analog to my proposed 4D-history-counting: 3D-state-counting in stat. mech. Consider a sequence of systems, all touching, with each new system much smaller than the last. At each system interface, the smaller system would effectively be constrained by a boundary condition from the neighboring larger system, as if the larger system were effectively a thermal reservoir. Zooming into some small part of this story, one could find this result simply by counting states. (The more ways something could happen, the more probable it is.) Zooming out, one gets the same answer, for the same reason. This sort of logic is consistent at all scales.

    This works in 3D, and I see no reason why it won’t work in 4D as well, if one gives up law-like dynamics. In stat. mech. the likelihood of each macrostate is just given by counting microstates. Similarly, for my 4D Block Universe extension, if you get rid of law-like dynamics and just count microHistories, you automatically get exactly the consistent account that we both agree is needed.
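    In case it helps, here is the kind of calculation I have in mind, as a minimal Python sketch (a generic toy invented for this thread, with every history weighted equally; it is not the actual framework of arXiv:1301.7012). Fix both the initial and the final condition of a little 0+1D system and just count the histories consistent with them, with no equation of motion anywhere:

        from itertools import product

        SITES, T = 5, 6  # a walker on 5 sites, taking T steps (stay, or hop one site)

        def histories(start, end):
            # Every site-sequence of length T+1 whose steps move at most one site,
            # consistent with BOTH the initial and the final boundary condition.
            out = []
            for middle in product(range(SITES), repeat=T - 1):
                h = (start,) + middle + (end,)
                if all(abs(a - b) <= 1 for a, b in zip(h, h[1:])):
                    out.append(h)
            return out

        # "All-at-once" probabilities: the walker's distribution at an intermediate
        # time comes from counting histories, not from integrating dynamics.
        hs = histories(start=0, end=4)
        t_mid = T // 2
        print([sum(1 for h in hs if h[t_mid] == s) / len(hs) for s in range(SITES)])

    The intermediate-time distribution falls out of the counting “all at once”, which is the sense in which microHistory-counting replaces law-like dynamics.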

    A much harder task is to address the other point at which you’re skeptical: showing that one can use retrocausality to get all the beables back into spacetime, for any possible entangled state. (That’s getting all my attention at the moment, and I’m cautiously optimistic I’ll either have it solved soon, or show that it can’t be done.) This measurement-problem issue, in comparison, I think is going to be relatively straightforward.

    One last comment: I guess I sort of understand your concerns with comparing my nice-sounding prose to perhaps-similar arguments people make about decoherence or other proposed measurement-problem solutions in standard QM. But standard QM is *doomed* to fail to address the measurement problem, because of how systems get blurred together in configuration space. For all your interest in spacetime-local beables, I’m still not sure you see how many of the central problems of QM are solved, in one fell swoop, by separating out everything in spacetime and getting rid of action-at-a-distance. (Maybe that last part is the key, because in your approaches there’s still action-at-a-distance going on…)

    Thanks for the great discussion!

    #2969
    Nathan Argaman
    Participant

    Hi again,

    I’m sorry for not paying attention for so long, but looking back now I think I still need to respond to some of Travis’ comments from a week ago. There are two things I would like to clarify. One is the issue of “locality.” I said I would like to see retrocausation used in a way that restores some kind of locality to QM, but since, as I also said, Bell used “locality” as a short form of local causality (of the forward-in-time variety), that’s obviously not going to work. What I mean is that, like Ken, I would like to have a QTWO where all the ontic variables are “local beables,” i.e., defined in space-time and not in some configuration space or Hilbert space. I hope that resolves it.

    The second issue is really just a choice of words, but words can be important, I think, and from your parenthetic remark ‘(certainly better than “free will”),’ it seems that you agree. Words are especially important for the novices who might be reading a review such as a Scholarpedia article, and much less so for the experts who know precisely what each phrase stands for. Now, I agree with most of what you said. It is natural to make the causal-arrow-of-time assumption, and even though I might find it frustrating that it’s not mentioned explicitly, with the causal arrow of time taken for granted it is completely reasonable for the assumption that the device settings are independent of lambda to be called “no conspiracies,” and for its negation to be called “conspiratorial.” In the discussion which ensued at the time, such conspiracies (those that deny the status of free variables to device settings which are ostensibly free) were said to be of the type that would undermine any scientific enquiry, and rightly so.

    Now, things become completely different in contexts in which one is explicitly considering retrocausality, i.e., in which the assumption of the causal arrow of time is not made. In this case, it is not the device settings but lambda which becomes a dependent variable (the device settings are still free variables). I think it is highly misleading (albeit not to the experts, who know all of this well) to continue to use the same name for that. Clearly, one should just call this option retrocausal, or a denial of the arrow of time, or something like that. Even if you don’t like it and are completely skeptical about the possibility of it leading to something useful, you shouldn’t try to get away with hinting that this option can be disposed of by the same arguments that were used against conspiratorial descriptions. You realize, of course, that retrocausality requires a completely different discussion in order for a novice to be able to decide whether or not it is worth pursuing.

    Given that retrocausality is supposed to apply to variables which are hidden also in the usual quantum description, such as which-path variables (e.g., those which determine which slit a particle went through in a delayed-choice interference experiment), the assumption of no retrocausality is clearly the less obvious one. Thus, in a brief discussion, the assumption which it makes sense to take for granted is not the arrow of time assumption but the “no conspiracies” assumption (in the sense that the device settings are indeed free variables, a sense which is meaningful in both causal and retrocausal discussions).

    It is this argumentation about the wording which I hope you find convincing. And I hope that Dustin too finds it convincing… Of course, there is also a lot about which we must agree to disagree, and there’s no reason to be particularly frustrated about that.

    Cheers, Nathan.
