Forum Replies Created
July 28, 2015 at 9:56 pm #2969
Nathan Argaman
Hi again,
I’m sorry for not paying attention for so long, but looking back now I think I still need to respond to some of Travis’ comments of a week ago. There are two things I would like to clarify. One is the issue of “locality.” I said I would like to see retrocausation used in a way which restores some kind of locality to QM, but as I also said that Bell used “locality” as a short form of local causality (of the forward-in-time variety), that’s obviously not going to work. What I mean is that, like Ken, I would like to have a QTWO (a quantum theory without observers) in which all the ontic variables are “local beables,” i.e., defined in space-time, and not in some configuration space or Hilbert space. I hope that resolves it.
The second issue is really just a choice of words, but words can be important I think, and from your parenthetical remark ‘(certainly better than “free will”),’ it seems that you agree. Words are especially important for the novices who might be reading a review such as a Scholarpedia article, and much less so for the experts who know precisely what each phrase stands for. Now, I agree with most of what you said. It is natural to make the causal-arrow-of-time assumption, and even though I might find it frustrating that it’s not mentioned explicitly, with the causal arrow of time taken for granted, it is completely reasonable for the assumption that the device settings are independent of lambda to be called “no conspiracies,” and for its negation to be called “conspiratorial.” In the discussion which ensued at the time, such conspiracies (those that deny ostensibly free device settings their status as free variables) were said to be of the type that would undermine any scientific inquiry, and rightly so.
Now, things become completely different in contexts in which one is explicitly considering retrocausality, i.e., in which the assumption of the causal arrow of time is not made. In this case, it is not the device settings but lambda which becomes a dependent variable (the device settings are still free variables). I think it is highly misleading (albeit not to the experts who know all of this well) to continue to use the same name for that. Clearly, you should just call this option retrocausal, or a denial of the arrow of time, or something like that. Even if you don’t like it and are completely skeptical about the possibility of it leading to something useful, you shouldn’t hint that this option can be disposed of by the same arguments that were used against conspiratorial descriptions. You realize, of course, that retrocausality requires a completely different discussion in order for a novice to be able to decide whether or not it is worth pursuing.
Given that retrocausality is supposed to apply to variables which are hidden also in the usual quantum description, such as which-path variables (e.g., those which determine which slit a particle went through in a delayed-choice interference experiment), the assumption of no retrocausality is clearly the less obvious one. Thus, in a brief discussion, the assumption which it makes sense to take for granted is not the arrow of time assumption but the “no conspiracies” assumption (in the sense that the device settings are indeed free variables, a sense which is meaningful in both causal and retrocausal discussions).
It is this argumentation about the wording which I hope you find convincing. And I hope that Dustin too finds it convincing… Of course, there is also a lot about which we must agree to disagree, and there’s no reason to be particularly frustrated about that.
Cheers, Nathan.
July 27, 2015 at 3:38 pm #2968
Nathan Argaman
Hi Ken,
I’ve finally read not only your “information” article but also your 1307 arXiv preprint, which indeed required some “wading.” I must say I think you’re on the right path, with the most appropriate motivations I’ve seen yet (that is, of course, to the best of my judgement). And there’s a lot to do. I wonder why there aren’t more people working along such lines. For example, do you know what Rob Spekkens thinks about your line of argument? Does he accept now that retrocausation is worth pursuing?
Two things on the technical level: (a) I think you will need a measure for paths, as in Feynman path integrals; they’re not discrete, and you can’t just count them with integers; (b) Even accepting your idea of an entropy for 4D histories, I still think we’ll need the 3D concept in addition, e.g., so as to be able to identify low-entropy past boundary conditions, etc.
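For concreteness on point (a), the standard way such a measure is defined (textbook material, added here only as an illustration) is by time-slicing, with an $N$-dependent normalization that has no counterpart in naive integer counting:

$$
\int \mathcal{D}[x(t)]\, e^{iS[x]/\hbar}
\;=\;
\lim_{N\to\infty}
\left(\frac{m}{2\pi i \hbar \epsilon}\right)^{N/2}
\int \prod_{j=1}^{N-1} dx_j \;\, e^{iS(x_0,\ldots,x_N)/\hbar},
\qquad
\epsilon = \frac{t_f - t_i}{N}.
$$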
And one more thing: I think that by the time we’ve learned how to describe a measurement in a retrocausal theory of local beables, we will see that our ontic variables describe waves (with quantum noise, not just a solution of a PDE), but our epistemic variables are nevertheless “corpuscular,” in the sense that once something has been measured irreversibly, even just a single click in the detector, it’s either there or it isn’t. So the “it from bit” ideas will still apply in some sense, but only in the limited sense relevant to the epistemic variables. (And, of course, unitary evolution will be natural – you simply can’t change the information in your epistemic state between updates).
OK, I guess that’s it. Thank you very much for your efforts in organizing this, and especially for inviting me to join in. Please tell me if you’d like to look at the stochastic quantization ideas and discuss them as well.
Cheers, Nathan.
July 26, 2015 at 5:59 am #2966
Nathan Argaman
Hello Aurelian and Rod,
Thanks very much for your replies.
I will need to take a look at the papers of Miller and of Aharonov and Gruss.
The way I see it, it is completely OK for the Born rule to be stipulated, rather than derived. Newton also stipulated the universal law of gravitation, even though he disliked the idea of action at a distance. Of course, having a derivation from some “basic” physical principles is nice, but it is not strictly required.
What I was looking for a few years ago, as Ken Wharton also was and still is, is a reformulation of QM in terms of exclusively local beables. We know from Bell’s theorem that such a formulation must be retrocausal. The non-local wavefunction can then be understood as an epistemic tool, and it cannot affect the “paths of the particles” or whatever the ontic variables may be. Needless to say, such a formulation has not been found yet.
My paper on this was published in 2010, and is available in
http://scitation.aip.org/content/aapt/journal/ajp/78/10/10.1119/1.3456564
and in
http://arxiv.org/abs/0807.2041
(I thought you would see that I linked to it in my contribution to this conference). I was able to include a simplistic retrocausal toy model in it, designed to be just as simplistic as the nonlocal toy model which Bell himself included in his original paper. You will not find a derivation of Born’s rule there, or anything as general as that, but it seems to be the earliest place in which one can find an explicit retrocausal model which reproduces Bell-type correlations in terms of only local beables.
In the discussion, I wanted to compare with other retrocausal models, and I cited your model but did not include it in the comparison because it did not provide a different route to obtaining the probabilities of the measurement outcomes. In this sense it is similar not only to Aharonov et al., but also to the Transactional Interpretation, which uses the same math, but nevertheless introduces the novel concept of retrocausality.
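Since the comparison hinges on how a model generates the outcome probabilities, here is a minimal sketch (my own illustration, deliberately crude, and not the model of the 2010 paper) of the structure all such retrocausal toy models share: the hidden variable’s distribution is allowed to depend on the future settings $(a, b)$, which is exactly the retrocausal step, and this already suffices to reproduce the singlet correlator and violate the CHSH inequality.

```python
# Schematic retrocausal toy model (illustration only): the hidden variable
# is taken, as crudely as possible, to be the outcome pair itself, with a
# joint distribution P(A,B|a,b) = (1 - A*B*cos(a-b))/4 that depends on the
# *future* device settings a and b.
import numpy as np

rng = np.random.default_rng(0)

def correlator(a, b, n=200_000):
    """Sample outcomes A, B = +/-1 and estimate E(a,b) = <A*B> = -cos(a-b)."""
    p_same = (1 - np.cos(a - b)) / 2      # P(A == B), so E = 2*p_same - 1
    A = rng.choice([-1, 1], size=n)       # Alice's outcome: a fair coin
    B = np.where(rng.random(n) < p_same, A, -A)
    return np.mean(A * B)

# CHSH with the standard settings: |S| <= 2 for any Bell-local model.
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlator(a0, b0) + correlator(a1, b0) \
    + correlator(a1, b1) - correlator(a0, b1)
print(abs(S))  # ~2.83, i.e. 2*sqrt(2): the quantum value
```

Of course, a model this crude explains nothing by itself; the point of the toy models in the literature is to obtain the same statistics from local beables with a physically motivated, settings-dependent distribution.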
Thanks again, Nathan.
July 23, 2015 at 6:12 am #2956
Nathan Argaman
Hello again Rod,
I’m sorry I didn’t respond to your reply at the time, but better late than never. First I want to thank you for it, but then I want to clarify what I meant.
I wanted to find out in what sense you claim that your model explains the phenomena Bell’s theorem identifies as perplexing. In my mind, the first thing a model must do to achieve that is to give formulae which generate the correct probabilities for the outcomes. Standard QM does that, and Bohmian mechanics does that in a different way, provided you assume the “equilibrium” distribution for the initial positions. I mentioned the possibility of other distributions only as a reminder of this – if you choose a “wrong” distribution, you can even get wrong results! You say that your model can also accommodate “wrong” distributions, but if the final boundary conditions are supplied in the “usual” manner, i.e., with the same probabilities, a “wrong” distribution won’t lead to “wrong” results, will it?
In my work, I provided a retrocausal toy model which gives the “correct” results in a different way, and I wanted to compare this with other publications discussing zig-zag causation, but I couldn’t make a meaningful comparison with your work. I think that’s not a surprise, because as you say your work is an “add-on,” and uses the standard QM formulae to get the outcome probabilities.
In this sense, I think it’s quite different from Bohmian Mechanics (BM). True, you have the particle path going down the corresponding “finger” in your measurement device, as in BM, but you’ve supplied the corresponding final boundary condition, so the probability for this or that outcome is predetermined, unlike BM. And if I want to compare and ask how the probabilities are determined, I’m back to comparing with standard QM.
Am I right?
Thanks, Nathan.
July 20, 2015 at 7:15 pm #2925
Nathan Argaman
OK, friends,
Now I’ve read the rest of the discussion again, and I see that Ken has already given good answers to your questions, Travis. I’d like to add just a few more words:
(a) First, it seems that we will all remain unhappy unless we have a quantum theory without observers. That was the main point in my discussion back when I wrote my article: We had Bohmian mechanics, which was represented in Bell’s 1964 article by a simplistic nonlocal toy model; subsequently, simplistic retrocausal toy models were found to work as well, but we haven’t found a retrocausal analogue of Bohmian mechanics yet. We should strive to find one (or else, prove that it’s impossible, which isn’t going to be easy to do – as you say, there’s no obstacle in sight).
(b) I think locality should be understood as synonymous with local causality (those are the words Bell himself used in his later publications), i.e., it is a notion which is to be defined only in the context of discussions in which the causal arrow of time has already been accepted.
(c) For retrocausal models, in order to describe the physics of the world in which we live (i.e., in order to develop a QTWO) we will need to break time-reversal symmetry: we will need to somehow include in our description the fact that the entropy was low in the past. Perhaps this can be accomplished just by treating initial conditions differently from final conditions, as in classical mechanics. One could then hope to develop a theory in which “no signalling” and “information causality” are manifest. That would rule out the absurdities you allude to in your Boston/China example (I don’t call them conspiratorial, because you haven’t denied the free-variable status of any device settings). So here my reply is different from Ken’s – I would not rely on time symmetry in this case.
Overall, I don’t think it’s going to be that easy to find a retrocausal QTWO with strictly local beables. Just constructing a description of a dissipative measurement would require a lot of work, perhaps akin to the well-known Caldeira-Leggett treatment. But I certainly see it as well worth pursuing.
Travis, are you by any chance persuaded? Can you bring up further points, or do you just feel outnumbered?
Best, Nathan.
July 20, 2015 at 4:54 pm #2920
Nathan Argaman
Hi Travis and Justin,
I’ve been meaning to get around to responding to you for quite a few days now, and I feel that perhaps I should apologize for taking so long. But then one of the first items in my response is to say that I liked your description, Travis, of “falling off the back burner.” So perhaps I’ll leave it at that.
I agree essentially with everything you said, Justin, in your reply to my previous post. The only exception is that I would like to follow Bell’s use of lambda, even though he assumed local causality and I am now discussing retrocausal models. And I would like to use retrocausality to save locality. Thus, I would say that lambda is a set of parameters or functions that describes everything that happens to the particles before they are detected, and not just their properties or attributes at the source. As Travis points out, that is not the only possibility (the way Bell used them, different possibilities may be equivalent, but now they aren’t).
Next, let me respond to Travis’ post. Although I clearly found the discussion of the Scholarpedia article quite frustrating (and I still think you should at the very least mention the fact that you’re assuming the causal arrow of time in the main text, and not just in a footnote), at least you know what you’re doing! The example that dismayed me most was the original piece by Shimony, Horne and Clauser on this [Dialectica 39, 97 (1985)]. I had just given a talk at the Perimeter Institute (pirsa.org/06110017) – the first time I had described my vague ideas to an audience with experts on the matter – and they directed me to it. The article briefly describes a situation where Alice and Bob do not care to decide for themselves which settings to use, and instead consult their secretaries, and somebody has prearranged with the manufacturer of their measurement devices to give the secretaries lists of parameter settings according to a data table of his choice, and to have the devices reproduce results from additional lists. Of course, you can then get anything you want! You can see the original reproduced in this link:
https://books.google.co.il/books?id=Aee0XfBUDZQC&pg=PA163&lpg=PA163&dq=Shimony+Bell+secretary&source=bl&ots=ByiclUvzfl&sig=GlTX4igWLrMmBMpBITjPLUGEVPk&hl=iw&sa=X&ved=0CEEQ6AEwBGoVChMIi6i3gLPlxgIVRLwUCh1megCb#v=onepage&q=Shimony%20Bell%20secretary&f=false
Now that is what earned this type of description the title “conspiratorial,” and for good reason. If you read through their discussion, you find that they were taking the causal arrow of time for granted all along, and in that context, indeed correlations between the device settings and the hidden variable must be conspiratorial. Bell responded, saying that such conspiracies would undermine any scientific inquiry. And that is absolutely right. I was very annoyed to find the idea of retrocausation essentially branded as unscientific in this way. The line of thought seems to be roughly this:
(a) It is suggested that retrocausation could lead to certain unexpected correlations, but
(b) Retrocausation itself is unexpected.
(c) Assume, then, the causal arrow of time, i.e., the absence of retrocausation.
(d) One finds that in this case the said correlations can only occur if something conspiratorial and unscientific occurs.
(e) The upshot is that retrocausation is branded as conspiratorial.
Am I wrong, Travis?
I really think that the word “conspiratorial” should only be used to describe theories or descriptions in which it is denied that the device settings are free variables (free variables are a well-defined mathematical notion, and this therefore conforms with the principle of not bringing in human beings; unfortunately, it seems that most of the community has proceeded to bring in the notion of free will, which is philosophically complicated, and thus does not improve our chances of making progress in quantum theory). But, of course, every author is entitled to his choice of words, so what can I do?
One more thing, Travis – if you want to take a look at another toy model which, although it doesn’t necessarily reproduce anything of QM, at least would seem to alleviate your fear that the very definition of things would “crumble away,” I suggest you take a look at Ken’s work:
http://www.mdpi.com/2078-2489/5/1/190 , referenced in his post #2834 within the discussion of his own contribution here.
Now, you have gone on and continued to have a very interesting discussion to which I would also like to respond, but let me end this post here, and then begin another one soon.
Best, Nathan.
July 16, 2015 at 9:07 pm #2810
Nathan Argaman
Hi Ken,
There’s one point which has been nagging at the back of my mind these last few days: When I said “good” retrocausal models, what I meant is that they should be clear about what the ontic variables and the epistemic variables are, and that there would be a natural way to take the log of the number of possible ontic states and associate it with an entropy. Intuitively, I think in my model lambda does not represent an ontic variable – it is an angle which seems to divide the available phase-space into parts which lead to different outcomes. When the parameter settings are the same, there are just two relevant parts. When Alice and Bob choose different settings, apparently the structure of the available phase-space is different. You could think that it’s only the way the phase-space is subdivided, but you can’t go too far in that direction and here’s why: If the structure of the phase space is not changed then there’s no apparent reason for the probability density to change, and for such models Bell’s original analysis works (with lambda representing the phase-space variable), so they cannot violate the inequality and won’t explain anything.
Now, I’m not saying it’s going to be simple if the parameter settings affect the structure of the phase space, but I think that’s a thing to explore. Also, in this sense your model is different, because you do have an explicit description of something rotating along the path, so it looks very much like an ontic variable. Again, you can think of my lambda as the value of that variable at the point along the path which corresponds to the source. That is just one way to subdivide a phase space: the space of functions is clearly divided into classes which share the same value at a point. But intuitively I think that that’s not the relevant subdivision. I would think that the entropy would refer to the phase space of the values of the ontic variable at the source, at just one instant. So we have to keep looking.
Best, Nathan.
July 14, 2015 at 9:31 am #2723
Nathan Argaman
Hi Daniel,
I agree that retrocausation is a good axiom to use. At the very least, its negation – the assumption that the causal arrow of time applies to microscopic degrees of freedom – should not be used.
But I think you’ve overstated your point. From your description, it sounds like there’s a potential causal loop – couldn’t Jim ask Alice and Bob whether they’ve observed a violation of Bell’s inequality, and if they have, decide to jam it? The answer is, as you know, that even if Jim “steers” them into an entangled state, they must know the results of his measurement in order to bin their data in a way that would exhibit violations of the inequality. I think that even in a brief description, you need to mention that, in order to avoid more controversy than necessary.
To me, this is indeed an indication that the wavefunction is epistemic rather than ontic. In a stochastic theory where future as well as past boundary conditions affect the probabilities of microscopic degrees of freedom, and at the same time a low-entropy past condition is imposed, there is a chance that information causality will be a consequence, and that all of QM will follow. In such a theory, there will be a need for something like the wavefunction to represent the known (epistemic) probabilities up to some time, and it will not be surprising that its complexity is exponential in the number of particles (like that of the Liouville distribution in classical many-body phase space). I do find it surprising that the exponential complexity issue is hardly ever brought up as another indication in this direction.
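To spell out the counting (a standard observation, added here only for concreteness): for $N$ two-valued degrees of freedom, the epistemic objects on both the classical and the quantum side are exponentially large, while a single ontic configuration is not:

$$
\underbrace{2^N - 1}_{\substack{\text{independent probabilities in a} \\ \text{classical distribution over } N \text{ bits}}}
\;\sim\;
\underbrace{2\cdot 2^N - 2}_{\substack{\text{real parameters of an} \\ N\text{-qubit pure state}}}
\;\gg\;
\underbrace{N}_{\text{one configuration}} .
$$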
All the best, Nathan.
July 13, 2015 at 6:33 pm #2691
Nathan Argaman
Hello Dustin,
I read your contribution with much interest. Your motivations seem to overlap with mine to a very large extent (see my own contribution in this conference, also available at http://arxiv.org/abs/0807.2041). Nevertheless, the technical details are quite different. I like your use of the Wheeler-Feynman interaction along light-cones – where the proper distance between the particles vanishes.
I was also dismayed, when I learned about it, by the fact that retrocausation had been characterized as conspiratorial. Clearly, those who adopt this characterization are taking the causal arrow of time for granted. Apparently their intuition is so firmly grounded in the macroscopic world that they just can’t really think in any other way. Unfortunately, even in those discussions which aim to carefully lay out all the assumptions involved in Bell’s theorem, this simple fact – that they take the causal arrow of time for granted – is not pointed out. However, if you ask them – which I am bent on doing – whether or not they’re assuming the causal arrow, they’re usually happy to admit that they are.
I would like to ask you the following: in what ways do you think that our contributions overlap, and in what ways do you think that they complement each other?
Thanks, Nathan.
July 13, 2015 at 4:41 pm #2689
Nathan Argaman
Hello Rod,
So far, I only skimmed through your present article, but I did read the 2008 (or rather, the 2006) version thoroughly at the time. My understanding was that it gives a very interesting description of what happens between a preparation and a measurement, but it does not give a definite prescription for calculating the probabilities for different outcomes of the said measurement (for this reason, I did not include it in the comparison table in my article on Bell’s theorem, as noted there). It seemed to me that, implicitly, one is to calculate these probabilities by the usual rules of QM. That is quite distinct from Bohmian mechanics, where there is a clear independent prescription for evaluating probabilities for measurement results (and, in fact, if the original density distribution is taken as non-standard, one may obtain non-QM results).
Let me ask: Have I understood correctly? Is the current updated version different in this sense?
Of course, even if the answer is negative, your work does accomplish a lot, and is quite impressive. And also, it is in good company – the two-time formalism of Aharonov et al shares this attribute – it provides no way of predicting the outcome probabilities of the final measurement (other than using standard QM).
Thanks, Nathan.
July 12, 2015 at 4:26 pm #2665
Nathan Argaman
Hi again, Ken,
Regarding the “extra” time, it was there in the original Stochastic Quantization paper and subsequent works, but it is also possible (as they occasionally note) to simply stipulate an “equilibrium distribution” to begin with, and then there’s no need for the “equilibration” to “occur” as this “extra” time tends to infinity (of course, the “equilibration” here is distinct from that of Bohmian mechanics). So if you don’t like the “extra” time, you can simply do without it. I myself would also prefer it that way (of course, that raises the question of how the different parts of space-time “know” about each other, but I don’t think we should allow ourselves to be bothered by that; think of Newton, who disliked the long-range instantaneous character of gravitational forces, but developed his theory anyway).
Regarding the model, I was trying to stay as close as possible to Bell’s analysis, so I initially used $\lambda$ to denote all of the relevant hidden variables. I then focused on the one which represents the photons’ polarization as they leave the source (or, if you like, the direction of the angular momentum of the intermediate state of the emission cascade, assuming a source of doubly-excited Ca-40 atoms). The remaining variables are then essentially redundant. Thus, I would look at your model as a more detailed version, where you describe the whole sequence of angles. You could have, say, a distinct angle for every picosecond of photon flight, but in the $\gamma \to 0$ limit there’s only one dominant rotation, so they’re mostly redundant. My lambda is then the angle corresponding to the moment the photons leave the source. It was just not necessary to give a more detailed description.
I like the details of your model which you stressed – the fact that it’s not “collapsey,” and the fact that it’s clearly determined by the boundaries at the time of measurement. I did mention in my work that there’s something special about irreversible measurements that must be a determining factor, because if you think of Alice just letting her photon go through a polarizing cube, then she may later recombine (with another cube) the two partial beams to recreate the original photon polarization (i.e., she may construct an interferometer and regain the original photon state, or a rotated one), and then she can measure with a different orientation by using yet another cube. In order for the toy-model to work, only the irreversible measurement counts, with the proper rotations taken into account.
Best, Nathan.
July 11, 2015 at 9:01 pm #2640
Nathan Argaman
Thanks, Ken. I now read your work with Price, and indeed my point above largely overlaps with your discussion there.
Regarding consciousness, please don’t bring that into the discussion – I’m sure it won’t help, just as the introduction of the concept of “free will” led to much discussion, with only a fraction pertaining to the relevant quantum phenomena. These notions are too human, and therefore much harder to understand than physical phenomena, even quantum phenomena! For the purposes of discussing and studying quantum phenomena, measuring devices which irreversibly register the results (in their memories) are completely sufficient (likewise, “free variables” is a completely well-defined mathematical concept which plays the relevant role in the discussion, without leading one astray from quantum physics into human affairs).
That said, I completely agree with you that one needs to spend some time clarifying these issues for oneself. I did so recently, re-reading parts of Price’s book in the process. The upshot is that the fact that we can only remember the past and not the future (or, to be more careful, our computer memories can only register information from their past…) is yet another instance of the Principle of Independent Incoming Influences, and is tightly related with the fact that there are sources of low entropy in the past. For example, if you want to store a bit A in memory, you can take a blank bit M=0 of memory, and perform a controlled-not operation, with A your control. After this, M will “remember” A. The controlled-not operation is reversible, but you can’t run the procedure in reverse because there’s no way to bring in a blank (low entropy) bit of memory from the future.
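A minimal sketch of this argument in code (my own illustration; the function name and framing are mine, not from the discussion):

```python
# Controlled-NOT as a memory write: (A, M) -> (A, M XOR A).
def cnot(a, m):
    """Reversible controlled-NOT: flips the target m iff the control a is 1."""
    return a, m ^ a

for a in (0, 1):
    # Writing requires a blank (low-entropy) memory bit, M = 0:
    a_out, m_out = cnot(a, 0)
    assert m_out == a                    # M now "remembers" A
    # The gate is its own inverse, so the dynamics is perfectly reversible:
    assert cnot(a_out, m_out) == (a, 0)
    # But if M starts in an unknown, random state m, the record m ^ a is
    # just as random and, taken by itself, carries no information about A.
print("a record forms only when a blank bit is available beforehand")
```

The reversibility of the gate is the point: nothing in the dynamics favors remembering the past over remembering the future; the asymmetry enters only through the supply of blank, low-entropy bits, which arrive from the past and not from the future.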
I hope this helps. Nathan.
July 9, 2015 at 9:37 pm #2583
Nathan Argaman
However, your planned attempt to generalize this to partially entangled states leads me to think that the symmetry principle may not always work: it appears that introducing asymmetric states will be easy.
More likely, the relevant physical principle is the increase of entropy, or the fact that the entropy was low in the past (I say this partly because of the relation between “information causality” and Tsirelson’s bound). After all, isn’t that always what prevents us from signalling into the past? If we didn’t have a “resource” of low entropy in the past, we wouldn’t be able to signal to the future either, in the following sense.
Think of a protocol where Alice sends a spin-1/2 particle to Bob (who is to her future), and they both perform measurements on it. The usual thing is that Alice can encode one bit per particle, by “pre-selecting” the output of her measurement. That means that she acts upon the result of her measurement, passing the particle to Bob only if its spin is in the direction which corresponds to the data she wants to send. But we can design a protocol where she is prohibited (by fiat) from doing so – the particle is passed to Bob regardless of her output. In that case all she can control is the measurement she makes. Thinking in terms of causation, it would appear that she can still signal to Bob, but in fact all Bob will be able to measure are EPRB-type correlations with her.
In fact, it is clear why: in order to convey information she has to pass to Bob a system or particle which has a large phase-space (or Hilbert space), and to encode the information by restricting the state of the particle to be in part of that Hilbert space, with the different parts corresponding to different messages. By pre-selecting, she does just that. If you think of the entropy of the particle or system she is sending in order to convey one bit of information, it must be smaller by at least one bit relative to its entropy were she to send a random signal (or the overall log of the size of the space of its possible states). However, if she can’t act on the output of her measurement, she can’t reduce the entropy – it remains at least as large as it was prior to her measurement.
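A minimal sketch of the density-matrix version of this point (my own illustration, assuming the particle’s incoming spin is completely unknown, i.e., maximally mixed, as for half of an EPRB pair):

```python
# Spin-1/2 sent from Alice to Bob: with pre-selection Bob's statistics
# depend on Alice's setting a (one bit can be encoded); without it he
# receives the maximally mixed state for every a, and only the *joint*
# record of both outcomes shows the EPRB-type correlations.
import numpy as np

def proj(theta):
    """Projector onto spin-up along an axis at angle theta in the x-z plane."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

rho = np.eye(2) / 2                      # incoming spin completely unknown
b = 1.3                                  # Bob's (fixed) analyzer angle

for a in (0.0, 0.7, 2.1):                # Alice's choice of measurement axis
    P_up, P_dn = proj(a), np.eye(2) - proj(a)
    # Pre-selection allowed: Alice forwards the particle only on "up".
    rho_pre = P_up @ rho @ P_up / np.trace(P_up @ rho)
    # Pre-selection forbidden: she forwards it whatever the outcome.
    rho_any = P_up @ rho @ P_up + P_dn @ rho @ P_dn
    print(f"a={a:.1f}:  Bob's p(up) = {np.trace(proj(b) @ rho_pre):.3f} "
          f"(pre-selected), {np.trace(proj(b) @ rho_any):.3f} (not)")
# Pre-selected: p(up) tracks a. Not pre-selected: exactly 0.5 for every a.
```

By acting on her outcome, Alice reduces the entropy of what she sends (from one bit to zero); forbidden from doing so, she leaves the entropy at its maximal value, and the one-particle statistics carry nothing.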
This type of entropic analysis works in both the classical and the quantum description of a system, and I guess it will have to work in any “good” retrocausal model as well. I don’t think that the retrocausal toy models we have worked with so far are “good” in this sense.
What do you think?
I want to make one more minor point, regarding your discussion of Tsirelson’s bound. You use the term in two distinct ways: one which I think accords with the usual usage, where the Tsirelson bound is fixed at $2\sqrt{2}$, and one where you imply that it may change when you vary $\gamma$. What changes is the maximum value that the relevant combination of correlators can take in your model, not the Tsirelson bound itself. In fact, I don’t think it is a coincidence that your model always conforms to the bound (by some finite margin for non-zero values of $\gamma$). But I’m no longer clear as to how one could best hope to demonstrate that that’s necessary for such models – by symmetry or by entropy considerations.
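For reference (the standard definitions, added here for the novice reader): with correlators $E(a,b)$, the relevant combination and the two bounds are

$$
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\qquad
|S| \le 2 \;\;\text{(Bell-CHSH, local causality)},
\qquad
|S| \le 2\sqrt{2} \;\;\text{(Tsirelson, quantum mechanics)}.
$$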
July 9, 2015 at 5:36 am #2551
Nathan Argaman
No. In high-energy physics it is typically the mass of a particle which is protected. Because of renormalization, the parameters of the Lagrangian change their values (“running coupling constants”), so the observed mass of a particle should “naturally” be on the order of the energy scale of the theory. Some masses are much smaller. The prime example is the photon mass. It is zero because of gauge symmetry (a mass term for the photon would break gauge symmetry). Another typical example is the pion, which has a small mass because of an approximate symmetry.
July 6, 2015 at 5:00 pm #2502
Nathan Argaman
Great work! In my mind, this is just the right sort of reply to Wood and Spekkens.
It is worth mentioning that our colleagues seeking foundational theories of nature in the field of high-energy physics generally consider “protection by symmetry” to be a legitimate, and in fact standard, form of fine tuning.