Weekly Papers on Quantum Foundations (49)

Publication date: Available online 7 December 2018

Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics

Author(s): Jonathan Bain

Abstract

Intrinsic topologically ordered (ITO) condensed matter systems are claimed to exhibit two types of non-locality. The first is associated with topological properties and the second is associated with a particular type of quantum entanglement. These characteristics are supposed to allow ITO systems to encode information in the form of quantum entangled states in a topologically non-local way that protects it against local errors. This essay first clarifies the sense in which these two notions of non-locality are distinct, and then considers the extent to which they are exhibited by ITO systems. I will argue that while the claim that ITO systems exhibit topological non-locality is unproblematic, the claim that they also exhibit quantum entanglement non-locality is less clear, and this is due in part to ambiguities associated with the notion of quantum entanglement. Moreover, any argument that claims some form of “long-range” entanglement is necessary to explain topological properties is incomplete if it fails to provide a convincing reason why mechanistic explanations should be favored over structural explanations of topological phenomena.

Publication date: Available online 30 November 2018

Source: Physics Letters A

Author(s): Atul Singh Arora, Kishor Bharti, Arvind

Abstract

We construct a non-contextual hidden variable model consistent with all the kinematic predictions of quantum mechanics (QM). The famous Bell–KS theorem shows that non-contextual models which satisfy a further reasonable restriction are inconsistent with QM. In our construction, we define a weaker variant of this restriction which captures its essence while still allowing a non-contextual description of QM. This is in contrast to the contextual hidden variable toy models, such as the one by Bell, and brings out an interesting alternate way of looking at QM. The results also relate to the Bohmian model, where it is harder to pin down such features.

Leonard Susskind, a pioneer of string theory, the holographic principle and other big physics ideas spanning the past half-century, has proposed a solution to an important puzzle about black holes. The problem is that even though these mysterious, invisible spheres appear to stay a constant size as viewed from the outside, their interiors keep growing in volume essentially forever. How is this possible?

In a series of recent papers and talks, the 78-year-old Stanford University professor and his collaborators conjecture that black holes grow in volume because they are steadily increasing in complexity — an idea that, while unproven, is fueling new thinking about the quantum nature of gravity inside black holes.

Black holes are spherical regions of such extreme gravity that not even light can escape. First discovered a century ago as shocking solutions to the equations of Albert Einstein’s general theory of relativity, they’ve since been detected throughout the universe. (They typically form from the inward gravitational collapse of dead stars.) Einstein’s theory equates the force of gravity with curves in space-time, the four-dimensional fabric of the universe, but gravity becomes so strong in black holes that the space-time fabric bends toward its breaking point — the infinitely dense “singularity” at the black hole’s center.

According to general relativity, the inward gravitational collapse never stops. Even though, from the outside, the black hole appears to stay a constant size, expanding slightly only when new things fall into it, its interior volume grows bigger and bigger all the time as space stretches toward the center point. For a simplified picture of this eternal growth, imagine a black hole as a funnel extending downward from a two-dimensional sheet representing the fabric of space-time. The funnel gets deeper and deeper, so that infalling things never quite reach the mysterious singularity at the bottom. In reality, a black hole is a funnel that stretches inward from all three spatial directions. A spherical boundary called the “event horizon” surrounds it, marking the point of no return.

Since at least the 1970s, physicists have recognized that black holes must really be quantum systems of some kind — just like everything else in the universe. What Einstein’s theory describes as warped space-time in the interior is presumably really a collective state of vast numbers of gravity particles called “gravitons,” described by the true quantum theory of gravity. In that case, all the known properties of a black hole should trace to properties of this quantum system.

Indeed, in 1972, the Israeli physicist Jacob Bekenstein figured out that the area of the spherical event horizon of a black hole corresponds to its “entropy.” This is the number of different possible microscopic arrangements of all the particles inside the black hole, or, as modern theorists would describe it, the black hole’s storage capacity for information.
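For reference, the precise form of Bekenstein's relation, with the constant fixed by Hawking's later calculation, is the area law (strictly, the entropy is the logarithm of the number of microscopic arrangements):

$$ S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2}, \qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m}. $$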

Bekenstein’s insight led Stephen Hawking to realize two years later that black holes have temperatures, and that they therefore radiate heat. This radiation causes black holes to slowly evaporate away, giving rise to the much-discussed “black hole information paradox,” which asks what happens to information that falls into black holes. Quantum mechanics says the universe preserves all information about the past. But how does information about infalling stuff, which seems to slide forever toward the central singularity, also evaporate out?
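The temperature Hawking derived for a black hole of mass M is

$$ T_H = \frac{\hbar c^3}{8 \pi G M k_B}, $$

which is vanishingly small for astrophysical black holes, and is why the resulting evaporation is so slow.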

The relationship between a black hole’s surface area and its information content has kept quantum gravity researchers busy for decades. But one might also ask: What does the growing volume of its interior correspond to, in quantum terms? “For whatever reason, nobody, including myself for a number of years, really thought very much about what that means,” said Susskind. “What is the thing which is growing? That should have been one of the leading puzzles of black hole physics.”

In recent years, with the rise of quantum computing, physicists have been gaining new insights about physical systems like black holes by studying their information-processing abilities — as if they were quantum computers. This angle led Susskind and his collaborators to identify a candidate for the evolving quantum property of black holes that underlies their growing volume. What’s changing, the theorists say, is the “complexity” of the black hole — roughly a measure of the number of computations that would be needed to recover the black hole’s initial quantum state, at the moment it formed. After its formation, as particles inside the black hole interact with one another, the information about their initial state becomes ever more scrambled. Consequently, their complexity continuously grows.

Using toy models that represent black holes as holograms, Susskind and his collaborators have shown that the complexity and volume of black holes both grow at the same rate, supporting the idea that the one might underlie the other. And, whereas Bekenstein calculated that black holes store the maximum possible amount of information given their surface area, Susskind’s findings suggest that they also grow in complexity at the fastest possible rate allowed by physical laws.
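The "fastest possible rate" here refers to a conjectured bound from the same line of work by Brown, Susskind and collaborators, a black-hole analogue of Lloyd's limit on computation, which black holes are proposed to saturate:

$$ \frac{dC}{dt} \le \frac{2E}{\pi\hbar}, $$

with E the black hole's total energy, Mc² for a hole of mass M.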

John Preskill, a theoretical physicist at the California Institute of Technology who also studies black holes using quantum information theory, finds Susskind’s idea very interesting. “That’s really cool that this notion of computational complexity, which is very much something that a computer scientist might think of and is not part of the usual physicist’s bag of tricks,” Preskill said, “could correspond to something which is very natural for someone who knows general relativity to think about,” namely the growth of black hole interiors.

Researchers are still puzzling over the implications of Susskind’s thesis. Aron Wall, a theorist at Stanford (soon moving to the University of Cambridge), said, “The proposal, while exciting, is still rather speculative and may not be correct.” One challenge is defining complexity in the context of black holes, Wall said, in order to clarify how the complexity of quantum interactions might give rise to spatial volume.

A potential lesson, according to Douglas Stanford, a black hole specialist at the Institute for Advanced Study in Princeton, New Jersey, “is that black holes have a type of internal clock that keeps time for a very long time. For an ordinary quantum system,” he said, “this is the complexity of the state. For a black hole, it is the size of the region behind the horizon.”

If complexity does underlie spatial volume in black holes, Susskind envisions consequences for our understanding of cosmology in general. “It’s not only black hole interiors that grow with time. The space of cosmology grows with time,” he said. “I think it’s a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity. And whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don’t know the answer.”

 


Weatherall, James Owen (2018) Why Not Categorical Equivalence? [Preprint]

That quantum mechanics is a successful theory is not in dispute. It makes astonishingly accurate predictions about the nature of the world at microscopic scales. What has been in dispute for nearly a century is just what it’s telling us about what exists, what is real. There are myriad interpretations that offer their own take on the question, each requiring us to buy into certain as-yet-unverified claims — hence assumptions — about the nature of reality.

Now, a new thought experiment is confronting these assumptions head-on and shaking the foundations of quantum physics. The experiment is decidedly strange. For example, it requires making measurements that can erase any memory of an event that was just observed. While this isn’t possible with humans, quantum computers could be used to carry out this weird experiment and potentially discriminate between the different interpretations of quantum physics.

“Every now and then you get a paper which gets everybody thinking and discussing, and this is one of those cases,” said Matthew Leifer, a quantum physicist at Chapman University in Orange, California. “[It] is a thought experiment which is going to be added to the canon of weird things we think about in quantum foundations.”

The experiment, designed by Daniela Frauchiger and Renato Renner, of the Swiss Federal Institute of Technology Zurich, involves a set of assumptions that on the face of it seem entirely reasonable. But the experiment leads to contradictions, suggesting that at least one of the assumptions is wrong. The choice of which assumption to give up has implications for our understanding of the quantum world and points to the possibility that quantum mechanics is not a universal theory, and so cannot be applied to complex systems such as humans.

Quantum physicists are notoriously divided when it comes to the correct interpretation of the equations that are used to describe quantum goings-on. But in the new thought experiment, no view of the quantum world comes through unscathed. Each one falls afoul of one or another assumption. Could something entirely new await us in our search for an uncontroversial description of reality?

Quantum theory works extremely well at the scale of photons, electrons, atoms, molecules, even macromolecules. But is it applicable to systems that are much, much larger than macromolecules? “We have not experimentally established the fact that quantum mechanics applies on larger scales, and larger means even something the size of a virus or a little cell,” Renner said. “In particular, we don’t know whether it extends to objects the size of humans, and even less whether it extends to objects the size of black holes.”

Despite this lack of empirical evidence, physicists think that quantum mechanics can be used to describe systems at all scales — meaning it’s universal. To test this assertion, Frauchiger and Renner came up with their thought experiment, which is an extension of something the physicist Eugene Wigner first dreamed up in the 1960s. The new experiment shows that, in a quantum world, two people can end up disagreeing about a seemingly irrefutable result, such as the outcome of a coin toss, suggesting something is amiss with the assumptions we make about quantum reality.

In standard quantum mechanics, a quantum system such as a subatomic particle is represented by a mathematical abstraction called the wave function. Physicists calculate how the particle’s wave function evolves with time.

But the wave function does not give us the exact value for any of the particle’s properties, such as its position. If we want to know where the particle is, the wave function’s value at any point in space and time only lets us calculate the probability of finding the particle at that point, should we choose to look. Before we look, the wave function is spread out, and it accords different probabilities for the particle being in different places. The particle is said to be in a quantum superposition of being in many places at once.
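Concretely, this is the Born rule: the probability density for finding the particle at position x at time t is the squared magnitude of the wave function,

$$ P(x,t) = |\psi(x,t)|^2, \qquad \int |\psi(x,t)|^2 \, dx = 1. $$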

More generally, a quantum system can be in a superposition of states, where “state” can refer to other properties, such as the spin of a particle. Much of the Frauchiger-Renner thought experiment involves manipulating complex quantum objects — maybe even humans — that end up in superpositions of states.

The experiment has four agents: Alice, Alice’s friend, Bob, and Bob’s friend. Alice’s friend is inside a lab making measurements on a quantum system, and Alice is outside, monitoring both the lab and her friend. Bob’s friend is similarly inside another lab, and Bob is observing his friend and the lab, treating them both as one system.

Inside the first lab, Alice’s friend makes a measurement on what is effectively a coin toss designed to come up heads one-third of the time and tails two-thirds of the time. If the toss comes up heads, Alice’s friend prepares a particle with spin DOWN, but if the toss comes up tails, she prepares the particle in a superposition of equal parts spin UP and spin DOWN.
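In the notation of the Frauchiger-Renner paper, writing |h⟩ and |t⟩ for the coin outcomes and up and down arrows for the spin, this preparation leaves the coin and particle in the joint state

$$ |\psi\rangle = \sqrt{\tfrac{1}{3}}\,|h\rangle|\downarrow\rangle + \sqrt{\tfrac{2}{3}}\,|t\rangle \otimes \frac{|\uparrow\rangle + |\downarrow\rangle}{\sqrt{2}} = \frac{|h\rangle|\downarrow\rangle + |t\rangle|\downarrow\rangle + |t\rangle|\uparrow\rangle}{\sqrt{3}}. $$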

Alice’s friend sends the particle to Bob’s friend, who measures the spin of the particle. Based on the result, Bob’s friend can now make an assertion about what Alice’s friend saw in her coin toss. If he finds the particle spin to be UP, for example, he knows the coin came up tails.

The experiment continues. Alice measures the state of her friend and her lab, treating all of it as one quantum system, and uses quantum theory to make predictions. Bob does the same with his friend and lab. Here comes the first assumption: An agent can analyze another system, even a complex one including other agents, using quantum mechanics. In other words, quantum theory is universal, and everything in the universe, including entire laboratories (and the scientists inside them), follows the rules of quantum mechanics.

This assumption allows Alice to treat her friend and the lab as one system and make a special type of measurement, which puts the entire lab, including its contents, into a superposition of states. This is not a simple measurement, and herein lies the thought experiment’s weirdness.

The process is best understood by considering a single photon that’s in a superposition of being polarized horizontally and vertically. Say you measure the polarization and find it to be vertically polarized. Now, if you keep checking to see if the photon is vertically polarized, you will always find that it is. But if you measure the vertically polarized photon to see if it is polarized in a different direction, say at a 45-degree angle to the vertical, you’ll find that there’s a 50 percent chance that it is, and a 50 percent chance that it isn’t. Now if you go back to measure what you thought was a vertically polarized photon, you’ll find there’s a chance that it’s no longer vertically polarized at all — rather, it’s become horizontally polarized. The 45-degree measurement has put the photon back into a superposition of being polarized horizontally and vertically.
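The 50-50 statistics follow from expressing the diagonal basis in terms of horizontal and vertical polarization:

$$ |45^\circ\rangle = \frac{|H\rangle + |V\rangle}{\sqrt{2}}, \qquad |\langle 45^\circ|V\rangle|^2 = \tfrac{1}{2}, \qquad |\langle H|45^\circ\rangle|^2 = \tfrac{1}{2}, $$

so the 45-degree outcome occurs half the time, and a subsequent horizontal/vertical check on the projected photon finds horizontal polarization half the time.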

This is all very fine for a single particle, and such measurements have been amply verified in actual experiments. But in the thought experiment, Frauchiger and Renner want to do something similar with complex systems.

At this stage in the experiment, Alice’s friend has already seen the coin come up either heads or tails. But Alice’s complex measurement puts the lab, friend included, into a superposition of having seen heads and tails. Given this weird state, it’s just as well that the experiment does not demand anything further of Alice’s friend.

Alice, however, is not done. Based on her complex measurement, which can come out as either YES or NO, she can infer the result of the measurement made by Bob’s friend. Say Alice got YES for an answer. She can deduce using quantum mechanics that Bob’s friend must have found the particle’s spin to be UP, and therefore that Alice’s friend got tails in her coin toss.

This assertion by Alice necessitates another assumption about her use of quantum theory. Not only does she reason about what she knows, but she reasons about how Bob’s friend used quantum theory to arrive at his conclusion about the result of the coin toss. Alice makes that conclusion her own. This assumption of consistency argues that the predictions made by different agents using quantum theory are not contradictory.

Meanwhile, Bob can make a similarly complex measurement on his friend and his lab, placing them in a quantum superposition. The answer can again be YES or NO. If Bob gets YES, the measurement is designed to let him conclude that Alice’s friend must have seen heads in her coin toss.

It’s clear that Alice and Bob can make measurements and compare their assertions about the result of the coin toss. But this involves another assumption: If an agent’s measurement says that the coin toss came up heads, then the opposite fact — that the coin toss came up tails — cannot be simultaneously true.

The setup is now ripe for a contradiction. When Alice gets a YES for her measurement, she infers that the coin toss came up tails, and when Bob gets a YES for his measurement, he infers the coin toss came up heads. Most of the time, Alice and Bob will get opposite answers. But Frauchiger and Renner showed that in 1/12 of the cases both Alice and Bob will get a YES in the same run of the experiment, causing them to disagree about whether Alice’s friend got a heads or a tails. “So, both of them are talking about the past event, and they are both sure what it was, but their statements are exactly opposite,” Renner said. “And that’s the contradiction. That shows something must be wrong.”
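The 1/12 figure can be checked directly. Below is a minimal numerical sketch in Python; the state and the YES projections follow the published protocol, while the variable names and the compression of each lab to a single two-level system are our own simplification:

```python
import numpy as np

# Numerical check of the 1/12 statistic in the Frauchiger-Renner
# experiment (each lab compressed to one two-level system).

# Coin basis |h>, |t>; spin basis |down>, |up>.
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
down, up = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Joint state after the friend's preparation:
# sqrt(1/3)|h>|down> + sqrt(2/3)|t>(|up> + |down>)/sqrt(2).
psi = (np.kron(h, down) + np.kron(t, down) + np.kron(t, up)) / np.sqrt(3)

# Alice's YES projects the coin side onto (|h> - |t>)/sqrt(2);
# Bob's YES projects the spin side onto (|down> - |up>)/sqrt(2).
yes_alice = (h - t) / np.sqrt(2)
yes_bob = (down - up) / np.sqrt(2)

# Probability that both agents answer YES in the same run.
amplitude = np.kron(yes_alice, yes_bob) @ psi
print(abs(amplitude) ** 2)  # 0.08333... = 1/12
```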

This led Frauchiger and Renner to claim that one of the three assumptions that underpin the thought experiment must be incorrect.

“The science stops there. We just know one of the three is wrong, and we cannot really give a good argument which one is violated,” Renner said. “This is now a matter of interpretation and taste.”

Fortunately, there is a wealth of interpretations of quantum mechanics, and almost all of them have to do with what happens to the wave function upon measurement. Take a particle’s position. Before measurement, we can only talk in terms of the probabilities of, say, finding the particle somewhere. Upon measurement, the particle assumes a definite location. In the Copenhagen interpretation, measurement causes the wave function to collapse, and we cannot talk of properties, such as a particle’s position, before collapse. Some physicists view the Copenhagen interpretation as an argument that properties are not real until measured.

This form of “anti-realism” was anathema to Einstein, as it is to some quantum physicists today. And so is the notion of a measurement causing the collapse of the wave function, particularly because the Copenhagen interpretation is unclear about exactly what constitutes a measurement. Alternative interpretations or theories mainly try to either advance a realist view — that quantum systems have properties independent of observers and measurements — or avoid a measurement-induced collapse, or both.

For example, the many-worlds interpretation takes the evolution of the wave function at face value and denies that it ever collapses. If a quantum coin toss can be either heads or tails, then in the many-worlds scenario, both outcomes happen, each in a different world. Given this, the assumption that there is only one outcome for a measurement, and that if the coin toss is heads, it cannot simultaneously be tails, becomes untenable. In many-worlds, the result of the coin toss is both heads and tails, and thus the fact that Alice and Bob can sometimes get opposite answers is not a contradiction.

“I have to admit that if you had asked me two years ago, I’d have said [this] just shows that many-worlds is actually a good interpretation and you should give up” the requirement that measurements have only a single outcome, Renner said.

This is also the view of the theoretical physicist David Deutsch of the University of Oxford, who became aware of the Frauchiger-Renner paper when it first appeared on arxiv.org. In that version of the paper, the authors favored the many-worlds scenario. (The latest version of the paper, which was peer reviewed and published in Nature Communications in September, takes a more agnostic stance.) Deutsch thinks the thought experiment will continue to support many-worlds. “My take is likely to be that it kills wave-function-collapse or single-universe versions of quantum theory, but they were already stone dead,” he said. “I’m not sure what purpose it serves to attack them again with bigger weapons.”

Renner, however, has changed his mind. He thinks the assumption most likely to be invalid is the idea that quantum mechanics is universally applicable.

This assumption is violated, for example, by so-called spontaneous collapse theories that argue — as the name suggests — for a spontaneous and random collapse of the wave function, but one that is independent of measurement. These models ensure that small quantum systems, such as particles, can remain in a superposition of states almost forever, but as systems get more massive, it gets more and more likely that they will spontaneously collapse to a classical state. Measurements merely discover the state of the collapsed system.
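The scaling is concrete in the original Ghirardi-Rimini-Weber proposal: each particle localizes spontaneously at a tiny rate of roughly 10⁻¹⁶ per second, but a superposition shared by N correlated particles collapses at a rate roughly N times larger,

$$ \lambda_{\mathrm{eff}} \approx N\lambda, \qquad \lambda \approx 10^{-16}\ \mathrm{s}^{-1}, $$

so a single atom stays coherent for hundreds of millions of years, while a grain with around 10²³ constituents localizes in about 10⁻⁷ seconds.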

In spontaneous collapse theories, quantum mechanics can no longer be applied to systems larger than some threshold mass. And while these models have yet to be empirically verified, they haven’t been ruled out either.

Nicolas Gisin of the University of Geneva favors spontaneous collapse theories as a way to resolve the contradiction in the Frauchiger-Renner experiment. “My way out of their conundrum is clearly by saying, ‘No, at some point the superposition principle no longer holds,’” he said.

If you want to hold on to the assumption that quantum theory is universally applicable, and that measurements have only a single outcome, then you’ve got to let go of the remaining assumption, that of consistency: The predictions made by different agents using quantum theory will not be contradictory.

Using a slightly altered version of the Frauchiger-Renner experiment, Leifer has shown that this final assumption, or a variant thereof, must go if Copenhagen-style theories hold true. In Leifer’s analysis, these theories share certain attributes, in that they are universally applicable, anti-realistic (meaning that quantum systems don’t have well-defined properties, such as position, before measurement) and complete (meaning that there is no hidden reality that the theory is failing to capture). Given these attributes, his work implies that there is no single outcome of a given measurement that’s objectively true for all observers. So if a detector clicked for Alice’s friend inside the lab, then it’s an objective fact for her, but not so for Alice, who is outside the lab modeling the entire lab using quantum theory. The results of measurements depend on the perspective of the observer.

“If you want to maintain the Copenhagen type of view, it seems the best move is towards this perspectival version,” Leifer said. He points out that certain interpretations, such as quantum Bayesianism, or QBism, have already adopted the stance that measurement outcomes are subjective to an observer.

Renner thinks that giving up this assumption entirely would destroy a theory’s ability to be effective as a means for agents to know about each other’s state of knowledge; such a theory could be dismissed as solipsistic. So any theory that moves toward facts being subjective has to re-establish some means of communicating knowledge that satisfies two opposing constraints. First, it has to be weak enough that it doesn’t provoke the paradox seen in the Frauchiger-Renner experiment. Yet it must also be strong enough to avoid charges of solipsism. No one has yet formulated such a theory to everyone’s satisfaction.

The Frauchiger-Renner experiment generates contradictions among a set of three seemingly sensible assumptions. The effort to explicate how various interpretations of quantum theory violate the assumptions has been “an extremely useful exercise,” said Rob Spekkens of the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

“This thought experiment is a great lens through which to examine the differences of opinions between different camps on the interpretation of quantum theory,” Spekkens said. “I don’t think it’s really eliminated options that people were endorsing prior to the work, but it has clarified precisely what the different interpretational camps need to believe to avoid this contradiction. It has served to clarify people’s position on some of these issues.”

Given that theoreticians cannot tell the interpretations apart, experimentalists are thinking about how to implement the thought experiment, in the hope of further illuminating the problem. But it will be a formidable task, because the experiment makes some weird demands. For example, when Alice makes a special measurement on her friend and her lab, it puts everything, the friend’s brain included, into a superposition of states.

Mathematically, this complicated measurement is the same as first reversing the time evolution of the system — such that the memory of the agent is erased and the quantum system (such as the particle the agent has measured) is brought back to its original state — and then performing a simpler measurement on just the particle, said Howard Wiseman of Griffith University in Brisbane, Australia. The measurement may be simple, but as Gisin points out rather diplomatically, “Reversing an agent, including the brain and the memory of that agent, is the delicate part.”
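Formally, this is the statement that measuring a state |Ψ⟩ in a basis rotated by the lab's internal evolution U is equivalent to undoing that evolution and then measuring in the simple basis: the outcome probabilities

$$ P(i) = \big|\langle i|\,U^\dagger\,|\Psi\rangle\big|^2 $$

describe both the rotated-basis measurement {U|i⟩} applied to |Ψ⟩ and the simple measurement {|i⟩} applied after the reversal U† has been carried out.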

Nonetheless, Gisin is not averse to thinking that maybe, one day, the experiment could be done using complex quantum computers as the agents inside the labs (acting as Alice’s friend and Bob’s friend). In principle, the time evolution of a quantum computer can be reversed. One possibility is that such an experiment will replicate the predictions of standard quantum mechanics even as quantum computers get more and more complex. But it may not. “Another alternative is that at some point while we develop these quantum computers, we hit the boundary of the superposition principle and that actually quantum mechanics is not universal,” Gisin said.

Leifer, for his part, is holding out for something new. “I think the correct interpretation of quantum mechanics is none of the above,” he said.

He likens the current situation with quantum mechanics to the time before Einstein came up with his special theory of relativity. Experimentalists had found no sign of the “luminiferous ether” — the medium through which light waves were thought to propagate in a Newtonian universe. Einstein argued that there is no ether. Instead he showed that space and time are malleable. “Pre-Einstein I couldn’t have told you that it was the structure of space and time that was going to change,” Leifer said.

Quantum mechanics is in a similar situation now, he thinks. “It’s likely that we are making some implicit assumption about the way the world has to be that just isn’t true,” he said. “Once we change that, once we modify that assumption, everything would suddenly fall into place. That’s kind of the hope. Anybody who is skeptical of all interpretations of quantum mechanics must be thinking something like this. Can I tell you what’s a plausible candidate for such an assumption? Well, if I could, I would just be working on that theory.”
