Weekly Papers on Quantum Foundations (9)

de Swart, J. G. and Bertone, G. and van Dongen, J. (2017) How dark matter came to matter. Nature Astronomy, 1 (3). 0059. ISSN 2397-3366

Authors: Abeer Al-Modlej, Salwa Alsaleh, Hassan Alshal, Ahmed Farag Ali

Virtual black holes in noncommutative spacetime are investigated using the coordinate coherent state formalism, in which the event horizon of the black hole is smeared by a Gaussian of width $\sqrt{\theta}$, where $\theta$ is the noncommutativity parameter. Proton lifetime, the main phenomenological consequence of noncommutative virtual black holes, is studied first in $4$-dimensional spacetime and then generalized to $D$ dimensions. The lifetime depends on $\theta$ and on the number of spacetime dimensions, which singles out the measurement of the proton lifetime as a potential probe of the microstructure of spacetime.

Authors: Marco Benini, Alexander Schenkel

A brief overview of the recent developments of operadic and higher categorical techniques in algebraic quantum field theory is given. The relevance of such mathematical structures for the description of gauge theories is discussed.

Authors: Robert Oeckl (CCM-UNAM)

A concise review of the derivation of the Born rule and Schrödinger equation from first principles is provided. The starting point is a formalization of fundamental notions of measurement and composition, leading to a general framework for physical theories known as the positive formalism. Consecutively adding notions of spacetime, locality, absolute time and causality recovers the well established convex operational framework for classical and quantum theory. Requiring the partially ordered vector space of states to be an anti-lattice, one obtains quantum theory in its standard formulation. This includes the Born rule and the Schrödinger equation.

Publication date: Available online 4 March 2019

Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics

Author(s): Florian J. Boge


In this paper I investigate whether the phenomenon of quantum decoherence, the vanishing of interference and detectable entanglement on quantum systems in virtue of interactions with the environment, can be understood as the manifestation of a disposition. I will highlight the advantages of this approach as a realist interpretation of the quantum formalism, and demonstrate how such an approach can benefit from advances in the metaphysics of dispositions. I will also confront some commonalities with and differences to the many worlds interpretation, and address the difficulties induced by quantum non-locality. I conclude that there are ways to deal with these issues and that the proposal hence is an avenue worth pursuing.

Curiel, Erik (2019) Schematizing the Observer and the Epistemic Content of Theories. [Preprint]
McCoy, C.D. (2019) No Chances in a Deterministic World. [Preprint]
The multiverse may be an artifact of a deeper reality that is comprehensible and unique

— Read more on ScientificAmerican.com



One of the biggest and most basic questions in physics involves the number of ways to configure the matter in the universe. If you took all that matter and rearranged it, then rearranged it again, then rearranged it again, would you ever exhaust the possible configurations, or could you go on reconfiguring forever?

Physicists don’t know, but in the absence of certain knowledge, they make assumptions. And those assumptions differ depending on the area of physics they happen to be in. In one area they assume the number of configurations is finite. In another they assume it’s infinite. For now, at least, there’s no way to tell who’s right.

But over the last couple years, a select group of mathematicians and computer scientists has been busy creating games that could theoretically settle the question. The games involve two players placed in isolation from each other. The players are asked questions, and they win if their answers are coordinated in a certain way. In all of these games, the rate at which players win has implications for the number of different ways the universe can be configured.

“There’s this philosophical question: Is the universe finite or infinite-dimensional?” said Henry Yuen, a theoretical computer scientist at the University of Toronto. “People will think this is something you can never test, but one possible way of resolving this is with a game like what William came up with.”

Yuen was referring to William Slofstra, a mathematician at the University of Waterloo. In 2016 Slofstra invented a game that involves two players who assign values to variables in hundreds of simple equations. Under normal circumstances even the most cunning players will sometimes lose. But Slofstra proved that if you give them access to an infinite amount of an unorthodox resource — entangled quantum particles — it becomes possible for the players to win this game all the time.

Other researchers have since refined Slofstra’s result. They’ve proved that you don’t need a game with hundreds of questions to reach the same conclusion Slofstra did. In 2017 three researchers proved that there are games with just five questions that can be won 100 percent of the time if the players have access to an unlimited number of entangled particles.

These games are all modeled on games invented more than 50 years ago by the physicist John Stewart Bell. Bell developed the games to test one of the strangest propositions about the physical world made by the theory of quantum mechanics. A half-century later, his ideas may turn out to be useful for much more than that.

Magic Squares

Bell came up with “nonlocal” games, which require players to be at a distance from each other with no way to communicate. Each player answers a question. The players win or lose based on the compatibility of their answers.

One such game is the magic square game. There are two players, Alice and Bob, each with a 3-by-3 grid. A referee tells Alice to fill out one particular row in the grid — say the second row — by putting either a 1 or a 0 in each box, such that the sum of the numbers in that row is odd. The referee tells Bob to fill out one column in the grid — say the first column — by putting either a 1 or a 0 in each box, such that the sum of the numbers in that column is even. Alice and Bob win the game if Alice’s numbers give an odd sum, Bob’s give an even sum, and — most important — they’ve each written down the same number in the one square where their row and column intersect.

Here’s the catch: Alice and Bob don’t know which row or column the other has been asked to fill out. “It’s a game that would be trivial for the two players if they could communicate,” said Richard Cleve, who studies quantum computing at the University of Waterloo. “But the fact that Alice doesn’t know what question Bob was asked and vice versa means it’s a little tricky.”

In the magic square game, and other games like it, there doesn’t seem to be a way for the players to win 100 percent of the time. And indeed, in a world completely explained by classical physics, 89 percent is the best Alice and Bob could do.
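The classical 89 percent figure can be checked directly by brute force. Here is a minimal Python sketch (my own, not from the article) that enumerates every deterministic classical strategy: Alice commits in advance to an odd-sum filling for each row, Bob to an even-sum filling for each column, and we take the best win rate over all 9 possible referee questions.

```python
from itertools import product

# All 3-bit fillings with odd sum (valid row answers for Alice)
# and even sum (valid column answers for Bob).
odd = [b for b in product((0, 1), repeat=3) if sum(b) % 2 == 1]
even = [b for b in product((0, 1), repeat=3) if sum(b) % 2 == 0]

def win_rate(alice, bob):
    # alice[r] is Alice's filling of row r; bob[c] is Bob's filling of
    # column c. They win question (r, c) if they agree on the shared cell.
    wins = sum(alice[r][c] == bob[c][r] for r in range(3) for c in range(3))
    return wins / 9

# Maximize over all 64 x 64 deterministic strategy pairs.
best = max(win_rate(a, b)
           for a in product(odd, repeat=3)
           for b in product(even, repeat=3))
print(best)  # 0.888... = 8/9, the classical bound
```

No deterministic pair of strategies wins all 9 questions, and randomizing over deterministic strategies cannot beat the best one, so 8/9 ≈ 89 percent is the classical ceiling.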

But quantum mechanics — specifically, the bizarre quantum phenomenon of “entanglement” — allows Alice and Bob to do better.

In quantum mechanics, the properties of fundamental particles like electrons don’t exist until the moment you measure them. Imagine, for example, an electron moving rapidly around the circumference of a circle. To find its position you perform a measurement. But prior to the measurement, the electron has no definite position at all. Instead, the electron is characterized by a mathematical formula expressing the likelihood that it’s in any given position.

When two particles are entangled, the complex probability amplitudes that describe their properties are intertwined. Imagine two electrons that were entangled such that if a measurement identifies the first electron in one position around the circle, the other must occupy a position directly across the circle from it. This relationship between the two electrons holds when they’re right next to each other and when they’re light-years apart: Even at that distance, if you measure the position of one electron, the position of the other becomes instantly determined, even though no causal event has passed between them.

The phenomenon seems preposterous because there’s nothing about our non-quantum-scale experience to suggest such a thing is possible. Albert Einstein famously derided entanglement as “spooky action at a distance” and argued for years that it couldn’t be true.

To implement a quantum strategy in the magic square game, Alice and Bob each take one of a pair of entangled particles. To determine which numbers to write down, they measure properties of their particles — almost as if they were rolling correlated dice to guide their choice of answers.

What Bell calculated, and what many subsequent experiments have shown, is that by exploiting the strange quantum correlations found in entanglement, players of games like the magic square game can coordinate their answers with greater exactness and win the game more than 89 percent of the time.

Bell came up with nonlocal games as a way to show that entanglement was real, and that our classical view of the world was incomplete — a conclusion that was very much up for grabs in Bell’s time. “Bell came up with this experiment you could do in a laboratory,” Cleve said. If you recorded higher-than-expected success rates in these experimental games, you’d know the players had to be exploiting some feature of the physical world not explained by classical physics.

What Slofstra and others have done since then is similar in strategy, but different in scope. They’ve shown that not only do Bell’s games imply the reality of entanglement, but some games have the power to imply a whole lot more — like whether there is any limit to the number of configurations the universe can take.

More Entanglement, Please

In his 2016 paper Slofstra proposed a kind of nonlocal game involving two players who provide answers to simple questions. To win, they have to give responses that are coordinated in a certain way, as in the magic square game.

Imagine, for example, a game that involves two players, Alice and Bob, who have to match socks from their respective sock drawers. Each player has to choose a single sock, without any knowledge of the sock the other has chosen. The players can’t coordinate ahead of time. If their sock choices form a matching pair, they win.

Given these uncertainties it’s unclear which socks Alice and Bob should pick in the morning — at least in a classical world. But if they can employ entangled particles they have a better chance of matching. By basing their color choice on the results of measurements of a single pair of entangled particles they could coordinate along that one attribute of their socks.

Yet they’d still be guessing blindly about all the other attributes — whether they were wool or cotton, ankle-height or crew. But with additional entangled particles they’d get access to more measurements. They could use one set to correlate their choice of material and another to correlate their choice of sock height. In the end, because they were able to coordinate their choices for many attributes, they’d be more likely to end up with a matching pair than if they’d only been able to coordinate for one.

“More complicated systems allow for more correlated measurements, which enable coordination at more complicated tasks,” Slofstra said.

The questions in Slofstra’s game aren’t really about socks. They involve sums of variables, such as a + b + c and b + c + d. Alice can make the value of each variable either 1 or 0 (and the values have to remain consistent across the equations — b has to have the same value in every equation where it appears). And each of her sums has to equal a specified number.

Bob is given just one of Alice’s variables, say b, and asked to assign a value to it: 0 or 1. The players win if they both assign the same value to whichever variable Bob is given.

If you and a friend were to play this game, there’s no way you could win it all the time. But with the aid of a pair of entangled particles, you could win more consistently, as in the sock game.

Slofstra was interested in understanding whether there is an amount of entanglement past which a team’s winning probability stops increasing. Perhaps players could achieve an optimal strategy if they shared five pairs of entangled particles, or 500. “We’d hoped you could say, ‘You need this much entanglement to play it optimally,’” Slofstra said. “That’s not what is true.”

He found that adding more pairs of entangled particles always increased the winning percentage. Moreover, if you could somehow exploit an infinite number of entangled particles, you would be able to play the game perfectly, winning 100 percent of the time. This clearly can’t be done in a game with socks — ultimately you’d run out of sock features to coordinate. But as Slofstra’s game has made clear, the universe can be far knottier than a sock drawer.

Is the Universe Infinite?

Slofstra’s result came as a shock. Eleven days after his paper appeared, the computer scientist Scott Aaronson wrote that Slofstra’s result touches “on a question of almost metaphysical significance: namely, what sorts of experimental evidence could possibly bear on whether the universe was discrete or continuous?”

Aaronson was referring to the different states the universe can take — where a state is a particular configuration of all the matter within it. Every physical system has its own state space, which is an index of all the different states it can take.

Researchers talk about a state space as having a certain number of dimensions, reflecting the number of independent characteristics you can adjust in the underlying system.

For example, even a sock drawer has a state space. Any sock might be described by its color, its length, its material, and how raggedy and worn it is. In this case, the dimension of the sock drawer’s state space is four.

A deep question about the physical world is whether there’s a limit to the size of the state space of the universe (or of any physical system). If there is a limit, it means that no matter how large and complicated your physical system is, there are still only so many ways it can be configured. “The question is whether physics allows there to be physical systems that have an infinite number of properties that are independent of each other that you could in principle observe,” said Thomas Vidick, a computer scientist at the California Institute of Technology.

The field of physics is undecided on this point. In fact, it maintains two contradictory views.

On the one hand, students in an introductory quantum mechanics course are taught to think in terms of infinite-dimensional state spaces. If they model the position of an electron moving around a circle, for instance, they’ll assign a probability to each point on the circle. Because there are infinite points, the state space describing the electron’s position will be infinite-dimensional.

“In order to describe the system we need a parameter for every possible position the electron can be in,” Yuen said. “There are infinitely many positions, so you need infinitely many parameters. Even in one-dimensional space, the state space of the particle is infinite-dimensional.”

But perhaps the idea of infinite-dimensional state spaces is nonsense. In the 1970s, the physicists Jacob Bekenstein and Stephen Hawking calculated that a black hole is the most complicated physical system in the universe, yet even its state can be specified by a huge but finite number of parameters — approximately 10^69 bits of information per square meter of the black hole’s event horizon. This number — the “Bekenstein bound” — suggests that if a black hole doesn’t require an infinite-dimensional state space, then nothing does.
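The order of magnitude is easy to reproduce. A back-of-the-envelope Python check (my own, not from the article), assuming the Bekenstein–Hawking entropy S = A / (4 l_P²) in natural units and converting nats to bits:

```python
import math

l_p = 1.616255e-35  # Planck length in meters (CODATA value)
area = 1.0          # one square meter of event horizon

# Bekenstein-Hawking entropy: one quarter of the horizon area
# in Planck units, naturally expressed in nats.
entropy_nats = area / (4 * l_p ** 2)
entropy_bits = entropy_nats / math.log(2)

print(f"{entropy_bits:.2e}")  # ~1.4e69 bits per square meter
```

The result, about 1.4 × 10^69 bits per square meter, matches the figure quoted above.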

These competing perspectives on state spaces reflect fundamentally different views about the nature of physical reality. If state spaces are truly finite-dimensional, this means that at the smallest scale, nature is pixelated. But if electrons require infinite-dimensional state spaces, physical reality is fundamentally continuous — an unbroken sheet even at the finest resolution.

So which is it? Physics hasn’t devised an answer, but games like Slofstra’s could, in principle, provide one. Slofstra’s work suggests a way to test the distinction: Play a game that can only be won 100 percent of the time if the universe allows for infinite-dimensional state spaces. If you observe players winning every time they play, it means they’re taking advantage of the kinds of correlations that can only be generated through measurements on a physical system with an infinite number of independently tunable parameters.

“He gives an experiment such that, if it can be realized, then we conclude the system that produced the statistics that were observed must have infinite degrees of freedom,” Vidick said.

There are barriers to actually carrying out Slofstra’s experiment. For one thing, it’s impossible to certify any laboratory result as occurring 100 percent of the time.

“In the real world you’re limited by your experimental setup,” Yuen said. “How do you distinguish between 100 percent and 99.9999 percent?”

But practical considerations aside, Slofstra has shown that there is, mathematically at least, a way of assessing a fundamental feature of the universe that might otherwise have seemed beyond our ken. When Bell first came up with nonlocal games, he hoped that they’d be useful for probing one of the most beguiling phenomena in the universe. Fifty years later, his invention has proved to have even more depth than that.


Thebault, Karim P Y (2019) The Problem of Time. [Preprint]
McCoy, C.D. (2018) Did Universe Have a Chance? In: UNSPECIFIED.

In 2012, particles smashed together in the Large Hadron Collider’s 27-kilometer circular tunnel conjured up the Higgs boson — the last missing particle predicted by the Standard Model of particle physics, and the linchpin that holds that decades-old set of equations together.

But no other new particles have materialized at the LHC, leaving open many mysteries about the universe that the Standard Model doesn’t address. A debate has ensued over whether to build an even more enormous successor to the LHC — a proposed machine 100 kilometers in circumference, possibly in Switzerland or China — to continue the search for new physics.

Physicists say there’s much we can still learn from the Higgs boson itself. What’s known is that the particle’s existence confirms a 55-year-old theory about the origin of mass in the universe. Its discovery won the 2013 Nobel Prize for Peter Higgs and François Englert, two of six theorists who proposed this mass-generating mechanism in the 1960s. The mechanism involves a field permeating all of space. The Higgs particle is a ripple, or quantum fluctuation, in this Higgs field. Because quantum mechanics tangles up the particles and fields of nature, the presence of the Higgs field spills over into other quantum fields; it’s this coupling that gives their associated particles mass.

But physicists understand little about the omnipresent Higgs field, or the fateful moment in the early universe when it suddenly shifted from having zero value everywhere (or in other words, not existing) into its current, uniformly valued state. That shift, or “symmetry-breaking” event, instantly rendered quarks, electrons and many other fundamental particles massive, which led them to form atoms and all the other structures seen in the cosmos.

But why? “Why should the universe decide to have this Higgs presence all over? That is a big, big question,” said Michelangelo Mangano, a particle theorist at CERN, the laboratory that houses the LHC.

Physicists wonder whether the Higgs symmetry-breaking event had a role in creating the universe’s matter-antimatter asymmetry — the unexplained fact that so much more matter exists than antimatter. Another question is whether the Higgs field’s current value is stable or could suddenly change again — an unsettling prospect known as “vacuum decay.” The value of the Higgs field can be thought of as a ball settled at the bottom of a valley. The question is, are there yet deeper valleys in the mathematical curve that defines the field’s possible values? If so, the ball will eventually tunnel to the lower, more stable valley, corresponding to a drop in the energy of the Higgs field. A bubble of the more stable “true vacuum” would grow and encompass the “false vacuum” that we’ve been living in, obliterating everything.

Not only is the Higgs field tied to the origin and fate of the universe, but the Higgs particle’s behavior can also reveal hidden or otherwise unknown particles that it interacts with — perhaps those that make up the cosmos’s missing dark matter. In a particle collider, when particles smash together at nearly light speed, their kinetic energy converts into matter, occasionally forming heavy particles such as the Higgs boson. This Higgs then quickly morphs into other particles, such as a pair of top quarks or W bosons, where the probability of each outcome depends on the strength of the Higgs’ coupling to each type of particle. Precisely measuring the probabilities of these different Higgs decays and comparing the numbers to Standard Model predictions reveals whether anything is missing, since the probabilities must add up to one.
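The unitarity argument is simple bookkeeping. A sketch with illustrative, made-up branching fractions (roughly Standard-Model-like, not actual measurements): if the measured visible decay probabilities fall short of one, the deficit bounds how often the Higgs decays into unseen particles.

```python
# Hypothetical branching fractions for illustration only - NOT real data.
measured_visible = {
    "bb": 0.58,      # bottom quark pair
    "WW": 0.21,      # W boson pair
    "gg": 0.08,      # gluon pair
    "tautau": 0.06,  # tau lepton pair
    "cc": 0.03,      # charm quark pair
    "ZZ": 0.026,     # Z boson pair
}

visible_total = sum(measured_visible.values())

# Probabilities must sum to 1; any shortfall is room for
# decays into hidden or undetected particles.
invisible = 1.0 - visible_total
print(f"room for unseen decays: {invisible:.3f}")  # prints 0.014
```

With real data, each measured fraction carries an uncertainty, so the interesting question is whether the deficit is statistically distinguishable from zero — which is exactly where a higher-precision collider would help.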

“The more we study the Higgs, the more we may find that this whole story might not fit together exactly as we expect, which would lead to new physics,” said Melissa Franklin, a particle physicist at Harvard University. “From an experimental point of view, we just want to make a bunch of them and see what happens.”

That’s one reason she and many of her colleagues want to build a bigger, better machine. The first phase of the proposed supercollider has been nicknamed the “Higgs factory,” because the machine would collide electrons and positrons with energies precisely tuned to maximize their chance of yielding Higgs bosons, whose subsequent decays could be measured in detail. In phase two, the giant machine would slam together protons, resulting in messier but much more energetic collisions.

With the LHC, most of the Higgs boson’s couplings with other Standard Model particles have been measured with roughly 20 percent precision, but a future collider, by producing many more Higgs bosons, could pin the numbers down with an accuracy of 1 percent. This would give physicists a much better sense of whether the probabilities add to one, or whether Higgs bosons are occasionally decaying into hidden particles. Extra particles coupled to the Higgs appear in many theories of physics beyond the Standard Model, including the “twin Higgs” and “relaxion” models. “Unfortunately, there are so many models and so many parameters that there is no hope of a no-lose theorem,” said the particle physicist Matt Strassler — “just a might-win opportunity.”

Perhaps the most important coupling that physicists want to nail down is called the triple Higgs coupling — essentially the strength of the Higgs boson’s interaction with itself. This number is measured by counting rare events, not yet seen at the LHC, in which a Higgs boson decays into two of itself. The Standard Model makes a prediction for the value of the triple Higgs coupling, so any measured deviations from this prediction would signify the existence of new particles not included in the Standard Model that affect the Higgs.

Measuring the triple Higgs coupling would also reveal the shape of the mathematical curve that defines the Higgs field’s different possible values, helping to determine whether the vacuum of our universe is stable or only metastable — settled in a local rather than a global minimum of the curve. If the Standard Model’s prediction for the coupling is correct, then the universe is metastable, destined to decay billions or trillions of years from now. This is nothing to worry about, but rather an important clue about the larger story of our cosmos. The ability to reveal the universe’s fate is why the triple Higgs coupling “is at the heart of the experimental program at the future colliders,” said Cédric Weiland, a particle physicist at the University of Pittsburgh who has studied this coupling.

With a Higgs factory, Weiland said, physicists could measure the triple Higgs coupling with a precision of 44 percent. The second-phase proton-proton collider could nail its value to within 5 percent.

The baseline expectation is that measurements at a future collider will simply confirm the Standard Model, which seems frustratingly unbreakable even as it gives an incomplete account of the physical universe. Some physicists balk at the prospect of investing billions of dollars in a machine that might simply add more decimal places of precision to our knowledge of an existing set of equations.

Physicists and funding agencies will actively debate the value of an LHC successor over the next few years. Whether to spend 20 years and as many billions of dollars constructing a 100-kilometer-circumference collider hinges on its discovery potential. Past colliders struck upon the puzzle pieces of the Standard Model one by one. But with that puzzle complete, there’s no guarantee that a future machine will find anything new, leaving physicists with a dilemma: to build or not to build?


Lazarovici, Dustin (2019) On the measurement process in Bohmian mechanics (reply to Gao). [Preprint]

Aharonov–Bohm interference of fractional quantum Hall edge modes

Aharonov–Bohm interference of fractional quantum Hall edge modes, Published online: 04 March 2019; doi:10.1038/s41567-019-0441-8

An interferometer device demonstrates the interference of fractional quantum Hall effect edge states. This is a big step towards braiding non-Abelian anyons.
