Sitting and experiencing gravitational redshift isn’t enough for decoherence

 Luboš Motl, June 19, 2015

Most of the contemporary research into “foundations of quantum mechanics” isn’t good, if I kindly and generously avoid the word “šit”, and a recently celebrated 2013 paper

Universal decoherence due to gravitational time dilation

by Igor Pikovski, Magdalena Zych, Fabio Costa, and Časlav Brukner that just got published in Nature Physics (even though it has only collected 6-9 citations in those 18 months) isn’t much better. Well, maybe it is slightly better than the worst papers in this category.

It was enough to win some praise in Nude Socialist (probably because they mindlessly view Nature Physics as a stamp proving quality; “Einstein kills Schrödinger’s cat: Relativity ruins quantum world” is a really, really painful and completely wrong title) as well as a more sensible response at Backreaction.

Just like in all other cases, anti-quantum zeal is the main driving force behind similar papers. In particular, these people just don’t like Schrödinger’s cat. Much like Schrödinger himself, they believe that the linear superpositions of macroscopically distinct states (e.g. an alive cat and a dead cat) must be banned or “absurd”, as Schrödinger said. What these folks and especially the inkspillers in assorted Nude Socialists seem incapable of getting is the fact that Schrödinger was completely wrong. The superpositions are always as “allowed” as the pure vectors we started with. There is nothing “absurd” about them.

This rule about the “legality” of all the superpositions is the superposition postulate of quantum mechanics and it holds universally and everywhere in the Universe. However, the superpositions don’t mean anything absurd. When we’re only interested in the question whether the cat is alive or dead, the cat is alive or dead, not alive and dead at the same moment. The options “alive” and “dead” are orthogonal to one another which means, according to the quantum mechanical formalism, mutually exclusive.
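
To see that nothing “absurd” is hiding in the formalism, it is enough to write the superposition down explicitly (a minimal illustration in my own notation):

\[ \ket{\psi} = a\,\ket{{\rm alive}} + b\,\ket{{\rm dead}}, \qquad \langle {\rm alive} | {\rm dead} \rangle = 0, \qquad |a|^2 + |b|^2 = 1. \]

A measurement of the cat’s health yields “alive” with probability \(|a|^2\) or “dead” with probability \(|b|^2\); the orthogonality of the two options is exactly the statement that they are mutually exclusive outcomes, never simultaneous ones.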

The authors of the preprint are more reasonable, so they sort of know what I wrote in the previous sentence. But they still want to believe that the superpositions must be impossible at least “effectively”. That’s why they decided that something about the gravitational field must be enough to make all the superpositions at least unphysical or undetectable. Decoherence is the process by which the information about the relative quantum phases gets irreversibly forgotten.

And these authors want us to believe that an object’s just “sitting” in the gravitational field described by the laws of general relativity – with the gravitational redshift (clocks tick more slowly in the depths of the gravitational field) – is enough for the relative quantum phases between different positions of the object to be decohered, i.e. irreversibly forgotten. Except that this claim isn’t right.

Their specific thought experiment involves \(N\) harmonic oscillators with different frequencies \(\omega_i\) that are sitting in the gravitational field. They actually group them into \(N/3\) three-dimensional oscillators but it makes no important difference. These oscillators should be thought of as “springs” in some crystal, or an object like that, and this object has one set of center-of-mass degrees of freedom. Most importantly, this object carries the collective coordinate, the height \(x\).

So far so good. What is the Hamiltonian? In their approximation, the Hamiltonian is

\[ H = H_{cm} + mgx + H_0 + H_{int} \]
\[ H_0 =\sum_{i=1}^N \hbar\omega_i n_i \]
\[ H_{int} = \frac{\hbar gx}{c^2} \left( \sum_{i=1}^N \omega_i n_i \right) \]

OK, the center-of-mass Hamiltonian \(H_{cm}\) may include just things like \(p^2/2m\) for the whole object. There is the overall gravitational potential energy \(mgx\) for the whole object in the Hamiltonian, too. The previous two sentences describe the “collective” degrees of freedom. \(H_0\) describes the total energy of the \(N\) quantum harmonic oscillators – the “internal” degrees of freedom.

\(H_{int}\) is the only part of the Hamiltonian that “couples” the collective degrees of freedom with the internal ones. This term arises due to general relativity. In general relativity, the gravitational potential measures the degree to which time is slowed down in the gravitational field. This means that the influence of gravity may be represented by scaling all frequencies to

\[ \omega_i \to \omega_i \left( 1+ \frac{\Phi(x)}{c^2} \right) \] and they use the first nontrivial approximation for the gravitational potential in the Earth’s gravitational field, namely \(\Phi(x)=gx\).
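
To make the structure of the Hamiltonian concrete, here is a minimal numerical sketch of a toy version of it. The collective coordinate is truncated to two fixed heights and the \(N\) oscillators to a single truncated oscillator; all units and parameter values are my illustrative choices, not numbers from the paper:

```python
import numpy as np

# Toy Hamiltonian: collective coordinate x frozen to two heights x_1, x_2
# and the internal sector truncated to one oscillator with n_max levels.
hbar = c = g = 1.0          # natural units, illustration only
m, omega = 1.0, 1.0
x_vals = np.array([0.0, 0.5])              # the two heights
n_max = 5
n_op = np.diag(np.arange(n_max))           # oscillator number operator

H = np.zeros((2 * n_max, 2 * n_max))
for j, x in enumerate(x_vals):
    P = np.zeros((2, 2)); P[j, j] = 1.0    # projector onto the height x_j
    # mgx plus the redshifted internal energy hbar*omega*(1 + gx/c^2)*n
    H += np.kron(P, m * g * x * np.eye(n_max)
                    + hbar * omega * (1.0 + g * x / c**2) * n_op)
```

The kinetic term \(p^2/2m\) from \(H_{cm}\) is dropped because the two heights are held fixed in this sketch; everything below only needs the fact that \(H\) is diagonal in \(x\) while the internal frequencies depend on \(x\).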

Great. Now, their strategy is obvious. They exploit the interaction term \(H_{int}\) as an analogy of the usual interaction terms between the observed system’s degrees of freedom and the environment. Because of this interaction of the observed system – the collective position \(x\) – with the environment – all the internal excitations of the harmonic oscillators – the different values of \(x\) decohere from each other.

The magnitude of the off-diagonal element \(\rho_{12}\) of the reduced density matrix for the positions (which they refer to as the “visibility”) goes like

\[ V(t)\sim |\rho_{12}| \sim \exp[-(t/\tau_{dec})^2] \] with the decoherence time scale

\[ \tau_{dec} = \sqrt{\frac{2}{N}} \frac{\hbar c^2}{k_B T g \Delta x}. \] The stronger the gravitational field is, the more quickly the positions decohere. The higher the temperature is (why temperature and which temperature? I will discuss it below), the more quickly the system decoheres. The further the positions we want to tell apart are from each other, \(\Delta x\), the more quickly they decohere. And the larger the number of harmonic oscillators, the more quickly the states decohere.
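
To get a feel for the scales, one may just plug numbers into their formula; the parameter values below (a roughly gram-sized object with \(N\sim 10^{23}\) constituents, room temperature, a micrometer-sized superposition) are my illustrative choices:

```python
import numpy as np

# SI constants
hbar = 1.054571817e-34     # J*s
c    = 2.99792458e8        # m/s
k_B  = 1.380649e-23        # J/K
g    = 9.81                # m/s^2

def tau_dec(N, T, dx):
    """tau_dec = sqrt(2/N) * hbar * c^2 / (k_B * T * g * dx)"""
    return np.sqrt(2.0 / N) * hbar * c**2 / (k_B * T * g * dx)

print(tau_dec(N=1e23, T=300.0, dx=1e-6))   # about 1e-3 seconds
```

With these inputs the formula gives a “decoherence time” of roughly a millisecond, which is why the effect was advertised as relevant for everyday macroscopic objects.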

All of this sounds simple and is backed by a calculation that looks OK. However, an assumption of the calculation is conceptually flawed, which also means that the main conclusion – the interpretation of the paper – is completely wrong. Why?

We have to mention two more details about their calculation.

The first “detail” is that they assign a thermal state (density matrix) of temperature \(T\) to all the internal degrees of freedom, the harmonic oscillators.

The second “detail” is that they just mindlessly “trace over” all the internal degrees of freedom at every moment.

The first “detail” makes it obvious that you get some loss of interference effects, at least in practice. If you make a system \(x\) – like the center-of-mass altitude of the object in this thought experiment – interact with a thermal heat bath, the thermal heat bath will tend to spoil the well-defined phases describing \(x\). But is it genuine decoherence?

It’s not, because the second “detail” is simply a mistake. These authors haven’t understood the conditions that are required to make “tracing over” legitimate.

What do we mean by “tracing over”? A density matrix of a composite system \(AB\), \(\rho_{AB}\), may be described in a basis that makes the compositeness of the system – i.e. the tensor-product character of the Hilbert space – manifest. The basis vectors of the Hilbert space of \(AB\) may be written as \(\ket{a_i}\otimes \ket{b_K}\) where \(\ket{a_i}\) are the basis vectors for \(A\) and \(\ket{b_K}\) are the basis vectors for \(B\).

When we decompose the system in this way, the indices \(a,b\) in the matrix elements \(\rho_{ab}\) of the density matrix may be represented as pairs of indices: \(a\to iK\), \(b\to jL\). So the matrix elements of \(\rho\) are actually \(\rho_{iK,jL}\) where \(i,j\) are the indices describing the Hilbert space of \(A\) while \(K,L\) are the indices describing the states of \(B\).

The reduced matrix for \(A\) may be computed by tracing over the \(B\)-type indices

\[ \rho^{(A)}_{ij} = \sum_K \rho_{iK,jK}. \] We have made \(K=L\) and summed over it – the usual activity we describe as a “trace”. But it is only a “partial” trace because we have only identified and summed over 2/4 = 1/2 of the density matrix’s indices. Now, the remaining reduced matrix \(\rho^{(A)}\) only has the \(A\)-type indices \(i,j\). Mathematically, the operation is clear. But as a physicist, you must ask: When is this operation useful or legitimate in a process that still gives you useful results?
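
Because the partial trace is nothing more than this index gymnastics, it fits in a few lines of code; the following sketch (with a maximally entangled qubit pair as my example input) just implements the sum above:

```python
import numpy as np

def partial_trace_B(rho_AB, dim_A, dim_B):
    """Reduced matrix rho^(A)_{ij} = sum_K rho_{iK,jK}."""
    rho = rho_AB.reshape(dim_A, dim_B, dim_A, dim_B)  # indices (i, K, j, L)
    return np.einsum('iKjK->ij', rho)                 # set K = L and sum over it

# Example: (|00> + |11>)/sqrt(2), a pure state of the composite system AB
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_AB = np.outer(psi, psi.conj())
print(partial_trace_B(rho_AB, 2, 2))   # 0.5 * identity: maximally mixed
```

The example already shows the punchline of all decoherence discussions: a perfectly pure state of the whole may look maximally mixed if you only keep one factor of the tensor product.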

The answer is that you may only trace over the \(B\)-type indices or degrees of freedom if they describe some information that will never controllably influence any of your future measurements. In that case, \(\rho^{(A)}\) is enough instead of the full \(\rho=\rho^{(AB)}\) to calculate the predictions for all the measurements you will make.

In the case of decoherence, this condition is satisfied because we’re tracing over things like the properties of some radiation that quickly escapes to “infinity” and we can never catch it and measure it again. Not only that. The radiation will never be able to influence the system whose behavior we want to predict and observe. That’s why we are allowed to trace over it!

The main technical reason that makes the interpretation by Pikovski et al. wrong is that this condition simply isn’t satisfied in their setup.

What actually happens when they introduce the interaction term \(H_{int}\) between the “collective altitude” \(x\) and the internal harmonic oscillators which are in the thermal state is that the influences between these two groups of degrees of freedom, the collective and the internal ones, will go in both directions.

So while the system could have been in a pure state of the collective position to start with i.e. the initial density matrix was

\[ \rho_{AB} = \ket{\psi_A} \bra{\psi_A} \otimes \rho_{B,\rm internal}, \] this will no longer be the case later. Even the density matrix for \(A\) i.e. \(x\) will become mixed as \(x\) interacts with the internal degrees of freedom that were mixed to start with. It’s obvious: \(A\) i.e. \(x\) gets contaminated by the non-pure state of the harmonic oscillators, thanks to the mutual interactions.
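
One may watch this contamination happen in a small simulation. The sketch below (toy units, my parameter choices) starts with exactly this product state and tracks the purity \({\rm Tr}\,[\rho^{(A)}]^2\); only the interaction term is kept, since the \(mgx\) and \(H_0\) pieces are diagonal in the same basis and only contribute phases that don’t change the purity:

```python
import numpy as np
from scipy.linalg import expm

# Toy model: two heights for A, one thermal oscillator with n_max levels for B.
hbar = c = g = 1.0
omega, beta, n_max = 1.0, 1.0, 20
x_vals = np.array([0.0, 0.5])

n = np.arange(n_max)
p = np.exp(-beta * hbar * omega * n); p /= p.sum()
rho_B = np.diag(p)                                  # thermal internal state

psi_A = np.array([1.0, 1.0]) / np.sqrt(2.0)         # pure superposition of heights
rho_AB = np.kron(np.outer(psi_A, psi_A), rho_B)     # the product state above

# Interaction only: each height redshifts the oscillator by its own factor
H = np.zeros((2 * n_max, 2 * n_max))
for j, x in enumerate(x_vals):
    P = np.zeros((2, 2)); P[j, j] = 1.0
    H += np.kron(P, hbar * omega * (g * x / c**2) * np.diag(n))

def purity_A(t):
    U = expm(-1j * H * t / hbar)
    rho_t = U @ rho_AB @ U.conj().T
    rho_A = np.einsum('iKjK->ij', rho_t.reshape(2, n_max, 2, n_max))
    return np.trace(rho_A @ rho_A).real

for t in (0.0, 2.0, 6.0):
    print(t, purity_A(t))    # starts at 1, then drops below 1: A got mixed
```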

This is the part of the interaction that Pikovski et al. realize and maximally exploit. This contamination is what gives them the decrease of the off-diagonal elements, the loss of coherence.

However, what they don’t realize or acknowledge is the second part of the interaction! The non-thermal state of the degree of freedom \(A\) i.e. \(x\) also implies, thanks to the interactions with the internal harmonic oscillators, that those harmonic oscillators won’t be in a thermal state later, either! The purity and thermality will be transferred from one part of the system to the other and vice versa – just like the energy is oscillating between two mutually coupled oscillators. Back and forth. They saw and hyped the “back” but overlooked the “forth”.

And this non-thermality of the harmonic oscillators at later times will influence the actual evolution of the density matrix for \(A\) i.e. \(x\). For this reason, the actual off-diagonal element \(\rho_{12}\) for two values of the center-of-mass position \(x\) will not uniformly decrease to zero. It will kind of oscillate. To calculate the true evolution, you need the full density matrix \(\rho^{(AB)}\); the partial trace \(\rho^{(A)}\) just isn’t enough for these purposes. The oscillations may become very complicated if \(N\) is large and the frequencies \(\omega_i\) are sufficiently generic. But the off-diagonal element \(\rho_{12}\) will always tend to return to the initial large values after some time.
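
The revival is easy to exhibit explicitly for a single oscillator: since the Hamiltonian is diagonal in \(x\), the off-diagonal element only picks up phases, \(\rho_{12}(t) \propto \sum_n p_n e^{-i\omega(g\Delta x/c^2)nt}\), which returns to its full magnitude at \(t_{rev}=2\pi c^2/(\omega g\Delta x)\). A sketch in my toy units:

```python
import numpy as np

# Visibility |rho_12(t)| of one thermal oscillator that is redshifted
# differently at the two heights; toy units, parameters chosen by me.
hbar = c = g = 1.0
omega, dx, beta, n_max = 1.0, 0.5, 1.0, 100

n = np.arange(n_max)
p = np.exp(-beta * hbar * omega * n); p /= p.sum()
d_omega = omega * g * dx / c**2          # frequency mismatch between heights

def visibility(t):
    return abs(np.sum(p * np.exp(-1j * d_omega * n * t)))

t_rev = 2.0 * np.pi / d_omega            # full revival time
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t = {f * t_rev:7.3f}   V = {visibility(f * t_rev):.6f}")
# V dips at intermediate times but comes back to 1 exactly at t = t_rev
```

With many mutually incommensurate frequencies \(\omega_i\), the dips get deeper and the full revivals get pushed to enormous times, which is exactly why the dephasing can masquerade as decoherence; but quasi-periodic it remains.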

There can’t be any uniform decrease here because no degrees of freedom are being “forgotten”, like the property of the radiation that escapes to infinity in the proper decoherence. Their result indicating the monotonic decrease is wrong because it’s based on their incorrect assumption that the internal degrees of freedom will always remain in the thermal state (or that they will never influence \(x\) again).

The fact that if calculated properly, \(\rho_{12}\) just cannot go monotonically to zero is absolute common sense. It just means that the evolution of a closed system in quantum mechanics is and has to be unitary. Pure states evolve into pure states. And this is a closed system. It doesn’t conceptually matter that it has many degrees of freedom.

After all, their assumption of \(N \to \infty\) is nothing else than a part of their trick, a method to spread fog. A way to make you overlook that their qualitative conclusion is obviously wrong. If you look at the formula for the decoherence time, it goes like \(1/\sqrt{N}\). So their “decoherence” becomes faster for a large \(N\) but they surely do claim that in principle, their recipe works for a small value of \(N\) or even \(N=1\), too.

But just think about a single harmonic oscillator in the gravitational field. Or a single hydrogen atom, which is qualitatively analogous because it has some internal degrees of freedom, too. It is totally obvious that you won’t get any decoherence here. (We would have to discuss the forces keeping the hydrogen atom at a fixed \(x\), rather than freely falling, but imagine that it’s some electromagnetic force.) Instead, you may describe the hydrogen atom e.g. in terms of a 4-dimensional Hilbert space that allows two internal energy levels of the atom and two center-of-mass positions. The Hamiltonian becomes a \(4\times 4\) matrix and the evolution of all pure and mixed states is a combination of oscillations. No monotonic decrease of anything takes place.
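
Here is a sketch of this four-dimensional toy model (two heights times two internal levels; the parameter values are mine and chosen just to make the oscillations visible on a short time grid):

```python
import numpy as np

# 4x4 toy model: two heights x_1, x_2, two internal levels n = 0, 1.
hbar = c = g = m = 1.0
omega = 1.0
x_vals = [0.0, 0.5]

# Diagonal Hamiltonian: mgx plus the redshifted internal energy of each pair
E = np.array([m * g * x + hbar * omega * (1.0 + g * x / c**2) * n
              for x in x_vals for n in (0, 1)])

psi0 = np.ones(4) / 2.0                    # equal superposition of all 4 states
for t in np.linspace(0.0, 40.0, 9):
    psi_t = np.exp(-1j * E * t / hbar) * psi0      # H is diagonal: just phases
    rho = np.outer(psi_t, psi_t.conj())
    rho_A = np.einsum('iKjK->ij', rho.reshape(2, 2, 2, 2))   # trace out n
    print(f"t = {t:5.1f}   |rho_12| = {abs(rho_A[0, 1]):.3f}")
# |rho_12| oscillates up and down forever; nothing decreases monotonically
```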

And because nothing qualitatively changes for larger values of \(N\), there won’t be “true” decoherence for these larger values, either.

There are papers that say things kind of similar to what I am saying. For example, a week ago, Adler and Bassi argued that “the Pikovski et al. effect is thus a dephasing without a fully decoherent large time limit,” among other things. But I still feel that the seriousness of the fallacies that are hiding behind similar claims – basically claims that quantum mechanics “ceases to hold” in some situations – isn’t sufficiently highlighted.

As I said, I think that the anti-quantum zeal is the primary driver of similar research that is either completely flawed or has some “subtle” mistakes that invalidate the big conclusions. Here, they don’t like the idea that quantum mechanics allows pure states with relative phases even between macroscopically distinct states, at least in principle. Although the technology is different, their motivation is similar to the motivation of Penrose’s flawed ideas that “the gravitational field causes the collapse” which is responsible for the emergence of “classicality”.

Even though Pikovski et al. is less silly than the comments by Penrose, the basic profound conceptual mistake is the same. Classicality doesn’t have to emerge by any dramatic, new or previously overlooked, effect. Quantum mechanics as we have known it since the 1920s makes correct predictions for seemingly classical systems, too. That is the basic point – a true point in the presence of gravitational fields or without them – that all these Pikovskis and Penroses and Puseys and Rudolphs and Barretts and Moldoveanus and Leifers and Christians and (560 lines of text were omitted) fail to get. The misunderstanding of this basic point, namely that proper quantum mechanics just works, is what makes them spend decades on completely unjustified and ultimately wrong “research”.

Gravity isn’t needed for quantum mechanics to make the right predictions for experiments that involve small objects, large objects, as well as their interactions. It just works and it has always worked. One may also see that even if this Pikovski et al. effect were “needed” to reconcile our “seemingly classical” perceptions of objects’ positions with quantum mechanics, it would make just the altitude \(x\) behave classically. There would still be no decoherence for the other (horizontal) coordinates \(y,z\).

It’s also silly to propose “real experiments” based on this paper. It is obvious how to predict what happens in such experiments – and be sure that this is what will happen. One must appreciate that the prediction by Pikovski et al. based on the “premature tracing over” isn’t quite right, however. The correct calculation has to use the full density matrix, and not the partial trace. The purity goes from the collective degree of freedom to the internal ones and back. For this reason, at least in principle, the off-diagonal elements of the reduced density matrix don’t converge to zero for any system with a finite number of degrees of freedom.
