The very meaning of «probability» violates the time-reversal symmetry
Luboš Motl, October 10, 2014
An exchange with a reader reminded me that I wanted to dedicate a special blog post to one trivial point, the one summarized by the title. This trivial issue is apparently completely misunderstood by many laymen as well as some low-quality scientists such as Sean Carroll.
This misunderstanding prevents them from understanding both quantum mechanics and classical statistical physics, especially its explanation for the second law of thermodynamics (or the arrow of time).
What is the issue? For the sake of completeness, let’s talk about the spreading of the wave function \(\psi(x,t)\) describing the position of a particle. In the diagram above, time starts at the bottom and it goes up. You see that there are three stages of «spreading». The wave packet spreads between \(t=0\) and \(t=1\), then it abruptly shrinks because the particle is observed, and then it spreads again from \(t=1\) to \(t=2\), shrinks at \(t=2\), and spreads between \(t=2\) and \(t=3\). The diagram is qualitative and could be applied to the probability distributions for any observable in classical or quantum physics, OK?
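If you want to see the «sawtooth» behavior of the widths in numbers, here is a minimal sketch (my own toy illustration, not part of the original argument; the units, the mass, and the assumed detector resolution are arbitrary choices of mine) that uses the standard textbook formula for the width of a freely spreading Gaussian packet and resets the width at each measurement:

```python
import numpy as np

HBAR, MASS = 1.0, 1.0        # natural units (my arbitrary choice)
SIGMA_DETECTOR = 0.1         # assumed width of the packet right after a position measurement

def free_packet_width(sigma0, t):
    """Width of a free Gaussian packet of initial width sigma0 after a time t (textbook formula)."""
    return sigma0 * np.sqrt(1.0 + (HBAR * t / (2.0 * MASS * sigma0**2))**2)

sigma = SIGMA_DETECTOR
for stage in (1, 2, 3):                       # the stages t = 0..1, 1..2, 2..3 of the diagram
    spread = free_packet_width(sigma, 1.0)    # the packet spreads for one unit of time
    print(f"stage {stage}: the width grows from {sigma:.2f} to {spread:.2f}, "
          f"then the measurement shrinks it back to {SIGMA_DETECTOR:.2f}")
    sigma = SIGMA_DETECTOR                    # the shrinking: the packet is sharp again right after the measurement
```

The printed widths grow during every stage and jump back down at \(t=1,2,3\), exactly as in the correct tree diagram.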
You see that the diagram above is self-evidently asymmetric with respect to the upside-down flip. The flipped version looks like a tree
and I will refer to it as «the wrong diagram».
What is going on here? Between \(t=0\) and \(t=1\), and similarly in the other two stages of the correct tree diagram, the probability distribution or the wave function evolves according to some equations that have a mathematical property: they are invariant under the time-reversal symmetry. The complex-conjugated wave function \(\psi^*(x,t)\) evolves as \(t\) goes up (i.e. obeying the same Schrödinger equation) in exactly the same way as \(\psi(x,t)\) evolves as \(t\) goes down.
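This statement about \(\psi^*\) is easy to check numerically. Below is a minimal sketch of mine (a random real symmetric matrix stands in for a Hamiltonian of the type \(p^2/2m+V(x)\), which is real in the position basis) verifying that \(\psi^*\) evolved forward in time equals the complex conjugate of \(\psi\) evolved backward in time:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, t = 6, 0.7                       # a tiny toy Hilbert space and an arbitrary time

# A real symmetric Hamiltonian, like p^2/2m + V(x) written in the position basis, so H* = H.
H = rng.normal(size=(N, N))
H = (H + H.T) / 2.0

psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi0 /= np.linalg.norm(psi0)

U_forward = expm(-1j * H * t)       # Schroedinger evolution by +t
U_backward = expm(+1j * H * t)      # Schroedinger evolution by -t

lhs = U_forward @ np.conj(psi0)     # psi* evolved forward in time
rhs = np.conj(U_backward @ psi0)    # complex conjugate of psi evolved backward in time

print(np.allclose(lhs, rhs))        # True: the two agree, which is the time-reversal symmetry
```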
Similar comments apply to the evolution of the phase space probability distribution in classical statistical physics. The equation is known as the Liouville equation and its fundamental form is invariant under the time-reversal symmetry, too. These three «continuous segments» of the «green tree» diagram are something that the confused people don’t have a problem with.
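The underlying classical dynamics is time-reversal-symmetric in the same sense. In the toy sketch below (my own example, a harmonic oscillator integrated with a reversible leapfrog step), evolving a phase space point forward, flipping the sign of the momentum, and evolving forward again for the same time brings you back exactly to the initial point:

```python
import numpy as np

def leapfrog(x, p, dt, steps, force=lambda x: -x):
    """Reversible leapfrog integration of dx/dt = p, dp/dt = force(x) for a unit-mass particle."""
    for _ in range(steps):
        p += 0.5 * dt * force(x)
        x += dt * p
        p += 0.5 * dt * force(x)
    return x, p

x0, p0 = 1.3, -0.4                                 # an arbitrary initial phase space point
x1, p1 = leapfrog(x0, p0, dt=0.01, steps=500)      # evolve forward in time
x2, p2 = leapfrog(x1, -p1, dt=0.01, steps=500)     # flip the momentum, evolve forward again
print(np.allclose([x2, -p2], [x0, p0]))            # True: back at the initial point
```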
What they have a problem with are the «discontinuous jumps» at \(t=1\) and \(t=2\) – and lots of their counterparts in the real world. Needless to say, they’re the moments of the «measurement». Look at the measurement at \(t=1\), for example: this moment is the horizontal line of the first picture that divides the tree into one triangle below the line and two triangles above the line. Why did the distribution shrink at that moment? Why didn’t it «expand» instead? To answer this simple question, let’s first describe the situation near the moment \(t=1\):
At time \(t=1-\epsilon\), i.e. before the measurement, the wave packet was spread. The location of the particle (or any property of a classical or quantum system) was ill-defined or fuzzy or uncertain or partly unknown.
At time \(t=1+\epsilon\), i.e. after the measurement, the wave packet was concentrated. The location of the particle (or any property of a classical or quantum system) became well-defined or sharp or certain or well-known.
I originally wrote the second sentence using the clipboard (via copy-and-paste) but then I had to edit it because the adjectives are different. In fact, they are completely opposite. Note that if you interchange the moments \(1\pm \epsilon\) with one another, you simply obtain propositions that are wrong. One may «suddenly learn» some information but one may never «unlearn it» abruptly after an infinitesimal period of time.
Of course, you may «flip» these definitions – but then you will get an equivalent description of physics in which \(-t\) is used instead of \(+t\), and/or in which the word «past» means the «future» and vice versa. There is no reason to add this extra confusion; you won’t gain anything by this proposed chaos in the terminology. The «past» and the «future» are totally and qualitatively different whenever something about learning or observing or probabilities is involved (and it always is).
The probability distribution at the moment \(t=1-\epsilon\), i.e. before the measurement – whether the distribution is calculated from a wave function in quantum mechanics, or it is a fundamental object in classical statistical physics (or its informal counterparts: my statements really apply to any form of probability discussed by anyone, anywhere) – determines at which locations \(x\) the wave packet is more likely to be concentrated at \(t=1+\epsilon\), i.e. right after the measurement. Yes, I have used the word «likely» again, so the «definition» is circular. It’s inevitable because one can’t really define the Bayesian probability in terms of anything more fundamental. There is nothing more fundamental than that.
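To see this circular-but-consistent relationship in action, here is a minimal Monte Carlo sketch (my own illustration with an arbitrary two-peak wave function on a grid): the distribution \(|\psi(x)|^2\) just before the measurement is nothing else than the long-run frequency of the sharp outcomes found just after it:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary pre-measurement wave function on a grid: two Gaussian bumps (my made-up example).
x = np.linspace(-5.0, 5.0, 401)
psi = np.exp(-(x + 2.0)**2) + 0.5 * np.exp(-(x - 2.0)**2)
prob = np.abs(psi)**2
prob /= prob.sum()                       # Born-rule probabilities for each grid point

# Repeat the measurement at t = 1 many times; each run yields one sharp position outcome.
outcomes = rng.choice(x, size=200_000, p=prob)

# The frequencies of the sharp post-measurement outcomes reproduce the pre-measurement distribution.
print(f"observed fraction of outcomes with x < 0: {(outcomes < 0).mean():.3f}")
print(f"probability of x < 0 from |psi|^2:        {prob[x < 0].sum():.3f}")
```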
But what’s important to notice is that the meaning of the probability always refers to the situation
a property is unknown/blurred at \(t=1-\epsilon\)
it is well-known/sharp at \(t=1+\epsilon\)
The two signs simply cannot be interchanged. The very meaning, i.e. the right interpretation, of the wave function or the phase space probability distribution is in terms of probabilities, so the time-reversal-breaking quote above is inevitably understood in each and every discussion about probability distributions and wave functions.
Their very meaning – their defining property – is to tell us something about the final state of the measurement at \(t=1+\epsilon\), out of some incomplete knowledge at \(t=1-\epsilon\). Again, to stress the point, their very meaning is to tell us something about time-reversal-asymmetric abrupt events. If there were no time-reversal-asymmetric abrupt changes of our knowledge, i.e. if there were no learning and no measurements and no observations, there could be no probabilities! In that case, there would be no probability distributions and there would be no wave functions because the very meaning of all these things is to tell us what to expect at \(t=1+\epsilon\).
There is no contradiction connected with the existence of the «event of learning» or «measurement» at \(t=1\). Obviously, we sometimes have to learn some information about something, otherwise we couldn’t talk about anything and there could be no science – or ordinary life, for that matter. If the process of learning has some internal structure, if we are measuring something with an apparatus that works for complicated reasons, there is something to discuss.
But if we only want to talk about the general claim that «there are measurements», i.e. events in which we suddenly learn some sharp or sharper information about something, there is really nothing to talk about. It’s as elementary and irreducible a fact about human thought as you can get. People learn. Ergo there are these «shrinking discontinuities» in the probability distributions for everything and anything in the world. The «past side» («before» side) of these measurements always has a more blurry distribution than the «future side» (or «after» side). Whoever writes whole chapters or books or book series about the very existence of «observations» or «learning of the information» is guaranteed to have written meaningless, pompous, vacuous philosophical flapdoodle only.
This behavior of the probabilities around the measurement – where the probability tells us what to expect «after» the measurement – is the source of what I call the logical arrow of time. Stephen Hawking and others use the term «psychological arrow of time» and it’s clearly the same thing. Hawking uses the word «psychological» for a good reason – learning about something by seeing it is a «psychological process».
The reason why I prefer to avoid this «psycho-» prefix is that it leads people to think that an analysis of how brains work, of whether they have consciousness, and of what consciousness means is an obligatory ingredient in a complete analysis of the logical arrow of time. It’s not, and that’s why the term «logical arrow of time» is more appropriate. What we really need is just the fact that some information (about an observable, a property of the external world) or the truth value of a proposition is unknown at \(t=1-\epsilon\) but it is known at \(t=1+\epsilon\). I don’t need to assume anything whatsoever about the «agent» for whom it is known, his or her structure, the mechanisms inside the brain, and so on. I don’t need to assume that there is an «agent» that also has some other capabilities aside from knowing or not knowing whether a proposition about Nature is true. The logical arrow of time is about the (logical) truth values of propositions that abruptly change at \(t=1\), and the probabilities tell us which final product of the change (which «after» state) it is reasonable to expect!
This logical arrow of time is a simple, elementary, and irreducible part of our existence within Nature. But it has consequences. If you think about the comments above and recognize that all these things are as clear as you can get, you should also understand that there is no «measurement problem» in quantum mechanics – the existence of a «measurement» is tautologically an inseparable part of any statement about the corresponding «probabilities».
And you will understand that there is no problem with the thermodynamic arrow of time, either. The proof of Boltzmann’s H-theorem, or of its variations and generalizations, shows that the thermodynamic arrow of time (showing the direction of increasing entropy) is inevitably correlated with the logical arrow of time. But the logical arrow of time always exists in any logical framework that talks about probabilities because probabilities are always relevant before a moment when a property is «learned» or «decided», so they are linked to time and the relationship treats the past and the future absolutely asymmetrically!
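For readers who prefer a toy model to a theorem, here is a minimal sketch of mine (a crude cartoon of the H-theorem, certainly not Boltzmann’s proof): many random walkers start crammed into one corner of a box, and the coarse-grained Shannon entropy of their distribution climbs toward its maximum (up to small fluctuations) as we keep learning their new, more uniform distribution at later times:

```python
import numpy as np

rng = np.random.default_rng(0)
N_CELLS, N_WALKERS, N_STEPS = 40, 10_000, 2000

# All walkers start in the leftmost cell: a very low-entropy, highly concentrated macrostate.
positions = np.zeros(N_WALKERS, dtype=int)

def coarse_entropy(positions):
    """Shannon entropy (in nats) of the coarse-grained occupation probabilities of the cells."""
    p = np.bincount(positions, minlength=N_CELLS) / len(positions)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for step in range(N_STEPS + 1):
    if step % 400 == 0:
        print(f"t = {step:4d}   coarse-grained entropy = {coarse_entropy(positions):.3f}")
    # each walker hops one cell to the left or right; the clipping acts as reflecting walls
    positions = np.clip(positions + rng.choice([-1, 1], size=N_WALKERS), 0, N_CELLS - 1)

print(f"maximum possible entropy log({N_CELLS}) = {np.log(N_CELLS):.3f}")
```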
I don’t really believe too much that this clear-as-sky explanation of the issue will make someone new scream «Eureka» because these people love to be confused idiots and they are proud of it. It is probably a part of their self-confidence to think that there is something seriously wrong with statistical physics or quantum mechanics or thermodynamics (and maybe with mathematics, too), and it would hurt their ego if they had to learn that something has been wrong with the (time-reversal-asymmetric) semi-infinite part of their world lines, i.e. with their lives up to this moment when they had a nonzero probability of understanding what «probability» means and why none of these would-be problems exist. But the probability was too low, so it’s not surprising that most of them have remained confused morons instead.
And that’s the memo.