Many worlds: a Rozali-Carroll exchange
Luboš Motl, February 21, 2015
Sean Carroll wrote another tirade,
The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics
where he tries to defend some misconceptions about the «many worlds interpretation» of quantum mechanics while showing that he is totally unable and unwilling to think rationally and honestly.
After some vacuous replies to vacuous complaints by vacuous critics who love to say that physics isn’t testable, he claims that the following two assumptions,
- The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
- The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.
imply that the worlds where other outcomes of quantum measurements materialized must be as «real» as our branch of this network of parallel worlds. This claim is self-evidently untrue. Quantum mechanics – as understood for 90 years – says that no such worlds «really» exist (they can’t even be well-defined in any way) even though the theory respects the postulates.
So Carroll’s claim is equivalent to saying that \(12\times 2 = 1917\) because \(1+1=2\) and \(2+3=5\). Sorry but this would-be «derivation» is completely wrong.
There are lots of wrong things about the many-worlds muddy thinking and many ways to prove that the world cannot work like that. But I will focus on the question whether these many worlds are «implied» by the postulates.
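To fix ideas before the discussion, here is a minimal numerical sketch – my own illustration, with an arbitrarily chosen Hamiltonian, not anything from Carroll’s post – of what the two postulates above say for the simplest possible system, a single qubit: a state vector in a two-dimensional Hilbert space evolving unitarily under the Schrödinger equation. Note that nothing about «many worlds» appears anywhere in it.

```python
# A minimal sketch of the two postulates for a single qubit (illustrative
# Hamiltonian, chosen by me): the state is a vector in a Hilbert space and it
# evolves unitarily under the Schrödinger equation with some Hamiltonian H.
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                  # work in units with hbar = 1
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # some particular Hamiltonian (here ~ sigma_x)
psi0 = np.array([1.0, 0.0], dtype=complex)  # an element of the two-dimensional Hilbert space

t = 0.7
U = expm(-1j * H * t / hbar)                # solves i*hbar d|psi>/dt = H|psi>
psi_t = U @ psi0

# Born-rule probabilities of the two possible measurement outcomes
print(np.abs(psi_t) ** 2)                   # [cos(t)^2, sin(t)^2], summing to one
```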
Let me comment on a conversation between Carroll and Moshe Rozali.
Moshe: My discomfort (“objection” is too confrontational) is somewhat related, so maybe this is an opportunity for me to learn.
After decoherence, the state in the Hilbert space transforms effectively into a classical probability distribution. We have certain probabilities, given by the Born rule, assigned for every possible outcome. Those possible outcomes no longer interfere. In all other cases where this situation occurs in science, we understand those different possibilities as potentialities, but we shy away from attributing to them independent “existence”.
Now, I am not too worried about ontological baggage. I suspect that in the present context ontology as we understand it cannot be made well defined. Rather, I am worried about the epistemological baggage: what does it buy you, declaring that all those potentialities actually “exist”? Is it more than a rhetorical move? And, why make that move here and not, for example, in the context of statistical mechanics?
Note that Moshe’s first goal is not to be confrontational and he suppresses his language to achieve this goal. Is that really necessary? Well, Moshe’s comment is totally right but I think that he makes the content weaker than it should be, too.
Moshe’s point is that the probabilities predicted in quantum mechanics are analogous to probabilities that people had known before quantum mechanics, when they thought that the world was described by classical physics. But in that old era, they weren’t obsessed by saying that the other potentialities were «real». They were just potential outcomes. Analogously, there is no justification for claiming that these potentialities are «real» in quantum mechanics.
One reason why Moshe’s language is suppressed is that this is not just some vague analogy or an incomplete argument. The probabilities in classical physics are a limiting case of those in quantum mechanics. They are fundamentally the very same thing! For example, when you roll dice, the event is affected by random variations of the neuron impulses that control your muscles, and these random variations depend on quantum phenomena that may only be predicted probabilistically. These variations are amplified when the dice move in a complicated way.
For this reason, the uncertainty about the number shown by the dice is largely driven by the usual uncertainty – probabilistic character – of quantum mechanics. So if the different histories are «real» in quantum mechanics, they must be viewed as equally «real» if you talk about dice in the classical language, too. The probabilities everywhere in the classical world would require all the potentialities to be «real» as well – all these classical probabilities arise in the \(\hbar\to 0\) limit of their quantum counterparts, so they must obviously have the same interpretation.
Another reason why Moshe’s comment is «diluted» is that one may actually show that there can’t exist any consistent way of defining «how many times the world is split into several worlds» and how it happens. If the world were «splitting» too rarely, one would still face the superpositions in situations when someone may see that the outcomes are «sharp». If the world were «splitting» too often, it would effectively mean that a measurement is being made – and the interference is being erased – too often which would contradict experiments. The truth is «in between» – but where it exactly is depends on the observer. If he’s convinced he’s able to perceive an observable, it must be on the «classical side» of the Heisenberg cut. Everything else may be on the «quantum side».
But I don’t want to go in this direction. Let’s continue with the discussion whether the «real character of the potentialities» is inevitable.
Sean: Moshe – I might be misunderstanding the question, but I’ll try.
Sean is way too modest. He’s not misunderstanding just the question. He is misunderstanding all of modern physics and the concept of rational thinking, too. Why?
Sean: I think this is a case where the ontology does matter.
To a physicist’s ear, the sentence «ontology does matter» sounds weird. Why? Because it is weird. The word «ontology» doesn’t mean anything in legitimate physics because it cannot be defined in any operational way; and any mathematical definition that may be given to the word may be inadequate as a description of Nature.
What does «ontology» mean? If you analyze what all these would-be thinkers say, you will see that «ontology» and «classical physics» are exactly the same thing.
«Ontology» is the idea that something objectively exists and has properties that are objective. All the information about the things that objectively exist may be written down in an objective way. Mathematically, the space of possible states is known as the «phase space» and the dynamical laws determining the evolution of the point in the phase space are known as «classical physics».
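For concreteness, here is a minimal sketch of what this «ontology» amounts to – a point \((x,p)\) in a phase space whose entire future is determined by the dynamical laws. The harmonic oscillator below is an illustrative choice of mine, not anything from Carroll’s text.

```python
# A minimal sketch of classical "ontology": a point (x, p) in phase space
# evolving deterministically. The harmonic oscillator is purely illustrative.
def evolve(x, p, omega=1.0, dt=1e-3, steps=1000):
    """Integrate Hamilton's equations dx/dt = p, dp/dt = -omega^2 x (symplectic Euler)."""
    for _ in range(steps):
        p = p - dt * omega**2 * x
        x = x + dt * p
    return x, p

# The "objective state" is just the pair (x, p); its entire future is determined.
print(evolve(x=1.0, p=0.0))   # approximately (cos(1), -sin(1))
```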
But for 90 years, physicists have known that Nature simply doesn’t obey classical physics – it doesn’t obey this framework of (well-defined) ontology. It works differently, according to the laws of quantum mechanics, and Planck’s constant \(\hbar\) directly quantifies how wrong the idea of «ontology» is! Because Planck’s constant is nonzero, it can never be quite right to think about Nature in terms of «ontology» or an «objective state of physical systems».
Sean: In statistical physics, the theory says that there is some actual situation given by a microstate, but we don’t know what it is.
No, that’s wrong, too. Classical statistical physics in no way «demands» that the precise microstate is in principle knowable. The very point of classical statistical physics is that the precise point in the phase space is unknown to the observer – and all of classical statistical physics works perfectly well even if one assumes that the microstate is unknown and unknowable to Nature (or God), too.
What classical physics allows us to do is to make predictions that assume that some objective reality exists at each moment, even before the measurement, and this assumption leads to certain Ansätze for the probabilities. For example, we may always predict the evolution \(A\to C\) by inserting an intermediate moment \(B\) and write
\[ P(A_i\to C_f) = \sum_j P(A_i\to B_j) P(B_j\to C_f). \] One may get from the initial state \(A_i\) in the past to the final state \(C_f\) in the future in many ways but one of the classical states \(B_j\) must be realized at the intermediate moment. To get the probability of getting from \(A_i\) to \(C_f\), we simply sum the probabilities of getting from \(A_i\) to \(C_f\) through any intermediate state \(B_j\).
The formula above is usable in the classical world – the assumption that the probabilities may be written as these sums is the probabilistic reincarnation of the classical notion that an intermediate state exists even if it is not measured. But note that I said that this assumption was classical. It doesn’t mean that it’s correct. And indeed, physicists have known for 90 years that it is not correct.
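For concreteness, here is a small numerical check of the classical composition rule above, \( P(A_i\to C_f) = \sum_j P(A_i\to B_j) P(B_j\to C_f) \), using made-up two-state transition matrices; the numbers are purely illustrative.

```python
# A small numerical check of the classical composition rule with made-up
# 2-state transition matrices (the numbers are purely illustrative).
import numpy as np

P_AB = np.array([[0.8, 0.2],   # row = initial state A_i, column = intermediate state B_j
                 [0.3, 0.7]])
P_BC = np.array([[0.6, 0.4],   # row = intermediate state B_j, column = final state C_f
                 [0.1, 0.9]])

# Summing over the intermediate classical states B_j is just matrix multiplication.
P_AC = P_AB @ P_BC
print(P_AC)
print(P_AC.sum(axis=1))        # each row still sums to one, as probabilities must
```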
The quantum counterpart of the classical composition formula displayed above is
\[ \bra{C_f} U_{A\to C} \ket{A_i} = \sum_j \bra{C_f} U_{B\to C} \ket{B_j} \bra{B_j} U_{A\to B} \ket{A_i},\\ P(A_i \to C_f) = \left| \bra{C_f} U_{A\to C} \ket{A_i} \right|^2 \] The sum over the intermediate states \(B_j\) in the first line is perhaps analogous to the classical sum, but it is totally inequivalent. Why? One first sums the amplitudes and then squares the absolute value of their sum, instead of summing the squares. That’s why interference and other quantum phenomena occur. One may derive the classical formula from the quantum formula in a certain limit – using both the usual \(\hbar\to 0\) limiting procedures and decoherence – but one simply cannot derive the second, quantum equation from the classical one, regardless of the identification of the phase space!
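To make the contrast explicit, here is the same two-step composition done with complex amplitudes, using arbitrary illustrative unitaries of my own choice: summing the amplitudes first and squaring afterwards gives a different answer than summing the squared amplitudes, and the gap between the two numbers is exactly the interference.

```python
# The two-step composition done with amplitudes, the way quantum mechanics
# demands (the unitaries below are arbitrary illustrative rotations).
import numpy as np

def rotation(theta):
    """A simple 2x2 real unitary parametrized by one angle."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)

U_AB = rotation(0.4)   # matrix elements <B_j| U_{A->B} |A_i>
U_BC = rotation(1.1)   # matrix elements <C_f| U_{B->C} |B_j>

i, f = 0, 0            # pick an initial and a final basis state

# Quantum rule: sum the amplitudes over B_j, then take the squared absolute value.
amplitude = sum(U_BC[f, j] * U_AB[j, i] for j in range(2))
p_quantum = abs(amplitude) ** 2

# "Classical-looking" rule applied to the same amplitudes: sum the squared moduli.
p_no_interference = sum(abs(U_BC[f, j] * U_AB[j, i]) ** 2 for j in range(2))

print(p_quantum, p_no_interference)   # the two numbers differ; the gap is interference
```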
Quantum mechanics strongly and unambiguously refutes the idea that the world at the intermediate moment \(B\) (before the initial and final measurement) has some objective features. If you are making this assumption, you are assuming that the world is described by classical physics and you are guaranteed to produce wrong predictions.
Sean: So instead we work with probability distributions; they can evolve, and we can update them appropriately in response to new information. None of this changes the fact that there is a microstate, and it evolves (typically) deterministically once you know the whole state.
No. Again, there doesn’t exist any need for the microstate in classical statistical physics to be knowable in principle. In classical physics, one may imagine – and people often find it useful – that a precise microstate actually exists and is known to Nature (or God) even if it is unknown to us. So the \(P_{AC}=\sum P_{AB}P_{BC}\) formula assuming a clear intermediate state may be used, even though we often don’t use it.
But in quantum mechanics, one can’t even imagine that. The laws of classical statistical physics are compatible with the idea that the precise information about the intermediate microstate (or the state at any moment, for that matter) is known to some perfect agent. But the laws of quantum mechanics are not compatible with that assumption!
So the idea that the «precise intermediate microstate was known to Nature» was an axiom you could add (but you didn’t have to add!) into your axiomatic systems for classical statistical physics. But it’s an axiom that you simply have to abandon if you want to understand the correct generalization or deformation of classical statistical physics, namely quantum (statistical or otherwise) mechanics. The axiom is simply no longer valid just like the axioms that the spatial geometry is perfectly flat etc. are no longer true in general relativity.
The fact that this axiom «didn’t hurt» in classical physics doesn’t mean that it doesn’t hurt in quantum mechanics. It surely does.
Sean: In QM the situation is just completely different. You don’t have a probability distribution over microstates, you have a quantum state.
No. That’s a completely wrong description of the differences between classical physics and quantum mechanics. The fact that some things about the state of the physical system may be known and others may be unknown (or known just in terms of probabilities) is something that holds both for classical physics and quantum mechanics. There is no difference between the two frameworks when it comes to the point that probability distributions may be needed or exploited. And when they’re needed or exploited, their interpretation is exactly the same. Both in classical and quantum physics, the probabilities we are interested in are probabilities that «a certain statement about observables holds». It’s the observables, not microstates, that are found at the root of physics.
But a density matrix may be considered a quantum description of a probability distribution over microstates – exactly the thing that Carr*ll claims we don’t have.
The aspect by which classical physics and quantum mechanics differ is the set of laws that allow us to calculate these probability distributions. The difference between the two equations with sums above (the sum of products of probabilities in classical physics; and the sum of products of complex probability amplitudes in quantum mechanics) may be viewed as a good symbol of the qualitative difference between the laws of classical physics and the laws of quantum mechanics. But again, it is the formulae by which the probability distributions are calculated that are different in classical and quantum physics. The fact that both frameworks may use or do use probabilities is shared by both. And the probabilities mean the same thing in both frameworks. They always refer to numbers that tell us which outcomes may be reasonably expected in a situation when the outcome is unknown before the measurement, and known afterwards.
Sean: You use that quantum state to calculate the probability of experimental outcomes, but we aren’t allowed to think that the outcome we observe represents some truth that was there all along, but we just didn’t know. That’s what interference experiments (and Bell’s theorem etc) tell us.
Right. It’s surprising that these two correct sentences appear in the middle of all the junk. As far as I can tell, they directly contradict everything else that Carr*ll wrote.
Sean: The quantum state isn’t just a probability distribution. See also the PBR Theorem.
No, that’s a wrong proposition again. The quantum state – pure state or the general density matrix – may be fully described in terms of probability distributions for all observables that it predicts. These probability distributions for various observables are not quite independent from each other but they do exist and if you know all of them, you may reconstruct the full density matrix. What’s impossible is to use the classical formulae such as \(P_{AC}=\sum P_{AB}P_{BC}\) to calculate these probabilities. But that doesn’t mean that the probability distributions that the quantum state or density matrix encodes are not probability distributions. They are probability distributions. Their interpretation (consequences for our predictions of experiments) is exactly the same as it was in classical physics. It’s the laws to calculate them that have been upgraded and qualitatively changed!
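Here is a sketch of that claim for the simplest case, a single qubit, with an arbitrarily chosen density matrix of my own: the outcome probabilities of the Pauli observables fix the expectation values \(\langle\sigma_x\rangle,\langle\sigma_y\rangle,\langle\sigma_z\rangle\), and those reconstruct the full density matrix.

```python
# A sketch for a single qubit: the probability distributions of the Pauli
# observables determine <sx>, <sy>, <sz>, and those reconstruct the density
# matrix as rho = (1 + <sx> sx + <sy> sy + <sz> sz) / 2.
# The particular rho below is an arbitrary illustrative choice.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rho = np.array([[0.7, 0.2 + 0.1j],
                [0.2 - 0.1j, 0.3]])   # Hermitian, trace one, positive eigenvalues

# The +1/-1 outcome probabilities of each Pauli observable fix its expectation value...
expectations = [np.real(np.trace(rho @ s)) for s in (sx, sy, sz)]

# ...and the three expectation values give the full density matrix back.
rho_back = 0.5 * (np.eye(2) + sum(e * s for e, s in zip(expectations, (sx, sy, sz))))
print(np.allclose(rho, rho_back))     # True
```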
Sean: Now, of course you are welcome to invent a theory (a “psi-epistemic” model) in which the wave function isn’t the reality, but just a black box we use to calculate probabilities. Good luck — it turns out to be hard, and as far as we know there isn’t currently a satisfactory model.
No. The correct and complete theory (or framework, waiting for the Hamiltonian etc. to be specified) was found 90 years ago and it is called quantum mechanics. It was a groundbreaking discovery, probably the most important discovery of 20th century physics, but it wasn’t that hard because its founders were very smart.
The philosophical phrases «psi-ontic model» and «psi-epistemic model» are being used by the self-styled philosophers to describe laws of physics governing a point in the phase space or the probability distribution on the phase space (i.e. classical deterministic physics and classical statistical physics), respectively. And to describe Nature using either of these two templates isn’t just hard. It is impossible because Nature doesn’t obey any laws of classical physics and this fact has been known for 90 years and should be known to everyone who gets at least an undergraduate degree in physics.
Sean: The Everettian says, Why work that hard when the theory we already have is extremely streamlined and provides a perfect fit to the data?
It’s not just an Everettian who says that – it’s a sleazy, dishonest, stupid, šitty aßhole who loves to impress others with misleading, superficial, demagogic commercials trying to sell šit as gold. Without the obscenities: it is not the Everettian who said it. It was Carr*ll.
Sean: (Answer: because people are made uncomfortable by the existence of all those universes, which is not a good reason at all.)
Some people are made uncomfortable but other people may also easily show that no such thing exists. The true reason why people have problems with the intrinsically probabilistic laws of quantum mechanics is that their brains, like Carr*ll’s brain, are just too tiny and incapable of thinking beyond classical physics.
Or, using the words of P. Hayes, the reason behind the problems is that people feel uncomfortable while thinking about probability theory without ensembles but it’s just a self-inflicted injury.
Moshe: Thanks, Sean. I suspect this is my own personal misunderstanding, so I don’t want to take too much of your time. Let me try just once again to state my confusion.
«Sean, I am ready to lick your ass as deeply as you want, if you have time.» Please, Moshe, is that really necessary or desirable? You have studied physics at quite some level for decades and done some serious research, unlike Carroll, so why are you always starting with this aßlickery dedicated to a bully who doesn’t have the slightest clue what he is talking about?
Moshe: I have no problem believing in Everettian Quantum Mechanics, and I certainly see the appeal of getting everything from unitary evolution with no additional assumptions. So we don’t have any real disagreement. But, I am confused about the natural language description of the situation. So maybe it is about ontology, a concept I clearly have some trouble with.
Holy cow. The idea that the unitary evolution is positively correlated with the fairy-tales about many worlds is just cheap demagogy, and Moshe must have been drunk if he bought it. The actual relationship is the opposite. The idea that there is an objective branching into parallel worlds erases the mixed terms, makes the interference impossible, and it contradicts the unitary evolution.
Moshe: I take it that the crucial part in taking different possibilities as actualities is not in the post-decoherence description. If the world was fundamentally stochastic, simply described by an evolution of a density matrix, not too many people would claim that the different possibilities are more than just potentialities, and most will agree that only one of them is realized. And, this is what I feel uncomfortable about — pre-decoherence it is certainly murky to discuss the world in classical terms and argue on what exists and what not. And post-decoherence we have a probability distribution, for which normally we only believe one situation is realized. At which point are we forced into believing that all branches co-exist?
(Independently, as I complained before, almost everything in physics has a continuous spectrum, so “branches” and worlds “splitting” must be only a metaphor.)
Exactly. Decoherence in no way implies that the other potentialities become «real». Decoherence just means that the equation involving the sum of products of amplitudes may be effectively rewritten as the equation involving the sum of products of probabilities – as long as we trace over some environmental degrees of freedom. So after decoherence, the probabilities approximately (very accurately) follow the laws of classical physics. But they’re the same classical probability distributions we always had in mind when we thought about the world classically. In particular, the other potential outcomes are not «real» anywhere.
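Here is a toy sketch of that statement – my own illustration, not any particular model from the exchange: a system qubit gets correlated with two environment states, and tracing over the environment suppresses the interference terms in proportion to their overlap, leaving an ordinary classical-looking probability distribution on the diagonal.

```python
# Toy decoherence sketch: the system qubit (amplitudes alpha, beta) is entangled
# with two environment states |e0>, |e1>. The reduced density matrix of the
# system has off-diagonal (interference) terms proportional to <e0|e1>.
import numpy as np

alpha, beta = np.sqrt(0.64), np.sqrt(0.36)   # real amplitudes, alpha^2 + beta^2 = 1

def reduced_density_matrix(overlap):
    """System density matrix for the joint state alpha|0>|e0> + beta|1>|e1>,
    after tracing out the environment, with a real overlap <e0|e1>."""
    return np.array([[alpha**2,               alpha * beta * overlap],
                     [alpha * beta * overlap, beta**2]])

print(reduced_density_matrix(overlap=1.0))   # no decoherence: interference terms present
print(reduced_density_matrix(overlap=0.0))   # full decoherence: diag(0.64, 0.36) only
```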
The «continuous splitting» is just one particular problem that shows that no functional version of MWI can actually exist at the mathematical level. There are many other ways to see that it can’t work.
Sean: Moshe: Of course the world can perfectly well be said to be described by a density matrix, since any pure state determines a density matrix. The real question is, how does the density matrix evolve? We sometimes think of decoherence happening, off-diagonal elements disappearing, and the state “branching.” But that’s only for the reduced density matrix for some subsystem; the full density matrix obeys the unitary von Neumann equation, from which the above description can be derived.
Right, decoherence may only be derived – and it is only true – if there are environmental degrees of freedom that the observer doesn’t have access to. But when he doesn’t have access to those, he may trace over them, and the decoherence-like calculation is then exactly the right way to predict everything he does have access to.
At any rate, whether one is tracing over something or not, it is totally obvious that the eigenvalues (or diagonal entries) of the density matrix have the same probabilistic interpretation. That also means that if the other potentialities aren’t real in one case, they can’t be real in any other case, either.
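A quick consistency check of that point in the same toy setting – again my own illustration: the probability of a system outcome comes out the same whether one computes it from the full joint pure state or from the diagonal of the reduced density matrix obtained by tracing over the environment.

```python
# The probability of a system outcome is the same whether it is computed from
# the full joint pure state or from the reduced density matrix of the system.
import numpy as np

alpha, beta = np.sqrt(0.64), np.sqrt(0.36)
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # orthogonal environment states

# Joint state alpha |0>|e0> + beta |1>|e1> in the four-dimensional Hilbert space.
psi = alpha * np.kron([1.0, 0.0], e0) + beta * np.kron([0.0, 1.0], e1)

# Probability of finding the system in |0>, computed from the full state...
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))          # projector |0><0| tensor 1_env
p_full = np.real(psi.conj() @ P0 @ psi)

# ...and from the reduced density matrix of the system alone (environment traced out).
rho_system = np.array([[alpha**2, 0.0], [0.0, beta**2]])
p_reduced = rho_system[0, 0]

print(p_full, p_reduced)                               # both equal 0.64
```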
Sean: So you have a choice: you can believe all that, and take the “probability distribution” to be a measure on which branch you find yourself on, like a good Everettian.
This is just a childish visualization for someone who needs to draw pictures but it doesn’t make things any more meaningful, quite on the contrary. If two different worlds exist, there is no reason to say that we’re in one of them with probability 64% and in the other one with probability 36%. The most sensible distribution would be 50% – 50%. So the very act of assigning general probabilities to the «branches» means that we are not really talking about «several worlds that are equally real» but about some asymmetric generalization of this concept (a diagram which is claimed to be the «real thing») which doesn’t really make any mathematical sense.
In the end, it’s the probabilities that may be calculated in quantum mechanics and verified by measurements, and the misleading picture with the potentialities as «actual worlds» doesn’t help to make anything meaningful – it really contradicts quantum mechanics as long as one has at least somewhat high standards.
Sean: Or you can — do something else!
No, the right thing to do is to have the courage and do nothing (or to shut up and calculate) and simply accept Nature as it is – and as the founding fathers of quantum mechanics found Nature to be 90 years ago. Everything «else» that people have done – and continue to produce – is worthless, fundamentally wrong stinky šit.
Sean: Change the formalism in some way so that you get to say “but those other parts of the density matrix aren’t real things.” You could invent a hidden variable that points to one particular branch (as in Bohm), or you could explicitly change the dynamics so that the other branches actually aren’t there, or you could invent a completely new (and as-yet unspecified) ontology such that the density matrix simply provides a probability distribution over some different set of variables.
There are about 5 major classes and hundreds of subtypes of this stinky šit that various peabrains have been doing for years.
Sean: But you have to do something — otherwise you’re just stamping your feet and insisting that some parts of the formalism are “real” and some are not, for no obvious reason.
No, we don’t have to do anything and a good physicist doesn’t do any of these stinking šits because he is not a stinking šithead. Instead, he accepts quantum mechanics as the right description, a theory that unambiguously implies that classical physics is incorrect and that everything the likes of Carr*ll have written about the character of the physical laws is a pile of crap.
For an observer, his observations or perceptions are the only «truly real» (yet subjective) things, and the laws of physics may be used to (probabilistically) relate them with each other. Saying that anything «else» is «real» is either downright wrong or physically meaningless.
Moshe: Sorry, in “described by an evolution of a density matrix” I meant “described by an evolution of a probability distribution”, which unfortunately changed the meaning quite a bit.
Anyhow, I am not confident enough to have a real opinion on the reality of the wavefunction in the fully quantum regime, or whether this question makes sense. But, I thought that the real force of the “many-worlds” interpretation of Everettian Quantum Mechanics is that you don’t have a choice, and this I don’t see. I see a plausible scenario of how you get classical probabilistic description out of QM, which is quite a bit! But I don’t see why you need to declare the alternatives as “real”, any more than you do for other classical probability theories.
For example, in the fully quantum regime you can take the view so nicely expressed here by Tom. If the question of “what is real” only makes a meaningful appearance post-decoherence, I think you never have a situation with co-existing worlds. And, if the many-worlds part of the interpretation depends on how you interpret the wave-function pre-decoherence, I think the inevitability claim is not that strong.
We’ve already discussed that. A diluted solution of the truth.
Moshe: But anyhow, thanks for the discussion. I am probably missing some of your points, but it’s been useful for me nonetheless.
Another good moment to vomit.
I can’t imagine that this junk will ever go away. Society is being bullied by tons of extraordinarily stupid and arrogant bullies similar to Carr*ll, and the people who have some clue, like Moshe, are increasingly manipulated into this role of inconsequential aßlickers who are fading away – even at places whose purpose is to concentrate the minds that should know better.