Luboš Motl, January 27, 2016
Sabine Hossenfelder tries to promote some bizarre would-be research of her pseudoscientific friends, in this case about the «gravitational rainbow». She credits Smolin and Magueijo (2003) with this flawed concept even though those guys only tried to plagiarize Smoot-Steinhardt 1993, Accioly+Blas 2001, Accioly 2002, and others. Thankfully, nothing valuable was stolen here.
The event that made her write about this junk again was the recent publication of a Lewandowski+2 (2014) article in a journal. All the differences between the papers are irrelevant. All these people have been playing with the idea that quantum gravity «forces» the speed of light to be energy-dependent or, equivalently, the metric tensor must depend on the frequency of the quanta, just like in real-world materials (such as glass) which produce dispersion (and therefore rainbows).
All these people share a belief of a sort – a totally irrational belief – that gravity in combination with quantum mechanics makes it necessary for the spacetime to look like «glass» of a sort, or an aether, something that unavoidably breaks the principle of relativity (or the Lorentz invariance). However, whether or not you believe that string/M-theory is the right theory of Nature, it is a sufficient counterexample to disprove the idea that the Lorentz symmetry, quantum mechanics, and gravity are incompatible. They’re perfectly compatible within string theory!
Numerous experiments have refuted this concept. I don’t really keep track of all the (astrophysics-based) experiments because it’s been a totally settled issue for me for over 20 years but they keep on coming and are being improved every year. Experiments can’t ever «rigorously prove» that some real coefficient is strictly zero but the current experimental upper limit for many related coefficients is so unnaturally small that one may conclude that the coefficient is either exactly zero, or the Lorentz-violating fundamental theory would have to solve an additional, huge, new hierarchy problem to become viable.
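The reason such astrophysical tests are so constraining is easy to estimate with a back-of-the-envelope calculation. Here is a minimal sketch (the 10 GeV photon energy, ~1 Gpc distance, and Planck-scale suppression are illustrative assumptions, not data from any particular observation; real analyses integrate over redshift): even a Planck-suppressed, linear-in-energy modification of the photon speed accumulates into a macroscopic arrival-time delay over cosmological distances, which is why gamma-ray timing can probe such coefficients at all.

```python
# Order-of-magnitude sketch: if the speed of light depended linearly on photon
# energy, v(E) ~ c * (1 - E / E_QG), a gamma-ray burst at cosmological distance
# would show an energy-dependent arrival-time spread.  All numbers below are
# illustrative assumptions, not measurements.

C = 3.0e8            # speed of light, m/s
GPC = 3.086e25       # 1 gigaparsec in meters
E_PLANCK = 1.22e19   # Planck energy in GeV

def arrival_delay(e_gev, distance_m, e_qg_gev=E_PLANCK):
    """Delay of a photon of energy e_gev relative to a low-energy photon,
    for a linear-in-energy modified speed of light suppressed by e_qg_gev."""
    return (e_gev / e_qg_gev) * (distance_m / C)

delay = arrival_delay(10.0, 1.0 * GPC)
print(f"delay for a 10 GeV photon from ~1 Gpc: {delay:.3f} s")  # ~0.08 s
```

A delay of a tenth of a second is easily resolvable against the millisecond-scale variability of gamma-ray bursts, which is why the observed absence of such delays pushes the coefficients to the unnaturally small values mentioned above.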
It’s simply not true that any sort of violation of the Lorentz invariance (within a Minkowski background) must happen or should be expected. The principle of relativity, the principle of constancy of the speed of light, the equivalence principle, and the postulates of quantum mechanics are logically independent principles. But they do not contradict each other; all of them are mutually consistent. Only a brutally sloppy person may view one of these principles as a reason to abandon another. The theory (well, one theory, as we believe today) that obeys all these principles is very special, constrained, and basically uniquely determined, but it exists. Even before string theory was known to be a consistent theory of quantum gravity, people knew that all those principles were compatible at the approximate level of effective field theories. The need to respect both quantum mechanics and gravity (it’s better to avoid the term «quantum gravity» here because these crackpots are basically injecting wrong statements into the very phrase) just doesn’t mean that the spacetime resembles a glass or a rainbow or a classical cellular automaton or anything of the sort.
Again, the counterexample to these pseudoscientists’ assumption that the principles above are incompatible indisputably exists whether or not string/M-theory is the right theory of the Universe around us. Hossenfelder and fellow would-be scientists don’t hesitate to deny even the most well-known facts, pretending that they have never heard of them. For example, Hossenfelder wrote:
Einstein’s theory of general relativity still stands apart from the other known forces by its refusal to be quantized.
When Scherk and Schwarz showed that quantum gravity did follow from a consistent quantum mechanical framework (and therefore the sentence above is just plain wrong), your humble correspondent was less than one year old and Ms Hossenfelder hadn’t been conceived yet. She has had all her damn life to notice this important development and lots of people around her have been helping her, but she has still failed – simply because she lacks basic integrity, not just the kind of enhanced integrity expected from scientists but an ordinary person’s integrity, too.
In the effective field theory description of gravity, the metric tensor arises and its values may be described as depending on the energy scale at which they are observed. But the key point that all these individuals are trying to deny to one extent or another is that the descriptions with different metrics (e.g. metric tensors associated with different renormalization group scales) are complementary to each other. One simply cannot use all of them at the same time.
There is only one metric tensor field in the spacetime whenever it makes sense to talk about the field at all – not many, let alone infinitely many. One may parameterize the physics in different ways and assume that the same dynamics is taking place on different geometric spacetime backgrounds (especially because of the numerous dualities in string theory). But one must always realize that either one background exists, or another, but not both of them simultaneously. These are just different descriptions of the same degrees of freedom. There can’t be «very many» independent metric tensors in the same spacetime because there’s only «one package» of the conservation laws (for the stress-energy tensor) or the corresponding (spacetime translational) symmetries that are needed to render the unphysical degrees of freedom in «one metric tensor» harmless.
And again, there’s simply no reason to «sacrifice» the Lorentz invariance, straightforward natural ways to do so are self-evidently ruled out experimentally, and if one were «sacrificing» the Lorentz invariance, one would have to do it properly, anyway. To show the utter stupidity of these people, let me quote a characteristic piece of the «recent» article by Lewandowski+2 around equation 23:
We thus obtain a modified dispersion relation from (20),
\[ E^2 = m^2 + (1+\beta)p^2 + O(p^4) \] where \(\beta\approx 1\) should be true even for \(p\approx m\approx 1\,\mathrm{GeV}\),
if you allow me to simplify their talkative text. Holy cow, you must be joking. The correct formula is \(E^2=m^2+p^2\), i.e. \(\beta=0\), and no additional \(O(p^4)\) term appears in the relation. You just need to take an ordinary proton and accelerate it to a speed comparable to 50% of the speed of light to see that relativity works. The actual experiments people have performed have tested these phenomena as well as vastly more extreme ones. So how can Lewandowski et al. even suggest that they’re not unhinged imbeciles if they believe that the ordinary dispersion relation for the ordinary proton is violated by \(O(100)\) percent at speeds comparable to 50% of the speed of light? No detectable violations exist. Of course they are complete idiots if they believe in such violations.
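To see concretely how absurd the \(\beta\approx 1\) claim is, here is a short numerical sketch (the proton mass and \(v=0.5c\) are the only inputs; natural units with \(c=1\) and energies in GeV):

```python
# Compare the correct special-relativistic energy of a proton at v = 0.5c
# with the "rainbow" relation E^2 = m^2 + (1+beta) p^2 for beta = 1.
import math

M_PROTON = 0.938   # proton mass in GeV

v = 0.5                                   # speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - v**2)
p = gamma * M_PROTON * v                  # relativistic momentum
e_correct = math.sqrt(M_PROTON**2 + p**2)         # equals gamma * m
e_modified = math.sqrt(M_PROTON**2 + 2.0 * p**2)  # beta = 1

t_correct = e_correct - M_PROTON    # kinetic energies
t_modified = e_modified - M_PROTON
print(f"kinetic energy ratio (modified/correct): {t_modified / t_correct:.2f}")
```

The modified relation nearly doubles the proton’s kinetic energy at this speed – an effect that every accelerator built since the 1930s would have screamed about.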
A characteristic delusion penetrating all these people’s texts is their belief that the introduction of random errors into the formalism is a principle of a sort. So they just take Einstein’s special relativity, throw it out of the window by adding exactly all the terms that the theory prohibits, and they apparently think that they have found something as important as Einstein did, if not more so.
But the deliberately introduced errors and inequalities are not principles.
Instead, errors and inequalities may be described as the «lack of principles». If you contradict the dispersion relation \(E^2=m^2+p^2\) that special relativity makes unavoidable, then you clearly cannot assume the same postulates that led Einstein to the special theory of relativity. But you haven’t replaced them by any other, at least comparably strong, principles. You have only denied some principles.
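For the record, the dispersion relation really is an unavoidable two-line consequence of the relativistic energy and momentum (in units with \(c=1\)):

\[ E = \gamma m, \qquad p = \gamma m v, \qquad \gamma = \frac{1}{\sqrt{1-v^2}} \]
\[ E^2 - p^2 = \gamma^2 m^2 (1 - v^2) = m^2 \quad\Longrightarrow\quad E^2 = m^2 + p^2. \]

Any modification of the relation therefore forces you to abandon at least one of the formulas for \(E\) or \(p\), i.e. the Lorentz transformations themselves.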
So if you want to determine what this «general framework» you are talking about predicts, you have to allow all kinds of theories that violate the rules of relativity in basically any way. You are back to the most general non-relativistic model building with a preferred reference system. The probability that a random theory in this class will look relativistic at the precision of experiments we can do – i.e. the probability that such a theory may avoid falsification – is basically zero.
Clearly, you need either the strict rules of relativity or something that has basically the same effect. These people just don’t have anything of the sort. Most likely, nothing of the sort exists. So it’s almost exactly accurate to say that they just 100% contradict all the empirical evidence that was used to verify Einstein’s relativity. They are full-fledged relativity deniers.
Everywhere in their papers and blog posts, you may see their incredibly lousy standards and the complete inability to learn anything from research. Let’s look at a sentence from Hossenfelder’s blog post:
In a paper which recently appeared in PLB (arXiv version here), three researchers from the University of Warsaw have made a new attempt to give meaning to rainbow gravity. While it doesn’t really solve all problems, it makes considerably more sense than the previous attempts.
In these words, you may see that Hossenfelder probably realizes that none of the previous papers about the «rainbow gravity» makes sense. And she makes it explicit that she realizes that the new paper doesn’t quite make sense, either. But she believes it makes «considerably more sense than the previous attempts». What is this statement supposed to mean? If you have two papers or two theories that don’t really make sense, how do you measure which of them makes more sense? And even if this comparison could be made, what’s Hossenfelder’s evidence that the new paper makes more sense than the previous ones?
It doesn’t. All of them are garbage and these people are just stuck. They keep on producing bogus science because they are getting money from generous sponsors for this worthless activity. So they pretend that there is some progress but there is none. Their new papers make no sense just like their previous papers made no sense. Meanwhile, the people who are doing genuine science are writing papers where everything is carefully verified to make sense. If they claim that something follows from something, they very carefully verify all the arguments that it’s the case. If they claim to have a theory that’s compatible with a great body of some empirical data, they actually do lots of work to check this claim – and, more importantly, this claim is true. If it didn’t seem sufficiently convincing that this claim is true, genuine scientists wouldn’t dare to publish a paper making such a claim. Only subpar writers dare to behave in this way.
You know, genuine scientists are working hard to make sure that well over 99% of their argumentation works. They can’t be satisfied with hints that 32% of their paper seems to make sense while it could have been just 31% ten years ago. One simply can’t build any science on a conglomerate of ideas out of which only 32% seems to make sense. It’s like building a skyscraper out of 32% bricks mixed with 68% sand. It won’t work as a home.
Hossenfelders, Smolins, Magueijos, Lewandowskis, and similar people have nothing to do with science. What they do may superficially resemble science in the eyes of a layman who has no clue how to verify any claim. But none of the things they do really works – just like the wooden phones of the cargo cultists in the well-known analogous story. And whenever something important is found by others or, much less frequently, by themselves, they make sure that they deny or forget all these things as soon as possible. Their original dogmas and misconceptions are always and forever more important for them than any new evidence, any opportunity to learn a lesson. They’re stuck in a stinky cesspool where no valuable material floats and they’re trying to throw the content of the cesspool into all directions in the effort to pretend that they’re good enough because the whole world is a giant cesspool.
But the whole world is not a cesspool.