Deep conceptual questions are rarely solved «directly»

October 05, 2014

I was just planning to explain what’s wrong with the whole attitude to research that is exemplified e.g. by Sean Carroll’s text

Ten Questions for the Philosophy of Cosmology

Carroll writes down 10 «big questions» – usually not very good ones – and I will answer most of them below. But independently of the precise choice of the questions, there is something more important and seriously flawed in the thinking of Carroll and many, many others – something seriously defective about their whole conception of the «scientific method».

It seems clear that the method according to the likes of Carroll – and their papers reinforce this point – has the following stages:

  • start with a «new» philosophical idea you’ve heard somewhere
  • convince yourself that it is deep, and write increasingly verbally sophisticated and persuasive articles making others believe that it must be great and deep
  • just write the breakthrough papers showing that the idea may be used to calculate everything in a branch of physics more accurately and at a deeper level

You know, the problem is that this algorithm never works – or at least almost never works. Progress in physics, including the most conceptual breakthroughs, follows different lines.

That’s why breakthroughs in physics almost always occur very indirectly, after the heroes have tried to solve seemingly more concrete, down-to-earth problems. As Witten said at the beginning of the video above, quantum theory began with Planck’s successful phenomenological interpolation fitting the curve of the black body radiation. The idea of energy quanta (photons) actually arose out of that research, along with some initial wave-particle duality insights and the «analogous» description of the electrons’ motion.

So the birth of the quantum theory was indirect and low-key, you could say. As Witten said, the birth of string theory was analogously low-key: Veneziano just constructed a particular function of the momenta, the scattering amplitude for the pions in his not quite well-motivated theory of the strong force. This Veneziano amplitude played a role very analogous to that of Planck’s curve, and the rest of string theory actually sprouted out of this low-key seed.
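Concretely, the four-point amplitude Veneziano wrote down is just a ratio of Euler Gamma functions of the Mandelstam variables,

\[ A(s,t) = \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))}, \qquad \alpha(s) = \alpha(0) + \alpha' s, \]

a compact formula whose poles at \(\alpha(s)=0,1,2,\dots\) reproduce an infinite tower of resonances on linear Regge trajectories – the tower that was later understood as the excited states of a vibrating string.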

I would actually argue that Newton’s discovery of the laws of gravity and mechanics was analogous to Planck’s, too. He decided there had to be some more universally applicable laws and was interpolating between the existing knowledge about the motion of moons etc. and the existing knowledge about the motion of apples etc. on the Earth. This interpolation was very analogous to Planck’s interpolation between the high frequencies and low frequencies emitted by a black body. There are lots of other examples.
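To recall what Planck’s interpolation actually looked like: the Rayleigh-Jeans law worked at low frequencies while Wien’s law worked at high frequencies, and Planck’s formula smoothly connects the two regimes,

\[ u(\nu,T) = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/kT}-1} \;\longrightarrow\; \begin{cases} \dfrac{8\pi\nu^2}{c^3}\,kT, & h\nu \ll kT \quad (\text{Rayleigh-Jeans}) \\[2mm] \dfrac{8\pi h\nu^3}{c^3}\,e^{-h\nu/kT}, & h\nu \gg kT \quad (\text{Wien}) \end{cases} \]

and the constant \(h\) needed to fit the measured curve was the seed of the whole quantum theory.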

Einstein’s discoveries are often presented as an example of breakthroughs in which great philosophical principles lead to great physical theories «directly». It’s possible to present them in this way. However, what I find important is that unlike e.g. the postulates of quantum mechanics, none of these principles was really «new» relative to the properties of well-known theories.

What do I mean? The special theory of relativity is based on two postulates. First, the laws of physics have the same form in all inertial frames. Well, this postulate was true in Newtonian physics. It was not only true; it was appreciated as a principle (already by Galileo). The second postulate says that the speed of light is the same in all frames, regardless of the source and the observer. One could have extracted that from the Michelson-Morley experiments. Einstein didn’t need those. He really extracted it from Maxwell’s equations. They clearly imply that the light speed is independent of the source. Einstein also made the modest leap that the speed is independent of the observer. It was just a stricter version of the first postulate, anyway. If a train is moving uniformly, we can’t distinguish its state of motion from the state at rest – because (or if) there is no «wind». Einstein figured out that this «smooth experience in the uniformly moving train» could have been violated by an «aether wind» or other relativity-breaking properties of the medium in which the light propagates. That was enough for the rest of special relativity to sprout.
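The step from Maxwell’s equations to the second postulate is short: in the vacuum, the equations combine into a wave equation,

\[ \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \,\frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3\times 10^8\;{\rm m/s}, \]

whose propagation speed is fixed by the constants \(\mu_0,\varepsilon_0\) and contains no reference to the velocity of the source at all.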

General relativity also added the equivalence principle, a pattern that was known before – even Isaac Newton appreciated it as an unexplained hint of some extra knowledge. The inertial and gravitational masses are the same; the effect of gravity and the effect of inertia are empirically indistinguishable. In combination with special relativity, it was enough to construct general relativity.

My point is that in these cases when «philosophically flavored principles» were apparently at the root of the key new physics discoveries, they were old principles, not new ones, and the key discovery was about taking them seriously and/or interpolating between them. We could also say that all these huge advances in physics were due to some «unification» (special relativity unifies/reconciles mechanics and field theory, general relativity unifies/reconciles special relativity and the law of gravity, quantum field theory unifies/reconciles quantum mechanics and special relativity, string theory unifies/reconciles quantum field theory and general relativity). And the «unification» is always a form of «interpolation» – so it is analogous to what Planck did when he wrote down the right formula for the black body curve.

The big breakthroughs don’t ever germinate out of a new bold philosophical principle. New insights in physics are produced either by known inconsistencies of the existing theories or their deviations from the experiment, or from a unification/interpolation of the known laws. And if truly and conceptually new philosophical principles (such as the postulates of quantum mechanics) ever emerge along with the new theories, it’s always «truly new principles» that haven’t been «awaited» by any philosophers – and in fact, these philosophers then often have problems with the new important principles for decades if not centuries.

There are other ways to see why «Carroll’s scientific method» is defective. I said that he first decides what the «big philosophical principles» are and then he refines them to make their depth more persuasive. But the whole second step is just about fooling himself – and others. Whether the new philosophical principle or proposition is at all useful for progress in science is already known before the second stage – and more convincing ways to describe it can’t change anything about its potential! If the principle were any good, one could switch directly to step 3 and use the «great» new philosophical principle to deduce some radically new yet quantitative physics knowledge – to make the real revolution in physics. It’s not happening, which is a great piece of evidence that the initial principles are no good regardless of the amount of hype they are receiving.

The actual principles underlying the future revolutions in physics are therefore almost certainly different from what people can just easily imagine or guess, and they won’t be guessed «directly», just like they were not «guessed directly» when the quantum theory or string theory were being discovered. The work on the amplituhedron may look technical but it’s very plausible that it will lead exactly to some of these totally new principles. To repeat some «lore» and «superstitions» that almost everyone has heard is not helpful for progress in physics.

The principles that will emerge in (or stimulate) the future physics breakthroughs will either be so «conventional» that people could welcome them with «we have always known that» (the relativity principle is a historical example); or they will be so new that they will force us to revise our whole language and they will render the current philosophical questions meaningless (just like all questions secretly assuming classical physics were rendered meaningless once physics switched to the quantum mechanical framework).

Let me make these points a bit clearer by discussing the 10 «deep questions of Carroll» in some detail.

In what sense, if any, is the universe fine-tuned?

Phenomenologists have some well-defined formulae for the «degree of fine-tuning» in a quantum field theory. As long as we talk within a well-defined framework, this question has been answered. If the question is supposed to suggest that there is some «better measure» of fine-tuning, it is wishful thinking that moreover brings nothing new. Of course, everyone who uses some «measures of fine-tuning» knows that there could be other expressions for this benchmark and they could perhaps be better ones.
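A standard example of such a well-defined formula is the Barbieri-Giudice-style logarithmic sensitivity \(\Delta_p = |\partial \ln O / \partial \ln p|\) of an observable \(O\) to an input parameter \(p\). A minimal numerical sketch (the toy observable below is an illustration, not a real model):

```python
# A minimal sketch of a Barbieri-Giudice-style fine-tuning measure,
# Delta_p = |d ln O / d ln p|: the logarithmic sensitivity of an
# observable O to an input parameter p.

def fine_tuning(observable, p, eps=1e-6):
    """Numerically estimate |d ln O / d ln p| at parameter value p."""
    o0 = observable(p)
    o1 = observable(p * (1 + eps))
    return abs((o1 - o0) / o0) / eps

# Toy example: O(p) = p - 0.9999, evaluated near p = 1.  The observable
# is a tiny difference of two O(1) numbers, so the logarithmic
# sensitivity is huge -- the hallmark of fine-tuning.
delta = fine_tuning(lambda p: p - 0.9999, 1.0)
print(round(delta))  # ~ 10^4
```

In realistic applications \(O\) would be something like \(m_Z^2\) and \(p\) a Lagrangian parameter, but the logic is the same: fine-tuning means a tiny output obtained by a delicate cancellation of large inputs.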

But in the end, all these quantities describing the amount of fine-tuning are guaranteed to have a temporary life anyway. The degree of fine-tuning is calculated within a particular class of approximate theories. If one knows the exact theory and can write it down, the most invariant definition of fine-tuning drops to zero. There will still be events and properties in the Universe that occurred due to coincidences – we already know that such things exist. And the quantities that the deepest theory treats as calculable will be exactly calculable, and therefore add no fine-tuning at all. There is some reorganization of knowledge that every new and deeper physical theory, including the final theory, brings with it, and this reorganization of knowledge cannot be summarized by a single benchmark.

Even though talks about fine-tuning are presented as very deep in religious and philosophical discussions, they are just technical tools in fundamental physics, tools that don’t seem too mysterious. Most typically, fine-tuning and naturalness have been discussed in relation to the «surprisingly low» mass of the Higgs boson (relative to the Planck scale etc.). People may guess – and collect empirical data – about «what kind of a cause» is responsible for the unbearable lightness of the Higgs’ being. The answers have often been polarized into «SUSY-or-technicolor-like» technical explanations vs the anthropic principle. Maybe this «polarization» holds a key to something. But in the final theory, the Higgs mass may be a rather composite, non-fundamental question that may have a rather messy or combined explanation, or an explanation looking nothing like the proposed «templates» of explanations.

So it’s very likely counterproductive to persuade yourself that there can only be two «major answers».

There are some questions about coincidences. These days, the density of visible matter, dark matter, and the cosmological constant are comparable to each other. Is that a coincidence? Here, the answer is actually completely known. Unless the cosmological models we use are «totally wrong», we know that these quantities weren’t comparable in the distant past and they won’t be comparable in the distant future. So a claim that these quantities «have to be» comparable because of a universal law of physics may be easily falsified.
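The falsification is a one-liner: in the standard FRW cosmology, the matter density dilutes as \(a^{-3}\) with the scale factor \(a\) while the vacuum energy density stays constant, so their ratio runs over many orders of magnitude. A minimal sketch (today’s fractions 0.3 and 0.7 are rounded illustrative values):

```python
# Why the matter and cosmological-constant energy densities are only
# comparable "now": matter dilutes as a^{-3} while the vacuum energy
# density is constant.  Present-day fractions are rounded values.

Omega_m0, Omega_L0 = 0.3, 0.7   # today's density fractions (a = 1)

def ratio(a):
    """Matter-to-Lambda energy density ratio at scale factor a."""
    return (Omega_m0 * a**-3) / Omega_L0

for a in (0.01, 1.0, 100.0):
    print(f"a = {a:>6}: rho_m / rho_L = {ratio(a):.3g}")
```

The ratio is of order one only in a narrow logarithmic window around the present, which is exactly why a «universal law» forcing the quantities to be comparable is excluded.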

If there is an explanation why they’re comparable during our lives, it’s an explanation involving the dynamics of life and things that life needed to develop. So these explanations clearly have nothing to do with fundamental physics per se – they’re composite exercises in biology and similar disciplines. Many claims that «something is of the same order» may be shown to be equivalent and some of these equivalences may be interesting, striking, or surprising. But all of those will depend on some «non-fundamental physics» or other branches of science.

Moreover, I would say that even at the level of «several numbers», seeing that the three largest contributions to the energy density in some segregation scheme – visible matter, dark matter, dark energy – are comparable shouldn’t be seen as surprising. From one perspective, you may view it as a great example of naturalness where it works. The ratios of the three largest contributions to the energy density are of order one. That’s great because it’s natural for such ratios to be of order one! Of course, one must acknowledge that this argument is a bit demagogic because the percentages of baryonic matter etc. at a particular moment long after the Big Bang aren’t terribly fundamental – they parametrically depend on the age of the Universe, which is taken to be a rather arbitrary number («now») – while «true naturalness» says that the «truly elementary parameters» are of order one.

When one combines these two criticisms, I think it becomes pretty obvious that the observations about «cosmic coincidences» cannot hold any key to directly unlock some great insight about fundamental physics. They are partly inevitable, partly natural, and they may be fundamentally violated and the fact that some of these things approximately hold «now» (during the existence of this life on Earth) is a messy question depending on lots of messy things in biology and elsewhere.

How is the arrow of time related to the special state of the early universe?

It’s not related at all. I have explained this elementary point of undergraduate physics about 500 times but Carroll is just way too hopeless a moron. The only relation is that the early Universe is an example of an «initial state of a physical system». But the right physics arguments and causes explaining the arrow of time – the second law of thermodynamics and related insights – work equally well for any macroscopic physical system – a larger or smaller one – in any period of time. The «whole Universe» doesn’t play any privileged role among them whatsoever and there is no relationship between the laws of thermodynamics and model building in early cosmology.

Deeper laws of physics may tell us what the «right» initial state of the Universe actually was (the Hartle-Hawking state or something much fancier playing a similar role). But just the general fact that it was a low-entropy state is made necessary by the laws of physics as we know them.

What is the proper role of the anthropic principle?

The anthropic principle isn’t really a principle; it is a «lack of principles». There is no coherent, well-defined formulation of «the anthropic principle» that would be viable as a principle to learn something about physics. Instead, the anthropic ideology leads people not to search for new principles in physics (and to abandon some well-established ones, too).

More well-defined «incarnations» of the «anthropic principle» span a wide spectrum of claims and ideas: from vacuous and useless tautologies, to «rules of thumb» telling you that you may want to «bias your thinking» in a particular direction, to completely unjustified speculations that some portions of physics will see no progress (no new old-fashioned laws allowing us to calculate more things more accurately). These are totally different claims and have to be carefully distinguished, while the anthropic ideology seems to be all about the obfuscation of the differences between these claims (much like the «climate change» ideology and others). If a theory may be used to derive that life never arises (or that anything behaves differently than we observe – any contradiction), then the theory is falsified. That’s nothing new and nothing else than falsification, the basic procedure of the whole scientific method. In practice, physicists won’t use the «existence of life» as a constraint on theories in this form. They will decompose it into the existence of the right fields (Standard Model plus metric tensor plus inflaton plus things responsible for baryogenesis etc.), the right values of the parameters, and viable initial conditions. Good enough combinations of those are known to predict life, so one doesn’t have to «check for life» separately. One just checks the usual physical properties!

However, if two theories admit life or the existence of a star system with life etc., then they are equally viable – they pass the empirical test – and favoring one because it claims to produce «more life» or something like that is just a clear fallacy. This claim of mine is established science – or follows from the basic laws of Bayesian inference etc. – and suggesting that these basic properties of science will be radically modified can’t be helpful. You can’t make progress in physics if you declare that the laws of mathematics or logic will cease to hold.

So the proper role of the anthropic principle in physics is really no role at all. There may be a multiverse, and the evaluation of theories assuming a multiverse must take all special properties of the multiverse into account (and indeed, it’s possible to exclude ranges of the cosmological constant because they don’t allow the birth of stars). But there is no «new principle» that would allow us to deduce something about physics in a new way. And there is no way to argue that Weinberg’s success in claiming that the cosmological constant was «almost the maximum allowed by the constraint from the stars» was anything else than a coincidence. There could have been other constraints on the value; and neither Weinberg nor anyone else has actually presented any evidence that «no other constraints could have existed».

Once again, this is really the core of what is wrong about the anthropic ideology. The existence of life predicted by a theory is a «necessary condition» for the theory to be viable. But the advocates of the anthropic ideology often distort this true and innocent fact (a tautology of a sort) by suggesting that «the existence of life is the only condition» or «a sufficient condition» that theories (or their parameters) have to obey – which is simply not true in most cases (or they surely have no evidence that it is true).

What part should unobservable realms play in cosmological models?

Unobservable realms and any unobservable concepts may play any role in a proposed scientific theory they want. There may be many such components in a scientific theory, or fewer components. In the end, scientific theories are only judged by their agreement with the observable facts about Nature and by the non-contrived character of their basic axioms. A theory with many unobservable components may still be the most natural viable theory – typically if these unobservable components are needed for the internal coherence of the theory or its agreement with the empirical data. Some unobserved or even unobservable features or realms may always follow from a theory – and they neither hurt nor help the theory in the evaluation of its validity. A bias in either direction would be a fallacy (some people prefer theories predicting [almost] nothing they can’t directly see because they’re scared of such things; others may prefer theories predicting pieces of the universe as huge as possible, to increase the «room for life», but both of these groups are just acting irrationally or dishonestly).

Again, Carroll is asking a question that every competent scientist is able to answer.

What is the quantum state of the universe, and how does it evolve?

All these questions are pretty much meaningless. The Universe is subject to quantum mechanics so in principle, one may talk about its wave function much like the wave function of any other physical system. But there is nothing special about the Universe when it comes to the usability of the «wave function». In particular, all wrong «interpretations» of quantum mechanics remain wrong if we look at the quantum evolution of the Universe. The whole point of combining all these quantum questions with cosmology is irrational.

Cosmology may have some special links to the foundations of quantum mechanics but the questions describing these open puzzles look nothing like Carroll’s questions.

Are space and time emergent or fundamental?

Those questions may be given «answers» only within sufficiently well-defined, mathematically formulated frameworks, and Carroll hasn’t provided one. Theories must ultimately be able to explain all the observations that have so far been parameterized as occurring in space and time. In quantum gravity, the location of an event in space and time is almost certainly acquiring a non-fundamental status – also because the spacetime geometry itself is dynamical. But theories making naively wrong assumptions about the spacetime – e.g. that it is a spin foam or spin network – are still excluded because they contradict the empirical facts (e.g. the Lorentz symmetry) and one can’t resuscitate these stinky dead bodies of flawed physics by references to some would-be deep philosophical theses.

The disappearance of the «fundamental status» of the spacetime is already manifest in lots of descriptions of dynamics that have been found within string theory. It’s plausible that this is where the evolution towards the «disappearance of the spacetime» stops; it’s equally plausible that a similar process will continue. The right answer, if any, will be obvious from the actual research, which must proceed differently than by «picking an answer to a philosophical question, and then readjusting everything to the desired answer».

What is the role of infinity in cosmology?

The laws of quantum gravity surely allow the Universe to be infinite – e.g. the infinite 11-dimensional spacetime in M-theory. On the other hand, any defensible «complete theory including cosmology» must agree with the finite past of the Universe, and the corresponding finiteness of the visible Universe at each moment. Moreover, if the cosmological constant is positive, and it seems to be, the physical degrees of freedom behind the horizon fail to be completely independent from the visible ones. Here, a question about the right meaning of the «cosmic horizon complementarity» would be sensible.

Theories assuming that the spacetime admits no continuous description at very short distances are ruled out because they clash with the experimentally verifiable Poincaré symmetry.

Again and again, Carroll’s repeated questions about the links between the arrow of time and cosmology only highlight his complete incompetence.

Concerning «Are there preferred ways to compare infinitely big subsets of an infinite space of states?», this question has nothing to do with physics because no known law of physics or algorithm in physics needs to perform such operations, and there are good reasons to think that all good laws of physics in the future will avoid such problematic questions, too. Even if they need «something like that», they will also come equipped with rules specifying «the right way to do so». Loop corrections in quantum field theory have ultraviolet divergences (infinities) but the theory also tells us about the «right procedures to subtract them». The principles choosing the «right methods of renormalization» are ultimately determined by other principles of physics (unitarity, gauge symmetry, agreement with experiments), so it’s completely wrong for Carroll to suggest that «we first choose a way to deal with infinities» and then do physics with it. We must be open-minded about all such things until some physical principles, deduced in some way, tell us a specific answer!

If someone has a proposed law of physics that he may call «the anthropic principle» and if it depends on a self-evidently ill-defined procedure to deal with infinite sets in mathematics, it is obviously and demonstrably either an inconsistent law of physics, or at least an incomplete one. If it is inconsistent, it should be abandoned immediately. If it is incomplete, then the new rules that specify «what should actually be done with the infinite subsets» etc. contain the bulk of the mystery, so one has only replaced one body of ignorance by another, equally large one. Just to be sure, you will never find a uniform measure on infinite sets, among other things, which is enough to see that the majority of the «research directions» in the «anthropic principle» research suffer from incurable diseases. (Another class of incurable diseases is these would-be theories’ acausality – they often want the events/decisions of the early Universe to depend on counting of objects in the future, which means that they create loops and «closed time-like curves», a contradiction.) But some of those working on those things just prefer not to see these elementary flaws. They continue to spit out meaningless and manifestly wrong papers defended by the hype that «the fundamental philosophical point is so deep that it justifies the production of arbitrary atrocious stuff». Science doesn’t allow principles that are this deep.
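The non-existence of a uniform measure is easy to illustrate: even the «fraction of naturals that are even» is not a property of the set but of the enumeration you choose. A minimal sketch, counting evens under two different orderings of the same natural numbers:

```python
# Why "counting" infinite subsets is ill-defined: the limiting
# frequency of the even numbers among the naturals depends entirely
# on the order in which you enumerate them.  There is no uniform
# probability measure on a countably infinite set.

def density(seq, predicate):
    """Fraction of the elements of seq satisfying predicate."""
    hits = sum(1 for x in seq if predicate(x))
    return hits / len(seq)

N = 30000
usual = list(range(N))            # 0, 1, 2, 3, ... (evens and odds alternate)
biased = []                       # 0, 2, 1, 4, 6, 3, ... (two evens per odd)
e, o = 0, 1
while len(biased) < N:
    biased += [e, e + 2, o]       # still enumerates every natural exactly once
    e += 4
    o += 2

is_even = lambda x: x % 2 == 0
print(density(usual, is_even))    # -> 0.5
print(density(biased, is_even))   # -> ~0.667
```

Both lists enumerate the same set, yet the «measured fraction» of evens is 1/2 in one ordering and 2/3 in the other – which is exactly the kind of ambiguity that anthropic «measures» on infinite multiverses inherit.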

Can the universe have a beginning, or can it be eternal?

The bulk of this question is really completely equivalent to the previous one and the answer is the same. Eternal spacetimes such as the 11-dimensional spacetime of M-theory are surely solutions to some equations of the underlying theory in the general sense. On the other hand, the relevant application of the theory including «insights about cosmology» that should be applied to our Universe has to be past-finite. The number of degrees of freedom in our asymptotically de Sitter space is finite, \(S\sim 10^{120}\), which is the upper bound on the entropy. The entropy never decreases, which means that we can only extrapolate backwards in time by a finite amount before we get to \(S=0\), and that’s the beginning.
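The quoted entropy bound is just the Bekenstein-Hawking entropy of the de Sitter horizon, \(S = A/4l_P^2\). A rough order-of-magnitude sketch (the Hubble radius is used as an approximate horizon size; exact prefactors depend on conventions):

```python
import math

# Order-of-magnitude estimate of the de Sitter horizon entropy
# S = A / (4 l_P^2) in units of k_B.  The numbers are rounded
# textbook values; exact prefactors depend on conventions.

l_P = 1.6e-35          # Planck length, metres
R_horizon = 1.4e26     # ~ c / H_0, metres (approximate de Sitter radius)

A = 4 * math.pi * R_horizon**2   # horizon area
S = A / (4 * l_P**2)             # Bekenstein-Hawking entropy

# Lands within a couple of orders of magnitude of the 10^120 quoted
# in the text (the common estimate is ~10^122 in these conventions).
print(f"S ~ 10^{math.log10(S):.0f}")
```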

So the answer is, once again, that the fundamental theory is not «dogmatic» about the answer to this question. It doesn’t really force the answer to be «only Yes» or «only No». Both finite and infinite Universes are allowed – and finite Universes (finite in space, when it comes to the visible part, and past-finite in time) are needed to explain the Universe where we live. In the future, our Universe is going to be long-lived as an empty de Sitter space but super-super-long timescales longer than the Poincaré recurrence time are surely «problematic» or «unphysical» in some way. We could ask questions «in what way» but clearly, some other advances will have to take place before it becomes meaningful to address these questions.

How do physical laws and causality apply to the universe as a whole?

Very nicely, thank you. The truly fundamental laws can’t evolve in time – if they could, it would mean that one also has to find the laws constraining the evolution of the first laws, and those would be new, deeper laws, and this sequence ultimately has to terminate if we claim the Universe to be understood by a complete theory. So the former laws are just some approximate laws that only hold within some environments while the latter laws are deeper and possibly complete and universal. There is really nothing shocking about these things. The (effective) laws of hydrodynamics involving objects in water may depend on time if the water is heating up, but ultimately even the heating up of the water may be described by more detailed laws.

The sub-questions here are more or less the same.

How do complex structures and order come into existence and evolve?

The ability to lower its own entropy is a necessary condition for an object to display signs of life. On Earth, we are receiving high-energy, «concentrated energy» photons from the Sun, and organisms (and other systems) on the Earth’s surface convert this energy to «less concentrated energy», i.e. lower-energy photons that are radiated away (and which inevitably carry a higher entropy because the number of low-energy photons is higher). This asymmetry of the «input» and «output» is needed because the second law of thermodynamics demands it. There are additional insights of this kind one may quote but in the end, one may say that they’re interesting features of «biophysics», not «fundamental physics».
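A back-of-the-envelope version of this photon budget: a thermal photon carries energy \(\sim k_B T\), so re-radiating the solar input at Earth’s temperature multiplies the photon number – and roughly the entropy – by \(T_{\rm sun}/T_{\rm earth}\):

```python
# Entropy budget of sunlight, back-of-the-envelope.  A thermal photon
# carries energy ~ k_B T, so re-emitting the same energy at Earth's
# temperature instead of the Sun's multiplies the photon number (and
# roughly the entropy) by T_sun / T_earth.  Rounded textbook values.

T_sun = 5800.0    # K, solar surface temperature
T_earth = 290.0   # K, Earth's re-emission temperature

photons_out_per_photon_in = T_sun / T_earth
print(f"~{photons_out_per_photon_in:.0f} low-energy photons per solar photon")
```

So the Earth exports roughly twenty times as many photons as it receives, which is how living systems can lower their own entropy while the total entropy still grows.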

Is the appearance of life «inevitable»? A general theory following the same local «field theory» laws doesn’t necessarily produce life; there is arguably no life possible in the 11-dimensional spacetime (this may be debatable but one may find vacua where it could be nearly proven). On the other hand, a theory equipped with all the extra knowledge to be applicable to our Universe has to predict life, otherwise it is excluded because we do observe life. Again, Carroll’s question is really the same as many previous questions and the answer is known and mixed. The theory allows both answers in general, but we also know that the «completed edition of the theory» with all the data relevant to our Universe has to produce the answer that we know empirically. The very assumption that there must be something deeper or more unambiguous about these Yes/No questions about finiteness etc. is almost certainly an unjustifiable assumption, and probably a fallacy.

Summary

None of Carroll’s questions may really lead to a new breakthrough in physics if you try to address them directly. Most of the questions have some «mixed answers» but once you make them more accurate, they have well-known answers! So most of the «depth» is actually just an illusion caused by Carroll’s sloppy and ambiguous formulation of these questions.

And the several questions that may be viewed as exceptions, those that are really deep, are hard to answer; and if they are ever answered in this form, physics will first have to find some seemingly «completely unrelated insights» – to make progress in a different form – before the tools capable of answering these questions emerge.

For those reasons and many others, it is extremely counterproductive and fundamentally wrong to try to «ignite» the progress in physics by trying to convince others that «not really well-defined questions», «not really open questions», and «not really deep questions» are well-defined, open, and deep. They almost never are and even the questions that will be answered after the «next revolution in physics» will really be substantially different from those that are being constantly asked today.

If the questions that are being repeated all the time were holding the key to the next revolution in physics, the revolution would have already taken place!

And that’s the memo.
