# Locality, nonlocality, and anti-quantum zealots

Luboš Motl, June 21, 2015

Quantum field theories and string theory, the two most viable types of quantum mechanical theories, respect the Lorentz invariance, the basic symmetry that defines Einstein’s special theory of relativity. This symmetry guarantees that no information can be sent superluminally or instantaneously: there can’t be any action at a distance. The relativistic locality ends up being equivalent to the relativistic causality: the cause must precede its effects, \(t \lt t'\), in all inertial systems.

In quantum field theory (defined through the canonical quantization), we may identify the canonical coordinates \(\phi(x,y,z)\) and the canonical momenta \(\partial_0 \phi(x,y,z)\) of the Hamiltonian formalism. The usual procedure guarantees that the \(t=0\) equal-time commutators satisfy

\[ [\phi(x,y,z),\partial_0 \phi(x',y',z')] = 0 \] for \((x,y,z)\neq (x',y',z')\). The Lorentz invariance of the Heisenberg equations of motion subsequently guarantees that the commutator is zero at later times, too. The (super)commutator of two fields always vanishes at all spacelike separations.

This vanishing commutator has a straightforward and far-reaching consequence. If we make a decision around the point \((x,y,z)\), the decision may be interpreted as a part of the measurement of some observable \(F(x,y,z)\). Around the point \((x’,y’,z’)\), we may measure operators such as \(G(x’,y’,z’)\). Because \(F\) and \(G\) (super)commute with one another, it follows that the decision associated with \(F\) cannot influence the result of the measurements of \(G\). Whether we make the \(F\)-decision or not, the theory will make the same predictions for all measurements of the type \(G\). And because the results of all measurements that are possible in principle encode *everything* that is meaningful in physics, we see that there is no action at a distance. The Lorentz invariance guarantees that no \(F\)-decision can ever influence a \(G\)-measurement at a spacelike separated point.

There is no nonlocality. There is no action at a distance. There is no doubt about this statement.
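If you want to see this no-signaling property in the simplest possible setting, here is a small NumPy sketch (my own illustration, with hypothetical helper names, not any standard piece of code): two qubits in the singlet state, and whatever non-selective measurement Alice performs on her qubit, Bob’s reduced density matrix – and therefore every probability he can ever measure – stays exactly the same.

```python
import numpy as np

# Two-qubit singlet state (|ud> - |du>)/sqrt(2); ordering is (Alice, Bob).
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)
rho = np.outer(psi, psi)

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def bob_reduced(rho):
    """Partial trace over Alice's qubit: everything Bob can ever measure."""
    r = np.asarray(rho, dtype=complex).reshape(2, 2, 2, 2)  # indices (a, b, a', b')
    return np.einsum('abad->bd', r)                         # sum over a = a'

def measure_alice(rho, n):
    """Non-selective measurement of n.sigma on Alice's side only."""
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    out = np.zeros_like(rho, dtype=complex)
    for sign in (+1, -1):
        P = np.kron((I2 + sign * ns) / 2, I2)  # projector on Alice, identity on Bob
        out += P @ rho @ P
    return out

n = np.array([0.3, -0.5, 0.81]); n /= np.linalg.norm(n)
before = bob_reduced(rho)
after = bob_reduced(measure_alice(rho, n))
assert np.allclose(before, after)  # Alice's decision cannot change Bob's statistics
```

Bob’s reduced state is \(\mathbb{1}/2\) both before and after Alice’s measurement, for any axis \(\vec n\) she chooses: exactly the vanishing influence that the commutator argument guarantees.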

The term «nonlocality» had to be defined carefully and operationally. We generalized an empirical definition that could have existed in classical physics, too. Can a decision at one point influence the predictions for measurements in a spacelike-separated region? The answer is No. It unambiguously follows from the relativistic quantum mechanical theories. If you wanted to defend a different conclusion, you would have to start to describe billions of physical phenomena from scratch – using a completely different theory. You would almost certainly fail because your theory would have to be profoundly and strongly inequivalent to the relativistic quantum mechanical theories – but it would have to «look» equivalent in all the tests that have already been done because in those tests, quantum field theory etc. succeeded.

One could have tried to define «nonlocality» differently. But no different definition – no definition disconnected from «the influence of some decisions on some measurements» – would really make any sense from a physicist’s viewpoint. All sane physicists – like all competent particle physicists and physicists from sufficiently close disciplines that actually do some research with «beef» – agree that quantum field theories are local. They are sometimes called *local field theories* for that very reason.

However, anti-quantum zealots love to say that quantum mechanics «is» or «has to be» nonlocal for it to make the right predictions for the entanglement experiments. In particular, we may prepare two electrons in the maximally entangled singlet state – something that both anti-quantum zealots and some quantum information practitioners call the «Bell state»:

\[ \ket{j=0} = \frac{ \ket{\uparrow\downarrow} - \ket{\downarrow\uparrow} }{\sqrt{2}} \] In the ket vectors, the first arrow represents the state of the «first electron» (the spin is either «up» or «down»); similarly, the second arrow depicts the second electron.

If you measure the component \(j_z\) of the spin of the «first electron» and you get «up», then you know that the other electron has spin «down», and vice versa. The quantum state above makes the same prediction for \(j_x\), \(j_y\), and any \(\vec j\cdot \vec n\) for a unit 3-vector \(\vec n\). The results of the two measurements, if we perform the same measurement (linked to the same axis) on both electrons, will always be perfectly anticorrelated. It’s inevitably so because the \(\ket{j=0}\) state is an eigenstate of the operator

\[ \vec j = \vec j_1 + \vec j_2 \] with the eigenvalue zero. It means that if you measure \(\vec j\cdot \vec n\) for any axis \(\vec n\), you have to get eigenvalues that add to zero.
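Readers who prefer to check such algebra numerically can do it in a few lines of NumPy (a minimal sketch of the claims above, nothing more): the singlet is annihilated by every component of \(\vec j = \vec j_1 + \vec j_2\), and the expectation value \(\langle (\vec\sigma\cdot\vec n)\otimes(\vec\sigma\cdot\vec n)\rangle\) equals \(-1\) for a randomly chosen axis \(\vec n\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# The singlet |j=0> = (|ud> - |du>)/sqrt(2)
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

# Every component of j = j_1 + j_2 (in units of hbar) annihilates the singlet
for s in (sx, sy, sz):
    J = np.kron(s, I2) / 2 + np.kron(I2, s) / 2
    assert np.allclose(J @ psi, 0)

# Perfect anticorrelation along a randomly chosen axis n
rng = np.random.default_rng(42)
n = rng.normal(size=3); n /= np.linalg.norm(n)
ns = n[0] * sx + n[1] * sy + n[2] * sz
corr = np.vdot(psi, np.kron(ns, ns) @ psi)
assert np.isclose(corr.real, -1.0)  # <(sigma.n) x (sigma.n)> = -1 for any n
```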

This fact about Nature is absolutely unequivocal and verified in all the quantum experiments that began a very, very long time ago. The prediction of quantum mechanics is unquestionable, too. But even now, in the 21st century, some people still love to emit tons of fog about these unquestionable basic facts.

All this confusion began with the flawed 1935 paper by Einstein, Podolsky, and Rosen. Einstein and the two postdocs were thinking in the classical way and they found it *unbelievable* that the correlations could exist for all the components \(\vec j\cdot \vec n\) simultaneously.

They thought that if the two electrons are guaranteed to have anticorrelated values of \(j_z\), they *objectively* have to exist either in the state \(\ket{\uparrow\downarrow}\) or the state \(\ket{\downarrow\uparrow}\) before the measurement. But because both \(\ket{\uparrow}\) and \(\ket{\downarrow}\) predict a 50% probability for \(j_x=+1/2\) and a 50% probability for \(j_x=-1/2\), and this «split» applies to each electron, EPR and their followers found it «necessary» for the probabilities of the four sign combinations of \((j_{1x},j_{2x})\) – «positive,positive», «positive,negative», «negative,positive», and «negative,negative» – to be 25%, 25%, 25%, 25%, respectively.

However, that’s simply not what quantum mechanics predicts. Everyone who understands quantum mechanics agrees that the perfect anticorrelation will exist if we measure \(j_{1x}\) and \(j_{2x}\), too. The wrong assumption in the EPR derivation is *classical physics*. They assume that the two spins already have some independent well-defined states before they are measured. But they don’t. Before they are measured, the two spins are *entangled* – which is nothing else than the most accurate and most general quantum elaboration on the adjective *correlated*.
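One can verify the actual quantum prediction in a few lines of NumPy (my own illustrative sketch, not anything from the EPR paper): the four outcome pairs for \((j_{1x},j_{2x})\) get probabilities 0, 1/2, 1/2, 0 – the perfect anticorrelation – rather than the 25% each demanded by the EPR reasoning.

```python
import numpy as np

# Singlet state, written in the z basis
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

# Eigenvectors of sigma_x: |+> and |->
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

# Joint probabilities for the four outcome pairs of (j_1x, j_2x)
probs = {}
for a, va in (('+', plus), ('-', minus)):
    for b, vb in (('+', plus), ('-', minus)):
        amplitude = np.vdot(np.kron(va, vb), psi)
        probs[a + b] = abs(amplitude) ** 2

# QM predicts 0, 1/2, 1/2, 0 -- not the 25% each that the EPR reasoning demanded
assert np.isclose(probs['++'], 0.0) and np.isclose(probs['--'], 0.0)
assert np.isclose(probs['+-'], 0.5) and np.isclose(probs['-+'], 0.5)
```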

The correlation between the results of measurements *is a correlation*. The previous sentence is a tautology. There are still some people who try to pretend that the correlation is something other than a correlation even though they use the word «correlation» themselves. We say that the measurements of the two electrons are correlated because the probability distribution \(p(j_{1x},j_{2x})\) for all four possible arrangements of the values of \(j_{1x}\) and \(j_{2x}\) does *not* factorize:

\[ \nexists p_1(j_{1x}), p_2(j_{2x}): \,\, p(j_{1x},j_{2x}) = p_1(j_{1x}) p_2(j_{2x}) \] The full probability distribution for the two objects (electrons’ spins) simply cannot be written as a simple product of two distributions for one object (for the objects separately).
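The non-factorization is easy to exhibit explicitly (again, just a small illustrative NumPy check of the formula above): a factorized distribution would have to equal the outer product of its own marginals, and the singlet’s joint distribution doesn’t.

```python
import numpy as np

# QM joint distribution for (j_1x, j_2x) in the singlet state:
# rows are j_1x = +,-; columns are j_2x = +,-
p = np.array([[0.0, 0.5],
              [0.5, 0.0]])

p1 = p.sum(axis=1)  # marginal for the first spin:  [0.5, 0.5]
p2 = p.sum(axis=0)  # marginal for the second spin: [0.5, 0.5]

# A factorized distribution would equal the outer product of its own marginals,
# which here is 1/4 in every cell -- but the actual joint distribution is not:
assert not np.allclose(p, np.outer(p1, p2))
```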

Assuming the initial singlet state, quantum mechanics predicts these correlations for all the spin measurements you can think of. There is nothing «paradoxical» about it. There isn’t any classical theory (or a «classical model») that makes the same predictions. This fact isn’t a problem with quantum mechanics or a mystery about quantum mechanics; instead, this fact is a proof that all classical theories are ruled out as theories of Nature. They are wrong. People who keep on defending them may be easily proven to be complete idiots. That’s obviously the only right interpretation of the result.

What is the *reason* for these correlations? According to the right theory – quantum mechanics (e.g. quantum field theory, where this EPR experiment may be easily embedded) – the reason for the correlation(s) is *not* an action at a distance. At the beginning, I reminded you of the proofs that there is *no* action at a distance in quantum field theory! **Instead, the reason for all these correlations – the reason for the entanglement – is that the two subsystems were in contact in the past.**

The two electrons with correlated spins were really prepared at the same place – by some process that guaranteed that \(\vec j=0\) for the final state (possibly thanks to the angular momentum conservation law). It means that the reason is the same as for any other correlation, e.g. the correlation between Bertlmann’s socks. Bertlmann is a crazy professor in Vienna who always picks socks of two different colors for his left foot and right foot, respectively. This anticorrelation is guaranteed by *design* – when he was putting the socks on, he was careful not to wear two identical socks.

(Instead of Bertlmann, one could talk about a mentally healthy person who wears two matching socks. The anticorrelation would be replaced by a correlation but the main argument would be unchanged.)

When you measure the colors of his socks, there is nothing mysterious about the anticorrelation. It was guaranteed by design because the same brain decided about the two socks in the morning. There was no action at a distance exerted by one sock on the other sock.

The same comment applies to the anticorrelation of the spins in the singlet state. They’re anticorrelated because they were prepared together. In the ER-EPR correspondence, this anticorrelation (or any entanglement) may be interpreted as a non-traversable wormhole. But such wormholes have to be created locally i.e. have a common origin, too. You create the two «throats» of the Einstein-Rosen bridge and then you may increase the distance between them. But there’s no way to «suddenly» create a bridge between two spacelike-separated points!

When I say that the entanglement always and exclusively exists when the two subsystems have a common origin, it’s important to realize «where this claim comes from» and «why we know it’s right». It’s right because

- it follows from the locality of the quantum field theory etc. discussed at the beginning: a sudden creation of correlated bits at spacelike-separated points could be used to send information instantaneously, and that would violate locality (which can be mathematically proven to be impossible)
- this fact is also compatible with *all the experiments* that have ever been done.

If you want to disagree with my claim that «the entanglement is always due to the common origin of the subsystems – their interaction sometime in the past», you want to make a truly extraordinary claim. You need at least some evidence. Your evidence will either be theoretical – you will construct a completely different theory from quantum field theory that allows you to do completely new things, but still manages to agree with quantum field theory in all the experiments that have been done, because quantum field theory has always succeeded. Be sure that you will fail.

Or you need some empirical evidence. Now, we can’t ever *completely* empirically prove that something (in this case, «entanglement without some contact of the subsystems in the past») is impossible. If you ignore any theories in science, it’s always «possible» that someone in the future will construct a perpetuum mobile, a warp drive, an EM drive, or anything else which science considers impossible.

But for you to actually have this evidence of the empirical type, you must actually *do the damn thing*. Dreams and promises just don’t mean anything in science. So if you want non-theoretical evidence for the claim that «entanglement may exist even if the two subsystems have never interacted with each other», you will have to design a procedure that may be verified to produce entangled states even though the entanglement isn’t guaranteed by any interaction of the subsystems in the past!

Needless to say, you are guaranteed to fail. The less attention you pay to physical theories, the less able to see this fact you may be. But once the attention you pay to the scientific knowledge drops beneath a certain fuzzy threshold, you will become a generic unhinged loon rather than a participant in a meaningful exchange. A sensible participant of a scientific debate simply cannot ignore *all of science*.

This blog post was partly inspired by a rant by a Romanian crank. I just can’t stand these pompous fools who spread complete idiocy everywhere and feel very smart for doing so. So let me respond to particular sentences.

Is Nature is Local or Nonlocal?

Nature is local, at least whenever a quantum field theory is a sufficiently good description. String theory is local in some respects, subtly nonlocal in others. But no nonlocality is ever needed to explain the results of EPR-like experiments. The experiments testing entanglement have nothing to do with nonlocality.

In quantum mechanics there are two strong points of view.

No, there is only *one* correct framework of modern physics that we call «quantum mechanics». It has its new rules that differ from classical physics. But it gives clear answers to all physically meaningful questions. The question whether the laws governing the behavior of a physical system are *local* or *nonlocal* is among the perfectly well-defined ones. And the answer is that the laws are local.

On one hand the philosophers of physics insist that Bell showed us that nature is nonlocal: «What Bell Did», and on the other hand qubists and practitioners of high energy physics stress that nature is purely local and there is no «tickle at a distance».

Quantum mechanics is a theory in physics. Just like any collection of insights in science, it allows lots of people – especially the laymen – to say wrong things. Quantum mechanics is an advanced enough part of science which means that people who have nothing else than «common sense» just can’t be expected to be experts. In particular, all people who are «just» philosophers of science are simply laymen. So it shouldn’t be surprising that their claims about quantum mechanics are mostly wrong. If they defend these wrong claims «strongly», it means that they are not just fools. They are arrogant deluded stubborn imbeciles. But the existence of self-confident imbeciles has no scientific implications.

Now last time I called this debate sterile because both sides are right as they talk about different things.

No, they are talking about the same thing but some people give right answers and some people give wrong answers to the questions.

Also I have yet to meet supporters of a camp not agreeing with the mathematical points of the other camp, and so it is all purely a matter of perspective.

This is just the «anything goes» postmodern babbling that tries to present every truth as relative. There is nothing relative about these matters. The comments I wrote above are 100% right and the people who don’t get these points are 100% incompetent.

Hidden behind this seeming disagreement are the epistemic and ontic points of view.

No genuine physicist uses the terms «epistemic» or «ontic» in the discussions about locality or nonlocality of a physical theory. This is terminology used purely by the pompous fools referred to above.

In the first stage, these terms «epistemic» and «ontic» are just parts of a vague philosopher’s jargon. In the second stage, these adjectives are being given a well-defined meaning. But it is spectacularly clear that the word «ontic» means that the user of this word believes that Nature is described by classical physics with some laws governing the evolution of \(q_i(t)\) on the phase space. The users of the «epistemic» adjective believe that the world is described by the classical Liouville evolution of the probability distribution \(\rho(q_i,t)\) on some phase space.

Who is right, «ontic» or «epistemic»? None of them is right because both «camps» assume that Nature is classical. But Nature is not classical. Nature is *quantum*. The quantum mechanical description using a pure state resembles the «epistemic» picture because only probabilities may be calculated; but unlike the epistemic picture, this description is «complete» and cannot be made more precise, not even in principle. The quantum description involving a pure state is as complete and «maximally accurate» as the «ontic» description!

But that doesn’t mean that the correct quantum description is «in between» the «ontic» and «epistemic» laws of physics. In fact, the «ontic» and «epistemic» laws should be viewed as physically identical – the same «Universe» could be described by both kinds of laws (one would use the «ontic» laws in the case of complete knowledge and the «epistemic» ones in the case of incomplete knowledge – but the Universe around us would be the same!).

Quantum mechanics is light years away from the «ontic+epistemic» archipelago. It is a qualitatively new kind of theory to describe Nature, a description that rejects the assumption of classical physics that there exist observables that have well-defined objective values even prior to the measurement and that these objective values are enough to predict the results of all measurements. This assumption is simply wrong in Nature, quantum mechanics has taught us. Quantum mechanics allows us to probabilistically predict results of measurements but it doesn’t allow us to say that there is any sharp, well-defined «objective reality» before the measurements.

Let’s try to disentangle the arguments and explain this local-nonlocal divide. Let’s start with the case for nonlocality. This point of view starts with quantum correlations. In the words of Bell: «correlations cry out for explanations». Now only two kinds of explanations for correlations were ever found:

- common causes from the past
- an event causing the other one and neither of them are valid explanations for quantum mechanics correlations.

This is complete rubbish. (1), i.e. «common causes from the past», is the correct explanation for *all correlations* (correlations implied by *any entanglement*) everywhere in physics.

The first kind of explanation falls under local hidden variable approach and this was disproved by Bell, while the second kind is forbidden by the special theory of relativity because spatial separated experiments were performed where there was not enough time for the signal to propagate from Alice to Bob side.

It’s simply not true. Quantum mechanics doesn’t include any «local hidden variables» – but it doesn’t need any of the hidden variables to guarantee the correlations and to make sure that the «common causes from the past» are the reason behind *all* these correlations. It is easy to verify – by a calculation and by an experiment – that these correlations between the spins (for example) will be there if the two electrons are created in the singlet state in their «common factory».

The absence of a third explanation is typically stated as nonlocality.

It may be stated as nonlocality by an idiot but there is no nonlocality.

Mathematically this is expressed as violation of Bell’s locality condition:\[ p(s, t | a, b) = p^1 (s|a) p^2 (t|b) \] which is equivalent with parameter and outcome independence.

Bell’s inequality does *not* hold in Nature. Bell’s inequality is deduced from assumptions – local hidden variables – that are known not to hold in Nature. And Bell’s inequality itself is experimentally violated. So what is this supposed to mean? A wrong inequality based on physically flawed assumptions cannot imply *anything* about physics in the actual Universe.
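For completeness, the violation itself is trivial to compute from the singlet state. Here is a short NumPy sketch (my own illustration) evaluating the standard CHSH combination of correlators: quantum mechanics gives \(|S| = 2\sqrt{2}\), while any local hidden variable theory is bounded by \(|S| \le 2\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

def spin(theta):
    """Spin operator along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(ta, tb):
    """Correlator <psi| spin(ta) x spin(tb) |psi>; equals -cos(ta - tb) for the singlet."""
    return np.vdot(psi, np.kron(spin(ta), spin(tb)) @ psi).real

# The standard CHSH angle choices
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
assert np.isclose(abs(S), 2 * np.sqrt(2))  # |S| = 2*sqrt(2) > 2: Bell's bound is violated
```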

Now no qbist is denying that quantum mechanics violates parameter and outcome independence because this is a solid mathematical and experimental fact.

No physics practitioner with «beef» uses the terms «parameter independence» and «outcome independence». When you investigate what these extra redundant terms mean in the papers by the «interpreters», you will find out that these things are just «new variations of the concept of locality» except that they always vaguely and implicitly include some assumption that the world is either «ontic» or «epistemic» or even a «local hidden variable theory». But the world is neither which is why all these words are physically meaningless, too.

But the local point of view starts with no-signaling, or the inability of Alice to influence the outcomes for Bob (and unsurprisingly no nonlocality supporter is denying this either).

It doesn’t just «start» with it. This is the only conceivable, physically meaningful definition of «locality» one may have.

In the QBist point of view, each measurement is local and quantum mechanics is a tool which updates my personal degree of belief in order to make sense of what I observe. The Alice-Bob correlations can only be determined when the two sides come in contact and for this to happen travel at speeds lower than the speed of light is required.

It’s not a «QBist point of view». It’s the basics of quantum mechanics that were discovered and repeatedly reiterated by Heisenberg and pals – who were rightfully awarded the Nobel prizes for these most important discoveries of 20th century science.

To better understand this debate I encourage you to watch this meeting moderated by Brian Green.

His name is Greene, not Green.

At 1:22:00 Rudiger Schack makes a provocative statement: quantum correlations are like Bertlmann socks.

It’s not a provocative statement. Correlations predicted by quantum mechanics were always examples of *correlations* – and the term «Bertlmann’s socks» is nothing else than a synonym for «correlations» addressed to readers of relatively new popular books who are more likely to have heard and understood the term «Bertlmann’s socks» than the term «correlation». The content is *exactly the same*. Schack’s statement is just the tautology «quantum correlations are examples of correlations» translated into the stupid jargon of readers and writers of stupid pop-science books.

I think this is just an extravagant way of saying that quantum correlations are just correlations and no explanations are needed in general.

Right.

[I cannot take Schack’s Bertlmann comment at face value as this would imply he disagrees with Bell’s mathematical statements from his famous Bertlmann’s socks paper and that would be wrong].

Holy cow, what a breathtaking idiot. Bell’s mathematical statements are 100% irrelevant for physics and 100% irrelevant for the validity of Schack’s statements about quantum mechanics or any other statement about quantum mechanics because Bell’s mathematical statements are derived from the assumption that quantum mechanics is *wrong*. If you assume that 2+2=5, you may derive lots of other things from that but none of these derived consequences will be relevant in the world where 2+2=4. Bell’s theorem is just worthless garbage about a trivial and unphysical world. Why do these hacks keep on saying that this theorem is relevant for the validity of claims about quantum mechanics? It is not.

Now since both sides agree on the mathematics and on experiments, but disagree on interpretation maybe there is a middle ground. Abner Shimony introduced the expression: «passion at a distance» but in the charged atmosphere of today in quantum foundations this is not a popular point of view.

The value of this «call for middle ground» is zero. Someone may decide to fill 1/2 of his skull with pieces of a brain, the other 1/2 of his skull with feces, but that won’t make him more reasonable than the people who have a brain in the skull.

Behind the local-nonlocal debate there is a fracture of interpretation: is quantum mechanics ontic or epistemic? Jean Bricmont expresses best the ontic point of view around 4: 35 in the interview below:

Again, given the prevailing definitions of the words «ontic» and «epistemic» by the users of these words – words that are a priori physically meaningless – Nature is neither «ontic» nor «epistemic» because both of these adjectives mean «classical». Nature is something completely different. Nature is quantum mechanical.

«you need a theory about the world whose fundamental concepts are not expressed, the meaning is not expressed in terms of measurment».

Complete rubbish. Exactly the opposite claim is what underlies correct modern physics. *All* physically meaningful claims about Nature have to be expressed as statements about the results of measurements.

The opposing epistemic point of view was best expressed by late Asher Peres: «quantum mechanics while correct it is not universal, some things must remain unanalyzed».

Rubbish. Quantum mechanics is *completely universal*. But like every scientific theory, axiomatic system, or even a human language, quantum mechanics has its own rules about sentences that are meaningful, and sentences that are meaningless. A system of meaningful and valid statements defines quantum mechanics, it applies universally, and it is capable of explaining all patterns in the empirical observations ever made by humans. So according to quantum mechanics, all *physically meaningful* questions may be analyzed. It is a complete theory. Quantum mechanics doesn’t allow one to analyze *physically meaningless* questions but the same fact is obviously true for any theory in science.

For now the supporters of each camp do not agree at all with the opposite point of view and seems that nothing can change their minds as each position is perfectly self-consistent. But what is my position because I am neither in the epistemic nor in the ontic camp?

Science is not about «camps» and all existing «systems of claims» that may be shown to be inequivalent to the right theory may also be shown to be either internally inconsistent, or incompatible with the empirical data. There is only *one correct* theoretical framework in modern physics that we call quantum mechanics. Everyone who likes to say meaningless things about Nature is a vacuous confused babbler. Everyone who says meaningful things that are inequivalent to the conclusions of the correct theory is demonstrably wrong.

First, Asher Peres position is wrong because his argument is pure handwaving inspired by Godel’s incompletness theorem. In Godel’s proof there is this key step of arithmetization of syntax without which the proof falls apart, and this is missing from Peres’ musings.

These extremely vague links to other popular memes are omnipresent in the «interpretation» business. These people have no standards whatsoever. Most of them are high. The analogy between quantum mechanics and Gödel’s theorems, if it can be talked about at all, is so vague and superficial that it shouldn’t serve as the basis of any serious analysis or paper.

More important, quantum mechanics can be reconstructed from the assumption of its universality. I believe the epistemic point of view is essentially correct, but I disagree that the Bayesian point of view gives you the complete story.

If you «disagree» with the laws of physics, feel free to apply for asylum in a different Universe that you will find more pleasing, idiot.

In fact I predict that quantum collapse happens in nature by itself (similar with spontaneous symmetry breaking) and that there is a boundary between quantum and classical due to dynamically generated superselection rules.

The «collapse» cannot be an objective phenomenon in Nature because such an «objective collapse» would imply the action at a distance.

This implies a testable extension of the quantum formalism and I’ll talk about this in future posts.

It is incredibly obvious that as long as one modifies quantum mechanics substantially, in a way that isn’t just a self-evidently tiny variation of a sort, the predictions of most phenomena will differ by O(100%) and these predictions have clearly been refuted by experiments. All these theories have been falsified.

The same approach which allowed me to reconstruct quantum mechanics from physical principles predicts a unique extension of quantum formalism using Grothendieck group construction. Let experiments decide if I am right or wrong.

The experiments have already produced the results that decide the fate of the theories by this Romanian crackpot and all similar crackpots. Experiments that have been done every day by chimps are basically enough for that. This lip service constantly paid to the «experiments» is a religious ritual of crackpots by which they want their junk to look more attractive to the stupid people who know nothing about science but who love religious rituals such as the repetition of the word «experiment». But whenever the musings by these folks are being compared to *any* experiment, they are always and immediately seen to be totally wrong. Pure rubbish.

I also think the basic demand expressed by Jean Bricmont is perfectly valid, but I disagree that the Bohmian interpretation is the way to go.

Agree, disagree. This is like some political debating camp. In science, one doesn’t agree or disagree. In science, one presents evidence in favor of something or against something, or he shuts up.

The main fault of Bohmian’s approach is distinguishing the complex number formalism of quantum mechanics and splitting the wavefunction into the real and imaginary parts.

It is one of the numerous lethal flaws of Bohmian mechanics and they are deeply intertwined.

The quantum harmonic oscillator can be successfully solved in phase space or in the quaternionic formalisms and one obtains the same predictions. However the actual representations are very different in mathematical terms, and who says complex wavefunctions deserves ontic status and quaternionic wavefunctions do not?

Complex amplitudes and all matrix elements etc. are complex numbers, not quaternionic numbers. In any mathematically sensible description of quaternion-valued matrices, each quaternion is just a \(2\times 2\) complex matrix of a special type, \(((\alpha,\beta),(-\beta^*,\alpha^*))\). One may see that interesting physical systems don’t have operators of this kind. On the other hand, one can see that \(i\) is essential. The imaginary unit \(i\) has to appear in the (unavoidably anti-Hermitian) commutator of two Hermitian operators, e.g. \(xp-px=i\hbar\), in Schrödinger’s or Heisenberg’s equations of motion, or in \(\exp(iS/\hbar)\) entering Feynman’s path integral. The need for \(i\) in all these situations is ultimately equivalent. One needs one imaginary unit but there’s no room for several imaginary units. But if one finds an equivalent description that yields the same physical predictions, then a physicist just treats the two descriptions as equivalent and doesn’t spend any time talking about the «apparent non-uniqueness» of the formalism. At any rate, «ontic» and «epistemic» theories in the usual strict definitions are wrong theories of Nature.
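The embedding of quaternions into \(2\times 2\) complex matrices mentioned above is easy to check explicitly. A minimal NumPy sketch (my own illustration of the standard matrix realization):

```python
import numpy as np

# Quaternion units realized as 2x2 complex matrices of the form
# ((alpha, beta), (-conj(beta), conj(alpha))) mentioned above
one = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])               # alpha = i, beta = 0
qj = np.array([[0, 1], [-1, 0]], dtype=complex)  # alpha = 0, beta = 1
qk = qi @ qj                                     # alpha = 0, beta = i

# The defining Hamilton relations hold in this matrix realization:
for u in (qi, qj, qk):
    assert np.allclose(u @ u, -one)       # i^2 = j^2 = k^2 = -1
assert np.allclose(qi @ qj @ qk, -one)    # ijk = -1
assert np.allclose(qj @ qi, -qk)          # ji = -k: quaternions don't commute
```

So a «quaternionic» matrix never contains anything beyond complex matrices of this special restricted form, which is why the complex formalism is the general one.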

Finally, is nature local or nonlocal?

It’s not «finally». That’s what this blog post began with. Nature is local. At least everything that is needed to explain the entanglement experiments is local.

Local or nonlocal are bad words lacking a precise enough meaning.

«Local» and «nonlocal» are very good and important words that were defined at the beginning of this blog post and that are important for people who actually study various (classical or) quantum mechanical theories of physical phenomena.

Nature is pure quantum mechanical, quantum mechanics is universal, locality-independent and no-signaling.

Nature is pure quantum mechanical and universal but the remaining two adjectives are just «would-be clever», confusing, and ultimately worthless mutations of actual adjectives that are needed in physics, especially the adjective «local».