Sabine Hossenfelder thinks that our era in physics is characterized by three paradigm shifts:
In her post Paradigm Shifts (Backreaction), she lists these three entries:
- Fall of reductionism
- The multiverse
- Disappearance of a fundamental spacetime
Reductionism is alive and kicking
Reductionism is the attitude or philosophical position that the objects and phenomena of the real world are composed out of simpler, more elementary objects and the fundamental interactions between them; or, alternatively, the scientific strategy of studying things by assuming that the first half of this sentence is correct.
The vast majority of the theoretical progress in science during the recent centuries has been a story about the success of reductionism. And nothing is changing about that at the beginning of the 21st century, either. Hossenfelder lives in a very different world. She interprets the current situation as follows:
"You sometimes find today people in talk vigorously arguing that reductionism has limitations, just to find there's nobody actually disagreeing with them. Except for the old professor in the front row."

Well, there are unfortunately many people who make a living out of spreading vague nonsense about the limits of reductionism. And there are kilotons of people in all kinds of scientific institutions who nod when they hear this vacuous babbling.
In fact, Hossenfelder reveals what is behind this attitude of - mostly young - people. The old white male professor in the front row - imagine someone like Steven Weinberg - thinks that reductionism works and no "limitations of it" have been found. Because the old white male people are considered "uncool" by all the other people in the "fifth row" who have gotten to these institutions because they're neither old nor white nor male, such as Ms Hossenfelder herself, they immediately pick the opposite position.
Now, younger generations may often have a point. But they can't have a point just by their being younger. In this particular case, the opposition to the key role of reductionism in science is clearly justified by nothing whatsoever.
In theoretical physics, the last two decades did show us that the notion of "compositeness" may be subtle. If we change the conditions of the environment - in particular, the value of some scalar fields - the objects that used to look elementary (and light) become composite (and heavy) and vice versa.
Dualities, i.e. equivalences between superficially vastly different descriptions, became an important family of the true paradigm shifts that have changed physics during the last 20 years. In particular, the holographic duality implies that many gluons and quarks - interacting via the strong interaction - may become physically indistinguishable from e.g. a black hole in anti-de Sitter space.
Of course, such a configuration of matter is a high-entropy state in both descriptions (because black holes do carry a huge entropy, too, and because the counting of microstates has to agree in all descriptions). But the two descriptions lead us to very different ideas about which configurations are "simple" or "elementary" and which of them are "complicated" or "composite".
But in some important sense, this duality revolution - while teaching us many new subtleties about the issue of compositeness - has strengthened reductionism rather than weakened it. It has provided us with many new ways in which physical phenomena can be reduced to elementary ones. Instead of a single way in which things are made out of the elementary building blocks, we must acknowledge that the number of ways is larger.
There's one more reason why the duality revolution has strengthened reductionism. The foes of reductionism used to say "more is different" or "infinitely many is different". This proposition implicitly contained the assumption that "more is more complex" or "infinitely many is more complex".
However, dualities imply that "infinitely many is, on the contrary, simpler". When the number of particles, the number of colors, or some other integer N asymptotically approaches infinity, there typically exists a "dual description" in which the "infinite N" limit is well-defined and large-N situations may be calculated as its small perturbations: there exists a new 1/N expansion.
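This is 't Hooft's classic observation. In a U(N) gauge theory, the Feynman diagrams organize themselves by the topology of the surface they can be drawn on, so the free energy becomes a genus expansion:

```latex
F(N,\lambda) \;=\; \sum_{g=0}^{\infty} N^{2-2g}\, f_g(\lambda)\,,
\qquad \lambda \equiv g_{\rm YM}^2 N \ \ \text{held fixed.}
```

The N → ∞ limit keeps only the planar (g = 0) diagrams and is often the simplest regime; finite-N physics is then recovered as a series of small 1/N² corrections around it.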
So the original procedures to "reduce" still work, but for each "infinitely complex" limit, we actually gain a new method of reducing the phenomena to different building blocks. We may say that the dualities don't reduce the validity of the original methods of reductionism; instead, they add new, equally functional methods.
It seems obvious that further progress in science will continue to be proportional to the number of new achievements in the realization of reductionism. Every time two phenomena, A and B, are connected in some way - a key process in the theorists' work of organizing experimental insights - it is either the case that A is composed out of B, or B is composed out of A, or both A and B are composed out of something else. All three possibilities mean a success of reductionism, and there's really no other way to show that A and B are related.
At the beginning, I mentioned that reductionism can either be a "belief in the reductionist structure in principle", or a practical activity arising from this belief. These two interpretations of reductionism are partially correlated. At any rate, I think that the latter, the "reductionism in practice", has been very successful and continues to be successful. This success, together with the absence of any negative evidence against the reductionist principles, also supports the former "philosophical belief".
One hundred years ago, people began to learn that the Universe was much larger than they had thought. For centuries, people thought that the Universe was as large as the solar system. Of course, a long time ago, they figured out that the stars were just "other Suns" that were very distant.
There are hundreds of billions of stars in our galaxy, the Milky Way. For a while, people thought that the Universe was as big as the Milky Way. However, it was realized that many of the other dots in the sky are other galaxies and that there are hundreds of billions of them in the visible Universe. The Universe is just huge.
Entertainingly enough, the temporal "size" of the Universe - its age - has undergone the opposite transformation. People used to think that our Universe was infinite in time: its age was thought to be infinite. However, the discovery of the expansion of the Universe - and the Big Bang Theory - revealed that the age of the Universe is finite and (in c=1 units) pretty much equal to the size of the visible Universe.
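A quick back-of-the-envelope check of this "age ≈ size" coincidence (using a round, assumed value of the Hubble constant, not precise cosmological data): the Hubble time 1/H₀, which in c=1 units equals the Hubble radius, lands close to the measured age of about 13.8 billion years.

```python
# Rough arithmetic only: H0 = 70 km/s/Mpc is an assumed round number.
H0 = 70.0                  # Hubble constant in km/s/Mpc
KM_PER_MPC = 3.0857e19     # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

hubble_time_s = KM_PER_MPC / H0                       # 1/H0 in seconds
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9

# In c=1 units this is also the Hubble radius in billions of light years,
# i.e. the same order as the age of the Universe (~13.8 Gyr).
print(f"Hubble time 1/H0 ~ {hubble_time_gyr:.1f} Gyr")
```

The comoving radius of the visible Universe is actually a few times larger than this, but the order-of-magnitude coincidence is exactly the one the text refers to.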
That's another event that has put space and time on more equal footing, something that relativity makes natural or inevitable in most situations.
However, the multiverse is an even bigger enhancement of our ideas about the size of the Universe. The multiverse, which becomes the "whole Universe" in this framework, is composed out of many "partial universes" (one must be careful because in the context of the multiverse, the term "universe" can mean two different things!) which can't observe each other, not even in principle. Moreover, the individual universes can't influence each other because of their causal isolation or at least because they don't share the same "light" etc.
Is the multiverse a part of a valid description of everything?
Well, this question has several layers. If you ask me whether the fundamental equations that govern reality admit "other kinds of empty space" whose detailed properties differ from the vacuum that surrounds our planet, my answer is a resounding Yes. Not only would I guess that the answer is Yes; I think that this answer has been established beyond any reasonable doubt. There are many vacua in string theory. Some of them are nicer, simpler, more supersymmetric, and less viable than ours. Others that are semi-realistic arguably exist, too.
Another question is whether the "other universes" should be considered a part of the "same world" with the "same history" in which we live - and whether such an assumption can teach us something.
Concerning this second question, I am not sure about the answer. I think that the answer has not yet been established by any reliable scientific evidence but it is probably No. If we're separated from "other universes" by domain walls that dramatically change the effective laws of physics - the spectrum of elementary particles, among other things - then it is likely that no realistic clocks and sticks can survive the transfer to a different universe even if such a transfer were causally possible.
You could think that such a "destruction of the clocks" is just a technical limitation and it still makes sense to think about the different universes in a unified way. Well, maybe. But I think that such a unification is probably illegitimate even in principle. Why? It's because the "time" in the different partial universes is not the same time. It's measured by "different clocks". And the difference between the clocks is not just "technical".
The "space" in the two adjacent universes, obtained by a bubble nucleation, is not quite the same space, either. The x-coordinates in two such spaces are in a similar relationship as the x-coordinates in two spacetimes that are T-dual or mirror-symmetric to each other. You shouldn't be directly comparing them.
The more separated your bubble is from the other universes by some huge gradients of elementary fields (that destroy "clocks") or even by bubble nucleation, the less sensible it is for you to include the two regions of space into the same spacetime. This statement of mine should be comprehensible for those who have the feelings of experimenters: if you can't measure something, your theories shouldn't talk about it.
Well, I would disagree with such a strict rule: theories can talk and often have to talk about many things that can't be directly measured experimentally. But I do think that there are independent theoretical reasons why it makes no physical sense to include both different partial universes in the same space.
So while I think that the "other universes" do exist - in the sense that they're solutions to the same equations as our Universe and their fate may be theoretically calculated by these equations (the laws of nuclear physics also allow huge nuclei even though they're not too important for our lives!) - you shouldn't imagine that they exist in our neighborhood. Such an idea won't allow you to predict any additional processes, I think.
The anthropic people would disagree with this conclusion because they think that the multiverse gives us a "measure" that allows us to make "reasonable expectations" about our own location within the multiverse - and within the "landscape" of the possible solutions to string theory.
I don't really think that it's causally possible to extract such information from the fundamental theory. We can't ever assume that we're "typical" in any sense that the anthropic people envision. Why? For example, because there can be many more people - trillions of people - who will live in the future. If that's the case, then it would be shocking - a contradiction with the anthropic predictions - that we are among the first 20 billion people who have lived on Earth.
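The typicality arithmetic behind this objection is trivial; here is a toy illustration with round, assumed numbers (the count of people born so far is a rough demographic figure, and the future total is purely hypothetical):

```python
def prob_among_first(n, n_total):
    """If you are equally likely to be any of n_total observers,
    the chance of being among the first n is simply n / n_total."""
    return n / n_total

# Assumed round numbers for illustration only:
n_so_far = 1e11   # roughly the people born up to now
n_total = 1e13    # hypothetical total if trillions live in the future

p = prob_among_first(n_so_far, n_total)
print(f"P(among the first {n_so_far:.0e} of {n_total:.0e}) = {p:.0%}")
```

Under these assumptions, our position near the very beginning of the sequence would be a 1% anomaly - which is the "shocking contradiction with the anthropic predictions" referred to above.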
The anthropic champions could claim that the anthropic principle implies a doomsday: there can't exist trillions of people in the future. But such a "prediction" of their "theory of initial conditions" is clearly impossible because it is acausal. In the real world, the question whether mankind will ever manage to expand to trillions of people is dictated by completely different things - how we and the future generations manage to fight the various threats to our civilization and its future development and growth, including evil aliens, nuclear war, huge pandemics, and environmentalism, among other lethal threats. (I started with the least likely threats.)
So it's simply not possible for the "anthropic principle" to produce a very different, "statistical" algorithm to predict the world's population in a far future. Unless such an algorithm were actually compatible with - or equivalent to - the detailed dynamical laws, which seems implausible.
More generally, I think that it is illegitimate to assume that the "choice of the boundary conditions for our Universe" has to be determined by an algorithm that depends on a simple "pre-history of our inflating multiverse" in a simple way. There exist many more possible ways in which the "point on the landscape" for our visible Universe could have been determined - strictly or probabilistically - and most of them can't be "biologically" or "statistically" derived from a family tree of a multiverse.
The very process of "quantum tunneling" into a different vacuum means that the daughter vacuum doesn't inherit much from its parent. So it can't be terribly useful for our predictions to learn who the parent was and who were his parents. ;-) Real physics starts with the Big Bang in our visible Universe, I think.
To summarize, the "other universes" are here with us to stay. But it seems pretty much impossible that we will ever measure something about them. And it seems impossible for us to learn something about our Universe by studying the other universes. These "other universes" follow from string theory and we may calculate their properties. But this knowledge will always be just a by-product of the theories found to describe our Universe.
These conclusions will have to be altered - in the positive direction - if we find out that in the "family tree" of the possible vacua, our point on the landscape is pretty close to the most "fundamental" or "symmetric" or "natural" vacua. In a religious analogy, you could say that people may find out that we are pretty close relatives of God. ;-) If that's the case, it makes sense to analyze the vicinity of the "God" vacua carefully, and we may find ourselves there. However, if we're very far from the preferred points on the landscape, it is probably not useful to study the structure of the "family tree".
Spacetime is not fundamental
I have written about this point many times, too. Spacetime is doomed. This statement is somewhat vague but its better-defined refinements have been established. Once again, dualities play an important role here.
What looks like a winding number in one description may look like a component of the momentum in a T-dual description. Various descriptions with very different shapes of hidden dimensions may be equivalent to each other - by mirror symmetry. Moreover, the holographic duality implies the equivalence of physical phenomena in spacetimes that differ even in their number of large dimensions!
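The winding/momentum exchange can be seen directly in the spectrum of a closed string on a circle of radius R. With momentum number n and winding number w,

```latex
\alpha' M^2 \;=\; \alpha'\left(\frac{n}{R}\right)^{2} \;+\; \frac{(wR)^{2}}{\alpha'}
\;+\; 2\left(N + \tilde N - 2\right),
\qquad N - \tilde N = n w\,,
```

and the whole spectrum is manifestly invariant under the T-duality map R → α′/R combined with the exchange n ↔ w. A quantum number that "measures geometry" (momentum along the circle) is indistinguishable from one that "counts how many times the string wraps it".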
The topology of space may be transformed to a different one. Stabilized black holes, which look like a region of completely empty space (almost everywhere), may also be equivalently described as a bound state of a huge number of gluons or other particles. Matrix theory, the AdS/CFT correspondence, dualities, and all kinds of critical transitions where new light degrees of freedom - and new dimensions - may emerge under certain conditions are the reasons why spacetime will never be "absolute" again.
Aristotle thought that "the rest frame" (connected with the Earth) was privileged and unique. Things wanted to stay at rest: Aristotle used the "infinite friction" limit of physics, which wasn't terribly useful to understand motion. :-)
However, Galileo and Newton showed that all inertial systems - moving uniformly with respect to each other - are equally good to formulate the laws of physics. Einstein realized that such transformations preserved not only the simplicity of the laws of mechanics but also the speed of light, because these transformations also mixed space and time (special relativity), and that the equivalence could be extended to accelerating and other general coordinate systems - which is why gravity has to be included among the forces, as general relativity dictates.
The last two decades of fascinating insights in string theory have also shown that the very dimensionality of spacetime is flexible and somewhat ill-defined in string theory. When all the "hidden" dimensions show their beauty in the full glory, you always see 9+1 or 10+1 dimensions. But the different ways in which the dimensions may decompactify are connected by a dense network of dualities involving diverse compactifications, topological transitions, and equivalences. Spacetime is very flexible at the fundamental level.
Most of these insights refer to "space": it can be transformed to many other, inequivalent geometries or described by "non-geometric" degrees of freedom. In particular, the "nearly flat empty" geometry around large black holes seems to be emergent. By special relativity, it seems clear that time, much like space, should fail to be fundamental in the very same way. However, our understanding of an "emergent time" is much less clear. One of the reasons is that the Minkowskian time can't really be compactified (closed time-like curves imply logical contradictions). And without any compactification, it's hard to "melt" the time into other forms.
However, the Euclidean time may be compactified - that's what finite-temperature calculations do all the time. The Euclidean time may be melted into other forms, undergo topological transitions and other things. That's what black holes - with their nonzero temperatures - are doing all the time. It's a big open task to understand how the "character of time" may dramatically change, in a similar way as in the case of space, and what all these things mean.
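The dictionary between a compact Euclidean time and temperature is the standard one from thermal field theory, and it is exactly what fixes the Hawking temperature: demanding that the Euclidean Schwarzschild geometry be smooth at the horizon determines the period of the Euclidean time,

```latex
t_E \;\sim\; t_E + \beta \hbar\,, \qquad \beta = \frac{1}{k_B T}\,,
\qquad T_H \;=\; \frac{\hbar c^3}{8\pi G M k_B}\,.
```

So a black hole of mass M "compactifies" the Euclidean time on a circle whose circumference encodes its temperature - a concrete example of the time direction being melted into thermodynamics.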
In particular, can there be any meaningful timeless description of physics? What are the physical observables that may be calculated in such an unfamiliar regime of existence?
Hossenfelder also asks whether the fundamental entities are "space", "matter", or "something in between".
Of course, that's mostly a terminological question. "Space" may also be defined as "geometry". But both of these terms are somewhat vague because the concept of "geometry" may be generalized in many ways. It's clear that when we talk about geometry these days, we no longer think about the "two-dimensional or three-dimensional flat Euclidean geometry" which is what geometry meant 2,000 years ago.
The Riemannian, higher-dimensional, quantized, non-commutative, and other adjectives have become standard qualities that may describe geometries we encounter. All such theoretical constructs are still counted as "geometries". In perturbative string theory, we may also care about the "stringy geometry" that also knows about the T-dual coordinates, T-duality in general, mirror symmetry, and all other things that the two-dimensional conformal field theory relates to the spacetime geometry and/or describes by the same methods.
Because in string theory, gluons, electrons, and other particle species are "made out of" the very same stuff as gravitons, it's clear that matter and geometry are unified. For example, the E8 x E8 gauge bosons in heterotic string theory arise as excitations very similar to the gravitons.
You may either say that there are two forms of "existence" whose origin used to be split but it is unified in string theory. Or you can say that the E8 x E8 gauge bosons, while being "matter" in the pre-stringy theories, become a part of the "generalized stringy geometry" (involving the extra left-moving 16-dimensional torus), and everything is made out of the "generalized geometry". This kind of unification of matter and geometry has essentially been known already to Kaluza and Klein who were the first ones to realize that photons (and gauge bosons) could arise just as gravitons in a higher-dimensional geometry. String theory has brought us many new ways how to link gravity and other, conventional types of matter.
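The Kaluza-Klein mechanism mentioned here can be summarized in one line: start with pure gravity in five dimensions and write the metric with one compact coordinate x⁵,

```latex
ds^2 \;=\; g_{\mu\nu}(x)\,dx^\mu dx^\nu \;+\; \phi^2(x)\,\bigl(dx^5 + A_\mu(x)\,dx^\mu\bigr)^2\,.
```

From the four-dimensional viewpoint, g_{μν} is the graviton, A_μ behaves exactly as a U(1) gauge field (reparametrizations of x⁵ act on it as gauge transformations), and φ is a scalar measuring the size of the circle. "Matter" (the photon) is literally a component of the higher-dimensional geometry.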
Or you can say that everything is made out of matter - some forms of matter, namely strings (and/or branes) in particular vibrational patterns, living on a pre-existing spacetime with no dynamical geometry inserted - and that some of the vibrational patterns, the gravitons, just happen to be indistinguishable from deformations of the spacetime geometry: the spacetime geometry is made out of them and you ultimately find out that it is de facto dynamical, anyway.
In all three interpretations, the mathematics that ultimately describes what's going on is identical. So you can't really say which of the three answers is correct. However, it's clear that the "absolute" separation of the forms of existence into "matter" and "underlying space" is gone. While it will be forever useful to separate these players in the analysis of any conceivable realistic situation, the iron curtain that separated them has been torn down in fundamental physics.
To summarize, reductionism is doing very well. The more progress science makes, the more powerful reductionism will become, just like in the recent decades and centuries. We are finding new ways to reduce, making the methods based on reductionism more important than ever before.
The multiverse may be here with us to stay but the "other universes" will probably remain just some by-products of our most accurate theories to describe our Cosmos. They will remain curiosities that can't teach us much about the environment we like so much. At some point, people will hopefully find out the point in the "landscape" which describes the bubble we inhabit. At that moment, the "other universes" will literally become curiosities - places we won't ever be able to communicate with.
The spacetime is no longer "absolute" and it is no longer "strictly separated from other forms of matter" that used to occupy it. The iron curtain between these different forms of existence has been torn apart.
However, there's one crucial point that I haven't articulated in this text yet: it is still absolutely critical for any theory considered in physics to predict something that behaves as space, something that behaves as matter living in this space, and the major classes of particle species that live in this Universe.
If we say that the spacetime and matter ceased to be fundamental and/or separated, it surely doesn't mean that they have disappeared from physics. Quite on the contrary. They're still essential parts of physics - essential predictions that every theory has to make unless it wants to be instantly eliminated. The reason is that the matter and space are being observed all the time and physics has to agree with the observations.
Exactly because the most accurate theories we are testing and elaborating upon today can't build the world out of a pre-existing spacetime and a strictly and permanently separated matter living on the spacetime - because that's not how the world works at the fundamental level - it becomes extremely nontrivial for these theories to predict physics that approximately, with a huge accuracy, does seem to separate the phenomena to spacetime and the required types of matter.
If a theory with a "non-fundamental spacetime" fails in this task, it was probably too abstract or detached from reality and it immediately dies. Only the theories carefully walking the tightrope between the "hot philosophical sexiness of the new concepts that destroy all the boundaries" and the "cold empirical realism required to match all the strictly classified experimental facts" have a chance to survive.
And that's the memo.