Friday, January 30, 2009

Emergence of thermodynamics from statistical physics

I have seen - and participated in - a nearly infinite discussion at Backreaction, started by a paper about not-quite-conservative solutions to the black hole paradox, where a couple of armchair physicists (and sometimes professional armchair physicists) were simply not able to understand some basic things about statistical physics, thermodynamics, and their relationships. I am always amazed where the self-confidence of the people who are parametrically as dumb as a door knob comes from. These pompous fools never want to realize that they're just wasting other people's time by their stupidity - or maybe they do realize it? Besides Sabine, a physics kibitzer called Peter was genuinely obnoxious. Why are so many loud people who talk nonsense about physics called Peter?

As far as I understand the sociology of these things, basic philosophical postulates of statistical physics and thermodynamics - and their key relationships - should be taught and usually are taught when you're a sophomore, an undergraduate student. These things have been known for more than a century - in classical physics - and quantum mechanics has only made certain limited corrections to this basic philosophy (related to the probabilistic nature of predictions and the quantization of various quantities). It's just completely baffling to me when someone who has misunderstood everything about these basic issues is flooding some weblogs that are trying to pretend to be close to actual physics research.

Black holes: progress

In the last decade, the field of quantum gravity i.e. string theory has made substantial progress that allows us, for the first time in history, to treat black holes on par with other physical systems with many degrees of freedom. The black hole revolution in string theory can be summarized in this way. In this text, I will explain why and how our present description of a black hole - and of observers who fall into it - is completely analogous to our description of a toy train, a large bound state of iron atoms.

This text has several goals. First, it should shed some light on various basic principles that are important for our microscopic description of black holes and black hole processes. But there is another goal which is more elementary: to actually explain what's happening with the toy train degrees of freedom and where the "friction" and "irreversibility" in general come from, because this question seems to be misunderstood by many people, too.

Processes to be studied

We will be sending spaceships into the black hole; they will ultimately be destroyed near the black hole singularity. And we will be playing with toy wagons; they will eventually stop because of the friction forces. All statements related to the black holes will use a green font to be distinguished. We will try to find analogies of many important processes and facts on both sides but some of them will require a special treatment.


In both cases, we will be talking about two descriptions of the processes. One of them,
  • the microscopic description,
will have to deal with the iron atoms of the toy train and with the fundamental degrees of freedom of the black hole. The "microstates" of the toy train will be determined by the positions and velocities (points in phase space) of the iron atoms that are found inside the toy train; recall that the phase space becomes discrete once you add quantum mechanics. The "microstates" of the black hole will be referred to as "fuzzballs": rather generic, complicated things can occur to the spacetime inside the event horizon. On the other hand, there will also be
  • the macroscopic description
that will try to neglect the "atoms" and only look at some broad properties of the continuum - big chunks of the metal or large, nearly smooth and empty regions of spacetime in the black hole case. This macroscopic description will include partial differential equations - mechanics of solids (continuum) or Einstein's equations. Clearly, the microscopic description is more fundamental and everything, including the macroscopic description, can be derived from it, at least in principle.

On the other hand, the macroscopic description may be simpler to find. And indeed, it was historically known before it was derived from the microscopic language: waves in a continuum etc. were known before matter was visualized as a bound state of atoms, and Einstein's equations were found long before the microstates were analyzed in string theory.

Basic properties of the microscopic descriptions

The toy train is made out of atoms. Let's choose them as the microscopic building blocks; more accurate building blocks (e.g. quarks) wouldn't change the discussion qualitatively. The atoms have positions and velocities - and these quantities, living in a "phase space", fully determine the current or past or future "state of the system". And they evolve according to some deterministic, fully reversible equations.

In quantum mechanics, the equations don't determine the positions and velocities deterministically but the wave function, a gadget to determine the probabilities of their various values, does evolve "deterministically" and this evolution is reversible, too. This weaker, quantum form of determinism and reversibility will be referred to as "unitarity". We have repeatedly explained why. In quantum mechanics, the phase space is divided into discrete "cells" whose volume is a power of "2.pi.hbar" (one factor for each pair of conjugate coordinates). Each of them becomes a basis vector of a Hilbert space and any complex linear superposition of such basis vectors is allowed, as a postulate of quantum mechanics dictates.
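To make the cell counting concrete, here is a tiny numerical sketch of my own (a one-dimensional toy, not anything from the physics literature discussed here) showing how a phase-space area translates into a number of quantum cells and an entropy:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant in J*s

def microstate_count(phase_space_area):
    """Number of quantum cells of size 2*pi*hbar in a 1D phase-space area (J*s)."""
    return phase_space_area / (2 * math.pi * HBAR)

def entropy_nats(phase_space_area):
    """Entropy S = ln(number of microstates), in nats."""
    return math.log(microstate_count(phase_space_area))

# Even a modest macroscopic phase-space area of 1 J*s contains ~10^33 cells:
print(f"cells: {microstate_count(1.0):.2e}")
print(f"entropy: {entropy_nats(1.0):.1f} nats")
```

The huge number of cells per macroscopic phase-space area is the reason the classical continuum description works so well for large systems.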

The black hole and the objects that fall into it are made out of microscopic degrees of freedom, too. But their mathematical description is more abstract and it would be an oversimplification to call them "atoms of space". For example, the AdS/CFT correspondence describes them in terms of quantum numbers of a large number of gluons or gluon fields (on the boundary of the AdS space) and their superpartners. The individual microstates often have many other representations that are compatible with the "gluon description" because of various mathematical identities and dualities.

At any rate, the microscopic description is reversible and it doesn't really distinguish the past from the future. One can't talk about "friction" at the fundamental level: while the microscopic description works for arbitrarily large systems, it only gives us a correct intuition about what happens, without additional work, if the systems are small - a few atoms, so to speak. The information is preserved.

Basic properties of the macroscopic description

There are too many atoms - or "gluons" hiding inside a large black hole - so to get a usable theory of the large objects, we must simplify. Whenever we neglect a majority of the microscopic, "atomic" degrees of freedom, we talk about a macroscopic description. There can be many useful or "correct" effective macroscopic descriptions and all of them have the same "philosophical properties".

Partial differential equations govern the "most complete" macroscopic descriptions. In the case of the iron toy, we want to describe the toy by its shape, its velocity as a function of position and time, v(x,y,z,t), and its tension (and perhaps temperature) as a function of the same variables, among similar fields. This description is analogous to a bulk description of the spacetime with black holes based on general relativity and Einstein's equations.

But we may simplify the macroscopic description even more than that. We may imagine that the iron is incompressible and only parametrize the positions of the pieces of the toy and their orientation (and perhaps their average temperature). Spacetime configurations with black holes may be described in similar ways. For example, multi-centered black holes may be parameterized by the positions of the centers which seems to be a much smaller amount of information than the fields.

All these macroscopic descriptions are irreversible and break the time-reversal symmetry (or CPT symmetry if you want to incorporate C, P, or CP violation that has nothing to do with the irreversibility found in the macroscopic world). For example, there are many first time-derivatives in the equations. The friction (or viscosity or heat diffusion etc.), which is a term in the macroscopic equations, always slows the toy train down and generates heat: as the second law of thermodynamics observes, it never speeds it up by extracting heat from the iron. In the black hole language, the total area of event horizons never decreases; it usually increases. Also, the black hole interior is always found on the future side of the event horizons, never on their past side. The previous sentence says that there are only black holes in Nature and no macroscopic white holes.

To summarize, macroscopic descriptions always include processes that create heat and entropy (if the entropy change is calculated from the dissipated energy). They're irreversible.

Irreversibility from the microscopic starting point

Now we will see how this fact - irreversibility and entropy growth - is derived from the microscopic, reversible description. Rest assured, there's no contradiction here. Not only the very existence of the irreversible phenomena but also their coefficients - such as viscosity or the coefficients of friction - can be derived from the microscopic description.

The main point is that the macroscopic descriptions involve a "lossy compression", if you allow me to import the terminology from JPEG pictures. In the microscopic description, which always uses a GIF-like lossless compression, all operations are reversible. For example, you may define a complicated permutation of pixels of a GIF image which transforms a sexy woman to a rectangle of noise: the permutation plays the role of the evolution according to the laws of physics. But this permutation is still reversible. If you know the permutation, it is possible to reconstruct the woman from the noise.

On the other hand, that's not possible for a lossy compression. If you compress the woman at the beginning and convert her to a JPEG picture, it doesn't matter much. Maybe you began with a JPEG picture, anyway. The real problem is that you convert the permuted, final, noisy picture into a JPEG format, too. Clearly, the fine features of the pixels are forgotten, so if you apply the reverse permutation to the "fuzzy" version of the final noisy picture, you will obtain another noisy picture rather than a sexy woman.

This kind of "conversion to JPEG" is what you're doing every picosecond if you keep on using any kind of a macroscopic description. Inevitably, the information is getting lost, the detailed features of the picture are evaporating, and the picture is converging into a noisy rectangle. For random permutations, you obtain a gray uniform picture at the very end, much like for non-gravitational physics of gases.
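The GIF/JPEG metaphor can be turned into a few lines of Python. This is my own toy model with arbitrary parameters: a fixed permutation of "pixels" is perfectly reversible, but once a lossy coarse-graining step throws away the fine bits, inverting the permutation no longer recovers the original.

```python
import random

random.seed(0)
N = 100

# A "microstate": fine-grained pixel values (the lossless, GIF-like picture).
picture = [random.randint(0, 255) for _ in range(N)]

# Reversible "evolution": a fixed random permutation of the pixels.
perm = list(range(N))
random.shuffle(perm)
inverse = [0] * N
for i, p in enumerate(perm):
    inverse[p] = i

def evolve(state):
    return [state[perm[i]] for i in range(N)]

def evolve_back(state):
    return [state[inverse[i]] for i in range(N)]

# Lossy "JPEG" step: keep only the top 4 bits of every pixel.
def coarse_grain(state):
    return [v & 0xF0 for v in state]

# Pure evolution is reversible -- no information is lost:
assert evolve_back(evolve(picture)) == picture

# But coarse-graining the evolved picture destroys the fine details,
# so undoing the permutation yields more noise, not the original:
recovered = evolve_back(coarse_grain(evolve(picture)))
print(recovered == picture)  # False: the lossy step made the process irreversible
```

The permutation plays the role of the reversible microscopic laws; repeatedly applying the coarse-graining is what any macroscopic description implicitly does.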

However, gravity is more similar to permutations that tend to place dark points to the bottom and light points to the top of the picture, so the "attractor" looks different and non-uniform: the qualitative lesson should be that black holes which are highly non-uniform actually maximize the entropy among all states that can be confined into a fixed volume.

Macroscopic picture may temporarily look reversible, too

There's another point I want to mention. Even in the macroscopic description, you may often think that all the information is preserved: just the amplitudes are being reduced. For example, the velocity of the wagons is decreasing, because of the friction. But this decrease of the velocity follows a well-known mathematical law, so you can seemingly revert it.

This observation has an analogy in the black hole situation, too. The infalling observer seems to experience the same laws of physics as he did outside the gravitational fields (because of the equivalence principle). They look reversible. However, this reversibility only holds if you use the fully microscopic description of the degrees of freedom around his spaceship. Still, this description is not quite microscopic because most of the "ultimate microscopic", Planckian degrees of freedom of the black hole are neglected.

However, the toy train eventually stops completely if there is friction. Once it stops, it becomes effectively impossible to reconstruct its past behavior. You can't say when it stopped one hour after it stopped because all the traces of the past behavior have disappeared.

This comment has a completely exact analogy in the black hole case, too. The infalling observer in the spaceship eventually approaches the singularity and when the tidal forces become devastating enough for him, he is killed, his spaceship is destroyed, and its atoms are torn apart. There is no more reversibility after this moment. It's completely analogous to the train that has stopped.

Now, this irreversibility looks real but it is largely an illusion. If you use the quantum language for this kind of evolution, it is actually not unitary, not even before the toy train stops or the spaceship approaches the singularity. Why? Because the typical "amplitude" of the velocity of the train decreases, so the region occupied in the phase space is shrinking (for damped harmonic oscillators, the amplitude of the position would also be shrinking but in our case, we may be ignorant about the place where the toy train stops).

Quantum mechanically, the number of microstates is proportional to the volume of the phase space and it is therefore shrinking, too. The information, proportional to the logarithm of the number of microstates, is decreasing, too. It is obvious that if a similar decrease - e.g. an exponential decrease - of such amplitudes occurs in a quantum theory, what we observe is a continuous decrease - e.g. an exponential decrease - of the information. The black hole quasinormal (ringing) modes are an example of how to describe such a continuous loss of information.
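As a numerical sketch of that claim (my own toy numbers, with hbar set to one): if the occupied phase-space volume decays exponentially, the number of microstates decays exponentially too, and the entropy - its logarithm - drains away steadily, by a fixed number of nats per unit time.

```python
import math

def entropy_nats(volume, hbar=1.0):
    """S = ln(V / (2*pi*hbar)): log of the number of occupied quantum cells."""
    return math.log(volume / (2 * math.pi * hbar))

V0 = 1.0e6    # initial occupied phase-space volume (arbitrary toy units)
gamma = 0.5   # damping rate of the amplitude (arbitrary toy value)

previous = entropy_nats(V0)
for t in [1, 2, 3]:
    current = entropy_nats(V0 * math.exp(-gamma * t))
    print(f"t={t}: S = {current:.2f} nats (dropped by {previous - current:.2f})")
    previous = current
# Each step loses exactly gamma = 0.5 nats: exponential damping of the
# amplitudes translates into a continuous, steady leak of information.
```

The point is that nothing dramatic happens at any single moment; the information disappears gradually, just like the ringing of the quasinormal modes fades.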

Again, I should have convinced you that the situation is completely analogous for toy trains and black holes.

Loss of information when the singularity is hit

This section was added later. Peter, Sabine, and others make all conceivable mistakes one can possibly make. Therefore, they make the following one, too. When the spaceship hits the singularity or the toy train stops, they believe that something substantial is happening with a lot of information.

But in both cases, it can be easily seen to be incorrect. The degrees of freedom describing the overall motion of the toy train - and the degrees of freedom describing the ordinary matter that is approaching the singularity - are completely negligible in comparison to the entropy of the toy train or the black hole.

It should be obvious in the case of the train which includes quintillions of coordinates of atoms, besides the few center-of-mass degrees of freedom of the wagons. But let us look at the situation of the black hole.

Imagine a large star whose mass is roughly 10^{32} kilograms (or 10^{40} Planck masses). That's close to 10^{60} atoms so the entropy is close to 10^{60}, too. Such a star will collapse into a black hole whose mass is 10^{40} Planck masses i.e. whose radius is 10^{40} Planck lengths: tens of miles. The area of the event horizon is 10^{80} or so (in the Planck units), and so is the entropy. Note that the star entropy was just 10^{60}, or the 3/4-th power of the black hole entropy. This scaling law is not a coincidence: it can be derived in a simplified model of astrophysics.

When a star collapses, the entropy increases almost by all of those 10^{80} bits (or "e-digits", if you care about factors of "ln 2"). Almost all those "exp(10^{80})" microstates become indistinguishable for the outside observer. Only a tiny fraction of them could have resulted from a collapse of a star but one needs extremely accurate measurements of the Hawking radiation to prove that 10^{80} conditions are automatically satisfied by the tiny microscopic correlations of the Hawking radiation - because of the stellar origin of the black hole - and only 10^{60} quantum bits in the radiation actually describe details of the initial star-like state.

When the atoms of the star eventually hit the singularity, they may "locally" kill 10^{60} bits of information. However, meanwhile, the entropy of the black hole, as it approached the spherical shape and/or absorbed the last pieces of the star, increased by an amount comparable to 10^{80}. The contribution of the event when "the star hits the singularity" is negligible in the information budget of a large black hole. It is completely wrong to imagine that something important is happening to the black hole information when the matter hits the singularity.

At that time, most of the information potentially available to the exterior observer in the future (through the Hawking radiation) is already "dissolved" in the gigantic information that can be carried by the black hole entropy. Whether or not the star hits the singularity plays no measurable role for the information counting of the observer outside the black hole. The information "killed" at the singularity only plays a role for physics as observed by the infalling observer but I think he has more serious issues to think about because he's going to die pretty soon.

Deriving the irreversible laws from the microscopic laws

Another important point. The microscopic laws are more fundamental and they should be able to predict the behavior of all systems, including the large ones. We observe friction - or black hole mergers - in the macroscopic description. The entropy is increasing and we should be able to derive these phenomena from the microscopic description, too.

And of course, we can - including the coefficients of the irreversible term in the macroscopic equations (such as the friction coefficients). Such calculations generalize Boltzmann's H-theorem. The mathematical details may be difficult for various situations but the philosophical setup is always identical and I can explain it using the GIF and JPEG pictures.

The GIF picture sees all the pixel-size non-uniformities. In the real world, their magnitude is governed by universal properties of the atoms. You can calculate the size of the atoms and their preferred relative positions at a certain temperature. If you convert a GIF picture to a lossy, low-quality JPEG picture, you know exactly how much information is lost: almost all of the detailed information is lost; that's why the JPEG files are so much smaller. If you evolve the picture (by a permutation of pixels) and apply the lossy JPEG procedure at the end, you will need a smaller file than the one at the beginning to preserve the same rough quality.

That shows that the information was lost, the entropy has increased, and you can actually calculate the rate if you know the details about the microscopic behavior of the GIF pictures and the permutations (the laws of physics). Even if you create a JPEG file of the final noisy picture that has the same size as the initial one, the reverse permutation of pixels won't give you a clear, attractive woman back.

Now, some people ask why we can't also prove that the entropy was low at the end. Isn't it an equivalent statement?

Well, it's surely not because the future and the past don't play the same role in the logical framework in which the specific dynamical laws are embedded - even though the dynamical laws are time-reversal symmetric. The initial states may be well-defined - as neighborhoods of points in the phase space or rather well-defined microstates - because of many reasons: because they encode the assumptions about the world we inherited or learned about or because Nature created the system in one way or the other.

But we can never make detailed microscopic assumptions about the future because the future is, by the very definition of the future, evolving from the past according to the laws of physics. Anyone who assumes detailed things about the future and tries to derive the past is a victim of an ideology with a predetermined goal, dreams, or a wishful thinking.

There can never exist a scientific argument that would allow you to make microscopic assumptions about the future. All rational statements about the future must always be based on the past (plus other logical arguments and insights based on other things from the past): the future is simply unknown otherwise. That's a simple reason why the future never has a low entropy.

So all the irreversible phenomena are, in principle, calculable from the microscopic laws.

Einstein's equations can be derived as the correct thermodynamic limit - and indeed, the thermodynamic limit is the same thing as the long-distance limit - of a microscopic, stringy description of a large black hole (perturbative string theory, AdS/CFT, or Matrix theory, among various combined descriptions using derivable effective field theories at different places). Einstein's equations (with some matter fields added) always follow from string theory at low energies. If you use Einstein's equations, you may also study a collapsing star and see that it develops an event horizon such that the black hole interior is always on the future side of this horizon hypersurface.

This line of reasoning plays the very same role as the derivations of the toy train's friction from the dynamics of iron atoms.

Order-of-limits issues: irreversibility

We saw that in both cases, the sensible macroscopic description is irreversible, generates heat (and the entropy associated with it, according to the macroscopic relations of thermodynamics), and loses the information, at least once the train stops and the spaceship is destroyed near the singularity.

But you might protest: doesn't the question "is the information destroyed" have a discontinuous answer as a function of the number of "atoms" N or the entropy "S" proportional to "N"? Well, it surely does. We said that the information was preserved in the microscopic, finite N description, but lost in the macroscopic, infinite N limit. But there's nothing paradoxical about it. There's no reason for this question to have a continuous answer - for example because the answer is not directly measurable.

We may define two obvious questions that are measurable, either in principle, or in practice: after the train stops or the spaceship hits the singularity,
  • can we reconstruct the information with the most accurate gadgets that are conceivable according to the laws of physics?
  • can we reconstruct the information with gadgets whose size is L "atomic" radii (or Planck lengths, in the gravity case) and that can measure periodicities of vibrations up to the L "atomic" times (or Planck times, in the gravity case) accuracy?
The first question is only good in principle because it "assumes" that we can only use the exact microscopic description, so it will never allow us to accept any approximation that led to the irreversibility. The question is really ill-defined for the strict large N limit because the strict large N limit always assumes that we make some approximations and throw most degrees of freedom away: these two assumptions are incompatible.

The second question is closer to the actual "experiments" and allows us to use any fair theoretical or phenomenological approximations we want. But this question has a continuous answer. Clearly, if L is greater or much greater than one (in Planck units), the resolution is just too bad and we won't be able to restore the information in practice, not even if we accept the finiteness of N. In other words, if we actually try to reconstruct the information with available gadgets, we will fail. The situation is becoming even more hopeless if the systems - trains or black holes - grow in size because we need to follow many more degrees of freedom and the required accuracy for each of them keeps on increasing, too.

So there's no paradox about the discontinuity of the "is it irreversible" question. In principle, using the exact theoretical description, only the microscopic equations are acceptable and physics is always reversible. In practice, we must actually assume probes and gadgets that have a certain resolution and it becomes impossible to reconstruct the information long before the number of "atoms" N becomes strictly infinite.

Horizons and thermal insulators

There exists one aspect that you may consider to be special: the event horizons. You can't really get the information from the black hole interior because the information is behind the event horizon, as all pictures based on classical general relativity imply.

Fortunately, I have a toy train analogy for this issue, too. Many wheels are turning inside a wagon. You could reconstruct their motion if you knew about the heat flows from the wagon. Unfortunately, the wagon is packed in a perfect thermal insulator so you can't see any heat flows whatsoever.

Now, as far as the qualitative questions about the information go, the thermal insulator is completely equivalent to the event horizon: the information can't get through (i.e. outside). There exist simplified equations that simply stop the heat when it reaches the insulator. There also exist simplified equations, Einstein's equations, that don't allow the information to get through the horizon.

You might say that there are no perfect insulators in Nature: the thickness of the insulator in the atomic radii is finite which is never enough to suppress the heat flux completely.

But the same thing holds for the event horizon: there are no exact black hole event horizons in Nature, either. For example, in quantum mechanics, particles can always tunnel through walls - both insulators and horizons. However, the event horizons are much more accurate a description of real physics near the "boundaries" of large black holes than the insulators simply because the size of large black holes in Planck units is a gigantic number - of order 10^{40} or more - so the averaging is much more perfect.

But at the microscopic level, there are no exact insulators and no exact event horizons. At the fundamental level, all objects in Nature are qualitatively similar to bound states of a few particles, e.g. molecules. At the practical level, there are many emergent phenomena in the large N limit and some of them prevent the information from jumping in between different regions.


In the black hole case, it became clear that the degrees of freedom inside the black hole can't be exactly - microscopically - independent from the degrees of freedom outside the black hole, those that end up in the Hawking radiation. Otherwise the information would get doubled on space-like hypersurfaces where most of the Hawking radiation is already gone but the initial mass of the star hasn't yet reached the singularity (such slices exist!). So the information inside the black hole must be really encoded in the same degrees of freedom that became accessible to the observers at infinity, via the Hawking radiation.

Is that a paradox or a deviation from the toy train analogy?

Of course, it's not. One must be very careful which descriptions we use and we cannot mix them incoherently. In the microscopic description, there are no event horizons. Once we specify the exact microscopic state of the degrees of freedom outside and near the horizon, the character of the black hole interior is determined. This may sound paradoxical by itself but it's not: it's just another way to talk about holography in quantum gravity. The interior is encoded in the boundary.

More accurately, you shouldn't talk about the exact "position" of bits of information in the microscopic language because the "position" of objects in a pre-determined smooth spacetime only exists if the spacetime does. And a smooth spacetime inside the black hole only exists in the macroscopic description, if you average many states. So if you ask "where things are" in the microscopic description, you are likely to be asking unphysical questions. The microscopic description really treats the black hole microstates and toy train microstates on equal footing: the black holes just employ a less familiar collection of degrees of freedom (like those matrix observables in Matrix theory or the boundary degrees of freedom in the AdS/CFT correspondence).

And in the macroscopic or semiclassical description, complementarity simply doesn't exist. The macroscopic or semiclassical description assumes that we're not looking at the matter and spacetime with the Planckian (or better) resolution. And if we don't have this resolution, there is no way to prove that the information inside the black hole fails to be independent from the information outside.

Linking the microscopic and macroscopic pictures

In the case of the toy trains, you might imagine how the mechanical description of the pieces of iron emerges from the detailed mechanics of atoms. For black holes, it looks like a much harder problem.

But it is simply true that if you average over regions that are larger than the Planck length - or if you average over microstates that only differ if you have a better, at least Planckian resolution - the smooth space inside the black hole emerges as the correct description. It has to be the case because general relativity is known to be the long-distance limit of string theory but it is a limit that has its own consistency mechanisms. They include causality and diffeomorphism symmetry, and because it can't be macroscopically detected whether the horizon has already been crossed (one would need to know the future exactly, which would violate causality), macroscopic physics simply can't change at the event horizon.

Quite generally, the smooth physics of black hole spacetimes obviously has to arise as the averaging over all (or most of) black hole microstates with certain properties. In the path integral language, it is clear that the thermal mixed ensemble has to include sectors with the right periodicity of the Euclidean time and the Euclidean black hole has to be among them. The Euclidean black hole is an object in the "macroscopic description" but because the latter can be derived as a large N approximation of the microscopic one, at least outside the event horizon, the microstates at a given temperature simply must collectively look like a smooth black hole.

The averaging procedure that generates a "smooth empty interior" of the black hole shouldn't be much more difficult than the derivation of optics in water - that only differs by a different index of refraction etc., despite the complicated water molecules that are chaotically moving around.

Some of the black hole microstates describe a spaceship that has just crossed the horizon (and is inside). But they're very special. The most generic ones describe the highest-entropy configurations in the future when the black hole has already thermalized (but hasn't yet evaporated much). From the macroscopic description, we know that in the future, all the spaceships are already destroyed by the singularity. In other words, the generic, high-entropy black hole microstates describe an empty black hole interior.

Summary: missing piece, deriving physics inside

It would be nice to have much more explicit ways to derive the physics of the black hole interior from our microscopic theories by the averaging process, and to know why ordinary probes automatically do this averaging. These microscopic theories based on string theory clearly describe the correct dynamics we need, have all the properties they should have (whenever we can check them, and there's a lot we can check to see that the agreement is highly nontrivial) and they are directly usable by the observers outside the black hole, at least in principle.

But we really don't know whether we should be asking some exact, "microscopic" questions about the black hole interior even though the soon-to-be-dead observers inside the black hole clearly can't measure them with an unlimited accuracy: for example, they can't perform any "scattering experiment" for particles coming from and leaving to infinity, something we often do in flat and AdS spaces.

It is plausible that such an "exact description designed for the infalling observer" doesn't exist. And maybe, it does exist. For example, there can be a perfect description based on the light-cone gauge, using the slices that are "graphically perpendicular" to the event horizon in the Penrose diagram. But it would be nice to have quantitative tools to see how much the "scattering problem" that is applicable in infinitely large spaces but not inside black holes can be replaced by something else.

If it can't, the observer inside should use a semiclassical description where all additional processes occur on the background of a classical black hole interior - that he may want to treat analogously to the flat space or the AdS space: but it won't give him any exact results because his new living space is not infinite.

Quite generally, it is difficult if not impossible to calculate the explicit, detailed physics of the black hole interior from the pictures we have. For example, even if you use the Euclidean black hole paradigm, it involves an analytic continuation that really instructs you to assume that the black hole interior doesn't exist at all: the geometry is smoothly "capped" at the horizon and doesn't continue behind that. In some sense, the complementarity must be explained by the fact that the whole black hole interior is the "only analytically allowed" continuation of some quantities outside the black hole but the complete picture explaining how this continuation should be calculated and why it is unique is not available yet.

But we already know a lot, for example that the postulates of quantum mechanics and the basic links between statistical physics and thermodynamics are completely general and cover the black hole physics, too.
