Wednesday, January 06, 2010

Are science and experience the same thing as "cognitive bias"?

Sabine Hossenfelder wrote another text showing her confusion about some very basic philosophical principles of science in general and physics in particular.

Every time she writes something new, I am amazed that my previous expectations - about the most trivial aspects of physics that can still be murky to some people - are surpassed by yet another record. Last time, she presented her opinion that naturalness - and therefore dimensional analysis - has no place in physics. There's no reason, she still thinks, to expect unknown dimensionless parameters to be of order one.

Needless to say, dimensional analysis and estimates based on naturalness are among the most fundamental methods of physics, if not of all science and rational thought. The areas of squares, triangles, disks, and all sufficiently non-singular shapes of diameter D scale like D^2: they are equal to D^2 multiplied by a number of order one.
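
A minimal numerical check of the order-one claim - my own illustrative sketch, with D interpreted as stated in each label (the precise convention doesn't matter for the order-one point):

```python
import math

# The area of a shape of "diameter" D (its largest width) equals D^2
# times a coefficient of order one; here are those coefficients for a
# few simple shapes, with D interpreted as stated in each label.

shapes = {
    "square (diagonal D)": 0.5,                    # side D/sqrt(2), area D^2/2
    "disk (diameter D)": math.pi / 4,              # area = pi * (D/2)^2
    "equilateral triangle (side D)": math.sqrt(3) / 4,
}
for name, coefficient in shapes.items():
    print(f"{name}: area / D^2 = {coefficient:.3f}")
```

All the coefficients land between roughly 0.4 and 0.8 - numbers of order one, exactly as the principle demands.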

In physics, a similar rule is valid not only for simple geometrical tasks but for all tasks. And it holds not only for quantities with the units of area but for arbitrary quantities. Two quantities with the same units should be of the same order, and if they're not, there must exist a qualitative explanation. The more the two values differ, the more urgent such an explanation becomes. There are many ways to see why this is so. The principle has many more or less rigorous and more or less provable implementations in various technical contexts, and it also has many profound consequences.

In particular, estimates and a qualitative identification of new phenomena would be completely impossible without such a principle. Dimensional analysis is needed to estimate whether a factor may play some role in influencing a given quantity, or whether it is negligible. Rough estimates are almost always the first step towards a more complete understanding of any new phenomenon or set of phenomena.
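
As a concrete toy example of such a first-step estimate - my own illustrative choice, not anything from Hossenfelder's text - dimensional analysis alone tells you that a pendulum's period must scale like sqrt(L/g), up to an order-one coefficient that an exact small-angle calculation reveals to be 2π:

```python
import math

# Dimensional-analysis estimate of a pendulum's period: the only
# combination of the length L [m] and gravity g [m/s^2] with units of
# time is sqrt(L/g). The exact small-angle result is 2*pi*sqrt(L/g),
# so the naive estimate is off by an order-one factor of about 6.28.

L, g = 1.0, 9.8                      # a 1-meter pendulum on Earth
estimate = math.sqrt(L / g)          # dimensional-analysis guess
exact = 2 * math.pi * math.sqrt(L / g)

print(f"estimate: {estimate:.2f} s, exact (small angles): {exact:.2f} s")
print(f"order-one coefficient: {exact / estimate:.2f}")
```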

Let me stop here because I think that a person must be severely intellectually limited not to appreciate the importance of this principle on her own, and it makes no sense to waste time with someone who just can't get this simple idea and who is arrogantly and stubbornly proud of her stupidity instead.



TRF: signal or noise?

Noise vs patterns, experience vs bias

In her new essay designed to advocate similar delusions, Ms Hossenfelder suggests that physics is "cognitively biased". If you imagine the rants by the feminists who complain that physics has been shaped by white male imperialists whose prejudices have affected the very form of the equations ;-), well, then that's pretty much what Ms Hossenfelder says, too. This is the kind of stuff that some EU countries such as Sweden subsidize as "physics".

To be more positive, she describes an interesting experiment due to Dr Peter Brugger. She dedicates several paragraphs to it but it's easy to summarize concisely: a medical test showed that self-described skeptics of paranormal phenomena are more likely to guess, when they're not given enough time to think, that a genuine pattern is just noise; believers in the paranormal are inclined to make the opposite mistake, i.e. to see patterns where there aren't any. Skeptics converged closer to believers when they were "treated" with levodopa (a precursor of the neurotransmitter dopamine).

Now, I personally believe that the result of this experiment makes perfect sense - and it is kind of obvious that these are the "biases" in the intuition of various people who are either excessively gullible and mystical or excessively cynical. And a neurotransmitter is likely to increase the feeling of a "signal" or a "harmony", especially if you don't have enough of it to start with. The biases are the reasons why some people believe too much and why others excessively deny what can be seen. Sabine Hossenfelder doesn't like this obvious explanation - which the author of the paper has offered, too.

As far as I can tell, she has no rational justification for her attitude - except for some irrelevant comments on whether or not it was published somewhere. She likes these "institutional" arguments a lot - and it's not hard to understand why. Every incompetent person who has been put in the wrong place is likely to use such arguments involving her position, because the position is the most powerful argument she can offer: her knowledge is negligible in comparison.

But she actually wants to make a much more far-reaching and much more ludicrous statement: that people should never look for any explanations of patterns (and small parameters, like those in the Standard Model) because they're subjective and they're the results of "cognitive biases", just like in this experiment. That's what she says. Science sucks. Never ask why. Patterns are dead. Holy cow squared. What a ******* *****.

Empirical experience is not just a "bias"

The experiment by Dr Brugger is interesting but we must notice that it measured the "irrational" abilities of the human subjects. It wasn't a test of the power of the scientific method: it was a test of people who were not allowed to use the scientific method - and who were not even given enough time - to answer the questions about patterns and noise.

It's obvious that the ability to "instinctively" solve certain problems is useful for people in many situations. Different people have different degrees of talent - and accumulated experience - for more or less correctly solving similar tasks. Scientists need to become able to "instinctively" solve many problems, too.

But this "intuitive" solving of problems is not yet science. The human subjects in Brugger's experiment were the objects being studied - not the authors of a scientific paper. Scientists are given more time and they are expected to solve the problems by much more robust and objective methods than quick instincts. And these methods actually exist. If done right, they work, too.

In particular, thinking people have accumulated a lot of empirical data that has been integrated into scientific theories. These scientific theories were chosen from many candidate hypotheses - they were (almost) the only ones that were not shot down, i.e. not falsified, by proven disagreements with observations.
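
To make the logic of this filtering concrete, here is a minimal sketch of falsification-as-selection - my own toy example (free fall with a timing error), not anything taken from the texts discussed above. Many candidate values of a parameter are proposed, and only those whose predictions agree with the noisy data within the error bars survive:

```python
import numpy as np

# Toy illustration of falsification-as-filtering: many candidate values
# of a parameter are proposed and only those whose predictions agree
# with the (noisy) data within the error bars survive.

rng = np.random.default_rng(0)

true_g = 9.8                               # Nature's value, hidden from us
heights = np.array([1.0, 2.0, 5.0, 10.0])  # drop heights in meters
sigma = 0.02                               # timing error in seconds
times = np.sqrt(2 * heights / true_g) + rng.normal(0.0, sigma, heights.size)

candidates = np.linspace(1.0, 20.0, 2000)  # candidate values of g
surviving = []
for g in candidates:
    predicted = np.sqrt(2 * heights / g)
    chi2 = np.sum(((times - predicted) / sigma) ** 2)
    if chi2 < 3 * heights.size:            # crude cut: not (yet) falsified
        surviving.append(g)

print(f"{len(surviving)} of {candidates.size} candidates survive,")
print(f"from {min(surviving):.2f} to {max(surviving):.2f} m/s^2")
```

The surviving candidates cluster around the true value; everything else has been falsified. Real science differs from this toy in sophistication, not in kind.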

Once we have these theories, we may predict and calculate the results of many experiments more accurately than we need in practice. For another class of experiments, the errors are larger. Yet another class of experiments can only be predicted qualitatively and the relevant quantities can only be estimated up to order-one errors. One more class of experiments may remain undecided even in your most complete theory because the question transcends the theory's domain of validity.

At any rate, scientific theories give us a framework to determine whether something is true or not, and if the question is clouded by uncertainties, the framework still allows us to say whether something is more likely or less likely. This is the most typical thing that science allows us to do because we can almost never prove that something in Nature is entirely impossible: to deny that this probabilistic reasoning based on the current knowledge is a legitimate mode of thinking is equivalent to denying the whole scientific method.

If we use some theories to explain the observations, we must often make additional assumptions - e.g. about the initial conditions and/or the values of unknown parameters. For a wrong choice of these parameters and/or initial conditions, the theories would disagree with the empirical tests. We always have and must have some a priori ideas about - i.e. probabilistic distributions for - the parameters and/or initial conditions. If it turns out that the values of the parameters and/or initial conditions needed to agree with the detailed evidence are very unlikely given our probabilistic distributions, it either means that our expectations, as encoded in the distribution, were wrong; or we had to be damn unlucky.

There's no way to deny this conclusion: once again, science would be impossible if we couldn't think in this way.

So what do we do after we accept that the parameters and/or initial conditions have to be whatever the empirical evidence requires them to be? We ask ourselves whether such values and properties are plausible.

For example, people found out that QED was working very well. To agree with all the detailed measurements of electromagnetism, they had to accept that the fine-structure constant was 1/137.036 or so. If it were 1/136.804, the theory wouldn't work equally well. Should we be shocked that it's 1/137.036 and not 1/136.804?

At some ambitious level, we can be shocked. It would be great to calculate the precise number exactly - and for many of us, this has been an important motivation to study high-energy physics. But there's really no "qualitative" difference between the numbers. We know that the fine-structure constant has to be smaller than one or so for QED to converge - or more dynamically speaking, to send the lethal Landau pole to energies that are huge enough to be irrelevant. So our a priori distribution covers the interval (0,1), roughly speaking, and 1/137.036 is a pretty random number in this interval. (The numbers get even more plausible if we express the fine-structure constant in terms of the electroweak parameters.) We can therefore say that some "complicated dynamics" to be revealed in the (far) future may allow us to calculate the number 1/137.036 exactly but for the time being, we can be moderately satisfied if we just know the value from the measurements.

However, when you encounter a similar parameter that can also a priori lie in the interval (0,1) or (0,2π) and its value required by experiments is 10^{-10}, you should be more surprised because 10^{-10} qualitatively differs e.g. from 1/136.804. It's pretty unlikely for a random number between 0 and 1 to be that small. The smallness is a new pattern that doesn't occur by chance too often. Physicists want to understand patterns.
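
To put illustrative numbers on this - a minimal sketch, with the uniform prior on (0,1) as my assumed example, matching the interval discussed above:

```python
import numpy as np

# How surprising is a measured value under a uniform prior on (0, 1)?
# P(x <= x0) equals x0 itself, so 1/137 is an unremarkable 1-in-137
# event while 1e-10 is a one-in-ten-billion event.

for x0 in (1 / 137.036, 1e-10):
    print(f"x0 = {x0:.3e}:  P(x <= x0 | uniform prior) = {x0:.3e}")

# Monte Carlo sanity check: among ten million uniform draws, the
# smallest is typically around 1e-7 - nothing close to 1e-10 appears.
rng = np.random.default_rng(1)
samples = rng.uniform(0.0, 1.0, 10_000_000)
print("smallest of 10^7 uniform draws:", samples.min())
```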

Now, indeed, there is no "exact" quantification of how serious a pattern or hint is. We are talking about methods to penetrate the fog at the boundaries of our knowledge, and pictures taken in fog are never quite sharp.

However, it's even more important that each scientist - and maybe each rational person - has to have his own, at least vague, criteria for what he considers likely and unlikely patterns and what he considers to be reasons for shock. Whatever scientific theory or method to calculate the probabilities you start with, it will produce some expectations. Whoever has no framework like that is spiritually and intellectually empty: he or more likely she knows absolutely nothing about the Universe and he or she can make no predictions and give no explanations. People don't have, can't have, and shouldn't have identical opinions about the unknown phenomena but they should have some opinions that form a relatively self-consistent whole.

Also, any such framework, as long as it is rational, will imply that a parameter that is more brutally fine-tuned (e.g. further from 1 on the logarithmic scale) is a bigger source of surprise - and a bigger reason to keep on thinking - than a parameter that is closer to one. Again, I can't tell you any objective thresholds where everyone should start to be shocked. There is no fully canonical calculation of such a threshold in a generic situation. Each person tolerates a different multiplicative factor in his or her satisfactory order-of-magnitude estimates. However, it's clear that if the estimates get too bad - e.g. by ten orders of magnitude or more - something must be wrong with our expectations.
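
One crude way to quantify "further from 1 on the logarithmic scale" - my own illustrative convention, not a canonical definition, as just emphasized:

```python
import math

# A crude measure of fine-tuning: the distance of a dimensionless
# parameter from 1 on the log10 scale. The threshold for "shock" is
# subjective; the ordering of the surprises is not.

for name, value in [("fine-structure constant", 1 / 137.036),
                    ("hypothetical tiny parameter", 1e-10)]:
    orders = abs(math.log10(value))
    print(f"{name}: {value:.3e} sits {orders:.1f} orders of magnitude from 1")
```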

And whenever something is wrong with our expectations - which is just a slightly fuzzy version of a falsification of a theory - we must try to fix it. This is what all of science is about. Hypotheses get falsified by the evidence. Proving that a theory requires fine-tuning - a highly unlikely additional assumption - to work properly is equivalent to a partial falsification of the hypothesis (which may include a framework to estimate probabilities and magnitudes of unknown effects) because the additional assumption is unlikely to be satisfied by chance, as quantified by our best current methods to estimate the probabilities.

I hope that I have convinced you that whoever denies the importance of this method - the importance of distinguishing patterns from noise and likely statements from unlikely statements - cannot possibly know anything important about science. Unfortunately, Ms Sabine Hossenfelder is extremely far from being the only one.

And that's the memo.


P.S.: I feel the urge to add one specific example that will be relevant tomorrow when another pseudoscientific book about the arrow of time will be released. Some people think that the low entropy of the early Universe is very unlikely and that this signals a problem with physics.

To make this faulty argument, they use a probability distribution for the initial states that says that high-entropy states should be much more likely (and infinite-entropy states should be infinitely likely, so that the measure cannot even be normalized to unity). If the existing, established scientific theories actually predicted that high-entropy initial states were much more likely, the observed low-entropy beginning would be a fine-tuning problem.

Except that they obviously don't imply anything like that. What's wrong in the "fine-tuning" argument above is the assumption that all microstates are equally likely in the past. The only situation in which microstates can be argued to be equally likely is the (far) future of a physical system, after enough time has passed for thermalization to occur. You can't prove the same thing about the past before thermalization could have taken place, or even about the beginning of the Universe. You can't prove it because it's not even right. It's obviously wrong.
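
A toy simulation - my own minimal sketch using the Ehrenfest urn model, not anything from the book in question - shows the asymmetry: start from a low-entropy initial condition, apply mixing dynamics, and the coarse-grained entropy climbs toward its equilibrium maximum. The "all microstates are equally likely" measure describes the thermalized future, not the initial state:

```python
import math
import random

# Ehrenfest urn model: N balls in two boxes; at each step, a randomly
# chosen ball jumps to the other box. Starting from the lowest-entropy
# macrostate (all balls in box A), the coarse-grained entropy
# S(k) = ln C(N, k) climbs toward its equilibrium value at k = N/2.

N = 100
random.seed(0)
in_box_a = [True] * N                      # low-entropy initial condition

def entropy(k):
    """Boltzmann entropy of the macrostate with k balls in box A."""
    return math.log(math.comb(N, k))

for step in range(501):
    if step % 100 == 0:
        k = sum(in_box_a)
        print(f"step {step:3d}: k = {k:3d}, S = {entropy(k):6.2f}"
              f"  (equilibrium S = {entropy(N // 2):.2f})")
    ball = random.randrange(N)
    in_box_a[ball] = not in_box_a[ball]
```

Running the evolution forward from the low-entropy state reproduces the second law; imposing the uniform measure at the initial time would instead "retrodict" a high-entropy past, in contradiction with everything we know.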

The right fix is to use a better probability distribution for the initial states, one that implies that the entropy was actually of order one - which is remotely similar to other, well-known examples of naturalness. If your measure - the one defining your expectations - is such that the states with an entropy of order one (or any low enough entropy, for that matter) dominate the probability of the initial state, which is natural, everything is compatible with all observations we have made. And it really doesn't matter which value of the order-one initial entropy you use.

In fact, it doesn't matter what you say about the Big Bang at all. The thermodynamic phenomena in your lab clearly have nothing to do with cosmology; they're all about the statistical properties of the atoms in the same lab.

But there are other situations where the expectations follow from much more successful and established principles - and the natural predictions don't agree with the observations unless we fine-tune things brutally. Those simply are problems waiting to be solved.
