Thursday, May 2, 2013

The Reference Frame: Aaronson's anthropic dilemmas

If you read my previous observations on Scott Aaronson's book, including all the comments, you will see my remarks about all the chapters up to Chapter 15 on the quantum computation skeptics, where I agree with almost everything Aaronson writes, although he seems to focus on the dumb criticisms and writes too little about the more intelligent ones (e.g. about the error-correcting codes).

Chapter 16 is about learning; there is perhaps too much formalism compared with the relatively modest implications for our understanding of the process of learning.

Chapter 17 is the most hardcore "computational complexity" part of the book and hopefully the last one that is intensely focused on the complexity classes. It's about interactive proof systems. Aaronson often wants to present all of computer science as a "fundamental scientific discipline", so he tries to apply these superlatives to the aforementioned "interactive issues", too.

I have a lot of trouble getting excited about these problems.

In an interactive proof system, two beings, a verifier and a prover, exchange messages whose goal is to ascertain whether a given string belongs to a language or not. The prover cannot be trusted while the verifier only has finite resources. It looks like an immensely contrived game (in the game-theoretic sense) to me. Detailed questions about such a game seem about as non-fundamental to me as the question whether chess is a draw.

The only true reason why I would want to prove \(P=NP\) or its negation (or even the numerous less important results of this sort) would be to collect the million dollars.

Needless to say, I think that Scott Aaronson is one of the world's top professionals in computational complexity theory, and I think that the quantum aspect is an optional cherry on the cake for him, an extra X-factor he adopted to feel rather special among the computational complexity theorists themselves.

But for me, this is a portion of mathematics that is completely disconnected from the fundamental problems of natural sciences. I like to think about important scientific problems. But the complexity papers aren't really about the beef, about particular problems. They are thinking about thinking about problems, and they don't really care what the "ultimate" problems are and whether their answers are true (e.g. in Nature). In this sense, suggesting that this is a fundamental layer of knowledge about the world or existence is as silly as the proclamations of anthropologists who study dances of wild tribes in the Pacific but who also try to study the interactions among scientists. These anthropologists are trying to put themselves "above" the physicists, for example, even though in reality they are vastly inferior in comparison: people who completely miss the beef of physics and who may only focus on the irrelevant, superficial, sociological makeup on the surface. In some sense, Scott as a computational complexity theorist is doing the same thing as the anthropologists, but with more mathematical rigor. ;-)

Moreover, computational complexity theory seems to be all about a particular "practical" quantity I don't really care about much, namely computational complexity itself. I am probably too simple a guy, but I primarily care about the truth, especially the truth about essential things, and I don't really care how hard it is to find or establish that truth. So the whole categorization of problems into polynomially or otherwise easy ones (and Aaronson defines dozens of complexity classes and discusses their relationships) is just orthogonal to the things I find most important.

But let me stop with these negative-sounding remarks about the discipline. Computer science is surely a legitimate portion of mathematics and Aaronson talks about it nicely.

Chapter 18 is about "fun with the anthropic principle".

This part of the book doesn't need any physics background, because this principle, as used by some physicists, isn't about any scientific results, either. It's about their emotional prejudices and unsubstantiated beliefs in proportionality laws between probabilities and souls (which boil down to the fanatical egalitarianism of many of these folks).

The chapter is at least as wittily written as the rest of the book. The end of the chapter talks too much about complexity again but let's focus on the defining dilemmas in the early parts of the chapter. After a sensible introduction to Bayes' formula and its simple proof, Aaronson talks about some characteristic problems in which people's attitudes to the anthropic reasoning dramatically differ.
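For the reader's convenience, the formula in question, for a hypothesis \(H\) and evidence \(E\), reads\[
P(H|E) = \frac{P(E|H)\,P(H)}{P(E)},
\] where the denominator may be expanded as \(P(E)=\sum_i P(E|H_i)\,P(H_i)\) over mutually exclusive hypotheses \(H_i\). The whole dispute below is about what should be substituted for \(E\) and therefore for \(P(E|H)\).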

Hair colors in the Universe

At the beginning, God flips a fair coin. If the coin lands heads, He creates two rooms: one with a red-haired person and one with a green-haired person. If it lands tails, He creates just one room with a red-haired person.

You find yourself in a room with mirrors and your task is to find the probability that the coin landed heads. Well, you look into the mirror that's a part of each such room. If you see you are green-haired, the probability is 100% that the coin landed heads because the other result is incompatible with the existence of a green-haired person.

What if you see you are a redhead?

A natural (and right!) solution, one mentioned at the beginning, is that the probability is 50% that the coin landed heads. The existence of a redhead is compatible with both theories (heads/tails) so you are learning nothing if you see a redhead in the mirror. You should therefore return to the prior probabilities and both theories, heads and tails, have 50% odds by assumption.

In my opinion (LM), this is really the most correct calculation and justification one may get. I tried to "improve" Aaronson's justification a bit.

Now, one may also (incorrectly!) argue that the probability of heads is just 1/3 instead of 1/2 if we see a redhead. In this argument, the tails hypothesis is twice as likely, 2/3, as the heads hypothesis because (and again, this is an explanation using my language) it makes a more nontrivial, yet correct, prediction of the observed hair color. The heads hypothesis allows both colors, so the probability that "you" will be the person with the red hair color is just 1/2.
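The gap between 1/2 and 1/3 is exactly the gap between two different conditioning events, and a toy Monte Carlo simulation (my own sketch, not Aaronson's) makes this explicit:

```python
import random

random.seed(0)
TRIALS = 200_000

heads_and_redhead_exists = 0   # heads AND the world contains a redhead
redhead_exists = 0             # worlds containing at least one redhead
heads_and_sampled_red = 0      # heads AND a uniformly sampled inhabitant is red
sampled_red = 0                # a uniformly sampled inhabitant is red

for _ in range(TRIALS):
    heads = random.random() < 0.5
    people = ["red", "green"] if heads else ["red"]

    # Conditioning A: "the Universe contains at least one redhead"
    # (true in every world of this setup)
    if "red" in people:
        redhead_exists += 1
        if heads:
            heads_and_redhead_exists += 1

    # Conditioning B: "a uniformly chosen inhabitant turns out to be red"
    if random.choice(people) == "red":
        sampled_red += 1
        if heads:
            heads_and_sampled_red += 1

p_A = heads_and_redhead_exists / redhead_exists  # about 1/2
p_B = heads_and_sampled_red / sampled_red        # about 1/3
print(f"P(heads | a redhead exists)      = {p_A:.3f}")
print(f"P(heads | random inhabitant red) = {p_B:.3f}")
```

Conditioning A treats the observation as "this Universe contains a redhead" and returns the prior 1/2; conditioning B smuggles in the extra assumption that "I" am a uniformly sampled inhabitant and returns 1/3.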

But I believe this argument is just wrong. It doesn't matter how predictive the hypotheses are! By assumption, the prior probabilities of heads and tails were 50% each. The tails hypothesis is more predictive because it allows you to unambiguously predict your hair color: it has to be red because you're the only human in that Universe. But we know that this doesn't increase the probability of tails above 50%.

For that reason, we also don't need an additional "adjustment" of the argument (an adjustment which is wrong by itself as well) that returns the value 1/3 back to 1/2. We may return from 1/3 to 1/2 if we give the Universes with larger numbers of people, in this case the heads Universe, a higher "weight". But there is no reason to adjust these weights. The point is that the prior probabilities of heads and tails are completely determined here by an assumption, so any inequivalent "calculation" of these prior probabilities based on the number of people in the Universe is wrong. We just know it to be wrong. We're told it is wrong!

Aaronson "calculates" the value 1/3 of the probability by Bayes' formula. But the calculation is just conceptually wrong because the prior probabilities of heads and tails are given as 1/2 vs 1/2 at the very beginning, and the observation of a redhead provides us with no new data and no room to update the probabilities of the hypotheses. The observation of a greenhead does represent new data. The arguably invalid update in the case of the observation of a redhead plays one role: to counteract the update from the green observation so that the probability of heads, weighted-averaged over the people in the Universe, will remain equal to the probability of tails. But it's not the redhead's "duty" to balance things in this way. By seeing his red color, he just learns much less information about the Universe than the greenhead (namely nothing), so he has no reason to update.

Using slightly different words, I may point to a very specific error in the Bayesian calculation leading to the result 1/3, too. Aaronson says that the probability \(P({\rm redhead}|{\rm heads})\) is equal to 1/2, probably because in the two-colored heads Universe, there are two folks and they have "the same probability". But that's a completely wrong interpretation of the quantity that should enter this place in Bayes' formula. The factor \(P(E|H)\) that appears in the formula should represent the probability with which the hypothesis \(H\) predicts some property of the Universe we have actually observed, namely the evidence \(E\). And what we have observed isn't that a random person in the Universe is a redhead. Instead, we have observed that our Universe contains at least one redhead; in particular, the predicted probabilities \(P({\rm redhead}|{\rm heads})+P({\rm greenhead}|{\rm heads})\) don't have to add up to one because both "redhead" and "greenhead" refer to the observation of at least one human of the given hair color, so these two colorful observations are not mutually exclusive. (You had better avoid propositions with the word "I" because this word is clearly ill-defined across the Universes; there's no accurate "you" or "I" in a completely different Universe than ours because the identification of the right Universe around you is a part of the precise specification of what "I" or "you" means; you should treat yourself as just another object in the Universe that may be observed, otherwise you may be driven into spiritually motivated logical traps.) The probability of this actual observation, the evidence, is predicted by the heads hypothesis to be 1, not 1/2. With the correct value 1, we get the correct final value 1/2 for the probability that the heads scenario is right!
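In formulas: if the evidence is \(E\) = "the Universe contains at least one redhead", then both hypotheses predict \(E\) with certainty, and Bayes' formula gives\[
P({\rm heads}|E) = \frac{1\cdot\frac12}{1\cdot\frac12 + 1\cdot\frac12} = \frac12,
\] whereas substituting Aaronson's \(P(E|{\rm heads})=\frac12\) into the same formula is what produces the disputed value \(\frac{\frac12\cdot\frac12}{\frac12\cdot\frac12+1\cdot\frac12}=\frac13\).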

I must mention the joke about the engineer, the physicist, and the mathematician who see a brown cow from a train. The first two guys say some sloppy things: cows are brown here (engineer); at least one cow is brown here (physicist). But the mathematician says that there's at least one cow in Switzerland that's brown on at least one side. This is the correct interpretation of the evidence! The situation in the previous paragraph is completely analogous. (There's a difference: people are less afraid to make unjustifiable and/or wrong propositions that are probabilistic in character, e.g. "I am generic", than Yes/No statements about facts that are "sharply wrong" if they're wrong. But probabilistic arguments and conclusions are often wrong, too!) I am surprised that even Scott Aaronson either fails to distinguish the different statements or deliberately picks one of those that actually don't follow from the observations! This is the kind of elementary-schoolkid mathematical sloppiness that powers most of the anthropic reasoning.

In the end, the error of the Bayesian calculation may also be rephrased as its acausality. It effectively assumes that the probabilities of different initial states are completely adjustable by some backward-in-time notions of randomness even though they may be determined by the laws of physics, and by the very formulation of this problem, they are indeed determined by the laws of physics in this scenario!

Madman

A madman kidnaps 10 people, puts them in a room, throws two dice, and if he gets 1-1 (snake eyes), he kills everyone. If he gets something else, he releases everyone, kidnaps 100 other people, confines them, and throws again. Again, 1-1 means death for everyone; another result means that the 100 people are released and 1,000 new people are kidnapped. And so on, and so on.

You know the rough situation and you know that you're kidnapped and confined in the potentially lethal room (but you don't know whether some people have already been released). What's the probability that you will die now?

Obviously, you know the whole mechanism of what will happen. He will throw the dice. The probability of getting 1-1 is obviously 1/36. That's the chance that you will die.

Aaronson presents a different, "anthropic" calculation telling you that the chances to die are vastly higher, essentially 8/9. Why? Well, the madman almost certainly releases the first 10 people and then probably the 100 people as well, etc., but at some moment he sees snake eyes, so, for example, he kills 100,000 people after having released the previous 10,000+1,000+100+10 = 11,110 people. Among the folks who have ever been confined in the scary room, about 100,000/111,110 ≈ 9/10 of them die. (The precise fraction, 8/9 or 9/10, depends on the growth factor of the batches.) So this could be your chance to die; the ratio doesn't seriously depend on the number of people who die as long as it is high enough.

Which result is correct? Aaronson remains ambiguous, with some mild support for 8/9. I think that the only acceptable answer is 1/36. The argument behind the huge death probability is completely flawed. It effectively assumes that you're a "generic" person among those who are kidnapped on that day, i.e. that there's a uniform distribution over those people. But that's not only wrong; it's mathematically impossible.
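Both numbers can be reproduced in a toy simulation (my own sketch; the hard cap on the number of rounds stands in for the finite population of the Earth, without which the "average fraction" would be ill-defined):

```python
import random

random.seed(1)
TRIALS = 100_000
MAX_ROUNDS = 30  # a finite population forces some cap; the expected head-count diverges otherwise

first_batch_deaths = 0  # fate of one specific person kidnapped in the very first batch
fractions = []          # per completed run: fraction of all ever-confined people who die

for _ in range(TRIALS):
    total_confined = 0
    for n in range(1, MAX_ROUNDS + 1):
        total_confined += 10 ** n            # batch n contains 10^n hostages
        snake_eyes = random.randint(1, 6) == 1 and random.randint(1, 6) == 1
        if snake_eyes:
            if n == 1:
                first_batch_deaths += 1      # the round-1 person dies only on an immediate 1-1
            fractions.append(10 ** n / total_confined)
            break

p_individual = first_batch_deaths / TRIALS     # close to 1/36 ~ 0.028
p_fraction = sum(fractions) / len(fractions)   # close to 0.9 for these tenfold batches
print(p_individual, p_fraction)
```

The causal, per-person probability stays at \(1/36\), while the population-weighted average over runs that end in snake eyes lands near 0.9: the two numbers answer different questions, and only the first one answers "what is the chance that I die now?".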

The average number of people who will die is\[

\sum_{n=1}^\infty 10^n \left(\frac{35}{36}\right)^{n-1} \frac{1}{36},

\] but this series is divergent because the ratio of consecutive terms is \(q=350/36 > 1\). Chances are nonzero that the madman will run out of people on Earth and won't be able to follow the recipe. At any rate, the reasoning behind \(p=8/9\) strongly assumes that the geometric character of the sequence remains undisturbed even when the number of the hostages is arbitrarily large. It effectively forces us to deal with an infinite average number of people, and there's no uniform measure on infinite sets because there exists no \(P\) such that \(\infty\times P = 1\).

I think that this is not just some aesthetic counter-argument. It's an indisputable flaw in the calculation behind \(p=8/9\), and the latter result must simply be abandoned. In this case, we know very well that it's wrong. If the madman tries to causally stick to his recipe for as long as possible, the probability for each kidnapped person to die is manifestly \(p=1/36\).

The wrong, anthropic results often rest on unjustified calculations based on the "genericity" of the people: assumptions that some probability measures are uniform even though there is absolutely no basis for such an assumption, and in our scenario, this uniformity assumption explicitly contradicted some assumptions that were actually given to us! And the anthropic arguments also tend to make acausal considerations.

Doom Soon and Doom Late

This is also the case with the "doomsday is probably coming" argument. Imagine that there are two possible worlds. In one of them, the doom arrives when the human population is just somewhat higher than 7 billion (Doom Soon). In the other one (Doom Late), the population reaches many quintillions (billions of times larger than the current population).

Again, just like in the hair color case, if we have reasons to expect that the prior probabilities of both worlds are equal or comparable, then we have no justification to "correct" or "update" these probabilities. The existence of 7 billion people is compatible both with Doom Soon and with Doom Late. So both possible scenarios remain equally or comparably likely!

The totally irrational anthropic argument says that Doom Soon is 1 billion times more likely because it would be very unlikely for us to be among the first 7 billion, i.e. one billionth of the overall human population throughout history. This totally wrong argument says that we're observing something that is unlikely according to the Doom Late scenario (only 1/1,000,000,000 of all of history's people have lived so far) and that our belief that we live in the Doom Late world must therefore be reduced by a factor of one billion, too.

That's wrong and based on all the mistakes we have mentioned above, and more. The main mistake is the acausality of this would-be argument. The argument says that we are "observing" quintillions of people. But we are not observing quintillions of people. We are observing just 7 billion people. If the Doom Late hypothesis is true, one may derive that mankind will grow by another factor of one billion. But if we can derive it, then it's not unlikely at all that the current population is just 1/1,000,000,000 of the overall history's population. Instead, it is inevitable: \(p=1\). So the suppression by the factor of 1 billion is completely irrational, wrong, idiotic, and stupid.
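The same point in formulas: with the evidence \(E\) = "at least 7 billion humans have been born", both hypotheses predict \(E\) with probability 1, so\[
P({\rm Doom\ Late}|E) = \frac{1\cdot\frac12}{1\cdot\frac12+1\cdot\frac12} = \frac12,
\] and the factor-of-a-billion suppression only appears if one replaces \(P(E|{\rm Doom\ Late})=1\) by the self-sampling quantity \(P(\text{"I" am among the first } 7\times 10^9 \,|\, {\rm Doom\ Late})\approx 10^{-9}\).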

The only theory in which it makes sense to talk about quintillions of people, the Doom Late theory, makes it inevitable that the people aren't distributed uniformly over time. Instead, they live in an exponentially growing tree. So there's manifestly no "intertemporal democracy" between them that could imply that we're equally likely to be one of the early humans or one of the later ones. We're clearly not. It is inevitable that at most moments of such a Universe's history, the number of people who have already lived is a tiny fraction of the cumulative number of people in the whole history (including the future).
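A one-line estimate shows how lopsided the tree is: if generation \(k\) contains \(g^k\) people for some growth factor \(g>1\), then after \(n\) generations the cumulative population is\[
\sum_{k=0}^{n} g^k = \frac{g^{n+1}-1}{g-1},
\] so the final generation alone makes up a fraction of roughly \((g-1)/g\) of everyone who has ever lived, while any early generation is an exponentially tiny sliver of the eventual total.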

Aaronson offers another idiotic argument that may sometimes be heard. A valid objection to the Doom Soon conclusion is that the same argument could have been made by people in a world whose population was just 1 million or another small number, e.g. by the ancient Greek philosophers. And they would have been wrong: the doom wasn't imminent. Aaronson says that it doesn't matter because "most" of the people who make the argument are right.

But again, this is completely irrelevant. Whether most people say something is an entirely different question from whether it's right. And indeed, in this particular case, we may show that the probability is very high that the "majority" that uses the anthropic arguments is wrong! What's important is that the methodology or logic leading to the "doomsday is coming" conclusion is invalid as a matter of principle. It doesn't matter how many people use it! One can't or shouldn't invent excuses for why these arguments are flawed by saying that some quintillions of completely different (and much less historically important, per capita) people at a different time would reach a valid conclusion. I don't care. I want to reach a correct conclusion myself and I don't give a damn whether some totally different people are right. Of course they're mostly wrong.

Anthropic principle and a loss of predictivity: what is the real problem?

At the end, it's mentioned that the anthropic principle is often criticized for its inability to predict things. It's indeed unfortunate if a theory makes no predictions. But that's not a valid logical argument against a theory. The correct theory may make far fewer, less accurate, or less unambiguous predictions than some people might hope!

The actual problem, one that may be used as an argument against the anthropic principle, is sort of the opposite one. A valid argument is that the alternative explanations that are more accurate, tangible, and predictive have not been excluded. There may be an old-fashioned calculation of the value of the cosmological constant, \(\Lambda\sim 10^{-123}\). And science proceeds by the falsification of wrong theories, not by "proofs" of correct theories.

We know that the anthropic explanation would have been wrong as an explanation of other, by now "materialistically" understood, features of Nature, simply because we have better explanations that we know to be much more likely to be true than the anthropic one. And the same thing may happen, and I think is likely to happen, in the future, too. If you can't really show that this expectation is wrong, you shouldn't pretend that you have proved it!

Perhaps science will be forced to switch to anthropic arguments because beyond a certain point, there just won't be any old-fashioned explanations left. Maybe quintillions of people will live in that future world, and the claim that "the open problems are explained anthropically" will therefore be true for a "majority" of the mankind that will have lived throughout history. But that won't change the much more important fact that the anthropic principle will have been wrong throughout the whole previous history of physics.

Aaronson is clearly close to all the anthropic misconceptions discussed above, which may be correlated with herd instincts, mass hysterias, "consensus science", and other pathologies. This is also manifest in his humiliating comments about the role of Adam and Eve. Well, I don't want to discuss the literal interpretation of the Bible, which I don't believe, of course. But he wants to suggest that there is some uniform measure that makes it less likely to "feel that I am an early human".

But this is just totally wrong. There is absolutely no justification for such a uniform measure, and because the population was growing pretty much exponentially (demonstrably so), this fact indeed pretty much allows us to prove that each early human was exponentially more important than the current ones and that we're more important than the future ones.

In recent years, I got sort of interested in history, e.g. local history, and I studied the villages etc. that existed on the territory of Pilsen and in its vicinity. There were just hundreds of people and a few lousy houses, and the folks had almost nothing, but they were clearly very important because the hundreds of thousands of people who live here today have arisen from that small number of ancestors. So each of the ancestors is just much more important than an average contemporary human in the overall historical scheme of things. "Adam and Eve" were clearly even more important, if I may express it in this way.

If we divide some consciousness or soul or something based on spiritual importance, it's totally plausible to say that Adam and Eve (plus Jesus and His close relatives, or whoever counts) have 50% of it and the rest is divided among later humans, if you allow me to express the point more concretely than what is really possible. The argument "I can't be special or early because it is unlikely due to some uniform measure on the history's humans" is completely wrong. It is acausal, it uses mathematically non-existent measures, and it uses uniform measures that have no justification and that sometimes contradict legitimately calculable measures.

So I agree with Scott Aaronson that anthropic reasoning may be defined as the part of probability theory that is more about feelings and opinions than about solid results. Well, most of the people, including Aaronson himself, clearly end up with completely wrong arguments and results, which is just another way to disprove the anthropic principle. ;-) I can't be generic because almost all people seem to be morons. In fact, even these would-be generic people may use the same argument, because almost all of these stupid folks are still much smarter than the generic insects and bacteria that are far more numerous. The whole idea of "considering oneself generic in a set" is just a way to contaminate a correct or rational argument or result with an incorrect or irrational one that is believed by the inferior life forms.

Source: http://motls.blogspot.com/2013/05/aaronsons-anthropic-dilemmas.html

