Comments on “Bayesianism is an eternalism”

Comments

Bayesianism

Bad Horse

Well, this is a very odd thing to say. Bayesianism is an empiricist approach, one based on observations. Empiricism is the polar opposite of rationalism.

I think you need to discriminate between Bayesian math and Bayesian metaphysics.

Bayesian math is /correct/. It’s not something you can disagree with. It doesn’t have a metaphysics, because it doesn’t make claims about the real world.

Bayesian metaphysics is when mathematicians put on a philosopher’s hat and argue about how it applies to the real world. I haven’t bothered to see what they claim, because I don’t care, because the math works, and because I’m an empiricist, and in practice, decisions made using Bayesian analysis don’t change depending on your metaphysical claims about it.

Bayesian math is an explicitly empiricist approach. Bayesian metaphysics is what rationalists make up to try to fit it into their worldview. This often happens because Bayesians are usually mathematicians, and before statistics, all mathematicians were strict rationalists.

Bayesianism doesn’t try to rescue eternalism’s promise of certainty. I think you’re misinterpreting “convergence theorems”, which argue that Bayesian posteriors approach 1 or 0 in the limit. But that phrase “in the limit” implies “never”. A limit is a thing you never reach, and never can in this Universe, which has only finite energy and hence permits only a finite maximum number of computations. Bayesianism actually proves you can /never/ have certainty, because that would require either beginning with at least one prior of 0 or 1 (which merely assumes the thing you’re trying to prove certain), or acquiring an infinite amount of information (which is impossible; see Finite Universe).
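To make the “in the limit” point concrete, here is a minimal sketch (my own, with invented likelihoods) of repeated Bayesian updating: after any finite number of observations the posterior stays strictly between 0 and 1, unless the prior was already 0 or 1.

```python
def update(prior, likelihood_h, likelihood_not_h):
    """One step of Bayes' rule for a binary hypothesis H."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

belief = 0.5                    # any prior strictly between 0 and 1
for _ in range(20):
    # each (hypothetical) observation is twice as likely under H as under not-H
    belief = update(belief, likelihood_h=0.8, likelihood_not_h=0.4)

print(belief)                   # ~0.999999: close to 1, never equal to it
print(update(1.0, 0.8, 0.4))    # a prior of exactly 1 just returns 1: certainty assumed, not proven
```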

It is one of several approaches that show the correct way to integrate math and epistemology: “Mathematical certainty” applies to formal systems. Statistical epistemology, of which Bayesianism is just one kind and one part, isolates the propositions that can be expressed in a formal system, keeping track of the assumptions made in doing so. It then proves propositions about that formalization. When you finish an analysis, you have a provable statement about a formal system, but you can never prove that the formalization characterizes the real world. Frequentist statistics does the same thing; it just comes up with different types of propositions, involving things like confidence intervals. Again, frequentist /statistics/ should be separated from frequentist /metaphysics/; the math is a formal system, and doesn’t care what you think it means about the real world.
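As a minimal sketch of the two kinds of propositions (with made-up data): a frequentist confidence interval and a Bayesian posterior summary, both derived from the same formal model of independent Bernoulli trials. The numbers and the uniform prior are assumptions of mine, chosen only for illustration.

```python
import math

n, k = 100, 62          # hypothetical trials and successes

# Frequentist proposition: an approximate 95% confidence interval for p (Wald interval).
p_hat = k / n
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print("95% CI:", (p_hat - half_width, p_hat + half_width))

# Bayesian proposition: with a uniform Beta(1, 1) prior, the posterior for p
# is Beta(1 + k, 1 + n - k); report its mean as a point summary.
a, b = 1 + k, 1 + n - k
print("posterior mean:", a / (a + b))

# Both outputs are theorems about the Bernoulli model. Neither can certify
# that the trials really were independent with a fixed p.
```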

The prominent online LessWrong community is both Bayesian and rationalist. That, however, is a historical accident owing to the rationalism of its founder, Eliezer Yudkowsky.

Avoiding metaphysics

There is much here we agree about. The page you commented on is explicitly a stub; it does not yet make the argument, nor define the terms.

I agree that it’s important to distinguish Bayesian math (which is just math, like any other formal system) from Bayesian metaphysics (which is about how the formalism relates to the world). By “Bayesianism” I’m referring to the metaphysics only. This is a common usage. Another common way to use the word is to refer to claims that one should use Bayesian statistics rather than frequentist statistics. That’s not what I’m referring to, and it’s not an issue I care about.

I am also using “rationalism” in a particular way, as including empiricism. This is also a common usage (probably the most common nowadays). See discussion here.

We probably disagree on substance at only two points (important ones, though). First:

I’m an empiricist, and in practice, decisions made using Bayesian analysis don’t change depending on your metaphysical claims about it.

To apply Bayesian math to the real world, you have to assume some metaphysics. If you ignore this, you implicitly choose some default metaphysical assumptions, which may be badly wrong. In a typical analysis, you assume a space of possible events, a set of variables that are evidence for those events, and a set of possible values for each variable. Unless you are doing experiments at the quantum level, these are not True about the world. They are a metaphysical model. Your analysis can only be as good as the (implicit) assumptions that went into making it.
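As a minimal sketch (all numbers invented): a textbook-style diagnostic-test update, in which the event space, the evidence variable, and the likelihoods are all modeling assumptions supplied before any arithmetic happens.

```python
# All numbers invented. The "metaphysical" choices live in the data structures:
# a fixed space of possible events, one evidence variable, assumed likelihoods.
hypotheses = {"disease": 0.01, "no_disease": 0.99}            # assumed event space and prior
likelihood_positive = {"disease": 0.95, "no_disease": 0.05}   # assumed test behaviour

# Bayes' rule, given that the evidence variable took the value "positive".
unnormalised = {h: p * likelihood_positive[h] for h, p in hypotheses.items()}
total = sum(unnormalised.values())
posterior = {h: v / total for h, v in unnormalised.items()}
print(posterior)   # correct arithmetic, but only relative to the assumed model

# If the world includes a possibility the model omits (say, a faulty test
# batch), no amount of correct updating within this space will reveal it.
```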

I wrote about that informally in “The probability of green cheese”; check it out if you’d like further discussion. It gives an example in which the relationship between the math and the world is highly problematic and cannot be ignored.

Second:

Bayesianism doesn’t try to rescue eternalism’s promise of certainty.

As mentioned, this is at the meta level. It acknowledges that object-level propositions cannot be certain, but tries to deliver optimality guarantees. It cannot actually do that when applied in practice (because metaphysics). I believe that the pretense of optimality is an emotional substitute for object-level certainty. (I have only anecdotal evidence for this.)
