Comments on “The probability of green cheese”

Comments

Outputs as "probabilities"

Peter Corbett

I’ve perpetrated more than enough machine learning systems in my time that spit out “probabilities”; that is to say, there was once a justification for the method that involved some probabilistic reasoning and some very dodgy assumptions (often known to be false), but that justification was far in the background by the time the computer produced its results. Quite apart from the issue of not taking all factors into account, these “probabilities” would often be wildly overconfident, or sometimes underconfident. Often I would use the euphemism “scores”, saying, “this may well correlate with the thing you’re interested in, but beyond that…”

Naive Bayes in particular is known for producing “probabilities” very close to 0 or 1 which are incredibly badly calibrated, but if you ignore the “probability” rubric and just pick a threshold, the results are… kind of OKish by machine learning standards.
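To make that concrete, here is a rough illustrative sketch (my own toy example on a synthetic dataset; the exact numbers are arbitrary): Gaussian Naive Bayes piles its “probabilities” up near 0 and 1, yet thresholding them as plain scores still classifies reasonably well.

    # Illustrative toy example: Naive Bayes "probabilities" cluster near
    # 0 and 1 (badly calibrated), but thresholding them still works OK.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    # Synthetic data; correlated features violate the independence assumption.
    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    p = GaussianNB().fit(X_train, y_train).predict_proba(X_test)[:, 1]

    # Most predicted "probabilities" are extreme...
    print("fraction outside [0.05, 0.95]:", np.mean((p < 0.05) | (p > 0.95)))
    # ...but used as bare scores with a 0.5 threshold, accuracy is OK-ish.
    print("accuracy at threshold 0.5:", np.mean((p > 0.5) == y_test))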

The problem of the priors

Bunthut

I think you misunderstand subjective Bayesianism. You treat priors as a sort of parameter that needs to be set to the right value so that the method gives the right results. But in terms of Bayesian epistemology, your prior is just what you currently believe. What if you’re wrong? That’s fine: a prior doesn’t need to be correct. The most correct prior would be to already know the answer, in which case why bother updating? The entire point of learning from experience is that priors don’t have to be correct; they can improve with learning.
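Here is a minimal sketch of that point, under toy assumptions I made up for illustration (a coin whose true heads-probability is 0.7): two agents starting from very different Beta priors, one of them badly wrong, end up at essentially the same posterior once evidence accumulates.

    # Illustrative toy example: priors need not start out "correct";
    # conjugate Beta-Bernoulli updating pulls different priors together.
    import random

    random.seed(0)
    true_p = 0.7  # assumed true heads-probability (arbitrary choice)

    flips = [random.random() < true_p for _ in range(1000)]
    heads = sum(flips)
    tails = len(flips) - heads

    # Two different priors: uniform, and one badly skewed toward tails.
    priors = {"uniform Beta(1,1)": (1, 1), "wrong Beta(1,10)": (1, 10)}

    for name, (a, b) in priors.items():
        # Posterior is Beta(a + heads, b + tails); print its mean.
        post_mean = (a + heads) / (a + heads + b + tails)
        print(f"{name}: posterior mean = {post_mean:.3f}")

Both posterior means come out near 0.7; after enough data, the choice of (non-dogmatic) prior barely matters.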

The idea of “justified” priors just sort of echoes the “what if I’m wrong?” reaction, but unlike “correct”, its content is unclear, which makes it hard to give the sort of one-paragraph explanation of why that intuition is unhelpful.

I think the reason it seems justified to have a prior of 1/6 on each outcome of a die roll, and unjustified to be sure it’s 5, is a long chain of experience suggesting we can’t predict the result of a die more accurately than 1/6 before it happens. This is bound up with the idea of justification because we often want to be able to rely on others’ reasoning, so we expect them to be able to show how they got to their conclusions from shared priors.

Of course, you can’t always do this; in some (many) cases it’s just too complicated, or you don’t know of any shared priors. But this only means you can’t justify yourself to others, not that Bayesian methods aren’t applicable. The idea of requiring justification for everything ends up, somewhat surprisingly, being a mistake, just like moving the entire earth east.

In case my explanation was bad, here is someone else’s.

No one knows what their priors are

badocelot

Bunthut,

The problem I see with subjective Bayesianism is that no one knows what their priors are in the vast majority of cases. For example, what is your prior for the proposition that I was born on Mars? No fair saying “extremely low” or some other qualitative answer, unless you can point me to a well-developed qualitative version of Bayesianism that maintains something like the quantitative version’s rigor.
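To spell out why a qualitative answer won’t do (my own toy arithmetic, with a made-up likelihood ratio): Bayes’ rule multiplies prior odds by a likelihood ratio, so two priors that both count as “extremely low” can leave you with wildly different posteriors.

    # Illustrative arithmetic: "extremely low" is not enough for Bayes' rule.
    # Suppose some evidence favors "born on Mars" by a (made-up) factor of 1e6.
    likelihood_ratio = 1e6

    for prior in (1e-6, 1e-12):  # both plausibly count as "extremely low"
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        print(f"prior {prior:g} -> posterior {post_odds / (1 + post_odds):.6f}")

The same evidence leaves one reader at about 50% and another at about one in a million; without an actual number, the machinery can’t run.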
