Probabilistic rationality was originally invented to choose optimal strategies in betting games. It’s perfect for that—and less perfect for other things.

If a game is fair, you have perfect knowledge of:

- everything you and the other players can do (a small number of different types of *actions*)
- everything that can happen as a result (a small number of different types of *outcomes*)
- how much you will win or lose in every possible outcome (*payoffs*).

Also, you can estimate how probable each outcome is based on how the game has gone so far, if this is not a given.
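Within this idealized setting, choosing an action reduces to a short expected-value calculation over the enumerated outcomes. A minimal sketch, using an invented dice game (all names, odds, and payoffs here are illustrative, not from the text):

```python
# Illustrative only: a tiny betting game in the formal sense described
# above. Actions, outcomes, and payoffs are explicitly enumerated, and
# each outcome has a known probability. All numbers are invented.

# action -> {outcome: (probability, payoff)}
game = {
    "bet_six_at_4_to_1": {"six": (1/6, 4.0), "not_six": (5/6, -1.0)},
    "bet_six_at_6_to_1": {"six": (1/6, 6.0), "not_six": (5/6, -1.0)},
    "pass":              {"six": (1/6, 0.0), "not_six": (5/6, 0.0)},
}

def expected_payoff(action):
    """Expected value of an action: sum over outcomes of probability * payoff."""
    return sum(p * pay for p, pay in game[action].values())

best = max(game, key=expected_payoff)  # -> "bet_six_at_6_to_1"
```

Because everything is enumerated and the probabilities are given, "rational choice" here is literally just arithmetic; the rest of the chapter is about how rarely real situations have this shape.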

Probabilistic rationality is the *absolutely correct* way to think and act if you are in a fair betting game. This is true only by definition. That is what it now *means* for a betting game to be fair. Betting games have evolved to conform to the gradually developing understanding of probability theory. For instance, for many centuries dice did not have equal probabilities for each side. It was not understood how important that was, because the idea of “probability” hadn’t yet been invented.1

The actions, outcomes, and payoffs postulated by probability theory are abstract mathematical entities. Formally, the domain of applicability of probabilistic rationality is restricted to formal systems that conform to the probability axioms. This includes nothing in the eggplant-sized world (although actual casinos come close).

There are no objective criteria for what counts as an action, outcome, or payoff in reality. These are not found in fundamental physics, nor are they reducible to it. In the eggplant-sized world, they are nebulous entities—more so than many. Consequently, probability theory, in application, is never an absolute truth. It is a metaphor, model, or way of seeing. Those are neither true nor false; they can only be useful, or not, in different circumstances.

So, to make effective use of probabilistic rationality, you decide to pretend that whatever you are doing is a betting game, and see where that leads you. Often, in a concrete situation, there are several different, plausible ontologies of actions, outcomes, and payoffs. You have to choose one. The quality of your conclusions will depend on your meta-rational skill in doing so.

Often, the framework doesn’t meaningfully apply at all. What are all the things you could do next week? The possibilities are unenumerable. What outcomes might each have? In most cases, you cannot even imagine them all, much less estimate probabilities or payoffs.

Advocates of probability as a general theory of rationality invoke the “Dutch Book Argument”: if someone *forces* you to treat a situation as a fair betting game, it would be irrational not to think and act according to probabilistic decision theory. That is true—by definition. Fortunately, in 1789, the Count of Flanders eradicated the scourge of roving Bayesian Thugs who forced fair bets on random citizens at swordpoint. Many were hanged in the Brussels central market.2
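The Dutch Book Argument itself is a short piece of arithmetic. A minimal sketch with invented credences: an agent whose degrees of belief in an event and its complement sum to more than 1 will accept a pair of bets that together guarantee a loss.

```python
# Illustrative sketch of a Dutch book; credences and stakes are invented.
# The agent treats a bet paying `stake` on an event as fairly priced at
# credence * stake. If its credences in an event and its complement sum
# to more than 1, buying both "fair" bets guarantees a loss.

def dutch_book_loss(p_event, p_complement, stake=1.0):
    """Guaranteed net loss from buying both sides at the agent's prices."""
    cost = (p_event + p_complement) * stake  # price of the two bets
    payout = stake  # exactly one of the two events occurs, paying `stake`
    return cost - payout  # positive whenever the credences sum to > 1

loss = dutch_book_loss(0.6, 0.6)  # incoherent credences -> sure loss of ~0.2
```

Note what the argument assumes: that you must price every bet, and that someone can force you to take both sides. That is exactly the "forced fair betting game" scenario, which is why the argument is true by definition and no further.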

What sort of world would probabilistic rationalism be true of? One that was a single vast casino. For the probabilist, existence as a whole is a gigantic Bayesian Thug. Looking at things this way tends to cause paranoia, hyperactivity, exhaustion, and nihilistic depression—as we’ll see.

1. Better dice are an example of the theme of our engineering reality to fit rationality, rather than accommodating rationality to reality. Still, players must agree to ignore tiny imperfections in the dice, and *declare* that the game is fair by fiat. If all players do agree, and are not deceived about circumstances, then the game is fair as a *social fact*—not, in the end, as a physical or mathematical one. For a general history of early probability theory, see Ian Hacking’s *The Emergence of Probability*. On irregular dice, see J. W. Eerkens & A. de Voogt, “The Evolution of Cubic Dice,” *Acta Archaeologica* 88:1 (2017), pp. 163–173.
2. A “Bayesian” is a species of probabilist. The Bayesian Elimination Decree of 1789 is just my joke—in case you wondered.