Rationality is supposed to enable you to infer new things from what you already know. For example, you know that you can use rowboats to cross rivers, so if you know there’s a specific rowboat on a specific river, you should be able to conclude that you can cross.
While it is usually true that you can use a rowboat to cross a river, it is not true if you can’t find the oars, if there is a hole in the bottom, if it is locked to the dock and you don’t have a key, if you have a broken arm, if your section of the river is powerful whitewater leading to a fatal waterfall, if it is infested with hungry hippos, or if it is a contested national border patrolled by helicopters with shoot-on-sight orders.
There are hardly any universal absolute truths about the eggplant-sized world. Nearly every piece of general knowledge has exceptions. Nearly anything might be relevant to nearly anything else—although nearly everything turns out to be pretty well irrelevant in any specific case.
If you knew, before getting to the river, that the rowboat is in good working order, that the river is safe for boating, and so on, you could conclude that crossing would be possible. However, you cannot be certain until you get there and see. This defeats standard formal logic. Absolute-truth-preserving deduction is impossible in the face of ignorance about any factors that might be relevant.
You might instead be able to reason probabilistically about “known unknowns”—obstacles that you could, realistically, anticipate and assign probabilities to. You could not, realistically, anticipate “unknown unknowns,” for instance that someone has filled your boat with electric eels, although it is logically possible that they did. This defeats probabilism (as we’ll see later).
Unknown unknowns are unenumerable. The rowboat example comes from John McCarthy, one of the founders of artificial intelligence research. He devoted his career to addressing the problem, using non-standard extensions to formal logic. He wrote:
> In order to fully represent the conditions for the successful performance of an action, an impractical and implausible number of qualifications would have to be included in the sentences expressing them… yet anyone will still be able to think of additional requirements not yet stated.1
This problem is fatal for rationalism’s hope of a correctness or optimality guarantee for inference in the eggplant-sized world.
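The non-monotonic move can be caricatured in a few lines of code. This is only an illustrative sketch, not McCarthy's circumscription formalism, and every name in it is invented here; but it shows the two key points: adding a fact can retract a conclusion (which standard logic forbids), and the list of possible "abnormalities" can never be completed.

```python
def can_cross(facts):
    """Conclude 'crossable' by default, unless a known abnormality holds.

    Hypothetical default rule: a rowboat lets you cross, assuming nothing
    abnormal. The abnormality set is necessarily open-ended: eels,
    inquisitors, and everything else remain outside it.
    """
    abnormalities = {"no_oars", "hole_in_hull", "locked_to_dock",
                     "whitewater", "hippos", "patrolled_border"}
    return "rowboat_present" in facts and not (facts & abnormalities)

facts = {"rowboat_present"}
assert can_cross(facts)        # default conclusion: we can cross

facts.add("hole_in_hull")      # learning one new fact...
assert not can_cross(facts)    # ...retracts the earlier conclusion
```

In standard (monotonic) logic, adding a premise can never remove a conclusion; here it can, which is what "non-monotonic" means. The catch, as the quotation says, is that no finite `abnormalities` set is ever enough.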
Obviously, though, we do use rational inference successfully all the time. How? Three ways:
- We can make a closed-world idealization by pretending we know what all the relevant factors are, and we may get away with it
- We can re-engineer the world to more nearly fit the idealization by manufacturing less-nebulous objects and by shielding them from unexpected influences
- We can reality-check the necessarily-unreliable results of rational inference
All three of these are meta-rational operations. They can be done badly or well. Typically they are not thought through carefully, because meta-rationality is generally overlooked. I’ll discuss the first two in Part III on how rationality works in practice, and the last in Part IV on meta-rationality.
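For concreteness, the first and third of these moves can be sketched together. This is a toy under stated assumptions (all names are hypothetical): a closed-world model licenses a deductive conclusion, and a reality check catches what the model left out.

```python
# Move 1: close the world. Only the factors we chose to enumerate exist
# in the model; anything unmodeled is assumed absent.
KNOWN_OBSTACLES = {"no_oars", "hole_in_hull"}

def infer_crossable(reported_obstacles):
    """Deduce 'crossable' within the closed world of KNOWN_OBSTACLES."""
    return not (reported_obstacles & KNOWN_OBSTACLES)

# Move 3: reality-check. Test the necessarily-unreliable conclusion
# against observation before acting on it.
def reality_check(inference, observe):
    return inference and observe()

# The model says "crossable"...
assert infer_crossable(set())
# ...but looking at the actual boat finds electric eels, an unknown
# unknown the closed world could not represent:
assert not reality_check(infer_crossable(set()), observe=lambda: False)
```

The sketch is deliberately trivial; the meta-rational work lies in choosing what goes into `KNOWN_OBSTACLES` and deciding when a reality check is worth its cost.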
Alternatively… the reasonable thing to do is cross the river when you come to it. When you get there, you can look to see whether you can use the boat. Problems you could not reasonably have anticipated will be visible. If three cardinals of the Spanish Inquisition pop in to inform you that rowing is a sin punishable by the rack, you can probably find some alternative.
The reasonable “deal with it when the time comes” approach can sometimes lead you to paint yourself into a corner. Rationality is especially valuable when such mistakes are costly or likely.
- 1. John McCarthy, “Circumscription—A Form of Non-Monotonic Reasoning,” Artificial Intelligence 13 (1980), pp. 27–39.