Acting on the truth



Rationalisms are mainly concerned with thinking correctly. However, they are often also concerned with acting, and try to provide correctness or optimality guarantees for action as well.

Rationalist theories generally take action as deriving straightforwardly from your beliefs about the current state of the world and how your actions will affect it. If those beliefs are true, then you can calculate the optimal action with some simple mathematics. Four influential theories of this sort are:

  • Game theory, in which you and an opponent alternate in choosing from a small number of possible moves whose effects are fully known, in order to achieve a defined win condition
  • Decision theory, in which you choose a single action out of a small set, which will result in one of a small number of possible outcomes, but you may have only probabilistic knowledge of the world state and the outcome of your choice
  • Control theory, in which the world is taken to be a differential equation, your beliefs are the values of some real-valued variables in the equation, your actions set other variables, and you aim to maximize some function of the variables
  • Means-ends planning, in which you derive a program that will bring about a well-defined goal state by taking a series of discrete actions, each of which affects the world in a well-defined way.

The math in each case is conceptually trivial. This is why epistemology is central for rationalism: the main thing is to make sure your beliefs are true. If you can do that, optimal action is guaranteed.
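To see just how trivial, here is a toy decision-theoretic calculation. All the actions, probabilities, and utilities are hypothetical numbers invented for illustration; the point is only that, given true beliefs, choosing optimally is a weighted sum and an argmax.

```python
# Expected-utility decision theory, in miniature.
# P(outcome | action) and the utilities are made-up illustrative values.

actions = {
    "carry umbrella": {"rain": 0.3, "sun": 0.7},
    "leave umbrella": {"rain": 0.3, "sun": 0.7},
}
utility = {
    ("carry umbrella", "rain"): 5,  ("carry umbrella", "sun"): 3,
    ("leave umbrella", "rain"): -10, ("leave umbrella", "sun"): 6,
}

def expected_utility(action):
    # Weighted sum of utilities over possible outcomes
    return sum(p * utility[(action, outcome)]
               for outcome, p in actions[action].items())

# The "optimal" action is just the argmax
best = max(actions, key=expected_utility)
```

If your probabilities and utilities are right, this is guaranteed optimal. Everything difficult is hidden in getting those numbers right.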

Determining action effects is hard

Although the math for choosing optimal actions is conceptually trivial, it is computationally hard. The number of computational steps required to calculate optimal actions grows exponentially (or, usually, even faster than exponentially) as the number of possible actions and outcomes increases.1 In practice, the correct computation is infeasible. Instead, heuristics are used, meaning that you consider only a small subset of possibilities. Generally heuristics are not even approximately correct, in the sense of having a numerical bound on their error. Sometimes they work well, but there’s no available analysis for how well, or for when they do or don’t work.
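The explosion is easy to demonstrate. In exhaustive game-tree search, the number of positions to evaluate grows as the branching factor raised to the search depth; the numbers below are illustrative, not drawn from any particular game.

```python
# Combinatorial explosion in exhaustive search:
# total node count of a uniform game tree grows as b ** d.

def tree_size(branching_factor, depth):
    # Sum of nodes at every level from the root (depth 0) down
    return sum(branching_factor ** d for d in range(depth + 1))

# A modest branching factor of 5, looking only 10 moves ahead,
# already yields over twelve million positions to evaluate.
positions = tree_size(5, 10)
```

Doubling the depth does not double the work; it squares (roughly) the number of positions, which is why exact computation becomes infeasible so quickly.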

Further, the effects of any real-world action are subject to unenumerable unknown and potentially relevant considerations. To apply a rational action theory, you have to make a closed world idealization and ignore all but a few possibilities. A rational correctness guarantee can only be relative to the idealization; any time an unexpected factor intrudes, the guarantee fails. Relatedly, in a probabilistic framework, uncertainty may compound rapidly through a sequence of uncertain actions and outcomes. Looking a few steps into the future, predictions may become effectively meaningless.2
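The compounding of uncertainty can also be made concrete. Under the simplifying assumption that each step of a plan independently goes as predicted with probability p, the chance the whole sequence unfolds as foreseen is p raised to the number of steps; the 90% figure below is illustrative.

```python
# Compounding uncertainty over a sequence of actions, assuming
# (unrealistically) that each step succeeds independently with
# probability p_step.

def plan_reliability(p_step, n_steps):
    return p_step ** n_steps

# Each step 90% predictable, chained ten deep: the overall
# prediction is right only about a third of the time.
r = plan_reliability(0.9, 10)
```

In practice the situation is worse, since failures are rarely independent and the space of possible deviations is not enumerable at all.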

These rational action frameworks may work well if you can enforce a small, closed world idealization by engineering the objects in it to behave reliably and by shielding your activity from external factors. Planning is effective and essential in aircraft manufacturing, for instance. We’ll explore how this works in Part Three of The Eggplant.

Knowing that and knowing how

Routinely, we act effectively without being able to predict or even understand the effects of actions. Riding a bicycle is a standard example. Almost no one can explain how they do it, and it is quite a difficult skill to learn. The physics is counterintuitive, shockingly complex, still a subject of research, and not yet fully understood by anyone.3 Some facts are proven: for instance, to turn left, you first have to momentarily turn the wheel to the right. All bicyclists do this, but few know they do, and many would actively deny it if asked.4

So you can steer a bicycle effectively while having actively false beliefs about what you are doing and why it works, and almost no relevant knowledge of the domain. Conversely, if you did not learn to ride as a child, studied bicycle physics intensively as a graduate student, and then got on one for the first time, you would fall over many times before getting the hang of it. Your true beliefs would be nearly useless.5

Cognitive scientists make a useful distinction between knowing that, or propositional knowledge; and knowing how, or procedural knowledge. Propositional knowledge consists of true beliefs. The ability to ride a bike depends almost entirely on procedural knowledge. You know how to do it, but true beliefs are irrelevant. These two types of knowledge seem to work quite differently.

Rationalists have generally tried to reduce procedural knowledge to propositional knowledge. Some rationalists argue that the ability to ride a bicycle does consist of knowing a set of propositions about physics, it’s just that you don’t have conscious access to them. This would make for a simpler, unified epistemology, with only one type of knowing. It would also preserve hope for optimality guarantees.

This theory can’t be definitively disproven, but there is strong evidence against it. First, there is the computational complexity problem. Neurons compute shockingly slowly, and bicycling requires rapid reaction when you hit a bump. There doesn’t seem to be time enough for your brain to perform the logical deductions necessary to derive new conclusions from a propositional model.

Second, there’s extensive neuroscientific evidence that knowing-that and knowing-how are stored differently in the brain. A famous case is the patient H.M. After suffering brain injury, he was unable to learn any new propositional knowledge. However, he readily learned new skills, such as solving puzzles or drawing from a mirror. Before each practice session, he would insist that he had never done anything like that before, and yet his ability steadily improved.

Coming from a rationalist point of view, you may find it difficult to imagine how it would be possible for procedural knowledge not to be reducible to propositional knowledge. Unhelpfully, we know little about how brains store know-how. Useful intuition may come from artificial intelligence. In the 1980s, my collaborator Phil Agre and I wrote a series of programs that had unambiguously zero propositional knowledge and yet carried out complex tasks in highly uncertain environments (video games).6 More recently, reinforcement learning programs have learned to play games using artificial “neural networks” in which, if there is any propositional knowledge, no one can find it.7

Rationalist theories of action are powerfully useful in certain highly restricted sorts of situations. Overall, they are inadequate both as descriptive theories of what we do, and as normative theories of what we should do.8

Know-how is usually resistant to formal analysis, yet it can be reliably effective in practice. Part Two explains how that can be. Part Three explains how formal rationality, which deals mainly with knowing-that, critically depends on knowing-how for support. Part Four explains how meta-rationality can improve that interaction.

Not to keep you in suspense, we organize situations so that relevant factors are easily perceptible, so we can figure out what to do as we go along, so actions are likely to work, and so errors are inexpensive to repair.

Ontologies of action and of activity


Rationalist theories mainly consider the cognitive process that leads up to “taking an action,” and have little or nothing to say about what an action is, or what it means to “take” one. Decision theory, for example, is about how to choose the best action out of a formally defined set, given a formally specified set of facts.

Then you should “take” the chosen action; but the theory doesn’t explain what “taking” consists of or entails. Implicitly, “taking” is atomic, so you can just do it, and then you are done. You don’t have to figure out the details while you are doing it. You don’t get interrupted in the middle and have to deal with other hassles concurrently. You took the action, so the “problem” is “solved.” The circumstances that led up to it, and the fact that life goes on afterward, are not considered.

In the rationalist ontology, action exists outside of space and time. That you are doing stuff here and now is not part of the story. This is the power of rationality: its ability to abstract and generalize. It finds universal solutions that are equally correct anywhere, and at any time. This is also its limitation: rationality is oblivious to the unenumerable specifics.

Activity is not broken down into discrete actions—although it is sometimes useful to think about it that way. “Actions” are not objective features of the world. If you could repeat the same bodily motions in a different context, that would not be the same action at all. It would almost certainly be senseless.

Part Two develops an entirely different ontology. Activity is a continuous, meaningful flow that relies intimately on the unique details of a specific situation and time. For most of what we do, improvisational partner dancing is a better prototype than placing bets in a casino.


This page is in the section Part One: Taking rationalism seriously, which is in In the Cells of the Eggplant.
