Part One: Taking rationalism seriously

Beginning with the Ancient Greeks, rationalists strengthened and elaborated the theory that rationality is the correct or optimal way of thinking and acting. They developed arguments that, for instance, Reason should be the Monarch that rules the Passions. Although increasingly sophisticated, these explanations now seem naive and confused. By the mid-1700s rationalist theories were starting to turn up troubling anomalies.

Still, rationalism’s first 2400 years produced the modern world. It was a great and noble project. It’s just that when it was finally put to a serious test, it failed; and modernity failed with it. We live in postmodernity, which could equally well be called the post-rational world. This is precarious… But that’s getting ahead of the story.

Only when armed with mathematical logic, developed in the late 1800s, could rationalists seriously tackle the questions “so what is rationality exactly?” and “what proof do we have that it always works?” It was a major shock to discover that we can’t say exactly what it is, and that it doesn’t always work. Tested seriously, rationalism fails not for one reason, but for dozens, any one of which would be fatal. The intellectual power tools logicians developed to prove rationalism correct instead proved the opposite. Serious rationalists recognized this by the middle of the twentieth century.

“Taking rationalism seriously” implies rigorous investigation of how and why and whether and when rationality works. The historical tendency has been to assume as an axiom that it must somehow always work. Since its deficiencies are now well-understood, rationalism is no longer serious.

The Eggplant is an alternative, meta-rational understanding of rationality, and of how to do it better. It’s based mainly on observations about how and when and why rationality does work, covered in Part Three. However, it’s also motivated by specific ways rationalism doesn’t. So Part One reviews some of rationalism’s failure modes, with an eye for how to avoid them.

I will repeatedly ask: “What sort of world would rationalism be true of? What would it take to make a guarantee about rationality that could stick?” In general, the answer is: a world without nebulosity; a world in which all objects, categories, properties, and relationships were perfectly definite. Nebulosity manifests in many different ways, which cause different sorts of trouble for rationality, so I’ll give more specific answers to these questions as I discuss particular issues.

Why does rationality work? In large part, because we do practical work to make it work. Modernity succeeded by altering the world to make it less nebulous, thereby making rationality more reliable.

The problems rationalism treats as theoretical and philosophical, for which it wants to find uniform, universal, formal solutions, meta-rationalism treats instead as practical hassles. Hassles can’t be “solved,” but they can be managed reasonably effectively by devising social practices and by engineering physical objects. As I point out each problematic manifestation of nebulosity in this Part, I’ll ask “How do we deal effectively with this theoretically fatal problem in our actual practice of rationality?” Part Three answers that in depth.

This is not philosophy

This Part of the book may sound like philosophy at first. Reading it that way risks missing its point.

Rationalism is a philosophical position, and has been since the Ancient Greeks. It’s difficult to point out problems in a philosophical theory without sounding like philosophy. Also, many of the topics of this Part have been treated only by philosophers.

However, The Eggplant is not philosophical. Its sources and goals are practical, not theoretical. The book is not about clever arguments, or seeking the ultimate truth of some matter. Its aim is mundane: more effective ways of thinking and acting in technical work. Rather than attempting to conclusively refute rationalism, Part One aims to show how it causes recurring patterns of practical problems when it collides with nebulosity. This motivates an alternative account (Part Three) that you may come to find more useful and more plausible.

Rationalism, taken seriously, requires operations that do not seem possible in practice. As a philosophy, rationalism suggests that they must nevertheless be possible in principle. Rather than arguing about that, I ask “what do these practical difficulties imply for the practical use of rationality?” Part One is about how rationalism fails, when it does, more than why it fails. Proof of theoretical impossibility is not necessary to motivate an alternative. Seeing specific ways rationalism is inaccurate or inadequate in practice points to specific features of that alternative.

I do believe these operations are mainly not possible even in principle, and sometimes I sketch reasons. These partial theoretical explanations provide intuitions, rather than a knock-down philosophical proof.1

The problems I point out are well-known and widely discussed. I review them here only because I couldn’t find a discussion elsewhere which covers the issues with minimal philosophical and historical baggage and at the right level of detail. For readers interested in exploring further, Part One frequently footnotes The Stanford Encyclopedia of Philosophy. Peter Godfrey-Smith’s Theory and Reality is a good introduction to the philosophy of science, and covers several of the issues I raise here.

My presentation of these well-known difficulties is unusual in pointing out how each stems from the same root: failure to take nebulosity into account. This pattern recurs because the root motivation of rationalism is to prove that rationality is guaranteed correct or optimal, and nebulosity makes that impossible. After explaining each problem, I summarize an alternative, meta-rational approach, which works with nebulosity effectively. These brief discussions foreshadow more detailed explanations in later Parts.

Those Parts will sound much less like philosophy. Parts Two and Three are based on detailed observations of how people actually do things, and sound more like anthropology. Parts Four and Five are pragmatic guides to how to use rationality, and sound more like engineering or research management techniques.

Rationalism and its complications

The rationalisms I discuss each take their criterion of rationality from a particular formal system. They promise correctness or optimality based on the reasons the system is correct or optimal in its own terms. For instance, if you start from absolutely true beliefs, logic allows you to find other absolutely true beliefs. This fact about logic is true, absolutely.
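
As a minimal illustration of that truth-preservation guarantee (this is textbook propositional logic, not anything specific to The Eggplant), consider modus ponens, the simplest deductive rule:

```latex
% Modus ponens: from "P implies Q" and "P", infer "Q".
\[
  \frac{P \rightarrow Q \qquad P}{Q}
\]
% If both premises (above the line) are absolutely true,
% the conclusion (below the line) is guaranteed to be absolutely true too.
```

The guarantee is real, but it is a guarantee internal to the formal system; whether “P” accurately describes anything in the world is a separate question, which the next paragraph takes up.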

Each rationalism fails when its idealized concepts of “belief” and “truth” collide with some nebulous aspect of reality. A proof that a formal system works correctly internally is irrelevant to the question of how it relates to concrete reality. Rationalisms tend to direct your attention away from that, because none of them have worked-out stories of how or when or why they do engage with reality. Meta-rationalism does explain how formal systems connect to reality, and emphasizes the value of paying attention to that interface.

Each rationalist theory has failed for practical, technical reasons. Encountering trouble, rather than saying “this whole project seems not to be working, we need to step back and come up with Plan B,” rationalists plowed ahead, creating more complex variants of the formal system. Each rationalism added more conceptual machinery of its favorite type, rather than asking if that type of machinery was suitable for the job.2

  • Whenever it became clear that standard mathematical logic could not handle a particular problem, logicists bolted on more logic-stuff that was supposed to address it. When you point out that the extension doesn’t actually come to grips with the problem, and that it’s incompatible with all the other extensions, logicists say “well, yes, but something like this has to work.”
  • When you point out that statistical inference always depends on subjective choices in setting up a problem description, probabilists suggest that you could try all possibilities in parallel. When you point out that this is impossible, they say “well, yes, but it’s the correct approach in principle.”
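
To make that point about subjective setup choices concrete, here is a minimal sketch (the Beta-Binomial model and all the numbers are illustrative assumptions, not taken from any particular rationalist text): the same data support noticeably different conclusions under two equally defensible priors.

```python
# Minimal sketch: same data, two defensible priors, different posteriors.
# Beta-Binomial model: with a Beta(a, b) prior and k successes in n trials,
# the posterior is Beta(a + k, b + n - k). All numbers here are made up.

def posterior_mean(a: float, b: float, k: int, n: int) -> float:
    """Posterior mean of the success probability under a Beta(a, b) prior."""
    return (a + k) / (a + b + n)

k, n = 7, 10  # observed: 7 successes in 10 trials (illustrative data)

uniform = posterior_mean(1, 1, k, n)     # "know nothing" prior
skeptical = posterior_mean(2, 18, k, n)  # prior belief that successes are rare

print(f"posterior mean, uniform prior:   {uniform:.3f}")    # ~0.667
print(f"posterior mean, skeptical prior: {skeptical:.3f}")  # ~0.300
```

Neither prior is dictated by the data; choosing between them (and choosing the Beta-Binomial setup in the first place) is exactly the sort of judgment the formalism cannot make for you.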

Viewed from inside a rationalism, each successive failure looked quite different, and it seemed reasonable that a technical patch could fix it. From the outside, it’s obvious that they all failed for the same reason. Namely: overlooking ontological nebulosity.

Rationalisms have some response to every objection. Critics point out that the responses don’t work. Rationalists respond in turn; such disputes often go many layers deep. To make Part One finite, we’ll take the analysis only a couple of steps in any direction. Eventually one just has to say “This is awfully complicated and doesn’t seem to work in practice. Perhaps you will be able to fix it someday with even more machinery, but it seems increasingly unlikely. And we do have a better alternative!”

  • 1. Philosophical proofs occasionally change individual philosophers’ minds, but never seem to defeat philosophical positions. Ways of thinking eventually go out of fashion, but not because someone shows they are definitively wrong. Showing a better alternative is more effective. Even then, philosophy progresses one funeral at a time; no position is so silly that diehards will cease defending it.
  • 2. Philip E. Agre’s Computation and Human Experience, pages 38-48, discusses this pattern in depth. “Ideas are made into techniques, these techniques are applied to problems, practitioners try to make sense of the resulting patterns of promise and trouble, and revised ideas result. Inasmuch as the practitioners’ perceptions are mediated by their original worldview, and given the near immunity of such worldviews from the pressures of practical experience, the result is likely to be a series of steadily more elaborate versions of a basic theme. The general principle behind this process will become clear only once the practitioners’ worldview comes into question, that is, when it becomes possible to see this worldview as admitting alternatives. Until then, the whole cycle will make perfect sense in its own terms, except for an inchoate sense that certain general classes of difficulties never seem to go away.”
