Part Two: Taking reasonableness seriously

Systematic rationality often works. This is unquestionably true, and important. However, the analysis in Part One suggests that rationality doesn’t work the way rationalism mistakenly supposed. It must work some other way. How?

The answer—explained in Part Three—depends on an understanding of how everyday effective thought and action (“reasonableness”) work in practice. So Part Two looks at that, as a prerequisite to Part Three’s rethinking of rationality.

Reasonableness works directly with reality, whereas rationality works with formalisms. Rationalism assumes that a formalism somehow reflects reality, and glosses over questions about how that works. In fact, in technical work, the connection is often extremely complex, but is usually ignored in theoretical explanations of how science and engineering work. (It is not ignored in practice, but this aspect of practice is overlooked in rationalist explanations.) Part Three is about how rationality depends on reasonableness to connect with reality; first we need to understand how reasonableness works on its own.

Understanding reasonableness is also a prerequisite to understanding meta-rationality. They are not the same; in Part Four, I will develop a three-way contrast between reasonableness, rationality, and meta-rationality. (Sneak preview: meta-rationality is “reflective”—it stands outside all systems—and coordinates multiple reasonable and/or rational points of view; reasonableness does not. What’s distinctive about meta-rationality is its appropriation and altered re-use of rational methods within a different, broader overarching understanding.)

Cross the river when you come to it

The rationalist framework overlooks contextual resources, which makes rationality artificially difficult.

Each of the problems faced by rationalism boils down to nebulosity giving rise to unenumerable potentially relevant factors, which cannot be accommodated in a bounded formal framework. However, almost none of the potential complexities arise in any specific situation. Further, most of the ones which do arise turn out to be irrelevant to your purposes at the time, so you can ignore the variations they engender. You can easily see, or check, how the relevant factors play out. So the details of the actual situation you are in are generally adequate to resolve the difficulties that a general rational theory could not. You can access these details using perception and dialog.

For example:

  • In context, it’s usually easy to resolve linguistic ambiguities (both syntactic and semantic) because you can figure out what someone is trying to say based on what they are trying to accomplish and which visible aspects of the concrete situation they must be talking about. When it’s not clear, you can ask.
  • More-or-less truths can usually be resolved into “more” or “less” based on situational specifics. Whether or not a generalization applies is usually obvious.
  • Uncertainty resolves into factual outcomes that you can usually deal with when they arise. You can figure out how to cross the river when you get there.

“Usually” is important here, of course. All these resolutions are error-prone. When reasonableness goes wrong, you may need to backtrack and clean up a mess. Sometimes it is better to plan ahead. Sometimes, mere reasonableness is inadequate, and it is better to apply systematic rationality! The point of this Part, though, is that most everyday activity, including much of the work of technical professionals, is handled reasonably, and that suffices. In Part Three, we’ll see how reasonableness is a necessary support for rationality as well.

The aim of Part Two is not to give a comprehensive account of reasonableness for its own sake. That would be fascinating, but out of scope for this book. Instead, we’ll concentrate on the features of reasonableness that contribute to its role in rationality and meta-rationality—which are what The Eggplant is about.

The structure of Part Two

Part Two begins with two chapters about the sort of explanation it offers, which might not be as you’d expect. The first chapter distinguishes it from cognitive science; The Eggplant is not a theory of mental processing. The second explains that Part Two does not cover quite the same subject matter as rationalism, which affects the sort of understanding Part Two offers.

The middle chapters of Part Two are mostly non-philosophical (unlike rationalism and cognitive science). They explain various aspects of reasonableness: its purposefulness; its public accountability; the powerful uninterestingness of routine activity; and the roles of perception and of linguistic communication, particularly reference.

Then, two chapters reconsider the philosophical themes of ontology and epistemology in the light of our understanding of reasonableness.

The last chapter of Part Two, on instructed activity, serves as a bridge into Part Three.


outside all systems?

Lawrence D'Anna:

meta-rationality is “reflective”—it stands outside all systems

If what you mean by “systems” is something like “formal systems of thought devised by humans”, then I’ve got no objection.

But if you mean “systems” in a much broader sense, then I do object. Meta-rationality doesn’t stand outside the ecosystem. It doesn’t stand outside the central nervous system of the person doing it. Like any human activity, meta-rationality is contained in and constrained by the limits of the systems it is embedded in.

We can use a rationalist system of thought, called “biology,” to study the ecosystem. And the formal methodologies used by biologists can be supervised by meta-rational reasonableness. But that is still taking place inside the brain of an animal. That animal needs to eat or it will die. Eventually it will die anyway. These are biological facts. The human is bounded and constrained by the ecosystem.

At the same time, the ecosystem is bounded by the products of human thought. If a human gets the idea in his head to build a bunch of nuclear weapons, put them in rockets, and give one man the choice to launch them, then maybe it has the deterrent effect they were looking for, and maybe it wipes out life on earth.

Systems of all kinds have hierarchical relationships with each other. They nest inside each other like Russian dolls. But if you pursue these relationships far enough, things often get weird. Peano arithmetic (PA) is a formal system which nests inside Zermelo–Fraenkel set theory (ZFC). It nests in the sense that statements and objects and proofs inside PA can be translated into their equivalents in ZFC. Anything you can say or prove in PA you can do in ZFC, but not the other way around. But ZFC is also limited. For example, Cohen proved that ZFC can’t decide the continuum hypothesis. What axioms do you need to prove Cohen’s theorem? All you need is PA. Weird.
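The weird reversal in the paragraph above can be summarized in standard logical notation (an editorial sketch; here φ^ω denotes the usual translation of an arithmetic sentence φ into set theory, and Con(T) the arithmetized consistency statement for a theory T):

```latex
% PA nests inside ZFC: every theorem of PA lifts to a theorem of ZFC
\mathrm{PA} \vdash \varphi \;\Longrightarrow\; \mathrm{ZFC} \vdash \varphi^{\omega}
% but not conversely: ZFC proves PA consistent; PA cannot (G\"odel)
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}) \qquad \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
% ZFC cannot decide the continuum hypothesis, assuming ZFC is
% consistent (G\"odel 1940, Cohen 1963)
\mathrm{ZFC} \nvdash \mathrm{CH} \qquad \mathrm{ZFC} \nvdash \neg\mathrm{CH}
% yet the independence proofs themselves formalize in PA:
\mathrm{PA} \vdash \mathrm{Con}(\mathrm{ZFC}) \rightarrow
  \mathrm{Con}(\mathrm{ZFC}+\mathrm{CH}) \wedge \mathrm{Con}(\mathrm{ZFC}+\neg\mathrm{CH})
```

The last line holds because Gödel’s and Cohen’s relative consistency arguments are finitistic, syntactic constructions, and so can be carried out within PA itself.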

Every system exists in a context that bounds and constrains its operation. Often that context consists of other systems: systems that supervise it, or physically contain it, or create the preconditions for its existence. But these hierarchical relationships do not fit together into some great chain of being with an uber-system at the top. Instead, because each of these hierarchical relationships can be of a different character than the others, the systems form an interconnected web.

But even calling it a web exaggerates the unity of the thing. It tempts you to say “maybe the uber-system is the network.” That kind of systemization has its place. An operating system is an interconnected web of programs, which are each themselves systems of instructions. An ecosystem is an interconnected web of organisms, which are all independent biological systems. Mathematical logic is an interconnected web of formal systems which can embed in, describe, and prove theorems about each other.

But however well systematization-as-network works, it can’t systematize everything. It can’t grow to encompass everything. Reality itself does not seem to be a system. Every system exists in some kind of environment and is limited by that environment. There is no one system to rule them all.

Systems in a formal sense

If what you mean by “systems” is something like “formal systems of thought devised by humans”

Yup, exactly! Elsewhere I defined “system” in the relevant sense:

by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.

When you write:

Reality itself does not seem to be a system. Every system exists in some kind of environment and is limited by that environment. There is no one system to rule them all.

… we are in profound agreement. This is the essence of my critique of rationalism, and the motivation for meta-rationality. Meta-rationality doesn’t attempt to create a meta-system; rather, it works with multiple systems, treating them all as inherently limited, and as unable even in combination to fully grasp reality, which will always be more complex and nebulous than we can imagine.




This page is in the section In the Cells of the Eggplant.



Copyright ©2010–2020 David Chapman.