Reasonable believings

Reno waterfront BELIEVE artwork
Image courtesy Beau Rogers

This chapter rethinks the category “belief” in ways that clarify the relationship between reasonableness and rationality.

Categories are a matter of ontology (understanding how things are), whereas belief is a matter of epistemology (understanding how we know truths). The ontology of belief itself is “prior to” its epistemology: we ought to understand what sort of things beliefs are before making theories about when they are true.

Rationalist ontology supposes that beliefs are definite things that live in your head, and they form a single, well-defined category.1 It takes an individual’s beliefs to be a set of (proposition, truth value) pairs. This assumption is unthought and almost never questioned.
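To make that picture concrete, here is a minimal sketch in code. It is my own illustration, not something rationalists themselves write; the Belief type and the example propositions are invented for the purpose.

```python
# A toy rendering (hypothetical illustration only) of the rationalist
# ontology of belief: an individual's beliefs as a definite set of
# (proposition, truth_value) pairs.

from typing import NamedTuple, Optional

class Belief(NamedTuple):
    proposition: str   # a definite, context-free statement
    truth_value: bool  # held true, or held false

# On this picture, "what you believe" is a single well-defined set:
beliefs = {
    Belief("Snow is white", True),
    Belief("I have two thumbs", True),
    Belief("The tarot works", False),
}

def believes(beliefs: set[Belief], proposition: str) -> Optional[bool]:
    """Look up a proposition; None means no belief either way."""
    for belief in beliefs:
        if belief.proposition == proposition:
            return belief.truth_value
    return None
```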

The virtues of the rationalist ontology of belief are that it is crisp and simple. Its defects are that it is metaphysical and wrong. Rationalism admits that propositions would have to be non-physical, and it fails to explain either everyday or scientific knowledge.

Understanding believing empirically

A better alternative must understand believing empirically, as a diversity of messy, contingent, natural phenomena: more like cell biology than like mathematics. The defects of such an understanding are that it is nebulous and complicated. Its virtue is that, with adequate empirical grounding, it can be roughly right.

In everyday usage, epistemological terms like “belief” are like “fruit” in having numerous “senses” that are not clearly distinct or definable, and which all permit nebulous boundary cases. As with “fruit,” the richness and complexity are due not to ambiguity between discrete word meanings, but to unavoidable contextual interpretation and to the nebulosity of categories. In this chapter we’ll discuss only believing, among epistemological categories; similar stories could be told about the others.2

Most of this chapter consists of a collection of many different sorts of believings. It suggests that whether or not you believe something, what the belief is, and what it means to believe it, are all nebulous. It seems likely that no coherent, unified theory is possible. Different sorts of things called “beliefs” need different explanations, and we may gain a better understanding of particular sorts through examining their uses.

The collection is open-ended, giving a sense that it would take a book, or an extensive research program, to describe reasonable epistemology adequately. That might be interesting as philosophy or psychology, but it’s not relevant for The Eggplant, so I provide only a sketch.

The collection progresses from concrete, specific “beliefs” to abstract, general ones. We’ll see that the most concrete cases don’t, after all, reasonably count as beliefs. In everyday usage, “belief” applies mainly to somewhat vaguer ones. We’ll also find that reasonableness, which works well for making breakfast, is ill-suited to reasoning about the broadest abstractions and generalities—which is why we need rationality.

Toward the end, we’ll see examples of reasonableness breaking down. When reasonableness fails, we can switch to some other mode of thinking and acting. Rationality is one; another is mythology, which is neither reasonable nor rational, but not necessarily irrational.

Reading this chapter is optional. The collection is quite long, and none of the rest of the book depends on its details. Epistemology is much less important in both reasonableness and meta-rationality than in rationality. Both are mainly about making sense of situations for practical purposes. Systematic rationality is distinctive in emphasizing decontextualized, purpose-independent knowledge.

However, I hope you will find the examples both entertaining and thought-provoking.

Also, although understanding reasonable believing is not important for understanding technical rationality—the subject of this book—it is important for understanding urgent social and cultural problems we face. Elsewhere, we’ll explore ways mythic, reasonable, rational, and meta-rational believings can function or dysfunction in political discourse, social policy, and cultural movements. The last couple of examples in this chapter, examining political believings going wrong, foreshadow that analysis.

Believing is a routine, reasonable activity

The ethnomethodological flip redirects attention from hypothetical things-in-the-head to observable activities. That suggests studying occasions of doing believing, rather than static beliefs.3

Believing shares the characteristics of other routine, reasonable activities: it is concrete and particular, context- and purpose-dependent, nebulous, and accountable.

The method of this chapter

Believings presumably involve things in your head (along with things outside your head). We can’t observe those, but we can observe activities that directly involve them, such as someone saying “I believe in free markets.” When someone says that, they might be lying or mistaken about their believing, but it seems likely that such statements are usually pretty accurate. That suggests we can use statements of belief as good evidence about believing itself. (And, in fact, the two seem not clearly distinct, once we relax cognitivist assumptions.)

Unfortunately, empirical studies of belief have mainly concerned which things people believe, and why.4 They rarely address the questions “what does believing consist of?” and “how do we use belief in routine activity?” Those have been left to philosophers. Rather than investigating empirically, philosophers invent examples to fit their theories and hope readers find them believable.5

Unfortunately, I haven’t done the necessary empirical work either. This chapter collects mainly invented examples, which I hope you find believable. (I also hope they are not misleading, and that someone does the empirical work to check!)

Thumbs

A: Do you believe you have two thumbs?
B: Do I … what?
A: Do you believe you have two thumbs?
B: I don’t know what you mean! That’s a really weird question. I don’t believe I have two thumbs; I do have two thumbs.6

The rationalist theory of beliefs tries to treat “I have two thumbs” and “corresponding parts of congruent triangles are congruent” on the same basis, as instances of the same category. This doesn’t accord with the reasonable sense of “belief.”

“I have two thumbs” is what this chapter will call a fact, not a belief. A fact is a statement you are not accountable for.7 It would be unreasonable—a norm violation—to demand a justification for “snow is white.” That’s not a belief; it’s just a fact.

It seems—from the thumbs example—that what rationalist philosophers call “belief” not only fails to correspond to the usual meaning of the word; it is a category that most people don’t have, and find difficult to understand when they first hear it explained.

Two possible objections:

  1. It might be that everyone’s mental machinery is most accurately described using the rationalist category, even though they don’t have any corresponding concept; or
  2. It might just be a matter of semantics: normal people do have the same concept that rationalism calls “belief,” but don’t use that word for it, and use “belief” for another concept.

These are empirical questions, which have not been experimentally tested as far as I know. I think objection #1 seems plausible mainly on the basis of rationalist theories of action, according to which knowing-how reduces to knowing-that, and actions are computed from beliefs. On that theory, you couldn’t use your hands without accurate beliefs about their geometry. But that’s probably not how hands work. The ability to grasp is know-how, not knowing-that, and doesn’t involve beliefs of any sort.

I expect that objection #2 will seem less plausible after we’ve been through more examples.

There is a related issue here, which is that regular people have various beliefs about how believing works. This “folk epistemology” is a more accurate account of everyday practical beliefs than the rationalist one, but fails catastrophically as an account of ideological beliefs. I believe this is a main cause of dysfunctional political discourse, and therefore important to understand better.

However, folk epistemology is not on the agenda for Part Two. Here we will investigate not how people think beliefs work, but how people actually do believing in the course of reasonable activity.

Socks

If you said “my socks are blue” and someone challenged that, you’d pull up your pants leg and say “Look, they’re blue! It’s a really dark blue, but it’s not black.” This is an account of why it was reasonable to say they were blue. Maybe the person who challenged you didn’t believe you initially, because the socks were so dark they did look black.

On the rationalist account, you believed they were blue, but it would almost never be reasonable to say “I believe my socks are blue.” That would be nonsensical, because there is no doubt. Believing doesn’t enter into it until there’s doubt or disagreement. What’s going on in this example is perceiving and remembering a fact, not believing a proposition. In routine concrete activity, you usually eliminate uncertainty about facts by going and looking and finding out. A theory of that will be a theory of active perception, not of belief.

Socks and thumbs differ in that usually I have no belief, in any sense, about what color my socks are. I don’t know, I don’t remember, and I don’t care. That’s efficient: it nearly never matters, and if I need to know, I can just look. Keeping track would be wasteful.

The file export bug

You might naturally say “I believe they finally fixed the file export bug.” Here there is doubt, and “believe” expresses some uncertainty.

If challenged, you might reply “I know they said they fixed it in each of the last three app versions, but it seems to be working now.”

Reasonable believing typically involves normative, interpersonal questions of trust and truthfulness. Should you believe the software company now? They might have been either confused or lying before. Anyway, they should have known it wasn’t really fixed.

George Washington

If you said “George Washington was the first President of the United States” and someone demanded an account, you’d say “everyone knows that!”

In the case of socks, you gave evidence, or more accurately you gave an opportunity for your challenger to see for themselves.

In this case, “everyone knows that!” is probably not meant as evidence. It’s meant to counter-challenge the demand for an account. Everyone knows, so asking for a justification is unreasonable. You shouldn’t do that!

How do you know George Washington was the first President of the United States? “Uh, I guess I must have learned it in school when I was a kid? I don’t remember.” It doesn’t matter, because everyone does know; so you wouldn’t say “I believe George Washington was the first President of the United States.” It’s not a belief, it’s just a fact.

What is the evidence that George Washington was the first President of the United States? Personally, I haven’t a clue. I suppose there must be some.8

The tarot

Well, I don’t believe in the tarot, but I do consult it regularly. I mean, it obviously works, so believing doesn’t matter.

You often hear things like this from people who straddle two modes of thinking, mythical and rational. I might have said exactly this at one time in my past.9

The point is not that this is reasonable—it’s arguably irrational—but that it is common; so a descriptive understanding of believing should encompass it.

What does the speaker believe? Do they have two contradictory beliefs, and don’t notice? “Believing doesn’t matter” suggests they do recognize an apparent contradiction. Perhaps they don’t believe the tarot works: that’s just an “obvious” fact, not a belief?

My guess (partly based on personal experience) is that the speaker holds two beliefs in different ways—they are not the same sort of thing. Or, perhaps they believe two different things are sort-of-true, but in different senses.

We might interrogate them to clarify what they really believe—but perhaps there is no determinate truth about that. What the speaker believes, and how they believe it, are both nebulous.

Holding contradictory beliefs, and acting contrary to one’s stated beliefs, are both common, and are not always unreasonable—nor always suboptimal.

The Skywalk

The Skywalk is a horseshoe-shaped cantilevered bridge that extends 70 feet out from the edge of the Grand Canyon into thin air, thousands of feet up. The walkway is transparent glass.

If you visit, you will probably believe it is perfectly safe. The glass is several inches thick, thousands of people have walked across it, and you can see there’s a bunch of them on there right now. Yet when you get there, you may be terrified and refuse to set foot on it. There is also some sense in which you believe you’d likely fall to your death. For anyone with any fear of heights, that seems entirely reasonable.

This is a famous example of a distinction between belief and “alief.”10 The alief here is that it’s dangerous.

How are we to understand this? You know that it’s safe, but you believe it’s dangerous? Do you have two contradictory beliefs? Or are these different sorts of things? Or what?

Is that a question that can be settled with evidence and reasoning? Or is it inherently nebulous, without any definite answer, because what it is to believe is nebulous? Is this a matter of linguistic ambiguity (one word “belief” having multiple well-defined senses), or ontological nebulosity?

Alief is more directly connected with both perception and action than belief is. Alief is more like knowing-how, and belief more like knowing-that.

In upcoming examples, we’ll see other contrasts between stated beliefs, such as political ideologies, and concrete activity. Perhaps the motivational structure that drives what the person actually does would be best analyzed in terms of alief? This is not how the concept is typically deployed, though.

Whether alief is a species of belief, or a separate but similar category, may be an arbitrary terminological choice. In any case, a worked-out epistemology of reasonableness should aim to cover all these phenomena.

Meat, abortion, and taxes

It seems that whether you believe the Skywalk is dangerous depends on whether you are about to step onto it. Believing often appears to depend on context.

  1. Surveys find that most self-described vegetarians in the United States eat meat.11
  2. Some anti-abortion activists obtain abortions for their daughters, or for themselves.
  3. Many well-off leftists advocate income redistribution, but do not voluntarily pay extra taxes.12

What are we to make of these?

It might be a failure of logic. Perhaps most vegetarians hold all three of the beliefs “a vegetarian is a person who does not eat meat,” “I am a vegetarian,” and “I eat meat,” and have failed to notice that these are contradictory. This does not seem very likely. Also, if this were true, you’d expect that if you pointed it out, they’d respond with surprise, regret, and perhaps gratitude. I have not observed this.

It might be hypocrisy. Perhaps anti-abortion activists don’t really believe it’s bad, well-off leftists don’t really want income redistribution, and vegetarians secretly think eating meat is fine. This may be true in some cases, but I don’t think it’s the main explanation. If it were, when people with apparently inconsistent beliefs are challenged, you’d expect shame and embarrassment. That’s not typical.

What you usually observe is anger, accompanied by a justifying account. “It’s not our Geraldine’s fault! She’s a good girl, and it was an accident”; “I’m not rich—it’s billionaires who should pay the taxes”; “I was being considerate around my meat-eating friends, and anyway turkey isn’t really meat, and it was organic!”

I think the anger is because you are perceived to be in bad faith in accusing them of hypocrisy. You are in bad faith because you know that the beliefs are not actually contradictory. You are pretending not to understand this, in order to try to shame them as a one-up status move, which is unreasonable and unethical.

The beliefs are non-contradictory because they are held in different ways and are only meant to be used in particular contexts to support particular sorts of thinking, feeling, and acting. Is there water in the refrigerator? The meaning of the question depends on what you are doing. Are you a vegetarian? The meaning of the question depends on what you are doing. Naturally, when you are not eating meat because you are a vegetarian, that’s what it means to be a vegetarian. But the point is to sincerely care about the suffering of all sentient beings, and therefore to be a good person. Vegetarianism is not an objective fact about transient acts; it is a persistent moral attitude, an identity, a social and cultural positioning. You may indeed, in the relevant sense, remain a vegetarian while eating a turkey sandwich; you still care, in the abstract anyway. There’s nothing hypocritical about that.

I’ve chosen particularly extreme examples here, for clarity and amusement value. Most people would consider it unreasonable for self-described vegetarians to eat meat daily. However, context- and purpose-dependent believing is common, and usually passes as reasonable.

Facts about the eggplant-sized world are only pretty-much-true, and only in-some-sense. Likewise, we also only pretty-much-believe things about the eggplant-sized world; and only in-some-sense, which varies according to circumstances. Believing is something we do when it’s useful, and not when it’s not. The rationalist idea that a belief is a static inscription somewhere in your brain, sitting like a book on a shelf when you aren’t using it, is more nearly true of memory for facts than of believing.13

The Aardvarks

If I ask “Do you believe the Aardvarks will win the tournament this year?”, the implication may be that the outcome depends on a vague supernatural force, which may come to the team’s aid or not, and to which you may have allegiance or not. (If the issue is pragmatic uncertainty, I might ask instead “Do you think the Aardvarks will win?”)

You may reply “Yeah, of course I believe, but it isn’t going to happen. They’d have to win every one of the rest of their games this season.”

What sort of belief is that? Is that one belief or two? Which do you “really” believe? Is this a meaningful question?

Belief often involves alignment, such as supporting a football team; and it often involves nebulous metaphysical forces and entities, as we shall see.

Our mission

We believe there is no limit to what any child can achieve. We believe every student can exceed all expectations. We believe in our mission to create an environment in which all children reach beyond dreams.

Perhaps not all the teachers believe all of that; but if most do, it seems fair for a spokesperson of the school to say that “we” believe it. That holds even if he or she does not personally believe it.

Perhaps no individual believes it; yet might it still be a sincerely held belief of the organization? If the organization acts on it, through policy and incentives—might this not be reasonably described as “believing,” just as much as other questionable cases we’ve discussed? As you may imagine, philosophers have argued with each other about that.14

Stating your believing is a concrete action you take in order to get practical work done. In this case, it is the actual work of the school. Expressing a “belief” is a powerful method for coordinating the interactions of funders, parents, teachers, and administrators. Its truth value—if it even meaningfully has one—may be entirely irrelevant.

It’s possible that no one involved has ever stopped to wonder whether it is realistically the case that there is no limit to what any child can achieve. It’s just what we are professionally accountable for believing, so its truth or falsity is not worth bothering to think about.

Raisins

The first time I tasted scones was elementary school, on Saint Patrick’s Day when a parent brought in a classic version. And they had raisins. Which I sort of believe shouldn’t even exist. Like why do we have to go and ruin grapes that way?
—Jessica’s Honey-Cheddar Scones With Black Pepper Recipe

I say “I sort of believe that” often. But maybe I’m weird? I did a web search to find out, and no, I’m perfectly normal, everyone says “sort of believe” all the time. (Jessica’s was the most fun example on the first search results page.)

A probabilistic rationalist would want to treat this as uncertainty about the sort-of-believed thing. Maybe, after numerically weighting all evidence, you think there is a 0.69352 probability that it is true that raisins shouldn’t exist? I think, rather, that “sort of believe” says there is some sense in which you believe they shouldn’t.
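For contrast, here is the probabilist’s representation rendered as a toy sketch; again this is my own hypothetical illustration, with an invented proposition and a made-up number.

```python
# Hypothetical illustration of the probabilistic-rationalist picture:
# "sort of believing" modeled as a single point credence in [0, 1].

credences: dict[str, float] = {
    "Raisins shouldn't exist": 0.69352,  # a suspiciously precise number
}

# The one number can represent uncertainty about whether the proposition
# is true. It cannot represent the other nebulosity: in what sense, for
# what purpose, and how seriously the thing is believed.
```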

“Sort of believe” means you recognize it is nebulous whether or not what you are doing should count as believing. Jessica would sort of prefer a world in which somehow raisins don’t exist, but would not support making them illegal, for example. And she’s not sure how serious she is about even that; she meant this to be funny, but it also has some sort of truth to it.

Frankly, I totally agree. Raisins are nasty. I hated them in elementary school and I still do. On the other hand, my spouse loves them, and I wouldn’t want to deprive anyone of pleasure, so… it’s complicated, I guess?

Guilt

I don’t believe in guilt, I believe in living on impulse as long as you never intentionally hurt another person, and don’t judge people in your life.
—Angelina Jolie

There’s a lot going on here.

In part, this appears to express moral beliefs. Philosophy has considered moral beliefs problematic for as long as there has been philosophy, because moral beliefs don’t seem to behave the same way as beliefs about, for example, where on earth you might have left your phone.15 What you believe about the phone should resolve into a fact; what Angelina Jolie believes about guilt is … a moral fact? Whether “moral fact” is a workable category is much debated.

I find it more promising to consider moral believings to be themselves a different sort of thing than to consider them the same kind of thing, just with a different sort of thing believed. Upcoming examples may make this more plausible.

It’s also not clear what moral propositions Jolie asserted here. Maybe “I don’t believe in guilt” is an abbreviation for “I believe the moral proposition ‘guilt is not morally required’.” Or it might be an ordinary material belief, “I don’t believe that feeling guilt leads to pragmatically good outcomes.”

But I think it’s more analogous to “I don’t believe in capitalism”: what’s meant is that she doesn’t endorse guilt. That is not a material statement or a moral one. It’s more like saying what sort of person she is. This seems particularly likely in the case of “I believe in living on impulse.”

Do you think she had a clear idea whether she was asserting a material proposition, a moral proposition, a lifestyle preference, or a personality trait? Do you think there is an objective truth about which of those things she meant, regardless of her intentions? Do you think better philosophical analysis, or psychological experiments on her, could—even in principle—sort this out?

I don’t. I think it’s all inherently nebulous.

Often, perhaps always, believing has a feeling component. “Feeling” both in the sense of an emotion and of a bodily sensation. There are feelings of confidence and doubt, and of pleasure, fervency, or dispassion, that accompany believing. Accompany—or are these part of the believing?

Perhaps they are the believing? Philosophers analyze belief as a “propositional attitude”: it is a stance toward something that might be true.16 Let us recognize that this “attitude” is not a mere assignment of a truth value; it’s a complex of emotions, associations, and actions.

Believing often means having feelings about an idea.17 Feelings are notoriously complex, vague, contextual, purpose-relevant, and changeable; beliefs, considered as feelings about ideas, likewise.

God

I believe God is Love.

Someone might reasonably demand “what does that even mean?” Apparently this is a common question, because if you do a web search for just “God is Love,” you get a lot of web pages whose actual title is “What does it mean that God is love?” Many different sorts of answers are possible. Here’s one:

Well, nobody really thinks there’s an old man with a beard in the sky, but you’ve got to believe in something. I choose to believe in Love.

If “I probably left the phone in the refrigerator again” is a belief, then whether or not you left the phone in the refrigerator is highly relevant, as a matter of material truth. What matter of truth (material or otherwise) is relevant to “I believe in Love”? Someone might reply “Love is the Higher Truth”; but that transports us to a metaphysical dimension in which “truth,” “belief,” and “meaning” itself cease to have any meaning.

“I believe in Love” seems to express a feeling, a personal alignment with some sort of nebulous attitude toward niceness in general, in which truth plays no role. It’s not something that could be either true or false in any sense.

However, if we’re charitable we’ll agree it’s commendable and not meaningless. If the function of believing is to get things done, then “God is Love” encourages us to be kinder. The next several sorts of believing, considered below, also seem to have no particular relationship to truth, but to be methods for getting various sorts of work done.

These are, interestingly, the sorts of things that normal people are most likely to call “beliefs.” “What are your beliefs?” often means “what religious group are you a member of?”

This is the opposite of rationalists’ story, in which “I believe my socks are blue” is the prototype, and they hope to treat “I believe God is Love” as a pathological variant. In everyday usage it seems more like “I believe God is Love” is the prototype, and the closer you get to “my socks are blue” the less people are willing to consider it a belief.

Many psychologists have considered religious “belief” to be a distinct phenomenon, on the grounds that, for instance, it’s immune to evidence and reason. It’s controversial whether it even counts as “belief” at all.18 On the other hand, many non-religious believings seem more similar to religious ones than material ones, in terms of how they work and what they do (rather than in terms of what they are about).

It is usually considered impossible to choose to believe false facts. If you can see you are wearing blue socks, you can close your eyes and try as hard as you like, but you can’t believe you are wearing red ones. On the other hand, some religious authorities say that you can, and indeed can only, choose religious beliefs; you cannot, or should not, be compelled by evidence, peer pressure, or any other factors. Does this differing role of choice depend on the sort of thing you believe, or on the type of believing you are doing? Does that question express a valid distinction itself, or are “the type of thing you believe” and “the way you believe it” inextricably entangled?

Our proponent of Love held that “you’ve got to believe in something,” although apparently they thought you can choose which. This seems more-or-less true in certain domains. You probably feel no compulsion to believe anything about the melting point of sodium borate. By contrast, if you do not feel compelled to hold religious beliefs, it’s likely you feel compelled to hold some other dubious ideology—a fervent political identity, or perhaps rationalism itself—in the way other people hold their religion.

Many people have a vague attitude that believing as such is virtuous, largely regardless of what you believe, much less whether it is true. The artwork at the head of this chapter—the word BELIEVE in twelve-foot-tall rusted-steel capital letters, given pride of place on the Truckee River waterfront in downtown Reno—expresses this attitude. “Believing,” on this view, is a nebulous stance of general emotional and communal positivity. Skepticism is disagreeable and anti-social.

Sushi

In ordinary usage, “belief” refers to abstract ideological alignments more often than factual knowledge. Religions are just one example. Often a “belief” sounds like a fact claim, but functions instead as a statement of emotional endorsement of the ideology overall.

Some people say they believe white people eating sushi is cultural appropriation. That’s another case in which “what does that even mean?” might be a common response. “It’s just racism! You need to read Edward Said.” This account is similar to “everyone knows that”: to question is unreasonable, specifically because it’s immoral and ignorant. You are accountable for believing the right things for the right reasons.

Ideological “belief” frequently serves social and emotional ends unrelated to the truth value of the claim.19

Claims whose truth value cannot readily be determined, or seem irrelevant in practice, are particularly valuable as social beliefs. These include claims where evidence is lacking, scarce, or disputed; where the truth does not obviously affect you or people you know; moral claims whose relationship to evidence is at best unclear; and claims whose content is highly nebulous (“what does that even mean?”).

The same ideological “belief” could be held in different ways—as different types of believings. Some believers might have sincere, thoroughly reasoned cases for it, with reasonable or even rational responses to all objections. Others might vociferously proclaim the racism of sushi, but be unable to explain. They seem to “believe” it just because that’s the locally popular thing to do. Does this count as belief? One could argue about that, about whether they “really” believe it, but doesn’t it seem better to recognize that the category “belief” is nebulous?

If someone asks “why do you believe that?”, you are accountable for coming up with an answer that counts as reasonable. Sometimes this could be valid evidential support, but it could be any of many other things. Different sorts of reasons count as more or less reasonable in different contexts. In some, supplying accurate objective evidence counts as “well-actuallying,” which is accountably unreasonable.

Reasonable believing is inseparable from reasonable reasons for believing. Believing that racism is wrong because it is bad for the economy is unreasonable (regardless of whether it is true). You are accountable for believing that racism is wrong for the right reason, which varies locally. Racism is wrong because—according to different communities—everyone is the same; it prevents the Word of God from spreading to men of all nations; it is contrary to compassion for all sentient beings; races don’t exist; it causes people to make judgements about others on the basis of culture; and so on. Which you are accountable for supplying varies.

Skill in reasonable argument is effective in ideological conflicts, which are mainly uninterested in truth. Reasonableness, accountability, and their negotiation are an inadequate method of inference when applied to abstractions, if the goal is to find truth.

That is what rationality is for.

The rationalist account of belief is woefully inadequate as a descriptive theory of everyday believing. However, it is invaluable as a normative framework, in those cases in which rationality should displace reasonableness—even though it is an inaccurate theory of even scientific knowledge.20

Chakra balancing flower essences

Your True Body is a beautiful pattern of light energy that flows inside your physical one. The chakras are seven energy centers arranged vertically along your spine, in the order of the seven colors of the rainbow. Each chakra also relates to a domain of experience, from coarse to fine: from safety and survival at the base of the spine, ranging upward through sex, feelings about the self, compassion, communication, creative intuition, and Cosmic Consciousness.

When your chakras are balanced, brilliant energy—vibrating colored light—flows easily through them, empowering the body, mind, and soul. Physical issues arise from energy imbalances. You can find their deeper meaning by exploring their chakra domain. For example, digestive problems are closest to the self chakra at the center of the torso, so you may need to work on self-esteem or integrity questions.

Chakra balancing essences are seven healing extracts from flowers that relate to the seven chakras. For example, the self chakra is yellow, and its essence is distilled from sunflowers, daylilies, and buttercups. Place four drops under your tongue to flood the chakra with vibrational frequency light energy.

Like most pseudoscience, this is entirely reasonable. There is a coherent account for every aspect. It makes perfect sense; you can go as deep as you like, understanding the functions of each chakra in great detail, and you won’t find any contradictions.

I’d guess that, no matter how much sense it makes, you don’t believe it, “because it isn’t true.” Someone who does believe, who hasn’t yet learned to think in the mode of scientific rationality, can only hear that as “I can get away with putting you down and ridiculing your beliefs, because the belief system you chose to express your values has lower social status than the one that expresses my values.” (“Now we see the violence inherent in the rationalist system!”)

Reasonableness overvalues reasoning at the expense of facts.21 Pseudoscience is mostly about making sense, whereas rationality is about peculiar uses of the word “true” that often make no sense. Much of the appeal of pseudoscience is the exhilarating sense of figuring things out, and coming to understand important things ordinary people don’t. That is illusory, although it is also one of the best motivations for doing science and other rational work.

We’ll see in Part Three that learning rationality requires a further leap into emotional willingness to accept senseless claims as “true”; and to reject reasonable ones as false, irrelevant, or meaningless. Much of science, “Western Medicine,” and other systems we critically rely on simply aren’t reasonable—just true.

The cannibal witch goddess

In some sense, I believe I met a thousand-year-old cannibal witch goddess in a Starbucks cafe in San Francisco in 1997. I wrote about that in “Meeting Naropa’s dakini” on my Buddhism for Vampires site.

Obviously, there are no thousand-year-old cannibal witch goddesses, so my belief is, in some sense, false. However, in the relevant sense, non-existence is not a defect.

Here I am speaking in the mode of myth, which is often the best way to address Big Questions of meaningness: purpose, ethics, and value. The cannibal goddess had much to say about that.

Myth works with narrative, imagery, symbols, and archetypes. Myth has its own logic—its own norms of inference—and its own sort of truth. These are not the same as those of rationality, of course, but also not the same as those of reasonableness. The meeting was entirely unreasonable, and I can give no acceptable account for it. Cannibal witches belong in fairy stories, not Starbucks.

Myth is unreasonable, but not necessarily irrational. It can be enormously valuable, but a book on technical rationality is not the place to explore its logic further.

I did need to say this much, so I could explain Bill Gates.

Bill Gates

A: Did you know Bill Gates invented the covid virus as an excuse to inject microchips in everyone so he could control their brains?
B: Where did you hear that?
A: It was on Facebook, in the Truth Community.
B: Do you believe it?
A: Well, yeah! Of course he would do that, it just makes sense! People need to wake up and learn the real truth of what’s happening! I mean, did you know the Iranians created covid to fix the US election?

I’ll discuss four aspects of this rant: reasonableness, truthfulness, myth, and their roles in political dysfunction.

Many of the examples in this chapter are invented, but these are exceptions. In 2020, startlingly many people say they believe the Bill Gates story, and/or that covid was developed as a bioweapon to control the American election. Nevertheless, these theories are not reasonable: a majority of Americans would (still) account them as “batspit insane.” Perhaps the stories count as reasonable within the Truth Community, but its members recognize that most people consider them irrational.

In this invented dialog, B sounds skeptical. “It just makes sense” is A’s vague retort that the theory is, in fact, reasonable; but he makes no attempt to provide a reasonable account, nor (presumably) could he. Instead he invokes the truth value of conspiracy theories in general.

Conspiracy theories are a response to reasonableness breaking down.

For reasonableness, “truth” is more about truthfulness than about facts. Factual truth was mostly irrelevant in the discussion of whether Yvonne should go on a date with Harold. Whether she was being truthful about being “totally uninterested in him” was highly relevant.

Earlier I wrote that “reasonableness depends on assumed good faith and moral trust; there’s no guarantee for those.” In 2020, there has been a catastrophic loss of moral trust in official authorities. We have all learned that—to a greater extent than was already obvious—they are not only disastrously incompetent, they lie pervasively, automatically, and without remorse.

So it may come to seem that the opposite of whatever authorities say is probably true. Then you join the Truth Community, devoted to finding the real truth about the UFO coverup, Bill Gates’ mind control scheme, and the lizard space aliens who control the United States government. What sort of truth is that?

The study “Dead and Alive: Beliefs in Contradictory Conspiracy Theories”22 found that people who believed Princess Diana faked her own death also often believed that she was murdered by MI6, the British spy agency; and that if someone believed Osama Bin Laden was already dead when US special forces supposedly killed him, they were more likely to also believe his death was faked and he was still alive.

Such contradictions are unreasonable, but that’s irrelevant for conspiracy theorists. All the beliefs support, and are supported by, the overarching meta-belief that official sources lie about everything. Which is… too nearly true.

“Bill Gates invented the covid virus so he could inject microchips in everyone to control our brains” draws on the logic of myth. Myths are not about what factually happened. That’s irrelevant—which is why it’s not a problem that covid was also bred in a secret underground lab by the Iranian military who injected mutated bat spit into the Ayatollah to make him immortal.23

The function of myth is to help us understand what sort of world we live in, at the level of Big Picture meaningness. For millions, Gates fuses two modern mythic archetypes: The Evil Capitalist and The Mad Scientist. His narrative roles explain our existential fears about giant secretive unaccountable corporations, artificial intelligence, unjust inequality, para-state overcontrol, incomprehensible biotechnology running amuck—in terms of cosmic heroes and villains, eternal morality, and struggles to the death with forces of Evil.

This is Saturday morning cartoon stuff. It’s mythology, but it is crude and dysfunctional mythology. Sophisticated mythologizing can be a power tool for positive social coordination. Unfortunately, Western culture rejected myth,24 by demand of both our main ideologies, rationalism and Christianity. They each insist instead on their peculiar concepts of Absolute Truth. A productive relationship with myth begins by understanding that those sorts of truth are irrelevant to it—and vice versa.

Climate change

Recently it has become compulsory to have a strong opinion about climate change. Two opinions are available; one of them is reasonable, and the other is unreasonable, irrational, and immoral. Which opinion is reasonable and which is irrational depends on your community. Climate change has been conscripted into the culture war, and your beliefs about it must conform to the catechism of your tribe.

If you are reading this book, you are probably a highly rational person. Statistically, that means your opinions about climate change are probably exceptionally irrational—in a formal, technical sense.

A series of studies have found that the better you are at formal reasoning, and the more you know about science, the more likely you are to have an entrenched opinion—one way or the other. Increased knowledge of the evidence does not, on average, result in convergence in views. You may assume that people with the opposite opinion are ignorant and irrational, but extensive evidence shows that you are mistaken about that.25

Since it is morally compulsory to have the locally-reasonable beliefs about climate change, we do whatever we can to ensure we do. The better we are at technical rationality, the more proficient we are at using it to find ways to dismiss discordant evidence and refute opposing arguments. This is identity-protective cognition: using rationality to maintain our membership in good standing in our tribe.26 This misuse of rationality is found across many other culture-war issues for which scientific evidence is relevant, such as evolution, nuclear power, and genetically modified foods.

These are situations in which the truth of abstract claims has enormous practical implications. Reasonableness is entirely inadequate. Rationality is vital. And yet it is failing.

“Well, those guys are doing it wrong!” Certainly at least one side is doing it wrong. But what “doing rationality wrong” means is subtle in the case of something as nebulous as the climate. It’s not like getting the wrong answer on a high school algebra test. Considering how complex the issues are, and how few people could possibly actually understand the science and evidence, almost certainly almost all non-specialists on both sides of the debate are doing it wrong. We distort rationality to serve merely-reasonable ends.

The rationalist myth of a correctness guarantee probably contributes to the conflict. Both sides think they are reasoning rationally, and so are confident in their beliefs. Still they get opposite results, so clearly rationality is not guaranteeing correctness. This does not mean that rationality is useless, or that there isn’t a correct answer; just that technical rationality doesn’t reliably result in truth. We’ll explore that in Part Three, and in Part Four we’ll see how meta-rationality may help with problems like this.

One method for identity-protective cognition is deliberately choosing not to learn about contrary evidence.27 A study gave subjects a choice of news articles to read, with headlines that suggested they would either confirm or contradict their beliefs about climate change. Subjects were more likely to choose the confirming articles, regardless of their scientific knowledge and reasoning ability.

There’s some good news, however: subjects who were more curious about science were much more likely to choose to read about surprising, ideologically uncomfortable evidence.28

  1. Or, sometimes, rationalists posit at most a handful of clearly-distinct types of belief, such as factual and moral beliefs.
  2. The standard rationalist theory of knowing-that is that it’s a type of belief, so what I say about the nebulosity of belief mostly applies to that as well. Reasonable inference consists of accounts and potentially their negotiation, as we saw in “You are accountable for reasonableness.” We’ve already seen the nebulosity and diversity of truths in “The truth of the matter” in Part One. In the eggplant-sized world, there are many sorts of truth, and the absolute sort rationalism posits is scarce.
  3. The usual rationalist theory is that an occurrence of believing is simply a static belief rising to the surface and becoming manifest. Since we’re not concerned with things-in-the-head, it doesn’t matter whether or not this is true. Some of the examples I’ll give tend to suggest it isn’t right, or isn’t the whole story, but nothing depends on that. Cognitive scientists might balk at studying believing rather than beliefs on the grounds that what we need to know is “the neural correlate of a belief.” That would be interesting, if there is such a thing, but current neuroscience is not capable of ascertaining it.
  4. For example, psychologists have put great effort into understanding why people have supernatural beliefs despite lack of evidence. One influential discussion is Scott Atran and Joseph Henrich’s “The Evolution of Religion: How Cognitive By-Products, Adaptive Learning Heuristics, Ritual Displays, and Group Competition Generate Deep Commitments to Prosocial Religions,” Biological Theory 5:1 (2010), pp. 18–30.
  5. There is a large, recent, controversial literature on “epistemics” in ethnomethodological conversation analysis, which does draw on collections of recorded examples. I hoped it would be relevant, but confess I did not find it enlightening. If you would like to dive in, Discourse Studies devoted two special issues to the topic, 18:5 (2016) and 20:1 (2018).
  6. The slightly different question “did you know you have two thumbs?” is silly because the answer is “obviously yes!” (assuming you do have two thumbs). However, it seems meaningful in a way “do you believe you have two thumbs?” doesn’t. Belief is a category error here; knowledge maybe not as much.
  7. This use of the word “fact” roughly accords with everyday usage, but I’m using it somewhat “technically” in giving it a non-obvious definition. “Fact” is given other technical definitions in philosophy, law, history, and other professional fields.
  8. My essay “Judging whether a system applies” examines ten arguments that George Washington was not the first President of the United States, from a meta-rational point of view. Meta-rationality is sometimes prior to evidence; it may consider how evidence is to be evaluated, rather than doing the evaluation. The essay mostly doesn’t examine any evidence, so I was able to preserve my ignorance.
  9. My current belief is that the tarot works, but it’s evil. It’s a pack of Platonic Ideals. If you have one, I strongly advise you to stab it, burn it, mix the ash with salt, and scatter it in running water.
  10. The term was introduced by Tamar Szabó Gendler in “Alief and Belief,” The Journal of Philosophy, 105:10 (2008), pp. 634–663.
  11. Ella H. Haddad and Jay S. Tanzman, “What do vegetarians in the United States eat?”, The American Journal of Clinical Nutrition, 78:3 (2003).
  12. For non-U.S. readers: everyone in America has to waste several days in April doing painful, pointless paperwork to fill in tax forms. Along with payment, the forms are due April 15th.
  13. As a two-bit cognitivist theory: perhaps a “belief” is a memory of a believing?
  14. The seminal paper in this area is Margaret Gilbert’s “Modeling collective belief,” Synthese, 73:1 (1987), pp. 185–204. In subsequent work, she has used this as the basis for an interesting theory of social action.
  15. For a review, as usual I recommend the Stanford Encyclopedia of Philosophy article, “Moral epistemology.”
  16. The term was invented by Bertrand Russell. It’s not like logical positivism is some sort of ancient history we can ignore because it’s irrelevant to contemporary rationalism…
  17. I learned this from meditation teacher Michael Taft. Pseudonymous twitter user NeuroMyths pointed me to David Hume: “An opinion or belief is nothing but an idea, that is different from a fiction, not in the nature or the order of its parts, but in the manner of its being conceived…. An idea assented to feels different from a fictitious idea, that the fancy alone presents to us: And this different feeling I endeavour to explain by calling it a superior force, or vivacity, or solidity, or firmness, or steadiness.” (A Treatise of Human Nature (1739–40), Book I, Part Three: Section VII. Emphasis in original.)
  18. One fierce thread of current academic debate began with Neil Van Leeuwen’s “Religious Credence is not Factual Belief,” Cognition 133:3 (2014), pp. 698–715. “I argue that psychology and epistemology should posit distinct cognitive attitudes of religious credence and factual belief, which have different etiologies and different cognitive and behavioral effects.” Many philosophers and psychologists expressed strong feelings about this in print. Their own beliefs about belief are apparently not neutral, but inextricably entangled with political and religious opinions. See, for instance, Maarten Boudry & Jerry Coyne, “Disbelief in belief: On the cognitive status of supernatural beliefs,” Philosophical Psychology, 29:4 (2016), pp. 601–615.
  19. Philosophers, psychologists, and sociologists keep rediscovering this insight. The earliest publication I’ve found is Robert P. Abelson’s “Beliefs Are Like Possessions,” Journal for the Theory of Social Behaviour, 16:3 (1986), pp. 223–250. I expect there are earlier examples. Important, more recent work on the topic has been done by, for example, Jonathan Haidt, Dan Kahan, and Kevin Simler and Robin Hanson. I also wrote about this in “‘Ethics’ is advertising,” based loosely on Geoffrey Miller’s Spent: Sex, Evolution, and Consumer Behavior.
  20. See Part Three, and also Catarina Dutilh Novaes’ The Dialogical Roots of Deduction: Historical, Cognitive, and Philosophical Perspectives on Reasoning.
  21. Ironically, reasonableness is closer to the old-fashioned sense of “rationality,” when it was opposed to “empiricism,” than scientific rationality is.
  22. Michael J. Wood, Karen M. Douglas and Robbie M. Sutton, in Social Psychological and Personality Science 3:6 (2012). This study has been extensively footnoted in 2020 academic studies of covid conspiracy theories.
  23. I just made that one up. But it’s exactly the sort of thing The Cruel Oriental Tyrant always commands The Craven Royal Sorcerer to do, so I’m sure you know someone who would believe it if you told them.
  24. Myth persists in genre fiction, which may yet save the world. Inspiring science fiction in which competent, rational heroes accomplish important purposes by being competent and rational (rather than by being The Good Guys) could intervene in the current Western culture of complacent helpless cynical depression in the face of systemic failure and looming dystopia.
  25. The seminal paper in this area was Kahan et al., “The polarizing impact of science literacy and numeracy on perceived climate change risks,” Nature Climate Change, 2:10 (2012), pp. 732–735. Many later studies have replicated and extended the findings. They have been found to hold across several different, well-validated measures of rationality.
  26. Dan M. Kahan, “Climate-Science Communication and the Measurement Problem,” Advances in Political Psychology, 36:1 (2015). Kevin Simler’s blog post “Crony Beliefs” (Melting Asphalt, 2016) is an engaging and accessible introduction to similar ideas. He proposes methods for finding your own “crony beliefs” that you hold for their social value rather than because they are true.
  27. For a review, see Jennifer Jacquet, Monica Dietrich, and John T. Jost, “The ideological divide and climate change opinion: ‘top-down’ and ‘bottom-up’ approaches,” Frontiers in Psychology, 5 (2014), p. 1458.
  28. Kahan et al., “Science Curiosity and Political Information Processing,” Advances in Political Psychology, 38:1 (2017), pp. 179–199.