Total understanding—the feeling that everything makes sense—is one of the most seductive promises of eternalism. The feeling is wonderful, but unfortunately the understanding is illusory.
Recent research shows how illusions of understanding arise, what their effects are, and how they can be dispelled. Most concretely, this includes studies of illusory understanding of everyday physical causality: common natural phenomena and household devices. That isn’t directly relevant to Meaningness. However, the same patterns of illusory understanding also apply to issues of meaningness, such as ethics and politics.
Understanding and explanation
Certainty, understanding, and control are closely linked promises of eternalism.1 If you are certain an explanation is correct, you have a stronger feeling of understanding. If you have an explanation for why something means something, it increases your certainty that it does mean that. If you understand something, you feel that you can control it. Psychology experiments show that people feel they can control events they definitely can’t, so long as they understand them.
Personal accounts of conversion to communism—an eternalist political ideology—provide fine examples of the emotional power of illusory understanding. Conversion brings newfound optimism, joy, insight, and all-encompassing comprehension.
The new light seems to pour from all directions across the skull; the whole universe falls into pattern like the stray pieces of a jigsaw puzzle assembled by magic at one stroke. There is now an answer to every question, doubts and conflicts are a matter of the tortured past—a past already remote, when one had lived in dismal ignorance in the tasteless, colorless world of those who don’t know.2
After all, every minute aspect of daily life is caught up in systems of material production, and therefore can be subjected to Marxist analysis. You wait at the bus stop as the scheduled times for three buses go by, and then two appear all at once. Why? Because of capitalist exploitation. Everything is because of capitalist exploitation.
(Exercises for the reader: (1) Figure out why capitalist exploitation explains this common pattern of bus arrivals. (2) Figure out a better, non-Marxist explanation.3)
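If you'd rather skip the exercise, the standard non-Marxist explanation ("bus bunching," mentioned in the footnote) can be sketched as a toy feedback model. Everything below—the two-bus model, the parameter values, and the function name—is my own illustrative assumption, not anything from the research discussed here:

```python
# Toy bus-bunching sketch (illustrative assumptions only): two buses
# circle a route with total cycle time T. The trailing bus's headway is
# h, so the leading bus's headway is T - h. At each stop, a bus's dwell
# time is proportional to the passengers accumulated since the previous
# bus passed—that is, proportional to its own headway. The gap therefore
# feeds back on itself:
#     h_next = h + k * (h - (T - h))
# Even spacing (h = T/2) is a fixed point, but an unstable one: any
# perturbation grows until the buses arrive together.

def simulate_headway(h0, T=20.0, k=0.05, stops=60):
    """Return the trailing bus's headway after each of `stops` stops."""
    h = h0
    history = [h]
    for _ in range(stops):
        # The trailing bus dwells in proportion to h, the leading bus in
        # proportion to (T - h); the gap grows by the difference.
        h = h + k * (h - (T - h))
        h = max(0.0, min(T, h))  # buses can't pass each other in this toy
        history.append(h)
    return history

even = simulate_headway(10.0)       # perfectly even spacing: stays at 10.0
perturbed = simulate_headway(10.5)  # small delay: grows until clamped at 20.0

print(even[-1], perturbed[-1])
```

The point of the sketch is only that a small random delay is self-amplifying: a late bus picks up more passengers, so it falls further behind, while the bus behind it picks up fewer and catches up. No exploitation required—just positive feedback.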
Marxism, like Catholicism, is an extremely well-worked-out system. Countless brilliant intellectuals, working for centuries, have already figured out explanations for everything. Well, almost everything. If you are willing to swallow a few camels, Jesuits will strain out all the gnats for you. In other words, if you accept a few giant absurdities, they can give coherent, logical explanations for all details.
Newer, less-elaborated ideologies—UFO cults, for example—may provide a strong, if vague, feeling of understanding. However, they have few explanations to back that up. This is one reason they mostly only work as closed subcultures. If you are a communist or Catholic, you can talk to outsiders without your belief system collapsing, because you have answers to their objections.
Illusions of understanding: everyday causality
Leonid Rozenblit and Frank Keil, in an influential 2002 paper, showed that people believe they understand familiar manufactured objects (such as can openers) and natural phenomena (such as tides) much better than they actually do. The researchers had subjects rate their understanding of various objects and phenomena, and then asked them to give an explanation. After that, the subjects rated their own understanding again. Their second ratings were much lower. Most subjects were surprised to find, after trying and failing to explain, that they understood much less than they had thought.
You might like to try this before reading on. On a scale of 1 to 7, how well do you think you understand a can opener? 1 means you know what it is for, but have basically no idea how it works; 7 means you know everything that anyone would know, short of being a can opener designer.
Now, explain how a can opener works. You could write this out in words, or draw a can opener from memory. Label the parts with what they do. (No fair looking at the picture at the head of this page! And for a fair trial, you need to do this on paper or screen; as we’ll see, it’s almost impossible not to cheat if you do it in your head.)
When you are done: has your estimate of your depth of understanding changed?
Now go look at an actual can opener, and at least put it up against a can as if you were about to open it. Turn the handle and watch how the mechanism moves. Then re-rate your written understanding. And how well do you think you understand now, after examining the reality?
I did this after reading the Rozenblit paper, and was surprised to find that my explanation had some details wrong, and significant missing parts. I also discovered, after playing with two ordinary manual crank-turning can openers, that they worked on completely different principles. I’ve used both types a million times, and never noticed this, because you use them exactly the same way. My rating of my original understanding went from 6 to 3. I’m estimating my new understanding at 6, but I’m worried I’m still overconfident!
It turns out that for most everyday objects, we have some vague mental image, but not an actual causal understanding. Here’s the Rozenblit paper:
Most people feel they understand the world with far greater detail, coherence, and depth than they really do. … [They] wrongly attribute far too much fidelity and detail to their mental representations because the sparse renderings do have some efficacy and do provide a rush of insight.
(“A rush of insight”… Remind you of something? A spectre haunting Europe, perhaps?)
We think we understand a can opener because we can play a mental movie of using one. That feels as though it is almost as good as actually watching. But:
The mental movie is much more like Hollywood than it is like real life—it fails to respect reality constraints. When we try to lean on the seductively glossy surface we find the façades of our mental films are hollow cardboard. That discovery, the revelation of the shallowness of our mental representations for perceptually salient processes, may be what causes the surprise in our participants.
Unless you are a kitchen tool engineer, there’s no reason to actually understand how a can opener works. What everyone else needs is to know (1) what it is for and (2) how to use it. So most of the time “understanding” is really “comfort with.” It means you know how to interact with it well enough to get by, and you are reassured that it is not going to explode without warning. This comfort is provided mainly by familiarity, not understanding. Having used a can opener many times convinces you that you understand it, because you can almost always make one work, and you almost never cut yourself. Tellingly, Rozenblit and Keil found that their subjects did not overestimate their “how-to” knowledge, only their “how-it-works” knowledge.4
Learning how things work is usually a waste of time, from an evolutionary perspective. And total understanding is never even possible. The “illusion of explanatory depth” may have evolved to tell us when to stop:
We have to learn to use much sparser representations of causal relations that are good enough to give us the necessary insights: insights that go beyond associative similarity but which at the same time are not overwhelming in terms of cognitive load. It may therefore be quite adaptive to have the illusion that we know more than we do so that we settle for what is enough. The illusion might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.
This doesn’t always work right; our brains’ guesses about when to stop can go wrong. Education theorists find that students often stop trying to understand too soon, when they merely feel “familiar” with the material, because the modern classroom demands a depth of understanding beyond what would have been useful to our ancestors. Conversely, my interest in Precambrian evolution is probably a pathological result of mild autism—a brain abnormality.
If you look closely at a can opener in operation, you can see immediately how it works. Then you forget as soon as you look away. Knowing that you could figure out how something works, whenever you need to, is a good reason not to bother until then—and not to remember afterward. Rozenblit and Keil hypothesized that our brains confuse vague visual memory with understanding, and that this was the source of the illusion they found.
This was confirmed in Rebecca Lawson’s study of people’s understanding of bicycles. She found that most people have no clue what a bicycle looks like, much less how one works, even if they own one. (I know that sounds implausible; the results in the paper are dramatic.) We all have a memory of seeing a bicycle, and on that basis think we know what one looks like—but few people can draw something that’s even approximately similar. The bicycle-like things they do draw could not possibly work.
You might like to try drawing a bicycle before reading on. Don't bother being artistic; the picture can just show how the main parts (handlebars, frame, seat, pedals, chain, wheels) attach to each other.
Lawson found that people can easily understand how a bicycle works, and draw one accurately, if there’s one in front of them. She writes:
We may be using the world as an “outside memory” to save us from having to store huge amounts of information. Since much of the information that we need in everyday life can be found simply by moving our eyes, we do not need to store it and then retrieve it from memory.
(This point will be important, by the way, in my explanations for how meaning works, much later in this book.8)
Here’s a bicycle drawn by someone who rides one most days:
This “bicycle” couldn’t turn, because the front wheel is connected to frame struts that form two sides of a rigid quadrilateral. It also mistakenly has the chain going around the front wheel as well as the back one, which again would make turning impossible (among other problems, with gearing for instance).
Perceptual understanding isn’t possible for all devices—for example if they have hidden parts, or are very complicated, or run on invisible forces such as electricity.5
So what about meaning, which is also invisible?
Illusions of understanding: ethics
Most people think they understand ethics reasonably well. However, their ethical explanations often don’t make sense; they depend on weird assumptions, use dream logic, or skip over major issues. My story “The puzzle of meaningness” includes some humorous examples. Our feeling of understanding ethics is largely illusory; we don’t notice that our own ethical explanations are incoherent.
The following question is a classic of moral psychology research:
A woman was near death from a special kind of cancer. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman’s husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000, which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: “No, I discovered the drug and I’m going to make money from it.”6 So Heinz got desperate and broke into the man’s laboratory to steal the drug for his wife. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?
You might like to write out your own brief explanation before reading on.
This exercise may seem risky or embarrassing. A feeling of moral competence is often close to the heart of one’s sense of self. However, most experts say there is no right or wrong answer, although there are interestingly different kinds of answers.
It might also help to know that even professional theologians and moral philosophers are often unable to give coherent ethical explanations. Despite fancy footwork, theological answers boil down to “because that’s what God wants,” with no clear reason He wants that. Secular academic theories of ethics are all known to be wrong. Moral philosophers must support some theory, arguing “this is less bad than the others,” but most admit that their professional expertise is rarely useful when dealing with everyday ethical problems. Evidently, professional ethicists are afflicted with a powerful illusion of explanatory depth.
Why do people think they mostly understand ethics, if they can’t explain it coherently? As with can openers, we know what it is for, and we know how to use it well enough to get by. The feeling of understanding is an illusion based on familiarity and comfort. We know through experience that we can navigate ethical issues reasonably reliably, and they are not going to suddenly explode. As with devices, this is adequate for most people most of the time.
Ethics sometimes does explode on you—for example, if you are caught having an affair. It’s not just that there will be bad consequences; there will be many difficult choices and judgements in sorting out the mess, and the inadequacy of your ethical understanding may become obvious. Sometimes such crises lead to psychological growth, including developing a more sophisticated ethical understanding.
Research in moral psychology has found that people’s ethical understanding passes through a predictable series of stages. The stages are defined not in terms of what people consider right or wrong, but what sorts of explanations they use to justify those judgements. The Heinz story was invented by Lawrence Kohlberg as a way of eliciting such explanations. He assigned them to six stages of moral development. There are some problems with Kohlberg’s theory—mainly, it is too rationalistic—but the conclusion that people advance from lesser to greater ethical understanding seems correct, and important.
Disquietingly, research has found that most adults get stuck somewhere in the middle of the developmental sequence. The illusion of ethical understanding is one reason they may not progress. As with bicycles, if you think you know how ethics works, and can use it well enough most of the time, there seems no reason to try to understand better.
Robert Kegan has extended and improved Kohlberg’s framework. He describes an ethical equivalent to Rozenblit and Keil’s discovery that attempting to explain things can reveal one’s own lack of understanding. The illusion of understanding sometimes dissolves when you have to give an ethical explanation.
Realizing your explanations are inadequate opens the possibility of a forward ethical stage transition. This happens only rarely, however. One reason is that it is easy to recognize that your understanding of a bicycle is wrong, by visually comparing your drawing with a real one. It is much harder to reality-test moral understanding, because ethics are far more nebulous than bicycles.
Eternalism, by promoting a reassuring illusion of ethical understanding, hinders moral development. This is most obvious in religious fundamentalism, which denies the nebulosity of ethics, stranding you in a childish moral understanding. Rationalist eternalism typically fixates some moral theory that is also obviously wrong, but does have some coherent systematic justification. These are adolescent rather than childish; utilitarianism is a common example.
Fortunately, eternalist ethical systems have become less credible, so it’s easier to advance to more sophisticated understandings. Unfortunately, “easier” is not “easy,” and ethical anxiety—a sense of being lost at sea when it comes to ethics—is increasingly prevalent. That is a major topic of the upcoming chapter on ethics.
Illusions of understanding: politics
Contemporary “politics” is mostly about polarized moral opinions.7 It is now considered normal, or even obligatory, for people to express vehement political opinions about issues they know nothing about, and which do not affect their life in any way. This is harmful and stupid.
“Political Extremism Is Supported by an Illusion of Understanding” (Fernbach et al., 2013) applies the Rozenblit method to political explanations. After subjects tried to explain how proposed political programs they supported would actually work, their confidence in them dropped. Subjects realized that their explanations were inadequate, and that they didn’t really understand the programs. This decreased their certainty that they would work. The subjects expressed more moderate opinions, and became less willing to make political donations in support of the programs, after discovering that they didn’t understand them as well as they had thought.
Fernbach et al. found that subjects’ opinions did not moderate when they were asked to explain why they supported their favored political programs. Other experiments have found this usually increases the extremeness of opinions, instead. Generating an explanation for why you support a program, rather than of how it would work, leads to retrieving or inventing justifications, which makes you more certain, not less. These political justifications usually rely on abstract values, appeals to authority, and general principles that require little specific knowledge of the issue. They are impossible to reality-test, and therefore easy to fool yourself with.
Extreme, ignorant political opinions are largely driven by eternalism. I find the Fernbach paper heartening, in showing that people can be shaken out of them. Arguing about politics almost never changes anyone’s mind; explaining, apparently, does.
This suggests a practice: when someone is ranting about a political proposal you disagree with, keep asking them “how would that part work?” Rather than raising objections, see if you can draw them into developing an ever-more-detailed causal explanation. If they succeed, they might change your mind! If not—they might change their own.
How does eternalism create illusions of understanding?
Eternalism promises to make everything make sense. It sometimes does deliver an illusion of universal understanding (as in the account of conversion to communism, above). Usually it can’t quite manage that, because almost all eternalism is wavering. The curtain that is supposed to conceal the illusionist is translucent. Most people realize they don’t understand everything. Still, eternalism does trick most people into believing they understand many things they don’t.
Somehow, we don’t notice that our explanations don’t make sense. How does eternalism manage that? I don’t have a complete answer, but I do have pieces of an answer. In fact, there is no one answer; eternalism has a big bag of tricks. The main part of this chapter describes a series of eternalist ploys: ways of thinking, feeling, talking, and acting that stabilize the stance. All of these are tricks for deliberately not understanding meaningness.
The rest of this page discusses some other mechanisms that don’t fit into this “eternalist ploy” category.
Research like Lawson’s bicycle experiment shows that genuine understanding usually depends on perceptual support. It comes from exploring concrete examples by looking and poking. To some extent we can transfer that understanding by mental visualization; but as Rozenblit found, this is sketchy.
Direct perceptual support is generally impossible for meaningness (ethics, purpose, and so on). However, we do use mental images to help understand these issues too. Thinking about the Heinz story, I generated an image of his children watching their mother dying, for example. Likewise, when thinking about life purpose, we fantasize scenes of accomplishment, or imagine dying without having gotten anything much done.
These images are emotive, but probably mostly unrealistic and unhelpful. (The Heinz story didn’t even mention children, for example; maybe he didn’t have any!) I suspect eternalism leads us to take these mental movies much more seriously than they deserve. (How? I’m not sure.)
Mystical experiences of total understanding
People in non-ordinary states, produced by psychedelic drugs or meditation, often proclaim sudden, unshakable, universal understanding. They can rarely, if ever, explain their supposed understanding. I think these are probably mostly illusory. Such experiences may give genuine but ineffable insight into some things. I’m reasonably sure they involve no actual understanding of most things.
Eternalist systems are often led by people who have such visions; but most of their adherents don’t. Ordinary eternalists have to rely on the cosmic understanding of special people.
Socially distributed (mis)understanding of meaning
Understanding of the physical world is socially distributed. You don’t need to understand how to build a bicycle frame, because there are people whose job that is, and you can rely on their understanding.
You may remember the story of Onan, who spilled his seed on the ground. You may also remember that the story is not about masturbation, but coitus interruptus. (That’s confusing.) You may recall that masturbation is a sin against chastity, and that the only proper use of the genitals is procreation. Or maybe also conjugal love.9 Why?
This is a pesky, impertinent question. You are (or should be) quite certain that you are correct, even if you can’t give a coherent explanation.
You don’t need to be able to give an explanation, because you know that if you go to a Jesuit, he will (or should)10 be able to explain in detail, with extensive logic, and answers to all objections. Your certainty can rest on your knowledge that an explanation is available, without having to know the details.
Although... for nearly everyone, it’s obvious that whatever explanation a priest gives for the evil of masturbation, it will be nonsense. It will be verbiage that sounds like explanation, but isn’t. Only loyalty to the eternalist system—the will to believe—could fool anyone into thinking it’s meaningful.
The same is true for most political opinions. Individuals are usually incapable of producing coherent explanations; but why should they?
You have heard experts on TV explaining Benghazi and Keystone, and they seemed to make sense; and you know they are good and trustworthy and smart people, because they share your fundamental values. You might not be able to explain those issues in detail, but you are confident that they can. But perhaps those explanations are about as accurate as the priest’s?
Agreeing to agree about meanings
Because eternalist delusion is so desirable, people collude to maintain it. We all agree to agree—vociferously—to whatever meanings our social group comes up with. That is a genuinely compassionate activity. We all want to save each other from nihilism.
Agreeing violently about political opinions is a major social activity. Groups of friends get together and regurgitate political explanations they’ve heard on TV or read on the web. This reinforces certainty and the illusion of understanding.
Talent for regurgitation gives you social prestige; people think it’s an important life skill. Imagine—if you got good enough at it, you could go on TV and vomit opinions in front of millions of people! Mostly, though, this is a collaborative, improvisatory, small-group activity.
Similarly, ethical explanation is mainly a social activity. Moral philosophers want ethics to be about rational individual decision-making, but it mostly isn’t. (This is one reason academic ethics is so useless.)
Research by Jonathan Haidt and others shows that ethical explanations are mostly used to justify actions we have taken or want to take. This “social intuitionism” is a descriptive theory, about how ethics works in practice. It’s not a good account (even according to Haidt) of how ethics ought to work.
In the ethics chapter, I’ll ask “what is ethics for?” if not social justification, and not rational individual decision-making. I’ll argue that genuine understanding is genuinely valuable.
- 1. Nevertheless, certainty, understanding, and control all seem to be separate innate psychological drives. We seek certainty, even when understanding is entirely unavailable. We seek understanding, even when control is obviously impossible. Personally, I love understanding things like supernovas and Precambrian evolution, even though there’s nothing I can do with them.
- 2. The God That Failed is a famous collection of accounts by Western intellectuals explaining why they converted to communism and later became disenchanted. I’m relying here on the summary in Baumeister’s Meanings of Life, p. 299. The quote is from Arthur Koestler, p. 23 in The God That Failed according to Baumeister; italics in original.
- 3. This is called “bus bunching”; the Wikipedia has a fascinating explanation. The dynamical chaos theory used there is also important in my explanation of how meaning works.
- 4. This result is actually a bit surprising, because the psychological literature generally finds that most people are overconfident about most things. Rozenblit and Keil did find overconfidence effects for some other types of knowledge, such as geography, but overconfidence about causality was much larger.
- 5. Rozenblit and Keil found preliminary evidence that subjects were less likely to overestimate their understanding in these cases. I don’t know whether this has been confirmed by subsequent studies.
- 6. If you know the least bit about drug development, this story will seem absurdly unrealistic. That annoys me. Maybe this absurdity is not relevant to the essential ethical dilemma, which you are supposed to somehow abstract from the details. However, I worry that unrealistic scenarios—the famous “trolley problems” are another example—give misleading results. In fact, I suspect artificial “thought experiments,” even if they weren’t obviously silly, may be worse than useless for understanding ethics. I’ll suggest later that observation of real-life ethical deliberation and action in “ecologically valid conditions” is needed instead.
- 7. I’ll analyze this important, unfortunate development repeatedly, at various points later in the book.
- 8. This will be central in my eventual explanation for how meaningness works. Interestingly (to me), my PhD thesis—titled Vision, Instruction, and Action—is also about perceptual understanding during improvised activity, and socially distributed understanding (communicated through over-the-shoulder instructions) of that activity.
- 9. See Catechism 2396 if you are in doubt.
- 10. Disastrously, some priests have gotten wobbly on masturbation, and are leading millions into damnation.