Meaningness mainly re-presents material well-understood elsewhere. I have gathered ideas from several fields and explained them in terms a different audience will understand. Since most of the book is not yet written, you may want to go back to my sources to fill the gaps. You may also want to know where the ideas came from, to understand them in their original context; or go deeper and further than Meaningness ever will.
This page describes some of the texts that have most influenced the work, with brief explanations of how they are relevant. Some are articles, but most are full-length books. (I’ve linked those to Amazon, who send me about $3/day in exchange. So they want me to say “As an Amazon Associate I earn from qualifying purchases.”)
I have roughly categorized them by subject. I plan to add more texts, and more categories, as work on Meaningness proceeds. Here are links to the current categories:
- Fundamental texts
- Rationality and meta-rationality
- Computation, AI, and cognitive “science”
- Society, culture, and politics
Fundamental texts
These are all brilliant, major works. Historians agree that they represented significant intellectual breakthroughs at the time.
Most are also extremely hard going. That is at least partly because their authors were working at the edge of what was thinkable at the time, and struggling to explain insights that were at the limits of the authors’ own understanding. In several cases, I recommend alternative, secondary sources that re-presented these breakthrough works in later eras, when the ideas had been worked through and became better understood. Reading the originals is valuable, but may prove impossible without a guide.
Friedrich Nietzsche introduced the problem of nihilism and eternalism—the fundamental theme of Meaningness—to Western thought. It’s not much of an exaggeration to say that, ever since, the Continental branch of philosophy has consisted of working through Nietzsche’s ideas.
Nietzsche is fun and easy to read. Since he was working at the edge of the thinkable, much of what he says is obviously wrong. It is often unclear whether he made an actual mistake, or was joking, or was insane; or whether he wasn’t sure—and didn’t care—if he was serious.
I’ve read almost all his books, and recommend almost all of them, although his last few works are the best.
My favorite is Twilight of the Idols, or, How to Philosophize with a Hammer, which summarizes much of his thought. It is probably his most straightforward presentation of nihilism and eternalism. The single-page chapter “How the ‘True World’ finally became a fable” is an intense summary of his summary—and also of the whole Western philosophical tradition and what is wrong with it.1 He thought he was about to work out the solution to nihilism, and proclaimed it as:
Bright day; breakfast; return of bon sens and cheerfulness; Plato’s embarrassed blush; pandemonium of all free spirits.
Unfortunately he had a total, permanent mental breakdown a few months later, and so never wrote up the answer.
Nietzsche’s most famous work is Thus Spake Zarathustra, which is also the only one I wouldn’t recommend. It’s a tedious melodramatic parable, and the thinking is atypically muddled. People seem to like it because it’s a story.
In Buddhist philosophy, the problem of nihilism and eternalism goes back a couple thousand years. I use Buddhism’s word “eternalism” because there’s nothing equivalent in Western philosophical language. That’s because eternalism has been Western thought’s main topic from the beginning. Fish have no word for water, and the two main Western ideologies—Christianity and rationalism—are both eternalistic. India had both eternalistic and nihilistic ideologies, and Buddhism positioned itself as the “neither of the above” alternative.
Unfortunately, the Buddhist analysis of nihilism and eternalism is a godawful mess. The first major author, Nagarjuna, was severely confused, but he was so extremely holy that you aren’t allowed to contradict him. So there are two thousand years of brilliant thinkers trying to understand and explain the issues without quite saying that Nagarjuna got everything wrong. Despite that constraint, they made considerable progress over the centuries.
The Nyingma branch of Buddhism, to which I belong, considers Ju Mipham’s Beacon of Certainty the definitive text. I think it gives a simple, obviously correct solution to the problem of eternalism and nihilism that Nietzsche first raised in the West. My original idea for Meaningness was to write a short, straightforward explanation of Mipham’s answer. I have failed spectacularly: Meaningness is several hundred thousand words so far, and is maybe 15% finished.
The Beacon of Certainty may be the most difficult book I’ve ever read. I absolutely do not recommend it—although I’m including it here because it is the root text for Meaningness. To make any sense of the Beacon, you need to have spent years studying less-difficult Buddhist texts.
Unfortunately, there is no less-difficult text I can recommend.2 The whole field sucks. Your best bet is to get oral explanations from someone who has mastered it. They are sometimes willing to say things in person that they wouldn’t dare write.
Mipham and Nietzsche wrote their major works around the same time in the late 1800s. Their life stories and works are parallel in fascinating ways.3 They both wrote abstruse academic philosophy and they both wrote wild, prophetic, heterodox quasi-religious allegories. I wish I could introduce them to each other.
Martin Heidegger’s Being and Time was the first and most important Western attempt to solve Nietzsche’s problem of nihilism and eternalism.
The first part of Being and Time analyzed what life is like in a completely new way, which I think points toward the solution. Heidegger abandoned the fundamental eternalist assumption that meaning must come from some ordering principle such as God or rationality. He showed how life is structured instead by “circumspection,” a non-dual awareness in which everyday circumstances show up as always already meaningful in our interactions with them. This understanding of meaning as neither objective nor subjective, but interactive, is fundamental to Meaningness.
Then Heidegger took a wrong turn. The further analysis of meaning he developed in the second half of the book was definitely mistaken (as he later acknowledged).
Being and Time was probably the most influential philosophy book of the 20th century. Jean-Paul Sartre completely misunderstood it and based his Being and Nothingness on his further distortion of Heidegger’s most-mistaken part. That was the root text for mid-20th-century existentialism, and a lot of subsequent pretentious and harmful intellectual nonsense. More productively, Michel Foucault—discussed below—mainly wrestled with Nietzsche’s and Heidegger’s problems.
Being and Time is extremely hard going—up there with the Beacon of Certainty. I’d recommend reading Hubert Dreyfus’ Being-in-the-World first or instead. That is an explanation of the first, accurate part of Heidegger’s book. It’s not easy, but it’s much easier than Being and Time itself.
I’ve written about how Heidegger and Dreyfus influenced Meaningness briefly here.
Ludwig Wittgenstein wrote two main books. His first, Tractatus Logico-Philosophicus, was one of the central texts of logical positivism—the major rationalist-eternalist movement of the first half of the 20th century. Later, he realized that couldn’t work, and wrote Philosophical Investigations to explain why. The book is probably the most influential in all of analytic philosophy. (The two major schools of 20th century Western philosophy were the Continental (French and German) and analytic (English-speaking) traditions.)
Working in parallel with Heidegger, but independently, Wittgenstein analyzed everyday practical activity, and came to the same conclusion. Meaning resides in interaction, rather than in our heads or in objects.
In a weird parallel, just as 20th century existentialism began as a drastic misunderstanding of Being and Time, analytic philosophy not only missed Wittgenstein’s main point, but has mostly promoted its exact opposite. Philosophical Investigations argues that language acquires its nebulous meaning only in everyday practical use, and that philosophical problems mainly derive from taking it out of context. Analytic philosophy has tended instead to attempt to eliminate nebulosity by taking language out of context, in order to figure out precisely what it should mean. Wittgenstein was too radical for his age, and his supposed followers headed straight back to the apparent comfort of rationalist eternalism.
Philosophical Investigations is difficult, but not impossible to read if you have a basic knowledge of 20th century philosophy. I don’t know of a good summary or introduction to it. (If you do, please leave a comment below!) I would say that if you are going to read only one of this or Dreyfus’ Being-in-the-World, go for Dreyfus. It’s less difficult, more clearly relevant to current concerns, and—this is a controversial call—Heidegger is more important than Wittgenstein.
Harold Garfinkel founded the discipline called ethnomethodology, which is the empirical study of everyday practical activity. Like Heidegger and Wittgenstein, Garfinkel found that meaning lives in interaction. But whereas they derived their conclusions from informal reflection on personal experience, ethnomethodology observes other people doing meaningful things in meticulous detail—typically through obsessive analysis of video tapes. Particularly interesting for me are the many ethnomethodological studies of laboratory scientists running experiments.
Garfinkel’s major work is Studies in Ethnomethodology. It’s completely incomprehensible until you have got the main ideas from a less dense text by someone else. John Heritage’s Garfinkel and Ethnomethodology is the best theoretical introduction, although it’s still not easy, and does not cover all important aspects of the field. Kenneth Liberman’s More Studies in Ethnomethodology is a collection of examples, and could be a good way to get into the field bottom-up. Lucy Suchman’s Plans and Situated Actions—discussed below—might be an alternative starting point, uniquely accessible to the STEM-educated, although it was not intended for the purpose.
Garfinkel was probably strongly influenced by Heidegger and Wittgenstein,4 although he didn’t acknowledge that. He was a coyote trickster… Carlos Castaneda wrote his first two books of fictional psychedelic anthropology as his Bachelor’s and PhD theses under Garfinkel’s supervision. Some scholars believe Castaneda’s imaginary guru, the “Yaqui sorcerer” Don Juan Matus, was based partly on Garfinkel.5 Don Juan advised Carlos to erase his personal history; Garfinkel seems to have followed that same advice, and it’s hard to figure out quite where his ideas came from.6
It’s also hard to figure out quite where they went. Ethnomethodology imploded in the early 1990s, for reasons I only partly understand. I want to encourage its recent revival.
Rationality and meta-rationality
Rationalism, an eternalist ideology, is false, as Heidegger and Wittgenstein explained. However, formal rationality often works. Indeed, it’s the basis of modern civilization, and therefore hugely valuable and important. So how and when and why does rationality work? Meta-rationalism is the empirical investigation of that question. It has found some preliminary answers. Meta-rationality is the use of that understanding to improve the use of rationality.
Julian E. Orr’s Talking about Machines: An Ethnography of a Modern Job is a detailed ethnomethodological study of circumrationality. That is the informal, “merely reasonable” work needed to make a rational system work. (My “Parable of the Pebbles” introduces this idea.)
The book describes the work of Xerox copier repair technicians. The rational systems they make work are (1) the operation of high-tech, rationally-engineered office equipment, and (2) the formal relationships between Xerox, customer companies that run the copiers, and the technicians themselves.
The engineers designing the copiers had little knowledge of how they were used in practice. Their products worked great in the lab. In the real world, they broke down every few days or at most weeks, and a Xerox technician had to drive to the customer site to repair them. The design engineers did not take into account relevant, uncontrollable context: customers ran them too much or too little, too sporadically, loaded supplies upside down, put in the wrong kind of toner to save money, pushed the wrong button, forgot to remove staples, housed the copiers in unventilated rooms where they overheated, squirted oil in random holes in hope of fixing the machine when it broke down, …
Xerox supplied the repair technicians with manuals with detailed instructions for how to diagnose and repair failures. These manuals were written rationally, from first principles, on the basis of what engineers thought might go wrong, rather than what did go wrong in practice. Most of the time they were unusable: they did not cover common failure modes, gave instructions that made no sense or were physically impossible to carry out, suggested fixes that would work but were more complicated or expensive than the practical ones, or were outright false.
Circumrationality bridges the unavoidable gap between a tidy rational system and the nebulosity of reality. A copier malfunction report is highly nebulous. Is it actually not working, or is the customer confused? Is it not working because it is broken or because its environment is hostile? If it is broken, what is wrong with it? This is initially uncertain and may never be definable. Copiers are enormously complex, and even individual design engineers do not understand every aspect of one. Taking bits apart, cleaning them, and putting them back together may solve the problem without your ever knowing what actually caused it.
Repairing a copier is usually improvisational; the rational plan in the manual won’t work. It’s done by finger-feel and by ear and by eye, as well as by constructing a plausible causal narrative from practical experience and reflection on a pattern of symptoms.
Talking about Machines is fairly short and easy to read. Orr omitted nearly all the dense jargon and peculiar syntax most ethnomethodologists employ. If you have experience fixing machines, you will enjoy numerous moments of recognition, as technicians gradually diagnose puzzling failures and improvise solutions.
Orr’s investigation of how repair was accomplished led to a major meta-rational remodeling. Xerox eventually accepted that technicians’ experience, understanding, and improvisational fixes were critical. Its computer science laboratory PARC built a wiki-like system that let technicians exchange this knowledge globally. The system produced $15 million per year in savings for the company, as problems were diagnosed faster (decreasing labor costs), more accurately (so fewer expensive replacement parts were required), and more reliably (so the machines broke down less often, making customers happier).
Tradition says rationality consists of thinking in accord with a formal scheme. Ideally, you close your eyes, put the grubby material world aside, and enter the metaphysical realm of pure abstractions. Discovering Eternal Truth amongst the Platonic Forms by way of Transcendent Reason, you return triumphantly to mundane reality with a Solution for a Problem, and hand it off to lesser beings to implement.
That’s not how any of this works.
Mainly, formal rationality consists of writing mathematical squiggles on paper, staring at them, cursing, crossing them out, reading them over again, and writing some more squiggles. Or it consists of typing lines of code at a computer, running them, reading the debugging output on the screen, cursing, reading your code again, adding a semicolon, and running it again. Or of transferring quantities of chemical reagents from one tube to another, ticking them off on a worksheet as you go, lest you lose track of where you are in the laboratory protocol.
The actual practice of rationality is just as material, perceptual, and error-prone as copier repair is.
The question then is why this works. How does covering a page in mathematical notation make possible feats of discovery and invention far beyond what “mere reasonableness” is capable of? Metaphysical answers should not satisfy. What, actually, are we doing? How does ink on paper causally produce a new semiconductor device or cancer treatment?
Key parts of this puzzle are put in place by Catarina Dutilh Novaes in her Formal Languages in Logic.
Humans are innately terrible at multi-step reasoning about novel or distant circumstances. In fact, we’re terrible at multi-step anything, and also at anything novel. Our brains evolved for routine activity in concrete situations, in which we could immediately perceive what action to take next. Brains are excellent at assessing local context to find the relevant factors. They are also great at retrieving relevant background knowledge, derived from experience of similar situations, to make sense of the current one.
We’re mostly only capable of multi-step procedures if each step changes the perceivable situation in some way that makes it clear what comes next. (“Doing being rational: Polymerase chain reaction” discusses this, with video analysis of a biologist losing track of what she’s doing, and examples of circumrational methods for staying on track.)
We are mostly only capable of single-step inference, consisting of interpreting our situation as meaningful in terms of relevantly similar past situations.
Formal rationality is a collection of technologies for overcoming these limitations by (1) blocking misleading distractions from perceived and/or background factors that our brains want to claim are relevant, but that actually aren’t; and (2) making visible where we are in multi-step procedures.
If you are looking at a piece of paper you have covered in mathematical formulae, you are specifically not looking at the concrete problem, and can’t be overwhelmed with the details of its specificity. The terms in the formulae are inherently meaningless, preventing your brain from insisting you consider details of past situations. The page lays out the steps of the procedure in order; you can’t lose your place. The bottom formula on the page is the one you should be working on!
Formality is largely a mechanism for avoiding thinking (contrary to the rationalist tradition), because we’re so bad at it. Dutilh Novaes quotes Alfred North Whitehead:
By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems. By the aid of symbolism, we can make transitions in reasoning almost mechanically by the eye, which otherwise would call into play the higher faculties of the brain. It is a profoundly erroneous truism that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. (p. 185)
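Whitehead’s point—that a good notation lets us make transitions “almost mechanically by the eye”—can be illustrated with a toy program. This is my own sketch, not anything from Dutilh Novaes: differentiation done by purely structural rewrite rules, where each step fires on the shape of the symbols alone, with no thinking about what they mean.

```python
# A toy illustration (mine, not from the book): differentiation as
# mechanical symbol rewriting. Expressions are nested tuples; each rule
# matches on shape alone. The "meaningless" terms are exactly what lets
# the procedure run without insight, as Whitehead describes.

def d(expr, var):
    """Differentiate expr with respect to var by purely structural rules."""
    if expr == var:                       # d/dx x = 1
        return 1
    if isinstance(expr, (int, float, str)):
        return 0                          # constants and other symbols
    op, a, b = expr
    if op == "+":                         # sum rule: (f+g)' = f' + g'
        return ("+", d(a, var), d(b, var))
    if op == "*":                         # product rule: (fg)' = f'g + fg'
        return ("+", ("*", d(a, var), b), ("*", a, d(b, var)))
    raise ValueError(f"no rule for {op}")

# d/dx (x*x + 3): every transition is shape-matching, not reasoning
print(d(("+", ("*", "x", "x"), 3), "x"))
```

Each recursive call writes out the next formula in full, so—like the page of squiggles—the notation itself keeps track of where you are in the multi-step procedure.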
Dutilh Novaes works out the mechanisms and consequences of these insights in detail, in the domain of mathematical logic. That’s one I personally find exceptionally interesting, but I believe the argument applies to rational practice quite generally. If logic isn’t your thing, you might want to read the introduction and chapters 5-7, after skimming or skipping chapters 2-4.
Bowker and Star
Rational systems must view the world through some formal ontology. Usually these include a classification system, which demands that objects belong to categories. Due to nebulosity, no categorization can be perfectly consistent, complete, or accurate. There are always borderline cases, which could reasonably be assigned to either of two categories, and weird outliers that don’t fit in any of them.
In such cases, practical use of the rational system requires “merely reasonable,” non-rational, informal, circumrational work to figure out how best to classify the anomalies. Or, when that breaks down, meta-rational work to remodel the categories or the circumrational support practices.
Geoffrey C. Bowker and Susan Leigh Star’s Sorting Things Out: Classification and Its Consequences explores the ways classification systems are constructed and used in practice. They discuss particularly the “infrastructure” a rational system requires to function. Infrastructure includes both circumrational human practices and artificial technologies such as paper forms, software programs, mechanical sorting devices, and process manuals. They also discuss in detail the meta-rational work of constructing and maintaining rational systems.
The book discusses several classification systems used by governments, with momentous, sometimes horrifying, and sometimes hilarious consequences. These include medical diagnostic categories and racial categories. Despite enormous efforts at rationality, classifying diseases is always sketchy. Because diagnoses are intertwined with criminal and welfare law, boundary cases and anomalies can result in appalling injustices. South African apartheid was monstrous; its application—the practices of racial reclassification of individuals—becomes absurd when examined in detail.
Here we see how meta-rationality can be a liberatory practice, by freeing us from classifications that were originally designed according to some political agenda, and which have come to seem rational, natural, and inevitable.
Things and people are always multiple, although that multiplicity may be obfuscated by standardized inscriptions. In this sense, with the right angle of vision, things can be seen as heralds of other worlds and of a wildness that can offset our naturalizations in liberatory ways. (p. 307)
No one has read Thomas “Paradigm Shift” Kuhn’s The Structure of Scientific Revolutions because everyone thinks they know what it says. It doesn’t say that.
Kuhn had two big ideas, as I discussed in “A meta-scientific revolution.” The first was that if you want to know how science works, you have to look and see what scientists do. Rationalist theories explain how science ought to work, according to armchair theorizing from first principles. But it doesn’t work like that at all. And once you understand how it does work, from empirical investigation, you can see that it couldn’t and shouldn’t work the way rationalism prescribes, either.
Kuhn’s second big idea was that science sometimes requires ontological remodeling, and the type of reasoning scientists use for that is quite different from the type of reasoning they use when their ontology is adequate. During crisis periods, when an ontology breaks down, scientists evaluate, select, combine, modify, discover, and create alternatives. “Revolutionary science” requires meta-rational thinking, whereas rationality is adequate for “normal science.”
Because he said that scientific progress depends on non-rational processes, Kuhn was widely misunderstood as advocating irrationalism—the only well-known alternative. In a Postscript, added in the second edition, he explains clearly the difference between his view and anti-rational relativism. If you read the book, don’t skip the Postscript! In fact, it might be the best place to start.
Donald Schön’s The Reflective Practitioner: How professionals think in action is the closest thing we have to a manual of meta-rationality.
Schön observed in detail how experts in five technical fields addressed nebulous problems. He found that technical rationality—“the formulas learned in graduate school”—doesn’t cut it. Those methods only apply when a problem has already been well-characterized—that is, translated into a formal vocabulary. That is not what a civil engineer encounters in the field: what you find there is water and rocks and dirt, and it’s a mess. It’s not what a project manager encounters in a tech company: what you find there is a bunch of people squabbling about a slipped schedule, and it’s a mess. Rationality solves formal problems, but that’s not what expert professionals do. They transform nebulous messes.
Meta-rationality requires understanding the relationship between a particular clear-cut rational system and a particular messy, nebulous reality. The “solution” to a slipped schedule undoubtedly involves fiddling with a Gantt chart, or some similar project-management formalism. However, the mess can’t be “solved” entirely, or mainly, in this formal domain. The manager needs to understand how the Gantt chart relates to what people are actually doing.
There can be no fixed method for this; it’s inherently improvisational. That does not imply mystical intuitive woo. It means a lot of well-thought-out practical activity, immersing yourself in the mess, and reflecting on how specific rational methods could work in this concrete situation.
Mastery of professional practice is not the ability to solve cut-and-dried problems. That’s for junior staff, straight out of school. Professional mastery is the ability to re-characterize a nebulous real-world situation as a collection of soluble technical problems.
Robert Kegan’s model of adult psychological development profoundly shapes my understanding of meta-rationality—as well as ethics, relationships, and society. I wrote about his work overall here.
His two major books are The Evolving Self and In Over Our Heads: The Mental Demands of Modern Life.
Kegan’s account of meta-rationality is frustratingly abstract, but his explanation of the ways it restructures the self gives insights not available elsewhere.
I’ll discuss Kegan’s work again below, in the sections on psychology and ethics.
Computation, AI, and cognitive “science”
Hubert Dreyfus was both the foremost English-language Heidegger scholar and the most incisive critic of cognitive science, especially artificial intelligence.
What Computers Still Can’t Do was the most recent in his series of explanations of how AI went wrong. His arguments were dismissed as idiotic philosophical misunderstandings by the field for decades, but were mainly proven correct by time. It was AI that was an idiotic philosophical misunderstanding…
Being-in-the-World, which I mentioned earlier as a guide to Heidegger’s Being and Time, also explained in detail how Heidegger’s understanding of everyday activity refutes cognitive “science.” (I put the word “science” in quotes to indicate that the field’s overall program was not scientific, but ideological, mistaken, and harmful. Lots of good and genuine science was done under the rubric “cognitive science” despite that.)
Dreyfus’ All Things Shining: Reading the Western Classics to Find Meaning in a Secular Age, written with Sean Dorrance Kelly, has nothing to do with AI, but it’s much easier to read than his other books. It’s an inquiry into the problem of meaningness: how to avoid both eternalism and nihilism, by recognizing the inseparability of nebulosity and pattern. I wrote a long series of tweets about it, with excerpts from the book, starting here.
Lucy Suchman’s Plans and Situated Actions is a remarkable cross-disciplinary synthesis. Originally trained in anthropology, Suchman also studied ethnomethodology, was a student of Hubert Dreyfus, and had thoroughly assimilated Heidegger’s account of everydayness and Dreyfus’ critique of cognitivism.
But she worked at Xerox PARC. In the 1970s, essentially all the elements of modern computer systems were either invented at PARC, or given their first practical implementation there. (See Fumbling the Future: How Xerox Invented, then Ignored, the First Personal Computer for a history.) PARC’s visionary director John Seely Brown then built an AI and cognitive science team that surpassed all but the top few university programs of the time.
So Suchman also learned to think and talk like a cognitive scientist, which made her uniquely positioned to bridge the conceptual gap between rationalist and situated accounts of practical activity. Her book was the biggest direct influence on my PhD thesis work, and much of my understanding of everydayness I owe to her.
In the early 1980s, Xerox bet its future on a physically huge, incredibly expensive, and vastly complicated new office copier. Unfortunately, no one could figure out how to use it.
AI to the rescue! Some of the foremost experts in AI action theory developed an intelligent user interface / tutoring system that told you exactly what you needed to do.
Suchman filmed famous cognitive scientists trying to use it… and the bafflement and swearing that ensued. If you remember Microsoft’s rage-inducing Clippy The Intelligent Office Assistant, you can imagine the scene.
By careful analysis of what went wrong in their interactions with the system, she showed how breakdowns were consequences of mistaken rationalist assumptions, and how they could be understood in terms of ethnomethodological conversation analysis.
Suchman’s relatively STEM-friendly language made philosophically sophisticated theories of action available to computer professionals. That changed the course of AI research. Plans and Situated Actions was even more influential in the fields of human-computer interaction and user experience design.
Agre and Chapman
As I recounted in “I seem to be a fiction,” Phil Agre and I eventually got Dreyfus’ critique of AI, with Lucy Suchman’s help. In the late 1980s, together we set about reforming the field to incorporate their insights.
Agre’s Computation and Human Experience is the overall best account of his work, and of our joint work. It’s a unique masterpiece. Like Suchman’s book, it’s a synthesis of Continental philosophy, empirical ethnomethodology, and deep insights into what can and cannot be computed by brains—but in Agre’s book, there’s code too.
My book on our work was Vision, Instruction, and Action.
A brief theoretical overview was “Abstract Reasoning as Emergent from Concrete Activity,” available on this site.
Winograd and Flores
In the late 1960s, Terry Winograd designed SHRDLU, perhaps the most impressive AI system of all time. In the mid-’80s, he recognized that Dreyfus’ critique was mainly correct.
The first half of his Understanding Computers and Cognition, written with Fernando Flores, is a short, clear meta-rational account of human activity. It is written for the STEM-educated, and may well be the best overall introduction if that’s you. For some readers, it may be a bit too short, with not quite enough detail to enable you to grasp meta-rationality.
(The second half of the book is based on speech act theory, a rationalist account of language that seems to clash with the meta-rationalism of the first half.)
I took the title of my book In the Cells of the Eggplant from a dialog in Understanding Computers and Cognition:
A. Is there any water in the refrigerator?
B. Yes.
A. Where? I don’t see it.
B. In the cells of the eggplant.
Was “there is water in the refrigerator” true?
That question can only be answered meta-rationally: “True in what sense? Relative to what purpose?”
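The point can be made concrete with a small sketch (my own, not from the book; all the names and numbers are made up for illustration): the same refrigerator contents yield different answers to “is there any water?” depending on the purpose behind the question.

```python
# A hypothetical sketch: "is there water?" has no purpose-free answer.
# The contents below are invented for illustration.

FRIDGE = {
    "eggplant": {"water_ml_in_cells": 400},  # plenty of H2O, none drinkable
    "bottles_of_water": 0,
}

def any_water(fridge, purpose):
    """Answer 'is there any water?' relative to a stated purpose."""
    if purpose == "something to drink":
        return fridge["bottles_of_water"] > 0
    if purpose == "any H2O molecules at all":
        return fridge["bottles_of_water"] > 0 or any(
            item.get("water_ml_in_cells", 0) > 0
            for item in fridge.values()
            if isinstance(item, dict)
        )
    raise ValueError("no rule for this purpose")

print(any_water(FRIDGE, "something to drink"))        # False
print(any_water(FRIDGE, "any H2O molecules at all"))  # True
```

The function cannot be written without the `purpose` parameter—which is the meta-rational observation: the truth of “there is water in the refrigerator” is settled only relative to what the question is for.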
Brian Cantwell Smith is the foremost philosopher of computation. Actually, as far as I know, he is the only philosopher of computation. “Philosophy of Computation” is a field that doesn’t exist.7
“Right—because Church and Turing said everything that can be said about that!” Nope.
What is a computer? A computer is a machine that means things. If you fight your way through six CRUD screens on a hotel reservation site, you reach a web page that means you have reserved a room on Woolloomooloo Wharf next weekend. If it doesn’t mean that, the page is meaningless and you will be greatly discommoded when you arrive and find the hotel is sold out.
Computers are meaning machines—and you will notice that our existing Theory of Computation, which derives from Church’s and Turing’s work, has nothing at all to say about meaning.
Cognitive science assumed that brains are computers, more-or-less, and that brains and computers mean things the same way. How? Philosophers of mind assumed that AI guys knew how computers mean things—but we didn’t. We assumed that the philosophers of mind knew—but they didn’t. Once Smith (originally an AI guy) realized this disconnect, he set out to figure out how computers do mean things. Which turns out not to be easy; but he’s still making progress.
On the Origin of Objects is his first major report. One observation central to Meaningness, and to pretty much every work in this reading list, is that objects are not objectively separable. Yet meanings are about objects—Woolloomooloo Wharf, for instance. The objectness of the wharf is not inherent to it, but arises during your interactions with it. On the Origin of Objects includes an account of how. My account will be somewhat different—but Smith is one of the few people to ask the question clearly, and to offer a serious and detailed proposal.
Douglas Hofstadter’s Gödel, Escher, Bach is a uniquely playful exploration of the philosophy of artificial intelligence. Much of the book is presented in the form of comic dialogs between characters taken from Lewis Carroll. (Plus Terry Winograd, who appears as “Dr. Tony Earwig.”) But it also asks serious and deep questions about the nature of intelligence and computation, and gives insightful answers unlike those proposed by anyone else.
I don’t think Hofstadter’s overall approach was at all right, but all other known AI approaches also look like dead ends to me. If I were forced to choose one to work on, his might be the least unpromising.
I discussed some of Hofstadter’s best ideas in “A first lesson in meta-rationality.”
Roy Baumeister’s Meanings of Life is the project most similar to Meaningness in subject matter. It’s an exploration of the ways people think about the same set of topics I cover—purpose, value, self, ethics, sacredness, and so on.
I was annoyed all the way through it, because he says many things I was going to say, which I thought I had thought of first, and which I still haven’t had time to write up.
Mostly, he does not attempt to resolve the problems he raises. Meaningness does. Or will. Any decade now.
Kramer and Alstad
Joel Kramer and Diana Alstad’s The Guru Papers is mis-named. It discusses gurus only in passing.
Their book is a sprawling but brilliant discussion of the major topics of Meaningness—unity and diversity, self and other, sacred and profane, life-purpose, ethics, ultimate value, and so on. It is a memetic nosology—a classification of contagious harmful ideas, attitudes, and practices.
I wrote a brief introduction, plus extensive quotes, here.
Robert Kegan’s The Evolving Self is the most sophisticated explanation I’ve found of the ways we relate self and other, and the ways we relate to our selves.
The book strikes many readers as a major revelation. It’s not only intellectually fascinating, making sense of so much of our lives—it’s also useful in practice as a guide to radical personal transformation.
Other readers find nothing meaningful in it. Tentatively, I suspect that’s not because they miss the point, but because Kegan’s framework simply doesn’t apply to everyone.
I wrote a detailed summary here.
George Ainslie’s Breakdown of Will is one of the best books I know on what it means to be a self.
Selves are inherently nebulous. They begin as incoherent masses of conflicting impulses. We are functional to the extent that we can get those to agree to head in the same general direction most of the time, and not constantly sabotage each other. Kegan’s book is one account of how to do that. Ainslie’s is another. Their perspectives are extremely different, but—I think—compatible.
Robert Bly’s A Little Book on the Human Shadow is another outstanding explanation of what it means to have a self. Again, the question is how to resolve internal conflicts. It’s written from an extremely different point of view (Jungian folklore interpretation) than Ainslie’s (mathematical game theory) and Kegan’s (Piagetian developmental psychology).
The Little Book was the basis for my nine-part series on “Eating the Shadow,” which begins here. It was also a major influence on my series on dark culture. Eventually I’ll present the same material quite differently in Meaningness.
Geoffrey Miller’s Spent: Sex, Evolution, and Consumer Behavior explains how vast swathes of everyday activity are unconsciously devoted to advertising our personal qualities to others—rather than enjoying ourselves or making ourselves useful. It’s a fast, fun read, and you will discover things about yourself that are simultaneously horrifying and humorous.
Spent inspired my piece “‘Ethics’ is advertising.”
Mihaly Csikszentmihalyi’s Flow: The Psychology of Optimal Experience is a bit dated, and a bit pop, but contains useful insights into enjoyment.
I discussed it in relationship to Vajrayana Buddhism here.
Nearly everything that has been written about ethics, whether from a religious or secular rationalist point of view, is eternalistic. That is, it assumes that there must be some correct system of ethics that defines what is morally right. That assumption is mistaken and harmful: there obviously is no such system currently, and there are good reasons to believe there never can be one.
A very few people claim to be ethical nihilists, but they are trolls, psychopaths, or merely confused.
Only a handful of thinkers have tried to work out non-eternalist, non-nihilist accounts of ethics. The mostly-unwritten ethics chapter of Meaningness will develop this possibility.
So far, my most extensive writing on ethics has been a debunking of the modern Buddhist version. However, that series of posts also includes positive proposals, summarizing the Meaningness approach to ethics.
Nietzsche wrote extensively on ethics. In the popular imagination, he was a nihilist and therefore wicked, but in fact he rejected nihilism. His ethical thinking pointed at a complete stance that avoids both ethical eternalism and ethical nihilism.
Among his ethical works, I recommend Beyond Good and Evil and The Genealogy of Morals. Confusingly, there are now many English translations of each. I read the ones by Walter Kaufmann, the only ones available at the time. Both are included in the collection Basic Writings of Nietzsche, a bargain at $3.99 on Kindle. The more recent translations may be better; I don’t know.
Kegan, again
Robert Kegan’s work began as an extension of Lawrence Kohlberg’s theory of moral development. I think Kegan’s stage 5 is the most sophisticated ethical framework available. It requires meta-rationality: relating different ethical systems to each other, and reflecting on their relationship with reality.
Among his several books, only The Evolving Self discusses ethics.
In Finding Our Sea-Legs: Ethics, Experience and the Ocean of Stories, Will Buckingham writes that “we have always been at sea” when it comes to ethics. For thousands of years, philosophers and prophets have proclaimed the possibility of finding land: solid ground. But no one has ever reached any.
It is time, he says, to turn away from that eternalistic fantasy of ethical certainty. Instead, we can make genuine progress in our actual, groundless situation. Metaphorically, we can learn to be better navigators. We can study the winds and the waves and the stars; and can learn to steer around shoals, thunderstorms and whirlpools, guiding our ships into calmer waters where we can gaze at the sea and the sky and watch fish play.
When we recognize that ethics can only ever be a nebulous muddle—but is no less important for that—we can work together to resolve difficulties “with all the kindness, patience, and care that we can muster.” Buckingham concludes that “there is no way out” of the ocean, yet ethics offers “not an intolerable burden” but “the possibility of joy.”
Finding Our Sea-Legs is a fun, easy, sometimes-touching read. I wrote an extended review here.
Society, culture, and politics
Adam Seligman, working with other authors, has made major contributions to the understanding of nebulosity, porous boundaries, and meta-rationality, specifically in the political realm.
The two books of his I know are Rethinking Pluralism: Ritual, Experience, and Ambiguity and Ritual and Its Consequences. I reviewed Ritual here.
Michel Foucault was the most important philosopher in the lineage of Nietzsche and Heidegger. Unfortunately, his deliberate obscurity has allowed tendentious idiots to misuse his subtle ideas in support of simplistic political agendas.
The best introduction to his work may be Michel Foucault: Beyond Structuralism and Hermeneutics, by Hubert Dreyfus and Paul Rabinow.
Unfortunately, Foucault’s premature death (of AIDS) prevented what might have become a complete meta-rational presentation. His last work—the multi-volume, unfinished History of Sexuality—is the best. It’s only incidentally about sexuality; it’s about self and society, knowledge and power, language and experience.
Jean-François Lyotard’s The Postmodern Condition: A Report on Knowledge is one of the two root texts for postmodernism. Knowing this, you might not suspect that it was commissioned by the government of Quebec as a report on the influence of information technology on the exact sciences. Written in 1979, it’s astonishingly prophetic about the then-future impact of the internet—but that is not the reason to read it. You might also not suspect that, unlike the voluminous obscurantist blather of later postmodernists, it’s only 70 pages and reasonably clearly written.
Lyotard’s main topic is the breakdown of the systematic worldview in the face of nebulosity, and the persistence of multiple, functional, partial systems despite that. He aims for “a politics that would respect both the desire for justice [pattern] and the desire for the unknown [nebulosity].” This remains unfulfilled, and obstructed not least by the subsequent development of postmodernism—but I think still a worthy goal.
Ross Haenfler’s Subcultures: The Basics is a short, easy-to-read, fun, and insightful overview of one of the most important cultural forces of the past few decades. Most discussions of this topic are either pomo-academic and abstractly theoretical, or else pop surveys describing the contents of specific subcultures (“here’s what goths wear”) without analysis of implications. Haenfler is a sociologist, and his book is about the structure and functions of subcultures, but he avoids jargon, irrelevant theory, and allusions to obscurantist French dudes. It helps that he’s an enthusiastic participant in some of the subcultures he describes, not an ivory-tower observer.
Haenfler makes many of the points I intend to cover in my chapter on subcultures. If you were intrigued by the hints in my introduction, but frustrated that I haven’t yet delivered on them, you’ll probably enjoy his book.
Eric Hobsbawm’s The Invention of Tradition, about fake history, is both insightful and very funny, in a dry and British way. I discussed it here.
- 1. Because it’s highly condensed, it may be incomprehensible without some knowledge of the tradition. One key to understanding is that “Königsbergian” is a reference to Kant specifically. The supposed “true world” of Nietzsche’s stage 3 is Kant’s ding an sich, “the thing in itself.” That is the inaccessible “noumenon,” or true reality, as opposed to the defective “phenomenon” that appears to the senses. This is a catastrophically bad idea, which leads straight to nihilism.
- 2. There are several books that claim to explain the Beacon. I haven’t read any of them all the way through, but I’ve skimmed the ones I could find, and I would not recommend any of them. They miss the point, as far as I could tell.
- 3. I would very much like to know whether Western thought was a significant influence on Mipham. There was much more Western cultural influence in Tibet at the time than is usually recognized—because, a bit later, both Tibetan conservatives and Western Romantics propagated the myth that Tibet was a special pure realm untouched by modernity. I’ve had to work hard to stop myself from digging into Mipham’s personal intellectual history. It’s not realistic that there was any direct influence between Mipham and Nietzsche in either direction, but it’s not completely implausible that they developed independent, somewhat-similar responses to the same distinctively-modern conceptual problems.
- 4. Since originally writing this, I’ve read more about his intellectual history, for example in Anne Rawls’ introduction to Ethnomethodology’s Program: Working Out Durkheim’s Aphorism. She makes a good case that Garfinkel’s insights were entirely independent of Wittgenstein’s; and that, while he studied the phenomenological school that included Heidegger, other members were bigger influences on him.
- 5. And George Lucas based the character Yoda partly on Don Juan Matus. Since learning this, I cannot help reading Garfinkel in Yoda’s voice.
- 6. Ixtlan, maybe.
- 7. Since I wrote this, I’ve learned of William J. Rapaport’s Philosophy of Computer Science, which is not quite the same thing, but adjacent. It covers, as it says, the philosophy of computer science as a field, more than the philosophy of computation as such. It also covers the 1980s-era arguments over computationalism in the philosophy of mind. I haven’t read it, but it looks like a useful summary resource for the mainstream views on these topics. My thanks to Jake Orthwein for drawing my attention to it.