Comments on “Your questions about meta-rationality?”


I'd like to see the stages of

Nick's picture

I'd like to see the stages of psychosocial development covered. And maybe some rationale for why meta-rationality is something different from common sense, considering that choosing the most appropriate rational system for the problem at hand is something regular people do all the time, if not always well.

And of course ideally some coverage of tantra and how it's meant to provide a meta-rational tool set, but I guess that's entirely out of the scope of this book?


J's picture

Too many words. I've been following for like a year now, because you identified the phenomenon of the nihilistic trap people get stuck in post-rationality. Great job naming a real problem! But I've actively and passively kept an eye on the site since then and never felt like you had a clear answer about how to get out of it. I think whatever your answer is, it won't work unless it's something I can intuitively grasp, and that means it probably needs to be expressible in a thousand words. Perhaps also some contrived stories of people successfully making that transition (that is, explicit examples of how it could play out). But I don't think an expansive philosophical tome surveying historical approaches is something I'll be able to get through.

Uncertainty vs. fuzziness vs. illegibility vs. nebulosity vs...

Josh Brule's picture

"Nebulosity" is, um, nebulous.

Consider a Strawman!Scientist; they'll come up with a theory, defend it against objections until the weight of counterexamples finally piles up high enough that they can't write it off as "weird errors", then they jump straight to a new theory which they defend against all objections until...

The idea of probability theory / uncertainty is nebulous to them. They might even force their way through a little of the math, but they don't grok it.

Or, consider someone who's studied probability theory, but never seen fuzzy set theory. They put a probability distribution over everything - they're a "level" more sophisticated than a hardcore dualist, but they still believe that all categories are distinct and anything to the contrary is merely a statement of your own ignorance. The idea that a statement itself could have a degree of truth is nebulous to them.
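Josh's distinction can be made concrete with a toy sketch (the numbers and the `tall_membership` ramp are made up for illustration, not taken from the comment): a probability expresses uncertainty about a crisp true/false proposition, while a fuzzy membership degree says the proposition itself is only partly true.

```python
# Probability: uncertainty about a crisp (true/false) proposition.
# "Is this fruit an apple?" -- it either is or isn't; we just don't know which.
p_apple = 0.8  # subjective probability that the fruit is an apple

# Fuzzy membership: the proposition itself holds to a degree.
# "Is this person tall?" -- tallness is a matter of degree, not of ignorance.
def tall_membership(height_cm: float) -> float:
    """Degree to which a height counts as 'tall' (a made-up linear ramp, 160-190cm)."""
    return min(1.0, max(0.0, (height_cm - 160) / 30))

# More evidence can push a probability toward 0 or 1, but no amount of
# evidence makes a 175cm person crisply 'tall' -- the category is graded:
assert tall_membership(150) == 0.0
assert tall_membership(175) == 0.5
assert tall_membership(200) == 1.0
```

The point of the sketch is just that the two numbers answer different questions: updating `p_apple` reduces ignorance, whereas `tall_membership(175)` would stay at 0.5 no matter how precisely the height were measured.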

I suppose my point is that "nebulosity" seems to cover a lot of different things and would benefit from more discussion/description. Of course, the nature of nebulosity makes that a difficult project.

I guess I'm concerned that rationalists attempting to make the jump to meta-rationality will keep trying to "formalize away" anything that is nebulous. It's not exactly what I have in mind, but I'm reminded of "What the Tortoise Said to Achilles". Sometimes, I want to tell people, "Don't be the tortoise!"

Instrumental meta-rationality

Marshall's picture

I love learning about systems and working inside systems, so it's very easy for me to point at ways that stage 4 rationality is useful to me. Even when I can see that it makes sense to apply multiple systems to something (e.g. a common case is when there is a kind of reductionist view of something and a high-level abstract view) I usually stick to thinking in terms of the union of all such systems that I understand. It would be interesting to me to read what kinds of situations trigger you to think in a meta-rational way.


Marshall's picture

I frequently feel confused by how you use the word "meaning." I understand the idea of causality, and I understand the idea of taking actions pursuant to a goal. I understand that people have meta-preferences about what kinds of goals are important or worthwhile. I also understand the idea of communicating an understanding to someone (or yourself) by taking an action that "means" something to them; linguistic "meaning". But some of your writing makes it sound a lot more confusing than that. For example, regarding the different varieties of nihilism, you write about "higher" and "lower" meanings, and you talk about rejecting the idea that anything has a meaning, and so on. I don't think I understand these at all. And I don't mean merely that I lack a formal understanding while nebulously knowing what you are getting at -- I really have no idea whatsoever.

It might be a lot to ask for the book to clear this up, but it's certainly confusing to me!

Reasonableness and random thoughts

Dan's picture

I'm most interested in the explanation of reasonableness. I'd also be interested in a "manual of meta-rationality", but I guess that has to wait. +1 for Marshall's point about showing meta-rationality in a practical context.

I think "function and structure" should be excised from the finished book. It seems useful for (1) having conversations like this while the book is in progress, (2) gesturing at further reading in case you don't finish writing the book, or (3) introducing an extremely large work of thousands of pages. But since Eggplant is shortish, I doubt it will add any value to announce what you're going to say instead of just saying it. The one thing to save would be the recommendation about who should skip part 1.

It applies to me: I'm Gen Y, and my friends and I are now agreed that we adopted a "meta-rational" approach to truth (though not necessarily to other things) at some point in high school. So the entire first half of the book is superfluous for me personally, and maybe for a lot of younger readers. Also it sounds like it's going to be all stuff you've already covered in Meaningness. (Obviously it needs to be there for Eggplant to work as a stand-alone thing; I'm just voting to make it relatively low priority.)

"The aim is not to refute these theories in detail—because it is uncontroversial that each did fail." Judging partly from Meaningness comments over the years, I'd guess that many of these things are uncontroversial for students of the history of philosophy, but not for your target audience. It might be a good idea to recommend books that spell out any marvellous refutations which this ebook is too small to contain. :)

"It uses mainly STEM examples, and occasionally gets quite technical. However, most of it can be understood without any particular STEM knowledge." My friends and I have a running joke/pet peeve that math textbooks always say things like this and the second sentence is always a lie—I suspect you shouldn't make that claim unless you've field-tested it on real live non-STEM people.

Overall, the book looks really promising and I'm excited to read it. Cheers!

Thank you!

Thank you all very much—this kind of feedback is really helpful!

The book will be able to address some, but not all, of the issues you've raised. (As you suspected!)

To structure the book in

matt's picture

To structure the book in terms of its uses could make sense, I think, since it won't distort anything about the content itself. Although meta-rationality can't be broken into pieces, I suppose the book is trying to capture its essence and weld it into a tool for humans. That tool can be used in lots of different areas, so those areas can acceptably serve as the structure: they are practical, not epistemological, and are already implicitly part of the work itself -- if you catch my drift.

Also I know you've mentioned that it's intended for STEM people, but the scope of it is realistically going to be larger and more diverse than just that, so it might be useful to clearly state how it could be of use in humanities subjects, general public discourse, in people's daily lives etc.

Heidegger, AI, and rationality

I know you worked on "Heideggerian AI" decades ago, and that's now basically the dominant mode of AI in the form of machine learning.

Sorry if I'm somewhat abusing the term, but it's so evocative.

We need even more embodiment, as Dreyfus argues, but my contrarianism—combined with a kind of ressentiment against Google's computational power—leads me to want to reappreciate the cognitivist take on intelligence and experience.

Heideggerianism seems anti-rational, at least anti-rationalistic. It places primacy in deep embodied adaptation, rather than conceptual systems. There's real truth to this, but it also seems like a source of bias.

I don't want to write a whole essay in a comment, and I'm not sure I have enough clarity to do so anyway, but I wonder if you kind of see where the argument could lead, and whether the metarationality paradigm could help negotiate between the Heideggerian stance and the cognitivist stance...

I understand it's not the

Kyle's picture

I understand it's not the goal of the book, but I'm mostly interested in a "manual of meta-rationality", or at least writing which helps one to gain competence in meta-rationality. Perhaps by following a narrow path which leads from rationality to meta-rationality, I could come away with some useful, practical insights towards that goal or at least a better way to generate those insights.

Use of Wep- I mean Instrumental Metarationality

Holbach's picture

The thing I'm curious about, is this: am I to understand metarationality as a distinct skillset? Or is it just a generalized fuzzy-wuzzy intuition looking for serendipitous coincidences/convenience?

In the subject I brought up Use of Weapons (recommended, blast(hehe) of a novel) because, well, it is concerned with this kind of cross-cutting cleverness/capability.
So is Karate Kid.
So is Django Unchained.
Matter of fact, it seems to my memory that this is a common element in Hero's Journey type stories, or really, any story concerned with showing a character's growth when it isn't (only) about their morality.

From personal experience, however, I do not believe that actively trying to apply such cleverness is a valid approach to anything, barring certain specific types of structured tests (name eludes me). Mastery, in my view, is only possible to attain in very, very specific contexts; a skill is just a skill, nothing more. If you have more of them, they're bound to overlap more often, but is that metarationality?
In fact, is my relentlessly instrumental approach at all valid here? Or is metarationality something, well, less literal and concrete than that? I think I might be missing some key part here. Or missing the key for the part. Trying to grab hold of a cloud, maybe.

A distinct skillset?

am I to understand metarationality as a distinct skillset?

Yes, I think so! I hope the book will make it clear what that is.

Jam tomorrow, but never jam today

Richard Kennaway's picture

My response to all of your writing on these subjects is similar to that described in the "tl;dr" comment, extended over several years. There is an endless process of deferral, of jam promised for a tomorrow that never arrives.

I have yet to see anything to persuade me against Scott Alexander's criticism, that what you are calling "rationality" is what is known on LessWrong as "straw Vulcanism", and what you are calling "meta-rationality" is no more -- in fact, a lot less -- than what LessWrong and its successor LesserWrong are actually about and contain a large amount of material on. And now you promise "a gradual introduction", which will "lead gradually from rationality into meta-rationality", which is "not going to look in detail at the main concern of meta-rationality" and "also not address the main concerns of rationality", instead limiting itself to "the questions “what are truth and belief anyway?”"

It is as if the imaginary reader in your head is always someone stuck in "rationalism", and however many words you write, you are still addressing that unchanging imaginary figure.

So, what I would like to see is the jam. Not "a few waypoints sketching a narrow path through the jungle towards a beachhead", but the promised second part, "the meta-rational alternative: a different account of how rationality works and how best to use it." Because if not now, when?

Empty words are not enough

anonymous's picture

Addressed to the commenters of 'tl;dr' and 'Jam tomorrow…', from my own personal experience rather than David's: I don't think you've read the rest of David's work right. Fixation and denial, nebulosity, continuum gambit, eternalism and nihilism, once you understand why these things fail, and understand where David's criticisms of these things are coming from, the method falls into your hands fully formed. I'm not convinced it's possible to explain it in mere words; meta-rationality is inherently very practical. That's what's meant by the article about ontological remodelling. The answer is there, but it's catch-22, you need the answer to understand the answer, and that's tricky to get.

As for less{,er}wrong, I'll believe it when I see it. All I've seen from them so far has been posturing, or useless systematic stuff that won't work outside of its limited domain.

Re: Jam Tomorrow and Re: "useless systematic stuff"

Kyle's picture

Re: Richard

I have yet to see anything to persuade me against Scott Alexander's criticism, that what you are calling "rationality" is what is known on LessWrong as "straw Vulcanism", and what you are calling "meta-rationality" is no more -- in fact, a lot less -- than what LessWrong and its successor LesserWrong are actually about and contain a large amount of material on.

I brought something like this up recently. David has stated before that his definition of "rationality" is NOT the same thing as what LW calls "rationality", and that he isn't really specifically targeting LWers with his writing and aims for a wider audience. I don't think he's ever claimed that meta-rationality is supposed to somehow compete with the writing on LW. The term "rationality" has a long and well-defined history outside of LW.

Re: anonymous
I don't know if the phrase "useless systematic stuff" makes much sense from the perspective of the meta-systematic mode. Someone in the meta-systematic mode uses systems, after all, and LW certainly has systems by the bucketful.

"useless systematic stuff"

anonymous's picture

Perhaps I could've phrased that better. I meant something more like "misapplication of systematic methods to domains in which they do not function"; see David's articles on Bayes for further reading on that topic.

Accusations of metarationality appear non-transitive

Dan's picture

David Chapman says Scott Alexander is metarational. Scott Alexander keeps saying he learned his alleged metarationality from Eliezer Yudkowsky's Sequences. David Chapman keeps saying the Sequences are not, in fact, metarational.

This puzzles me. Shouldn't the difference between meta and non-meta be pretty dramatic and agree-upon-able?

Knightian uncertainty: still rational

Sytse Wielinga's picture

Meta-rationality comes with the realization that if you perceive something without a proper method for looking (so that you get an ontological view appropriate to what you're looking for), no amount of rational thinking will get you anywhere (but if you think hard enough, you may fool yourself into believing that you got somewhere).

Luckily, a "proper method" for looking is kind of built in to humans, which is why "common sense" works quite well for people with skin in the game, who experiment with things for their own purposes.

Knightian uncertainty is an example of what partial breakdown of rational systems looks like to a rationalist. To a meta-rationalist, it's a sign that you're not looking in the right place (if what you're looking for is creating a functional rational system); alternatively, it's a sign that you're looking in exactly the right place if you want to be somewhere exciting (for living organisms, uncertainty leads to growth, partly because it leads to the death of dysfunctional parts of you).

Some examples of cryptic meta-rationalist advice, to illustrate the above:

If your thoughts don't make sense, look again. If your thoughts do make sense, look again.

First be, then look, then act, then think.

David: does any of this correspond to the idea you have about meta-rationality, or is that something else? Does any of this make sense at all, in the context of meta-rationality?

Do "meta-rational thoughts" exist?

Sytse Wielinga's picture

It seems to me that the word meta-rationality describes something that has always been a fundamental part of human nature.

I wonder if "meta-rationality" might even have preceded rationality itself?

The existence and necessity of a logic-free process of revising your view of the world doesn't depend on rational thought or logic, but only on perception and concept.

On the other hand, maybe you could say that until recently (the last one or two hundred years or so, maybe a bit longer), the only meta-rational process available worked over fairly long periods of time, with dreaming (during sleep) playing a central role, and that it had little capacity to deal with abstraction; but that now, some people have developed the ability to look at the process of thinking rational thoughts with sufficient clarity and complexity that they can use a rational thought process against some problem and, in the moment, work to modify and fine-tune it?

This would provide an answer to Dan's question, "Shouldn't the difference between meta and non-meta be pretty dramatic and agree-upon-able?": it is pretty dramatic, but it's not an aspect of linguistic thoughts and pieces of writing (those are always either rational or pre-rational), but an aspect of how a person uses his brain to arrive at a certain rational thought process.

Its agree-upon-ableness is a big problem, because when somebody derives a wrong conclusion, it may be very difficult or even nearly impossible to find out what caused him to derive it: did he fail to use proper meta-rational methods, or fail to use them at all? Or is it the case that he used some of the best meta-rational thinking that anybody could do, but fell prey to cognitive dissonance, or a character flaw such as stubbornness, or a lack of knowledge? Or is his view perfectly appropriate, but inadequately explained?

And if what he says does make sense, did he just learn and correctly use somebody else's rational system by accident, or by being embedded in a social system where this type of thinking still works very well, or did he use meta-rational thinking in order to derive his conclusions?

My hypothesis about the subject question: "meta-rational thoughts" do not exist, but "meta-rational thinking" does.

Multiple replies

Richard Kennaway — My understanding is that "straw Vulcanism" is the Romantic criticism: "But poetry! But Love! But Awe! You can't explain that with your 'rationality' can you?" Whatever I'm doing, it's not that.

the meta-rational alternative: a different account of how rationality works and how best to use it.

Yup, that's in there!

This puzzles me. Shouldn't the difference between meta and non-meta be pretty dramatic and agree-upon-able?

I find Scott's loyalty to 2008-era LW stuff a bit puzzling too. I assume that it's because EY is a great guy, and because 2008-era LW was a revelation to Scott when he was in his early 20s.

2017-era Scott clearly doesn't have an explicit understanding of meta-rationality (as he says, explicitly). I hope the eggplant book will make it clear to him. I don't know whether it will or not.

I do think 2017-era Scott does reason partly meta-rationally. His recent posts "Concept-Shaped Holes Can Be Impossible To Notice" and "Does Age Bring Wisdom?" are squarely meta-rational: they are about the limits of rationality, and how best to apply it given those limits. This, from the wisdom post, points pretty directly at what I mean by "meta-rationality":

We’ve been talking recently about the high-level frames and heuristics that organize other concepts. They’re hard to transmit, and you have to rediscover them on your own, sometimes with the help of lots of different explanations and viewpoints (or one very good one). They’re not obviously apparent when you’re missing them; if you’re not ready for them, they just sounds like platitudes and boring things you’ve already internalized.

mjgeddes — FWIW, I'm not a fan of Dempster-Shafer theory. As far as I know, no one has found it useful in practice. And, it's a formal system. Formal systems are great, but by my definition they are not meta-rational. (Meta-rational = effective non-rational ways of thinking about and applying formal systems.)

Sytse Wielinga

does any of this correspond to the idea you have about meta-rationality

Yes, I think so!

Re your second comment... My guess is that "reasonable," formally rational, and meta-rational thinking are not greatly different in kind or mechanism. It's more about what sorts of material you are processing, and what standards you apply to your own processing of them. You can't be meta-rational unless you can be rational, simply because meta-rational thinking is about rational thinking (that's the material it processes). But rational thinking is much less dissimilar to "reasonableness" than rationalists believe.

The rhythm of rationalism

Richard Kennaway's picture

David Chapman

"My understanding is that "straw Vulcanism" is the Romantic criticism: "But poetry! But Love! But Awe! You can't explain that with your 'rationality' can you?" Whatever I'm doing, it's not that."

The use of the term on LessWrong (where it was created) is wider than that. Spock (the Hollywood rationalist for whom the phrase is named) regularly states impossibly precise probabilities (usually low probabilities for outcomes that Kirk as regularly achieves). The straw Vulcan has exact numbers calculated by exact reasoning for everything all the time. This is well recognised, e.g. by Eliezer, to be impractical and a caricature. But at the same time, the Bayesian rules apply and give guidance even when you don't and can't have the numbers. As he says, you have to sense the rhythm of it.

What is metarationality?

Sarah Constantin's picture

You've quoted my writing as metarational, which surprises me a bit; I thought of what I was doing as "just thinking", at the level of fuzziness which seems appropriate to the amount I know (or, rather, don't know). I still can't reliably predict what things you're going to refer to as "metarational."

There's a thing about "the smarty-pants know-it-alls don't actually know it all" that I see as a "post-rational" political position, and it's not generally a place I choose to put myself. So if that's meta-rationality, then it's not my philosophy.

If meta-rationality is a claim about what minds do -- for example, that we do not generally reason from axioms or "raw sense-data" to derived conclusions according to a finite set of legal moves -- then I might be a meta-rationalist.

Meta-rationality is abduction?


Look at this informal equation:

Deduction x Induction = Abduction!

If we take deductive logic (methods for reasoning from axioms to certain conclusions) and combine it with inductive logic (methods for reasoning under uncertainty by generalizing from specific observed examples), is the end result abduction (a method for working backward from observations to best explanations)?
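For readers who haven't met the three terms, here is a minimal toy sketch of what each inference mode does (the bags, colors, and function names are invented for illustration; this is not a formal system, and real abduction involves generating hypotheses, not just looking them up):

```python
# Deduction: from a rule and a case, derive a certain conclusion.
#   Rule: everything in bag_a is green.  Case: this item came from bag_a.
rules = {"bag_a": "green", "bag_b": "red"}
def deduce(bag: str) -> str:
    return rules[bag]  # certain, given the rule

# Induction: from repeated observations, generalize a (fallible) rule.
def induce(observations: list) -> str:
    # naive generalization: if every sample agrees, posit a rule
    assert len(set(observations)) == 1, "samples disagree; no rule induced"
    return observations[0]

# Abduction: from an observation, work backward to the hypothesis
# that would best explain it.
explanations = {"green": "bag_a", "red": "bag_b"}
def abduce(observed_color: str) -> str:
    return explanations[observed_color]  # best explanation, not a proof

assert deduce("bag_a") == "green"
assert induce(["green", "green", "green"]) == "green"
assert abduce("red") == "bag_b"
```

Note the asymmetry: only `deduce` is truth-preserving; `induce` can be overturned by the next observation, and `abduce` picks among candidate explanations rather than proving one.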

Induction and abduction

Sytse Wielinga's picture


Aren't induction and abduction both purely epistemic rationalist constructions?

I would define induction as 'the ability to predict with high likelihood of success whether something is true' and abduction as 'the ability to find the One True Cause based on observations'.

Both depend on the problem statement: they will give different results depending on what you're looking for.

Neither concept is meaningful outside of a rational system of logic, with a fixed (or semi-fixed) ontology.

Problems related to induction and abduction that you will need meta-rationality to solve, include:

  • Deciding on what kind of ontology you're going to apply to the problem (in a 'who killed Jane?' type of problem, you may have to decide on motives; but that depends on how you analyze human minds -- more sophisticated analyses don't even have a simple concept we can call 'motive')
  • More crucially, what kind of system provides the justification and context for solving the problem in the first place, which is necessary to justify that the problem statement is appropriate (after solving the problem of Jane's death/murder, should someone go to jail? Do you trust the legal system? What are the boundary conditions for 'guilt'/when 'is' someone a murderer? Is jailing someone justified at all?)

And none of this 'logic' thing, including abduction, is a good way of thinking about how to design ontologies/knowledge bases/rhetoric/practices in order for them to be useful to others.

Or to put it another way: if you model other people as engines that apply 'induction' and 'abduction' based on what they 'know', and who can only be in trouble if they either 1) don't 'know' something such that induction and abduction will fail, or 2) use the 'wrong' method for applying induction or abduction, then you're going to be very bad at predicting whether you are being helpful to them or not.

Actually, 'thinking' isn't even a good way at all: experience teaching others, or seeing others trying to apply for themselves whatever they gleaned from what you said/wrote/did, is necessary to get really good at doing those things that actually help others solve the problems that are meaningful to them.

Abduction isn't a strictly formal system

Hi Sytse,

No, abduction isn't about looking for 'one true cause'; it's about applying creativity to generate multiple hypotheses. So it's not strictly a formal logical system.

Deduction and Induction are the formal systems of logic (I think your definition of induction is OK).

Any meta-rationality still has to relate back to the formal logic systems. Induction and Deduction can still be elements of meta-rationality, even though meta-rationality might transcend them both. I think it's using creativity to combine Induction and Deduction that isn't strictly a formal logical process.

If then, we say that the 'x' (multiplication) sign represents using creativity to combine Induction and Deduction, thus obtaining a meta-rational system (abduction) that transcends formal methods, then the secret to meta-rationality is in that 'x' sign!

Deduction x Induction = Abduction!

I think what the book really

Unkempt 's picture

I think what the book really needs is solid examples of what meta rational thinking actually is, as compared to rational and irrational thinking. As in Bob (stage 3 tribal irrational thinker) and Jane (stage 4 systematic rational thinker) and Clarence (stage 5 fluid meta rational thinker) are all confronted with the same problem. What are they thinking? What are their solutions to the problem? How are their solutions workable and not? How is this typical of this sort of thinking?

Understandably your target audience is STEM folks, but by demonstrating how this all works across technical and interpersonal and personal issues you can make it relevant to anybody and show them the objective. You're aware there isn't currently a good path to meta rationality, but by concretely showing the destination, people who get it will find that path, and help spread knowledge of it to others.

At some point this all needs to come down to earth, where normal people live, and not just technically oriented, typically poorly socialized engineers and mathematicians.

What meta-rationality is

Sarah Constantin

I still can't reliably predict what things you're going to refer to as "metarational."

Since you asked, I checked the definition on the glossary page. I wasn't completely happy with it, so I've revised it some, and it now reads:

Meta-rationality means thinking about and acting on rational systems from the outside, in order to use them more effectively. It evaluates, selects, combines, modifies, discovers, and creates rational methods. Meta-rationalism is an understanding of how and when and why rational systems work. It avoids taking them as fixed and certain, and thereby avoids both cognitive nihilism and rationalist eternalism.

Is it now predictable what I will describe as either meta-rational or not?

We can improve the definition some more if you point out ways in which it is unclear or incomplete.

What the book really needs


Thanks, this is really helpful! The Pluto section is one example like that, but it's probably not as clear as it could be. I do have others in the draft, but your comment made me see that this is particularly critical to understandability.

What you said about non-STEM examples was highly apropos, because I'm planning to discuss one in a blog post (not part of the Eggplant).

It comes from A Guide to the Subject-Object Interview by Kegan's research team. That's the scoring manual for their stage system. It sub-divides each stage transition into five sub-phases.

The book is mostly embarrassingly bad, but it does contain one gem. It's a brief monologue taken from a (fictional) psychotherapy session, in which the speaker describes an event in her relationship with her husband. The authors successively transformed the monologue from stage 3.0 to 5.0 through the ten substeps in-between. Since each transformation is so small, you can see extremely clearly how the logic of each stage develops.

I'm not sure how interesting or useful this will be for a STEM audience, but I'm planning to try and see.

I can't currently think of a way I could do something equally detailed in the STEM domain—but I will continue to think about that.

One certainly could do something precisely analogous in the domain of engineering management.



You are reading a metablog post, dated October 30, 2017.
