Comments on “Ontological remodeling”

Comments

Randomization

Dan

It’s interesting—but probably not significant—that the earliest advocates for what was later considered the first step in the Scientific Revolution were all woomeisters, who were right for the wrong reasons.

I think it’s probably very significant! As you’ve explained, Ptolemy+Aristotle was a strong local maximum in theory-space. If you practice the “half-baked speculations I came up with after reading the Timaeus too many times” method of astronomy, you’re more likely to random-walk yourself out of it.

I wonder if that’s a general pattern: given an accepted theory that hasn’t been meaningfully improved upon for centuries, if a better theory is discovered it will always be discovered by crackpots investigating really terrible ideas.

This is excellent.

Josh Brule

The “Gerrymandering the solar system” section in particular knocks it out of the park. It’s a good example of nebulosity and of how scientists don’t follow the “Scientific method” in practice; it’s very accessible; and it’s entertaining. Thanks for writing this.

Also, all this talk of ‘rationality’ makes me think of a potential failure mode (although I think you’ve covered this one elsewhere): I’m guessing the (LessWrong) rationalists whose first solid exposure to a systematic way of thinking is Bayes’ theorem/probability theory will probably jump to thinking something along the lines of, “Oh, well, of course the categories aren’t clear: you haven’t put a probability distribution over satellite sizes. Once you have that, you can say that Pluto has a low probability of being a planet, but something like Jupiter or Earth or even Mercury are very likely planets…”
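To make the imagined move concrete, here’s a toy sketch of it. (The logistic curve, its parameter values, and the use of mass as the sole criterion are all my inventions for illustration; nothing here comes from the post.)

```python
# Toy sketch of "put a probability distribution over sizes" for planethood.
# The logistic form and both parameters are invented for illustration.

def p_planet(mass_earths, midpoint=0.01, steepness=2.0):
    """'Probability of being a planet' as a logistic function of mass
    (in Earth masses). The midpoint and steepness are arbitrary choices,
    which is exactly the problem: the sharpness lives in the parameters,
    not in the world."""
    return 1.0 / (1.0 + (midpoint / mass_earths) ** steepness)

# Approximate masses, in Earth masses:
for name, mass in [("Jupiter", 317.8), ("Earth", 1.0),
                   ("Mercury", 0.055), ("Pluto", 0.0022)]:
    print(f"{name}: {p_planet(mass):.3f}")
```

This does assign Pluto a low “planet probability” and Mercury a high one, but only because the parameters were picked to make it come out that way, which is the circularity being gestured at.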

As you point out, bootstrapping the concept of ‘ontological remodeling’ is especially difficult. I like what I’ve read so far of “In the cells of the eggplant”; I’ll probably buy it, and I think it’s a good thing to have been written. But I suspect that the rational-to-meta-rational shift requires something a little more ‘adversarial’ than a book. I’m not quite sure what that would be, but I have a vague sense of needing to be repeatedly “kicked out of my eternalism comfort zone”. People with an especially strong sense of curiosity will do that to themselves, but I wonder if there’s a way to give everyone else a bit more of a push.

Heading off "so what?"

This is great, and I’m really looking forward to reading the rest of it!

I have the same concern as Josh Brule above (though of course I don’t know what you talk about outside the extract).

I think the obvious response to ‘things don’t always have sharp defined borders’ is ‘mate I already knew that, so what?’ Particularly from the rationalist community — I feel like Yudkowsky wrote various things along those lines, and there’s Scott Alexander’s ‘The categories were made for man…’ post.

And yet. You actually are pointing at something subtle and important, and the ‘things don’t have sharp defined borders’ point is really relevant to it (in the light of what you’re talking about, rather than as an earth shattering insight in itself).

I don’t know, though, it’s not like I know a better way of going about this. And I guess you hammer the difference between epistemic uncertainty and ontological nebulosity elsewhere. It might be worth putting effort into deliberately warding off the “this guy’s just spouting platitudes about things being a bit fuzzy at the edges” reaction, if you aren’t already. (That was my first impression of the site a couple of years back, fwiw.)

Feyerabend

Also, just out of interest, have you read Against Method? I remember it as being a defence of, as the title suggests, complete methodological anarchy — which would be unsatisfying in the same ‘everything’s as good as everything else’ way that cultural relativism is unsatisfying.

But I haven’t read it since undergrad, so maybe it’s more interesting than that. It’d be good to hear your thoughts on it.

Ontological replies

Dan — Really interesting idea! I hadn’t thought of that at all. Makes sense. Would be worth looking at/for other examples to see if it is a pattern!

Josh

Pluto has a low probability of being a planet, but something like Jupiter or Earth or even Mercury are very likely planets…

I dunno, I’d like to think the modal LWer isn’t that confused!

Fuzzy set theory is sort of the ontological equivalent of probability theory (which is epistemological). It would say “Pluto is .3741 a planet, whereas Mars is .9806 a planet.” It has all the same problems as probability theory (like, where did you get those numbers, man?) plus others of its own. There was a vogue for it in the late 70s but it seems to be ancient history now. I searched LW and there seems to be no mention of it.
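For concreteness, here’s the fuzzy-set move in miniature. (The membership degrees are the made-up figures above; min, max, and complement are the standard Zadeh operators.)

```python
# Toy fuzzy-set sketch. Membership degrees live in [0, 1] and are simply
# asserted, not derived from anything ("where did you get those numbers,
# man?" applies in full).
membership_planet = {
    "Mars": 0.9806,
    "Pluto": 0.3741,
}

def fuzzy_and(a, b):
    # Standard (Zadeh) intersection: minimum of the two degrees.
    return min(a, b)

def fuzzy_or(a, b):
    # Standard union: maximum of the two degrees.
    return max(a, b)

def fuzzy_not(a):
    # Standard complement.
    return 1.0 - a

# Unlike probability, nothing forces "Pluto is a planet AND Pluto is not
# a planet" down to degree 0:
p = membership_planet["Pluto"]
print(fuzzy_and(p, fuzzy_not(p)))  # 0.3741, not 0
```

That’s the sense in which it’s the ontological analogue of probability: the number attaches to the object rather than to our uncertainty about it, and the min/max algebra replaces the sum and product rules.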

sense of needing to be repeatedly “kicked out of my eternalism comfort zone”

That’s a great insight!

something a little more ‘adversarial’ than a book

There’s some of us scheming to create “experiential meta-rationality seminars.” Maybe that will help… although I hope “more immersive” rather than “more adversarial”!

drossbucket

‘mate I already knew that, so what?’

Well, hooray for the people who do already know! Not everyone does.

The book addresses a nebulous and heterogeneous audience. It’s not mainly addressed to LW (who are likeable and interesting, but probably historically insignificant).

Scott Alexander’s ‘The categories were made for man…’

Yes, it’s fab! He calls himself a rationalist, and that category is (of course) nebulous, but he’s well ahead of my possibly-straw “stage 4, love it or leave it for stage 3!” caricature of a rationalist.

[not] an earth shattering insight

Well, this is the first bit of discussion of meta-rationalism in a book that is supposed to be a first introduction to meta-rationalism. It’s meant to be the first baby step beyond “Bayes is the ultimate answer to everything!”

Unfortunately, I haven’t got any earth-shattering insights to offer (depending maybe on where a reader is at). I’m a popularizer; nearly everything I have to say has been said by someone else decades ago. In Heidegger-speak or Garfinkel-speak or Dzogchen-speak, so it needs translation.

That said, there are things to say about how you work with ontological nebulosity to create better models, and they are not well-known.

I guess you hammer the difference between epistemic uncertainty and ontological nebulosity elsewhere.

Right; that will have been covered in the first half of the book. (Plus a third category of nebulosity: linguistic ambiguity. There’s some backward reference to that in this short middle bit.)

have you read Against Method?

I don’t think so. Possibly 30 years ago. I’ve read many summaries, from which it seems to be a mixture of entertaining trolling and nihilistic relativism. If that’s accurate, it doesn’t seem worth the effort.

But it might not be accurate! The standard take on Kuhn misses much of his point. I (re)read him while preparing this piece. I’m actually not sure whether or not I read him 30 years ago; but if I did, I seem to have missed some important points then myself.

One of Feyerabend’s points seems to be that there is no single, fixed “Scientific Method,” which I agree with strongly. There is no fixed way of carrying out ontological remodeling, which is what makes it non-rational. You absolutely have to improvise. If he’s got substantive things to say about how, I would definitely want to (re)read him!

Fair point

hooray for the people who do already know! Not everyone does.

Oops, fair point. For some reason I’d got it into my head that you were aiming squarely at the LW audience.

I’m a popularizer; nearly everything I have to say has been said by someone else decades ago. In Heidegger-speak or Garfinkel-speak or Dzogchen-speak, so it needs translation.

Yep, those sources definitely require translation for me! It might be popularisation, but of a rare sort, so I’m finding it really helpful.

And thanks for the Feyerabend comments. ‘Entertaining trolling’ is what I remember, but if he’d been doing something else I probably wouldn’t have been able to tell the difference. I plan to read Kuhn properly, based on your recommendation. Maybe I’ll do the same with Feyerabend.

Kuhn's Postscript

If you’ve read Kuhn before, start with the 1969 Postscript. It’s a significant clarification (and, arguably, a partial retraction of his flirtation with relativism in the original).

That said, there are things

bathyscaphic

That said, there are things to say about how you work with ontological nebulosity to create better models, and they are not well-known.

This. I eagerly anticipate hearing more about these things! How do we develop patterns for dealing with ontological nebulosity?

The practicality of a given ontology for a given discipline is a feature not entirely of the ontology. It’s a feature of the relation between the two.

So, yeah - different disciplines can, and should, use different ontologies for the same domain when there are practicality gains in those disciplines. Makes sense, that is definitely better.

But, it seems that a better set of ontologies is a set that has as much uniformity as possible, to facilitate communication and collaboration across disciplines in the domain. Also, since generating an entirely new ontology for any new approach seems absurd, a better set should also exclude sufficiently redundant ontologies.

So there are a set of concerns:
1) balancing the virtues of the set with the virtues of the individuals,
2) determining when to individuate and when to merge,
3) determining or evaluating whether the cost/effort of remodeling has exceeded, or may exceed, the potential benefit to tractability.

This reminds me of so many things! I employ heuristics on this front constantly. But they’re just that - internal notions of feasibility that I would struggle to distill into language. I’m not even sure that a given strategy is universal (enough) to use as a technique. But - maybe there is a set of techniques, each with their own relative practicality? ;)

Looking forward to more. This article was awesome, can’t wait for the full book!

Ontological Remodeling of Rationalists / Rationality

Kyle

First of all, thanks for the new posts. Not to get too personal, but I’ve massively benefited from the writing on this site (it brought me out of a fit of crushing, nihilism-induced depression) and am looking forward to the ebook. I’ll definitely pick up “Understanding Computers and Cognition” to tide me over in the meantime.

On to my point…

In reading some of the writing on meta-rationality where you discuss rationality (such as this post), I’ve found that the writing doesn’t seem to make much sense unless I give up my intuitive understanding / definition of what “rationality” means and who a “rationalist” is. This was a big stumbling block when I first encountered your usage of the term, but gradually it became natural for me to just drop my definition whenever I’m reading your writing vs. reading any of the LW-affiliated writing. If there’s one thing that I suspect puts your target audience off of this writing (other than mistakenly seeing it as a criticism of a massively useful thing, as I was disappointed to see happening with slatestarcodex), it might be that they bounce off when they see how your usage of those words doesn’t match their own sense of them.

And, even if I drop my pre-existing definition, what I’m left with is a barely-workable, very fuzzy understanding of what you mean by those words. I’m also left in a state where I feel that I cannot point to anything in reality and use it as an example of your “rationality”, nor can I point to any specific person as an example of your “rationalist”.

I see this as an issue because I think it’s good to avoid non-intuitive / non-standard usages of a word. Maybe a different word or qualifier should be used, or maybe an explanation is in order which explicitly calls out how your definition differs from how the average LW-diaspora member would define it. As far as I know, you don’t ever acknowledge that your definition is different. Maybe you don’t think it IS different?

For example, take this post. As mentioned by drossbucket, it seems like what you’ve described is something that is obvious to the average rationalist community member (i.e. the people on LW and related sites). I would extend that remark to suggest that, according to my own understanding of rationality / rationalists, what you’ve suggested already IS a rationality technique. In fact, reading the LW post from several years ago called “37 ways words can be wrong” got me to understand the concept you’re explaining here, and it had a very positive impact on my way of thinking. It taught me to view words as mere tools for discussion and mental modeling. For example, I regularly use ontological remodeling as part of my “bag of tricks” for problem solving.

Another way of saying it is that I cannot imagine myself going up to someone who considers themselves to be a rationalist, explaining this concept to them, and getting a response other than “yeah I already knew that” or “oh, that’s quite useful, I’ll use that idea in the future”. As I understand it, by your definition of rationality, I shouldn’t get any response other than confusion, disagreement, or a new convert to meta-rationality! I just can’t imagine that anyone I consider to be a rationalist would have such a reaction. I don’t even think I can imagine ANYONE, regardless of whether they’re a rationalist, not being able to understand this idea or its usefulness. If meta-rationality is truly a stage 5-exclusive perk, this hurts the usefulness of the “5 stages” model. The stages are supposed to be competencies, which means people can use stages below the one they’re in, but not above. I guess that means that everyone I know is a stage 5 person but hasn’t realized it yet, since they can understand this idea!

Obviously, my preconceived definition of rationality / rationalist (which I expect is shared with most of the LW diaspora) differs from your definition. If that’s the case, then I think your usage of those words is causing problems in trying to reach your target audience. I see this happening in many of the various comment threads involving discussions of meta-rationality by people in the rationality community, where it seems like people decide to ignore the writing because they don’t see the discrepancy between their definition and yours. It’s quite frustrating to see this close-mindedness, but I can’t blame them - all things considered, why shouldn’t they assume you mean the same thing as they do, considering that you seem to be coming from having spent time reading LW-style rationality writing, and that they have no reason to suspect you mean something different?

So, I’d like to know - I won’t expect you to try to explicitly define your “rationality”, but at the very least, ARE there actually people who you can point at as specific examples of the “rationalist” you are talking about? Are they really typical of the rationality community? I’d also like to suggest that you (or someone) take care to call out this discrepancy, or find a different word, before it’s too late (and a bunch of potential readers bounce off the material).

…or perhaps I am very out of touch with the LW-style rationalist community and your definition actually DOESN’T differ from theirs. I may have become so accustomed to your writing that I can’t even conceive of what LW-style rationality actually is anymore! Honestly, when I think back to the me of 5 years ago who decided to abandon the HCI field because of its lack of formal, robust, scientific, evidence-based practices (a quote I remember being said to me to convince me to stay was “you can’t use science to create the iPhone”, suggesting that there was no formal method that could be followed to design great things or to do useful HCI research, despite my LW-inspired conviction that there was), perhaps I was the “rationalist” you are trying to talk about. Coupled with my LW-inspired utilitarianism, it eventually led to my previously-described disillusionment and nihilistic depression.

Straw and steel rationalists

a fit of crushing, nihilism-induced depression

Over the past six months, I’ve been torn between three writing projects: on meta-rationality, nihilism, and the remainder of “How Meaning Fell Apart” (subcultures and atomization). I’m glad the extremely incomplete nihilism chapter of Meaningness was helpful for you. There’s a ton more to say about that—it appears to be 50 web pages worth, according to the outline—and I do want to finish it sooner rather than later. Shall I take your comment as a vote to up its priority, relative to the other two?

I’ll definitely pick up “Understanding Computers and Cognition”

Unfortunately I can’t actually recommend it strongly. The first half of it is a brief summary of the state of the art in meta-rationalism as of 30 years ago. I’m afraid it’s probably too sketchy to be useful; but may be of some use. The second half of the book is a proposal for a sort of business social network based on speech act theory, which is pretty bogus IMO.

understanding / definition of what “rationality” means

The book semi-explains that. It’s a nebulous category, so no precise definition is possible.

But: “rationalism” is a term with hundreds-to-thousands of years of history. I’m addressing that whole thing, which is big and serious, and hugely historically significant as the cognitive basis for the modern world. “Rationalism” is reasonably well-defined and well-understood within philosophy and in the history of ideas. I think I’m using the word in a way consistent with that mainstream tradition.

I’m not specifically talking to an LW audience. The LW community is diverse, and I don’t know it well. My impression is that many-to-most members do fit in the category of “rationalists” as I am using the term, but it’s quite possible that I am wrong. Some people—Scott Alexander and Sarah Constantin, to name two—clearly do not.

In any case, the LW community is small and unserious and (so far) historically insignificant. Not dissing anyone; just stating an obvious fact. I’m delighted that many members of the community are interested in what I write, but I aim for a much broader readership.

Re “37 ways a word can be wrong,” the discussion starting at https://meaningness.com/fluidity-preview/comments#comment-1738 may be helpful?

by your definition of rationality, I shouldn’t get any response other than confusion, disagreement, or a new convert to meta-rationality!

Anyone who has developed decent skill in using formal rationality also uses meta-rationality frequently. These are not opposed alternatives; and meta-rationality is not an esoteric subject requiring some special mental acuity. Although, unfortunately, all the available presentations of it have been esoteric, so far.

The consequence is that people mostly don’t notice when they are doing something different, and don’t deliberately develop that sort of reasoning. It’s not taught anywhere, so it’s hard to get good at it. I’m hoping to rectify that.

If meta-rationality is truly a stage 5-exclusive perk

Just about everyone in a modern culture has some ability to think rationally. Most people aren’t very good at it. In the cognitive domain, “stage 4” means getting good enough at it that you can apply formal methods in most cases where they are appropriate, mostly get correct answers, and you mostly remember to do this in situations where it would be helpful.

Just about everyone who has well-developed rational abilities also has some ability to think meta-rationally. “Stage 4” means you aren’t very good at it. “Stage 5” means you are good enough at it that you apply meta-rationality in most cases where it is appropriate, and so on.

Honestly, when I think back to the me of 5 years ago who decided to abandon the HCI field because of its lack of formal, robust, scientific, evidence-based practices … perhaps I was the “rationalist” you are trying to talk about. Coupled with my LW-inspired utilitarianism, it eventually led to my previously-described disillusionment and nihilistic depression.

This is lovely! It’s a great description of the 4->4.5 STEM nihilism arc.

Not everyone experiences that, but enough of us do that it seems important to address.

Shall I take your comment as

Kyle

Shall I take your comment as a vote to up its priority, relative to the other two?

The nihilism section already had enough to be useful, for me at least. I’d say I’m much more excited about the meta-rationality project right now.

“Rationalism” is reasonably well-defined and well-understood within philosophy and in the history of ideas. I think I’m using the word in a way consistent with that mainstream tradition.

That certainly clears it up!

Re “37 ways a word can be wrong,”…

I see the distinction now.

Just about everyone who has well-developed rational abilities also has some ability to think meta-rationally.

This explanation of the 5 stages model suddenly makes it seem much more useful. For whatever reason, I had been thinking that the model specifies that a person at each stage cannot use ANY skills from the stages above it, which obviously doesn’t hold true.

a great description of the 4->4.5 STEM nihilism arc.

I also found myself transitioning from nihilism to materialism in the same way as you described on “190-proof vs lite nihilism”.

Thanks for clearing all that up!

Getting it through our thick eggplant skulls

Duckland

Over the past six months, I’ve been torn between three writing projects: on meta-rationality, nihilism, and the remainder of “How Meaning Fell Apart” (subcultures and atomization).

I believe a Twitter poll would be appropriate here. I would vote enthusiastically. And even if you don’t listen to the results, I’m personally curious about them.

Unrelated: I often read comments about your articles being vague. You often reply to the effect that there’s more coming that will complete the thoughts, or that the topic is inherently vague. People also comment that the ideas seem sort of obvious in retrospect (hey, I knew that; did I really learn anything?). In addition to all that, I personally suspect that I read your articles quickly and excitedly when they come out and then mostly forget them.

In math or CS I have often had the experience that I read a chapter, think it’s straightforward, try some exercises, then say “Oh what was that definition again?”, “Oh wait these ideas are distinct for a reason”, “Uhh how does this go again”, etc.

This is all to ask: do you think exercises of some sort could be appropriate for your format? I did enjoy the Bongard puzzles; I believe I understood that article better because of them.

You say this whole project is meant to be practical not theoretical. I personally don’t have much faith that most people can read an online book and benefit much. At the least, I have more faith in benefits coming from some sort of work or experience. I’m willing to bet research would/does back this up.

I know this sort of thing is associated with cheesy self-help books. I don’t think it can be helped.

I’m not really sure what form the exercises would take, but a few occur to me: “Find another example in the history of science of ontological remodeling”, “Find two examples for each of the three types of ontological remodeling”, “Find an online argument where someone is confused about ontological remodeling”, “Write a convincing argument that the new official definition of ‘planet’ is sensible. Write a convincing argument that it’s idiotic.”, etc.

If anyone agrees with me here about exercises being helpful, please say so.

Map, territory, exercises

Reflecting on comments here, and on Twitter, I suspect some readers have misinterpreted the point of this page as pointing out that models are never complete, or never entirely accurate. Or, as Korzybski first put it, “the map is not the territory.”

That point is taught in every engineering class, so it certainly won’t be news to most readers of the book! And it’s taken for granted in The Eggplant from the beginning.

The first half of the book instead explains some specific sorts of trouble map/territory mismatches cause, and why rationalist theories were not able to overcome them. The second half is about ways of dealing with particular kinds of map/territory mismatch.

It’s obvious that you have to revise the map.

Slightly less obviously, you sometimes have to revise the language of the map—the set of symbols it uses and its notational conventions, so to speak. That is “ontological remodeling.” That is not taught in STEM classes. But its necessity is still pretty obvious…

The actual point of this page is to describe some patterns in remodeling, which may be slightly less obvious. The main one here is the continuum of outcomes: from dropping a category, through demoting it to informal status, to retaining it formally in a redefined form.

The rest of the book is about methods and patterns of remodeling. My guess is that it mostly won’t be obvious to most readers with a primarily STEM background.

In math or CS I have often had the experience that I read a chapter, think it’s straightforward, try some exercises, then say … “Uhh how does this go again”, etc.

Oh, that’s a really interesting analogy! And, yes, it’s importantly true.

do you think exercises of some sort could be appropriate for your format?

Yes. Well… I’m not sure about here… but definitely somewhere!

In the last year I’ve realized that I need to transition from just putting stuff on the web to teaching in person. And, that should definitely involve exercises / activities / problems / experiences. I hadn’t really thought about writing those up publicly, but yes that’s a natural related development. It’s something that normally goes in a textbook, and I’m thinking of the meta-rationality part of Meaningness as a sort of textbook. We probably need to try them out in person first, though!

I know this sort of thing is associated with cheesy self-help books. I don’t think it can be helped.

Funny that you should say that! I’ve been struggling with exactly this. I associate “experiential exercises” with New Age weekend workshops. As you say, there’s probably no way of altogether avoiding that sort of cheesiness. But maybe we can be up-front about it, and joke about it, and that will dispel some of its sillier manifestations.

I’m not really sure what form the exercises would take but a few occur to me…

These are really nice, thank you!

I know practically zero about how to teach and how to develop a curriculum or exercises or whatever. I ought to read up on that.

I’ve been discussing the possibility of “meta-rationality seminars” with people who do have experience with that, and they have better ideas than me, so far!

a Twitter poll would be appropriate

Good idea!

Barring unexpected obstacles, I will finish The Eggplant first regardless, because it has momentum. But then it would be useful to get feedback on whether to do more meta-rat, or something else.

Bartley's Retreat to Commitment

Kind of a shot in the dark. Have you ever read Wm Bartley III’s The Retreat to Commitment? Kevin Simler is the only other person I know who’s read it. Look Bartley up on Wikipedia if you draw a complete blank. Retreat was his first book, about how “rational Christianity” fell apart, before his later phases: biographer of Wittgenstein and of Werner Erhard; editor of the complete works of Hayek (he was sometimes accused of having written one of the works, or of changing it excessively, and I think he died before the “works” was completed); and party to a sort of mutual love-hate relationship with Karl Popper (something of an exaggeration, but they seem to have had an interesting and important relationship).

Not really convinced by the Pluto example

Brian Slesinsky

I’m not sure I understand how to reconcile:

“Meta-rationality treats all category boundaries as inherently nebulous and malleable. It recognizes that there are always marginal cases. Those have to be dealt with pragmatically—taking into account context and purpose—because there is no rational standard.”

With:

“It’s widely understood that the 2006 definition is stupid and useless.”

This statement seems exaggerated, overly emotional, and unsupported. (I would guess that there would be agreement that the 2006 definition is a bit kludgy and inelegant, but I’m not sure how many people would agree that it’s “stupid and useless.”)

It seems like the goals were mostly preserving backward compatibility for the word “planet” while coming up with a naming scheme for new objects in the solar system. It was well-understood that it was kind of arbitrary, so they just picked something. That sounds very pragmatic to me, much like you describe meta-rationality. It doesn’t seem like a particularly good example of the rational mindset?

If everyone agrees that category boundaries are somewhat arbitrary, why get worked up about it? Perhaps there is a better example to demonstrate the difference between rational and meta-rational?

Pragmatic planets

It was well-understood that it was kind of arbitrary, so they just picked something. That sounds very pragmatic to me

Interesting point! I think you are right that the de facto outcome was pragmatic. But it doesn’t seem that the process was; and the formal outcome isn’t, either.

That is: many astronomers argued passionately that Pluto really truly is, or isn’t, a planet. (How many? The accounts I’ve read don’t say. Maybe it was just a few loudmouths, and most IAU members didn’t care?)

And, the formal outcome was not “The word ‘planet’ doesn’t have any meaningful technical definition, so we’re just going to adopt the traditional list for naming purposes.” They invented the “clearing the neighborhood” criterion instead. That wasn’t pragmatic; it was an obfuscatory fudge (as far as I can tell, not being a planetary dynamicist).

If everyone agrees that category boundaries are somewhat arbitrary, why get worked up about it?

Why indeed! And yet, people did. In the case of Pluto; and we do for other categories all the time.

Sometimes it matters a lot where you draw a boundary. Sometimes it doesn’t.

Categories and Truth

If everyone agrees that category boundaries are somewhat arbitrary, why get worked up about it?

Let me answer this a second way, that is more relevant to the first half of The Eggplant. (Whereas my answer above was more in the spirit of the second half.)

Rationality assumes (implicitly, or, often, explicitly) that statements are either true, false, or meaningless. That is, ontologically—in the world. Epistemologically, we may be unsure; we may have a degree of belief; but that is a separate issue.

We would like to think that statements of the form “X is a Y” (where Y is a category) are usually true or false. However, if category boundaries are generally somewhat vague or arbitrary (“nebulous”), we’d have to say that “X is a Y” is meaningless, quite often.

How often?

Are category boundaries ever precise enough that we can say “X is a Y” is “absolutely” true or false?

How precise do they need to be? This would also seem nebulous… it’s hard to say.

As you dilute it with increasing amounts of water, where exactly does “this is yogurt” go from “true” to “meaningless” to “false”? Isn’t this also nebulous? In which case… a clear division of true/false/meaningless does not seem feasible.

precision and boundary setting

Brian Slesinsky

It might be useful to compare formal definitions with legal boundaries (property lines, etc). Legal boundaries can be quite arbitrary at times. For many purposes, the precision provided by surveying property accurately is simply unnecessary; nobody cares exactly where the boundary is. But sometimes, arbitrary precision is useful as a way of solving disputes.

Similarly, most words have no formal definitions; their meanings drift based on usage. For most purposes this is fine; everyone is free to classify astronomical objects using their own categories whenever it suits their purpose.

Apparently, the people who catalog new astronomical objects wanted to decide in advance what upper limit they’ll use for “dwarf planet” in an official standard. If we think of “clearing the neighborhood” as just a Schelling point for setting an arbitrary upper limit on how to classify new objects, this seems like a pretty pragmatic decision as well. (A Schelling point was needed to end debate, but needn’t have any further significance.)

Carving reality at its joints can be important when we want to think clearly about some issue. But for legal reasons, just being precise often suffices.

What is Rationalism(s) and Are Hunter-Gatherers Rational?

The Eggplant essays and other Meaningness writings seem to alternate between references to “rationality” as if it is one thing, and references to “systems” - and Kegan’s Stage 4. Let’s name the subject of this (my) mini-essay “rational systems”. We could use some explicit examples with comments on how well or poorly they fit the idea (to be developed) of a “rational system”. Would the following be examples: Randian “Objectivism”, Ptolemaic astronomy, Newtonian mechanics, quantum mechanics, general relativity, Scientology, the logic of “Flatland”, Kantian philosophy, Hegelian philosophy, psychology or sociology and their bodies of research/publications? I believe the Ptolemaic system with Aristotelian physics was as logic-based as modern systems, but one can’t say that it delivered the benefits of our modern technological world.

To what degree was the Enlightenment, say, a change in a “way of thinking”? Could I make the case that it was more a matter of discarding certain beliefs, and having a program of testing folk- and other dubious beliefs empirically so they got weeded out? How much is the modern world a result of a succession of discrete “facts” or “discoveries”? How much a matter of accumulation of methods/instruments? How much a matter of progress in storage capacity and bandwidth of media (i.e. starting with Gutenberg)? How much a matter of institutions, starting with the Royal Society, and successive institutions and their development and tuning: other scientific journals, the German and Johns Hopkins models of the research university, the refinement of the idea of “scientific fact”?

Do scientific paradigms and science-oriented epistemology shed much light on how to know ephemeral or simply non-general facts? Like “Barack Obama was born in ___”, or “Russians under the direction of Putin interfered in the US election/interfered enough to change the result”? Or “Antifa is planning a revolution on 11/4”? Could science be characterized as a protocol under which “true statements” (however we define that) come to the foreground and false ones fade into the background? If so, could that be applied to ephemeral or non-general facts?

Are rational systems characterized to any degree by certainty that every question (or well-formed question, or whatever) has an answer? My own reading and thinking leads me to think societies at a hunter-gatherer level of organization (the Yanomamo have agriculture, but I don’t think it has affected the social organization much if at all) more or less don’t know that there’s anything (of importance) that they don’t know; i.e. they have answers to any question that could be communicated to them, or a procedure for getting the answer (Why is my boy sick? The shaman takes hallucinogens, goes into a trance, etc.). If nearly everything they “know” is “wrong”, that’s not the point.

Ontology&Concepts+NLP is the key to general AI

I think you’ve really nailed it here, David. I think this is really THE key problem that needs to be solved for AGI, and current statistical approaches are light-years away from tackling this. In the current machine learning/neural network craze, most researchers are looking in all the wrong places.

I think the key to progress is understanding how to connect many concepts together into a coherent whole. It’s the art of being able to look at things from many angles. All models have limitations, so the key is the ability to identify the correct context and limitations of any given concept, and understand its relationships with other concepts.

My understanding has really taken a huge quantum leap over the past year, after months of me loading up on Wikipedia articles and connecting multiple concepts together.

I’ve got an approach to the AI control problem that’s light-years ahead of MIRI and co. :D Yudkowsky and co. ain’t even close. To give an aircraft analogy, my plane is roaring down the runway towards takeoff speed; the MIRI guys literally never even left the hangar!

It’s natural language processing and concept learning! That’s the key to machine psychology and the control problem! The statistical approach is a distraction, and we must get back to symbolic AI!

Look at my latest wiki-book here, the answer to the control problem and AGI is here!
https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Mac…

I’d also recommend an older wiki-book I did here, on the central concepts in the field of knowledge representation and ontology:
https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Ont…

Modal & Non-Monotonic Logics

David,

It seems that what you’re looking for are logics that don’t assign definite truth values (T/F), but rather allow for degrees of truth. Well, there are logics for that, so I think what you call ‘meta-rationality’ is still ‘rationality’, just not the usual deductive or inductive kind.

I think modal and non-monotonic logics are what we’re looking for here! This includes fuzzy logic and imprecise probabilities. Modal logic allows you to perform reasoning about dynamic (temporal) systems, and deals with the notions of counterfactuals (possible worlds).

In my view these modal and non-monotonic logics are the key to understanding concept learning, natural language processing and machine psychology, including the AI control problem! This is because they’re specifically designed to deal with fuzzy boundaries and ambiguities.

Read through my wiki-book entries on these topics here:

https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Mod…

Linear Temporal Logic (LTL) shows promise

I’ve got respect for David’s view that ‘nebulosity’ can’t be systematized, but even if that’s true, the brain is a natural system (nothing supernatural!) and clearly humans can deal with nebulosity, so there must be some mathematical definition of a method for handling it, even if it’s a non-computable one.

Modal logic, and in particular Linear Temporal Logic (LTL), is showing promise at dealing with nebulous natural language problems.

Read this:
http://www.firstpost.com/tech/news-analysis/mit-incorporates-human-intui…

“MIT researchers have improved award-winning automatic planning software by adding in code that mimics human intuition.”

“The strategies used by the human problem solvers were described in simple statements that could be understood by machines, in a formal language known as linear temporal logic.”

Absolute stages, learning methods

Rob Alexander's picture

Stages as absolute — I’ve tended to read you as saying stages are absolute, too. Encouraged, for example, by passages like this:

“People who are at a particular stage cannot think or feel in the ways characteristic of later stages. They actually cannot understand explanations given in a later stage’s framework.” (from https://meaningness.com/metablog/political-understanding-stages)

Learning methods — my understanding is that effortful methods are the ones that work. E.g. reading and re-reading are very poor; retrieval practice and self-testing are much better, especially when topics are interleaved and optimally spaced out. (My main source on this is the book Make it Stick — https://makeitstick.net/)

In a similar vein, it’s very easy to read something like your commentary on the Pluto debacle and nod along. It’s a lot harder to do a novel-but-fruitful restructuring yourself.

Stage transitions are gradual

Ah… On the other hand, from my main stages post:

Stage transitions are gradual; they take many years. During a transition, one is sometimes able to function in the more sophisticated mode and sometimes not.

Maybe it will be helpful to mention/explain this more frequently.

Wording

Rob Alexander's picture

Perhaps equally important - avoid ever using wording that implies hard boundaries, especially if other sources (e.g. Kegan himself) do imply this.

I may also have been influenced by analogies (here or elsewhere) to e.g. “the shift when you see the boxing kangaroo instead of the noise pattern” which refer to largely atomic, irreversible changes.

Boxing kangaroo

I may also have been influenced by analogies (here or elsewhere) to e.g. “the shift when you see the boxing kangaroo instead of the noise pattern” which refer to largely atomic, irreversible changes.

Oh, that’s interesting! Yes, the analogy is misleading in that respect.

I think it is pretty much irreversible, but not fast or atomic. It’s a matter of acquiring a whole bunch of specific, complex skills.

You don’t learn Bayes’ Rule and suddenly thereby become rational. Learning rationality involves mastering huge amounts of material, both conceptual and procedural. The same goes for meta-rationality.

Analogously, meta-rationality

Analogously, meta-rationality has been around for more than half a century, and we can at least say that it’s not known to be unworkable.

It seems to me there is a case to be made that Pyrrho and Nagarjuna both hit upon the ideas first, but in contexts that didn’t carry their ideas forward to the modern West well. That is, both were in ways too advanced, because there wasn’t enough rationality to also support metarationality, although both also found ways to thrive for a while anyway: Pyrrho until the Stoics and Epicureans evaporated during the European dark age, and Nagarjuna as prajnaparamita melted into a mystical state rather than being understood as a philosophical method (though, as I’m sure you well know, some lineages managed to keep alive, or more likely rediscover, what Nagarjuna had meant). But in fairness I wouldn’t know this if it weren’t for the Kant, Hegel, Husserl, Heidegger, Sartre, Hofstadter, Kegan line of reasoning that got us to metarationality as we think of it now.

Really really great!

Kenny Evitt's picture

This is great! It’s a wonderful, accessible explanation of what you mean by ‘meta-rationality’ and why it’s useful and important.

I think now, even more strongly, that the ongoing disagreement/confusion about rationality and meta-rationality between you and the LW+-ers is due to us, the LW+-ers, automatically including all of what you consider to be ‘meta-rationality’ under the category ‘rationality’.

Even the LW-EY-2008 post to which I linked yesterday – Taboo Your Words – makes the exact same point you do in this post: words, and more importantly the ontological categories to which they refer, must (sometimes) be carefully interpreted according to the relevant context and the relevant purposes of their users. (I think this post better emphasizes the importance of context and purpose than the LW post.)

Meta-rationality is all about navigating and creating – and remodeling – ontologies.

Rationality assumes (implicitly, or, often, explicitly) that statements are either true, false, or meaningless. That is, ontologically—in the world. Epistemologically, we may be unsure; we may have a degree of belief; but that is a separate issue.

This is a good statement about (straw) ‘rationalism’ and it makes me wonder if there’s not some kind of ‘rational meta-rationality’. I’m far more sympathetic to your frequently repeated claim that ‘rationalism isn’t sufficient!’. I’m much less sure now that it’s ‘false’ (ha!), in that I’m less sure that there’s any possible mind that isn’t effectively using a hodge-podge of (rational) tools, but with no overall ‘rational’ architecture.

Fuzzy logic is more general than probability theory

Hi Kenny,

With probabilities, what the numbers are quantifying is subjective confidence in some hypothesis about the world.

Fuzzy logic also deals with reasoning under uncertainty, but what it’s talking about is something a little different from probabilities. In the case of fuzzy logic, the numbers represent the degree of precision of some concept - that is to say, how well defined is that concept? And that, I think, is precisely what we need for dealing with ontological remodelling.

Fuzzy logic is more general than probability theory, because it can deal with uncertainty in the underlying models or concepts that we are using to talk about reality, whereas probability theory simply assumes a precisely defined underlying model or set of fixed concepts that we use to formulate hypotheses. So fuzzy logic lets you handle an extra layer of uncertainty (what Scott Aaronson calls ‘Knightian uncertainty’, or uncertainty in the underlying models themselves).

A guy called Bart Kosko wrote a paper suggesting that probability is just a special case of fuzzy logic way back in 1990! And basically, I think he’s right.

https://en.wikipedia.org/wiki/Fuzzy_logic
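The contrast drawn above - probability as confidence in a hypothesis over crisp categories, fuzzy degree as how well something fits a nebulous category - can be illustrated with a toy membership function. This sketch is my own addition, not from the comment; the diameter cutoffs are invented purely for illustration and are not IAU criteria.

```python
# Toy fuzzy membership function for the nebulous category "planet".
# A probability would quantify confidence that X is a planet, assuming
# the category is crisp; a fuzzy degree instead quantifies how well X
# fits the category. The cutoffs below are arbitrary, for illustration.

def planet_membership(diameter_km: float) -> float:
    """Degree (0.0 to 1.0) to which a body 'counts as' a planet by size."""
    small, large = 2000.0, 5000.0  # invented cutoffs, not real criteria
    if diameter_km <= small:
        return 0.0
    if diameter_km >= large:
        return 1.0
    # linear ramp between the cutoffs: partial membership
    return (diameter_km - small) / (large - small)

print(planet_membership(2370))   # Pluto-sized: low partial membership
print(planet_membership(12742))  # Earth-sized: full membership (1.0)
```

Note that nothing here resolves the boundary dispute; it just relocates the arbitrariness from the category boundary into the choice of membership function.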

Type theory is a form of computational logic that’s an off-shoot of a more abstract mathematical logic (category theory). It’s especially suited for dealing with concepts, because it’s basically a set of language rules (it can be treated as a programming language).

https://en.wikipedia.org/wiki/Type_theory

So if we combine type theory with fuzzy logic, I think we have the tools we need for dealing with ontological remodelling.

To some extent I agree with David though; it’s likely that no strictly formal set of procedures can fully capture ontological remodelling.

To me, meta-rationality is related to a notion of time and the fact that everything ultimately is ephemeral. The thing is that the world is a dynamical system - it evolves in time, and this is the reason that no fixed set of rules can ever entirely capture reality. It’s the dynamical (time-evolution) nature of reality that ultimately defeats any purely formal methods.

That said, I think type theory and fuzzy logic are essential components of ontological remodelling.

I'm building a rationalist qabbalah.

Bird Handorbush's picture

That said, I think type theory and fuzzy logic are essential components of ontological remodelling.

mjgeddes, how would type theory and fuzzy logic be applied when remodelling an ontology wherein the earth is flat?

To provide some context, suppose that I believe the earth is flat. I don’t trust information that I can’t verify either with my senses or according to the opinion of someone I’ve known for years who’s proven that they will be honest with me who says that they’ve verified it personally or else knows someone who they trust who has verified it for themselves and so on. I’ve gone to different fields and lakes and held up a straight edge to the horizon from various high vantage points and it’s always looked to me as though the horizon were level. Now, someone I trust says that they know someone they trust who told them on the phone that they went to the ocean and climbed a tree with a ruler and held it up against the horizon and they said that it did appear to dip at the outside edges. Neither I nor my friend can replicate this experiment ourselves because we don’t have any means of getting to the ocean, but I now have reason to believe the earth isn’t flat, which makes me uncertain about whether or not I should be open to information which I would normally consider unverifiable, such as might be found in a physics textbook or on the internet.

In this thought experiment, I’m questioning the validity of my current flat-earth model and wondering whether to change it and if so how I might go about realizing a different model that I can accept. Would you accept that this is a particular instance of ontological remodelling? If so, how will type theory and fuzzy logic be essential components of the process whereby I convince myself that the earth is round?

I hope you can trust, despite the notoriety of my chosen example, that I’m posing this problem to you in good faith. I don’t know the answer, which may be because I don’t have anything more than an acquaintance with these theories beyond their respective Wikipedia articles, and knowledge of Russell’s paradox and his proposed solution, the theory of types, from his book The Problems of Philosophy. However, to show my hand early, I suspect that the real reason is that, as far as the example goes, neither of these theories is relevant. They may be useful in some other cases of ontological remodelling, particularly instances concerned with theories of logic and mathematics at the object-level, but I believe that they will not play a meaningful role in changing the mind of a flat-earther, or of getting a totalitarian to think like a democrat, etc.

Do you mean to say that these theories are useful for helping rationalists understand why it might sometimes be necessary to adjust their decision frameworks for what is or isn’t significant in order to renegotiate the boundaries of their objects and accomplish different purposes?

Infinitesimals

Bad Horse's picture

“As always when something is a prerequisite for itself, you have to proceed in a spiral. An approximate understanding of a small part of the subject makes it possible to grasp more of it, and thereby to revise your understanding of the initial beachhead. You need repeated passes over the topic, in increasing breadth and depth, to master it.”

This is correct, but the denial that outward spirals work is fundamental to rational thought. It’s in the name: “rational”. The irrationals were so named because they were “against reason”; the non-irrationals were thus called the rationals, meaning in accordance with reason - hence “rationalism”.

Rational thought allows only integers and ratios, and hence disallows the use of physical measurements. This has been consistent from ancient Greek geometry to the present day, although everyone today is unaware of the distinction, because people in the sciences are unaware that “rational thought” disallows irrationals, and people in the humanities are oblivious to the uses of irrationals and continuums. People in the humanities have successfully fooled people into using the phrase “rational thought” as if it were a synonym for proper or even scientific thought, and you will frequently see the phrase “scientific rationalism” in print, as if such a thing were possible.

Aristotle argued that infinitesimals can’t exist, and his physics assumes that they can’t exist, and this is the basis of a wide variety of ancient, medieval, and post-modern arguments. All such arguments begin by saying that everything must be explained by a causal chain which can be traced back to a single event which began the chain. Similar arguments apply to the construction of meaning in language (Derrida’s argument in /Of Grammatology/ that words can’t signify is exactly that).

So if you want to win over anyone in the rationalist camp, you have to begin by explaining infinitesimals, calculus, and differential equations, and show how the “spiral” works. Otherwise it just sounds like crazy talk to them.

Notes

Bad Horse's picture

Other thoughts:

  • Doug Lenat’s Cyc has an extensive theory and method for ontological remodeling on the fly for an AI. See Guha 1993, “Contexts”. (Sorry; I don’t know the ref. I only have a prepub draft of it.) Also Lenat & Guha 1990, Building Large Knowledge-Based Systems.
  • Similar to my comment on infinitesimals, the very notion of “ontological remodeling” is nonsense to a Platonist or Aristotelian, and there are still a shocking number of Platonists and Aristotelians. I know plenty of people (all Catholic) who explicitly believe objects and people have Aristotelian essences, and I’m told there are contemporary journals of medieval scholastics, where people still publish “research” into questions about essences and the “proofs” of Aquinas and Duns Scotus. I met a semiotician who told me that the entire discipline of semiotics is based on the Aristotelian ontology of Duns Scotus.
    • And post-modern philosophy takes the Aristotelian account of essences for granted. A post-modernist might nod and say he believes in ontological remodeling, but he would mean that he believes society continually socially reconstructs its ontology through dialectic–and that it is reconstructing reality by doing so.
    • None of these people I’m talking about are nominalists. They don’t believe words are socially-constructed concepts which correspond to structures in reality. Nominalism, the understanding of language that ended the Middle Ages, is increasingly out of fashion in the humanities. So they won’t grok what you’re saying; it will sound like crazy talk.
  • Phlogiston is not “wrong”. Phlogiston is a theory that works quite well at predicting some things, and therefore provides information, and therefore is a good theory. Phlogiston is simply “negative oxygen”. Oxygen is a mass which begins *outside* the fuel to be burned, then is combined *into* the products of combustion. Phlogiston theory assumed that the thing-which-must-be-present-for-burning must begin *in* the fuel, and was *eliminated* from the combustion products. So a phlogiston-based chemical reaction would have 1 atom of “phlogiston” in the fuel for every atom of oxygen that was in the air, and one atom of phlogiston dispersed into the air for every atom of O incorporated into the combustion products. The problem was this required phlogiston to have negative mass, and also this didn’t explain the need for air in burning.
  • Gravity has no meaning at all in Newtonian mechanics. It’s just a name. It’s important to remember that. I don’t know quantum mechanics, but it may be that this “wrinkle in spacetime” is an analogy or description of effects, not a meaning. In nominalist physics, there’s typically a bottom level of names that should not be assumed to have referential meanings.
  • Comets were believed to be in the heavens until 1577, when Tycho Brahe was able to measure the distance to one. It caused quite a stir, even in the 16th century, as most intellectuals still believed the heavens were perfect and unchangeable.
  • It’s important in discussions of heliocentrism to point out that, without an ether, there is no fact of the matter as to what moves about what.
  • Re: “But we can’t have any general theory of truths, because they don’t have anything meaningfully in common.” But we do: information theory.
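The phlogiston mass-bookkeeping point above can be checked with a few lines of arithmetic. This sketch is my own illustration: the carbon and oxygen figures are standard approximate atomic masses, and the bookkeeping framing is just the comment’s argument made explicit.

```python
# Oxygen theory: C (in the fuel) + O2 (from the air) -> CO2.
# The combustion products weigh more than the fuel, by the mass of
# the oxygen absorbed from the air.
m_C, m_O2 = 12.0, 32.0        # approximate atomic/molecular masses
m_CO2 = m_C + m_O2            # 44.0

# Phlogiston theory: fuel = residue + phlogiston, and burning releases
# phlogiston into the air, so the products should weigh *less* than
# the fuel. Matching the observed weight gain forces phlogiston's
# mass to come out negative -- the theory's awkward consequence.
m_fuel = m_C
m_products = m_CO2                   # observed: heavier than the fuel
m_phlogiston = m_fuel - m_products   # -32.0: negative mass required
assert m_phlogiston < 0
```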

Suggestions for stabilizing the complete stance?

Emily's picture

So when, over a long period of time, there is introspection which learns to recognize how the mind continually alternates between trying to solidify things or discount them as without meaning, then one arrives at prolonged experiences of groundlessness (the complete stance, correct?).  By groundlessness I mean a mind of spaciousness where meanings are flowing in and out (sometimes rather quickly) and shifting and changing depending on the circumstances.  At this point, the subjective experience is one of awe and, quite frankly, profound relief (fixation sucks). 

But then there seems to be this intense peer pressure to conform to meaning-making in the context of either of the extremes.  In some sense there might only be one extreme which comes down to fixation, or an appeal to PIN IT DOWN, by all means (including under the guise of there being nothing to pin down).  When confidence in the complete stance is still shaky, it is extremely irritating to face this pressure to pin things down over and over again, at least it was for me personally.  In my interactions I sometimes got feedback that I seemed spacey and aimless.  In retrospect I think this criticism was accurate: there was some combination of being traumatized and exhausted by fixation and so just not wanting to go there, and also murderous rage at a perception of pervasive oppression, as well as simple confusion about how to proceed. 

Luckily, after many bouts of nihilistic depression/rage/anxiety a flip switched and I stopped trying to protect myself from other people.  (Previously, I was terrified of getting enmeshed, probably due to a sense of powerlessness).  OK, ok, a flip didn’t switch–it’s an ongoing thing having to do with lots and lots of hideous shadow stuff and just snacking on that all the time even though it often tastes like pork rinds.

I am writing because I am interested in others’ experiences with taking a stand in the complete stance (to use David’s terminology) and how you have navigated that.  I’m sure it’s different for different folks due to varying strengths and weaknesses, but maybe there are also common themes?  Or suggestions?


Copyright ©2010–2018 David Chapman.