Comments on “Fluidity: a preview”

Comments

Healthcare in a fluid world

Elayne's picture

Wow. I'm home with the flu and have binge-read your book today-- and it resonates deeply. Most of my life, I've felt somewhat a Stranger in a Strange Land, and you've articulated ideas I've tried to develop, in a much more coherent manner than I ever managed. I was raised atheist and by default, "blue" politically, in the Deep South (can you imagine?!), but somehow monism has always seemed not just incorrect but personally ghastly. I can't say exactly why other than it seems to invalidate the parts of life which mean the most for me-- engaging with others in relationships. If we aren't actually two, although certainly connected as well, how can we love each other? I have used the term "ineffabilities" to describe those particular but indescribable and transitory qualities of the "other"... whether person or poem. I really like your term nebulosity. And the combination with patterns. Beautiful.

I see now why I don't fit anywhere, in groups, unless they are poets or the like, unless I keep my mouth shut. It's because like you, I'm neither monist nor dualist. I have been baffled at how my progressive friends are utilitarian but my libertarian or objectivist friends are the ones who seem to see people as non-abstract-- and now that makes sense.

My atheist friends are almost to a person varieties of monists, and they think I'm arguing just to argue, lol, because they know I'm not a dualist. I am a "geek" in organizations-- and ouch, I guess clueless, but I get that. It's why I've left most of them, because the purpose wasn't what I thought it was.

I don't know how I wound up being anti-monist plus someone with mainly typical progressive political ideas. I can tell you it's a strange place to be.

I'm a pediatrician and poet, with a very gap-filled education outside of medicine. I'm not sure I have anything useful to contribute here, but I have a request for an example of fluidity applied to healthcare.

I've been in practice over 20 years. My desire for people to be able to get healthcare isn't based in utilitarianism, because I seem to be fundamentally incapable of abstracting individual humans into math. It is just because I see, in person, people who want medical care and can't get it. Parents of kids in my practice. It breaks my heart.

Years ago, I thought the solution was to tackle one area at a time, starting with the most ignored. So I helped get a homeless clinic set up in my town. But... then I started reading and learning about the convoluted financing and access problems, and I thought well, we need to address this nationally. I heard a single payer speaker and went all in--I wound up being elected to a national board (I'm using a pen-name here).

I figured they'd be interested in tweaking the plan to accommodate folks who didn't want to participate. I thought that could be my contribution-- explaining what southern conservatives want. Because no single solution works for every person. I came up with some ideas where opt-outs could work, and even tested these out with my conservative friends and got favorable responses, but there was no interest from the single payer org. None. I was startled at how rigid the thinking was-- I figured there had to be something more than just trying to bring about healthcare going on with them. Smart, kind people, and with this funny religiousness about their idea. It was sacred and couldn't be adjusted. Even though it was only a 20 year old piece of legislation, it was untouchable. This universalism you describe would explain that.

About that time, I got some education on my political party of birth, the Democrats, and became disillusioned-- I left my party. I was amazed that it had taken me so long to question the good guy/bad guy bit. I toyed with being a green but decided to see if I could get a better view of things by staying out of parties. I spent a lot of time trying to organize advocacy at the state level, which has been mostly a process of stuffing newspaper in the leaky roof. It doesn't last long, and the structure doesn't change.

Then I started wondering if the government was too full of sociopaths to trust with healthcare. I read about how they climb hierarchies, and I see that it happens. Not all are, but a good many. I wondered if we were at the point where neither corporations nor government could manage healthcare without making it worse. Or whether there were even minimal boundaries between corporations and government anymore. And I realize I can't take up an eternalist stance that's just anti-corporate either. I gave up full blown socialism. I gave up anarchy. I don't know what the answer is, now. I don't know what to do in light of the sociopaths. We have to account for them. I do think there need to be multiple possibilities, which are somehow flexible and fluid.

To some degree, I've felt at an impasse. And intermittently nihilist, minus this big quasi single payer plan I had which now seems crazy, lol. I hate that feeling. Your book was very helpful in reminding me of how I really see meaning, which is the same as your description. It's hard to hang on to sometimes. If I get with a bunch of utopians, it's contagious. It seems to be for me, at least. I get caught up in it, until the rigidness is too much to take. So when I can't sustain that fantasy anymore, I get dumped into meaninglessness and have to find one of those little meaningness boats again.

So... any ideas for little healthcare boats?

Thank you again for doing this book ❤

Elayne's Comment

Corey's picture

Wow, Elayne. So this will be a cheerleader comment for sure, but just thanks for posting that. Reading someone else illustrate, in their own context, a profound reaction to David's book was cathartic. I'm right there with you, just basically 36 hours coming off of a couple-month addiction to Standard Issue Leftism that, on reflection, I fell into.

This. This resonates:

If I get with a bunch of utopians, it's contagious. It seems to be for me, at least. I get caught up in it, until the rigidness is too much to take. So when I can't sustain that fantasy anymore, I get dumped into meaninglessness and have to find one of those little meaningness boats again.

I've learned from Matthew O'Connell that spiritual practice (the primary subject of some of David's other sites) must lead to some sort of impact on the wider world, or there is a sense of cloistering off, of failing to engage with the whole of reality. The challenge in trying to 'make a difference', so to speak, is that so many movements are completely subsumed in the Kegan level 3&4 logics that lead to the situation you ran into. I am trying to respond to the challenge of this moment by withdrawing from a tendency to 'B.S.' about a wide variety of topics, and instead to narrow my own focus and trust the opinions of others who are more knowledgeable, as I find David's technocratic approach rings true with my own experience and intellectual journey. It would seem that working for change with both political sides, in a way that does not 'compromise' but rather transcends both perspectives, in a field one is passionate about and skilled in, is the path to take. This approach would appear to be exactly the one you have taken. As we perhaps now more readily understand, against the backdrop of David's illustration of our political climate, we who would take this path are an isolated lot facing a great challenge. Both Matthew and David have described how there aren't really any post-religious, post-traditional institutions that have yet emerged that would promote this way of thinking about and attempting to resolve problems, perhaps with the partial exception of Silicon Valley(?). I've just found holding the line on this understanding straight-up difficult.
Maybe I'm just a weak person, but every time I read David's writing along these lines it resonates BIGLY (thanks Trump). Yet several times now, in the meaning-making and now political arenas, I've found "my" favored opinion of the moment floating back into the level 3 and 4 logics, due to an introverted-plus-agreeable personality and the sheer volume of opinion that dominates this discourse as the days and months go by, until I happen upon something that jolts me back awake from a state of either dull reluctant agreement with the Left or confused hopelessness. I'm currently working with Matthew, who I mentioned earlier, on practices that might help rein in this tendency, so I can stay grounded and leave more confident. But damn, it would be much easier if we had more thoroughly developed ways of meeting flesh-and-blood individuals of like mind (I'm one who still believes there's irreplaceable value in that). Emotional support is a thing, even if it cannot usually be foregrounded if we wish to move beyond tribalism. David is concerned about our ability to maintain essential institutions in an anti-institutional, '4.5' cultural moment; perhaps it's part of our task to make an effort to maintain them with new materials that are more aware of their own limitations, thus appealing more to those who grew up in the postmodern age, or to work on scaffolding that's already started. Or maybe that task is so far-fetched that it's naive to even work on it, and we should just make the best of it, find our friends and support where we can get it, and hope for the best, idk.

Is "A Human's Guide to Words" (LessWrong) Fluid Mode 101?

Amy's picture

As I understand, you are familiar with Yudkowsky's writings and LessWrong. Have you read "A Human's Guide to Words"? I think it's the best thing he ever wrote, since it actually presents a way of viewing words, categories, concepts, labels, meanings - whatever we call it when we put a structure/meaning on the world that is more than "...this up quark in this state, this down quark in that state...".

As far as I've read your book, I can't find a way to distinguish the viewpoint Yudkowsky argues in "A Human's Guide to Words" and your idea of the fluid mode. Whenever you write "fluid mode", that's what I think about, except more developed than a brief introduction.

Since you're quite popular among rationalists, and I'm confident I'm not the only one with this equivalence, it would really help me/us if you could explain why Eliezer doesn't quite capture fluid mode, or if you think he does, maybe perhaps link to it and offer some commentary until you write your own explanation that improves on it?

A Human's Guide to Words

Hi, Amy, thanks for the recommendation!

I hadn't seen "A Human's Guide to Words." It's a list of about 30 blog posts; I had read several of them, but not most. There's a sort of a summary at "37 Ways Words Can Be Wrong," which is excellent. (That's one I hadn't read.)

By coincidence, the piece I've been working on most recently does more-or-less address this (although not by reference to Yudkowsky's work specifically). It distinguishes three sources of "nebulosity": linguistic ambiguity, epistemological uncertainty, and ontological indefiniteness. The first two are "problems in the map" and the third is "problems in the territory."

Generally, it seems rationalism tries to deal with the map problems, and ignores the territory problems. (The "Guide to Words" is about linguistic ambiguity; and Bayes/decision theory/etc. are about epistemological uncertainty.)

"Fluidity" or "meta-rationality" is about territory "problems." That is, the world is inherently fluid/mushy/vague, independent of any being's beliefs about it.

I don't know of any discussion by rationalists of ontological indefiniteness. The unstated background assumption seems to be that the world is perfectly well-behaved: facts are definitely true or false. It is just the stuff in our brains (language and beliefs) that is imperfect.

Does that help make sense of the difference?

And, does it seem right that rationalism makes that implicit assumption?

The Simple Truth (1/2)

Dan's picture

That is, the world is inherently fluid/mushy/vague, independent of any being's beliefs about it.... The unstated background assumption seems to be that the world is perfectly well-behaved: facts are definitely true or false. It is just stuff in our brains (language and beliefs) that are imperfect....

And, does it seem right that rationalism makes that implicit assumption?

For my part, I'd say the question is hard to even understand from a LessWrongianism point of view, so this might be a good line of discussion for bridge-building purposes.

I wouldn't call myself a rationalist/aspiring rationalist/Bayesian/whatever, but I have read most of Yudkowsky's writings and have been strongly influenced by them.

Yudkowsky's own stated usage of the word "true" is that it means a useful correspondence between a map and some aspect of the territory. He lays this out in a long and silly allegory about counting sheep by dropping counting-rocks in a bucket. Mostly, the point is that what makes a model "true" or "about something" is a particular pattern of interaction and interpretation—also one of your favourite topics.

Within that usage, the question of whether "facts are definitely true or false" doesn't seem to make sense—if a "fact" is something in the territory, then all "facts" are "true" by definition (the territory corresponds to itself!). If a "fact" is some kind of proposition that can be true or not, it's another language/belief/stuff-in-brain type thing. (Is there a third option?)

Similarly, both "vague" and "non-vague" strike me as descriptions of a map, or a map-territory syncing process, not descriptions of a territory. Do you just mean that all possible mapping-processes are necessarily somewhat vague, or is there more to it than that?

(I'm not sure how far our uses of "map vs territory" line up, either, but I have no idea how to debug that...)

Quantum Explanations (2/2)

[Dan posted this comment, which got eaten by the spam filter. Sorry about that!]

Yudkowsky's approach to ontology is detailed in the Quantum Physics Sequence. It's exceptionally long even for Yudkowsky, though you can get the gist from the summaries. It explains the Many Worlds Interpretation (fairly accurately, according to physicists who've read it), and argues that anything short of the quantum amplitude distribution of the entire universe is a mushy approximation. Some quotes particularly about nebulosity:

A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there."

What creates the illusion of "individual particles", like an electron caught in a trap, is a plaid distribution - one that happens to factor into the product of two parts.... Quantum entanglement is the general case; quantum independence is the special case.

Asking exactly when "one world" splits into "two worlds" may be like asking when, if you keep removing grains of sand from a pile, it stops being a "heap".... There is no exact point at which decoherence suddenly happens. All of quantum mechanics is continuous and differentiable, and decoherent processes are no exception to this.

But I get the impression that "quantum amplitudes" doesn't encompass even most of what you mean by "the world is inherently mushy".

Another EY post that's probably relevant is Where Recursive Justification Hits Bottom, which argues that there cannot be any "foundations" of the sort that modern philosophers used to look for and makes a proposal for what to use instead.

building a bridge

anders's picture

In reply to Dan
I can't guarantee that I understand, so I'll resort to explaining my view to see if it can be brought into agreement with your account.

The level of physical subatomic thingies has pretty much no relevance to how people operate. It is not a base level but a theoretical apparatus built on top of more basic experiences, and experienced only indirectly. The basic elements that our world is built out of are everyday things like tables and chairs and promises from friends. (If you decide that the basic level is subsubsubatomic physics, then this will look like nothing more than layers and layers of maps. Or you may choose nervous system input as the base level, and then the conceptual world may be nothing more than layers of interpretation on that. Trying to write simple AI programs forced me to choose the everyday world as the base reality, because anything else was useless for operating in the everyday world. Go figure.)

Given that this is the world, let's ask a simple question of it, yes or no.
Is my friend lying when he says he's too tired to hang out tonight? This should be a simple fact about the world. Look up the definition of "too tired" and compare my friend's exhaustion level to that critical value. If he's more exhausted than that then he's telling the truth. If he's less exhausted then he is lying. You and I both know that it doesn't quite work that way. I could bring up that "too tired" isn't a single value so it should be modeled by fuzzy logic or a probability distribution. But that's not the actual problem. The problem is that "too tired" is genuinely nebulous. I know that if we were going to see a movie he had been looking forward to then he would not be too tired to go. I know that he isn't too tired to take a shower and do some chores. The difficulty is precisely in deciding what "too tired" means. Even with perfect knowledge of exactly how tired he is the problem of deciding what "too tired" /is/ doesn't go away.
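The argument above can be made concrete with a hypothetical Python sketch (the exhaustion scale, thresholds, and activities are all invented for illustration). Even if we model "too tired" with a fuzzy membership function rather than a sharp cutoff, we still have to choose the function's parameters, and nothing about the measured exhaustion level tells us which parameters fit which situation:

```python
# Hypothetical sketch: modeling "too tired" with fuzzy logic just moves
# the nebulosity into the choice of membership function.

def too_tired_membership(exhaustion: float, threshold: float, width: float) -> float:
    """Fuzzy degree of membership in 'too tired', clamped to 0..1."""
    # A simple linear ramp; threshold and width are arbitrary choices.
    return min(1.0, max(0.0, (exhaustion - threshold) / width))

exhaustion = 6.0  # some measured exhaustion level on an invented 0..10 scale

# Same person, same exhaustion level -- but the activity changes which
# membership function seems appropriate:
for activity, threshold in [("chores", 8.0),
                            ("hanging out", 5.0),
                            ("anticipated movie", 9.5)]:
    m = too_tired_membership(exhaustion, threshold, width=2.0)
    print(f"too tired for {activity}? membership = {m:.2f}")
```

Perfect knowledge of the exhaustion variable leaves the real question untouched: which threshold is the right one is exactly the question of what "too tired" means here.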

(One difficulty I encounter in explaining is that many of the best examples have been disqualified as not /real/ exactly because they are not reduced to perfectly defined parts)

It is well known that any ambiguity or vagueness in the territory can be solved by creating another map interposed between it and the REAL territory. But the process of pushing all nebulosity out of the world and into the map creates a world that is more and more unmanageable. Consider: snow is white, my skin is white. We can regard this as a fact about the world with fuzziness around the edges for different shades of skin and different shades of snow, and genuine ambiguity on whether light-colored wood is white or not. Or, if you prefer, you can say that the ambiguity and fuzziness is merely in our words, and that "white" is a word that can refer to different colors at different times. The problem with doing this is that you haven't gained any genuine reduction in fuzziness: the things that will be referred to as "white" will be the same heterogeneous set however you choose to assign the blame.

What sort of thing could be true?

For my part, I'd say the question is hard to even understand from a LessWrongianism point of view, so this might be a good line of discussion for bridge-building purposes.

Yes, excellent, this is exciting. The piece I was working on when I last had time to write was about exactly this, so maybe it's high-leverage.

both "vague" and "non-vague" strike me as descriptions of a map, or a map-territory syncing process, not descriptions of a territory.

Yeah, perfect, so this is a key thing to get across. I don't think that should be too difficult.

Do you just mean that all possible mapping-processes are necessarily somewhat vague, or is there more to it than that?

No, the world doesn't have any "true facts" (in the rationalist sense) about it, because it's too squishy.

anything short of the quantum amplitude distribution of the entire universe is a mushy approximation.

Exactly! That distribution is the only true fact (along with whatever the correct field equations are).

And, this is one inferential step from "stage 4.5 STEM nihilism." Because:

The level of physical subatomic thingies has pretty much no relevance to how people operate.

The single true fact is unknowable, and would be completely useless if we somehow knew it. We want to know things about cottage cheese and immigration and dragonflies—and nothing is "true" about them.

Obviously, all sorts of things are true about them, in a common sense sense. But we can't even say definitely whether or not something is cottage cheese, much less that cottage cheese is white. There is no fact-of-the-matter about whether or not something counts as cottage cheese, nor as white. More generally, all the categories and properties we actually care about are ones that can't be reduced to some complicated set of quantum criteria. (Can't even in principle, but it's not necessary to argue about that, because they obviously can't in practice.)

So... we can know nothing. Might as well give up. (That's the STEM nihilism.)

Well, obviously that's wrong... so some quite different tack will be required. Which is: meta-rationality.

Is my friend lying when he says he's too tired to hang out tonight?

This is a nice example, thank you!

It is well known that any ambiguity or vagueness in the territory can be solved by creating another map interposed between it and the REAL territory.

Not sure what you are referring to here... is this the fantasy of successive levels of reduction?

Another EY post that's probably relevant is "Where Recursive Justification Hits Bottom," which argues that there cannot be any "foundations" of the sort that modern philosophers used to look for and makes a proposal for what to use instead.

I couldn't find the proposal for what to use instead.

If it is "play to win," I think that's pointing in vaguely the right direction (what I describe as "indexical-functional representation"). However, "winning" is incompatible with a rationalist interpretation of truth. Because, there aren't any useful truths in that sense, if you take it to its logical conclusion (that there's only one truth, the complete field density).

Tossing pebbles

Maybe it would be helpful to look harder at the pebble-counting story in "The Simple Truth."

How many pebbles are in the bucket? Yudkowsky takes this as unproblematic: either there are 28 pebbles or 29 pebbles.

What is a pebble? "A small bit of rock."

If you pick up "a pebble" and look at it closely, you'll find (almost always) that there are other small bits of rock loosely adhering to it. You could call them "grains of sand" or something.

What is a "grain of sand"? A small bit of rock.

How many small bits of rock did you throw in the bucket?

There's no answer to that. What counts as "a bit of rock" is not definable. Is an isolated three-atom SiO2 complex that breaks off when the pebble hits the bottom of the bucket "a bit of rock"?

There is no truth about how many bits of rock are in the bucket.

How big does a bit of rock have to be before it stops being a grain of sand and becomes a pebble? You could choose some arbitrary threshold... 1 gram... but you can't measure that accurately by eye (or, ultimately, at all).
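A hypothetical Python sketch of the point above (the masses are invented for illustration): once "pebble" means "bit of rock above some arbitrary cutoff," the "true count" of pebbles in the bucket varies with the cutoff, and no cutoff is more correct than another:

```python
# Hypothetical sketch: the "number of pebbles in the bucket" depends on an
# arbitrary cutoff between "pebble" and "grain of sand". Invented masses.

bits_of_rock_grams = [2.1, 1.4, 0.9, 1.02, 0.98, 0.001, 0.0004, 1.7]

def count_pebbles(masses, cutoff_grams):
    """Count 'pebbles': bits of rock at or above an arbitrary mass cutoff."""
    return sum(1 for m in masses if m >= cutoff_grams)

# Different, equally defensible cutoffs give different "true" counts:
for cutoff in [0.5, 1.0, 1.5]:
    print(f"cutoff {cutoff} g -> {count_pebbles(bits_of_rock_grams, cutoff)} pebbles")
# cutoff 0.5 g -> 6 pebbles
# cutoff 1.0 g -> 4 pebbles
# cutoff 1.5 g -> 2 pebbles
```

And the masses near the cutoff (1.02 and 0.98 here) show why even a fixed cutoff doesn't rescue the count in practice: you can't measure to that precision by eye.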

In order to count sheep with pebbles, you need to choose bits of rock that you're reasonably confident you will be able to later distinguish as "pebbles" and not "bits of grit that fell off."

This is a purposeful, context-relative, embodied skill. An accurate account of knowledge, truth, belief, and so on, always grounds out in purposeful, context-relative, embodied skills.

On cartomancy

Dan's picture

Hmm, this is interesting. At this point I'll have to stop trying to speak for LessWrong-rationalism for a moment.

  1. I'm pretty sure we're in violent agreement and that I now understand what you mean by "metarationality".
  2. I remember learning metarational thinking from LessWrong. But, re-reading The Simple Truth, it's not obviously in there. Possibly it's in the more advanced material, or possibly I somehow read it into Eliezer's stuff myself.
  3. In any case we have quite different vocabulary for saying the same things, which might be worth sorting out.

Is my friend lying when he says he's too tired to hang out tonight?

What I remember learning from LessWrong is mostly a set of techniques for sorting out disagreements about questions like this. In broad strokes, the strategy is to (1) consider what you need the answer for (why do you want to know if your friend's "really" lying? what will you do with that information? what would happen if you got the "wrong" answer?) and (2) try to find a more easily-answerable question that is still about the underlying issue. When people argue with Scott Alexander, he often tries to turn the disagreement into a concrete bet. That requires creatively negotiating some kind of test where both parties are likely to agree on what the outcome was.

There is no truth about how many bits of rock are in the bucket.

To me that seems like a confusing way to put it. But, if you'd agree that what you mean could be rephrased as something like

There is no objective, mind-independent truth about how many bits of rock are in the bucket.

then we agree that "objective, mind-independent truths" about things like bits of rock is a really silly idea, for all the reasons you just gave. (Hmm, it may be necessary to clarify what I mean by "mind-independent": Eliezer habitually uses "a mind" to refer to any intelligent agent, so "mind-independent" would mean something like "independent of any rock-bit-counting agents." Although really this should include things like simple rock-counting machines, which EY probably wouldn't normally call "minds".)

As to "linguistic/epistemological" versus "ontological" nebulosity, it sounds like the model you're critiquing is something like:

  1. There is something called "the territory", wherein lives the One True Answer to "how many pebbles in the bucket?"
  2. Then there is something called "a map", wherein lives someone's (possibly-incorrect) opinions about the number of pebbles.
  3. The goal is to draw a map that reflects the One True Answer from the territory.

We agree that this is not how things work. You're making the point by saying that the stuff that people using this model call "the territory" is just as nebulous as the maps, i.e. it doesn't contain a One True Answer. Right?

I found that confusing just because I draw a different map/territory border: in my usage, the territory that contains propositions (whether the nebulous kind or the One True Answer kind) is not the True Territory; since a definition of "pebble" isn't part of the territory, anything about numbers of pebbles necessarily involves "maps" of some kind, which can be evaluated and adjusted with the usual map-adjusting techniques; and the shepherd's goal is just to draw maps that, in practice, prevent dead sheep. I think this amounts to the same thing.

Layers and layers of maps, as Anders says; or in other words: "You're very clever, young man, but it's cartography all the way down!" :)

Eschew the infinite regress

I remember learning metarational thinking from LessWrong. But ... it's not obviously in there. Possibly it's in the more advanced material

Overall, the LW-derived rationality community has grown up, intellectually, quite a lot over the past five years or so. I see a gradual shift in the direction of meta-rationality. It's a natural motion if you reflect seriously on the problems rationalism gets you into.

we agree that "objective, mind-independent truths" about things like bits of rock is a really silly idea

Right. And, that silly idea is essential to rationalism as traditionally understood. It's practically the definition!

I'm happy to concede that this is not person X's conception of "rationalism" as they present it in text Y, when that is the case... but my impression is it was the typical view of LW, as of a few years ago.

Layers and layers of maps; or in other words: "You're very clever, young man, but it's cartography all the way down!"

That doesn't work. It's an infinite regress of epicycles.

If you are going to have maps at all, you have to ground them in something else. By coincidence, I've just published a page about that! "Abstract Reasoning as Emergent from Concrete Activity."

Terminological nebulosity

Dan's picture

It's an infinite regress of epicycles.

Nah, we're still not actually disagreeing; I'm just using the word "map" differently and not explaining it very well. But since I'm not sure anyone else uses it the way I do, it's probably not worth sorting out.

The new post looks really great!

Neural categories / fuzzy clusters in reality

Amy's picture

David,

I think your point about the inherent vagueness of categories is exactly what the three Guide to Words posts The Cluster Structure of Thingspace, Disguised Queries, and Neural Categories (I tried to link these, but apparently the spam filter does not like that -- you can find them in my link to the sequence above) are dealing with, or am I mistaken? These, along with the two preceding posts, are what I really had in mind when I suggested A Human's Guide to Words for comparison; apologies for the original vagueness (haha). It also has a good way to resolve the issue, which only requires acknowledging that there are cases where the question "is it a rock, or isn't it?" doesn't have any useful meaning. I don't think that the takeaway from Guide to Words is that "if only we could make our language more precise, we could say for sure whether anything is X or not-X" -- Yudkowsky actually heavily criticizes this Aristotelian notion of categories.

Vagueness of categories

Mmm... those posts are mostly not pointing in the direction I have in mind. Although, he's sort of taking first steps toward grappling with some of the problems that motivate "the direction I have in mind."

My difficulty when trying to read Yudkowsky is that he's self-educated and mostly unaware that smart people have already worked on all the problems he discusses, and the solutions he comes up with are standard ones that are described in undergraduate textbooks, along with the ways they do and don't work. If you know the fields he hasn't read, this is a constant irritant.

He also explains things himself, not always very well, in cases where I think he should just say "go read X." His piece "The Cluster Structure of Thingspace" is based on the work of Eleanor Rosch—she famously used the robin vs penguin example—and he ought to have at least said so. Not so much because of giving credit where it's due, as letting readers know where to go to learn more. There's a huge literature on this in cognitive psychology, starting from her work.

I'm glad to hear him dissing backprop, though. Anyone who disses backprop can't be all bad! And he disses it for some of the right reasons.

The blegg vs rube post ("Disguised Queries") is the one that most nearly points in the direction I have in mind. He's sort of almost got it there, near the end! Categorization is relative to the practical purposes you have in a particular situation. But it's not about "inferring a characteristic" (a fact about the object), it's about function: what can I do with this?
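To make the functional point concrete, here is a hypothetical Python sketch (the objects, attributes, and purposes are invented for illustration; only the blegg/vanadium motif comes from Yudkowsky's post). Instead of asking "what is this object, really?", it asks what the object is good for, given the purpose at hand:

```python
# Hypothetical sketch of purpose-relative categorization: categorization
# grounds out in "what can I do with this?", not in an intrinsic category.

objects = [
    {"color": "blue", "shape": "egg",  "contains_vanadium": True},
    {"color": "red",  "shape": "cube", "contains_vanadium": False},
    {"color": "red",  "shape": "egg",  "contains_vanadium": True},
]

def useful_for(obj, purpose):
    """The 'category' an object falls into depends on the purpose at hand."""
    if purpose == "mining vanadium":
        return obj["contains_vanadium"]
    if purpose == "blue decoration":
        return obj["color"] == "blue"
    return False

# The same object falls on different sides of the line for different purposes:
for purpose in ["mining vanadium", "blue decoration"]:
    picks = [i for i, obj in enumerate(objects) if useful_for(obj, purpose)]
    print(purpose, "->", picks)
```

Object 2 (a red egg containing vanadium) counts for the mining purpose but not the decorating purpose; asking whether it is "really" a blegg adds nothing once the purpose is fixed.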

'Fields' in academia and self-education

Sytse Wielinga's picture

My difficulty when trying to read Yudkowsky is that he's self-educated and mostly unaware that smart people have already worked on all the problems he discusses

Is this a sign that more people should be writing reasonably short 'if you read/skim this, you'll at least have some idea of where we're at on the topics important to me' guides (even if horribly incomplete/imperfect)?

I, for one, would love to see a 'field guide' of the fields that interest you, that tell me what to read/skim, why, and your caveats (the more idiosyncratic the better I suppose).

It's a very nice coincidence that in the same comment, you mention categorization being for and relative to 'what can I do with this?' (which I already knew about, by listening to Jordan Peterson -- I was already planning to write about that in a response to Dan's comment at 'What I remember learning from LessWrong [...]', which I was too lazy to finish yesterday).

That's because it points to the exact problem I have as someone who self-educates: the categorization we call 'fields' is necessarily problematic, because the purposes of that categorization (purposes unknowable with precision, embodied in the academic institutions themselves and their context) are very different from the purposes a free learner has, which necessitate learning about many fields and how they relate to your daily life.

It seems to be a big problem to learn an 'adequate' amount about a field, without being in the field, which is contrary to the purposes of learning these things in the first place if you're actually going to apply this knowledge!

Another problem is that way too many fields (most fields?) contain enormous amounts of bullshit, which may make undergraduate texts problematic to use (because it seems to me that they're often designed to tell you 'this is how it is, and if you truly want to understand why, graduate!').

Is any of this impossible without mentoring (like how getting Buddhist knowledge is said to be impossible without a guru)?

Finally, it seems like Nassim Taleb's warning about 'Skin in the Game' (embody your knowledge with risk attached, or at least direct feedback, so that bad thinking will reveal itself) points to something that needs to be added here, in order to find 'ways to learn' that don't make you into an idiot; I'm not entirely sure what exactly, or how to combine it with the context of this comment; perhaps 'self-educating' is too general a context...

Do you have any better solutions about how to relate to academia as someone who isn't in it, but needs to evaluate for himself without 'trusting an expert'?

Pointers

Dan's picture

For those who haven't seen it, the canonical LW-ish explanation of the metarational approach to categories is Scott Alexander's "The Categories Were Made For Man, Not Man For The Categories". His starting point is Eliezer's "How An Algorithm Feels From Inside", but Scott's version pays much more attention to the "what can I do with this?" bottom line:

Suppose you travel back in time to ancient Israel and try to explain to King Solomon that whales are a kind of mammal and not a kind of fish....

In fact, the only thing Solomon cares about is whether responsibilities for his kingdom’s production of blubber and whale oil should go under his Ministry of Dag [≈ "fish"] or Ministry of Behemah [≈ "mammals"]. The Ministry of Dag is based on the coast and has a lot of people who work on ships. The Ministry of Behemah has a strong presence inland and lots of people who hunt on horseback.

Notably, this is also a rebuttal to a Meaningness!rationalist claim that Eliezer made in "Where to Draw the Boundary?": "Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list."

I, for one, would love to see a 'field guide' of the fields that interest you, that tell me what to read/skim, why, and your caveats (the more idiosyncratic the better I suppose).

David's "What they don’t teach you at STEM school" might be just what you're looking for! It gives an extensive reading list, with commentary.

David's "What they don’t

Sytse Wielinga's picture

David's "What they don’t teach you at STEM school" might be just what you're looking for! It gives an extensive reading list, with commentary.

Heh, I should've seen that one coming. :-)

And it is certainly a kind of thing I'm looking for, all the more because it's written for exactly the audience I was describing in my comment: people who want to learn to become better people, instead of getting a degree.

I wonder why I didn't at least buy some books by Dreyfus (that name I knew, from my vague general interest in the history of CS) and 'Garfinkel and Ethnomethodology' when I read it; apparently, I was in a hurry.

It's not really a 'field guide' though (what field?); it's much too perfect: in prose like this, it would take an entire book and half a year's work, and it's not designed to be updated haphazardly, at near-zero effort, whenever David thinks of something. And at any rate, Eleanor Rosch is not in it (nor is, say, Ken Wilber, in the category of developmental psychology, whose AQAL model I find pretty indispensable when thinking about why certain views are limited and therefore prone to error, with David's caveat 'watch out for monism'. His SES -- Sex, Ecology, Spirituality -- is an interesting field guide to 'big philosophy', btw).

It does nicely illustrate the point about the categorization of 'fields' though: it's already a pretty vast array of fields that come by, which you're bound to get when you put together a reading list for any purpose other than 'learning about field X'.

Ken Wilber illustrates another point: what to do with all the thinkers who are unreadable for 80% of the academics in the fields that should be incorporating their ideas, and who are therefore reviled and ignored in those fields? Hubert Dreyfus is another example; Ted Nelson is another (he does have actual ideas that are still unrealized because they require starting over from scratch with the project of 'making software'); and even Wittgenstein seems to have had things to say that were correct and are still not incorporated in the relevant fields. I'm sure there are thousands of these people by now...

Miscellaneous follow-ups

Sorry I'm behind.

Sorry also about the spam filter problems. I'm about to replace it with a different one that I hope will behave better.

more people should be writing reasonably short 'if you read/skim this, you'll at least have some idea of where we're at on the topics important to me' guides

That sounds like an excellent idea! A nice example that comes to mind is Teach Yourself Logic.

a 'field guide' of the fields that interest you, that tell me what to read/skim, why, and your caveats

Oh, wait, it sounded like an excellent idea until I realized you wanted me to do it! No, terrible idea :)

Seriously, it would be a huge amount of work.

the purposes of that categorization (embodied in the academic institutions themselves) are very different from the purposes a free learner has, which necessitate learning about many fields and how they relate to your daily life. It seems to be a big problem to learn an 'adequate' amount about a field, without being in the field... way too many fields (most fields?) contain enormous amounts of bullshit, which may make undergraduate texts problematic to use...

Yes, these are all important and accurate insights in my opinion!

Do you have any better solutions about how to relate to academia as someone who isn't in it, but needs to evaluate for himself without 'trusting an expert'?

Oof, yeah, this is really hard. A mentor could certainly help—but how do you find someone who understands the field but hasn't drunk its Kool-Aid, and how can you know whether they are clueful until you know the field yourself? It's a chicken-and-egg problem. I guess you have to bootstrap your way up in stages.

The drossbucket blog is about teaching oneself PhD-level physics outside the system, and makes many interesting observations.

Dan, thanks for pointing out the Scott vs Eliezer categorizations of cetaceans; that's fun! It does make clear that one's purposes, in a specific context, determine categorization. That's related to "indexical-functional representation," discussed briefly in "Abstract-Emergent." The "specific context" is what makes it indexical, and the purpose makes it functional. I expect to say much more about that Real Soon Now.



Copyright ©2010–2017 David Chapman.