Comments on “190-proof vs. lite nihilism”

Nihilism and meaning

Joscha's picture

In your list of perspectives on nihilism, I find it strange that you seem to not clearly distinguish between
- the psychological realities: of a perceived need to find meaning, the subjective absence of a way to satisfy this need, the experience of the pain/negative valence resulting from failing to fill the need, and the absence of a need and notion of meaning
- the ontological realities: the ontological universe cannot contain a structure that is somehow equivalent to my psychological needs for meaning (as in: we are just moving bits or rock, or assemblies of cells that execute the mindless software inscribed by evolutionary principles in their DNA), as opposed to ontological idealism in a variant of: I am the answer to a question that God is asking
- the cognitive structure: my mind has a need to regulate its conformance to internalized norms; this has been translated into the primary feedback loop that gave rise to my current personality construct, hence my mind finds itself motivated to seek out ways of serving the immortals/greater good/higher purpose/transcendent ideals, even though my rationalist ontology laughs at it

My own interpretation takes a functionalist perspective. All cognition is directed at satisfying a need, or avoiding its frustration. Our needs are implemented as cybernetic feedback systems, mostly in the midbrain. Some of them represent physiological demands: heat and cold, nutrient levels, etc. only become relevant to the mind if they are presented to it in the form of urge signals, which generate impulses that drive decisions, world modeling, and the formation of the necessary computational structure in the hippocampus and neocortex via general neural learning principles.
Next to the physiological demands, we have cognitive ones: competence, exploration, and aesthetics. And social ones: affiliation, control, dominance, nurturing, romantic affection, and conformance to internalized norms. I suspect that the latter is an evolutionary adaptation to aid group level selection: this is where the emergent group mind reaches down into the individual. Intersubjective, communicated norms represent a societal software that programs individuals to make sacrifices that can supersede their local Nash equilibria.

The regulation of instinctive intersubjective norms requires multiple mechanisms. The first one is the need for norm conformance itself. It manifests as the desire to be good, to be virtuous, to do “the right thing” even in the absence of any tangible reward, and the willingness to even sacrifice oneself, if the Greater Good requires it.

The second one is the synchronization of norms across groups. This largely happens instinctively, via empathy with ingroup members, multiplied with the social status of the respective ingroup member. By dressing up an ingroup member as a high status individual, humans can be hypnotized into adopting the norms presented by that individual: individuals become programmable to function as implementors of a group software, a memeplex that is parasitic on their individual minds.
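The status-weighted synchronization mechanism described above can be made concrete with a toy model. The sketch below is purely illustrative (the function, the numbers, and the update rule are my own assumptions, not anything stated in the comment): each agent nudges its norm value toward a group norm computed as a status-weighted average, so a single high-status member pulls everyone else toward its norms.

```python
# Toy illustration of status-weighted norm synchronization.
# Assumption: norms are scalar values, and each agent's influence on
# the group norm is proportional to that agent's social status.
def synchronize(norms, statuses, rate=0.5):
    """One round of empathic norm adjustment across a group.

    norms    -- each agent's current norm value (floats)
    statuses -- each agent's social status (non-negative weights)
    rate     -- how far each agent moves toward the weighted group norm
    """
    total = sum(statuses)
    # Group norm: average of individual norms, weighted by status.
    group_norm = sum(n * s for n, s in zip(norms, statuses)) / total
    # Every agent moves partway toward the group norm.
    return [n + rate * (group_norm - n) for n in norms]

norms = [0.0, 0.0, 1.0]        # the third agent holds a deviant norm...
statuses = [1.0, 1.0, 10.0]    # ...but has ten times the status
print(synchronize(norms, statuses))
```

In one round, the two low-status agents are dragged most of the way toward the high-status agent's norm, while the high-status agent barely moves: dressing someone up as high status makes their norms contagious.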

As you are obviously aware, nerds have a defect in the latter mechanism. They tend to have difficulty synchronizing their norms via empathy and instinct. For the same reason, they are often unable to “see” authority, to enjoy watching sports events with large ingroup audiences, and to perform automatic preference falsification.

But most nerds have an intact need for conforming to internalized norms. If their normative software cannot link up to some community purpose, such as nationalism, an ideology that can be transmitted linguistically, etc., they will usually develop eternalism, i.e. some kind of Steppenwolf syndrome. Goodness, the sacred, the higher purpose is subjectively perceived as service to a transcendental, otherworldly principle. They serve this transcendental need via art, science or psychedelics (which can directly trigger the sense of meaning), or they despair. (In your famous “Geeks, MOPs and sociopaths”, you call the nerds “geeks”, but a geek is actually a normie who uses nerd cultural elements for social signaling.)

Sociopaths have two degrees of freedom, compared to allistic people: they have no need to be good. They are free from the sacred. They cannot be infected with eternalism.

Note that I distinguish the meaning of buying catfood from the meaning of service to the sacred. The former can result from a plan to keep a cat alive, which in turn can be motivated by many things: affiliation to the cat, nurturing the cat, buying sex from the person owned by the cat, etc. Sociopaths can buy catfood.

It is also possible that we buy catfood in the service of higher meaning, i.e. in the service of true love. If we love the cat, we perceive it as part of the Sacred System, the thing that is larger than us and that we are programmed to serve.

Now I have laid out some terms, and we can swing back to nihilism. We often associate it with a state of suffering; you have described the distinction well above. xkcd describes it here: https://xkcd.com/167/ – happiness is really more related to the ability to enjoy watching squirrels than to your ontological perspective on the universe. It is more the other way around: if you are depressed because your needs are consistently not met (for instance because you are lonely, or because you actually suffer from an inability to satisfy your need for meaning/service to internalized norms), you may be more likely to pick a story about the universe that you infuse with the negative valence generated by your state of depression.

Now I would like to go on to thoroughly disentangle the question of systems (societal systems, cultural ones, systems of self-organizing cognitive agents in your neocortex etc.) from the subjective perception of meaning (via an innate drive that tries to get humans to serve a systemic set of norms beyond individual incentives), and from an ontological understanding of whether a construct of meaning reflecting the Sacred that is innate to our mental organization can exist in the universe, i.e. whether Love/God/Higher Purpose/The Transcendent do actually exist in any meaningful sense.

I think we have to answer the last question in the negative, but that does not mean that we cannot admit to and cultivate our innate need for meaning. It is part of most humans’ cognitive architecture, and most subjects’ core self constructs, after all. Giving up meaning will either lead to psychological despair, or requires us to kill our current self construct and replace it with a sociopathic one. The need for meaning creates a belief attractor that does not make me want to kill my self construct, and hence I feel that what I perceive as your overarching project is exactly what we need.

(As far as I understand you, your larger project is the construction of a healthy self: ontologically, psychologically and if possible socially sound, aware of its needs and able to satisfy them in sustainable ways. This project is opposed to materialistic hedonism, because the regulation of meaning (as opposed to nutrition, libido, rest, affiliation, competence etc.) is the core task of your personal self construct. It is also opposed to the sutra, which you take to be the extermination of the conscious self, by removing the pain that sustains the attention needed to give rise to our particular type of self construct. It seems logical that you champion the tantra, the path of greater awareness and cultivation of the personal, interpersonal and transcendent self. I think that your project is very important, especially because most eudaimonic approaches are infested with toxic religious memeplexes.)

I have already noticed that it is probably going to be hard to bridge the inferential distance between our basic ontologies. Based on what you said on Twitter, I suspect that you left AI because you ended up with a defective version of functionalism, and you saw no hope of ever getting it to work. I would like to encourage you to not give up on this, because it might tie in with the larger project. Then again, we are projecting the universe onto different surfaces, and translation might turn out to be impossible.

Subjective and objective

Joscha, thank you very much for the long and thoughtful comment!

As you suggest, there seems to be some inferential distance here, despite our shared background in cognitive science and AI. I don’t quite understand some of what you said.

I think we probably mainly agree, except about words. We agree that there is no transcendent source of meaning (such as God). I think we agree that things are meaningful anyway, in at least the sense that a red light means you should stop at an intersection. Or, maybe you would not want to call that a “meaning”—but that might be a disagreement only about a definition, which is mostly arbitrary.

Or, maybe you would say the red light has only a purely subjective meaning, because it is not inherent in the physical traffic light, and therefore lives in brains only. I would quibble that the meaning is objectively observable, inasmuch as people mostly do stop at red lights. So the only disagreement might be about the meaning of “subjective.”

(But maybe there is a substantive disagreement as well! I’m not sure.)

I’m glad, in any case, that the Meaningness project seems worthwhile to you!

I suspect that you left AI because… you saw no hope of ever getting it to work.

I wouldn’t say ever—I left because I didn’t see any path that looked likely to lead to significant progress as of ~1990. The ImageNet results are intriguing, and I’ve been tempted to dive back in by them.

Re: Subjective and objective

Joscha's picture

I think that the transcendent source of meaning (we can call it God, if we were to remove all idolatry from the concept; God does not have a name or image or temple for a reason) is as real as the source of the behavioral impulse that makes you stop at a red light. It resides in your brain, in the form of a set of specific urges that give relevance to your mental representations.

If you throw statistically similar patterns at minds that have similar urges to give relevance to the structure that is discoverable in the patterns, you end up with a bunch of individuals that have different but functionally equivalent dreams of red lights, and similar impulses to stop at them.

The disagreement comes in where you say that meaning exists in some vague sense. We can dissect it better than that, I think.

I also don’t understand your notion of fluidity yet. I tend to think that we need a return to systematicity on a societal level, on pain of near-term extinction. But such a systematicity needs to include meta-systematicity, i.e. the acknowledgement of the epistemological and evolutionary impossibility of settling on a single systemic structure. If the need to keep our systems constantly evolving translates into “fluidity”, I am on the same page. But at the moment I don’t know whether your idea of fluidity is more like a back-door invasion by postmodernism?

The ImageNet results are intriguing, and I’ve been tempted to dive back in by them.

At the moment, I think that AI has begun to understand how our nervous system filters order out of the patterns the universe throws at it, by identifying mappings, eigenvectors and operators in low-dimensional feature manifolds. It’s a crude but workable model of perception, and at the moment it stops well before the emergence of the type of general causal structure that gives rise to symbolic thought, interesting self modeling, etc. AI cannot think. Yet.

It is also interesting to realize that Schmidhuber et al. won the ImageNet competition by translating the mathematical constraints of Solomonoff induction into a computational approximation.

I’m glad, in any case, that the Meaningness project seems worthwhile to you!

Do you think that I understand your intentions correctly?

Dreaming and reality

dreams of red lights

I guess I don’t understand why you aren’t willing to call these “perceptions of red lights.” The implication of “dream” seems to be of an un-reality? Un-reality is meaningful only relative to something that is real. What do you think is actually real?

I think red lights are as real as anything. I think that perception is generally accurate, so when you think you see a red light, you almost always do actually see a red light, which is actually there and actually red. So it’s perception, not dreaming.

we need a return to systematicity on a societal level, on pain of near-term extinction. But such a systematicity needs to include meta-systematicity, i.e. the acknowledgement of the epistemological and evolutionary impossibility of settling on a single systemic structure. If the need to keep our systems constantly evolving translates into “fluidity”, I am on the same page.

Yes, that’s a large part of it, at least! I think you do understand.

Re: Dreaming

Joscha's picture

What do you think is actually real?

Personally, I think the most promising model is that the universe has a global state vector that only permutes its bits in every step. The evolution of the universe is a function of its global state, but every observer embedded in the universe can only measure a handful of local bits, from which it makes predictions for the state of the next set of local bits. Thus, while the universe is probably a reversible deterministic computer, our models are necessarily probabilistic causal structures.
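The picture of a reversible, deterministic universe can be illustrated with a toy model. This is my own minimal sketch, not anything from the comment: the global state is a bit vector, and each tick applies a fixed permutation of the bit positions, which is deterministic and exactly invertible.

```python
# Toy "universe" whose global state is a bit vector, advanced each step
# by a fixed permutation of bit positions.  Because a permutation can be
# inverted exactly, the dynamics are deterministic and reversible.
def step(state, perm):
    """Advance the global state one tick: bit i moves to position perm[i]."""
    out = [0] * len(state)
    for i, b in enumerate(state):
        out[perm[i]] = b
    return out

def step_back(state, perm):
    """Invert one tick: recover bit i from position perm[i]."""
    return [state[perm[i]] for i in range(len(state))]

state = [1, 0, 1, 1, 0]
perm = [2, 0, 4, 1, 3]   # a fixed permutation of the five bit positions
later = step(step(state, perm), perm)
earlier = step_back(step_back(later, perm), perm)
assert earlier == state  # running the dynamics backwards recovers the past
```

Because every step is a permutation, no information is ever destroyed: any earlier state can be recovered exactly. Yet an observer who can read only a couple of positions of the vector would still have to model the rest of the state probabilistically, which is the asymmetry the paragraph above points at.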

The high-level structures that we use to interpret the low-level patterns are not part of the causal structure of the universe itself. They are often even arbitrary pareidolia, such as your beautiful observation of the rotation of left-right in US cultures. Sometimes they allow us to make good predictions, or at least to tell good stories. The red lights are in the former category (sound predictive models), the culture wars probably straddle the line.

Furthermore, our models are synchronized via language. Red is probably not culture-invariant: a family of speakers suggests a particular segmentation of the spectrum to the child, and the child grows a classifier to emulate that partitioning.

So it’s perception, not dreaming.

The embodimentalists thought that our minds can inhabit the world “out there”, but “out there” is too inhospitable for minds. Rather, the world is “in here”. The same circuitry that produces our dreams produces our perception of reality, but unlike the dreams at night, our neural generators feed on the input patterns of our sensory neurons.

Rejection of Nihilism

Johnathan's picture

Hi David,
I couldn’t find any way to contact you privately, so I am forced to use the comments.

If you’re short on time, skip below to the ****

I’ll start out by saying that I am looking for an ‘intellectual’ understanding, but nonetheless I feel it’s important to give some personal background.

I’ve been depressed for over 6 years, during which I’ve had two major depressive episodes.
I’ve been Nihilistic throughout this time.
I grew up in a religious Jewish household, and the year prior to my 6-year Nihilism spree, I had become religious myself - meaning that I believed in god and a set of rules that are intrinsically right, the objective ‘purpose in life’. (What you call Eternalism, which Wikipedia seems to say is a philosophy related to time? Anyhow)
So when I stopped believing in god, there went the objective meaning with it, which led me to Nihilism.
I can’t confidently call chicken or egg with regards to Nihilism and being depressed, but that is my situation.

My view is that there is no objective meaning, only subjective meaning, which is a facade. I don’t hold myself as ‘elitist’; if anything, I wish I could be in the ‘bubble’ like most people. To me, everything people value, hold important or meaningful, is ‘self-distraction’ from the (objective) meaninglessness of this world.

In other words, the problem is I hold the fact that meaning is subjective and therefore meaningless as true.
For 6 years I have stuck to my understanding/principles, and have post-event guilt anytime I fall into subjective enjoyment and/or meaning - because it isn’t ‘real’, just a subjective illusion.
This causes a self-loop of depression. I have made it my go-to to deprive myself of interests and desires (even natural ones), because it “isn’t real”, objectively.
So although you say in the book that nobody (or most people?) holds true/complete Nihilism for long, I believe I am one such mutant that hasn’t switched to a different belief/stance for the sake of happiness.

I’ve tried seeing a psychiatrist and being on anti-depressants for a while, but stopped.

****Read from here if you’re short on time****
So I have stumbled upon your book, and right off the bat could tell it deals with my issue. I have yet to read everything, and I tried continuing a little more and a little more, but nothing has changed my understanding yet.

So I really want to understand, where and how I’m wrong with regards to meaning being subjective. To me, it doesn’t seem like I’m ignoring or denying obvious meaning, or what you call ‘pattern’.

In the example you give on this page of a man going to buy catfood - why isn’t it subjective? To me it seems the purpose is subjective in different ways to the man, his wife, and the cat.

I’m sorry for the long rant, and I probably use words differently than you (you probably have way more precise and concrete definitions), but please try to help me understand your view.
I know you’re not a therapist; I’m looking for an intellectual understanding. (Which could turn the world upside down for me. I don’t have high hopes, because that would mean getting super depressed if I were to be let down, but I recognize that what you write about here is the first time in several years where I thought something could change my view.)

Like a rainbow

Hi Jonathan,

Sorry to hear about this. I’m familiar with nihilistic depression from personal experience. No fun.

There are no ultimately objective meanings. (This is true and important!)

But meanings aren’t subjective, or illusory, either. They are patterns of interaction between you and circumstances. “Rumcake and rainbows” is the beginning of an explanation of that. It might be helpful if you haven’t already read it.

Meanings mostly aren’t arbitrary; you mostly can’t choose to have things mean what you want. That’s part of what makes them “not subjective.”

(It’s weird that it’s nevertheless biologically possible to make everything seem meaningless, sometimes as an act of intellectual choice.)

it seems the purpose is subjective in different ways to the man, his wife, and the cat.

Well, they may all have somewhat different views on it—but they may also all have somewhat different views on clearly objective matters. For instance, cats can see in ultraviolet, but don’t distinguish colors well at the red end of our spectrum, so they probably experience the world pretty differently. They probably categorize objects in ways that make sense given their senses and purposes, but would seem bizarre to us.

Nevertheless, they and we probably have pretty similar understanding of the basic meaning of food. Humans attribute all sorts of additional meanings to food, some of which are probably outright bogus, but many of which are functional, and non-subjective inasmuch as they are shaped partly by culture and partly by biology.

There’s supposed to be a LOT more to the nihilism part of this book. I have rough drafts for a couple dozen more pages, and good intentions to finish them Real Soon Now. It’s my second-highest priority for writing currently. Reading comments like yours makes me feel it ought to be number one.

Copyright ©2010–2018 David Chapman.