Comments on “Tribal, systematic, and fluid political understanding”

Comments

"Do the right thing"

This was a great illumination of stages-in-action. Near the end you write:

“Good societies are those in which there is common knowledge that most people—and especially most in government—are mostly committed to doing the right thing, where “the right thing” is not definable ahead of time. “Doing the right thing” cannot be forced by any system, because nebulosity makes it impossible to foresee all future circumstances and specify what would be right to do then.

Doing the right thing is always collaboratively improvised in concrete circumstances. Well-designed institutions are powerful resources in that collaborative improvisation. However, they are only tools for doing the right thing, never guarantors of it. There are ways to encourage ethical responsiveness, but no way to enforce it.”

This reminds me of “consequentialism,” the philosophical stance that the ends justify the means (and in cases where it seems not to, you’re just not thinking hard enough). Is that accurate? If that’s so, is consequentialism the path to “Stage 5” thinking?

I’d never heard of consequentialism until I started listening to Sam Harris & some of the Effective Altruism folks. The thinking seems to be correct, but not particularly useful. Or is it, as you say of the complete stance, “…boring, because it is obviously right; and unappealing, because it doesn’t make attractive (but false) promises, like confused stances do.”

Great stuff!

Great article, thank you for writing. Something that stood out for me: politics as the only remaining source of a coherent system of meaning. Something clicked for me (about religion too) with that concept, thanks!

A thought on nihilism after politics: it can also happen the other way around, where a person must withdraw from the endless war or else risk severe damage to their health, and develops the nihilist view afterwards to mitigate the shame or moral negativity they feel: “It’s OK that I’m not an active member any more because it’s all meaningless anyway.”

Consequentialism

Nick, I’m sorry, I missed your question before.

is consequentialism the path to “Stage 5” thinking?

The stages are about the way you think, rather than the content of what you think. So one could use consequentialism as a way of looking at ethical issues at either stage 4 or stage 5.

At stage 4, one adopts some system as an absolute. The Effective Altruism movement mostly does do that with utilitarianism (which is a type of consequentialism). That leads to a series of typical failure modes, which I do see afflicting EA (although in general I think EA is a very good thing). That is, it’s afflicted with both the failure modes of eternalism (absolutizing some system) and the failure modes of utilitarianism/consequentialism specifically.

At stage 5, one uses systems as tools when appropriate, but doesn’t see them as ultimate. Consequentialism is one way of looking at ethical problems that is often valuable. Sometimes it gives wrong answers, so you shouldn’t absolutize it.

Nebulosity and systems

D'James:

With the caveat that diagnosing my own place in Kegan’s framework is as dubious as medically diagnosing myself over the internet: am I accurate in saying that I (and a few Millennial peers) might have taken some steps toward a more complete stance in the wake of Occupy, by rejecting ideological socialism and instead seeing it as a tool that makes sense in certain circumstances, but cannot describe the best or right political action for all times? I’m not a smart lad either, so I’m sure there are others coming to similar conclusions, but the current political arena discourages making them known publicly. I suppose this is running perilously close to Aleksandr Dugin’s all-inclusive “Third Rome” idea (“The South for Social Conservatives, The Heartland for sleeve-rolling anarcho-syndicalists, and remote coastal enclaves for the liberal cosmopolitan decadents, all under the gaze of a stern and loving Papa”), which may be one of the most ironically colorful and truly insane stance combinations (post-systems collapsing into stage 3 tribal romanticism? I think it’s simply a snow shovel broad enough to scoop up numerous disaffected young elements in both left and right anti-imperialist camps, to be deposited on the “Revolutionary Island of Pleasures” and turned into marching donkeys). I only bring him up since he’s a big Heidegger buff; any chance of more analysis of Romanticism?

It’s damned hard to avoid in anything, especially the aesthetic. (Camille Paglia’s writings on art are great; I remember her mentioning that Hollywood was a boon for American political freedom, by creating a space for handsome, charismatic men to achieve vast power and prestige without touching pragmatic policy. You seem on the verge of a treatise describing how this broke down!)

"Open-ended curiosity is an

Bad Horse:

“Open-ended curiosity is an antidote to both eternalism and nihilism, and a key aspect of the complete stance. When it comes to highway maintenance, bank regulation, and cybersecurity, most people aren’t curious; and there is no reason they should be. But that does imply they shouldn’t be interested in politics.”

That’s three great insights–thanks!

"Do the right thing"

Bad Horse:

@nick barr:

“This reminds me of “consequentialism,” the philosophical stance that the ends justify the means (and in cases where it seems not to, you’re just not thinking hard enough). Is that accurate?”

Consequentialism is the stance that you should measure the goodness of an action as some function of its consequences. There are some big questions this leaves open:

  • Do you measure the goodness of individual actions after the fact, by their effects, or do you measure goodness before an action is taken, by the distribution of its possible outcomes? The former is not very useful; it lets you assign praise or blame, but it can’t serve as a decision theory.
  • If you measure goodness before an action is taken, how do you aggregate the possible outcomes? A standard answer is that the von Neumann-Morgenstern theorem proves you must maximize expected value. This disturbs people when the distribution of possible outcomes is very uneven–e.g., playing Russian roulette with the existence of the universe at stake.

    I wrote a rebuttal of this claim in story form here. I argued that the “theorem” is circular: Its 4th assumption is that if M is better than L, then a fifty percent chance of M and a fifty percent chance of N is better than a fifty percent chance of L and a fifty percent chance of N, no matter what N is. But if M = N, this says a 100% chance of M is better than a 50% chance of L and a 50% chance of M. Do the math (a sketch of the algebra appears after this list) and you’ll see this implies that your utility function must take the expected value of the possible actions.

    But that’s what the theorem claims to prove! Its conclusion can thus be proven from just one of its assumptions, which means its conclusion was assumed.

  • Do you measure the goodness of individual actions, or of policies? That is, does your decision theory consider only a decision at one moment in time, or do you look at iterated games? This is important in social coordination problems that have better outcomes if you can plausibly pre-commit, e.g., Parfit’s hitchhiker. Ignoring this distinction leads to confusing decision theory with psychology (see Newcomb’s problem).
  • Do you evaluate only the final outcome (the “eternalist” approach, which is “the ends justify the means”), or do you add up utility over the course of the action as well?
  • Do you evaluate outcomes as instantaneous states (the “eternalist” approach), or do you integrate over time? If the latter, how far into the future do you look? To infinity? Looking into the future reverses most “commonsense” ethical decisions, which typically optimize for people alive now at the expense of future beings.
  • How do you do time discounting in your computation? If you don’t use exponential time discounting, evaluating all possible consequences to infinity has been proven to be noncomputable. (The standard exponential form is written out after this list.) Eliezer Yudkowsky has an argument for using no time discounting, but I think we have to rule that option out, as it’s paralyzing.
  • Do you measure consequences subjectively (aggregated over individual agents), or objectively (one measurement of the world)? If the former, who counts as an agent, and how do you weigh the different agents? One uses the word “utilitarianism” rather than “consequentialism” when considering this aspect, but as far as I can tell, the two words mean the same thing.

    The utility monster “paradox” falls under this question. Note that people who claim the utility monster is abhorrent actually favor respecting utility monsters, as the position that the consequences to a human should be weighed more heavily than the consequences to an individual bacterium depends on the human being a utility monster.

    Also see “Average utilitarianism must be correct?”, which argues that whether you should average over people is the same question as whether you should average over your possible future selves. (I no longer believe it is, but it is the same question for any social utility function with no quasi-indexicals in it, e.g., a utility function not computed from a first-person point-of-view.)
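
Spelling out the “M = N” step from the second bullet above, as a sketch only: the notation here (a preference order ≻ represented by a utility function u, and pA + (1−p)B for a lottery giving A with probability p and B otherwise) is assumed for illustration and is not taken from the original wording.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Sketch only; notation assumed for illustration:
% $\succ$ is the preference order, $u$ a utility function representing it,
% and $pA + (1-p)B$ a lottery giving $A$ with probability $p$, else $B$.

% The independence-style assumption, stated with probability 1/2:
\[
  M \succ L \;\Longrightarrow\;
  \tfrac{1}{2}M + \tfrac{1}{2}N \;\succ\; \tfrac{1}{2}L + \tfrac{1}{2}N
  \quad \text{for any } N.
\]

% Setting $N = M$, the left-hand lottery is just $M$ with certainty:
\[
  M \succ L \;\Longrightarrow\;
  M \;\succ\; \tfrac{1}{2}L + \tfrac{1}{2}M .
\]

% Together with the theorem's other assumptions (ordering and continuity),
% iterating such mixtures makes the representing utility linear in the
% probabilities:
\[
  u\bigl(pL + (1-p)M\bigr) \;=\; p\,u(L) + (1-p)\,u(M),
\]
% which is the expected-value form at issue.

\end{document}
```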
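
Similarly, for the time-discounting bullet, here is the standard exponential form written out, again with assumed notation (u_t for the utility received at period t, γ for the discount factor). It illustrates only the convergence point, not the stronger noncomputability claim.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Exponentially discounted total utility (notation assumed for illustration):
% $u_t$ is utility received at period $t$; $0 < \gamma < 1$ is the discount factor.
\[
  U \;=\; \sum_{t=0}^{\infty} \gamma^{t}\, u_t .
\]

% If per-period utilities are bounded, $|u_t| \le B$, the series converges:
\[
  |U| \;\le\; \sum_{t=0}^{\infty} \gamma^{t} B \;=\; \frac{B}{1-\gamma},
\]
% whereas with no discounting ($\gamma = 1$) the infinite-horizon sum need not
% converge at all, which is one concrete way to see why dropping discounting
% is problematic.

\end{document}
```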