Comments on “The wheel of fortune”



"Techno-futurists are sure a Singularity will soon deliver us from all material afflictions (or perhaps doom us to sudden extinction)"

I'm sure you can find some techno-futurists who think that way, but there's a non-eternalist version of it too, which I'll try to present here.

We have strong reasons to believe general AI and brain emulation are feasible technologies, and strong reasons to believe they will be achieved if our current civilization can avoid collapse for, say, 1000 years. There are also weaker reasons to believe it could happen much faster.

These techs do not offer to make us live forever, or perfect us, or change our basic condition of being resource-constrained physical creatures. But they do offer extraordinary possibilities and dangers which are unprecedented in the history of our species. Sudden extinction is possible. It's also possible we could continue to exist but under hellish or meaningless conditions.

It's also possible that these techs could bring about a new regime of progress and glory for humanity, like the industrial revolution but bigger. Modern civilization packed more progress into centuries than agricultural civilizations managed in millennia. A successor civilization of thinking computers could achieve more in decades than we have in centuries.

How we handle this transition will probably determine the fate of our galaxy for millions of years.

But exponential growth curves eventually flatten out. Meanings remain nebulous. Resources remain constrained. Life remains finite. There's nothing eternal at stake here. What's at stake is very big, but it's finite.

The industrial revolution also offered us unimaginable riches and the potential for extinction. This is not fundamentally different, just bigger.

Non-eternalist futurism

Thanks, yes, I agree with almost everything you've said there!

The one thing I would disagree with is:

We have strong reasons to believe general AI and brain emulation are feasible technologies

I don't think there's any in-principle reason to think they are impossible, but I also don't think there are strong reasons to believe they are "feasible"—depending, perhaps, on what "feasible" means. Presumably quantum-level simulation of a whole person in a whole social and physical environment would suffice; but that is not feasible. We don't know whether there's a level of simulation above quantum that will get equivalent results. Currently, we don't know much about how brains, or even individual neurons, work—or even what they do. One can guess that there would be some intermediate level of simulation, but there's not much basis for that guess.

Arguments for the feasibility of brain emulation

Good points on emulation, but I'm still pretty convinced it's feasible in the long term.

The critique about the level of detail required for an effective simulation is valid in general. There are physical systems whose relevant physics we know completely and still cannot simulate effectively: weather, because of dynamical chaos; a quantum computer running Shor's algorithm, because of computational complexity (probably). But I think there are reasons specific to brains for why we should expect to be able to simulate them.

First, there's the universality of computation. Brains are a physical control system: representations of the world come in from the sense organs, encoded as nerve signals, and control signals for muscles go out. Many different physical technologies have been invented or imagined for building general-purpose control systems or calculation engines, and many different mathematical models of computation have been devised. They're almost all equivalent. The only exception is quantum computers, which are closely related.

Second, although we may have no clue about the wiring, the parameters, the algorithms, or what the important thing to simulate is, we do know a few things about neurons, and they are extremely suggestive of computation. You've got a network of discrete elements, each hooked up to a bounded number of partners, each imposing a specific relationship between its connection points. This looks a lot like the transistor model of computation. Yes, it differs in many particulars: there are many more connections per neuron, the relationships they impose are more complicated than a transistor's, and so on. But those differences don't do anything to suggest the neuron model is any less Turing-equivalent than the transistor model.
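To make "suggestive of computation" concrete, here's a toy sketch of a threshold element in the McCulloch-Pitts style (the weights and function names are my own illustrative choices, not claims about biological parameters). A single such element can compute NAND, which is functionally complete, so a network of them can compute anything a Boolean circuit can:

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def nand(a, b):
    # Weights (-1, -1) with threshold -1: fires unless both inputs are 1.
    return neuron((a, b), (-1, -1), -1)

# NAND is universal: any Boolean function can be built from it alone.
# For example, XOR from four NAND elements:
def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```

The point is not that real neurons are threshold units (they aren't, exactly), but that even this crude caricature of a neuron is already a universal building block.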

Third, even if brains really are some sort of amazing, undreamt-of form of physical control system that transcends the capabilities of turing machines, they are still mechanistic. They are still a physically-embodied technology, which was designed by a process which is in many ways much stupider than we are, which we have working examples of and a copy of the instructions for building more. It may be very, very hard, but it seems way too pessimistic to say that this tech will remain mysterious to us after a thousand years.

Nothing is certain, but I would be very surprised if humanity still cannot master the technology of thought after a thousand years of trying, starting from where we are now. And once we do, it seems just as unlikely that artificial thought would not be able to exceed the performance of biological thought by orders of magnitude.

One of the limitations of evolution is it tends to get stuck in local maxima. Protein and DNA evolved to synthesize metabolic catalysts, not to make minds. But when all you've got is a ribosome, every problem looks like organic chemistry. Our brains are cobbled together from the parts evolution had at hand, but artificial brains will be able to exploit a much larger design space and achieve much higher performance.
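The "local maxima" point is the standard picture from optimization: a greedy searcher that only ever accepts improving moves gets stranded on the nearest peak, even when a much higher one exists elsewhere. A minimal sketch (the landscape function is an invented example):

```python
def hill_climb(f, x, step=1):
    """Greedy local search: move to a neighbour only while that improves f."""
    while True:
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x  # no neighbour improves: stuck at a local maximum
        x = best

# A landscape with a small local peak at x=2 (height 0) and a
# higher global peak at x=10 (height 8).
def landscape(x):
    return -abs(x - 2) if x < 6 else 8 - abs(x - 10)

# Starting at x=0, the climber stops at the local peak x=2,
# never reaching the better peak at x=10.
```

Evolution isn't literally this algorithm, but it shares the key limitation: it can only explore designs adjacent to what already works.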

I'm not expecting to get uploaded in my lifetime. I don't really expect to see AGI either, though that seems more plausible. But a singularity in the next millennium is something I'd bet on, assuming we don't nuke ourselves back into the stone age or something.


It's apparent that, at least in its more popular forms, Buddhism is eternalistic. For example, the Just World Hypothesis is axiomatic for modern-day Buddhists. They believe in the Moral Universe. This is why karma seems so plausible to them.

The afterlife is the central strategy of all religions for dealing with violations of the JWH. I think you might want to mention this. It really is very important in trying to understand injustice in a just world. In this view, moral accounting happens in the afterlife. Be it the balance operated by Anubis in the Egyptian afterlife or any of the metaphorical variations on it, including the sugati/durgati of Buddhism, they all make up for undeserved suffering in the afterlife. This way the eternal order is maintained. Apparent violations are localised and counteracted in the long run - the law of conservation of harmony (or something).

Also note Gananath Obeyesekere's observation (in Imagining Karma) that the introduction of ethics bifurcates the afterlife: "good" people go to a place of reward; "bad" people go to a place of punishment or non-existence. In primitive forms of rebirth, for example in the Ṛgveda (probably the regional eschatology the Indic speakers picked up on arrival in India), rebirth is neutral: one dies, goes to the ancestors, comes back. Rinse and repeat. There is no judgement. But in the early Upaniṣads the destinations start to diverge based on whether or not one has performed the rituals; Buddhism, and probably Jainism at about the same time, changed the criterion to following the community's rules about how to treat other people (i.e. ethics).

Our politicians just cut payments to people with disabilities (designed to help them pay for equipment to get back into the workplace) in order to help fund income-tax cuts for the rich. Are you sure they are not evil? I have my doubts.

Buddhist eternalism

Yes, absolutely! My outline has a page titled "Buddhist eternalism" in the section on non-theistic eternalism. It's a topic I care about quite a lot, but it's probably not an important one for the intended audience of the book, so I'm not sure if I'll ever get around to writing it up.

The afterlife is the central strategy of all religions for dealing with violations of the JWH. I think you might want to mention this.

I think I did—although only in passing, and it could certainly be expanded on! Thus:

how can bad things happen to good people? ... theories of cosmic justice in the afterlife are attempts to preserve eternalism against everyday experience



Interesting critique.
It seems fair to say that people are not purely evil or purely good but a mix; but aren't certain actions purely evil or good? E.g., mass murder.




Copyright ©2010–2017 David Chapman.