Comments on “The wheel of fortune”

Singularity.

Lawrence D'Anna 2016-03-17

“Techno-futurists are sure a Singularity will soon deliver us from all material afflictions (or perhaps doom us to sudden extinction)”

I’m sure you can find some techno-futurists who think that way, but there’s a non-eternalist version of it too, which I’ll try to present here.

We have strong reasons to believe general AI and brain emulation are feasible technologies, and strong reasons to believe they will be achieved if our current civilization can avoid collapse for, say, 1000 years. There are also weaker reasons to believe it could happen much faster.

These techs do not offer to make us live forever, or perfect us, or change our basic condition of being resource-constrained physical creatures. But they do offer extraordinary possibilities and dangers which are unprecedented in the history of our species. Sudden extinction is possible. It’s also possible we could continue to exist but under hellish or meaningless conditions.

It’s also possible that these techs could bring about a new regime of progress and glory for humanity, like the industrial revolution but bigger. Modern civilization packed more progress into centuries than agricultural civs managed in millennia. A successor civilization of thinking computers could achieve more in decades than we have in centuries.

How we handle this transition will probably determine the fate of our galaxy for millions of years.

But exponential growth curves eventually flatten out. Meanings remain nebulous. Resources remain constrained. Life remains finite. There’s nothing eternal at stake here. What’s at stake is very big, but it’s finite.

The industrial revolution also offered us unimaginable riches and the potential for extinction. This is not fundamentally different, just bigger.

Non-eternalist futurism

David Chapman 2016-03-17

Thanks, yes, I agree with almost everything you’ve said there!

The one thing I would disagree with is:

We have strong reasons to believe general AI and brain emulation are feasible technologies

I don’t think there’s any in-principle reason to think they are impossible, but I also don’t think there are strong reasons to believe they are “feasible”—depending, perhaps, on what “feasible” means. Presumably quantum-level simulation of a whole person in a whole social and physical environment would suffice; but that is not feasible. We don’t know whether there’s a level of simulation above quantum that will get equivalent results. Currently, we don’t know much about how brains, or even individual neurons, work—or even what they do. One can guess that there would be some intermediate level of simulation, but there’s not much basis for that guess.

Dirty rant about brain simulation

David Chapman 2016-03-17

By the way, this is a great rant about the state of the art in brain simulation:

http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/

Brain simulation may be a solved problem in a hundred years—or not. It’s not going to happen soon.

[Content warning: very many uses of the word “fucking,” as in “we have no fucking clue.”]

Arguments for feasibility of brain emulation.

Lawrence D'Anna 2016-03-17

Good points on emulation, but I’m still pretty convinced it’s feasible in the long term.

The critique about the level of detail required for an effective simulation is valid in general. There are physical systems that we can know all the relevant physics about and still not simulate effectively: weather, because of dynamical chaos; a quantum computer running Shor’s algorithm, because of complexity (probably). But I think there are reasons specific to brains for why we should expect to be able to simulate them.
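(As a toy illustration of the dynamical-chaos point, and nothing more: the sketch below uses the logistic map, a standard textbook chaotic system, as a stand-in for something like weather. Even with the governing equation known exactly, a tiny error in the initial condition grows until the simulation and the “real” trajectory have nothing to do with each other.)

```python
# Toy illustration of chaos: perfect knowledge of the dynamics does not
# rescue a simulation from a tiny error in the initial condition.
# The logistic map here is only a stand-in for a chaotic system like weather.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x), which is chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

real = logistic_trajectory(0.400000000, 50)
simulated = logistic_trajectory(0.400000001, 50)  # initial condition off by 1e-9

for step in (10, 20, 30, 40, 50):
    print(step, abs(real[step] - simulated[step]))

# The gap grows from about 1e-9 to order 1 within a few dozen steps,
# even though the dynamics themselves are perfectly known.
```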

First, there’s the universality of computation. Brains are a physical control system. Representations of the world from the sense organs come in, encoded as nerve signals, and control signals for muscles come out. Many different types of physical technologies have been invented or imagined for creating general-purpose control systems, or calculation engines. Many different mathematical models of computation have been devised. They’re almost all equivalent. The only exception is quantum computers, which are closely related.

Second, although we may have no clue about the wiring, the parameters, the algorithms, or what the important thing to simulate is, we do know a few things about neurons, and they are extremely suggestive of computation. You’ve got a network of discrete elements, each hooked up to a bounded number of partners, each imposing a specific relationship between its connection points. This looks a lot like the transistor model of computation. Yes, it’s different in many particulars. There are many more connections per neuron, the relationships they impose are more complicated than a transistor’s, etc., but those differences don’t do anything to suggest the neuron model is not just as Turing-equivalent as the transistor model.
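(Here’s a minimal sketch of that analogy, under the cartoonish assumption that a “neuron-style” element can be caricatured as a McCulloch-Pitts threshold unit; real neurons are far more complicated. The point is only that a transistor-style element and a threshold-style element differ in their particulars yet networks of either compute the same function, which is the sense of equivalence I mean.)

```python
# A "transistor-style" element (NAND gate) and a "neuron-style" element
# (threshold unit with weights) are different in their particulars, but
# networks of either can compute the same Boolean function (XOR here).
# This is an illustration of the argument, not a claim about real neurons.

def nand(a, b):
    return 1 - (a & b)

def threshold_unit(inputs, weights, bias):
    """McCulloch-Pitts-style unit: fire iff the weighted sum exceeds the bias."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > bias else 0

def xor_from_nands(a, b):
    # XOR built from four NAND gates, the classic construction.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def xor_from_neurons(a, b):
    # The same function from three threshold units.
    h_or = threshold_unit([a, b], [1, 1], 0.5)    # fires on OR
    h_and = threshold_unit([a, b], [1, 1], 1.5)   # fires on AND
    return threshold_unit([h_or, h_and], [1, -1], 0.5)  # OR but not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == xor_from_neurons(a, b)
print("Both element types compute XOR identically on all inputs.")
```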

Third, even if brains really are some sort of amazing, undreamt-of form of physical control system that transcends the capabilities of Turing machines, they are still mechanistic. They are still a physically-embodied technology, which was designed by a process which is in many ways much stupider than we are, of which we have working examples, and a copy of the instructions for building more. It may be very, very hard, but it seems way too pessimistic to say that this tech will remain mysterious to us after a thousand years.

Nothing is certain, but I would be very surprised if humanity still cannot master the technology of thought after a thousand years of trying, starting from where we are now. And once we do, it seems just as unlikely that artificial thought would not be able to exceed the performance of biological thought by orders of magnitude.

One of the limitations of evolution is it tends to get stuck in local maxima. Protein and DNA evolved to synthesize metabolic catalysts, not to make minds. But when all you’ve got is a ribosome, every problem looks like organic chemistry. Our brains are cobbled together from the parts evolution had at hand, but artificial brains will be able to exploit a much larger design space and achieve much higher performance.

I’m not expecting to get uploaded in my lifetime. I don’t really expect to see AGI, though that seems more plausible. But a singularity in the next millennium is something I’d bet on, assuming we don’t nuke ourselves into the stone age or something.

Eternalism

Jayarava 2016-03-19

It’s apparent that at least in its more popular forms, Buddhism is eternalistic. For example, the Just World Hypothesis is axiomatic for modern-day Buddhists. They believe in the Moral Universe. This is why karma seems so plausible to them.

The afterlife is the central strategy of all religions for dealing with violations of the JWH. I think you might want to mention this. It really is very important in trying to understand injustice in a just world. In this view, moral accounting happens in the afterlife. Be it the balance operated by Anubis in the Egyptian afterlife or any of the metaphorical variations on it, including the sugati/durgati of Buddhism, they all make up for undeserved suffering in the afterlife. This way the eternal order is maintained. Apparent violations are localised and counteracted in the long run - the law of conservation of harmony (or something).

Also note Gananath Obeyesekere’s (Imagining Karma) observation that the introduction of ethics bifurcates the afterlife: “good” people go to a place of reward; “bad” people go to a place of punishment or non-existence. In primitive forms of rebirth, for example in the Ṛgveda (probably the regional eschatology the Indic speakers picked up on arrival in India), rebirth is neutral - one dies, goes to the ancestors, comes back. Rinse and repeat. There is no judgement. But in the early Upaniṣads the destinations start to diverge based on whether or not one has performed the rituals; Buddhism, and probably Jainism at around the same time, changed the criterion to following the community’s rules about how to treat other people (i.e. ethics).

Our politicians just cut payments to people with disabilities (designed to help them pay for equipment to get back into the workplace) in order to help fund income-tax cuts for the rich. Are you sure they are not evil? I have my doubts.

Buddhist eternalism

David Chapman 2016-03-19

Yes, absolutely! My outline has a page titled “Buddhist eternalism” in the section on non-theistic eternalism. It’s a topic I care about quite a lot, but it’s probably not an important one for the intended audience of the book, so I’m not sure if I’ll ever get around to writing it up.

The afterlife is the central strategy of all religions for dealing with violations of the JWH. I think you might want to mention this.

I think I did—although only in passing, and it could certainly be expanded on! Thus:

how can bad things happen to good people? ... theories of cosmic justice in the afterlife are attempts to preserve eternalism against everyday experience

Evil

Sasha 2017-06-09

Interesting critique.
It seems fair to say that people are not purely evil or purely good, but a mix; but aren’t certain actions purely evil or good, e.g. mass murder?

Certainty, control, and understanding feel beside the point

Thomas Paynter 2023-08-29

I read these sections to try to get a better handle on what you mean by meaning. Your claim is that eternalism is attractive because it promises certainty, control, and understanding, and I take it those are the qualities you think we want for meaning (at least until we take the compelled stance)—thus, eternalism is attractive.

Those qualities don’t seem like what I want from meaning though. I think I want two things: reward, that is, a valuable good-for-me payoff that makes work, struggle, and life worthwhile; and a sense that what I do is good/valuable/meaningful in some larger way.

Taking the first, as a person who tends toward depression and dysphoria, I often find everyday life flat, joyless, boring, etc. Things that are supposed to be the payoff, like a trip to look at a beautiful vista or Christmas with family, can seem ho-hum and disappointing. I want the promise that life will have payoffs in the form of direct pleasurable experiences, like passionate sex with a beautiful partner. I’d be a heroin addict if the ruinous consequences of that didn’t promise more pain than pleasure. Life lacks meaning in the sense that the rewards don’t feel good enough to justify all the trouble.

As for larger meaning, that’s wanting what I do to feel morally good and connected to some payoff for others that is greater and longer-lasting than my own life. If I were closely identified with some group that was fighting a war for liberation, or engaged in a long struggle towards some promised land that would lift us out of our present struggle and oppression, that would have the kind of meaning I mean.

These can be shoehorned into certainty, understanding, and control, but it seems a poor fit. Yes, I’d like more certainty about life’s rewards, more control over receiving them, and I guess it would be nice to understand where they come from; but it is the reward/pleasure/good feeling that I want. If I could go through my days on a morphine drip half the time, determined randomly for reasons I did not understand, without consequences, I’d spend less time wondering what the point of all this is and if it is ever going to get good. And being part of some noble effort would help too, even if I wasn’t certain it would work out, didn’t have full control of that, and didn’t understand all the particulars.
