This page is unfinished. It may be a mere placeholder in the book outline. Or, the text below (if any) may be a summary, or a discussion of what the page will say, or a partial or rough draft.
Utilitarianism is an ethical theory based on the intuition that one should act to produce the most good for everyone overall. That intuition is often right. Trying to make it the sole source of ethics always fails, though. This is an example of a non-theistic, rationalist eternalist error.
Utilitarianism is an accountant’s theory of morality. (It appeals especially to atheists of a technical bent.) Suppose you have to choose between two actions. If you could predict all the results of each action, and if you could figure out how good (or bad) the results would be for everyone, and if you could combine all the goods and bads into a single total number, then you could compare the totals for each action, and choose the better one. (This is an example of the continuum gambit.)
Notice the “ifs” in this story. To make utilitarianism work, you’d have to be able to:
- predict all the effects of actions
- assign a numerical goodness or badness to each effect on each person
- combine these numbers into a meaningful total
Each of these tasks is quite impossible.
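To make the structure of the proposal concrete, here is a minimal sketch of the utilitarian accounting procedure described above. All the names and numbers are invented placeholders; the point of this page is precisely that no real method exists for producing such numbers.

```python
# A sketch of the utilitarian "accounting" procedure.
# The per-person utility scores below are invented placeholders:
# in reality, predicting all effects and scoring them numerically
# are exactly the impossible tasks listed above.

def total_utility(effects):
    """Combine each person's goodness/badness score into one total."""
    return sum(effects.values())

def choose_action(actions):
    """Pick the action whose predicted effects sum highest."""
    return max(actions, key=lambda name: total_utility(actions[name]))

# Hypothetical predicted effects of two actions on three people:
actions = {
    "keep promise":  {"alice": 3, "bob": -1, "carol": 2},  # total 4
    "break promise": {"alice": -2, "bob": 5, "carol": 0},  # total 3
}

best = choose_action(actions)
```

The sketch runs, but only because every hard part has been assumed away: the effect predictions, the numerical scores, and the legitimacy of summing one person's gain against another's loss are all simply stipulated.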
Utilitarianism is, therefore, a wrong-way reduction: it replaces the difficult but tractable problem of ethical decision-making with an absolutely hopeless one. This is just like the sportsball problem I discussed earlier. It is far easier to predict which team will win a sportsball game than to predict how many goals each side will score.
The Other Leading Brands of ethical theory—deontology and virtue ethics—don’t require you to solve such problems. Deontology merely requires that you follow rules, and virtue ethics that you be a moral sort of person. These approaches have other dire defects, and are quite wrong. But they don’t require impossible feats of computation.
Utilitarians are undeterred. When pressed, they usually admit the impossibilities. Further, they admit that no known version of utilitarianism gives correct ethical answers even in principle, even if you could solve all the impossible problems.
The seemingly simple ethical accounting turns fiendishly complicated once you dive into the details. Every accounting scheme produces clearly wrong results in some cases. Utilitarians propose ever-more-complex approaches, each of which turns out to have its own pathologies. This obviates utilitarianism’s most attractive feature: its intuitive simplicity, at first glance, compared with the endless rules of deontology and the elaborately literary conceptions of virtue.
When challenged, utilitarians usually argue that, on balance, their theory is less bad than deontology or virtue ethics—which they regard as the only two possible alternatives. (The fact that all three are clearly wrong does not seem to motivate a search for other possibilities.)
Utilitarians suggest that, even if it is impossible to calculate the overall goodness of actions, doing so even approximately is the correct approach to ethics. They feel that there must be a version of their theory that actually works, and that all-purpose methods of approximation must exist—even though they are presently unknown. This is a nice example of eternalistic wistful certainty.
Eternalism is the denial of nebulosity: the fact that meaningness is inherently indefinite, uncertain, and untidy. Utilitarianism proposes a fixed, objective, sharp-edged theory of ethics—which I believe is entirely impossible.
The nebulosity of ethics is uncomfortable. It means we can have no guarantee of acting ethically, no matter how hard we try. It means ethics is really hard.
Utilitarianism promised, at first glance, that ethics is easy, just a matter of adding some numbers. Looked at in detail, it makes ethics impossible, not merely really hard.
Later in the book, we’ll look at the ways ethical eternalism’s failure produces ghastly, unethical outcomes.