Shared posts

08 Apr 21:38

Whose Utilitarianism?

by Scott Alexander
Braden Anderson

The wirehead civilization seems to me to offer a less meaningful happiness, like Mill's "fool satisfied". I am more comfortable with Mill's explicit observation that "Few human creatures would consent to be changed into any of the lower animals for a promise of the fullest allowance of a beast's pleasures" than with my vague intuition, though. I do think that most humans would reject wireheading, notwithstanding Zach Weiner's incisive near-far argument ( http://www.smbc-comics.com/index.php?db=comics&id=2625 ).

[Trigger warning: attempt to ground morality]

God help me, I’m starting to have doubts about utilitarianism.

Whose Superstructure?

The first doubt is something like this. Utilitarianism requires a complicated superstructure – a set of meta-rules about how to determine utilitarian rules. You need to figure out which of people’s many conflicting types of desires are their true “preferences”, make some rules on how we’re going to aggregate utilities, come up with tricks to avoid the Repugnant Conclusion and Pascal’s Mugging, et cetera.

I have never been too bothered by this in a practical sense. I agree there’s probably no perfect Platonic way to derive this superstructure from first principles, but we can come up with hacks for it that produce good results. That is, given enough mathematical ingenuity, I could probably come up with a utilitarian superstructure that exactly satisfied my moral intuitions.

And if that’s what I want, great. But part of the promise of utilitarianism was that it was going to give me something more objective than just my moral intuitions. Don’t get me wrong; formalizing and consistency-ifying my moral intuitions would still be pretty cool. But that seems like a much less ambitious project. It is also a very personal project; other people’s moral intuitions may differ and this offers no means of judging the dispute.

Whose Preferences?

Suppose you go into cryosleep and wake up in the far future. The humans of this future spend all their time wireheading. And because for a while they felt sort of unsatisfied with wireheading, they took a break from their drug-induced stupors to genetically engineer all desires beyond wireheading out of themselves. They have neither the inclination nor even the ability to appreciate art, science, poetry, nature, love, etc. In fact, they have a second-order desire in favor of continuing to wirehead rather than having to deal with all of those things.

You happen to be a brilliant scientist, much smarter than all the drugged-up zombies around you. You can use your genius for one of two ends. First, you can build a better wireheading machine that increases the current run through people’s pleasure centers. Or you can come up with a form of reverse genetic engineering that makes people stop their wireheading and appreciate art, science, poetry, nature, love, etc again.

Utilitarianism says very strongly that the correct answer is the first one. My moral intuitions say very strongly that the correct answer is the second one. Once again, I notice that I don’t really care what utilitarianism says when it goes against my moral intuitions.

In fact, the entire power of utilitarianism seems to be that I like other people being happy and getting what they want. This allows me to pretend that my moral system is “do what makes other people happy and gives them what they want” even though it is actually “do what I like”. As soon as we come up with a situation where I no longer like other people getting what they want, utilitarianism no longer seems very attractive.

Whose Consequentialism?

It seems to boil down to something like this: I am only willing to accept utilitarianism when it matches my moral intuitions, or when I can hack it to conform to my moral intuitions. It usually does a good job of this, but sometimes it doesn’t, in which case I go with my moral intuitions over utilitarianism. This means both that utilitarianism can’t ground my moral intuitions and that, if I’m honest, I might as well just admit I’m following my own moral intuitions. Since I’m not claiming my moral intuitions are intuitions about anything, I am basically just following my own desires. What looked like a universal consequentialism is basically just my consequentialism with the agreement of the rest of the universe assumed.

Another way to put this is to say I am following a consequentialist maxim of “Maximize the world’s resemblance to W”, where W is the particular state of the world I think is best and most desirable.

This formulation makes “follow your own desires” actually not quite as bad as it sounds. Because I have a desire for reflective equilibrium, I can at least be smart about it. Instead of doing what I first-level-want, like spending money on a shiny new car for myself, I can say “What I seem to really want is other people being happy” and then go investigate efficient charity. This means I’m not quite emotivist and I can still (for example) be wrong about what I want or engage in moral argumentation.

And it manages to (very technically) escape the charge of moral relativism too. I think of a relativist as saying “Well, I like a world of freedom and prosperity for all, but Hitler likes a world of genocide and hatred, and that’s okay too, so he can do that in Germany and I’ll do my thing over here.” But in fact if I’m trying to maximize the world’s resemblance to my desired world-state, I can say “Yeah, that’s a world without Hitler” and declare myself better than him, and try to fight him.

But what it’s obviously missing is objectivity. From an outside observer’s perspective, Hitler and I are following the same maxim and there’s no way she can pronounce one of us better than the other without having some desires herself. This is obviously a really undesirable feature in a moral system.

Whose Objectivity?

I’ve started reading proofs of an objective binding morality about the same way I read diagrams of perpetual motion machines: not with an attitude of “I wonder if this will work or not” but with one of “it will be a fun intellectual exercise to spot the mistake here”. So far I have yet to fail. But if there’s no objective binding morality, then the sort of intuitionism above is a good description of what moral actors are doing.

Can we cover it with any kind of veneer of objectivity more compelling than this? I think the answer is going to be “no”, but let’s at least try.

One idea is a post hoc consequentialism. Instead of taking everyone’s desires about everything, adding them up, and turning that into a belief about the state of the world, we take everyone’s desires about states of the world, then add all of those up. If you want the pie and I want the pie, we both get half of the pie, and we don’t feel a need to create an arbitrary number of people and give them each a tiny slice of the pie for complicated mathematical reasons.

This would “solve” the Repugnant Conclusion and Pascal’s Mugging, and at least change the nature of the problems around “preference” and “aggregation”. But it wouldn’t get rid of the main problem.

The other idea is a sort of morals as Platonic politics. Hobbes has this thing where we start in a state of nature, and then everybody signs a social contract to create a State because everyone benefits from the State’s existence. But because coordination is hard, the State is likely to be something simple like a monarchy or democracy, and the State might not necessarily do what any of the signatories to the contract want. And also no one actually signs the contract, they just sort of pretend that they did.

Suppose that Alice and Bob both have exactly the same moral intuitions/desires, except that they both want a certain pie. Every time the pie appears, they fight over it. If the fights are sufficiently bloody, and their preference for personal safety outweighs their preference for pie, it probably wouldn’t take too long for them to sign a contract agreeing to split the pie 50-50 (if one of them was a better fighter, the split might be different, but in the abstract let’s say 50-50).
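A toy expected-payoff comparison makes the point concrete. Every number below is a hypothetical illustration of “fights sufficiently bloody, safety outweighing pie”; nothing here comes from the post itself, it is just a minimal sketch in Python.

# Toy model: should Alice (or Bob) fight over the pie or honor a 50-50 split?
PIE_VALUE = 10      # how much each party values the whole pie
INJURY_COST = 50    # how much each party dislikes getting hurt in a fight
P_WIN = 0.5         # evenly matched fighters
P_INJURY = 0.6      # chance of getting hurt whenever a fight happens

def expected_payoff_fighting():
    # Fight every time the pie appears: maybe win it, maybe get hurt.
    return P_WIN * PIE_VALUE - P_INJURY * INJURY_COST

def expected_payoff_contract(split=0.5):
    # Honor the contract: a guaranteed share and no fighting.
    return split * PIE_VALUE

print("fighting:      ", expected_payoff_fighting())   # -25.0
print("50-50 contract:", expected_payoff_contract())   # 5.0

With payoffs in that regime the split dominates fighting for both parties, which is the only point the example is meant to make; shift the numbers (cheap fights, lopsided odds) and the predicted contract shifts with them.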

Now suppose Alice is very pro-choice and slightly anti-religion, and Bob is slightly pro-life and very pro-religion. With rudimentary intuitionist morality, Alice goes around building abortion clinics and Bob burns them down, and Bob goes around building churches and Alice burns them down. If they can both trust each other, it probably won’t take long before they sign a contract where Alice agrees not to burn down any churches if Bob agrees not to burn down any abortion clinics.

Now abstract this to a civilization of a billion people, who happen to be divided into two equal (and well-mixed) groups, Alicians and Bobbites. These groups have no leadership, and no coordination, and they’re not made up of lawyers who can create ironclad contracts without any loopholes at all. If they had to actually come up with a contract (in this case maybe more of a treaty) they would fail miserably. But if they all had this internal drive that they should imagine the contract that would be signed among them if they could coordinate perfectly and come up with a perfect loophole-free contract, and then follow that, they would do pretty well.

Because most people’s intuitive morality is basically utilitarian [citation needed], most of these Platonic contracts will contain a term for people being equal even if everyone does not have an equal position in the contract. That is, even if 60% of the Alicians have guns but only 40% of the Bobbites do, if enough members of both sides believe that respecting people’s preferences is important, the contract won’t give the Alicians more concessions on that basis alone (that is, we’re imagining the contract real hypothetical people would sign, not the contract hypothetical hypothetical people from Economicsland who are utterly selfish would sign).

Whose Communion?

So what about the wireheading example from before?

Jennifer RM has been studying ecclesiology lately, which seems like an odd thing for an agnostic to study. I took a brief look at it just to see how crazy she was, and one of the things that stuck with me was the concept of communion. It seems (and I know no ecclesiology, so correct me if I’m wrong) motivated by the need to balance the desire to unite as many people as possible under a certain banner against the conflicting desire to have everyone united under that banner believe mostly the same things and not be at one another’s throats. So you say “This range of beliefs is acceptable and still in communion with us, but if you go outside that range, you’re out of our church.”

Moral contractualism offers a similar solution. The Alicians and Bobbites would sign a contract because the advantages of coordination are greater than the disadvantages of conflict. But there are certain cases in which you would sign a much weaker contract, maybe one to just not kill each other. And there are other cases still when you would just never sign a contract. My Platonic contract with the wireheaders is “no contract”. Given the difference in our moral beliefs, whatever advantages I can gain by cooperating with them about morality are outweighed by the fact that I want to destroy their entire society and rebuild it in my own image.

I think it’s possible that all humans except psychopaths are in some form of weak moral communion with each other, at least of the “I won’t kill you if you don’t kill me” variety. I think certain other groups, maybe at the level of cultures (where culture = “the West”, “the Middle East”, “Christendom”), may be in some stronger form of moral communion with each other.

(note that “not in moral communion with” does not mean “have no obligations toward”. It may be that my moral communion with other Westerners contains an injunction not to oppress non-Westerners. It’s just that when adjusting my personal intuitive morality toward a morality I intend to actually practice, I only acausally adjust to those people whom I agree with enough already that the gain of having them acausally adjust toward me is greater than the cost of having me acausally adjust to them.)

In this system, an outside observer might be able to make a few more observations about the me-Hitler dispute. She might notice that Hitler or his followers were in violation of Platonic contracts it would have been in their own interests to sign. Or she might notice that the moral communions of humanity split neatly into two groups: Nazis and everybody else.

I’m pretty sure that I am rehashing territory covered by other people; contractualism seems to be a thing, and a lot of people I’ve talked to have tried to ground morality in timeless something-or-other.

Still, this appeals to me as an attempt to ground morality which successfully replaces obvious logical errors with completely outlandish incomputability. That seems like maybe a step forward, or something?

EDIT: Clarification in my response to Kaj here.

07 Apr 14:54

Friedman on Psychic Harm

by Steve Landsburg

Four terrific posts by David Friedman, partly on psychic harm, partly on talking about psychic harm. I’d recommend these highly even if they hadn’t invoked my name.

Landsburg v Bork: What Counts As Injury?

Response to Bork and Landsburg

Frightening Ideas

Why Landsburg’s Puzzle is Interesting


04 Apr 17:15

Deep value judgments and worldview characteristics

by Holden
Braden Anderson

It's almost weird how precisely I agree with this.

One purpose of this blog is to be explicit about some of the deep value judgments and worldviews that underlie our analysis and recommendations. As we raise the priority of expanding our research into new causes, this seems like a good time to lay out some of the things we believe – and some of the things we’re unsure about – on topics that could be of fundamental importance for the question of where to give.

In general, the below statements broadly describe the values of the GiveWell staff who have final say over our research. There may be cases in which different individuals would give different levels of weight/confidence to the various statements than I have, but at a high level we expect these statements to be a reasonably good guide to the values underlying GiveWell’s research.

Values

We don’t believe it would be productive to try to produce a complete explicit characterization of the fundamental values that guide our giving recommendations, but we think it’s worth noting some things about them.

  • We are global humanitarians, believing that human lives have equal intrinsic value regardless of nationality, ethnicity, etc. We do believe there may be cases where helping some people will create more positive indirect effects than helping others (for example, I stated in 2009 that I preferred helping people in urban areas for this reason, though this represents my view and not necessarily the view of others at GiveWell). However, we do not agree with the principle that “giving begins at home”: we do not assign more moral importance to people in our communities and in our country than to others.
  • The primary things we value are reducing suffering and tragic death and improving humans’ control over their lives and self-actualization. We also place value on reducing animals’ suffering, though substantially less than on human suffering. (We do not have clear consensus views on how to weigh these things against each other.)
  • We do not put strong weight on “achievements” (artistic endeavors, space exploration, etc.) as ends in themselves, though these may contribute to the things we do value (details above). We also don’t put strong weight on things like “justice,” “equality,” “fairness,” etc. as ends in themselves (though again, these may contribute to the things we do value).
  • We are broadly consequentialist: we value actions according to their consequences.
  • We are operating broadly in an “expected value” framework; we are seeking to “accomplish as much good as possible in expectation,” not to “ensure that we do no harm” or “maximize the chance that we do some good.”

There are many questions that we do not have internal consensus on, or are individually unsure of the answers to, such as:

  • How should one value increasing empowerment vs. reducing suffering vs. averting deaths?
  • How should one value animal suffering in comparison to human suffering?
  • Is it better to bring someone’s quality of life from “extremely poor” to “poor” or from “good” to “extremely good?”
  • Is creating a new life a good thing? Can it be a bad thing? How “desirable” or “undesirable” must the life be for its creation to count as a good/bad thing? Should we value “allowing future lives to exist that would never come into existence otherwise” similarly to “lives saved?”
  • Is it better to save the life of a five-year-old or fifteen-year-old?

We don’t believe it is practically possible to come to confident views on these sorts of questions. We also aren’t convinced it is necessary. We haven’t encountered situations in which further thought on these questions would be likely to dramatically change our giving recommendations. When we have noticed a dependency, we’ve highlighted it and encouraged donors to draw their own conclusions.

Worldview

We view the questions in the previous section as being largely “fundamental,” in that empirical inquiry seems unlikely to shift one’s views on them. By contrast, this section discusses views we have that largely come down to empirical beliefs about the world, but are very wide-ranging in their consequences (and thus in their predictions).

There are two broad worldview characteristics that seem, so far, to lie at the heart of many of our disagreements with others who have similar values.

1. We are relatively skeptical. When a claim is made that a giving opportunity can have high impact, our default reaction is to doubt the claim, even when we don’t immediately see a specific reason to do so. We believe (based partly on our experiences investigating charities) that most claims become less impressive on further scrutiny (and the more impressive they appear initially, the steeper the adjustment that happens on further scrutiny). As a result, we tend to believe that we will accomplish more good by recommending giving opportunities we understand relatively well than by recommending giving opportunities that we understand poorly and look more impressive from a distance.
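A minimal sketch of what that steeper adjustment can look like under a simple Bayesian model; the normal-normal setup, the units, and every number below are my own illustration, not GiveWell’s actual model.

# Skeptical prior over "true cost-effectiveness"; a noisy estimate gets
# shrunk toward that prior, and bigger claims get bigger absolute haircuts.
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    # Standard normal-normal Bayesian update: precision-weighted average.
    w = prior_var / (prior_var + estimate_var)
    return prior_mean + w * (estimate - prior_mean)

# Hypothetical units: lives saved per $1,000 donated.
PRIOR_MEAN, PRIOR_VAR = 0.1, 0.05   # skeptical prior: most claims are modest
ESTIMATE_VAR = 1.0                  # the claimed figure is itself very noisy

for claimed in (0.2, 1.0, 5.0):
    adjusted = posterior_mean(PRIOR_MEAN, PRIOR_VAR, claimed, ESTIMATE_VAR)
    print(f"claimed {claimed:4} -> adjusted {adjusted:.3f}")

The qualitative behavior is the point: the further a claimed figure sits above the skeptical prior, the more of it gets discounted.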

We have written about this aspect of our worldview previously, and have done some rudimentary work on formalizing its consequences:

  • A Conflict of Bayesian Priors? lays out the basic fact that we have a skeptical prior (by default, we expect that a strong claim will not hold up to scrutiny).
  • Why We Can’t Take Expected-Value Estimates Literally does some basic formalization of this aspect of worldview and explores some of the consequences, defending our general preference for giving where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good. It also explains why we put only limited weight on formal, explicit calculations of “expected lives saved” and similar metrics.
  • Maximizing cost-effectiveness via critical inquiry expands on this framework, laying out how it can be vital to understand a giving opportunity “from multiple angles.”
  • We will likely post more in the future on this topic.

2. We believe that further economic development, and general human empowerment, is likely to be substantially net positive, and that it is likely to lead to improvement on many dimensions in unexpected ways. This is a view we haven’t written about before, and it has strong implications for what causes to investigate. While we see great value in directly helping the poorest of the poor, we’re also open to the viewpoint that contributing to general economic development may have substantial benefits for the poorest of the poor (and for the rest of the world). And while we are open to arguments that particular issues (such as climate change) are particularly important to the future of humanity, we also believe that by default, we should expect contributions to economic development and human empowerment to be positive for the future of humanity; we don’t feel that one must necessarily choose between improving lives in the short and long term. (This view is part of why we put more weight on helping humans than on helping animals.)

Because of this view, we are open to outstanding giving opportunities across a wide variety of causes; we aren’t convinced that the best opportunities must be in developing-world aid, or mitigation of global catastrophic risks, or any other particular area. Even if a particular problem is, in some sense, the “most important,” it may be possible to accomplish more good by working in another cause where there is better room for more funding. We will discuss this view more in a future post.

02 Apr 23:30

Natural rights and wrongs?

by esr

One of my commenters recently speculated in an accusing tone that I might be a natural-rights libertarian. He was wrong, but explaining why is a good excuse for writing an essay I’ve been tooling up to do for a long time. For those of you who aren’t libertarians, this is not a parochial internal dispute – in fact, it cuts straight to the heart of some long-standing controversies about consequentialism versus deontic ethics. And if you don’t know what those terms mean, you’ll have a pretty good idea by the time you’re done reading.

There are two philosophical camps in modern libertarianism. What distinguishes them is how they ground the central axiom of libertarianism, the so-called “Non-Aggression Principle” or NAP. One of several equivalent formulations of NAP is: “Initiation of force is always wrong.” I’m not going to attempt to explain that axiom here or discuss various disputes over the NAP’s application; for this discussion it’s enough to note that libertarians take the NAP as a given unanimously enough to make it definitional. What separates the two camps I’m going to talk about is how they justify the NAP.

“Natural Rights” libertarians ground the NAP in some a priori belief about religion or natural law from which they believe they can derive it. Often they consider the “inalienable rights” language in the U.S.’s Declaration of Independence, abstractly connected to the clockmaker-God of the Deists, a model for their thinking.

“Utilitarians” justify the NAP by its consequences, usually the prevention of avoidable harm and pain and (at the extreme) megadeaths. Their starting position is at bottom the same as Sam Harris’s in The Moral Landscape; ethics exists to guide us to places in the moral landscape where total suffering is minimized, and ethical principles are justified post facto by their success at doing so. Their claim is that NAP is the greatest minimizer.

The philosophically literate will recognize this as a modern and specialized version of the dispute between deontic ethics and consequentialism. If you know the history of that one, you’ll be expecting all the accusations that fly back and forth. The utilitarians slap at the natural-rights people for handwaving and making circular arguments that ultimately reduce to “I believe it because $AUTHORITY told me so” or “I believe it because ya gotta believe in something“. The natural-rights people slap back by acidulously pointing out that their opponents are easy prey for utility monsters, or should (according to their own principles) be willing to sacrifice a single innocent child to bring about their perfected world.

My position is that both sides of this debate are badly screwed up, in different ways. Basically, all the accusations they’re flinging at each other are correct and (within the terms of their traditional debates and assumptions) unanswerable. We can get somewhere better, though, by using their objections to repair each other. Here’s what I think each side has to give up…

The natural-rightsers have to give up their hunger for a-priori moral certainty. There’s just no bottom to that; it’s contingency all the way down. The utilitarians are right that every act is an ethical experiment – you don’t know “right” or “wrong” until the results come in, and sometimes the experiment takes a very long time to run. The parallel with epistemology, in which all non-consequentialist theories of truth collapse into vacuity or circularity, is exact.

The utilitarians, on the other hand, have to give up on their situationalism and their rejection of immutable rules as voodoo or hokum. What they’re missing is how the effects of payoff asymmetry, forecasting uncertainty, and decision costs change the logic of utility calculations. When the bad outcomes of an ethical decision can be on the scale of genocide, or even the torturing to death of a single innocent child, it is proper and necessary to have absolute rules to prevent these consequences – rules that we treat as if they were natural laws or immutable axioms or even (bletch!) God-given commandments.

Let’s take as an example the No Torturing Innocent Children To Death rule. (I choose this, of course, in reference to a famous critique of Benthamite utilitarianism.) Suppose someone were to say to me “Let A be the event of torturing an innocent child to death today. Let B be the condition that the world will be a paradise of bliss tomorrow. I propose to violate the NTICTD rule by performing A in order to bring about B”.

My response would be “You cannot possibly have enough knowledge about the conditional probability P(B|A) to justify this choice.” In the presence of epistemic uncertainty, absolute rules to bound losses are a rational strategy. A different way to express this is within a Kripke-style possible-futures model: the rationally-expected consequences of allowing violations of the NTICTD rule are so bad over so many possible worlds that the probability of landing in a possible future where the violation led to an actual gain in utility is negligible.
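One way to write the loss-bounding point down, with my notation and purely illustrative utilities rather than anything the post specifies: letting p = P(B|A), performing A beats the status quo in expectation only if

\[ p\,U_{\text{good}} + (1-p)\,U_{\text{bad}} > U_{\text{sq}} \quad\Longleftrightarrow\quad p > \frac{U_{\text{sq}} - U_{\text{bad}}}{U_{\text{good}} - U_{\text{bad}}}, \]

where U_good is the world in which the violation works (paradise bought with the atrocity), U_bad the world in which it does not (the atrocity with nothing to show for it), and U_sq the status quo. The claim above is then that U_bad is catastrophic, that no honest estimate of p clears the resulting threshold, and that p itself is too uncertain to stake such losses on, so an absolute rule that bounds the downside is the rational policy.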

My position is that the NAP is a necessary loss-bounding rule, like the NTICTD rule. Perhaps this will become clearer if we perform a Kantian universalization on it, turning it into “You shall not construct a society in which the initiation of force is normal.” I hold that, after the Holocaust and the Gulag, you cannot possibly have enough certainty about good results from violating this rule to justify any policy other than treating the NAP as absolute. The experiment has been run already, it is all of human history, and the bodies burned at Bergen-Belsen and buried in the Katyn Wood are our answer.

So I don’t fit neatly in either camp, nor want to. On a purely ontological level I’m a utilitarian, because being anything else is incoherent and doomed. But I respect and use natural-rights language, because when that camp objects that the goals of ethics are best met with absolute rules against certain kinds of harmful behavior, they’re right. There are too many monsters in the world, of utility and every other kind, for it to be otherwise.