Monthly Archives: October 2015

Experience Machines

Robert Nozick’s experience machine asks us whether we would enter a machine that would give us “any experience [we] desired.” Essentially, it is a virtual reality machine that makes us think we’re writing a great novel, or touring the world as a classical pianist, or whatever. We would gain nothing from this; we don’t learn to write or play an instrument, it just feels like we do. Nozick claims that most people would not spend their lives in the machine. Importantly, he also adds that while you’re in the machine, “you’ll think that it’s actually happening.” This means you won’t remember deciding to go into the machine, or any aspect of your life before the machine that would contradict the experience you’re having within it.

There are two objections to this that I haven’t seen discussed much (though it’s possible I’ve just missed them! If anyone knows of literature making either of the following objections, please let me know). Actually, the first has been at least mentioned; I’m not familiar with anyone discussing the second:

  1. People are terrible predictors of their own behavior. The vast majority of people surveyed believe they would not, in a Milgram-experiment situation, give electric shocks to someone after that person had protested. But the Milgram experiment (and several variant replications) shows that the vast majority of people would give the shocks. John Doris’ Lack of Character, whatever its other flaws, is a great compendium of these sorts of experiments, and they show that in moral cases most people are not very good at predicting how they would choose if presented with a real-world, as opposed to survey, choice. It’s entirely possible that most people, if they lived in a world where experience machines were available, would plug in. Perhaps we should go further and claim that, if they lived in such a world and there were no social stigma against going into the machine, they would plug in. And studies of conformity suggest that if they lived in such a world, and there were a social stigma against not plugging in, most people would plug in. So I’m not sure that Nozick proves his point, since it depends on most people not plugging into the machine.
  2. The less discussed problem is that the machine makes us forget the choice of going into the machine. Is part of the problem with the experience machine that we have to give up our memory to go into it? What if we changed the problem so that you could perfectly well remember that you had a life before and made a decision to go into the machine? Presumably, you might then not want to go in because you’d fear being troubled by memories of your former life. But that’s not a point against hedonism; rather, it’s a point for it, since the reason you wouldn’t go in is that it might cause mental distress. Or suppose you knew you were going in, but the machine would take care of any emotional distress. Would that be better?

What I’m trying to get at in the cases in 2 is that the machine produces a certain loss of self. If you forget your former life, it seems like, in some important sense, it isn’t you who is in the machine, just some sort of loose continuer of you. We think of ourselves as tied into our memories; Nichols and Strohminger’s research, most recently, has shown that the vast majority of people associate total memory loss with loss of identity, and presumably partial memory loss would, at some point on the spectrum, also be taken as loss of identity. Would you still be you if I deleted the last seven years of your life? Or rewrote your memory so that you believed you arrived at where you are now by a series of decisions, over the course of your life, that you did not, in fact, make? In other words, what if I radically altered your self-narrative, as the experience machine clearly must do in order to have you believing that you are a world-famous guitarist or billionaire software developer? I imagine that those sorts of alterations to memory would make people fairly uneasy, and would at least call into question to what extent it will actually be me having these experiences.

If we retained our memories, but the machine took care of our feelings so that we weren’t distressed by the loss of our former lives, or the memory of what we’d left behind, that, too, could count as loss of identity. Again, Nichols and Strohminger’s surveys found that people think that loss of moral character counts as loss of identity. Without debating whether they are right about this, the important thing is that people believe this, and feel it to be true. So if there’s a reluctance to go into the machine, it might have to do with this sense of identity loss: that it wouldn’t be me in there. If the machine alters how I feel about a moral decision, I have a very profound loss of moral character, here understanding “moral character” as the aspects of my personality that I think of as (1) reflective of values and (2) constituting who I am in relation to those values. Surely, on almost any theory of valuing, having emotional attitudes towards things is part of valuing.


Sacrificing one to save another

Imagine the following situation:

A woman, let’s call her Su, is hit on the head and her memory and personality are destroyed. Over the course of several years, she acquires new memories and a new personality as her brain heals and she is trained back up to adult levels of competence in most skills. Notably, all of this actually happened, so, so far, we’re not in a science-fictional thought experiment.[1]

Choice 1: first person

Now, imagine (and this part has never, to the best of my knowledge, occurred) that a surgeon tells the woman and her family that he can finally “heal” her. She will have all of her memories and personality restored, but it will:

Scenario A: wipe out her current memories and personality, so she (or whoever) will awake exactly as before the accident, but thinking no time has passed.

Scenario B: wipe out her current personality, but not her current memories. When she (or whoever) wakes up, there’ll be an odd recognition of acting quite strangely for many years, and a sense of being restored, but no gap in time. However, the person who awakes will have trouble recognizing herself in her actions, emotions, and responses in the time since the accident.

Would Su, on being told about the surgery, want it? I would guess she would refuse (we could, of course, ask the real Su, but the question is not so much what Su would do as what people in general are likely to choose). In the U.S. there are strong rights to refuse medical treatment, so there’s no ethical problem here. It seems most likely that she would refuse Scenario A, and, based on personal-identity x-phi work like that of Nichols and Strohminger, also very likely that she would refuse Scenario B. I would think that people would regard themselves as destroyed in both A and B.

Choice 2: third person

Now, imagine, instead, that after acquiring the new memories and personality, Su is again struck on the head. She is in a coma, and a surgeon comes and tells her family that they have two choices:

  1. he can do a surgery which will enact either scenario A, or,
  2. he can do a surgery which will restore her to how she was immediately before the most recent blow to the head.

Does the family have a right to choose 1, destroying NewSu?

Suppose the surgeon also offered them a third option:

  3. he can do a surgery which will enact Scenario B.

Is this the moral choice? Is it any better, from NewSu’s perspective, than option 1?

It seems that in options 1 and 3, NewSu is destroyed, but OldSu is resurrected. It seems like this might be a moral toss-up for that reason. But in the most similar cases, there is a clear ethical solution:

There is almost no scenario in which a third party can decide that you should be sacrificed so that someone else might live. If that’s what’s happening, then the family cannot rightly choose 1 or 3. Of course, this relies on us thinking of NewSu as the currently existing Su. But maybe she’s just the most recent Su, or the most recent manifestation of Su, if you want to unify them, or some such. In that case, again there’s no clear answer.

But: if the surgeon told them that NewSu would wake up on her own in about six months as the brain healed, or that he could do the surgery but it would restore OldSu, then the choice is perhaps clearer. As much as the family might want OldSu back, they seem to be intervening in a way that kills NewSu.

Or perhaps in a case like this we have no real moral guidance, as our identity and rights concepts are not prepared for the case. But that alone tells us something about the (lack of) robustness and universality of those concepts, especially the identity concept.

[1] See I Forgot to Remember by Su Meck for the full story.