Experience Machines

Robert Nozick’s experience machine asks us whether we would enter a machine that would give us “any experience [we] desired.” Essentially, it is a virtual reality machine that makes us think we’re writing a great novel, or touring the world as a classical pianist, or whatever. We would gain nothing from this; we don’t learn to write or play an instrument, it just feels as though we do. Nozick claims that most people would not spend their lives in the machine. Importantly, he also adds that while you’re in the machine, “you’ll think that it’s actually happening.” This means you won’t remember deciding to go into the machine, or any aspect of your life before the machine that would contradict the experience you’re having within it.

There are two objections to this that I haven’t seen discussed much (though it’s possible I’ve just missed them! If anyone knows of literature to the effect of the following objections, please let me know). Actually, the first has at least been mentioned; I’m not familiar with anyone discussing the second:

  1. People are terrible predictors of their own behavior. The vast majority of people surveyed believe they would not, in a Milgram-experiment situation, give electric shocks to someone after that person had protested. But the Milgram experiment (and several variant replications) shows that the vast majority of people would give the shocks. John Doris’ Lack of Character, whatever its other flaws, is a great compendium of these sorts of experiments, and they show that in moral cases most people are not very good at predicting how they would choose if presented with a real-world, as opposed to survey, choice. It’s entirely possible that most people, if they lived in a world where experience machines were available, would plug in. Perhaps we should go further and claim that, if they lived in such a world and there were no social stigma against going into the machine, they would plug in. And studies of conformity suggest that if they lived in such a world and there were a social stigma against not plugging in, most people would plug in. So I’m not sure that Nozick proves his point, since his argument depends on most people not plugging in to the machine.
  2. The less discussed problem is that the machine makes us forget the choice of going into it. Is part of the problem with the experience machine that we have to give up our memories to enter it? What if we changed the thought experiment so that you could perfectly well remember that you had a life before and made a decision to go into the machine? Presumably, you might then not want to go in because you’d fear being troubled by memories of your former life. But that’s not a point against hedonism; rather, it’s a point for it, since the reason you wouldn’t go in is that doing so might cause mental distress. Or suppose you knew you were going in, but the machine would take care of any emotional distress. Would that be better?

What I’m trying to get at with the cases in 2 is that the machine produces a certain loss of self. If you forget your former life, it seems that, in some important sense, it isn’t you who is in the machine, just some sort of loose continuer of you. We think of ourselves as tied to our memories: Nichols and Strohminger’s recent research has shown that the vast majority of people associate total memory loss with loss of identity, and presumably partial memory loss would, at some point on the spectrum, also be taken as loss of identity. Would you still be you if I deleted the last seven years of your life? Or rewrote your memory so that you believed you arrived at where you are now by a series of decisions, over the course of your life, that you did not, in fact, make? In other words, what if I radically altered your self-narrative, as the experience machine clearly must do in order to have you believing that you are a world-famous guitarist or billionaire software developer? I imagine that those sorts of alterations to memory would make people fairly uneasy, and at least call into question to what extent it would actually be me having these experiences.

If we retained our memories, but the machine managed our feelings so that we weren’t distressed by the loss of our former lives, or by the memory of what we’d left behind, that, too, could count as loss of identity. Again, Nichols and Strohminger’s surveys found that people think loss of moral character counts as loss of identity. Without debating whether they are right about this, the important thing is that people believe it, and feel it to be true. So if there is a reluctance to go into the machine, it might have to do with this sense of identity loss: that it wouldn’t be me in there. If the machine alters how I feel about a moral decision, I suffer a profound loss of moral character, understanding “moral character” here as the aspects of my personality that I think of as (1) reflective of my values and (2) constituting who I am in relation to those values. Surely, on almost any theory of valuing, having emotional attitudes toward things is part of valuing.
