Imagine the following situation:
A woman, let’s call her Su, is hit on the head and her memory and personality are destroyed. Over the course of several years, she acquires new memories and a new personality as her brain heals and she is trained back up to adult levels of competence in most skills. Notably, all of this actually happened, so, so far, we’re not in a science-fictional thought experiment.
Choice 1: first person
Now, imagine (and this part has never, to the best of my knowledge, occurred) that a surgeon tells the woman and her family that he can finally “heal” her. She will have all of her memories and personality restored, but the procedure will:
Scenario A: wipe out her current memories and personality, so she (or whoever) will awake exactly as before the accident, but thinking no time has passed.
Scenario B: wipe out her current personality, but not her current memories. When she (or whoever) wakes up, there’ll be an odd recognition of acting quite strangely for many years, and a sense of being restored, but no gap in time. However, the person who awakes will have trouble recognizing herself in her actions, emotions, and responses in the time since the accident.
Would Su, upon being told about the surgery, want it? I would guess she would refuse (we could, of course, ask the real Su, but the question is not so much what Su would do as what people in general are likely to choose). In the U.S. there are strong rights to refuse medical treatment, so there’s no ethical problem here. It seems most likely that she would refuse Scenario A, and, based on personal-identity x-phi work like that of Nichols and Strohminger, also very likely that she would refuse Scenario B. I would think that people would think of themselves as destroyed in both A and B.
Choice 2: third person
Now, imagine, instead, that after acquiring the new memories and personality, Su is again struck on the head. She is in a coma, and a surgeon comes and tells her family that they have two choices:
1. he can do a surgery which will enact Scenario A, or
2. he can do a surgery which will restore her to how she was immediately before the most recent blow to the head.
Does the family have a right to choose option 1, destroying NewSu?
Suppose the surgeon also offered them a third option:
3. he can do a surgery which will enact Scenario B.
Is this the moral choice? Is it any better, from NewSu’s perspective, than option 1?
It seems that in options 1 and 3, NewSu is destroyed. However, Su is resurrected. It seems like this might be a moral toss-up for that reason. But in the most similar cases, there is a clear ethical solution:
There is almost no scenario in which a third party can decide that you should be sacrificed so that someone else might live. If that’s what’s happening, then the family cannot rightly choose option 1 or 3. Of course, this relies on our thinking of NewSu as the currently existing Su. But maybe she’s just the most recent Su, or the most recent manifestation of Su, if you want to unify them, or some such. In that case, then again there’s no clear answer.
But: if the surgeon told them that NewSu would wake up on her own in about six months as the brain healed, or that he could do the surgery but it would restore OldSu, then the choice is perhaps clearer. As much as the family might want OldSu back, they seem to be intervening in a way that kills NewSu.
Or perhaps in a case like this we have no real moral guidance, as our identity and rights concepts are not prepared for the case. But that alone tells us something about the (lack of) robustness and universality of those concepts, especially the identity concept.
(See _I Forgot to Remember_ by Su Meck for the full story.)
Choice 2 seems to assume some preference for OldSu over NewSu among her family members, so I think the intuition would depend a lot on where that preference comes from. If it’s just that NewSu sees them as strangers and they miss OldSu, I think most would agree with you that the family has no right to choose option 1 or 3 and maybe shouldn’t even be making this choice at all.
But maybe this is the real limit case: suppose that NewSu’s personality is antisocial, self-destructive, or catatonically depressed, and we replace the family with a treating physician or medical ethics board who have no memories of OldSu (but know that she didn’t have these problems). I still feel like choosing option 1 or 3 in Choice 2 would be wrong, but it’s a tougher call.
That’s an interesting extension. I still think the family _probably_ doesn’t have a right to choose to erase NewSu even if NewSu is fairly antisocial. I think they might be more likely to do so, though.
It sounds, though, a lot like that classic critique-of-Utilitarianism case: a fine family woman, a mother and doctor with two small children, is in a coma with organ failure. A vicious and friendless bully enters the hospital, complaining of a minor ailment. No one would miss him if he were gone, and in many ways the world would be a better place. His organs just happen to be tissue-type matches for the woman. Should we just take them? Here the standard intuition, at least among Americans and Europeans, is that we shouldn’t. So maybe, on the same principle, we can’t sacrifice NewSu, even if she’s antisocial, for OldSu.
Are there any circumstances in which you’d sacrifice NewSu for OldSu?
I imagine there must be. There’s always a way to make some sort of fictional example to justify just about anything. But I think the general principle, that we shouldn’t sacrifice X to save Y without X’s permission, is pretty strong.
The question here would be: is NewSu a different person from OldSu? If we are, say, animalists, we might not think so, and then I’m not sure how we make the ethical decision. Maybe we appeal to some ethical theory: Utilitarianism, say, might tell me to sacrifice X if she is particularly bad and Y particularly good. I can’t really say how a deontologist would handle this, though. Clearly, the Kantian cannot sacrifice the life of one person to save another (without the first person’s permission, etc.). But is NewSu a different person from OldSu? Someone who said she was not, and who was a deontologist, might have no guidance here. Do I disrespect NewSu by sacrificing her? What if I know OldSu would want me to do so? What if I had been expressly instructed by OldSu to do so should she ever transform into a NewSu, but expressly instructed by NewSu not to sacrifice her? Do I just take the most recent request? I obviously wouldn’t take the most recent request of someone who had had a psychotic break; I’d look to requests or directives made before the break. In short, I’m unsure how to proceed in this case, which I think is good, because I think it represents a limit case for our concepts.
But I’d still, overall, give preference to NewSu’s directives and requests. I don’t have a tremendously well-worked out justification for that, though.
I’m realizing now how much the phrasing matters. For example, you start out saying “her memory and personality are destroyed,” but then posit that they can be recovered, so maybe it should have been “suppressed”? I think that might have changed my intuitions a little.
Anyway, if we set aside the utilitarian case… I haven’t heard the term “animalist,” but it sounds like you’re positing a rights-based argument… in favor of OldSu? Or just in favor of indifference between them?
I mean, I can definitely imagine an argument for indifference, but why favor OldSu? Just… seniority? It seems to me that even someone with a religious (or whatever) belief about the essential/immutable self would still have to deal with NewSu as a person, not as some kind of soulless phantom.
In fact they might wind up denying the whole premise of these operations, and insisting that they’re really the same person and their memories/personality are more continuous/integrated than it appears, perhaps even to NewSu herself.
Or maybe the fault line is the memory/personality distinction. Personality is a lot more subjective than memory. Maybe giving NewSu’s memories to OldSu, or vice versa, would lead to a convergence back to the same personality?
“Animalism” is the position (most strongly identified with Eric Olson) that persons are essentially animals, and that personal identity, such as it is, must be tracked to the animal. Olson holds, for example, that in the event of a cerebrum swap, identity resides in the bodies, not the brain parts. So if you give Prince the brain of Cobbler, and Cobbler the brain of Prince (well, just the cerebrum, but that should cover memory and personality), then the Prince-body-with-Cobbler-brain is Prince. That’s one of the more extreme results of the position, anyway. Judith Jarvis Thomson holds a pretty similar position, as do a few others.
Personality is tricky, especially in light of the situationist critique (see, for example, Doris, _Lack of Character_), which says that we don’t have stable personalities. There are many who think Doris is wrong, though, and that we can distinguish people by their personalities. Nichols and Strohminger (in the papers “The Essential Moral Self” and “Neurodegeneration and Identity”) have demonstrated that people tend to think of personality as the seat of identity even more so than memory (both score highly, though).
But, yes: why favor OldSu? I can think of some possible reasons, but I don’t generally think they’re dispositive.
The really tricky move here is the one you suggest: saying they’re the same person. This would be the animalist move. It would also, presumably, be the “further fact” move. Further-fact theorists hold that there is something, maybe a soul, not reducible to either physical or mental material, that constitutes identity. Most further-fact theorists tend to think that a human body contains one identity until it dies (and some of them think that identity continues in an afterlife). I tend to think this is wrong, and have arguments against it, but they take some space. In short, though: if all mental content is irrelevant to identity, then personal identity ceases to track our pragmatic concerns and forensic notions, and if that’s the case, then we’re using “person,” “personal,” and “personal identity” in technical senses that are not closely related to ordinary language or to our interests in the problem.