Can a Self Persist Across Time?

Hume famously noted that when he introspected, he found no self, just a constantly shifting “bundle” of impressions. The self, such as it was, was at best a fleeting thing, having identity only as long as the bundle remained consistent, and losing that identity as the bundle shifted to new thoughts, feelings, and intentions.

Galen Strawson took this idea to imply that instead of a self, we have a series of selves, “pearls” on a string, as he first conceived them, each passing to the next. The self was a thing that lasted, on Strawson’s account, only for a few seconds or minutes, until its character was dissolved by the next set of impressions.

Others, like Locke and Parfit, have noted that there is a somewhat greater consistency to the self in that it draws upon the same memories and evinces the same habits, at least for a time. These, too, drift, but they last far longer than the seconds or minutes of Hume’s bundles or Strawson’s pearls.

All of these accounts focus solely on the experiencing self. But what if we looked at the underlying hardware, as it were? If Hume and Strawson are right, then we cease to have selves when we sleep. And when we awake, we are new selves, not at all the selves that went to sleep.

But imagine working on a computer. You have a paper in progress, you’re keeping a few browser windows open, there’s a game in another window, and the hard drive is full of all your previous works. When you put the computer to sleep for the night, it ceases to actively attend to any of these tasks, but they’re ready and waiting when the computer is roused in the morning.
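The suspend-and-resume picture behind this analogy can be sketched in a few lines of Python. This is only a toy illustration; the “session” contents and names are invented for the example, and real sleep mode preserves state in RAM rather than by serialization:

```python
import pickle

# A toy "mental life": tasks in progress plus stored material.
session = {
    "active_tasks": ["draft paper", "browser tabs", "game"],
    "stored_works": ["all previous works"],
}

# "Sleep": the state is serialized. Nothing is being actively
# processed, but the state persists as inert bytes.
saved = pickle.dumps(session)

# "Waking": the same state is restored and activity can resume
# exactly where it left off.
restored = pickle.loads(saved)
assert restored == session  # the state survived the gap in activity
```

The point of the sketch is simply that a system’s state can exist, unchanged and fully recoverable, during a period in which no processing is happening at all.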

Similarly, our brains hold much of our mental life in place. Certainly not with the precision (and parsimony) of the computer, but when we go into sleep mode, while some active processes are lost or start to degrade, much remains for when we restart in the morning. And if we think of the self not merely as the experiencing self, the passing show or bundle, but as the entire organism of subjective potential, contained, at least largely or in important ways, in the matter of the brain, then there is a self across time. And though it isn’t doing much while we sleep (at least during dreamless sleep), that doesn’t mean it doesn’t exist. It’s there, just as our computer’s memory state and capacities are there, waiting to be woken.

Perhaps a non-active self makes no sense; perhaps that’s not what “self” means. But even if by “self” we only mean “self-experience,” there must still be some self to be experienced, and it seems profitable to think of it as a thing that can sometimes be doing nothing. We don’t think of other things as vanishing when inactive; perhaps the self is best thought of as simply quiescent, rather than gone, when it is not in service. And if that’s the case, then the “bundle” is merely some activity of the self, and not the self itself. The self could be at once the active content of consciousness, our storehouse of ideas and memories, and our capacities to act upon them, all of which have a persistence across time that any fleeting activity gives only a glimpse of.

Mind Transfer For Fun and Profit

I tend to think that the psychological and physical theories of personal identity are both insufficient. There are cases where psychological continuity wouldn’t count as identity (say, we find a way to duplicate brains), and cases where physical continuity clearly fails as a criterion (for example, most people would equate a complete brain-wipe with death, even though the body persists).

But if we assume that our theories are determinative for our own identity, we could do the following: we find a young, healthy physical continuity theorist and an elderly, unwell psychological continuity theorist, and offer them this deal: the psychological continuity theorist gives all of his or her money to the physical theorist, and we then reprogram the physical theorist’s brain to be an exact match of the psychological theorist’s. The psychological theorist then gets to live on in the physical theorist’s body, at least on the psychological theory, and the physical theorist gets a lot of money. Both would think of themselves as continuing to exist, so it’s a win-win.

Personal identity, the identity of a sculpture, animalism, psychologism, and necessary parts.


Suppose a famous statue, say, Michelangelo’s David, was in need of restoration. The marble under the base was rotting (let’s just assume marble can rot). So the restorers dig out the rotten section, a part that would be completely invisible, and find that it goes deep. They have to hollow out the legs and torso and pretty much the entire statue. They never touch the surface, and from the outside the statue looks exactly the same. All the marks of Michelangelo’s work are there, and every surface element is the way it is because Michelangelo sculpted it that way. But the inside is completely gone. To stabilize the statue they refill the inside with a powdered marble and some fine liquid adhesive, which sets, providing what is essentially a new marble interior that bonds perfectly with the shell of the statue. Though the surface layer is untouched, the vast majority of the statue’s mass has been replaced, but of course, it was mass that was never seen by viewers.

Is this the same statue?

Suppose instead that the surface of the statue is rotting. Restorers carefully, piece by piece, remove the surface, but only the surface layer (say, no more than 1cm of depth). They replace it with a replica finished by contemporary restoration experts, who are, of course, highly trained sculptors. In the end, it would be very hard to tell the difference between the pre- and post-restoration statue. The great bulk of the statue, of course, is unaffected: it’s only the thin surface layer, the part that Michelangelo touched, that is affected.

Is this the same statue?

The continuity of the statue in the first case is perhaps analogous to what psychological identity theorists, or those who hold that the person is the brain, would hold about persons. Though the vast majority of the matter is removed, the person or statue retains identity, because there’s a special, relatively small part which carries what is essential to identity.

The continuity of the statue in the second case is perhaps analogous to what an animalist would hold: the majority of the mass of the statue is undisturbed, and the statue would still be able to support its own weight, just as the person’s body remains undisturbed, and is capable of carrying on basic functions, if the cerebrum were removed and replaced with a duplicate. The persistence of the major, structural elements is what’s important, not the small element that is commonly (though, according to the animalist, mistakenly) thought to maintain the identity of the statue or person.

My guess is that almost everyone would say that it is the same statue in the first case, because what makes it the David are the elements that Michelangelo worked on. So, again guessing at the general response, the convention concerning the meaning of “same statue” is probably tied to the marks made by the artist. In the second case, I think there’d be more disagreement. This would correspond to studies on what people think counts as the same person; they generally think of persons as constituted by mental content.

One important thing to note: a poll is not the same as metaphysics, but a person is very different from, say, an electron. The term ‘person’ is the result of social agreement about language usage over many years. Persons were not discovered in the way electrons were, and what does and doesn’t count as a person, or the same person, is subject to far more force of convention than what does and doesn’t count as an electron. So we do need to be sensitive to common usage, because if we wind up with a version of ‘person’ that is strongly at odds with that usage, we may well no longer be talking about persons, but about some other thing that we have invented for the purpose of, say, a consistent philosophical position.

The necessity of identity and its reliance on connectedness

Persons across time could have identity, unity, or connectedness:

  1. Identity: Person at time T1 and person at time T1+n are the same person. They are identical.
  2. Unity: Person at time T1 and person at time T1+n are two parts of one larger cross-temporal entity. The two “time slices” are not identical to each other, but are component parts of one, four-dimensional thing.
  3. Connectedness: Person at time T1 and person at time T1+n have some other relation which connects them: they may share memories, or bodily parts, or be connected by a chain of overlapping memories, or overlapping bodily parts, or they may hold the same “office,” that is, the same place in social, political, or legal relations. They are not strictly identical, but they may be the same person for specific purposes: for example, perhaps psychological and physical continuity would be sufficient for legal responsibility, or mere physical continuity would be sufficient for continued ownership of goods.

On some level, 1 is impossible. Nothing has absolute sameness across time; the platinum-iridium kilogram prototype, for example, seems to have grown lighter (or its duplicates grew heavier). People undergo much greater change than platinum-iridium cylinders. Absolutely strict identity won’t work for them, so we have to decide whether personal identity is carried by some subset of the things that the person is made up of, or is, contra the strict notion of identity, amenable to changes in virtually any part.

The unity answer doesn’t really help, because it already assumes that the person at T1 and the person at T1+n are slices of the same person, and that’s what we need to get at. That is: how do I know that these two person-slices are slices of the same person? Surely even a four-dimensional person has conditions which he or she cannot survive, and must come to be and cease to be at various points in time. The rotting corpse of Johan is, for most purposes, not Johan. Nor are the various bits of matter, floating around in the early universe, which will eventually make up Johan’s body.

Connectedness seems to be the right area of investigation for sameness of persons. What are the conditions of connection needed for a person to be the same person as something that exists at another time? This is, indeed, where most of the theories come down: animalism, physicalism, and psychological theories all say that some part of their preferred section of the person must be preserved, though by no means the entirety of that part. No animalist claims that the entire body must be preserved; animalists focus on continuity conditions for living organisms, and must make some arbitrary choices about beginning and end points. Thus, Olson claims that a person is dead once her body has died, even if her brain is still functioning and we have some system of communicating with that brain (say, a neural implant hooked up to audio input and output systems that allow speech and hearing). So a brain transplant, on this account, is the end of a person. A sudden and complete memory wipe is probably the end of a person on psychological accounts, even if the body persists and can be rehabilitated.

The question then becomes: why are we asking about personal identity or survival? Is it to apportion blame, legal punishment, ownership of goods? To re-identify loved ones? To know if my memories are in fact my memories, and not the memories of some other that I’ve inherited?

Notably, when we do legally punish, under the vast majority of systems of law we only punish people believed to be, in some important sense, identical to the person who committed the crime. I don’t punish a twin for her sister’s crime, for example. And we want our friend to be the very person who has identity with the person we previously identified as our friend, not someone who merely looks and acts like him.

So it seems that, for some purposes, a kind of identity is called for, though it will not be strict identity. Strict identity may be, as David Shoemaker says, “the reddest of herrings.” But identity, more broadly construed, is central to our concerns.

Extended mind/collective intentionality

In some ways, extended mind and collective intentionality are highly distinct positions: the extended mind is always some one person’s mind, whereas collective intentionality always involves a group. But:

Suppose X and Y work in the same lab. X works days and Y works nights, and they’ve been working on the same project. They share a lab notebook. Each morning X reads Y’s entries, and at night Y reads X’s. On Clark’s extended mind hypothesis, it seems that X and Y share a mind-part. While X has a mind separate from Y and Y from X, they also have overlapping minds in that the notebook is part of both of their minds. Further, since Y and X both write into the notebook, they intrude upon each other’s minds and partly constitute each other’s mental content.

Since they are working on the same project, they could be said to have collective intentionality, in Searle’s sense, even though they do not act at the same time. They coordinate their activity partly through the mediation of the notebook, but they also maintain a part of their shared mental content in the notebook. Further, they clearly are working towards the same end (the research project) and their individual actions (checking a reading, adjusting an instrument, etc.) are best described as intentionally directed towards the shared project. X only adjusts the instrument because he intends to do the research project.

Of course, at some point Clark’s extended mind thesis starts to implicate large groups of people, perhaps everyone, in one shared extended mind, via institutions like the science of chemistry, with its shared journals, research facilities, and results, and their propagation through newspapers and the internet. This would also include collectively held content, like child-rearing techniques that are well known by some members of the community, who are then consulted when necessary, with the consulters becoming more aware of these techniques. There are large networks of experts (doctors, DIY handbook writers, online software FAQ writers) who can be consulted as needed when an individual mind seeks to expand its capacity to deal with some particular situation (illness, building a house, fixing a glitch in a web browser).

But these don’t necessarily imply collective intentionality in the way that the notebook does; the notebook, and cases like it, provide a strong example not just of extended mind, but of overlapping minds in shared intention.

Scientist’s and Engineer’s Approaches to “Solving” Philosophical Problems

There are often two paths in presenting an answer in philosophy:

  1. One claims that it is the correct answer, and that all the uses of “knows” or “good” or “true,” etc., that don’t meet this answer have been mistaken from the start, even if these are common uses of the term. (This is generally a methodist account; it accords with the saying that philosophers like to give a special definition of a word and then claim that all who use it in its non-special, common sense are misusing it.)
  2. One claims it is the best answer. That is, one admits that one is stipulating, and claims that this stipulation produces good results: we get something consistent in our discussions of “knows” or “good” or “just,” etc.; or we get a version of the term or concept that is valuable in a way common uses are not, or a model of the phenomenon that is enlightening or opens up new and helpful ways of conceiving it. This is, for example, the claim of the virtue theorists with regard to “knows,” or what Kendall Walton claims when he asks us to think of mimesis as make-believe.

We can think of the first method as that of the “philosophical scientist.” The truth is sought, and thought to be discovered. The second path belongs to the “philosophical engineer.” There’s a problem, and a solution, though hardly the only possible solution, is offered, because it gets the job done particularly well.

Robot and Frank: Crime and Punishment for Robots

In the movie Robot and Frank, an elderly man (played by Frank Langella) who is suffering from mild Alzheimer’s is given a helper-robot, which he calls Robot. Frank enlists Robot’s help in committing a series of burglaries, but the police close in, and are going to use the recordings in Robot’s memory to prosecute Frank. Robot suggests deleting his memory, and Frank is reluctant because Robot has become his friend.

Unlike in common personal identity stories involving memory loss, it’s not Robot’s subjective sense of self that is altered by the removal of his memory; he has no subjective sense of self. Rather, Frank believes his buddy, Robot, will ‘die.’

Further, Robot’s personality and moral values don’t change. It’s just that he’s not the same Robot to Frank, because what was important in their relationship was the shared memory of their heists together. So, unlike some common intuitions about personal identity (as in Strohminger and Nichols [1]), what matters here is just memory, not moral orientation.

AI identity where the AI has no subjective experience is a good test case for the limits of mental and physical content as identity bearers. Such cases meet Wilkes’ [2] criterion of being “real”: current AIs actually exist, have no subjective experiences, yet have memory, and we can become attached to them. Of course, many (most?) would assert that the AI is not a person if it lacks self-awareness. (I assume eliminative materialists would not have this problem, and would need some other criterion or criteria for personhood, if they care to retain the concept; and the concept is important for legal purposes.)

An interesting extension of this problem occurs in the recent case where an AI bought drugs [3]. Who is guilty of this crime? If we erase the AI program that bought the drugs but keep the computer it ran on, is the computer guilty? Is no one guilty? Is only the programmer guilty, even though he did not know if the AI would buy drugs, because he released the AI and knew it could buy drugs?

The AI obviously did not use its own money, since it’s not capable of owning money under current law (notably, the law seems to hold that only persons own things, understanding legal persons to include corporations and governments, i.e., entities with legal standing; this opens a question about animal personhood: once an animal counts as a person, as in the Spanish decision on apes, can the animal own property? Can we say that some stretch of land belongs to a tribe of apes?). If it can’t own anything, then it lacks an essential characteristic of legal personhood.

What about Robot’s crimes? Frank asked Robot to commit crimes with him, but are Robot’s programmers guilty? If the programmer who set loose the drug-buying AI did not intend for it to buy drugs, but gave it funds and a tendency to follow orders, and someone else asked it to buy drugs, who would be guilty, the programmer or the one who asked? (Assume the one who asked did so not on his own behalf; he was not going to collect the drugs, he just wanted the AI to buy them.)

Since the AI used someone’s money to buy drugs, who owns the drugs? If drug possession is illegal, it seems the owner of the money is then liable, as she owns the drugs. But can she claim that she did not intend to buy drugs? What happened with the money, then? Was it stolen? If I lend you money and you buy drugs, I’m not guilty (assuming I did not know your plan). If I put you in charge of my money and you commit a crime with it, I assume I’m not guilty…

What is Robot’s guilt or responsibility?

Can Robot be punished? (Is that even possible for a being without self-awareness?)

If Robot could be punished, would wiping Robot’s memory make him “die,” so that he cannot be punished, or would his body be punished? It seems we might find Robot more culpable because he has a body: not on good grounds, but as a continuation of our way of thinking about guilt and responsibility by analogy with other responsible beings.

The question of robot and AI guilt and responsibility will need to be worked on in coming years, assuming AIs advance in sophistication and independence to the point where we cannot strictly blame the creator of an AI for all of the AI’s actions. Consider, too, an AI that outlived its human creator and then started committing crimes. In this case, there is clearly no human to punish, and if we think that punishment makes no sense for AIs, then there is no one to punish. A number of interesting possibilities open up here.

1 Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159-171.

2 Wilkes, K. V. (1988). Real people: Personal identity without thought experiments. Oxford: Clarendon Press.

3 “This little bot was busted for drugs”


Is Harming Only Oneself Sometimes Also Harming Another?

There are some cases where harm to self is effectively harm to another, because the harm produces a person so radically different from the person inflicting the harm that he or she loses relevant continuity of identity with that prior person. This can be exacerbated by producing, also, a person with diminished capacity to consent to the harm.

Part 1: Wrongful Life

Imagine that people intentionally damage their reproductive systems so that they will only produce offspring with a genetic condition that causes severe pain throughout life. Then, assume further that these people intentionally reproduce. Have they done a wrong in bringing about a person who (1) has no say in his or her coming-to-be and (2) will have a life that is unbearably painful? We can also imagine this with some sort of severe disability, one that makes life potentially unlivable, or some other disorder.

I think many would hold that this is something we should not do: it’s one thing to have and raise a child who has a severe health problem, it’s another to intentionally produce such a child. If you think it’s justified to do so, maybe you can push the thought experiment until it becomes unjustified: perhaps moving to a warzone where child rape and torture are common occurrences, and then intentionally having a child while knowing that you will die within a few months of the child’s birth, and that the child will be left with abusive relatives. Or some such: make up your own Dickens’ story.

If we agree that some such version is wrong, we can agree there is something like a “wrongful life” case.

Part 2: Self Determination

Some people will hold that there are wrongful life cases, and also hold that people have a great deal of leeway about what they do to themselves. So we might hold that it’s morally permissible to get facial tattoos that will make future employment extremely difficult, or to engage in dangerous “extreme” sports that have a very high risk of causing permanent disability, or to do other things that put our future welfare at risk.

Liberals and libertarians generally give people a great deal of leeway to self-harm, but very little to harm others. This is, in some sense, the most basic principle of liberalism, at least as understood by the tradition coming out of Mill. In general, this means they would prefer that there not be government regulations against self-harm, or that such regulations would require special justification (for example, that they apply only to people with diminished agency) or only be a by-product of regulations designed to prevent harm to others (for example, a pro-euthanasia libertarian might agree that people should not commit suicide in public to avoid traumatizing others.)

Part 3: Wrongful Self-Creation

Suppose I, an unusually cruel person, wish to cause someone intense and long-lasting pain. But I respect the law, so I can only do this to myself. Yet I do not wish to experience this pain. So I arrange the following: I will be injured in the spine in a manner that will cause me excruciating pain for the rest of my life, and will also leave me severely disabled. But I also arrange to have my memory wiped.

The latter is not impossible: Su Meck[1] and the victims of Donald Ewen Cameron[2] suffered near-total memory wipes due to head injury in the former case and the use of drugs and sensory deprivation tanks in the latter.

So what will happen is that I will produce a person with no knowledge of how he was produced, no memory of any decision to become who he is, and no real psychological connection to me, assuming the cases above are representative (for the purpose of the thought experiment, we can just stipulate that this will be the effect). In Su Meck’s case, for example, people who knew her before say that the person who emerged after the memory wipe had a very different personality from the one before. Further, this person will be in constant pain.

It seems that if I think, as liberals and libertarians do, that I should be allowed to self-harm, and I also think that there are wrongful life cases, as in Part 1 above, I may have conflicting intuitions about how to assess this case.

On the one hand, there is a description of what is done here that makes it seem like it is merely self-harm, and not harm to others. We can even add that I have made enough money that I can put it towards the care of this future self, so that society is not burdened by his existence. In fact, there’s some societal benefit, in that people are employed to care for him.

On the other hand, it seems that this might no longer be me, in any meaningful way. I’ll have lost all memory of, and connection of character to, the prior person. So is this a case of wrongful life?

Note that the full memory wipe means that the person will have to be taught to talk, read, write, and so on. So this is highly analogous to creating a child, and, in this case, intentionally creating a child with a very low quality of life.

Part 4: Conclusion: Edicts Against Self-Harm as Edicts Against Other-Harm

Granted that there are some cases in which harm to a future self is effectively harm to another, it might be the case that many such cases exist. Given enough time, many if not most people will undergo radical changes in personality, and in any event most of our memories are not retained for very long, and those that are retained are increasingly unreliable as time progresses. A foolish decision I make when I’m 15 might seriously impair my future life, and I could well be said to be harming some future person who is, at best, only mildly continuous with me.

While I may have bodily continuity with this future person (though we could imagine cases where I don’t, say due to extensive prosthesis including slow replacement of brain parts with computer parts) I may, as in Derek Parfit’s Russian nobleman case, have little in common morally or psychologically with this future person.

Can I be forbidden, either morally or legally, from making decisions that harm my future self, even if I agree that I have a great deal of moral and legal leeway to harm myself? That is, taking a liberal/libertarian ethic, can I still argue that there are things I cannot do to myself, even though they harm only me (or some future me) and not anyone else? I think, given the above, some sort of argument can be made in at least some cases. Those would be cases where I create a future self who would not have consented to the harm he receives, or who would be unable to consent.

Thus, there may be a legal case, even in strictly libertarian/liberal terms, for laws against self harm, at least when the self-harm is related to a future person with limited continuity with the person committing the harm.

[1] See her autobiography, “I Forgot To Remember,” Simon & Schuster, 2014.

[2] There’s a brief rundown of Cameron’s work here: . Fuller discussion can be found in Anne Collins’ “In the Sleep Room: The Story of CIA Brainwashing Experiments in Canada.” Toronto: Key Porter Books. 1988/1998

Enhancement vs. Therapy: Why is the Normal Normal?

Some enhancement conservatives like Michael Sandel and Francis Fukuyama worry that if we allow unrestricted freedom to enhance human bodies, we may lose our humanity, and the meaning we find in our lives. For Fukuyama, for example, our finitude gives us meaning; for Sandel, it’s the sense that life is a gift, and not something we created.

But both accept therapeutic use of enhancement technologies. If a person were born with a genetic abnormality that caused very low IQ, for example, Fukuyama and Sandel would both accept that we should be allowed to use, say, a genetic enhancement technique to “correct” this, and give the person a normal IQ.

But who’s to say what’s normal and what’s low? In general, the conservative answer appeals to the average person, and what sorts of attributes such a person would have. The boundaries between disease, disability, and mildly lower levels of ability are hard to map. But if something can be labelled an “abnormality” by some means, Sandel and Fukuyama accept that it can be treated.

This would rule out, for them, giving a person with an average (say, 90-110) IQ an enhancement to genius level. Probably Fukuyama and Sandel’s IQs are notably higher than the average level, but they don’t want the average person to be able to enhance to their level using an intervention like drugs, genetic engineering, or some kind of neural implant. Presumably, they’re OK with better schooling, being raised by educated parents, living in a house with books, and having parents who have time to give personal attention to children, all of which actually do account for a lot of IQ difference.

Now, suppose we find out that the “average” IQ is the result of a widespread mutation that occurred because, let’s say, the Black Plague injured the standard genome some 500 years ago. Most people have the damaged gene; some, like Sandel and Fukuyama, do not. Now it seems that most people are “abnormal,” at least if we sum across history. Now would it be ok to “cure” them of this bad mutation so they could have Sandel/Fukuyama-level IQ?

The point being: we have no reasonable historical baseline for what’s ‘normal’ in many traits. Even prior to the widespread use of steroids, athletic records were continually broken in the modern era; IQ has risen drastically in the last century (cf. the “Flynn effect”); and life expectancy has nearly doubled in the last 200 years. While the latter is partly due to drastically decreased infant mortality, remaining life expectancy at age sixty has more than doubled in the last 200 years, from 9 to 19 additional years (i.e., from an expected age of 69 to 89). Perhaps enhancements are merely correcting for errors in our environment that have damaged our bodies; it’s not as though someone from 1800 couldn’t have lived to be 89; they probably just lacked our enhancements (better education, clean water, access to medical care, etc.)

Without knowing why something is “normal” or “average,” it seems unfair to declare that average to be a baseline, and any therapy that improves above this baseline to be an “enhancement.” It could simply be a correction, that is, a therapy.


Letting die vs. killing

One point I should have included in the previous post, but wasn’t sure how to work in, was the distinction between letting die and killing. A strong pro-lifer could claim that it’s morally permissible (if not exactly laudatory) to let someone die, but not permissible to actively kill. Thus, they could claim they have no duty to sign up for organ donation, though perhaps it would be nice if they did.

However, I doubt they would take that tack in the drowning-child case: if you see a child drowning, and you could save her, but choose not to because, let’s say, it’ll get your clothes dirty and you have a social engagement, have you committed a moral infraction? If the answer is “yes,” then it seems you may still be obligated to at least give blood regularly.

That is, some minor inconvenience is no excuse for failing to aid in a way that would save another’s life.

Now, if the pro-life proponent holds the above, then it seems that he is committed to signing up for organ donation, since the inconvenience of pregnancy far outweighs that of a missed social engagement. Unless, that is, the pro-life advocate holds that although there is both a duty to aid and a duty not to kill, those duties operate by very different rules. That is, the duty to aid stops when the inconvenience is large enough, but the duty not to kill does not.

But at what point, then, does the duty switch off? Suppose I could save the child but choose not to because I’m on vacation and will miss a plane home, and will have to stay at some unpleasant, but not terribly dangerous, location for a week. Is it OK to let the child die? What if saving the child means I’ll miss an opportunity that would have increased my income by 40%? That’s a huge inconvenience. Am I now allowed to let the child die? I doubt the strong pro-life advocate would say, “Well, if you’re going to a job interview, you should just let the child die.” I’m guessing it would be pretty hard to come up with an inconvenience, short of serious physical harm to self or others, that would morally permit one to let the child die.

In the case of pregnant women, states have added duties that seem to go beyond forbidding active killing, and include aiding the embryo/fetus. Some states (Tennessee, Alabama, Utah) will prosecute a mother who is drug-addicted, becomes pregnant, and does not stop taking drugs, if those drugs might harm the infant. Mothers have been prosecuted for the deaths of children born prematurely in these states.

Here we are actively asking the women to do something, rather than refrain from doing something: we ask them to give up drugs. This is a serious doing, and not merely a not-doing, since it takes considerable effort to overcome an addiction. If you wish to ignore the effort of quitting an addiction, you could claim that the drug-taking is the active doing, and that not taking is merely not-doing. I think this is obviously wrong, but to pursue it:

Let’s reverse the drug-addiction case: Suppose a mother knows that she must take a certain vitamin, or her baby will have a 40% chance of dying shortly after birth. Would the strong pro-lifer accept that the mother has no duty to take the vitamin, because in not taking it she is not doing anything to harm the child, she is merely refraining from doing something? I doubt it.

I think there is some wiggle room for the strong pro-lifer here, but he will be put in a bad position if he adopts the strong “letting die vs. killing” distinction. He must allow the mother to neglect her health, eat far too little, and engage in behaviors that would endanger a fetus or embryo, as long as these behaviors are omissions of acts, and not acts.