Extended mind/collective intentionality

In some ways, extended mind and collective intentionality are highly distinct positions: the extended mind is always some one person’s mind, whereas collective intentionality always involves a group. But:

Suppose X and Y work in the same lab. X works days and Y works nights, and they’re working on the same project. They share a lab notebook. Each morning X reads Y’s entries, and at night Y reads X’s. On Clark’s extended mind hypothesis, it seems that X and Y share a mind-part. While X has a mind separate from Y and Y from X, they also have overlapping minds in that the notebook is part of both of their minds. Further, since X and Y both write into the notebook, they intrude upon each other’s minds and partly constitute each other’s mental content.

Since they are working on the same project, they could be said to have collective intentionality, in Searle’s sense, even though they do not act at the same time. They coordinate their activity partly through the mediation of the notebook, but they also maintain a part of their shared mental content in the notebook. Further, they are clearly working towards the same end (the research project), and their individual actions (checking a reading, adjusting an instrument, etc.) are best described as intentionally directed towards the shared project. X only adjusts the instrument because he intends to do the research project.

Of course, at some point, Clark’s extended mind thesis starts to implicate large groups of people, perhaps everyone, in one shared extended mind, via institutions like the science of chemistry, with its shared journals, research facilities, and results, and their propagation through newspapers and the internet. This would also include collectively held content, like child-rearing techniques that are well known by some members of the community, who are consulted when necessary; those who consult them then become more aware of these techniques. There are large networks of experts (doctors, DIY handbook writers, online software FAQ writers) who can be consulted as needed when an individual mind seeks to expand its capacity to deal with some particular situation (illness, building a house, fixing a glitch in a web browser).

But these don’t necessarily imply collective intentionality in the way that the notebook does; it and cases like it provide a strong example not just of extended mind, but of overlapping minds in shared intention.

Scientist’s and Engineer’s Approaches to “Solving” Philosophical Problems

There are often two paths in presenting an answer in philosophy:

  1. One claims that it is the correct answer, and that all the uses of “knows” or “good” or “true,” etc., that don’t meet this answer have been mistaken from the start, even if these are common uses of the term. (This is generally a methodist account; it accords with the saying that philosophers like to give a special definition of a word and then claim that all who use it in its non-special, common sense are misusing it.)
  2. One claims it is the best answer. That is, one admits that one is stipulating, and claims that this stipulation produces good results: we get something consistent in our discussions of “knows” or “good” or “just,” etc.; or we get a version of the term or concept that is valuable in a way common uses are not, or a model of the phenomenon that is enlightening or opens up new and helpful ways of conceiving it. This is, for example, the claim of the virtue theorists with regard to “knows,” or what Kendall Walton claims when he asks us to think of mimesis as make-believe.

We can think of the first method as that of the “philosophical scientist.” The truth is sought, and thought to be discovered. The second path belongs to the “philosophical engineer.” There’s a problem, and a solution, though hardly the only possible solution, is offered, because it gets the job done particularly well.

Robot and Frank: Crime and Punishment for Robots

In the movie Robot and Frank, an elderly man (Frank Langella) who is suffering from mild Alzheimer’s is given a helper-robot, which he calls Robot. Frank enlists Robot’s help in committing a series of burglaries, but the police close in and are going to use the recordings in Robot’s memory to prosecute Frank. Robot suggests deleting his memory, and Frank is reluctant because Robot has become his friend.

Unlike in common personal identity stories involving memory loss, it’s not Robot’s subjective sense of self that is altered by the removal of his memory; he has no subjective sense of self. Rather, Frank believes his buddy, Robot, will ‘die.’

Further, Robot’s personality and moral values don’t change. It’s just that he’s not the same Robot to Frank, because what was important in their relationship was the shared memory of their heists together. So unlike some common intuitions about personal identity (as in Strohminger and Nichols [1]), what matters here is just memory, not moral orientation.

AI identity where the AI has no subjective experience is a good test case for the limits of mental and physical content as identity bearers. Such cases meet Wilkes’ [2] criterion of being “real”: current AIs really do lack subjective experience, yet we can become attached to them, and they do have memory. Of course, many (most?) would assert that the AI is not a person if it lacks self-awareness. (I assume eliminative materialists would not have this problem, and would need some other criterion or criteria for personhood, if they care to retain the concept; and the concept is important for legal purposes.)

An interesting extension of this problem occurs in the recent case where an AI bought drugs [3]. Who is guilty of this crime? If we erase the AI program that bought the drugs but keep the computer it ran on, is the computer guilty? Is no one guilty? Is only the programmer guilty, even though he did not know whether the AI would buy drugs, because he released the AI knowing it could buy drugs?

The AI obviously did not use its own money, since it’s not capable of owning money under current law. (Notably, the law seems to hold that only persons own things, understanding legal persons to include corporations and governments, i.e. entities with legal standing. This opens a question as to animal personhood: once an animal is counted as a person, as in the Spanish decision on apes, can the animal own property? Can we say that some stretch of land belongs to a tribe of apes?) If the AI can’t own anything, then it lacks an essential characteristic of legal personhood.

What about Robot’s crimes? Frank asked Robot to commit crimes with him, but are Robot’s programmers guilty? If the programmer who set loose the drug-buying AI did not intend for it to buy drugs, but gave it funds and a tendency to follow orders, and someone else asked it to buy drugs, who would be guilty: the programmer or the one who asked it to buy drugs? (Assume the one who asked did so not on his own behalf; he was not going to collect the drugs, he just wanted the AI to buy them.)

Since the AI used someone’s money to buy drugs, who owns the drugs? If drug possession is illegal, it seems the owner of the money is then liable, as he or she owns the drugs. But can she claim that she did not intend to buy drugs? What happened with the money, then? Was it stolen? If I lend you money and you buy drugs, I’m not guilty (assuming I did not know your plan). If I put you in charge of my money and you commit a crime with it, I assume I’m not guilty…

What is Robot’s guilt or responsibility?

Can Robot be punished? (Is that even possible for a being without self-awareness?)

If Robot could be punished, would wiping Robot’s memory make him “die” so he cannot be punished, or would his body be punished? It seems we might find Robot more culpable because he has a body: not on good grounds, but as a continuation of our way of thinking about guilt and responsibility by analogy with other responsible beings.

The question of robot and AI guilt and responsibility will need to be worked on in coming years, assuming AIs advance in sophistication and independence to the point where we cannot strictly blame the creator of an AI for all of the AI’s actions. Consider, too, a case where an AI outlives its human creator and then starts committing crimes. Here there is clearly no human to punish, and if we think that punishment makes no sense for AIs, then there is no one to punish. A number of interesting possibilities open up here.

1 Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159–171.

2 Wilkes, K. V. (1988). Real people: Personal identity without thought experiments. Oxford: Clarendon Press.

3 “This little bot was busted for drugs,” http://www.thedailybeast.com/articles/2015/01/24/this-little-robot-bought-ecstasy-and-a-passport-online.html

 

Is Harming Only Oneself Sometimes Also Harming Another?

There are some cases where harm to self is effectively harm to another, because the harm produces a person who is so radically different from the person inflicting the harm that he or she loses relevant continuity of identity with that prior person. This can be exacerbated when the harm also produces a person with diminished capacity to consent to it.

Part 1: Wrongful Life

Imagine that people intentionally damage their reproductive systems so that they will only produce offspring with a genetic condition that causes severe pain throughout life. Assume further that these people then intentionally reproduce. Have they done a wrong in bringing about a person who (1) has no say in his or her coming-to-be and (2) will have a life that is unbearably painful? We can also imagine this with some sort of severe disability, one that makes life potentially unlivable, or some other disorder.

I think many would hold that this is something we should not do: it’s one thing to have and raise a child who has a severe health problem; it’s another to intentionally produce such a child. If you think it’s justified to do so, maybe you can push the thought experiment until it becomes unjustified: perhaps moving to a warzone where child rape and torture are common occurrences, and then intentionally having a child while knowing that you will die within a few months of the child’s birth and that the child will be left with abusive relatives. Or some such: make up your own Dickensian story.

If we agree that some such version is wrong, we can agree there is something like a “wrongful life” case.

Part 2: Self Determination

Some people will hold that there are wrongful life cases, and also hold that people have a great deal of leeway about what they do to themselves. So we might hold that it’s morally permissible to get facial tattoos that will make future employment extremely difficult, to engage in dangerous “extreme” sports that have a very high risk of causing permanent disability, or to do other things that put our future welfare at risk.

Liberals and libertarians generally give people a great deal of leeway to self-harm, but very little to harm others. This is, in some sense, the most basic principle of liberalism, at least as understood by the tradition coming out of Mill. In general, this means they would prefer that there not be government regulations against self-harm, or that such regulations would require special justification (for example, that they apply only to people with diminished agency) or only be a by-product of regulations designed to prevent harm to others (for example, a pro-euthanasia libertarian might agree that people should not commit suicide in public, to avoid traumatizing others).

Part 3: Wrongful Self-Creation

Suppose I, an unusually cruel person, wish to cause someone intense and long-lasting pain. But I respect the law, so I can only do this to myself. And I do not wish to experience this pain. So I arrange the following: I will be injured in the spine in a manner that will cause me excruciating pain for the rest of my life, and will also leave me severely disabled. But I also arrange to have my memory wiped.

The latter is not impossible: Su Meck [1] and the victims of Donald Ewen Cameron [2] suffered near-total memory wipes, due to head injury in the former case and the use of drugs and sensory deprivation tanks in the latter.

So what will happen is that I will produce a person with no knowledge of how he was produced, no memory of any decision to become who he is, and no real psychological connection to me, assuming the cases above are representative; for the purpose of the thought experiment, we can simply stipulate that this will be the effect. In Su Meck’s case, for example, people who knew her before say that the person who emerged after the memory wipe had a very different personality from the one before. Further, this being will be in constant pain.

It seems that if I think, as liberals and libertarians do, that I should be allowed to self-harm, and I also think that there are wrongful life cases, as in Part 1 above, I may have conflicting intuitions about how to assess this case.

On the one hand, there is a description of what is done here that makes it seem like mere self-harm, and not harm to others. We can even add that I have made enough money that I can put it towards the care of this future self, so that society is not burdened by his existence. In fact, there’s some societal benefit, in that people are employed to care for him.

On the other hand, it seems that this might no longer be me in any meaningful way. I’ll have lost all memory of, and any connection of character to, the prior person. So is this a case of wrongful life?

Note that the full memory wipe means that the person will have to be taught to talk, read, write, and so on. So this is highly analogous to creating a child, and, in this case, intentionally creating a child with a very low quality of life.

Part 4: Conclusion: Edicts Against Self-Harm as Edicts Against Other-Harm

Granted that there are some cases in which harm to a future self is effectively harm to another, it may be that many such cases exist. Given enough time, many if not most people will undergo radical changes in personality; in any event, most of our memories are not retained for very long, and those that are retained become increasingly unreliable as time passes. A foolish decision I make when I’m 15 might seriously impair my future life, and I could well be said to be harming some future person who is, at best, only mildly continuous with me.

While I may have bodily continuity with this future person (though we could imagine cases where I don’t, say due to extensive prostheses, including slow replacement of brain parts with computer parts), I may, as in Derek Parfit’s Russian nobleman case, have little in common morally or psychologically with this future person.

Can I be forbidden, either morally or legally, from making decisions that harm my future self, even if I agree that I have a great deal of moral and legal leeway to harm myself? That is, taking a liberal/libertarian ethic, can I still argue that there are things I cannot do to myself, even though they harm only me (or some future me) and not anyone else? I think, given the above, some sort of argument can be made in at least some cases. Those would be cases where I create a future self who would not have consented to the harm he receives, or who would be unable to consent.

Thus, there may be a legal case, even in strictly libertarian/liberal terms, for laws against self-harm, at least when the self-harm falls on a future person with limited continuity with the person committing the harm.

[1] See http://www.washingtonpost.com/local/education/gaithersburg-woman-earns-college-degree-two-decades-after-complete-memory-loss/2011/05/19/AFWAMg8G_story.html and her autobiography, “I Forgot To Remember” (Simon & Schuster, 2014).

[2] There’s a brief rundown of Cameron’s work here: http://en.wikipedia.org/wiki/Donald_Ewen_Cameron#Project_MKUltra . A fuller discussion can be found in Anne Collins, “In the Sleep Room: The Story of CIA Brainwashing Experiments in Canada” (Toronto: Key Porter Books, 1988/1998).

Enhancement vs. Therapy: Why is the Normal Normal?

Some enhancement conservatives, like Michael Sandel and Francis Fukuyama, worry that if we allow unrestricted freedom to enhance human bodies, we may lose our humanity and the meaning we find in our lives. For Fukuyama, for example, our finitude gives us meaning; for Sandel, it’s the sense that life is a gift, and not something we created.

But both accept therapeutic use of enhancement technologies. If a person were born with a genetic abnormality that caused very low IQ, for example, Fukuyama and Sandel would both accept that we should be allowed to use, say, a genetic enhancement technique to “correct” this and give the person a normal IQ.

But who’s to say what’s normal and what’s low? In general, the conservative answer appeals to the average person and the sorts of attributes such a person would have. The boundaries between disease, disability, and mildly lower levels of ability are hard to map. But if something can be labelled an “abnormality” by some means, Sandel and Fukuyama accept that it can be treated.

This would rule out, for them, giving a person with an average (say, 90–110) IQ an enhancement to genius level. Fukuyama’s and Sandel’s IQs are probably notably higher than average, but they don’t want the average person to be able to enhance to their level using an intervention like drugs, genetic engineering, or some kind of neural implant. Presumably, they’re OK with better schooling, being raised by educated parents, living in a house with books, and having parents who have time to give personal attention to children, all of which actually do account for a lot of IQ difference.

Now, suppose we find out that the “average” IQ is the result of a widespread mutation that occurred because, let’s say, the Black Plague damaged the standard genome some 500 years ago. Most people have the damaged gene; some, like Sandel and Fukuyama, do not. Now it seems that most people are “abnormal,” at least if we sum across history. Would it now be OK to “cure” them of this bad mutation so they could have Sandel/Fukuyama-level IQ?

The point being: we have no reasonable historical baseline for what’s “normal” in many traits. Even prior to the widespread use of steroids, athletic records were continually broken in the modern era; IQ has risen drastically in the last century (cf. the “Flynn effect”); and life expectancy has nearly doubled in the last 200 years. While this last is partly due to drastically decreased infant mortality, life expectancy at age sixty has more than doubled in the last 200 years, from 69 to 89 (i.e. from 9 to 19 remaining years). Perhaps enhancements are merely correcting for errors in our environment that have damaged our bodies; it’s not as though someone from 1800 couldn’t have lived to be 89; they probably just lacked our enhancements (better education, clean water, access to medical care, etc.).
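To make the arithmetic in that parenthetical explicit, here is a minimal worked version using only the figures cited above (the notation e(60), remaining life expectancy at age sixty, is standard demographic shorthand, not something from the original post):

```latex
% Remaining life expectancy at 60: e(60) = (expected age at death, given survival to 60) - 60.
% Figures from the text: expected age at death for a sixty-year-old was 69 then, 89 now.
\[
e_{1800}(60) = 69 - 60 = 9 \text{ years}, \qquad
e_{\mathrm{now}}(60) = 89 - 60 = 19 \text{ years}
\]
\[
\frac{e_{\mathrm{now}}(60)}{e_{1800}(60)} = \frac{19}{9} \approx 2.1 > 2
\]
% So remaining life expectancy at sixty has indeed "more than doubled,"
% even though 89 is only about 1.3 times 69.
```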

Without knowing why something is “normal” or “average,” it seems unfair to declare that average to be a baseline, and any therapy that improves above this baseline to be an “enhancement.” It could simply be a correction, that is, a therapy.

 

Letting die vs. killing

One point I should have included in the previous post, but wasn’t sure how to work in, was the distinction between letting die and killing. A strong pro-lifer could claim that it’s morally permissible (if not exactly laudable) to let someone die, but not permissible to actively kill. Thus, they could claim they have no duty to sign up for organ donation, though perhaps it would be nice if they did.

However, I doubt they would take that tack in the drowning-child case: if you see a child drowning, and you could save her, but choose not to because, let’s say, it’ll get your clothes dirty and you have a social engagement, have you committed a moral infraction? If the answer is “yes,” then it seems you may still be obligated to at least give blood regularly.

That is, some minor inconvenience is no excuse for failing to aid in a way that would save another’s life.

Now, if the pro-life proponent holds the above, then it seems that he is committed to signing up for organ donation, since the inconvenience of pregnancy far outweighs that of a missed social engagement. Unless, that is, the pro-life advocate holds that although there is both a duty to aid and a duty not to kill, those duties operate by very different rules: the duty to aid stops when the inconvenience is large enough, but the duty not to kill does not.

But at what point, then, does the duty switch off? Suppose I could save the child but choose not to because I’m on vacation and will miss a plane home, and will have to stay in some unpleasant, but not terribly dangerous, location for a week. Is it OK to let the child die? What if saving the child means I’ll miss an opportunity that would have increased my income by 40%? That’s a huge inconvenience. Am I now allowed to let the child die? I doubt the strong pro-life advocate would say, “well, if you’re going to a job interview, you should just let the child die.” I’m guessing it would be pretty hard to come up with an inconvenience, not involving serious physical harm to self or others, that would morally permit one to let the child die.

In the case of pregnant women, states have added duties that seem to go beyond forbidding active killing and include aiding the embryo/fetus. Some states (Tennessee, Alabama, Utah) will prosecute a mother who is drug-addicted, becomes pregnant, and does not stop taking drugs, if those drugs might harm the infant. Mothers in these states have been prosecuted for the deaths of children born prematurely.

It seems here that we are actively asking the women to do something, rather than refrain from doing something: that is, we ask them to give up drugs. This is a serious doing, and not merely a not-doing, since it takes considerable effort to overcome an addiction. If one wishes to ignore the effort of quitting an addiction, one could claim that the drug-taking is an active doing, and that not taking is merely a not-doing. I think this is obviously wrong, but to pursue it:

Let’s reverse the drug-addiction case. Suppose a mother knows that she must take a certain vitamin, or her baby will have a 40% chance of dying shortly after birth. Would the strong pro-lifer accept that the mother has no duty to take the vitamin, because in not taking it she is not doing anything to harm the child; she is merely refraining from doing something? I doubt it.

I think there is some wiggle room for the strong pro-lifer here, but he will be put in a bad position if he adopts the strong “letting die vs. killing” distinction. He must allow the mother to neglect her health, eat far too little, and engage in behaviors that would endanger a fetus or embryo, as long as these behaviors are omissions of acts, and not acts.

 

Pro-Life Position Entails Organ Donation

If someone holds that a woman is morally obligated to see a pregnancy through in spite of the imposition on her body, the inconvenience, pain, risk, and loss of autonomy, then they apparently hold the following principles: the autonomy of a person to choose what to do with his or her own body does not override the obligation to act to save another; your body is not your own when another’s life is at stake.

So at the very least, someone holding the pro-life position should donate blood regularly. They should sign up to be available to give bone marrow. They should volunteer to be a living kidney donor and a living liver donor (that is, to donate these organs while still alive, as opposed to agreeing to donate organs after death). While these procedures involve some risk, pain, and inconvenience, that cannot, on pro-life principles, be grounds for refusing them.

Again, this is mere consistency with their basic position: that my body is not my own when it can support another life; that I am obligated to undergo a certain amount of pain, inconvenience, and loss of bodily autonomy if doing so will allow another to live.

It’s been claimed that some strong vegetarian positions entail an anti-abortion position (“On the consistent application of moral consideration,” Justin Caouette and David Boutland, presented at the Society for Applied Philosophy annual conference, 2014). I question whether vegetarian dedication to preserving animal life and opposition to abortion are really analogous, since the vegetarian can consistently hold that bodily autonomy overrides the duty to preserve life: they might hold that one does not have to preserve a life if it is inside one’s own body, or if it severely limits one’s autonomy. But Caouette and Boutland do have a strong point about maintaining consistency of moral principles, and it may not be possible to spell out a strong pro-life/anti-abortion position that doesn’t entail giving up bodily autonomy to save another’s life. Since that would include live organ donation, the pro-lifer should feel obliged to join the organ donor and bone marrow donor registries. Some information can be found here:

Information on living kidney donation

The bone marrow registry

Donating blood

Living donors on-line: information on living donation of livers, kidneys, and other biological materials