I’m posting over at A Philosopher’s Take now. Here’s an excerpt from my first post:

What we are likely to create, though, if we allow AIs all the benefits that emerging technologies can bring, are para-persons: things that have all the personhood qualities, or pass all the tests for personhood, that philosophers have set up (self-awareness, ethical cognition, other-awareness, self-respect, linguistically expressible concerns, etc.), but that also have an ability that makes them not supra-persons, but something outside of personhood. That is, even if such a being had all the personhood qualities, it could also have an additional, defeating quality for personhood: the ability to change instantly and without effort. Our ethical systems are designed or adapted to apportion blame and praise to persons. But it’s not clear that they will work with the kind of extremely malleable para-persons that strong AI or strong enhancement will produce. Read the whole thing at A Philosopher’s Take.

Should Robots Be Allowed To Practice Law?

An update to the last post. ProPublica just published this report indicating racial bias in the software used to predict recidivism. So, of course, the robots are only as non-racist as their programming.

Also, someone asked me how we could solve some of these problems without resorting to AI. Here’s a very rough sketch:

1) Have judges use a sentencing checklist that determines maximum sentences based on relevant factors, and make both the checklist and its application public, so that independent auditors and the public can see whether it was properly adhered to in each case. This removes the judge from the problem by tying sentencing to lists of features related to each crime. That way, a judge who is hungry cannot give a harsher sentence; a harsher sentence cannot go to a black youth rather than a white one; a black youth will not be “tried as an adult” when a white youth in the exact same criminal situation would not be. Of course, such a list is difficult to put together, but it’s not impossible to at least improve on current guidelines.
2) Give better instruction to jurors on the unreliability of eyewitness testimony. For example: currently it’s acceptable to tell jurors that if a witness seems certain, they should take that into account. However, memory research (cf. Loftus et al.) shows that subjective certainty has almost no bearing on the accuracy of eyewitness reports.
3) Make all emotional appeals inadmissible. Eliminate opening arguments in criminal cases. Demand that closing arguments be limited to the relation between the evidence and the verdict.
4) Make both the defendant and the victims, if there are any, invisible to the jury. Follow the model of modern orchestra auditions, so that the jury can’t see whether the victims and defendant are black or white, male or female. Using a screen and a voice-disguising device (vocoder) would eliminate a great deal of bias.
5) Make it the law that lawyers must be equally matched. In a criminal case, both the prosecutor and the defense attorney should come from the same office, a general legal office, and the roles of prosecutor and defense should be determined by coin flip in each case, or should alternate so that everyone does an equal amount of both. Make sure each side is equally funded; no spending money the other side doesn’t have. Both defense and prosecution should be paid from public funds, be well funded, and have the same access to judges, police, and forensic specialists.
6) Eliminate the plea bargain. This is a tool designed to send poor people to jail because they can’t afford attorneys, or can’t spend time in jail waiting for trial. Notably, plea bargaining is illegal or severely restricted in most countries already. The U.S. has one of the most extensive, and most abused, plea bargaining systems in the world.
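To make point 1 concrete, here is a minimal sketch of what a public, auditable sentencing checklist might look like. Every factor name and weight below is invented for illustration; a real checklist would be drafted by legislators and published. The key properties the sketch demonstrates are that only listed, legally relevant factors can affect the result, and that every contribution is logged for independent audit.

```python
# Hypothetical sketch of a public sentencing checklist: the maximum
# sentence is a function of listed, legally relevant factors only.
# Factor names and month-weights are invented for illustration.

from dataclasses import dataclass, field

# Race, hunger, or any judge-specific state simply cannot enter the
# computation, because only these keys are consulted.
FACTOR_MONTHS = {
    "offense_severity_level": 12,  # months per severity level (invented)
    "prior_convictions": 6,        # months per prior conviction (invented)
    "weapon_involved": 18,         # flat add-on if True (invented)
}

@dataclass
class SentencingRecord:
    case_id: str
    factors: dict
    max_months: int = 0
    audit_log: list = field(default_factory=list)

def compute_max_sentence(case_id: str, factors: dict) -> SentencingRecord:
    record = SentencingRecord(case_id=case_id, factors=factors)
    total = 0
    for name, weight in FACTOR_MONTHS.items():
        value = factors.get(name, 0)
        contribution = weight * int(value)
        total += contribution
        # Each line of the log lets auditors and the public verify that
        # the published checklist was applied exactly as written.
        record.audit_log.append(f"{name}={value} -> +{contribution} months")
    record.max_months = total
    return record

record = compute_max_sentence(
    "case-001",
    {"offense_severity_level": 2, "prior_convictions": 1, "weapon_involved": True},
)
print(record.max_months)  # 12*2 + 6*1 + 18 = 48
for entry in record.audit_log:
    print(entry)
```

Because the checklist and the log are both public, two cases with identical factors must receive identical maximums, which is precisely the point of removing the judge's discretion from this step.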

Should humans be allowed to practice law?

Judges are known for disliking mandatory sentencing laws; generally, they hold that a human being is better able to understand the nuances of a case, and shouldn’t be prevented from using his or her judgment to give more or less lenient sentences depending on circumstance. But initial research, though far from conclusive, indicates that judges are surprisingly bad at this sort of thing. Instead of allowing proper considerations to govern sentencing, factors like how long it’s been since the judge has eaten[1], the race of the suspect[2], and the sex of the victim[3] seem to have much more effect on sentencing than such proper measures as the severity of the crime and the likelihood of re-offense.

Similarly, the justice system is notoriously biased against poor defendants, and juries are terrible at distinguishing trustworthy from untrustworthy testimony[4], relying on such factors as the appearance of the testifier, the testifier’s command of English, and emotional reactions.

Recently, IBM developed an artificially intelligent “lawyer.”[5] It doesn’t do much besides legal scutwork: the boring research and paper-sorting that lawyers tend to find tedious and deadening. So on that level, it’s probably doing what people going back to Marx hoped mechanization would do, which is take away the worst jobs (although Marx’s and Russell’s hope that this would free people up for lives of leisure requires a large-scale political-economy project which may or may not materialize). But what if we could create, à la the Deep Blue project, a super-lawyer that found the best legal strategies and could argue the strongest case?

Combine this with an AI judge and jury and, at least potentially, there could be an increase in justice: the AI judge or jury would not sentence based on its hunger or its emotional manipulation by tears or the heartfelt sincerity of (often faulty) eye-witnesses. If both defendant and prosecutor had access to the same AI law programs, there wouldn’t be an issue of a wealthy defendant having an unfair advantage, and a poor defendant an unfair disadvantage. And with a good AI jury, we could avoid the sorts of legal tricks that rely on irrelevant appeals in order to win cases.

It’s a way off, but it is at least a possibility we can hope for: automated justice that lacks the rather nasty implications of letting biased, prejudiced, easily manipulated, and cranky-because-hungry humans decide matters of such importance.

Sacrificing one to save another

Imagine the following situation:

A woman, let’s call her Su, is hit on the head and her memory and personality are destroyed. Over the course of several years, she acquires new memories and a new personality as her brain heals and she is trained back up to adult levels of competence in most skills. Notably, all of this actually happened, so, so far, we’re not in a science-fictional thought experiment.[1]

Choice 1: first person

Now, imagine (and this part has never, to the best of my knowledge, occurred) that a surgeon tells the woman and her family that he can finally “heal” her. She will have all of her memories and personality restored, but it will:

Scenario A: wipe out her current memories and personality, so she (or whoever) will awake exactly as before the accident, but thinking no time has passed.

Scenario B: wipe out her current personality, but not her current memories. When she (or whoever) wakes up, there’ll be an odd recognition of acting quite strangely for many years, and a sense of being restored, but no gap in time. However, the person who awakes will have trouble recognizing herself in her actions, emotions, and responses in the time since the accident.

Would Su, on being told about the surgery, want it? I would guess she would refuse (we could, of course, ask the real Su, but the question is not so much what Su would do as what people in general are likely to choose). In the U.S. there are strong rights to refuse medical treatment, so there’s no ethical problem here. It seems most likely that she would refuse Scenario A, and, based on personal-identity x-phi work like that of Nichols and Strohminger, also very likely that she would refuse Scenario B. I would think that people would think of themselves as destroyed in both A and B.

Choice 2: third person

Now, imagine, instead, that after acquiring the new memories and personality, Su is again struck on the head. She is in a coma, and a surgeon comes and tells her family that they have two choices:

  1. he can do a surgery which will enact either scenario A, or,
  2. he can do a surgery which will restore her to how she was immediately before the most recent blow to the head.

Does the family have a right to choose 1, destroying NewSu?

Suppose the surgeon also offered them a third option:

  3. he can do a surgery which will enact Scenario B

Is this the moral choice? Is it any better, from NewSu’s perspective, than option 1?

It seems that in options 1 and 3, NewSu is destroyed. However, Su is resurrected. It seems like this might be a moral toss-up for that reason. But in the most similar cases, there is a clear ethical solution:

There is almost no scenario in which a third party can decide that you should be sacrificed so that someone else might live. If that’s what’s happening, then the family cannot rightly choose option 1 or 3. Of course, this relies on our thinking of NewSu as the currently existing Su. But maybe she’s just the most recent Su, or the most recent manifestation of Su, if you want to unify them, or some such. In that case, then again there’s no clear answer.

But: if the surgeon told them that NewSu would wake up on her own in about six months as the brain healed, or that he could do the surgery but it would restore OldSu, then the choice is perhaps clearer. As much as the family might want OldSu back, they seem to be intervening in a way that kills NewSu.

Or perhaps in a case like this we have no real moral guidance, as our identity and rights concepts are not prepared for the case. But that alone tells us something about the (lack of) robustness and universality of those concepts, especially the identity concept.

[1] See I Forgot to Remember by Su Meck for the full story.

Personal Identity for Post-Personal Beings

Human enhancement and strong-AI-based robotics converge upon the creation of entities that are fully capable of rewriting themselves. As many contemporary philosophers have noted (e.g., Douglas[1], Buchanan[2], Levy[3], Harris[4], Liao, Sandberg, and Savulescu), this possibility creates ethical dilemmas not envisaged in existing theories. If a self becomes so malleable that it can, at will, jettison essential identity-giving characteristics, how are we to judge, befriend, rely upon, hold responsible, or trust others? While these ethical questions are being approached by neuroethicists and those working in the ethics of enhancement, at base there is an identity question: can a being that is capable of self-rewriting be said to have an identity? Since responsibility, trust, friendship, and, in general, most human interactions that take place across more than a few minutes’ time rely upon a steadiness in the being of the other person, a new form of person, capable of rapidly altering its own memories, principles, psychological traits, desires, and attitudes, creates tremendous problems not only ethically but metaphysically as well. How can we re-identify others when their inner core is unstable?

For example: imagine an AI that is sentient and sapient, or a human enhanced such that it can rewrite its memories and personality. Such a being, having desires, would be capable of vice. It could then commit a crime, profit from it, erase all memory of the crime from itself, and alter its character such that it would find such a crime unthinkable. What do we make of the new being? Should it be punished for what it had done? Or is it the case that such complete erasure and rewriting destroys the person who committed the crime?

Suppose a friend decides that the character traits and memories that you share with it are holding it back. At one time, such a realization could have been met with years of effort at self-alteration, during which the friendship could grow and evolve, or fade away, or alter its character in many other ways. But if, the next day, the friend showed up rewritten, no longer enjoying the activities it shared with its friend, what attitude should be taken towards it? Does it even make sense to identify it as the same entity? Animalists (Olson, etc.) have claimed that only the continuous organic being of a person is necessary for identity, but when a person is non-organic, or so enhanced as to be able to overcome its organic limitations, what will count as re-identification? Are we on the verge of making beings that lack identity?

A highly eclectic account is called for here, looking to the continuation of context-relative traits. When criminal guilt is assessed, a “right mind” criterion is applied; if such enhancement is created, a “same mind” criterion might need to be instituted. Is this being still, in criminally relevant ways, the same being? Similarly, for relations like friendship, marriage, contractual obligations, and assessment of ethical character, we need to do a fine-grained analysis of precisely which traits were relevant to the relation, and ask to what extent they persist, and under what conditions they changed. This may undo the notion of simple, one-to-one identity, but that may be a necessary consequence of the complexity of interacting with beings who relate to themselves as projects that may be rewritten or remade at will.

[1] Douglas, Thomas. “Human Enhancement and Supra-personal Moral Status.” Philosophical Studies 162.3 (2013): 473–497.

[2] Buchanan, Allen. “Moral Status and Human Enhancement.” Philosophy & Public Affairs 37.4 (2009): 346–381.

[3] Levy, Neil. Neuroethics: Challenges for the 21st Century. Cambridge University Press, 2007.

[4] Harris, John. “Moral Progress and Moral Enhancement.” Bioethics 27.5 (2013): 285–290.

When Should We Defer to Our Robot Superiors?


If the “moral enhancement” crowd are right, we could someday, perhaps soon, produce morally superior human beings. But then we could also, perhaps more easily, produce morally superior robots. All we’d need is a robot with phenomenal consciousness, on the presumption that only an entity that has experiences can have moral status. But if robots could have moral status, they could conceivably have a higher moral status than mere humans (see Douglas, “Human Enhancement and Supra-personal Moral Status,” on the claim that enhanced humans could have a higher moral status than the unenhanced).

Would it ever be proper for a class or group of people to defer morally to another group? This has happened, though that’s hardly a full argument that it’s right. For example, in the “women and children first” paradigm, it’s thought that women and children might have some greater right to be saved than men (to be fair, this is one of those principles that was referenced far more than it was practiced). Police officers, soldiers, and firefighters have, on occasion, knowingly sacrificed their lives for others, as though civilians had some greater right to protection, rescue, or even life than those in these professions. Conversely, we give special privileges to soldiers, firefighters, and police officers: early seating on airplanes for soldiers; deference to firefighters and police officers on many matters of public safety, including the right to enter buildings, speak to strangers, etc.; thanking soldiers for their service; special deals on insurance and other discounts; special life insurance benefits; line-jumping rights in certain circumstances; and so on.

Historically, many have sacrificed themselves for their kings or leaders, assuming that the king, for example, had a higher moral status or greater right to protection or life than, say, a knight or warrior in his employ.

So there is at least some precedent for holding some group or set of people as having higher moral status. Is there any reason this could not apply to robots?

Imagine that we create robots that are sentient and sapient, and who have tremendous value; they’re smarter and more peaceful and more capable of resolving disputes without violence and to the mutual benefit of all involved. They are more empathetic, more capable of caring for others. They have no weakness of will. They are physically stronger, but use this strength with a Confucian wisdom, eschewing self-centeredness so as to have a clearer and more accurate understanding of any situation that calls for strength.

With a few other traits, it would be easy to argue that we should make these robots our leaders. Would we then, in the manner of the medieval knight giving his life for the king, be right in sacrificing ourselves for them, if the situation called for it? Would we be right in deferring to them on moral matters, taking them as moral authorities because of their tremendous processing capabilities combined with their ability to objectively assess situations, put their own interests second, and make fairer, more just and more equitable decisions? What about extending to the robots the kinds of deference we extend to first responders and soldiers? Or the deference we extend to experts, but in this case, taking them as moral decision-making experts? If the robots are so moral as to be self-sacrificing, then perhaps they deserve special treatment in the manner of soldiers and first responders, to compensate them for the pleasures they lose in exposing themselves to danger and death on our behalf?

Or what if they have a moral status that is as much greater than ours as ours is greater than, say, that of non-human animals (for those who hold that humans do have such a moral status)? If it is possible for a human to have higher moral status than an ape, then it seems possible that another being could be so much smarter, wiser, more capable of kindness, or even more inherently valuable than a human. Just as it might be right to assign higher moral status to a god or God, maybe one day we could do the same for a robot. How then should we treat our robot superiors?

Critical Thinking: Primary Concepts

My Critical Thinking: Primary Concepts mini-text is suitable as a one-to-four-week session in just about any class that needs a section on argumentation. It’s Creative Commons licensed, so you can remix it, edit it, etc. You just can’t sell it!

Should Government Be Involved In Marriage?

Somewhat off my normal beat, but here’s a rough draft for an article on civil marriage:

With marriage rights now extended to same-sex couples, a new chorus of voices has been asking why governments should have anything to do with marriage. Some claim that marriage is essentially a religious institution, others, more libertarian, think that governments simply have no business in our personal lives.

But both of these positions seem to misunderstand what marriage is. Both historically and currently, marriage is and has been a contract. Some of the earliest written documents we have are marriage contracts from ancient Sumer. And understanding marriage as a contract makes it clear why it is the business of government.

There doesn’t need to be any government agency involved in two people deciding to cohabit, to swear their love to each other, or to take religious vows of fidelity. And one version of marriage could be simply these informal arrangements.

But a contract, as such, is among the most central areas of governmental agency. Without a government to enforce them, contracts lose their value. Even libertarian minimalists understand that it is the business of government to make sure that those who violate contracts are punished, and that contracts requiring people to engage in illegal actions are unenforceable.

A problem with contracts is that written language is open to many interpretations, and that’s one reason why there are certain standardized contracts. Wills, for example, have a long history of case law that clarifies what can and cannot be enforced in such documents, and establishes how terms are to be understood. Similarly, adoption and incorporation, understood as contractual relations, are given force and shape by the legislative, judicial, and executive functions of government in establishing types of contracts, interpreting their entailments, and enforcing their terms.

A feature of contracts is that they not only affect those who sign on to them, but can clarify how others interact with the contracted individuals. For example, if an agent of an LLC commits a tort against someone outside of the LLC, but while acting specifically in the business of the LLC, the wronged party can sue the LLC. A partnership agreement can include language that allows the partnership to take on debt, such that a creditor would seek recompense not from one member of the partnership, but from the partnership and its assets.

It’s in the government’s interests to make sure that only certain people sign on to contracts. A person with limited capacity to understand the meaning of a contract cannot legally sign on. An adoption contract can be entered into only by parties deemed to be capable of fulfilling its terms.

All of the above applies to marriage contracts and is part of what makes them so valuable. Only parties capable of understanding the contract may enter into it. Some parties are deemed, by statute, too immature for the contract. And, as in many contracts, parties who are not signed on to the contract are guided in their behavior towards contracted parties. For example, it makes sense that if someone is gravely injured, a hospital should not allow just anyone to enter the injured person’s presence, especially without supervision. A marriage contract is a way of selecting someone who may make decisions for, and enter the company of, an unconscious or diminished person. The hospital is enjoined by the contract, though they did not sign it, just as someone who loans money to an LLC is enjoined by certain aspects of the terms of the LLC.

Most contracts contain many elements, provisions, rights, and transferences. A strong body of case law is helpful in creating a consistent set of standards for incorporations, partnerships, mortgages, etc. A marriage contract, similarly, has many elements. These can include property sharing; protection against being compelled to testify against a marriage partner; the right to reside in the country of citizenship of either marriage partner; the right to share in certain employment benefits, such as healthcare; presumption of parenthood over a child born or adopted into the marriage; priority of conservatorship; military spousal benefits; automatic renewal of leases signed by one spouse even if that spouse is deceased; the right to sue for wrongful death of a spouse; visiting rights towards a jailed spouse; etc.

This contract cannot exist without an executive to enforce it. There are obvious benefits to this contract for those who wish to enter into a particular kind of partnership with another person. None of this precludes informal arrangements, such as living together or purely religious marriages. It merely establishes a well-reviewed set of case law for those willing to make the commitment to legal marriage.

One response among some libertarians has been to call for the complete privatization of marriage, but this misunderstands the nature of a contract. There is no purely private contract, since contract enforcement still depends upon the existence of a judiciary to interpret in the case of dispute and an executive to enforce the contract in case of breach. Further, the existing marriage contract has been widely vetted and accepted, whereas a new contract will still have to be tested. This could create problems if a couple signed a contract and found, upon judicial review, that it was invalid. Finally, the way a contract enjoins third-parties in their relations with the contracted parties, and the fact that the marriage contract can grant rights such as citizenship, make it irreplaceable with any novel contract which would necessarily lack these benefits.

Of course, with the benefits of the contract come responsibilities. A spouse may cause debt that both parties must bear, for example. And this list of contractual elements has been refined over the years: at one time, in some jurisdictions, a woman lost all property rights during a marriage. In California, during the 19th century, a woman could not be found guilty of a non-capital offense if she committed it in the presence of her husband.

Rightly, these elements of the marriage contract have been jettisoned. It seems likely that the contract will continue to evolve. It’s certainly worthwhile to review this contract. As with incorporation, debt, employment and housing contracts, critical review of existing benefits, protections and responsibilities incurred by the contract can be helpful in refining it.

But to say that marriage is not something for government to be involved in is to misunderstand one important element of what marriage is: a legal relationship that is necessarily mediated by existing laws, and which is reinforced by a long history of judicial review. Romantic, religious, and spiritual connections may not require this, but marriage is a more complex partnership than that, and the practical needs of many couples will be best served by this contract.


Environmental Identity

Strohminger and Nichols’s research (Cognition 131 (2014): 159–171) indicates that people consider moral traits to be more important than memory for identity. This is perhaps not so surprising, although the philosophical literature hadn’t really been looking at consistency of moral traits, focusing instead on other psychological characteristics, especially memory, or on physical continuity.

But it implies a disturbing conclusion when combined with situationist accounts of ethics. If, as writers like Doris (Lack of Character, Cambridge, 2002) and Gilbert Harman claim, our ethics are based not so much on our character traits as on our environment, then it seems that personal identity is not internal, but environmental.

Or at least it is if we take the popular view that moral character is essential to identity, and we accept the situationist’s results. We could claim that the common view is wrong, that ethics are not a necessary part of identity. Or we could note that our internal character traits, even on situationist accounts, do provide some part of our ethical makeup, just not the overwhelming or decisive part. Then we could say that that part of our ethical makeup is where identity resides.

Still, it would be interesting, and fruitful, to look at how identity is environmental; we may, in some sense, become different characters in differing environments, and even, in a real and important sense, become different people.

That is, if our character can be radically altered, we may not recognize ourselves in our actions, and those who know us best may also not recognize us. An environmental notion of identity could capture these changes and produce an expanded sense of character, self and person. Who we are and where we are may be more deeply linked than the idea of the discrete individual, containing him or herself inside of skin-boundaries and across time, can account for.

Can a Self Persist Across Time?

Hume famously noted that when he introspected, he found no self, just a constantly shifting “bundle” of impressions. The self, such as it was, was at best a fleeting thing, having identity only as long as the bundle remained consistent, and losing that identity as the bundle shifted to new thoughts, feelings, and intentions.

Galen Strawson took this idea to imply that instead of a self, we have a series of selves, “pearls” on a string, as he first conceived them, each passing to the next. The self was a thing that lasted, on Strawson’s account, only for a few seconds or minutes, until its character was dissolved by the next set of impressions.

Others, like Locke and Parfit, have noted that there is a somewhat greater consistency to the self in that it draws upon the same memories and evinces the same habits, at least for a time. These, too, drift, but they last far longer than the seconds or minutes of Hume’s bundles or Strawson’s pearls.

All of these focus solely on the experiencing self. But what if we looked at the underlying hardware, as it were? If Hume and Strawson are right, then we cease to have selves when we sleep. And when we awake, we are new selves, not at all the selves that went to sleep.

But imagine working on a computer. You have a paper in progress, you’re keeping a few browser windows open, there’s a game in another window, and the hard drive is full of all your previous work. When you put the computer to sleep for the night, it ceases to actively attend to any of these tasks, but they’re ready and waiting when the computer is roused in the morning.

Similarly, our brains hold much of our mental life in place. Certainly not with the precision (and parsimony) of the computer, but when we go into sleep mode, while some active processes are lost or start to degrade, much remains for when we restart in the morning. And if we think of the self not merely as the experiencing self, the passing show or bundle, but as the entire organism of subjective potential, contained, at least largely or in important ways, in the matter of the brain, then there is a self across time. And, though it isn’t doing much while we sleep (at least during dreamless sleep), that doesn’t mean it doesn’t exist. It’s there, just as our computer’s memory state and capacities are there, waiting to be woken.
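The computer analogy above can be made literal in a few lines. This is only an illustrative sketch, not a model of any cognitive theory: the "self" is stood in for by an ordinary state object, and "sleep" by serialization. The point it demonstrates is that a state can persist, fully inert, while no process attends to it, and be restored unchanged on waking.

```python
# Illustrative sketch of the sleep analogy: state persists while
# nothing is actively running. All names here are invented.

import pickle

state = {
    "paper_in_progress": "draft_section_3",
    "open_windows": ["browser", "game"],
    "long_term_store": ["previous_work_1", "previous_work_2"],
}

# "Sleep": serialize the state. No process is attending to it now,
# but the bytes still exist, inert, like the brain's stored capacities
# during dreamless sleep.
frozen = pickle.dumps(state)

# "Waking": restore the same state and resume where things left off.
restored = pickle.loads(frozen)
print(restored == state)  # True: the persisted state survives inactivity
```

On Hume's or Strawson's picture, the bundle would simply end at `pickle.dumps` and a new one begin at `pickle.loads`; the sketch shows why the persisting substrate view is at least as natural a description.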

Perhaps a non-active self makes no sense; perhaps that’s not what “self” means. But if by “self” we mean only “self-experience,” there must still be some self to be experienced, and it seems profitable to think of it as a thing that can sometimes be doing nothing. We don’t think of other things as vanishing when inactive; perhaps the self is best thought of as simply quiescent, rather than gone, when it is not in service. And if that’s the case, then the “bundle” is merely some activity of the self, and not the self itself. The self could be both the active content of consciousness and our storehouse of ideas and memories, and our capacities to act upon them, all of which have a persistence across time that some fleeting action only gives a glimpse of.