Robot and Frank: Crime and Punishment for Robots

In the movie Robot and Frank, an elderly man (Frank Langella) suffering from mild Alzheimer's is given a helper robot, which he calls Robot. Frank enlists Robot's help in committing a series of burglaries, but the police close in and plan to use the recordings in Robot's memory to prosecute Frank. Robot suggests deleting his memory; Frank is reluctant because Robot has become his friend.

Unlike in common personal identity stories involving memory loss, it's not Robot's subjective sense of self that is altered by the removal of his memory; he has no subjective sense of self. Rather, Frank believes his buddy, Robot, will 'die.'

Further, Robot's personality and moral values don't change. It's just that he's not the same Robot to Frank, because what was important in their relationship was the shared memory of their heists together. So, unlike some common intuitions about personal identity (as in Strohminger and Nichols [1]), what matters here is just memory, not moral orientation.

AI identity where the AI has no subjective experience is a good test case for the limits of mental and physical content as identity bearers. Such AIs meet Wilkes' [2] criterion of being "real": current AIs actually lack subjective experience, yet we can become attached to them, and they do have memory. Of course, many (most?) would assert that the AI is not a person if it lacks self-awareness. (I assume eliminative materialists would not have this problem, and would need some other criterion or criteria for personhood, if they care to retain the concept; and the concept is important for legal purposes.)

An interesting extension of this problem occurs in the recent case in which an AI bought drugs [3]. Who is guilty of this crime? If we erase the AI program that bought the drugs but keep the computer it ran on, is the computer guilty? Is no one guilty? Is only the programmer guilty, even though he did not know whether the AI would buy drugs, given that he released the AI knowing it could?

The AI obviously did not use its own money, since it is not capable of owning money under current law. (Notably, the law seems to hold that only persons own things, understanding legal persons to include corporations and governments, i.e., entities with legal standing. This opens a question about animal personhood: once an animal is counted as a person, as in the Spanish decision on apes, can the animal own property? Can we say that some stretch of land belongs to a tribe of apes?) If the AI can't own anything, then it lacks an essential characteristic of legal personhood.

What about Robot's crimes? Frank asked Robot to commit crimes with him, but are Robot's programmers guilty? If the programmer who set loose the drug-buying AI did not intend for it to buy drugs, but gave it funds and a tendency to follow orders, and someone else asked it to buy drugs, who would be guilty: the programmer or the one who asked it to buy drugs? (Assume the one who asked did so not on his own behalf; he was not going to collect the drugs, he just wanted the AI to buy them.)

Since the AI used someone's money to buy drugs, who owns the drugs? If drug possession is illegal, it seems the owner of the money is then liable, as he or she owns the drugs. But can she claim that she did not intend to buy drugs? What happened with the money, then? Was it stolen? If I lend you money and you buy drugs, I'm not guilty (assuming I did not know your plan). If I put you in charge of my money and you commit a crime with it, I assume I'm not guilty…

What is Robot’s guilt or responsibility?

Can Robot be punished? (Is that even possible for a being without self-awareness?)

If Robot could be punished, would wiping Robot's memory make him "die" so he cannot be punished, or would his body be punished? It seems we might find Robot more culpable because he has a body…not on good grounds, but as a continuation of our way of thinking about guilt and responsibility by analogy with other responsible beings.

The question of robot and AI guilt and responsibility will need to be worked out in the coming years, assuming AIs advance in sophistication and independence to the point where we cannot strictly blame the creator of an AI for all of the AI's actions. Consider, too, an AI that outlived its human creator and then started committing crimes. In that case there is clearly no human to punish, and if we think that punishment makes no sense for AIs, then there is no one to punish. A number of interesting possibilities open up here.

[1] Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159-171.

[2] Wilkes, K. V. (1988). Real people: Personal identity without thought experiments. Oxford: Clarendon Press.

[3] "This little bot was busted for drugs." http://www.thedailybeast.com/articles/2015/01/24/this-little-robot-bought-ecstasy-and-a-passport-online.html

 
