2007-05-03

Baby-sitting Robots

A while ago in his blog, Scott Adams suggested a thought experiment that can make clear the illusory nature of free will. I would provide a link to it, but his archive doesn't go back that far and doesn't have a search function.

The thought experiment is this: Consider a young woman gently rocking a baby in a cradle. Most human beings would consider the young woman to have free will in the sense that she could choose to treat the baby gently or roughly.

Now replace the young woman with a robot. The robot's arm is controlled by software and rocks the cradle either gently or roughly, depending on the setting of a "mood" variable in the software. If mood is at or above a certain value, say 5, the robot rocks the cradle gently; as mood falls below 5, the robot gets progressively rougher the further the value drops. I think most human beings would not consider the robot to have free will.
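As a toy sketch of that first robot's control logic (in Python; the post names only the "mood" variable and the threshold of 5, so the function names and the force scale here are my own illustrative assumptions):

```python
# Toy model of the baby-sitting robot's rocking logic.
# Only "mood" and the threshold of 5 come from the post;
# everything else is an illustrative assumption.

GENTLE_THRESHOLD = 5  # mood at or above this value means gentle rocking

def rocking_force(mood: float) -> float:
    """Return a rocking force: 1.0 is gentle, larger values are rougher."""
    if mood >= GENTLE_THRESHOLD:
        return 1.0  # gentle, no matter how high mood climbs
    # Below the threshold, roughness grows as mood drops further.
    return 1.0 + (GENTLE_THRESHOLD - mood)

def rock_cradle(mood: float) -> None:
    print(f"mood={mood}: rocking with force {rocking_force(mood):.1f}")

rock_cradle(7)  # mood=7: rocking with force 1.0
rock_cradle(2)  # mood=2: rocking with force 4.0
```

Every "choice" this robot makes reduces to an arithmetic comparison.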

So let's make the robot more complex. Let's give it a sense of morality. If its mood falls below 5, it will continue to be gentle unless its ethics function decides that the baby is "evil", in which case it kills the baby. Does this robot have free will? I think most people would still say no.
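Continuing the toy sketch from above, the "moral" robot just adds one more function to the decision (again, is_evil and its stub body are hypothetical placeholders, not anything from the original post):

```python
GENTLE_THRESHOLD = 5  # same threshold as before

def is_evil(baby) -> bool:
    """Stub ethics function; a real one could be arbitrarily complicated."""
    return False  # this toy version never condemns the baby

def act(mood: float, baby) -> str:
    """Decide the robot's action from its mood and its ethics function."""
    if mood >= GENTLE_THRESHOLD:
        return "rock gently"
    # Low mood alone no longer causes roughness; the ethics
    # function gets the deciding vote.
    return "kill the baby" if is_evil(baby) else "rock gently"
```

The extra layer makes the behavior harder to predict from the outside, but it is still just conditionals operating on variables.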

But you get the idea. We can continue making the robot more sophisticated and complex. At what point does it acquire free will? Never? Then why does it make sense to think that biological robots have free will? Where is the difference?

The implications of the thought experiment go beyond free will, though. Just as we would agree that the baby-sitting robot does not have free will, we also would not impute personhood to it. If the baby-sitting robot is a valid model of biological robots, then it makes no more sense to impute personhood to a biological robot whose responses are determined by the operation of software. That would mean there is no such thing as a person, or soul; such things are further illusions created by the operation of the robot's thought processor (what's called "consciousness").

Morality is usually presented as a fixed, objective standard external to the robots. This thought experiment suggests, however, that it's actually just a way robots attempt to reprogram each other. (We might tell the baby-sitting robot that gently rocking the baby is "good" while killing it would be "bad," but that just reflects our preferences; replace the baby with something we don't feel as strongly about, say another robot, or a brick, and the judgment shifts.) Good and evil are just what a given robot admires or fears, what aids or threatens the robot's survival. Since each robot may admire or fear different things, good and evil wind up being relative to the perspective of the robot in question.

For the arms dealer or terrorist, war in Iraq is a good thing. For the Iraqi man on the street, war in Iraq is a bad thing.
