Free will is a completely unprovable concept. No matter how strongly you claim you have it, you could always be acting on programming that tells you to think you do.
That's awfully close to the same kind of rationale that's used to justify Intelligent Design.
I don't see where you're getting that from?
My entire point was that the fact that you cannot prove the existence or non-existence of free will makes it completely irrelevant to the discussion at hand. ID takes the irrational view that if you can't ever prove the existence or non-existence of something, you must act as if it exists.
That's about as far away from my point as you can get really.
Your evidence for that is that we have some kind of subconscious programming that makes us think we have free will when we really don't. Yet you don't prove it, and I don't see any way we could disprove it, because you could always claim we were just acting under that programming's influence. So even though we can't ever prove the existence or non-existence of this programming, you act as if it exists and treat it as a good enough reason to declare free will completely irrelevant.
In a moralistic sense, free will is very important. If we assume that nobody has free will, there isn't much point in passing laws and assigning penalties for violating them, because nobody has any choice in the matter. Nobody really has any rights at all, either: the very concept of rights becomes more or less meaningless if nobody has control over their own actions anyway. Society itself, especially American society, is built on the idea that we have free will. Whether or not that's a delusion, it's the state of things.
So for a discussion of how many rights an entity should be granted, it's a very valid question to raise. Obviously, a TV cannot prevent its owner from electrocuting himself when he tries to repair it. A gun cannot stop its owner from shooting an innocent man. These objects are not considered to have free will, and so are not themselves punished when a crime is committed with them or when they cause unjustified harm to someone.
On the other hand, the manufacturer of the TV or the gun could be at fault. If the TV electrocutes its owner in its normal course of operation, or a gun's safety fails, the manufacturer could be held liable.
If a robot is expected to operate with the same kind of rights and responsibilities as a normal individual, it must possess a comparable ability to evaluate the consequences of its actions and to make decisions based on those consequences. Otherwise, it's wholly unsuited to participating in human society. And if a robot does not possess these abilities, it does not possess free will.
Example A: A robot's owner orders it to rob a jewelry shop, knowing that the robot must obey every order its owner gives. The robot does so. The owner is at fault, because the robot does not have free will.
Example B: A robot is ordered by its owner to rob a jewelry shop. The robot has no overriding programming that forces it to follow its owner's orders, yet it still chooses to do so. The robot is at fault, because it had the free will to refuse its owner's order and chose not to use it.
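To make the distinction concrete, here's a minimal sketch in Python of how that liability rule could be expressed. Everything in it is hypothetical and invented purely to illustrate the two examples above: the `Robot` class, its `must_obey_owner` flag, and the `assign_fault` function are assumptions, not any real robotics or legal API.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    # Hypothetical flag: True if the robot's programming compels it
    # to carry out every order its owner gives (Example A).
    must_obey_owner: bool

def assign_fault(robot: Robot) -> str:
    """Assign fault for a crime the robot committed on its owner's orders.

    If the robot had no choice (compulsory obedience), the owner is at
    fault; if the robot could have refused but didn't, the robot is.
    """
    return "owner" if robot.must_obey_owner else "robot"

# Example A: the compelled robot robs the shop -> the owner is at fault.
print(assign_fault(Robot(must_obey_owner=True)))   # owner
# Example B: the free robot chooses to rob the shop -> the robot is at fault.
print(assign_fault(Robot(must_obey_owner=False)))  # robot
```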
Thus a robot's ability to exercise free will is extremely important in assessing how it should be handled legally.