The appearance of free will is important; actual free will isn't, except at a philosophical level. I should have made that clearer, but I thought the rest of the post made that point and I didn't want to state it yet again.
Furthermore, you're confused because there are two debates going on here and you're having trouble separating them. That's partly my fault, but it looks like clarification is needed.
1) There's the debate between thesizzler and myself, where he asserted that humans have free will and robots don't. Since he cannot definitively prove either point, the assertion is irrelevant to the discussion: it's unprovable, and as such has no place in a scientific discussion.
2) There's the debate you started and I've continued, about whether free will itself or merely its appearance is important in AI topics.
Please stop assuming you have the power to define what a debate is or isn't, and therefore have the right to judge whether people are confused or not. Making inferences and wantonly judging people's arguments and generally being rude may be great for driving people off, but it doesn't really prove anything.
I'm confused because you've claimed that free will is important to the thread, but irrelevant to the discussion, and you've talked about the 'appearance' of free will and pseudo-free will. You've agreed that assuming something exists even though it can't be proven or disproven is a fallacy, but your entire argument about free will seems to be based on the idea that you must assume free will exists, even if it can't be proven or disproven.
Furthermore, you yourself stated:
But why are you making that assumption? I'm still starting from the premise that it's unprovable in either direction. Science and philosophy must always assume that there are no absolute truths and everything must be questioned.
Which seems to imply that you're assuming that this debate is philosophical in nature, so actual free will is important to the discussion. Yet you state above that actual free will isn't important except in a philosophical discussion, as if this isn't one.
As far as I'm concerned:
1) Free will is important to the question of robot rights because society is based on the idea that free will exists and that its members (Persons) have free will.
2) Proving that one person is responsible for violating another person's rights requires that it be proven beyond all reasonable doubt.
3) It logically follows from these two premises that if some category of individuals is to gain the mantle of personhood, we must prove beyond all reasonable doubt that they fall under the same assumed definition of membership (having free will).
4) Because robots are not currently considered persons, the mere appearance of free will is not enough for them to be considered persons (and therefore they cannot be guaranteed the same rights and protections).
But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it, he must first prove that free will exists, and second prove that robots cannot attain it. If he can't do both, then the issue of free will is entirely irrelevant to his argument.
I'm mostly arguing its relevance to the discussion.
And I'm arguing that actual free will is irrelevant; only its appearance matters.
Let me ask you this. How do you test for free will? How do you design a test that will give differing results for a machine with free will and one designed to appear as though it has free will?
I don't think you can. Any test can simply be defeated by better programming. So given that you can't test for free will, the only important question is whether or not a machine appears to have free will, i.e., whether it can pass every single test for free will that we can throw at it.
That's what the thought experiment I posted was about.
The same argument can easily be used to contest evolution on the basis of intelligent design. Someone may argue that 'nobody was around to see it happen', or that 'maybe God just left evidence to fool all the nonbelievers'. Yet most scientists will assert that evolution is testable and proven. They still can't disprove that some powerful intelligence exists and played, or is playing, some part in evolution, but they can establish enough testable evidence for evolution that they consider it proven beyond all reasonable doubt. So evolution is regarded as an actual 'fact'.
I believe that a proper test would take a long time to design, and would be designed by somebody with experience in AI and/or psychology. It seems that if I come up with some test, you will simply find a way of refuting it or going around it, and while it might be interesting to see the outcome, I don't want to start another one of your sub-debates.

However, I will say that I believe it is possible to devise such a test, given that there is significant precedent for proving that an individual is or was incapable of acting normally (e.g., the insanity plea). I would assume that such a test would involve questions probing the robot's ability to judge right from wrong, its ability to interact with people, and its ability to judge from those interactions what was appropriate or inappropriate in a given situation.
I also believe that such a test would involve an investigation into the robot's software and hardware, in order to determine whether there were hidden triggers which would restrict the robot's ability to act freely. Presumably the test would also interview persons who had interacted with the robot outside of the court setting, in order to gauge the robot's behavior in a more realistic environment. I wouldn't rule out long-term observation or checkups.
Through such a test, I believe that you could prove beyond reasonable doubt that a robot possessed free will comparable to a standard human.
Past that point, I believe that in asserting that a robot merely appears to have free will, you would automatically have to question whether all of humanity has free will. If this universal free will is what you claim is irrelevant to this discussion then I would agree, because you would be attempting to prove that either everyone has free will, or nobody has free will.
I didn't say the robot had free will. I didn't say it hadn't. I said that, to all intents and purposes, it appears to have free will. It may or may not have. Remember that a robot which actually did have free will would pass the same tests with exactly the same results. Now either you've misunderstood the experiment, or you've basically stated that no robot can ever be assumed to have free will because it may always be acting under the effects of programming that makes it appear as though it has free will.
Now that's a very different argument from the one I thought you were making, so I'm going to back out until you've clarified whether that is actually the point you were making, or whether it was a misunderstanding of the thought experiment.
You stated "The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary."
Practically speaking, there's no difference between that and assuming that the AI has free will, because you'll be treating it the same way and taking the same actions. So for all practical purposes, you are assuming that the AI has free will. Just because you point out that you know the AI might not have free will doesn't excuse you from providing justification for your actions.