I say that the baseline is free will because, by and large, that is what our government, justice system, etc. are based on.
That's very circular logic. The legal system is based on the assumption of free will because only the presence of free will creates the need for a legal system.
If *only* the presence of free will creates the need for a legal system, and we have a need for a legal system, what does that suggest?
Surely that's not what you meant.
I think most people would say that our justice system has progressed since the Salem witch trials, or since the days when people attributed criminal behavior to demons or spirits. In that time we have held people more accountable for their actions - rather than saying that they are possessed, we may diagnose them with some kind of mental illness or find some kind of psychological quirk.
Ironically I was about to raise a similar point myself. Why are you assuming that this is where the progress ends?
I didn't say this was where progress ends.
What if, 400 years from now, we don't have prisons at all because humanity has proved that all criminal behaviour (and not just some of it) is due to psychological problems? We already use criminal profiling to catch murderers, and such profiling always talks about the murderer's need to do x or y. Yet when we catch them we decide that they were 100% responsible for their actions and thus have to be sent to jail for them. What if, in 400 years, prison is viewed as being as stupid a notion as demonic possession?
What if a large number of things attributed to free will are actually not due to free will at all?
Are you actually making the claim that all crime is due to psychological problems which are completely outside the criminal's control? Or are you just spewing extraneous crap because you don't feel like directly refuting my point?
I can elaborate. By shifting the blame from unprovable entities such as demons or spirits to more provable things like psychological afflictions, we've moved from choosing to believe that people are affected by unprovable forces that _might_ interfere with free will (which you keep relying on to support your argument) to choosing to believe that people are affected by (arguably) provable forces that we can show would interfere with free will.
Even if, 400 years from now, we were to decide that all crime is caused by psychological illness, we would still be relying on evidence.
Do most of the people around you choose when to eat, even if they're hungry? Do they sometimes choose not to sleep or nap, even if they feel tired? Do they perform tasks only when directly told to, or are they capable of doing things simply because they want to get something out of it? Do they do things for other people, even though they don't want to? Most humans I've seen demonstrate this kind of behavior. All of it implies that they have free will.
Again with the assumption of binary free will that is either on or off. What if whether you snap your fingers is entirely your choice, when you sleep is 90% your choice, and whether or not you murder the pretty girl who just walked past is down to the unique set of psychological quirks you've built up during your lifetime and is something you have little control over?
A) You're not even talking about the same point I was.
B) You're not even trying to give any evidence for your "What if" statements.
Now, what evidence does your sufficiently high intelligence offer which proves we are all deluding ourselves?
In the same way that a sufficiently intelligent being could see that the Salem witch trials were bull****.
Nope. Not good enough.
The best decision...for who? By what criteria? Is it the morally best course of action? Is it the most beneficial course of action? Is it the most logical course of action? Is it the most consistent course of action?
It's an interesting example, but as long as he can act mentally defective, he hasn't lost free will. I believe that we humans are capable of knowing the best course of action but, even so, we can still decide to ignore it and do something else instead.
But why on earth would he "act mentally defective"? You're taking a very anthropomorphic view of the subject. Why would a highly intelligent being deliberately choose to do that? Again, I'm not assuming that AIs will be comparable to us in intelligence. What if the first AI is to us what we are to chimps? Cats? Woodlice even? Would you still expect the same systems that govern our existence to necessarily be relevant at all to such an AI?
Humans are stupid. We can choose to make the wrong choice. But that doesn't mean that every intelligent being has to be like that.
Simply because a being doesn't choose to take an action doesn't mean that it can't.
If an intelligent being can't make the wrong choice, it doesn't have free will.
Nope. As far as I can see from that, you're still assuming binary free will and simply assigning a percentage chance that the robot has it or not. I'm talking about something different: there being a spectrum of free will, with certain actions under your complete control and others only partially or not at all under your control. A robot under Asimov's laws does not have free will.
Now who's assuming binary free will?
In virtually every one of Asimov's stories, robots have displayed a certain amount of free will, even while following the laws. Bicentennial Man, for instance.
Yet this does not mean it would act uncontrollably, because we understand exactly what the limits on its free will are and have determined what danger they present.
No, we haven't. Even when the laws were known in Asimov's robot stories, robot behavior was not 100% predictable.
Besides, you keep bringing up the law. And the law does make a binary distinction between free will and insanity: a man who committed a murder is either guilty or insane.
That's not completely true. A man who committed a murder may be guilty of first-degree murder, second-degree murder, or (in some states) third-degree murder, each with its own definition of how responsible the murderer was for the murder (e.g. were their actions carefully considered or an emotional outburst?).
There is no "well, he was mostly insane, but he could have chosen not to do x, so we give him a reduced prison sentence for that minor mistake and treat him for the insanity which is mostly to blame." Suppose we have a person who would medically qualify as a psychopath today but wouldn't have a year ago because the tendencies weren't quite strong enough yet. Would you say that a murder committed 6 months ago was 100% free will? What about 3 months ago? Yesterday? At which point does the choice suddenly flip between insanity and free will?
You'd have to ask someone who actually wants to argue about the fine points of mental illnesses and the proper treatment of them.
I don't see why you choose to be obsessed with the tests being wrong. Of course they'll be inaccurate. So what do you propose as an alternative?
You've missed the point completely if you think I need to propose an alternative.
If you are going to argue against determining whether a robot has free will via a scientific process, you had damn well better have an alternative. You have completely failed to provide any substantial evidence to support your point. Hell, you seem to keep changing your point. First you were saying that free will was irrelevant, then you started differentiating between the appearance of free will and actual free will, and now you're starting to complain that I haven't suitably addressed partial free will (when, prior to that point, neither had you).
That's like saying "Well science might not give us the right answer so are you proposing voodoo as an alternative?" I'm simply saying that science may not give you a conclusive answer.
And I've already explicitly stated that I believe that a test for free will could give you a wrong answer. Now you're arguing that the tests might be inconclusive? Decide what you're actually objecting to.

Asking me for an alternative simply proves that you're not paying attention to what I'm saying.
No, it just means that I have more respect for myself than you apparently do.
I've stated my solution. I've stated my supporting evidence. I've made an effort to provide substantial evidence.
You have stated no solution. You haven't come up with any supporting evidence. You've only made an effort to come up with unprovable thought experiments and somebody else's fictional characters.
There is no alternative (at least no sensible one). I'm asking what you do when the tests ARE inconclusive, not what other tests you would do instead.
I've acknowledged that the tests might not be right. That the tests might not be conclusive is your argument, one that I'm not participating in until you can prove your point. (And actually have a consistent point.)
I asked you what you do with a robot that you're 50% certain has free will. What you're suggesting so far sounds a little too much like this:
Man 1: I think he's dead. But I'm not a doctor. I can't tell.
Man 2: How certain are you he's dead?
Man 1: I'm 50:50. He could be in a deep coma.
Man 2: Well then. If you're 50% certain he's dead we'll only bury him up to his waist.
I'm sorry that you think it sounds like that's what I'm saying. That's not what I'm saying.
What do you do when your 50% robot is accused of a crime?
You should figure out if the accusation is true or not.
