[...]
Will is the easier part, ironically. It's simply the fact that some output results from input. "Free" is more difficult to define, because it relates to how the output is produced from the input.
[...]
I myself think that every sentient system has free will, because it doesn't simply always produce the same output from a given input; it consciously affects the decision process. It doesn't matter that these processes are bound to matter - everything is. The electrochemical reactions going on in the brain *are* the sentience, consciousness and free will; those things are not just a by-product of the reactions. They are the same thing.
It all boils down to definitions though. If you want to say that free will doesn't exist because brains are just a bunch of matter doing its thing in the head, fine. In my opinion though, the key point is that the brain affects itself as much as the outside world affects it, and thus the freedom of will is IMHO fulfilled - since the output is not simply dependent on the input.
Note that chance has little to do with this. You can obviously make a simple computer produce varying output from the same input (a random number generator is the prime example), but whether this has any coherence, sentience or free will is obvious: no. It's simply random...
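Just to illustrate that last point with a throwaway Python sketch (the stimulus and the replies are made up): the same input yields varying output, yet there is no inner world involved at all, only dice.
[code]
import random

def random_responder(stimulus: str) -> str:
    # Same input, varying output - purely by chance.
    # No internal state, no experience, no "self".
    return random.choice(["advance", "retreat", "hold position"])

# Identical input, different outputs - yet nobody would call this free will.
for _ in range(3):
    print(random_responder("enemy sighted"))
[/code]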
Okay, does that mean a simple self-learning AI playing tic-tac-toe has free will?
Its world is quite limited, but inside this world the AI can make any decision it wants to.
The outside world _and_ its own experiences decide what it will do.
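To make that concrete, here is a minimal sketch of such a learner (the value table, the 0.1 learning rate and all function names are my own invention, not any real AI-library API). Its move depends on the board it is shown (outside world) and on the values it has learned from past games (its own experiences):
[code]
import random
from collections import defaultdict

# Learned value of each (board, move) pair - the agent's "own experiences".
values = defaultdict(float)

def legal_moves(board):
    # board is a 9-tuple of "X", "O" or " "
    return [i for i, cell in enumerate(board) if cell == " "]

def choose_move(board):
    # Decision = outside world (the board) + inside world (learned values).
    moves = legal_moves(board)
    best = max(values[(board, m)] for m in moves)
    return random.choice([m for m in moves if values[(board, m)] == best])

def learn(history, reward):
    # After a game, nudge every (board, move) played toward the outcome.
    for board, move in history:
        values[(board, move)] += 0.1 * (reward - values[(board, move)])
[/code]
The same board with a different history of games behind it gives a different move - that "experience" part is what separates it from pure dice.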
Of course we soon arrive at kara's point that an "über-intelligent being" would no longer have free will, because making mistakes would be counterproductive.
I think we are missing something.
Hm, let's add some hypothetical "feelings".
After having won 10,000,000 times the game begins to feel "boring", so the AI decides to let the human win sometimes too.
It's like a human pilot thinking "this is too boring, I'll let the fighter escape and then chase it again" - and boom, a hidden second fighter shoots him out of the sky.
But when is something boring? I think it's when you don't need to think anymore, or when nothing new "seems" to happen.
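One way to wire up such a boredom signal (a toy sketch; every number in it is invented): boredom creeps up while outcomes stay predictable and drops sharply the moment something unexpected happens.
[code]
import random

class Boredom:
    def __init__(self):
        self.level = 0.0
        self.expected = None  # crude prediction: "the same thing as last time"

    def observe(self, outcome):
        if outcome == self.expected:
            self.level = min(1.0, self.level + 0.05)  # nothing new "seems" to happen
        else:
            self.level = max(0.0, self.level - 0.5)   # a surprise resets the boredom
        self.expected = outcome

    def should_let_human_win(self):
        # The more bored the AI, the more likely it throws the game.
        return random.random() < self.level
[/code]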
And that is a very interesting point: we always have a point of view here, shaped by what the brain concentrates on at the moment.
E.g. a mission can be completely boring to me because I have played it a hundred times already and think I know exactly what will happen.
But it could be that (in this case due to random factors) something entirely different happens and I just don't notice it, because I'm not paying attention.
So the outside world influences me to be bored; normally it should influence me not to be bored, but since I am stuck in one point of view, I don't see this "influence".
Of course this all rather "proves" your point about the outside and inside worlds influencing actions. Feelings are also just part of the inside world.
--
I just want you to help me build my badass AI that takes over the world by infecting old Windows computers with a virus and sending lots of spam mails to cover up the slow takeover

... or wait ...
--
Regarding free will, I think what matters for me personally is that I have "desires" and "feelings", and that I can influence those with my own actions to a certain degree. To me this means I have a free will, which gives me a feeling of "freedom".

(yahoo, they programmed freedom into us ...)
On a personal note, I think most humans just want to be "happy".
lol, that means for an AI to be like a human it must at least have:
- feelings and the desire to have good feelings and be "happy"
* Can you imagine Shivans getting angry because you killed their wingmates?
* Can you imagine a Shivan getting happy from killing humans and thus aiming better?
I really would like to see how an emotion-driven AI would perform in an FS2 battle.
(The above assumes Shivans have free will and feelings, of course; it could also only be true for Terran pilots, some of whom have learned to suppress their emotions better than others ...)
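Something like this, maybe (pure invention on my part - the emotion names, numbers and the accuracy formula have nothing to do with the actual FS2 code):
[code]
class EmotionalPilot:
    def __init__(self, base_accuracy=0.7):
        self.base_accuracy = base_accuracy
        self.anger = 0.0      # rises when wingmates die
        self.happiness = 0.0  # rises with kills

    def on_wingmate_killed(self):
        self.anger = min(1.0, self.anger + 0.3)

    def on_enemy_killed(self):
        self.happiness = min(1.0, self.happiness + 0.2)

    def aim_accuracy(self):
        # Happiness steadies the aim; anger makes the pilot aggressive but sloppy.
        return max(0.1, min(1.0, self.base_accuracy
                                 + 0.15 * self.happiness
                                 - 0.2 * self.anger))
[/code]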
Hell, you could even have a "virtual academy" where all the pilots are created by some "genetic algorithm", and then choose them for a mission based on their ranking and the desired AI level.
E.g. a "Captain" scored perfectly in AI school but still might not be completely emotionally stable (he's insane!).
If things were done that way, the pilots would at least no longer seem quite so predictable.
(Of course humans are sometimes really predictable too; it might be funny to have an AI that thinks "damn, why is this pilot as stupid as the rest of them?")
On the other hand, randomness can be inserted as well - not in the inner world, but in the things happening in the outer world. E.g. a pilot would be chosen based on skill level, but also on random factors (the preferred pilot could be ill) ...
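Roughly like this (again just a sketch: a real genetic algorithm would also want crossover and a proper fitness function, and the 10% illness chance is completely arbitrary):
[code]
import random

class Pilot:
    def __init__(self, genes):
        self.genes = genes    # e.g. [aggression, caution, aim]
        self.ranking = 0.0    # filled in by the "virtual academy"

def academy_generation(pilots, keep=0.5, mutation=0.1):
    # Rank the roster, keep the best, refill with mutated copies of survivors.
    pilots.sort(key=lambda p: p.ranking, reverse=True)
    survivors = pilots[:max(1, int(len(pilots) * keep))]
    children = [Pilot([g + random.gauss(0, mutation)
                       for g in random.choice(survivors).genes])
                for _ in range(len(pilots) - len(survivors))]
    return survivors + children

def pick_for_mission(pilots, ai_level):
    # Choose by ranking vs. desired AI level, but let the outer world
    # interfere: the preferred pilot might simply be ill today.
    for pilot in sorted(pilots, key=lambda p: abs(p.ranking - ai_level)):
        if random.random() > 0.1:  # 10% chance this one is unavailable
            return pilot
    return pilots[0]               # everyone ill? take someone anyway
[/code]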
I think we should just connect FS2 to Second Life and get the pilot data from there - or from The Sims games.

Seriously, if their avatars had any feelings and skills attached, it would be easy to create human profiles out of them.
Okay, I'm really getting off-topic here, but I think we should first create that AI with free will, then we can worry about the consequences.

And what would be better than BtRL to start such an AI with? They have a resurrection ship after all, and they seemingly have (or at least think they have) feelings ... :-)
Which is a cool point: unlike with us humans, you could transfer an AI (even one with free will) and thus preserve the mind, or even copy it at some point - but I guess that would be as unethical as cloning someone ...
And for the original poster:
I also don't think the laws will work to protect society, since a robot cannot properly satisfy all of them. A robot cannot even blow itself up (that breaks law 3, and afterwards laws 1 and 2).
There can be a situation where it has to decide against one of them:
The robot has a gun, an "enemy" human has a gun, and the human it is protecting has no gun. The "enemy" will shoot in 5 seconds.
There is no time to do anything other than fire the gun at the enemy to protect the human, thus breaking law 1.
If it doesn't fire, it is also breaking law 1 (through inaction).
If it doesn't, it is also breaking law 2 (it was presumably ordered to protect the human).
If it blows itself up to avoid deciding, it breaks laws 3, 2 and 1.
If it goes for whatever breaks the fewest laws, that means firing the gun.
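"Break the fewest laws" is basically a scoring function. A toy sketch with exactly the three options from the standoff above (the tie-break rule is my own addition):
[code]
# Each action mapped to the set of laws it would violate in the standoff.
violations = {
    "fire at enemy": {1},        # harms a human directly
    "do nothing":    {1, 2},     # harm through inaction, order disobeyed
    "self-destruct": {1, 2, 3},  # human dies, order disobeyed, robot destroyed
}

def least_bad_action(violations):
    # Fewest violated laws wins; ties go to the action whose worst
    # violation is the lowest-priority (highest-numbered) law.
    return min(violations,
               key=lambda a: (len(violations[a]), -min(violations[a])))

print(least_bad_action(violations))  # -> fire at enemy
[/code]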
And I also fear that overprotection might occur.
cu
Fabian
PS: Is anyone else interested, or is there already a long-buried thread about a new AI for FS2?