Author Topic: Asimovian Thought Experiment  (Read 12205 times)


Offline Mefustae

  • 210
  • Chevron locked...
Re: Asimovian Thought Experiment
"I did not murder him!"

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
That's another good point! Can a mentally insane person/robot be responsible for a crime?

Well, surely that's why criminal law has the 'not guilty by reason of insanity or mental defect' category.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline jr2

  • The Mail Man
  • 212
  • It's pronounced jayartoo 0x6A7232
    • Steam
Re: Asimovian Thought Experiment
(unless they can modify their structure)

I think for a robot to be sentient and/or have 'free will', it would have to be able to change its structure, or at least its programming, don't you?  (Programming = brain state, I think.)

I don't think it'd be possible, but if it were... I think that'd be a requirement.  Whenever you do or think something, your brain changes slightly.  (Yes, it records it all, if what I've heard is true... it just can't recall it on demand.)

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Hmm... let's see:

For now, let us assume that people have no souls/spirits. All we get is our bodies. Therefore, in order to have any feelings or thoughts, there must be some kind of physical reaction within our body. These reactions would take place whenever certain chemicals meet, or maybe in the way the matter in our brain reacts to various patterns of light. This would imply that we are indeed robots with some form of very complex programming, and that everything we do is due to the presence (or lack thereof) of certain matter. But then how are people within the same species so different from each other? Sure, everyone generally reacts the same to some general things, but there are still minor differences. And wouldn't this mean that everything we've accomplished is a coincidence?



--

There. Any thoughts?


EDIT: Hmm... these threads should be split.

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Karajorma makes a fortune selling thesizzler graphite and telling him that since it contains the same element it's the same thing as diamond.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Why is everyone being so rude lately?

First Mefustae and now Kara, plus a crapload of people at my school, whom none of you know. :confused:

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Actually I was making a point about how the chemical make-up of the brain is not as important as the way the neurons are arranged. In the same way, it is not the atoms in either graphite or diamond that are important (they're in fact identical) so much as the way they are arranged. Or how the make-up of your hard drive is not what changes the data; it's the way you arrange it.
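
To illustrate with a quick sketch (the bytes and words here are just an arbitrary example I made up, nothing from the actual debate):

Code:
# Same "atoms" (bytes), different arrangement, different data.
atoms = list(b"ader")                            # raw material: the bytes a, d, e, r

read = bytes(atoms[i] for i in (3, 2, 0, 1))     # -> b'read'
dare = bytes(atoms[i] for i in (1, 0, 3, 2))     # -> b'dare'

print(sorted(read) == sorted(dare))              # True  - identical make-up
print(read == dare)                              # False - only the arrangement differs
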
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline jr2

  • The Mail Man
  • 212
  • It's pronounced jayartoo 0x6A7232
    • Steam
Re: Asimovian Thought Experiment
Why is everyone being so rude lately?

First Mefustae and now Kara, plus a crapload of people at my school, whom none of you know. :confused:

I don't know.  But I don't like it.  I think I've even noticed myself getting a little short sometimes.  :eek:

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Actually I was making a point about how the chemical make-up of the brain is not as important as the way the neurons are arranged. In the same way, it is not the atoms in either graphite or diamond that are important (they're in fact identical) so much as the way they are arranged. Or how the make-up of your hard drive is not what changes the data; it's the way you arrange it.

Okay, I think I get it. I still don't see why that comment needed to be made in that way, though  :doubt:. Of course, since I let it get to me this much, I suppose I deserve it :blah:.

EDIT: I think I'm gonna stay out of this sub-forum until I take debate, or at least until I can expand my knowledge to a point at which I can compete. It would be most beneficial for everyone.
« Last Edit: May 25, 2007, 06:08:53 pm by thesizzler »

 

Offline Mefustae

  • 210
  • Chevron locked...
Re: Asimovian Thought Experiment
EDIT: I think I'm gonna stay out of this sub-forum until I take debate, or at least until I can expand my knowledge to a point at which I can compete. It would be most beneficial for everyone.
Why? Your skill in arguing has nothing to do with it; you merely lack the pertinent information to hold your own against the likes of Kara. You have the internet fully at your disposal, so get off your ass, go out and find the information you're trying to convey. You don't need to be a master debater (tee-hee) just to have an argument; you just need to find the information to back up your statements and ultimately keep an open mind.

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Yes, let's get into an argument about this too!

--

Back on topic! Really this time!

I apologize for veering this thread off topic. Please do not lock it.
« Last Edit: May 25, 2007, 08:40:23 pm by thesizzler »

  

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Actually I'm still arguing both points and free will (or making it look as though it exists) is very important in AI topics so I don't think they should be split at all.

Because for the purposes of this discussion it is irrelevant. There is no testable difference between free will and pseudo free will. At the point where an AI reaches a level of intelligence where nothing can ever tell if it has free will or not (including itself) then you must act as though it has free will until you can prove otherwise.

I am very confused about what you're saying now. You say that free will is important in AI topics (A category that this thread falls under), but then you immediately say that for this discussion it's irrelevant.

But why are you making that assumption? I'm still starting from the premise that it's unprovable in either direction. Science and philosophy must always assume that there are no absolute truths and everything must be questioned.

You've gone straight to an opposing viewpoint from mine and are in essence saying that the possibility that free will doesn't exist is so horrific an idea that it should not be considered because society would fall apart if we think about it.

Just because society is based on the concept of free will doesn't mean that it is actually correct. Pulling arguments out about it based on what would happen if we abandon it is like saying that the "If God didn't exist, we'd have to create him" argument means that you must believe in God.

And you've gone straight to exaggerating my argument to a point that it becomes totally meaningless. :p

Society assumes free will (You seem to agree with this concept in your last paragraph). So in order to reasonably evaluate how a robot would perform under society we must deal with that assumption. My example about what would happen if society didn't assume free will was to point out criteria of society that support my claim that society does assume free will. We hold people accountable for their actions and punish them based on ideas of deterrence.

But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it first he must prove it exists and secondly must prove that robots can not attain it. If he can't do both things then the issue of free will is entirely irrelevant to his argument.

I'm mostly arguing its relevance to the discussion.

The old Turing Test had it that when you couldn't tell the difference between a conversation with a machine and one with a human you had an AI. I'm simply pointing out a similar one when it comes to AI.

Let me give you a thought experiment. Tomorrow a company reveals that it has created an AI. To all intents and purposes it appears to have free will. Would you deny the AI rights on the grounds that it might not have them and that it could simply be working on pre-programming too complex for us to have been able to spot?

Now suppose someone in the press points out that the AI may have been preprogrammed to act as though it had free will and then wipe its programming later on and take over the world. They have no evidence of this or even a reason to suspect it may be true but it's a possibility. Do you still deny the machine rights?

The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary. Which again proves my point. Whether a machine has real free will or simply appears to have it is irrelevant. The only time you can act as though it doesn't is after you have proved that it doesn't.

Why assume that the robot has free will? It is far easier to design something that appears to have free will, than it is to design something that actually has free will. The former has (arguably) already been done, but the latter remains undone, at least as far as the general public has been told.

Under current US law, robots don't have any significant rights. There is no reason for the entire legal system to change its position on the matter to the opposite of what it currently is based on appearances alone.

Simply to change a person's status from 'innocent' to 'guilty', a detailed procedure is followed, even for relatively minor offenses. It follows that to change something's status from a non-person to a person, a procedure that is at least as detailed as the former must be used. Changing the status of something from a non-person to a person is a much more significant act than convicting it of a minor theft.

In the case of a minor theft, mere appearances would not be enough to convict the person and change their status from innocent to guilty. (At least, in theory) The court would require evidence - arguing that the person appears to be guilty and so they must have committed the crime is not how the legal system is supposed to work.

The robot's appearance of having free will is important. Whether it actually does or not wouldn't matter. The world would be full of people claiming that the robot doesn't have free will and others claiming that it does, but if neither side could prove their argument the robot would still be the one placed on trial for the crime and its owner would be a co-conspirator. Anything else allows the robot to go free even though the possibility exists that it is a criminal.

Convicting something without proving that it had a choice in the matter is no more just than convicting someone without proving that they committed the crime.

Suppose a guard is tried for robbing a safe. His fingerprints are found on the safe; a couple of his hairs are found with the money. The guard claims that he was ordered to do so at gunpoint. It would be unjust to convict the guard without disproving the possibility that he robbed the safe under coercion, because if he did not do it of his own free will, it is a significantly less severe offense.



Now, it may be argued that it is basically impossible to prove that a robot has free will - but I believe it is possible to verify this to a greater extent than simply whether a robot appears to have it or not. If the company which built the robot has intentionally designed it to follow every order given to it by a human being, it would have no more free will than a simple speech command program on an ordinary PC. It would be helpful if you gave what you believe is sufficient evidence for a robot to 'appear' to have free will.
-C

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Okay, I think I get it. I still don't see why that comment needed to be made in that way, though  :doubt:. Of course, since I let it get to me this much, I suppose I deserve it :blah:.


Well I was probably ruder than I needed to be and for that I apologise.

However, your argument was possibly the most threadbare argument I'd seen in a long time. Just because we have the same chemical make-up doesn't mean that we should all act exactly the same. I would have thought that this was pretty obvious. You were making a poor attempt to attribute the majority of mental differences between two humans to their souls, and that's very lazy reasoning.

Actually I'm still arguing both points and free will (or making it look as though it exists) is very important in AI topics so I don't think they should be split at all.

Because for the purposes of this discussion it is irrelevant. There is no testable difference between free will and pseudo free will. At the point where an AI reaches a level of intelligence where nothing can ever tell if it has free will or not (including itself) then you must act as though it has free will until you can prove otherwise.

I am very confused about what you're saying now. You say that free will is important in AI topics (A category that this thread falls under), but then you immediately say that for this discussion it's irrelevant.

The appearance of free will is important. Actual free will isn't except at a philosophical level. I should have made that more clear but I thought that the rest of the post made that point and I didn't want to state it yet again.

Furthermore you're also confused because there are two debates going on here and you're having trouble separating them. Partly my fault but it looks like clarification is needed.

1) There's the debate between thesizzler and myself, where he asserted that humans have free will and robots don't. However, as he cannot definitively prove either point, it is irrelevant to the discussion. It's an unprovable assertion and as such has no place in a scientific discussion.

2) There's the debate you started, and which I've continued, about whether free will or merely its appearance is important in AI topics.

Quote
But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it first he must prove it exists and secondly must prove that robots can not attain it. If he can't do both things then the issue of free will is entirely irrelevant to his argument.

I'm mostly arguing its relevance to the discussion.

And I'm arguing that actual free will is irrelevant. Only the appearance.

Let me ask you this. How do you test for free will? How do you design a test that will give differing results for a machine with free will and one designed to appear as though it has free will?

I don't think you can. Any test can simply be defeated by better programming. So given that you can't test for free will, the only important matter is whether or not a machine appears to have free will, i.e. can it pass every single test for free will that we can throw at it?

That's what the thought experiment I posted was about.
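
As a rough sketch of what I mean (the interview questions and answers here are entirely made up), any fixed battery of free will questions can be passed by nothing more than a lookup table:

Code:
# Hypothetical "free will interview" defeated by a plain lookup table.
CANNED_ANSWERS = {
    "Why did you choose that?": "I weighed the options and preferred it.",
    "Could you have chosen otherwise?": "Yes, and on another day I might have.",
    "Do you feel responsible for your choices?": "I do.",
}

def scripted_subject(question: str) -> str:
    # No deliberation at all - just answers written in advance by a programmer.
    return CANNED_ANSWERS.get(question, "Let me think about that for a moment.")

# The script gives the same answers a genuinely free agent might give,
# so the interview alone can't separate free will from the appearance of it.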

Quote
Why assume that the robot has free will? It is far easier to design something that appears to have free will, than it is to design something that actually has free will. The former has (arguably) already been done, but the latter remains undone, at least as far as the general public has been told.

I didn't say the robot had free will. I didn't say it hadn't. I said that to all intents and purposes, it appears to have free will. It may or may not have. Remember that a robot which actually did have free will would pass the same tests with exactly the same results. Now either you've misunderstood the experiment or you've basically stated that no robot can ever be assumed to have free will because it may always be acting under the effects of programming that makes it appear as though it has free will.

Now that's a very different argument from the one I thought you were making so I'm going to back out now until you've clarified whether that actually is the point you were making or if that was a misunderstanding of the thought experiment.
« Last Edit: May 26, 2007, 03:26:21 am by karajorma »
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
The appearance of free will is important. Actual free will isn't except at a philosophical level. I should have made that more clear but I thought that the rest of the post made that point and I didn't want to state it yet again.

Furthermore you're also confused because there are two debates going on here and you're having trouble separating them. Partly my fault but it looks like clarification is needed.

1) There's the debate between thesizzler and myself, where he asserted that humans have free will and robots don't. However, as he cannot definitively prove either point, it is irrelevant to the discussion. It's an unprovable assertion and as such has no place in a scientific discussion.

2) There's the debate you started, and which I've continued, about whether free will or merely its appearance is important in AI topics.

Please stop assuming you have the power to define what a debate is or isn't, and therefore have the right to judge whether people are confused or not. Making inferences and wantonly judging people's arguments and generally being rude may be great for driving people off, but it doesn't really prove anything.

I'm confused because you've claimed that free will is important to the thread, but irrelevant to the discussion, and you've talked about the 'appearance' of free will and pseudo-free will. You've agreed that assuming something exists even though it can't be proven or disproven is a fallacy, but your entire argument about free will seems to be based on the idea that you must assume free will exists, even if it can't be proven or disproven.

Furthermore, you yourself stated

But why are you making that assumption? I'm still starting from the premise that it's unprovable in either direction. Science and philosophy must always assume that there are no absolute truths and everything must be questioned.

Which seems to imply that you're assuming that this debate is philosophical in nature, so actual free will is important to the discussion. Yet you state above that actual free will isn't important except in a philosophical discussion, as if this isn't one.

As far as I'm concerned:
1) Free will is important to the question of robot rights because society is based on the idea that free will exists and that its members (Persons) have free will.
2) Proving that one person is responsible for violating another person's rights requires that it be proven beyond all reasonable doubt.
3) It logically follows from these two premises that if we are to prove that some category of individuals gains the mantle of personhood, we must prove beyond all reasonable doubt that they fall under the same assumed definition of members (having free will).
4) Because robots are not currently considered persons, the mere appearance of free will is not enough for them to be considered persons (and therefore they cannot be guaranteed the same rights and protections).

Quote
But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it first he must prove it exists and secondly must prove that robots can not attain it. If he can't do both things then the issue of free will is entirely irrelevant to his argument.

I'm mostly arguing its relevance to the discussion.

And I'm arguing that actual free will is irrelevant. Only the appearance.

Let me ask you this. How do you test for free will? How do you design a test that will give differing results for a machine with free will and one designed to appear as though it has free will?

I don't think you can. Any test can simply be defeated by better programming. So given that you can't test for free will, the only important matter is whether or not a machine appears to have free will, i.e. can it pass every single test for free will that we can throw at it?

That's what the thought experiment I posted was about.

The same argument can easily be used to contest evolution on the basis of intelligent design. Someone may argue that 'nobody was around to see it happen', or 'maybe God just left evidence to fool all the nonbelievers'. Yet most scientists will assert that evolution is testable and proven. They still can't disprove that some powerful intelligence exists and played or is playing some part in evolution, but they can establish enough testable evidence for evolution that they consider it to have been proven beyond all reasonable doubt. So evolution is regarded as an actual 'fact'.

I believe that a proper test would take a long time to design, and would be designed by somebody with some kind of experience with AI and/or psychology. It sounds to me that if I come up with some test, you will simply come up with some way of refuting it or going around it, and while it might be interesting to see the outcome, I don't want to start another one of your sub-debates. :p

However, I will say that I believe it is possible to devise such a test, given that there is significant precedent for proving that an individual is or was incapable of acting normally (e.g. the insanity plea). I would assume that such a test would involve questions that would test the robot's ability to judge right from wrong, and its ability to interact with people and judge from those interactions what was appropriate or inappropriate in a given situation.

I also believe that such a test would involve an investigation into the robot's software and hardware, in order to determine whether there were hidden triggers which would restrict the robot's ability to act freely. Presumably the test would also interview persons who had interacted with the robot outside of the court setting, in order to gauge the robot's behavior in a more realistic environment. I wouldn't rule out long-term observation or checkups.

Through such a test, I believe that you could prove beyond reasonable doubt that a robot possessed free will comparable to a standard human.
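
Roughly, here's how I'd picture the structure of such an assessment. This is only a sketch; every field name and the pass/fail rule are invented for illustration, not an actual procedure:

Code:
# Rough sketch of the multi-part assessment described above (all names invented).
from dataclasses import dataclass

@dataclass
class FreeWillAssessment:
    moral_judgement_interview: bool   # can it reason about right and wrong?
    social_interaction_review: bool   # behaves appropriately with people?
    code_and_hardware_audit: bool     # no hidden triggers restricting its actions?
    third_party_interviews: bool      # consistent behaviour outside the court setting?
    long_term_observation: bool       # holds up over extended checkups?

    def verdict(self) -> str:
        return ("free will beyond reasonable doubt"
                if all(vars(self).values()) else "not proven")

print(FreeWillAssessment(True, True, True, True, True).verdict())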

Past that point, I believe that in asserting that a robot merely appears to have free will, you would automatically have to question whether all of humanity has free will. If this universal free will is what you claim is irrelevant to this discussion then I would agree, because you would be attempting to prove that either everyone has free will, or nobody has free will.

I didn't say the robot had free will. I didn't say it hadn't. I said that to all intents and purposes, it appears to have free will. It may or may not have. Remember that a robot which actually did have free will would pass the same tests with exactly the same results. Now either you've misunderstood the experiment or you've basically stated that no robot can ever be assumed to have free will because it may always be acting under the effects of programming that makes it appear as though it has free will.

Now that's a very different argument from the one I thought you were making so I'm going to back out now until you've clarified whether that actually is the point you were making or if that was a misunderstanding of the thought experiment.

You stated "The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary."

Practically speaking, there's not any difference between that and assuming that the AI has free will, because you'll be treating it the same way and taking the same actions. So for all practical purposes, you are assuming that the AI has free will. Just because you point out that you know that the AI might not have free will doesn't excuse you from providing justification for your actions.
-C

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Please stop assuming you have the power to define what a debate is or isn't, and therefore have the right to judge whether people are confused or not.


The only assumption I made is that you were telling the truth.

Quote
I am very confused about what you're saying now.

It seemed fairly obvious that you were confusing the answers I was giving to part one of the debate with the answers for part two. I even said that was partly my fault for not making things clear enough. So I attempted to show the two points I was debating. If you wish to debate other things go ahead. But that is what I am talking about. I'm not assuming the right to do anything except clarify what I'm talking about so that I don't waste my time arguing at cross purposes. 

Quote
I'm confused because you've claimed that free will is important to the thread, but irrelevant to the discussion, and you've talked about the 'appearance' of free will and pseudo-free will. You've agreed that assuming something exists even though it can't be proven or disproven is a fallacy, but your entire argument about free will seems to be based on the idea that you must assume free will exists, even if it can't be proven or disproven.

I thought I had clarified all these points earlier, but let's see if I can make it clear.

1. Whether or not a robot has actual free will is irrelevant if it can pass every test you can throw at it. There is a philosophical difference, of course, but that's irrelevant to a debate on robot rights unless we can prove that humans also have free will.
2. Human society assumes free will. But only for humans. However, as soon as a robot comes along with free will (or the appearance of free will), society will have to address this. And in a similar fashion it will have to assume that any robot which can pass free will tests has as much free will as a human. As I stated before, this has no effect on whether or not it (or we, for that matter) actually has free will.

Quote
I believe that a proper test would take a long time to design, and would be designed by somebody with some kind of experience with AI and/or psychology. It sounds to me that if I come up with some test, you will simply come up with some way of refuting it or going around it, and while it might be interesting to see the outcome, I don't want to start another one of your sub-debates. :p

Actually I wasn't interested in the test itself. All that mattered was that you decided in your mind how you could test for free will. Because the only conclusion you could come to was the one below.

Quote
However, I will say that I believe it is possible to devise such a test, given that there is significant precedent for proving that an individual is or was incapable of acting normally (e.g. the insanity plea). I would assume that such a test would involve questions that would test the robot's ability to judge right from wrong, and its ability to interact with people and judge from those interactions what was appropriate or inappropriate in a given situation.

I also believe that such a test would involve an investigation into the robot's software and hardware, in order to determine whether there were hidden triggers which would restrict the robot's ability to act freely. Presumably the test would also interview persons who had interacted with the robot outside of the court setting, in order to gauge the robot's behavior in a more realistic environment. I wouldn't rule out long-term observation or checkups.

Through such a test, I believe that you could prove beyond reasonable doubt that a robot possessed free will comparable to a standard human.

In other words, you'd do your entire array of tests and end up exactly where I said you would: with a robot that may or may not have free will but which, to all intents and purposes, appears to have it. You still wouldn't have proved free will. You'd simply have run out of tests for it and drawn the conclusion that "As far as I can tell, it has free will."

Now do you see why I said that only the appearance of free will is important?

Quote
Past that point, I believe that in asserting that a robot merely appears to have free will, you would automatically have to question whether all of humanity has free will. If this universal free will is what you claim is irrelevant to this discussion then I would agree, because you would be attempting to prove that either everyone has free will, or nobody has free will.


Exactly the point I have been trying to make since this discussion started. All that matters is whether the robot can pass every test and appears to have free will. It doesn't matter if it actually has it or not. That's a philosophical point largely irrelevant to the topic.

Quote
You stated "The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary."

Practically speaking, there's not any difference between that and assuming that the AI has free will, because you'll be treating it the same way and taking the same actions. So for all practical purposes, you are assuming that the AI has free will. Just because you point out that you know that the AI might not have free will doesn't excuse you from providing justification for your actions.

There is no difference. That's exactly the point I was making. Humans pass every test for free will we have. If a robot can do the same, then it is entitled to the same basic assumption that we make for humans: that although we can only say that both humans and robots appear to have free will, we must assume they do.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 
Re: Asimovian Thought Experiment
I see what you are saying: we can't prove that free will exists in either humans or robots, so given that robots appear to have free will in this scenario, we must assume they have it, as we do for ourselves, and treat them as such. Correct?

 
Re: Asimovian Thought Experiment
Now as for the main topic (which one is the main one now :-P )

If we did end up assuming that a sentient robot has free will, laws such as the one above would not work for such a robot. While these laws are sound, they are also very basic.

Now, I believe that we wouldn't be able to program a robot that appears to have free will with laws such as these. Any kind of robot that appeared to have free will would most likely have some kind of programming that lets it make choices based on input.

So therefore, if we are going to say the robot has free will (assumed, not proven, just like the rest of us), then rules such as Asimov's aren't viable.

The robot would have to be "brought up", so to speak, to believe in the same values that we do: that everyone's life is precious, that everyone has the freedom to do what they wish (within reason), and it would have to be taught right from wrong.



So, to conclude: as long as the robot's intelligence isn't high enough for it to be assumed to have free will, or at least to be considered equal to us in rights and opportunities, then I believe the three rules would work.

As soon as a robot reaches this assumed free will and has equal rights and opportunities to us, it will be beyond what the three rules can do to control it. In which case the rules couldn't be used at all, because with such a rule in place the robot would always be considered lesser than a human, even if it can perform the same tasks, think up the same things, and do the same work or more. And the robot most likely wouldn't be (if you want to use an emotion) happy with such a state of, if you could call it that, slavery.

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
I tend to agree with you here. A robot can't have even the appearance of free will and follow Asimov's laws. Of course, it can have free will except when following the three laws, which Asimov seemed to consider a fair compromise.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 
Re: Asimovian Thought Experiment
I wish I could have summed it up in just two lines, lol, but I guess the reasoning as to why helps in case someone decides to argue against it. :)

 

Offline Herra Tohtori

  • The Academic
  • 211
  • Bad command or file name
Re: Asimovian Thought Experiment
Ah, free will... To debate that, you first need to define free will, its prerequisites, and what makes a will "free".

The primary prerequisite is obviously that the universe is non-deterministic and thus includes the effect of chance. Otherwise everything would be simply mechanical... but according to pretty accurate experiments in quantum mechanics, chance is actually an in-built part of how the universe functions. So the first prerequisite is apparently fulfilled.

Secondly, we need a definition of both "will" and "free" in this context.

Will is the easier part, ironically. It's simply the fact that some output results from input. "Free" is more difficult to define, because it's related to how the output is produced from the input.

For example, a human brain gets information and makes a decision. Some claim that this is purely based on the electrochemical reactions shooting between neurons, firing up different patterns and so forth, and that free will would thus be an illusion. Some claim that for will to be free, there would be a need for some kind of matter-independent entity, call it soul if you may, that has power to affect decisions.


I myself think that every sentient system has free will, because it's not simply always producing the same output from a certain input, but is consciously affecting the process of decision. It doesn't matter that these processes are bound to matter - everything is. The electrochemical reactions going on in the brain *are* the sentience, consciousness and free will; those things are not just a by-product of the reactions. They are the same thing.


It all boils down to definitions, though. If you want to say that free will doesn't exist because brains are just a bunch of matter doing its thing in the head, fine. In my opinion though, the key point is that the brain affects itself as much as the outside world affects it, and thus the freedom of the will is IMHO fulfilled - since the output is not simply dependent on the input.

Note that chance has little to do with this. You can obviously make a simple computer produce varying output from the same input (a seeded random number generator is the prime example of this), but whether or not this has any coherence, sentience or free will is obvious - no. It's simply random...
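
A trivial sketch of that last point (using nothing but a standard random number generator): the same input can produce varying output, and nobody would call it free will.

Code:
import random

def respond(prompt: str) -> str:
    """Same input, varying output - chance alone, not will."""
    return random.choice(["yes", "no", "maybe", "ask again later"])

print(respond("Shall we go for a walk?"))
print(respond("Shall we go for a walk?"))   # may differ, but it's only randomness
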
There are three things that last forever: Abort, Retry, Fail - and the greatest of these is Fail.