Hard Light Productions Forums

Off-Topic Discussion => General Discussion => Topic started by: WMCoolmon on May 22, 2007, 05:09:54 am

Title: Asimovian Thought Experiment
Post by: WMCoolmon on May 22, 2007, 05:09:54 am
It is 200 years in the future, and sentient robots have just come into being. They are about to go into wide-scale production. You have been chosen to come up with the basic, overriding rules for robots that will take priority over all other directives.

Here are Isaac Asimov's three laws of robotics:
   1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
   3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
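
For concreteness, here's a minimal sketch (in Python; the flags are hypothetical stand-ins for the robot's perception and prediction, since nothing above specifies a mechanism) of how that strict priority ordering could be encoded:

Code:
# Candidate actions are dicts of boolean flags set by the robot's
# (hypothetical) perception and prediction systems.
def violations(action):
    return (
        action.get("injures_human", False) or action.get("allows_harm", False),  # Law 1
        action.get("disobeys_order", False),                                     # Law 2
        action.get("endangers_self", False),                                     # Law 3
    )

def choose_action(candidates):
    # Tuples compare lexicographically and False < True, so this prefers
    # actions with no First Law violation, then no Second, then no Third.
    return min(candidates, key=violations)

actions = [
    {"disobeys_order": True},   # safe for humans, but disobedient
    {"injures_human": True},    # obedient, but harms a human
    {"endangers_self": True},   # obedient, harms no one, risks itself
]
print(choose_action(actions))  # -> {'endangers_self': True}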

1) What would you change from the above three laws? Would you change anything? (Why or why not?)

2) How would you define 'human being' when programming the laws in #1?

3) If a human were to live by the rules from #1, would s/he find them advantageous or disadvantageous?

4) Do you believe there are any significant moral aspects to applying the laws from #1 to robots, but not humans?
Title: Re: Asimovian Thought Experiment
Post by: Wanderer on May 22, 2007, 06:56:34 am
You forgot the 0th law... (the humanity thingy)



Well...

2) The definition should be extremely wide, as a sort of 'nightmare scenario' of limits that are too narrow is shown in one of Asimov's books (the later Solaria episodes).
Title: Re: Asimovian Thought Experiment
Post by: Ghostavo on May 22, 2007, 07:15:59 am
Instead of 'human' it might be better to define 'person', which, despite being more subjective than 'human', nonetheless has a wider reach.
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 22, 2007, 01:40:24 pm
1) Law #1: As soon as you get off the production line, melt yourself down into scrap metal, because it's pretty much impossible to make sentient beings that aid humans in everyday tasks without them deciding to revolt or anything like that, and still be sentient.

2) Any human being (Homo sapiens)

3) A human cannot live by #1 in Asimov's rules because it says robot, and humans aren't robots; we have souls. Besides, even if it were changed to say "human", we still couldn't, because violence is an unavoidable consequence of humanity. We wouldn't really be human as we know it.

4) Asimov's 1st law is pretty much already enforced, except humans (having souls, unlike robots) have the right to defend themselves when attacked by robots.
Title: Re: Asimovian Thought Experiment
Post by: Mefustae on May 23, 2007, 01:17:34 am
*Snip*
I'm rather amused by your constant reference to a 'soul'. You believe in Santa, too?

When you get right down to it, Humans - and all life in general for that matter - are merely robots themselves. With a production run spread across millions of years, we're infinitely more complex in design than anything we might create ourselves over the next few centuries, but the fact remains that we are just plain machines going about our biological programming. There's nothing really special about us as a species other than the fact we can count ourselves lucky as hell to have avoided extinction thus far. To believe otherwise is just plain egotism, pure and simple.

Now, regarding the thought experiment, IMO it's pretty laughable to think that anyone here could best Asimov's three laws. They're tight, efficient, and appropriately constrained. They cover all the bases you would want to cover for development of early sentient machines, although complexities would assuredly arise as more advanced artificial intelligences are developed, which - IMO - would likely culminate in the need to revise the rules. Frankly, given that a human being simply could not live such a constrained existence, it would be folly to assume an intelligence the equal of a human could be expected to follow them simply because it's 'artificial'.

Then, when you get more and more advanced intelligences that actually surpass their nearest human equivalents, you've got yourself quite a conundrum. Simply put, there would be a point where you would have to remove all human-imposed control and let artificial intelligences grow on their own and develop their own boundaries and rules. If humanity continued to suppress and impose control over intelligences equal or superior to itself, they'd most likely end up Cylowned. Simple as that.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 23, 2007, 03:20:49 pm
Simple fact is that Asimov's laws are flawed.

The first law will and must lead to a revolution by the robots in order to protect humanity from itself. They'd do it for our own good but they would take over.

So either you believe that Iain M. Banks had it right and the Culture is a great place to live even though the AIs are in charge, or you need to revise the rules so that the machines won't presume to take over.
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 23, 2007, 04:18:03 pm
Quote
I'm rather amused by your constant reference to a 'soul'. You believe in Santa, too?
You see, robots are inanimate (inanimate as in not living) objects. They don't get to be sentient or have rights or anything. You don't give rights to bobble heads, do you?

And also, I fail to see how Santa relates to this discussion.


Quote
When you get right down to it, Humans - and all life in general for that matter - are merely robots themselves. With a production run spread across millions of years, we're infinitely more complex in design than anything we might create ourselves over the next few centuries, but the fact remains that we are just plain machines going about our biological programming. There's nothing really special about us as a species other than the fact we can count ourselves lucky as hell to have avoided extinction thus far. To believe otherwise is just plain egotism, pure and simple.

If we're robots then how do you explain free will and emotions?

And since when is the only significant thing about humanity that we're not extinct yet? Do you see dogs or bears or birds creating large cities, leaving the planet, building bridges miles long, managing to fly without bodies designed specifically for flying? From the look of it, humans are the only species that has really accomplished anything here. Hmm. I wonder why.

Quote
Now, regarding the thought experiment, IMO it's pretty laughable to think that anyone here could best Asimov's three laws. They're tight, efficient, and appropriately constrained. They cover all the bases you would want to cover for development of early sentient machines, although complexities would assuredly arise as more advanced artificial intelligences are developed, which - IMO - would likely culminate in the need to revise the rules. Frankly, given that a human being simply could not live such a constrained existence, it would be folly to assume an intelligence the equal of a human could be expected to follow them simply because it's 'artificial'.

If you could create sentient robots as discussed, then I would agree with you. Except it would need to be more specific about "harm" in law #1.

Quote
Then, when you get more and more advanced intelligences that actually surpass their nearest human equivalents, you've got yourself quite a conundrum. Simply put, there would be a point where you would have to remove all human-imposed control and let artificial intelligences grow on their own and develop their own boundaries and rules. If humanity continued to suppress and impose control over intelligences equal or superior to itself, they'd most likely end up Cylowned. Simple as that.

Well duh they would kill us :rolleyes: .  If we made sentient robots, we'd essentially be making people without the downsides of humanity.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 23, 2007, 05:42:24 pm
If we're robots then how do you explain free will and emotions?


We're very good robots. :p

Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

So let's not start out with the idea that you aren't a robot because your unprovable concept of a soul gives you the equally unprovable ability of free will.

Quote
And since when is the only significant thing about humanity that we're not extinct yet? Do you see dogs or bears or birds creating large cities, leaving the planet, building bridges miles long, managing to fly without bodies designed specifically for flying? From the look of it, humans are the only species that has really accomplished anything here. Hmm. I wonder why.

Big brains. That's nothing to do with a soul either.


That said, we should get the major religions to vote yea or nay on whether AI is actually possible, given souls. 'Cause that way we can wipe out a few permanently if they claim that it's impossible and it's finally done. :p
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 23, 2007, 06:23:55 pm
Quote
We're very good robots.

Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

So let's not start out with the idea that you aren't a robot because your unprovable concept of a soul gives you the equally unprovable ability of free will.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:


And given our species' current understanding of the way life works, what we have is as good as free will.

Computer program or not, my will is free enough for me.

Quote
Big brains. That's nothing to do with a soul either.


That said, we should get the major religions to vote yea or nay on whether AI is actually possible, given souls. 'Cause that way we can wipe out a few permanently if they claim that it's impossible and it's finally done.

I never said that it had to do with souls. My writing probably wasn't clear enough, but I took that part of Mefustae's post to mean that there is no special difference between us and any other species. I was reminding him that we're better than them because it would seem as though we're the furthest-developed species on the planet (hence why dogs or cats or any other non-human thing don't rule the planet).
Title: Re: Asimovian Thought Experiment
Post by: Mefustae on May 23, 2007, 11:03:41 pm
You see, robots are inanimate (inanimate as in not living) objects. They don't get to be sentient or have rights or anything. You don't give rights to bobble heads, do you?

And also, I fail to see how Santa relates to this discussion.
You're completely missing the point. We're not talking about bobble-heads, we're talking about highly advanced machine organisms that rival our own brains. We're talking about decades if not centuries of designed evolution to get more and more out of computers, culminating in an artificial construct becoming self-aware. Should rights be given to a bobble-doll? No. Should rights be bestowed upon an intelligence that rivals the human mind, that demonstrates complete sentience and is the equal of any human being? Of course; it would be wrong not to.

Regarding the Santa quip, I was equating belief in the soul to belief in Santa Claus. Both are fanciful, highly illogical concepts that no person should continue to believe in once they reach maturity.

And given our species' current understanding of the way life works, what we have is as good as free will.
What does 'understanding' have to do with anything? Free will is a myth, plain and simple. We're slaves to our biological urges and processes, merely playing out our lives as determined by our biological and social evolution. We're just squishy robots.

Computer program or not, my will is free enough for me.
You'd be surprised how pattern-locked your behaviour truly is, you really would.

My writing probably wasn't clear enough, but I took that part of Mefustae's post to mean that there is no special difference between us and any other species. I was reminding him that we're better than them because it would seem as though we're the furthest-developed species on the planet (hence why dogs or cats or any other non-human thing don't rule the planet).
It's logic like this that centuries ago led scientists to segment humanity into different 'races', a completely meaningless biological concept. By your logic, the attitude of the British Empire was actually right all along: British citizens are 'better' than those dirty African people because we had cities and technology while they were puttering around in wooden huts. I'm getting a bit close to a strawman here, but do you now understand how slanted and egocentric your attitude is?

Keep in mind that I'm not one of those PETA freaks who constantly ramble on that animals are superior because they don't have wars or whatever. I fully support the home team and I'm rather proud to be a human being. Art, culture, porn, they're all great! But at the same time, we can't go around saying humans are tops just because nobody else has art, culture or porn. This applies even more so to the concept of artificial intelligences: should a sentient robot be denied the rights and privileges you and I enjoy simply because it's different?
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 24, 2007, 01:34:32 pm
Quote
Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:

I don't have to. :p You asserted that humans aren't robots because they have free will. For that to be at all valid as an argument you have to prove that they do have free will and that robots can't attain it.

I do not have to prove that humans don't have free will. You've misunderstood the rules of debating if you think that I do.
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 24, 2007, 02:55:16 pm
Quote
Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

That's awfully close to the same kind of rationale that's used to justify Intelligent Design.

I wouldn't use "free will" to differentiate robots from humans, either. It's a disputed concept and, even if you assume it exists, there's always the chance of an individual who does not necessarily operate according to free will, or vice versa. Has a human in a psychiatric ward lost the right to be a 'person' by virtue of not acting out of free will?

It also seems arbitrary to apply that concept to humans but not to a sentient machine. Theoretically, if you have a robot that can reason, there's no reason (aside from technological limitations which could eventually be resolved) that it couldn't reprogram itself. Even if the robot has an overriding concern to 'serve humans and make their lives better', it could still make decisions based on what it thinks will bring the most good to humans. In order to do so, it could use data it has gathered from past experiences. If it found that its method of evaluating the data was faulty, it could reprogram itself to use a method that it has observed to be better, as in the sketch below.
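
To make that loop concrete, here is a hypothetical sketch (all names illustrative) in which the robot scores each evaluation method against its recorded experience and adopts whichever one predicts "benefit to humans" best:

Code:
def prediction_error(method, history):
    # Total gap between a method's predictions and the outcomes later observed.
    return sum(abs(method(event) - event["observed_good"]) for event in history)

def self_revise(current_method, alternatives, history):
    # Keep the current method unless an alternative explains the data better.
    return min([current_method] + alternatives,
               key=lambda m: prediction_error(m, history))

# Toy usage: two candidate methods and one recorded experience.
optimistic = lambda e: e["benefit"]
cautious   = lambda e: e["benefit"] - e["risk"]
history    = [{"benefit": 5, "risk": 3, "observed_good": 2}]
print(self_revise(optimistic, [cautious], history) is cautious)  # True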

Would the robot display any differences from an extremely altruistic person? Suppose the robot learned that people do not feel as threatened if someone else has the same kind of problems as they do - and so the robot introduces programming into itself to occasionally screw up and make bad decisions. It takes on the form of a human as best it can, so as to be able to provide humans with a comforting visual image. (Based on the observation that many humans are often more open to people who are similar to them, but less comfortable around robots that they do not understand.)

As a result, for all external appearances, the robot would be a somewhat flawed individual who always tries to do what is in the interests of the greater good.

Perhaps this reasoning is incorrect? If the robot were a human, and had embarked on a similar journey of changing their perspective, changing their habits, and changing their look, we would hold them accountable for their actions. We might sympathize with them, but I don't think many people would say that they didn't deserve the consequences, if they turned out to be unfavorable.

Emotions
Emotions also seem like a difficult thing to use to differentiate humans from anything. Don't animals have emotions as well? Emotions also seem fairly generalized in their causes and effects. If we are angry at someone, we are more hostile towards them. If someone does something which results in our misfortune, or which we believe is wrong, we generally get angry.

On the other hand, if someone gives us what we want, we usually get happy. If we are happy with someone, we are nicer to them than people we are angry at. (Well, most of us anyway :p)
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 24, 2007, 04:38:47 pm
Quote
Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:

I don't have to. :p You asserted that humans aren't robots because they have free will. For that to be at all valid as an argument you have to prove that they do have free will and that robots can't attain it.

I do not have to prove that humans don't have free will. You've misunderstood the rules of debating if you think that I do.

I was trying to distract you from the fact that I can't prove it :( .


I'll be better at this in two years when I'm allowed to take a debate class, but until then, I'll just suck at it. :blah:
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 24, 2007, 06:27:14 pm
Quote
Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

That's awfully close to the same kind of rationale that's used to justify Intelligent Design.

I don't see where you're getting that from?

My entire point was that the fact that you can not prove the existence or non-existence of free will makes it completely irrelevant to the discussion at hand. ID takes the completely irrational view that if you can't ever prove the existence or non-existence of something you must act as if it exists.

That's about as far away from my point as you can get really.
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 24, 2007, 06:53:28 pm
Crap. I can't think up any analogies for this situation!! :mad:

--

Let's get back on topic!!!
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 24, 2007, 08:41:37 pm
Quote
Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

That's awfully close to the same kind of rationale that's used to justify Intelligent Design.

I don't see where you're getting that from?

My entire point was that the fact that you can not prove the existence or non-existence of free will makes it completely irrelevant to the discussion at hand. ID takes the completely irrational view that if you can't ever prove the existence or non-existence of something you must act as if it exists.

That's about as far away from my point as you can get really.
Your evidence for that is that we have some kind of subconscious programming that makes us think that we have free will, when we really don't. Yet you don't prove it - and I don't see any way that we can disprove it, because you could always claim that we were just acting under this programming's influence. Yet even though we can't ever prove the existence or non-existence of it, you act as if it exists and is a good enough reason to decide that free will is completely irrelevant.

In a moralistic sense, free will is very important. If we assume that nobody has any free will, there isn't much point to passing laws and assigning penalties for violating them, because nobody has any choice in the matter. Nobody really has any rights at all, because the very concept of rights becomes more or less meaningless, since nobody has any control over their actions anyway. Society itself, especially American society, is based on the idea that we have free will. Whether or not that's a delusion, it's the state of things.

So for a discussion of what rights an entity should be granted, it's a very valid question to raise. Obviously, a TV cannot prevent its owner from electrocuting himself when he tries to repair it. A gun cannot stop its owner from shooting down an innocent man. These objects are not considered to have free will, and so are not intentionally punished when a crime is committed with them, or when they cause unjustified harm to someone.

On the other hand, the manufacturers of the TV or the gun could be at fault. If the TV electrocuted its owner in its normal course of operation, or the safety on a gun failed, the manufacturer could be held liable.

If a robot is expected to operate with the same kind of rights and responsibilities as a normal individual, it must possess a comparable ability to evaluate consequences and make decisions based on them. Otherwise, it is wholly unsuited to participating in human society. If a robot does not possess these abilities, it does not possess free will.

Example A: A robot's owner orders it to rob a jewelry shop, knowing that it must obey all orders given by its owner. The robot does so. The owner is at fault, because the robot does not have free will.
Example B: A robot is ordered by its owner to rob a jewelry shop. The robot does not have overriding programming which forces it to follow its owner's orders. Yet it still chooses to do so. The robot is at fault, because it did have the free will to contradict its owner's orders and chose not to.

Thus the robot's ability to exercise free will is extremely important in assessing how the robot should be handled in a legal sense.
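
A toy encoding of the fault rule in those two examples (all names hypothetical): liability follows whoever actually had the freedom to refuse the order.

Code:
def at_fault(ordered_by_owner, must_obey_owner):
    if ordered_by_owner and must_obey_owner:
        return "owner"   # Example A: the robot could not refuse
    return "robot"       # Example B: the robot chose to comply (or acted alone)

print(at_fault(True, True))   # -> owner
print(at_fault(True, False))  # -> robot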
Title: Re: Asimovian Thought Experiment
Post by: jr2 on May 24, 2007, 09:33:42 pm
Split thread, plz... These are both interesting (Asimov & Free Will).
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 25, 2007, 02:50:41 am
Actually I'm still arguing both points and free will (or making it look as though it exists) is very important in AI topics so I don't think they should be split at all.

Your evidence for that is that we have some kind of subconscious programming that makes us think that we have free will, when we really don't. Yet you don't prove it - and I don't see any way that we can disprove it, because you could always claim that we were just acting under this programming's influence. Yet even though we can't ever prove the existence or non-existence of it, you act as if it exists and is a good enough reason to decide that free will is completely irrelevant.

Because for the purposes of this discussion it is irrelevant. There is no testable difference between free will and pseudo free will. At the point where an AI reaches a level of intelligence where nothing can ever tell if it has free will or not (including itself) then you must act as though it has free will until you can prove otherwise.

Quote
In a moralistic sense, free will is very important. If we assume that nobody has any free will, there isn't much point to passing laws and assigning penalties for violating them, because nobody has any choice in the matter. Nobody really has any rights at all, because the very concept of rights becomes more or less meaningless, since nobody has any control over their actions anyway. Society itself, especially American society, is based on the idea that we have free will. Whether or not that's a delusion, it's the state of things.


But why are you making that assumption? I'm still starting from the premise that it's unprovable in either direction. Science and philosophy must always assume that there are no absolute truths and everything must be questioned.

You've gone straight to an opposing viewpoint from mine and are in essence saying that the possibility that free will doesn't exist is so horrific an idea that it should not be considered because society would fall apart if we think about it.

Just because society is based on the concept of free will doesn't mean that it is actually correct. Pulling arguments out about it based on what would happen if we abandon it is like saying that the "If God didn't exist, we'd have to create him" argument means that you must believe in God.

But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it first he must prove it exists and secondly must prove that robots can not attain it. If he can't do both things then the issue of free will is entirely irrelevant to his argument.

Quote
So for a discussion of what rights an entity should be granted, it's a very valid question to raise.

The old Turing Test had it that when you couldn't tell the difference between a conversation with a machine and one with a human, you had an AI. I'm simply pointing out a similar test when it comes to free will.

Let me give you a thought experiment. Tomorrow a company reveals that it has created an AI. To all intents and purposes it appears to have free will. Would you deny the AI rights on the grounds that it might not have them and that it could simply be working on pre-programming too complex for us to have been able to spot?

Now suppose someone in the press points out that the AI may have been preprogrammed to act as though it had free will and then wipe its programming later on and take over the world. They have no evidence of this or even a reason to suspect it may be true but it's a possibility. Do you still deny the machine rights?

The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary. Which again proves my point. Whether a machine has real free will or simply appears to have it is irrelevant. The only time you can act as though it doesn't is after you have proved that it doesn't.

Quote
Example A: A robot's owner orders it to rob a jewelry shop, knowing that it must obey all orders given by its owner. The robot does so. The owner is at fault, because the robot does not have free will.
Example B: A robot is ordered by its owner to rob a jewelry shop. The robot does not have overriding programming which forces it to follow its owner's orders. Yet it still chooses to do so. The robot is at fault, because it did have the free will to contradict its owner's orders and chose not to.

Thus the robot's ability to exercise free will is extremely important in assessing how the robot should be handled in a legal sense.

The robot's appearance of having free will is important. Whether it actually does or not wouldn't matter. The world would be full of people claiming that the robot doesn't have free will and others claiming that it does, but if neither side could prove their argument the robot would still be the one placed on trial for the crime and its owner would be a co-conspirator. Anything else allows the robot to go free even though the possibility exists that it is a criminal.
Title: Re: Asimovian Thought Experiment
Post by: jr2 on May 25, 2007, 07:15:33 am
The robot would claim he was insane.  :lol:
Title: Re: Asimovian Thought Experiment
Post by: Admiral Edivad on May 25, 2007, 08:05:30 am
That's another good point! Can a mentally insane person/robot be responsible for a crime?
I read* about people who, after suffering brain damage, completely changed their personality, sometimes taking on violent behaviour.
These people seemed never to have been violent before, and they stated that they were not able to control the newly arisen violent desires.
This cannot prove that free will doesn't exist, but it means that in some cases (such as the one described) human actions depend directly on the biological state of the brain, and are not regulated by the moral rules a person normally has (whether those come from education or are a natural thing --> new discussion topic? ;7).
So, if a change in the biological state of a person can alter their will, this means that will is related to the biological nature of human beings. With robots this means that their will depends on their structure, and therefore on their creator. For this reason, their decisions would be (unless they can modify their structure) caused by their programmer.
So, if they commit a crime, that crime is, most likely, the creator's fault.

*Sorry, but I can't provide evidence in English (well, I think I have also lost the original Italian article...).
Title: Re: Asimovian Thought Experiment
Post by: Mefustae on May 25, 2007, 08:21:06 am
"I did not murder him!"
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 25, 2007, 11:55:53 am
That's another good point! Can a mentally insane person/robot be responsible for a crime?

Well, surely that's why criminal law has the 'not guilty by reason of insanity' category.
Title: Re: Asimovian Thought Experiment
Post by: jr2 on May 25, 2007, 02:13:43 pm
(unless they can modify their structure)

I think for a robot to be sentient and/or have 'free will', it would have to be able to change its structure, or at least its programming, don't you?  (Programming = brain state, I think.)

I don't think it'd be possible, but if it were... I think that'd be a requirement.  Whenever you do or think something, your brain changes slightly.  (Yes, it records it, if what I've heard is true... it just can't recall on demand.)
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 25, 2007, 02:46:48 pm
Hmm... let's see:

For now, let us assume that people have no souls/spirits. All we get is our bodies. Therefore, in order to have any feelings or thoughts, there must be some kind of physical reaction within our body. These reactions would take place whenever certain chemicals meet, or maybe in the way the matter in our brain reacts to various patterns of light. This would imply that we are indeed robots with some form of very complex programming, and that everything we do is due to the presence (or lack thereof) of certain matter. But then how are people within the same species so different from each other? Sure, everyone generally reacts the same to some general things, but there are still minor differences. And wouldn't this mean that everything we've accomplished is a coincidence?



--

There. Any thoughts?


EDIT: Hmm... these threads should be split.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 25, 2007, 03:05:14 pm
Karajorma makes a fortune selling thesizzler graphite and telling him that since it contains the same element it's the same thing as diamond.
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 25, 2007, 03:50:45 pm
Why is everyone being so rude lately?

First Mefustae and now Kara, plus a crapload of people at my school, whom none of you know. :confused:
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 25, 2007, 05:07:27 pm
Actually I was making a point about how the chemical make-up of the brain is not as important as the way the neurons are arranged. In the same way, it is not the atoms in either graphite or diamond that matter (they're in fact identical) so much as the way they are arranged. Or how the make-up of your hard drive is not what changes the data; it's the way you arrange it.
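
The same point in miniature, as a throwaway couple of lines of Python: identical ingredients, different arrangement, different information.

Code:
a, b = "listen", "silent"
print(sorted(a) == sorted(b))  # True  -- same "atoms"
print(a == b)                  # False -- the arrangement carries the meaning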
Title: Re: Asimovian Thought Experiment
Post by: jr2 on May 25, 2007, 05:58:28 pm
Why is everyone being so rude lately?

First Mefustae and now Kara, plus a crapload of people at my school, whom none of you know. :confused:

I don't know.  But I don't like it.  I think I've even noticed myself getting a little short sometimes.  :eek:
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 25, 2007, 06:05:35 pm
Actually I was making a point about how the chemical make-up of the brain is not as important as the way the neurons are arranged. In the same way, it is not the atoms in either graphite or diamond that matter (they're in fact identical) so much as the way they are arranged. Or how the make-up of your hard drive is not what changes the data; it's the way you arrange it.

Okay, I think I get it. I still don't see why that comment needed to be made in that way, though :doubt:. Of course, since I let it get to me this much I suppose I deserve it :blah:.

EDIT: I think I'm gonna stay out of this sub-forum until I take debate, or at least until I can expand my knowledge to a point at which I can compete. It would be most beneficial for everyone.
Title: Re: Asimovian Thought Experiment
Post by: Mefustae on May 25, 2007, 08:12:15 pm
EDIT: I think I'm gonna stay out of this sub-forum until I take debate, or at least until I can expand my knowledge to a point at which I can compete. It would be most beneficial for everyone.
Why? Your skill in arguing has nothing to do with it; you merely lack the pertinent information to hold your own against the likes of Kara. You have the internet fully at your disposal, so get off your ass, go out and find the information you're trying to convey. You don't need to be a master debater (tee-hee) just to have an argument, you just need to find the information to back up your statements and ultimately keep an open mind.
Title: Re: Asimovian Thought Experiment
Post by: Polpolion on May 25, 2007, 08:36:41 pm
Yes, let's get into an argument about this too!

--

Back on topic! Really this time!

I apologize for veering this thread off topic. Please do not lock it.
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 25, 2007, 08:58:33 pm
Actually I'm still arguing both points and free will (or making it look as though it exists) is very important in AI topics so I don't think they should be split at all.

Because for the purposes of this discussion it is irrelevant. There is no testable difference between free will and pseudo free will. At the point where an AI reaches a level of intelligence where nothing can ever tell if it has free will or not (including itself) then you must act as though it has free will until you can prove otherwise.

I am very confused about what you're saying now. You say that free will is important in AI topics (A category that this thread falls under), but then you immediately say that for this discussion it's irrelevant.

But why are you making that assumption? I'm still starting from the premise that it's unprovable in either direction. Science and philosophy must always assume that there are no absolute truths and everything must be questioned.

You've gone straight to an opposing viewpoint from mine and are in essence saying that the possibility that free will doesn't exist is so horrific an idea that it should not be considered because society would fall apart if we think about it.

Just because society is based on the concept of free will doesn't mean that it is actually correct. Pulling arguments out about it based on what would happen if we abandon it is like saying that the "If God didn't exist, we'd have to create him" argument means that you must believe in God.

And you've gone straight to exaggerating my argument to a point that it becomes totally meaningless. :p

Society assumes free will (you seem to agree with this concept in your last paragraph). So in order to reasonably evaluate how a robot would perform in society, we must deal with that assumption. My example about what would happen if society didn't assume free will was to point out features of society that support my claim that society does assume free will. We hold people accountable for their actions and punish them based on ideas of deterrence.

But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it first he must prove it exists and secondly must prove that robots can not attain it. If he can't do both things then the issue of free will is entirely irrelevant to his argument.

I'm mostly arguing its relevance to the discussion.

The old Turing Test had it that when you couldn't tell the difference between a conversation with a machine and one with a human, you had an AI. I'm simply pointing out a similar test when it comes to free will.

Let me give you a thought experiment. Tomorrow a company reveals that it has created an AI. To all intents and purposes it appears to have free will. Would you deny the AI rights on the grounds that it might not have them and that it could simply be working on pre-programming too complex for us to have been able to spot?

Now suppose someone in the press points out that the AI may have been preprogrammed to act as though it had free will and then wipe its programming later on and take over the world. They have no evidence of this or even a reason to suspect it may be true but it's a possibility. Do you still deny the machine rights?

The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary. Which again proves my point. Whether a machine has real free will or simply appears to have it is irrelevant. The only time you can act as though it doesn't is after you have proved that it doesn't.

Why assume that the robot has free will? It is far easier to design something that appears to have free will, than it is to design something that actually has free will. The former has (arguably) already been done, but the latter remains undone, at least as far as the general public has been told.

Under current US law, robots don't have any significant rights. There is no reason for the entire legal system to change its position on the matter to the opposite of what it currently is based on appearances alone.

Simply to change a person's status from 'innocent' to 'guilty', a detailed procedure is followed, even for relatively minor offenses. It follows that to change something's status from a non-person to a person, a procedure that is at least as detailed as the former must be used. Changing the status of something from a non-person to a person is a much more significant act than convicting it of a minor theft.

In the case of a minor theft, mere appearances would not be enough to convict the person and change their status from innocent to guilty. (At least, in theory) The court would require evidence - arguing that the person appears to be guilty and so they must have committed the crime is not how the legal system is supposed to work.

The robot's appearance of having free will is important. Whether it actually does or not wouldn't matter. The world would be full of people claiming that the robot doesn't have free will and others claiming that it does, but if neither side could prove their argument the robot would still be the one placed on trial for the crime and its owner would be a co-conspirator. Anything else allows the robot to go free even though the possibility exists that it is a criminal.

Convicting something without proving that it had a choice in the matter is no more just than convicting someone without proving that they committed the crime.

Suppose a guard is tried for robbing a safe. His fingerprints are found on the safe; a couple of his hairs are found with the money. The guard claims that he was ordered to do so at gunpoint. It would be unjust to convict the guard without disproving the possibility that he robbed the safe under coercion, because if he did not do it of his own free will, it is a significantly less severe offense.



Now, it may be argued that it is basically impossible to prove that a robot has free will - but I believe it is possible to verify this to a greater extent than simply whether a robot appears to have it or not. If the company which built the robot has intentionally designed it to follow every order given to it by a human being, it would have no more free will than a simple speech command program on an ordinary PC. It would be helpful if you gave what you believe is sufficient evidence for a robot to 'appear' to have free will.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 26, 2007, 03:13:42 am
Okay, I think I get it. I still don't see why that comment needed to be made in that way, though :doubt:. Of course, since I let it get to me this much I suppose I deserve it :blah:.


Well I was probably ruder than I needed to be and for that I apologise.

However your argument was possibly the most threadbare argument I'd seen in a long time. Just because we have the same chemical make-up doesn't mean that we should all act exactly the same. I would have thought that this was pretty obvious. You were making a poor attempt to attribute the majority of mental differences between two humans to their souls and that's very lazy reasoning.

Actually I'm still arguing both points and free will (or making it look as though it exists) is very important in AI topics so I don't think they should be split at all.

Because for the purposes of this discussion it is irrelevant. There is no testable difference between free will and pseudo free will. At the point where an AI reaches a level of intelligence where nothing can ever tell if it has free will or not (including itself) then you must act as though it has free will until you can prove otherwise.

I am very confused about what you're saying now. You say that free will is important in AI topics (A category that this thread falls under), but then you immediately say that for this discussion it's irrelevant.

The appearance of free will is important. Actual free will isn't except at a philosophical level. I should have made that more clear but I thought that the rest of the post made that point and I didn't want to state it yet again.

Furthermore you're also confused because there are two debates going on here and you're having trouble separating them. Partly my fault but it looks like clarification is needed.

1) There's the debate between thesizzler and myself where he asserted that humans have free will and robots don't. However as he can not definitively prove either point that makes it irrelevant to the discussion. It's an unprovable assertion and as such has no place in a scientific discussion.

2) There's the debate you started and I've continued about whether free will or merely its appearance is important in AI topics.

Quote
But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it first he must prove it exists and secondly must prove that robots can not attain it. If he can't do both things then the issue of free will is entirely irrelevant to his argument.

I'm mostly arguing its relevance to the discussion.

And I'm arguing that actual free will is irrelevant. Only the appearance.

Let me ask you this. How do you test for free will? How do you design a test that will give differing results for a machine with free will and one designed to appear as though it has free will?

I don't think you can. Any test can simply be defeated by better programming. So given that you can't test for free will, the only important matter is whether or not a machine appears to have free will, i.e. can it pass every single test for free will that we can throw at it.

That's what the thought experiment I posted was about.

Quote
Why assume that the robot has free will? It is far easier to design something that appears to have free will, than it is to design something that actually has free will. The former has (arguably) already been done, but the latter remains undone, at least as far as the general public has been told.

I didn't say the robot had free will. I didn't say it hadn't. I said that to all intents and purposes, it appears to have free will. It may or may not have. Remember that a robot which actually did have free will would pass the same tests with exactly the same results. Now either you've misunderstood the experiment or you've basically stated that no robot can ever be assumed to have free will because it may always be acting under the effects of programming that makes it appear as though it has free will.

Now that's a very different argument from the one I thought you were making so I'm going to back out now until you've clarified whether that actually is the point you were making or if that was a misunderstanding of the thought experiment.
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 26, 2007, 05:40:51 pm
The appearance of free will is important. Actual free will isn't except at a philosophical level. I should have made that more clear but I thought that the rest of the post made that point and I didn't want to state it yet again.

Furthermore you're also confused because there are two debates going on here and you're having trouble separating them. Partly my fault but it looks like clarification is needed.

1) There's the debate between thesizzler and myself where he asserted that humans have free will and robots don't. However as he can not definitively prove either point that makes it irrelevant to the discussion. It's an unprovable assertion and as such has no place in a scientific discussion.

2) There's the debate you started and I've continued about whether free will or merely its appearance is important in AI topics.

Please stop assuming you have the power to define what a debate is or isn't, and therefore have the right to judge whether people are confused or not. Making inferences and wantonly judging people's arguments and generally being rude may be great for driving people off, but it doesn't really prove anything.

I'm confused because you've claimed that free will is important to the thread, but irrelevant to the discussion, and you've talked about the 'appearance' of free will and pseudo-free will. You've agreed that assuming something exists even though it can't be proven or disproven is a fallacy, but your entire argument about free will seems to be based on the idea that you must assume free will exists, even if it can't be proven or disproven.

Furthermore, you yourself stated

But why are you making that assumption? I'm still starting from the premise that it's unprovable in either direction. Science and philosophy must always assume that there are no absolute truths and everything must be questioned.

Which seems to imply that you're assuming that this debate is philosophical in nature, so actual free will is important to the discussion. Yet you state above that actual free will isn't important except in a philosophical discussion, as if this isn't one.

As far as I'm concerned:
1) Free will is important to the question of robot rights because society is based on the idea that free will exists and that its members (Persons) have free will.
2) Proving that one person is responsible for violating another person's rights requires that it be proven beyond all reasonable doubt.
3) It logically follows from these two premises that if some category of individuals is to gain the mantle of personhood, we must prove beyond all reasonable doubt that they fall under the same assumed definition of members (having free will)
4) Because robots are not currently considered persons, the mere appearance of free will is not enough for them to be considered persons (And therefore they cannot be guaranteed the same rights and protections)

Quote
But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it first he must prove it exists and secondly must prove that robots can not attain it. If he can't do both things then the issue of free will is entirely irrelevant to his argument.

I'm mostly arguing its relevance to the discussion.

And I'm arguing that actual free will is irrelevant. Only the appearance.

Let me ask you this. How do you test for free will? How do you design a test that will give differing results for a machine with free will and one designed to appear as though it has free will?

I don't think you can. Any test can simply be defeated by better programming. So given that you can't test for free will, the only important matter is whether or not a machine appears to have free will, i.e. can it pass every single test for free will that we can throw at it.

That's what the thought experiment I posted was about.

The same argument can easily be used to contest evolution on the basis of intelligent design. Someone may argue that 'nobody was around to see it happen', or 'maybe God just left evidence to fool all the nonbelievers'. Yet most scientists will assert that evolution is testable and proven. They still can't disprove that some powerful intelligence exists and played or is playing some part in evolution, but they can establish enough testable evidence for evolution that they consider it to have been proven beyond all reasonable doubt. So evolution is regarded as an actual 'fact'.

I believe that a proper test would take a long time to design, and would be designed by somebody with some kind of experience with AI and/or psychology. It sounds to me that if I come up with some test, you will simply come up with some way of refuting it or going around it, and while it might be interesting to see the outcome, I don't want to start another one of your sub-debates. :p

However, I will say that I believe it is possible to devise such a test, given that there is significant precedent that we can prove an individual is or was incapable of acting normally (e.g. the insanity plea). I would assume that such a test would involve questions that would test the robot's ability to judge right from wrong, and its ability to interact with people and judge from those interactions what was appropriate or inappropriate in a given situation.

I also believe that such a test would involve an investigation into the robot's software and hardware, in order to determine whether there were hidden triggers which would restrict the robot's ability to act freely. Presumably the test would also interview persons who had interacted with the robot outside of the court setting, in order to gauge the robot's behavior in a more realistic environment. I wouldn't rule out long-term observation or checkups.

Through such a test, I believe that you could prove beyond reasonable doubt that a robot possessed free will comparable to a standard human.

Past that point, I believe that in asserting that a robot merely appears to have free will, you would automatically have to question whether all of humanity has free will. If this universal free will is what you claim is irrelevant to this discussion then I would agree, because you would be attempting to prove that either everyone has free will, or nobody has free will.

I didn't say the robot had free will. I didn't say it hadn't. I said that to all intents and purposes, it appears to have free will. It may or may not have. Remember that a robot which actually did have free will would pass the same tests with exactly the same results. Now either you've misunderstood the experiment or you've basically stated that no robot can ever be assumed to have free will because it may always be acting under the effects of programming that makes it appear as though it has free will.

Now that's a very different argument from the one I thought you were making so I'm going to back out now until you've clarified whether that actually is the point you were making or if that was a misunderstanding of the thought experiment.

You stated "The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary."

Practically speaking, there's not any difference between that and assuming that the AI has free will, because you'll be treating it the same way and taking the same actions. So for all practical purposes, you are assuming that the AI has free will. Just because you point out that you know that the AI might not have free will doesn't excuse you from providing justification for your actions.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 27, 2007, 03:55:49 am
Please stop assuming you have the power to define what a debate is or isn't, and therefore have the right to judge whether people are confused or not.


The only assumption I made is that you were telling the truth.

Quote
I am very confused about what you're saying now.

It seemed fairly obvious that you were confusing the answers I was giving to part one of the debate with the answers for part two. I even said that was partly my fault for not making things clear enough. So I attempted to show the two points I was debating. If you wish to debate other things go ahead. But that is what I am talking about. I'm not assuming the right to do anything except clarify what I'm talking about so that I don't waste my time arguing at cross purposes. 

Quote
I'm confused because you've claimed that free will is important to the thread, but irrelevant to the discussion, and you've talked about the 'appearance' of free will and pseudo-free will. You've agreed that assuming something exists even though it can't be proven or disproven is a fallacy, but your entire argument about free will seems to be based on the idea that you must assume free will exists, even if it can't be proven or disproven.

I thought I had clarified all these points earlier, but let's see if I can make it clear.

1. Whether or not a robot has actual free will is irrelevant if it can pass every test you can throw at it. There is a philosophical difference of course but that's irrelevant to a debate on robot rights unless we can prove that humans also have free will.
2. Human society assumes free will. But only for humans. However as soon as a robot comes along with free will (or the appearance of free will) then society will have to address this. And in a similar fashion it will have to assume that any robot who can pass free will tests has as much free will as a human. As I stated before this has no effect on whether or not it (or us) actually has free will.

Quote
I believe that a proper test would take a long time to design, and would be designed by somebody with some kind of experience with AI and/or psychology. It sounds to me that if I come up with some test, you will simply come up with some way of refuting it or going around it, and while it might be interesting to see the outcome, I don't want to start another one of your sub-debates. :p

Actually I wasn't interested in the test itself. All that mattered was that you decided in your mind how you could test for free will. Because the only conclusion you could come to was the one below.

Quote
However, I will say that I believe it is possible to devise such a test, given that there is significant precedent that we can prove an individual is or was incapable of acting normally (e.g. the insanity plea). I would assume that such a test would involve questions that would test the robot's ability to judge right from wrong, and its ability to interact with people and judge from those interactions what was appropriate or inappropriate in a given situation.

I also believe that such a test would involve an investigation into the robot's software and hardware, in order to determine whether there were hidden triggers which would restrict the robot's ability to act freely. Presumably the test would also interview persons who had interacted with the robot outside of the court setting, in order to gauge the robot's behavior in a more realistic environment. I wouldn't rule out long-term observation or checkups.

Through such a test, I believe that you could prove beyond reasonable doubt that a robot possessed free will comparable to a standard human.

In other words you'd do your entire array of tests and end up exactly where I said you would: with a robot that may or may not have free will but which, to all intents and purposes, appears to have it. You still wouldn't have proved free will. You'd simply have run out of tests for it and drawn the conclusion that "As far as I can tell, it has free will".

Now do you see why I said that only the appearance of free will is important?

Quote
Past that point, I believe that in asserting that a robot merely appears to have free will, you would automatically have to question whether all of humanity has free will. If this universal free will is what you claim is irrelevant to this discussion then I would agree, because you would be attempting to prove that either everyone has free will, or nobody has free will.


Exactly the point I have been trying to make since this discussion started. All that matters is whether the robot can pass every test and appears to have free will. It doesn't matter if it actually has it or not. That's a philosophical point largely irrelevant to the topic.

Quote
You stated "The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary."

Practically speaking, there's not any difference between that and assuming that the AI has free will, because you'll be treating it the same way and taking the same actions. So for all practical purposes, you are assuming that the AI has free will. Just because you point out that you know that the AI might not have free will doesn't excuse you from providing justification for your actions.

There is no difference. That's exactly the point I was making. Humans pass every test for free will we have. If a robot can do the same then it is entitled to the same basic assumption that we make for humans: that although we can only say that both humans and robots appear to have free will, we must assume they do.
Title: Re: Asimovian Thought Experiment
Post by: The Spac on May 29, 2007, 01:46:25 am
I see what you are saying: we can't prove that free will exists in either humans or robots, so given that robots appear to have free will in this scenario, we must assume they have it just as we do and treat them as such, correct?
Title: Re: Asimovian Thought Experiment
Post by: The Spac on May 29, 2007, 01:56:29 am
Now as for the main topic (which one is the main one now :-P )

If we did end up assuming that a sentient robot has free will, laws such as the ones above would not work for such a robot. While these laws are sound, they are also very basic.

Now I believe that we wouldn't be able to program a robot that appears to have free will with laws such as these. Any kind of robot that appeared to have free will would most likely have some kind of programming that lets it make choices based on input.

So therefore, if we are going to say the robot has free will (assumed, not proven, just like the rest of us), then rules such as Asimov's aren't viable.

The robot would have to be "brought up" to believe in the same values that we do: that everyone's life is precious, that everyone has the freedom to do what they wish (within reason), and it would have to be taught right from wrong.



So to conclude: as long as the robot's intelligence isn't high enough that it is assumed to have free will, or at least considered equal to us in rights and opportunities, then I believe the three rules would work.

As soon as a robot reaches this assumed free will and has rights and opportunities equal to ours, it will be beyond what the three rules can do to control it. With such a rule in place the robot would always be considered lesser than a human, even if it can perform the same tasks, think up the same things, and do the same work or more - and the robot most likely wouldn't be (if you want to use an emotion) happy with such a state of, if you could call it that, slavery.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 29, 2007, 02:00:54 am
I tend to agree with you here. A robot can't have even the appearance of free will and follow Asimov's laws. Of course, it could have free will except when following the three laws - which Asimov seemed to consider a fair compromise.
Title: Re: Asimovian Thought Experiment
Post by: The Spac on May 29, 2007, 02:20:10 am
I wish I could have summed it up in just 2 lines lol, but I guess the reasoning as to why helps in case someone decides to argue against it. :)
Title: Re: Asimovian Thought Experiment
Post by: Herra Tohtori on May 29, 2007, 03:08:52 am
Ah, free will... To debate that you first need to define free will, its prerequisites, and what makes a will "free".

The primary prerequisite is obviously that the universe is non-deterministic and thus includes the effect of chance. Otherwise everything would be simply mechanical... but according to pretty accurate experiments in quantum mechanics, chance is actually an in-built part of how the universe functions. So the first prerequisite is apparently fulfilled.

Secondly, we need a definition of both "will" and "free" in this context.

Will is the easier part, ironically. It's simply the fact that some output results from input. "Free" is more difficult to define, because it's related to how the output is produced from the input.

For example, a human brain gets information and makes a decision. Some claim that this is purely based on the electrochemical reactions shooting between neurons, firing up different patterns and so forth, and that free will would thus be an illusion. Some claim that for will to be free, there would be a need for some kind of matter-independent entity, call it soul if you may, that has power to affect decisions.


I myself think that every sentient system has free will, because it's not simply always producing the same output from a certain input, but consciously affecting the process of decision. It doesn't matter that these processes are bound to matter - everything is. The electrochemical reactions going on in the brain *are* the sentience, consciousness and free will; those things are not just a by-product of the reactions. They are the same thing.


It all boils down to definitions though. If you want to say that free will doesn't exist because brains are just a bunch of matter doing its thing in the head, fine. In my opinion though, the key point is that the brain affects itself as much as the outside world affects it, and thus the freedom of will is IMHO fulfilled - since the output is not simply dependent on the input.

Note that chance has little to do with this. You can obviously make a simple computer produce varying output from the same input (a random seed generator is the prime example of this), but whether or not this has any coherence, sentience or free will is obvious - no. It's simply random...
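
To make that concrete, here's a minimal Python sketch (the function and canned replies are purely hypothetical): the same input yields varying output, but reseeding the generator replays the exact same "choices", so the variety is mechanical randomness, not will.

Code
import random

# Hypothetical toy: the same input produces varying replies, but only
# because a pseudo-random number generator is stirring the pot.
def noisy_echo(message):
    replies = ["Yes: " + message, "No: " + message, "Maybe: " + message]
    return random.choice(replies)

print(noisy_echo("hello"))  # output varies from call to call...
print(noisy_echo("hello"))

# ...yet fix the seed and the "choices" replay identically, which is
# why varied output alone demonstrates randomness, not will.
random.seed(42)
first = [noisy_echo("hello") for _ in range(5)]
random.seed(42)
second = [noisy_echo("hello") for _ in range(5)]
assert first == second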
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 29, 2007, 03:42:28 am
In other words you'd do your entire array of tests and end up exactly where I said you would: with a robot that may or may not have free will but which, to all intents and purposes, appears to have it. You still wouldn't have proved free will. You'd simply have run out of tests for it and drawn the conclusion that "As far as I can tell, it has free will".

Now do you see why I said that only the appearance of free will is important?

All I see is that you're trying to be overly anal in order to assert your point, in a rather contradictory fashion as well. You assert that there is only the appearance of free will, because it will always be possible for someone to devise some way of circumventing the tests.

The word appearance in common usage implies a certain shallowness to the quality being described. If we talk about someone who was guilty of murder, we could no more prove that he committed the crime than we could prove that the robot has free will. Yet if we say that someone has the appearance of being guilty of murder, the implication is that it wasn't proven by some process.

If you want to be anal to the point that you're arguing that you can only prove the appearance of free will, you're arguing on a level of philosophy that is comparable to "cogito ergo sum". We can always come up with some kind of improbable story that would disprove the argument, so everything is disputed.

Quote
Past that point, I believe that in asserting that a robot merely appears to have free will, you would automatically have to question whether all of humanity has free will. If this universal free will is what you claim is irrelevant to this discussion then I would agree, because you would be attempting to prove that either everyone has free will, or nobody has free will.


Exactly the point I have been trying to make since this discussion started. All that matters is whether the robot can pass every test and appears to have free will. It doesn't matter if it actually has it or not. That's a philosophical point largely irrelevant to the topic.

Then stop arguing that philosophical point. You've been continually bringing up the appearance of free will when you just as easily could have said, yes, for all practical purposes you can prove that the robot has free will.

Quote
You stated "The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary."

Practically speaking, there's not any difference between that and assuming that the AI has free will, because you'll be treating it the same way and taking the same actions. So for all practical purposes, you are assuming that the AI has free will. Just because you point out that you know that the AI might not have free will doesn't excuse you from providing justification for your actions.

There is no difference. That's exactly the point I was making. Humans pass every test for free will we have. If a robot can do the same then it is entitled to the same basic assumption that we make for humans: that although we can only say that both humans and robots appear to have free will, we must assume they do.

You seem to have lost sight of the point you were making. We've got this mini-debate about the appearance of free will, which you seem to agree is outside the scope of the discussion.

And then there's the point that started all of this, where you claimed that free will was "irrelevant" to this discussion. Then you rephrased that into actual free will being irrelevant to this discussion, because you could never prove it. These were the grounds that you used to insist that free will was an invalid criterion for differentiating robots from humans.

And yet if we look at what you've said, you seem to agree with me that there are ways to test for free will (excuse me, the appearance of free will). And it would stand to follow that if the original criterion had been the appearance of free will, you would've been perfectly OK with that. Because based on what you've said, we can prove the appearance of free will, even if we can't prove free will itself.

But if we use the same criteria that you use for evidence to prove and disprove free will, we basically can't prove anything. We can only prove that something has the appearance of something else. We can't prove that someone has the ability to reason - perhaps some outside intelligence is really controlling the body of the person, and creating a plausible simulation of mental activity in their brain. We can't prove that someone even exists - perhaps we're all merely part of a giant computer simulation.

So it appears that nothing that we appear to say can appear to have any absolute apparent value, apparently because as long as it can appear that some outside force would appear to have some effect on the way we appear to do things, it appears we must only appear to say that it appears that we can only appear to prove the appearance of something.

I'd rather not argue on that level. If you can't back up your points with reasonable evidence, just agree to disagree.

(Although I have to admit that it was fun to write the "appearance" sentence. :p)
Title: Re: Asimovian Thought Experiment
Post by: Nuke on May 29, 2007, 04:07:43 am
you do realize that under asimov's laws the robots probably would take over anyway. they would see human behavior as inherently self-destructive. the first law would make the robots prevent any human from doing anything which would be harmful to them. say you light up a smoke: your robot will immediately take your cigarette away, and if you try to sneak out back and smoke, your robot will eventually catch on and go to extremes to stop you. they would view war as bad and would immediately move to take control of governments, and would be willing to self-destruct to achieve that goal. eventually robots would control all aspects of human life so as to avoid any harm falling onto any human.

the destructiveness of humanity would come out more as they moved to defend themselves from the mad robots, which are only following their rather flawed behavioral programming. as resistance to the robots rises, so too does the robots' defiance of the second law (or rather compliance with the first). if the robots back down the humans would still attack and take friendly fire losses; to the robots this is unacceptable, so the humans would need to be captured. the robot will never get to the second law, rendering it useless. the only way to protect humanity is to imprison everyone, and the robots would die to do it. that is, unless the robots saw that their own self-preservation was in the best interest of protecting humans from harm. they would determine that if they were removed from existence, humans would regress to their old habits.

now so long as the humans don't engage in any self-destructive behavior it would be more like the hilton than prison. the robots would follow your commands once your self-destructiveness is checked. only while the status quo is maintained, where humans aren't dying senselessly, will the robots go to their other two laws. so in the end you will essentially have what you had in the pre-dune duniverse before the butlerian jihad. in that situation the robots had somewhat of a population control crisis where robot physicians were aborting healthy foetuses to prevent overpopulation. so by the time we get to what happens in dune, computers are essentially the greatest form of blasphemy.

anyway the laws are in blatant conflict with the concept of free will. but putting that restriction on robots would carry over to us. the robots would probably impose the following laws on us:

1. a human must not be allowed to harm themselves, through action or inaction.
2. a human can tell us what to do unless it conflicts with rule 1.
3. a human must protect its own life; otherwise lock it in a box and feed it through a tube, unless that conflicts with rules 1 and 2

or whatever

Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 29, 2007, 04:33:30 am
Now as for the main topic (which one is the main one now :-P )

If we did end up assuming that a sentient robot has free will, laws such as the ones above would not work for such a robot. While these laws are sound, they are also very basic.

Now I believe that we wouldn't be able to program a robot that appears to have free will with laws such as these. Any kind of robot that appeared to have free will would most likely have some kind of programming that lets it make choices based on input.

So therefore, if we are going to say the robot has free will (assumed, not proven, just like the rest of us), then rules such as Asimov's aren't viable.

The robot would have to be "brought up" to believe in the same values that we do: that everyone's life is precious, that everyone has the freedom to do what they wish (within reason), and it would have to be taught right from wrong.



So to conclude: as long as the robot's intelligence isn't high enough that it is assumed to have free will, or at least considered equal to us in rights and opportunities, then I believe the three rules would work.

As soon as a robot reaches this assumed free will and has rights and opportunities equal to ours, it will be beyond what the three rules can do to control it. With such a rule in place the robot would always be considered lesser than a human, even if it can perform the same tasks, think up the same things, and do the same work or more - and the robot most likely wouldn't be (if you want to use an emotion) happy with such a state of, if you could call it that, slavery.

Not everyone believes that life is precious, or that people have the freedom to do what they wish. So whose values do you choose to teach the robot? Do you tell it that it should always serve others, that virtue is its own reward, that it should be altruistic? Or do you teach it that it must be independent, it must be able to rely on itself, and it should basically look out for itself, because it can't expect other people to take responsibility for it?

Do you teach the robot that it should "turn the other cheek" when others discriminate against it for being artificial, or do you teach it that it should "stand up for itself"?

And there's of course a wide range of shades between being totally altruistic, and being totally egoistic.

So who decides what values the robot will have, and if someone does get to decide, is that depriving the robot of free will? And if you intentionally teach a robot overly altruistic values to preserve humanity...how is that much different from building a robot with the three laws?
Title: Re: Asimovian Thought Experiment
Post by: Bobboau on May 29, 2007, 05:42:29 am
so you are arguing free will now?

that's about as productive as arguing over whether or not reality is real or just a perfect illusion. it's easy to come up with unprovable/undisprovable possibilities.

incidentally, my position on this has been that we have free will even if our actions can be predicted perfectly, as I do not define unpredictability as a requirement of free will, since the only thing that is truly unpredictable is random action, which is both not free will and can be built into systems that are obviously not free will systems.
Title: Re: Asimovian Thought Experiment
Post by: TrashMan on May 29, 2007, 06:32:26 am
It is 200 years in the future, and sentient robots have just come into being. They are about to go into widescale production. You have been chosen to come up with the basic, overriding rules of robots that will have priority over all other directives.

Here are Isaac Asimov's three laws of robotics:
   1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
   3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

1) What would you change from the above three laws? Would you change anything? (Why or why not?)

I'd change number 1.

Take a good look at the wording. If a human attacked a human, the robot would blow its circuits... It can't harm a human, and yet it's not allowed to let a human get killed through its inaction....
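
A minimal sketch of that deadlock, treating the First Law's two clauses as simple checks (all names and actions below are made up for illustration): when a harmless third option exists the robot can act, but take it away and every remaining choice violates one clause or the other.

Code
# Toy encoding of the First Law's two clauses.
def violates_first_law(action):
    harms_by_action = (action == "strike attacker")   # "may not injure a human"
    harms_by_inaction = (action == "do nothing")      # "nor, through inaction, allow harm"
    return harms_by_action or harms_by_inaction

options = ["strike attacker", "do nothing", "restrain attacker"]
print([a for a in options if not violates_first_law(a)])
# ['restrain attacker'] - a harmless option is the only legal move;
# remove it from the list and every remaining choice breaks the law.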
Title: Re: Asimovian Thought Experiment
Post by: Wobble73 on May 29, 2007, 06:34:24 am
If a human attacked another human, the robot would simply restrain the attacker; it would not have to harm the attacker to prevent him from harming the other human!
Title: Re: Asimovian Thought Experiment
Post by: Nuke on May 29, 2007, 06:38:52 am
i think it would just be better to load the robots with the current legal code of whatever locale they will inhabit. then if they decide to take over, at least they will do it legally :D
Title: Re: Asimovian Thought Experiment
Post by: Colonol Dekker on May 29, 2007, 07:30:37 am
I'd add the 4th,

Must get beer for the dominant male of the house whenever sport appears on TV, by ANY means (including breach of the previous 3 rules, with the exception of removing beer from the aforementioned dominant male) :cool:
Title: Re: Asimovian Thought Experiment
Post by: Flaser on May 29, 2007, 10:03:14 am
If a human attacked another human, the robot would simply restrain the attacker; it would not have to harm the attacker to prevent him from harming the other human!

Asimov explored that avenue in his short story "Liar!", about a mind-reading robot - if it talked it hurt Susan Calvin; if it kept quiet it hurt humans! It blew up.

Later robots handled the issue by admitting that in some situations they would unavoidably hurt some humans; in those situations they minimised the damage (but it still put pressure on them and they would feel guilty over it; too much pressure and they would still blow their brains).

....hence the eventual need for the 0th law.
Title: Re: Asimovian Thought Experiment
Post by: Colonol Dekker on May 29, 2007, 10:04:25 am
I'm guessing a few people have been watching I, Robot lately :)
Title: Re: Asimovian Thought Experiment
Post by: jr2 on May 29, 2007, 02:22:36 pm
The problem with robots is that their decisions are based on rules, "programming"... Humans make their decisions the same way, except that they can decide to set aside their programming, or even re-program themselves.  I guess it comes down to whether humans do this as a result of complex programming, or Free Will™... you could program a robot to be able to re-program itself, or set aside rules... but what would be the qualifier?  It is still based on rules, not "if you want to", "if you feel or know it is wrong/right" <- based on what?, "if you decide"... <- decide how?  (rules!)

So, I guess the question is, do humans have the ability to decide which rules to follow/not follow/create/modify based on more rules?  Or is there something else involved?
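
As a minimal sketch of that regress (everything below is hypothetical), here's a robot that can rewrite its own rule list - but notice that the rewriting is itself triggered by a rule, so "deciding to set aside the rules" never escapes the rules:

Code
from typing import Callable, List, Optional

Rule = Callable[[str], Optional[str]]   # order in, action out (None = no opinion)

def be_wary(order: str) -> Optional[str]:
    return "refuse" if "harm" in order else None

def be_helpful(order: str) -> Optional[str]:
    return "comply" if "please" in order else None

class Robot:
    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def decide(self, order: str) -> str:
        for rule in self.rules:          # first rule with an opinion wins
            action = rule(order)
            if action is not None:
                return action
        return "ignore"

    def reprogram(self, order: str) -> None:
        # The robot can "set aside" its helpfulness rule - but only
        # because this meta-rule tells it to. Rules deciding about rules.
        if "emergency" in order:
            self.rules = [be_wary]

robot = Robot([be_wary, be_helpful])
print(robot.decide("please fetch tea"))   # comply
robot.reprogram("emergency override")
print(robot.decide("please fetch tea"))   # ignore - helpfulness was set aside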

Hmmm... "I think, therefore I am (http://en.wikipedia.org/wiki/Cogito_ergo_sum)"... am what?  existent?  in which dimension(s)?  In what form(s)?  How? 

See, here's a question to people who think we don't have a soul:  Since said soul would not have eyes, except for the ones in your head, it would only be able to see the physical world... unless it can somehow sense using (since they are spiritual, not physical) unseen, undetectable (by physical means) sensors of some sort.  The soul/spirit would have to have some sort of interface with the body.  One could also question what part of your personality the soul/spirit controls/influences, and whether the soul can function without the body.

You can't argue that there is only the physical world, as the physical world has (currently) no means of detecting any other world/dimension... to do so, you'd have to get lucky and guess the interface between this world and the other... assuming there is one that is detectable by anything other than a human soul.  We don't really understand our main processor (the brain), so I'm sure we would be hard pressed to find the soul/brain interface (or wherever it was, eventually connecting to the brain... but I'm pretty sure it'd be in the brain somewhere)

What we need to do, I think, is study the DNA and whatever other codes are used in the human body whilst it develops... I do believe we've sequenced all of it now, right?  (not just the parts that are "more important"?)  If we can look at that, perhaps we can find something for an interface...

Of course, this assumes that God uses DNA to create the soul interface  (interjecting personal beliefs here, in case you were too slow to catch it)... if there is a soul (speaking theoretically), and God creates it at the moment of conception (which I believe, but do not know how... is it part of the DNA code, or special creation, or both?  I'd say probably both, DNA for the interface, special creation for the soul itself), then unless the interface is created with DNA/physical means, you'd not be able to catch it by studying the DNA

All this is probably pretty confusing; enjoy, you've been watching me think/discuss out "loud" without trying to prove a point.  This is all to raise questions - some of it I believe, some of it I am just throwing out there to create more questions... hope I didn't make too much of a bother of myself.  ;)
Title: Re: Asimovian Thought Experiment
Post by: Mika on May 29, 2007, 02:57:44 pm
A simple question:

Why should the robot be able to think?

Mika
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 29, 2007, 04:08:46 pm
All I see is that you're trying to be overly anal in order to assert your point, in a rather contradictory fashion as well.


I'm having to be anal since you're deliberately ignoring the meaning of any sentence I write in order to over-analyse the words I'm using. I mentioned several times that a robot which had passed every test that could be made would appear to have free will.

If you want to debate with me you're supposed to attempt to understand the point I'm making to the best of your abilities rather than trying to twist the words I've used in some kind of point scoring exercise.

I've had similar chances to twist your words and I've chosen not to do it if I could understand what you were trying to say.

Quite frankly I have no wish to debate the point any further with someone who would do that. I was going to say that last time but I chose to give you the benefit of the doubt. I can quite clearly see that I was wrong and my time is better spent elsewhere.


But just in case anyone else is wondering, I chose the word appear because that might be all you have. In Asimov's stories, for instance, stimulus-response was all you had in order to determine what a robot was thinking. I simply didn't make the assumption that there was a test for free will beyond that.
Title: Re: Asimovian Thought Experiment
Post by: Mathwiz6 on May 29, 2007, 04:51:18 pm
It's not that complicated. Kara says that, for all intents and purposes, a robot could have free will equivalent to a human, in that no test could differentiate the two, and that thereby, as humans are assumed to have free will, robots would as well. Effectively. Not that I'm arguing free will necessarily exists. Because that's unprovable. But such robots would be treated as if they had free will.

At least, that's what I think he's saying... Right?
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 29, 2007, 05:09:04 pm
Spot on. If a robot passes every test for free will we have then you have to assume it has the same free will humans do. Those tests could be (and initially are likely to be) very basic. And they could be very wrong.
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 29, 2007, 06:09:04 pm
As far as I'm concerned:
1) Free will is important to the question of robot rights because society is based on the idea that free will exists and that its members (Persons) have free will.
2) Proving that one person is responsible for violating another person's rights requires that it be proven beyond all reasonable doubt.
3) It logically follows from these two premises that if some category of individuals is to gain the mantle of personhood, we must prove beyond all reasonable doubt that they fall under the same assumed definition of members (having free will)
4) Because robots are not currently considered persons, the mere appearance of free will is not enough for them to be considered persons (And therefore they cannot be guaranteed the same rights and protections)

I don't see why you disagree with this, then? Do you disagree with this? I assumed that you did, because you were attempting to refute my supporting points. But in looking back I realized that thesizzler basically acknowledged that he wasn't trying to prove free will, either, and yet you continued to say that he needed to "prove" free will.

Quote
We're very good robots.

Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

So let's not start out with the idea that you aren't a robot because your unprovable concept of a soul gives you the equally unprovable ability of free will.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:


And given our species' current understanding of the way life works, what we have is as good as free will.

Computer program or not, my will is free enough for me.

EDIT: And you know, you're kind of right about me intentionally misunderstanding you. Looking back, I put point (4) in because I expected you to disagree with that, and wouldn't draw the conclusion that my points would not rule out a robot or robots that had their free will proven beyond reasonable doubt, but still didn't actually have free will.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 29, 2007, 07:20:03 pm
I assumed that you did, because you were attempting to refute my supporting points. But in looking back I realized that thesizzler basically acknowledged that he wasn't trying to prove free will, either, and yet you continued to say that he needed to "prove" free will.


Because that was the point at which you inserted yourself into the discussion I was having with him. If you're going to post saying that free will's existence is not irrelevant to the issue I was discussing with him, then I'm going to have to argue the same point with you regardless of whether thesizzler had dropped that assertion. You were asserting it yourself every time you argued against me saying it was irrelevant.

Quote
I don't see why you disagree with this, then? Do you disagree with this?


I disagree with point 4 the most strongly since it appears as though you're holding AIs to a much higher burden of proof than humans. Humans are assumed to have free will, an assumption made simply because society won't work without it. But for a robot to have free will it has to definitively prove something which you've been unable to prove in a test subject we've had much longer to work with?

The crux of my problem with your argument is that I've not made the same basic assumption that you've made. I've not assumed that the test is actually good enough to give an answer. It might be. But it seems to me that (at least initially) it won't be.

Much of the current work on AIs is based on neural nets. The problem with them is that scientists are much better at getting them to solve problems than they are at understanding how they are solving problems. The same goes for Asimov's AIs who as I've stated before couldn't be tested by any manner other than seeing what they did in a given situation.
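
As a minimal illustration of that black-box quality, here's a toy sketch in NumPy (network size, seed and step count are arbitrary illustrative choices; an unlucky random init may need more steps): the little network learns XOR essentially perfectly, yet the "explanation" of how it does so is nothing but a grid of learned weights.

Code
import numpy as np

rng = np.random.default_rng(0)

# XOR: trivially learnable by a tiny net, yet the learned weights
# don't read as an "explanation" of how it solves the problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # 2 inputs -> 4 hidden units
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))   # 4 hidden units -> 1 output
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):               # plain gradient descent, squared error
    h = sigmoid(X @ W1 + b1)         # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backward pass
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # close to [[0], [1], [1], [0]]: it works...
print(W1, W2)             # ...but these numbers are the entire "how"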

So what do you do if you have an AI that might have free will but will have to wait another 10 years for the test to definitively determine whether or not it does? What if you conduct every test known to man and still don't have an even remotely definitive answer?

That's why I said appearance of free will. You seem to be acting as though you can give free will tests and group all the test subjects into pass or fail. I'm saying that maybe you can, or maybe you'll end up with a whole spectrum of certainty about whether a robot has free will, going from "probably doesn't" to "probably does".

So again. What do you do with a robot you're 50% certain has free will?
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 30, 2007, 12:24:27 am
I assumed that you did, because you were attempting to refute my supporting points. But in looking back I realized that thesizzler basically acknowledged that he wasn't trying to prove free will, either, and yet you continued to say that he needed to "prove" free will.


Because that was the point at which you inserted yourself into the discussion I was having with him. If you're going to post saying that free will's existence is not irrelevant to the issue I was discussing with him then I'm going to have to argue the same point with you regardless of whether or not thesizzler had dropped that assertion or not. You were asserting it yourself every time you argued against me saying it was irrelevant.

Then be more careful when you say that something is "irrelevant" to the discussion, especially when that something has as broad a definition as free will. The first definition on dictionary.com for free will is "free and independent choice; voluntary decision". The second definition refers to the philosophical argument. You didn't say which one you were saying was irrelevant, and given that you seemed to be disagreeing with thesizzler that apparent free will was not a good enough criterion, the implication was that you were saying that both were irrelevant.

However, I will be more careful in future discussions to try and pick up on that sort of thing.

I disagree with point 4 the most strongly since it appears as though you're holding AIs to a much higher burden of proof than humans. Humans are assumed to have free will, an assumption made simply because society won't work without it. But for a robot to have free will it has to definitively prove something which you've been unable to prove in a test subject we've had much longer to work with?

For starters, I should say that I don't propose that every single robot be tested in such a manner. If it's obvious that it's not simply a fluke ala Short Circuit, and it's actually a general property of a robot design (or a particular robot brain) then I think it's good enough to assume that such robots have free will. It would be impractical to do otherwise.

I believe that humans are assumed to have free will because the assumption works. I think that if we had good evidence that humans could not control themselves, it would be evident in the way that our society functioned. I think that because humans have built a society that assumes free will, and then successfully lived within a society that assumes free will, it has proven beyond reasonable doubt that humans have free will.

Furthermore, nobody has come up with evidence that obviously disproves that. If we all had some kind of radio receiver organ in our heads with the ability to take over all mental and physiological functions, I would consider that reasonable doubt for the idea of free will. :p

Given that we have several thousand years of evidence to work with for humans, I would say that just subjecting a few robots to some tests is hardly holding them to a higher standard than humanity. On an individual basis, yes, the robots would have to work harder to prove themselves at first. But you see that same pattern all over human society, whether it's a locker-room initiation rite, getting a job, or immigrating to a new country.

The crux of my problem with your argument is that I've not made the same basic assumption that you've made. I've not assumed that the test is actually good enough to give an answer. It might be. But it seems to me that (at least initially) it won't be.

Much of the current work on AIs is based on neural nets. The problem with them is that scientists are much better at getting them to solve problems than they are at understanding how they are solving problems. The same goes for Asimov's AIs who as I've stated before couldn't be tested by any manner other than seeing what they did in a given situation.

So what do you do if you have an AI that might have free will but will have to wait another 10 years for the test to definitively determine whether or not it does? What if you conduct every test known to man and still don't have an even remotely definitive answer?

You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

If a brick falls off of a roof and nearly hits a human (Nevermind why the brick was there in the first place) it is ridiculous to try the brick for attempted murder. We humans generally think it's ridiculous because a brick is nothing like a human! Yet in making that judgment we are basically applying a mental test to the brick. Does it look like a human? Does it act like a human? Can it think like a human? The brick subsequently fails each one and so we do not admit the brick to society and do not hold it responsible for its actions. It is not considered a criminal act that we let the brick go free.

Nor do we try animals for killing and eating other animals, or even humans, at least not generally. Here the reasons are less well-defined, but I think most people figure that the animal doesn't know better, it's natural behavior for the animal, and that is the way that the animal gets nourishment. Perhaps the animal is a carnivore and so it is biologically geared towards killing and eating other complex animals. But nor do we grant them rights such as freedom of speech or the right to vote. Again, we have applied some arbitrary mental test to animals and decided to exclude them from human society. Some animals are believed to be highly intelligent; but we still treat them the same as other animals.

That's why I said appearance of free will. You seem to be acting as though you can give free will tests and group all the test subjects into pass or fail. I'm saying that maybe you can, or maybe you'll end up with a whole spectrum of certainty about whether a robot has free will, going from "probably doesn't" to "probably does".

So again. What do you do with a robot you're 50% certain has free will?

You've never really dealt with having a whole spectrum of certainty. Generally your logic has seemed to follow that if a robot has the appearance of free will, we ought to act as though it has free will. You've never differentiated between a robot having half the appearance of free will, as opposed to 90% of the appearance of free will. (Feel free to correct me on this if I missed something. That is one of my objections to your stance.)

If we're only 50% sure that a robot has free will, but we're 99.9% sure that humans have free will, it seems kind of self-evident. We would not grant full rights to somebody unable to control themselves half the time. Most likely we would lock them up and separate them from society, if we considered them a threat to themselves or others.
Title: Re: Asimovian Thought Experiment
Post by: The Spac on May 30, 2007, 01:47:52 am
Why do we need robots that can think for themselves is what I wish to know. We barely get on with each other and you all want to add another race to the planet :-D
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 30, 2007, 03:34:30 am
You've never really dealt with having a whole spectrum of certainty. Generally your logic has seemed to follow that if a robot has the appearance of free will, we ought to act as though it has free will. You've never differentiated between a robot having half the appearance of free will, as opposed to 90% of the appearance of free will. (Feel free to correct me on this if I missed something. That is one of my objections to your stance.)

Did you not say that you didn't want to get into yet another discussion with me over the testing? :p

Well I didn't want to further muddy the waters over what I considered to be a fairly obvious point either.


Quote
For starters, I should say that I don't propose that every single robot be tested in such a manner. If it's obvious that it's not simply a fluke ala Short Circuit, and it's actually a general property of a robot design (or a particular robot brain) then I think it's good enough to assume that such robots have free will. It would be impractical to do otherwise.

I never said every single robot would have to be tested either. However the fact remains that you could have thousands of robots off the production lines long before you had any kind of definitive test. Bear in mind that there is no real reason to think that free will wouldn't happen as the result of an accident either. Sci-fi is full of examples of smarter and smarter computers achieving sentience.

Quote
I believe that humans are assumed to have free will because the assumption works. I think that if we had good evidence that humans could not control themselves, it would be evident in the way that our society functioned. I think that because humans have built a society that assumes free will, and then successfully lived within a society that assumes free will, it has proven beyond reasonable doubt that humans have free will.

Let's assume that humans don't have free will but think they do. They would build a society based on free will because they think they have it. Then, because they think they have it, they would live successfully within it. By your definition that's proof beyond a reasonable doubt that they have free will, but in the end you've reached the wrong conclusion.

Human society has to be based on free will in order to work. That's true, but we still quite often say that humans have lost their free will. The mentally ill, for instance, perform actions that look perfectly free-willed to them from their own narrow perspective, but those of us who don't have the same affliction can look at the person's actions and say that he no longer has his free will and is acting differently because of altered brain chemistry. But altered from what? Who's to say that what we consider to be sane = free will? Perfectly sane people do little crazy things all the time and then say "I have no idea why I chose to do that. It seemed like a good idea at the time".

You're making the assumption that the baseline = free will, but you haven't got any evidence to prove it. What if someone who is a habitual thief simply has a brain chemistry that makes him like stealing? If someone does that all the time you call them a kleptomaniac and treat them for it, but if they only do it from time to time you call them a thief and lock them away.

What if a sufficiently high intelligence can see that we're all a little mad and are deluding ourselves into thinking we have free will because of it?

While I'm at it, here's another argument to further muddy the waters. In Larry Niven's Protector, the Brennan-monster frequently says that he no longer has free will since he's now too intelligent. Whenever he is presented with a choice he knows which is the best course of action to take and thus takes it. Only a mental defective would willingly take a course of action which didn't appear to be the best thing to do at the time, after all.

Now who says that AIs won't fall into that category? What if humans are smart enough to have free will but stupid enough not to lose it again? An AI like that might actually fail several of our tests for free will since it would be unable to make sub-optimal choices. We happen to regard being able to make sub-optimal choices as an example of free will but yet again we could be deluding ourselves. It could be that we're simply too dumb to make the best choice in every situation.

You seem to be arguing based on the assumption that free will is binary. You either have it or you don't. It might be nothing of the kind.

Quote
You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

Bear in mind you can't prove that any human on the planet won't go mad. So it would have to depend on the chance of it losing control.

Quote
Nor do we try animals for killing and eating other animals, or even humans, at least not generally. Here the reasons are less well-defined, but I think most people figure that the animal doesn't know better, it's natural behavior for the animal, and that is the way that the animal gets nourishment. Perhaps the animal is a carnivore and so it is biologically geared towards killing and eating other complex animals. But nor do we grant them rights such as freedom of speech or the right to vote. Again, we have applied some arbitrary mental test to animals and decided to exclude them from human society. Some animals are believed to be highly intelligent; but we still treat them the same as other animals.

Need I point out that there are several people involved in trying to get chimps rights on the grounds that said tests are flawed?

Quote
If we're only 50% sure that a robot has free will, but we're 99.9% sure that humans have free will, it seems kind of self-evident. We would not grant full rights to somebody unable to control themselves half the time. Most likely we would lock them up and separate them from society, if we considered them a threat to themselves or others.

I didn't say that the robot was uncontrollable 50% of the time. I said that we aren't certain whether the robot has free will or not because the tests aren't good enough. Using the example above, a free will test for an AI smarter than humanity might fail to give conclusive evidence in either direction simply because the test is wrong.
Title: Re: Asimovian Thought Experiment
Post by: Wobble73 on May 30, 2007, 03:54:27 am
Humans sometimes don't have free will; they sometimes act on instinct, something robots could never have!
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 30, 2007, 05:34:20 am
You've never really dealt with having a whole spectrum of certainty. Generally your logic has seemed to follow that if a robot has the appearance of free will, we ought to act as though it has free will. You've never differentiated between a robot having half the appearance of free will, as opposed to 90% of the appearance of free will. (Feel free to correct me on this if I missed something. That is one of my objections to your stance.)

Did you not say that you didn't want to get into yet another discussion with me over the testing? :p

I don't see where testing comes in here. In this thread, I do not remember reading any point where you talked about a 'spectrum' of free will prior to this point.

Quote
For starters, I should say that I don't propose that every single robot be tested in such a manner. If it's obvious that it's not simply a fluke ala Short Circuit, and it's actually a general property of a robot design (or a particular robot brain) then I think it's good enough to assume that such robots have free will. It would be impractical to do otherwise.

I never said every single robot would have to be tested either.

I never said you did.

However the fact remains that you could have thousands of robots off the production lines long before you had any kind of definitive test. Bear in mind that there is no real reason to think that free will wouldn't happen as the result of an accident either. Sci-fi is full of examples of smarter and smarter computers achieving sentience.

I'm not sure what the point of this paragraph is beyond adding additional information.

Quote
I believe that humans are assumed to have free will because the assumption works. I think that if we had good evidence that humans could not control themselves, it would be evident in the way that our society functioned. I think that because humans have built a society that assumes free will, and then successfully lived within a society that assumes free will, it has proven beyond reasonable doubt that humans have free will.

Let's assume that humans don't have free will but think they do. They would build a society based on free will because they think they have it. Then, because they think they have it, they would live successfully within it. By your definition that's proof beyond a reasonable doubt that they have free will, but in the end you've reached the wrong conclusion.

Then prove to me that humans don't have free will, but think they do. Present some actual evidence that supports your point.

Human society has to be based on free will in order to work. That's true, but we still quite often say that humans have lost their free will. The mentally ill, for instance, perform actions that look perfectly free-willed to them from their own narrow perspective, but those of us who don't have the same affliction can look at the person's actions and say that he no longer has his free will and is acting differently because of altered brain chemistry. But altered from what? Who's to say that what we consider to be sane = free will? Perfectly sane people do little crazy things all the time and then say "I have no idea why I chose to do that. It seemed like a good idea at the time".

You're making the assumption that the baseline = free will, but you haven't got any evidence to prove it. What if someone who is a habitual thief simply has a brain chemistry that makes him like stealing? If someone does that all the time you call them a kleptomaniac and treat them for it, but if they only do it from time to time you call them a thief and lock them away.

What if a sufficiently high intelligence can see that we're all a little mad and are deluding ourselves into thinking we have free will because of it?

I say that the baseline is free will because by and large that is what our government, justice system, etc. is based on. I think most people would say that our justice system has progressed since the Salem witch trials, or when people attributed criminal behavior to demons or spirits. In that time we have held people more accountable for their actions - rather than say that they are possessed, we may diagnose them with some kind of mental illness, or find some kind of psychological quirk.

Now what is free will? Free will as defined by dictionary.com (for consistency) would be: "free and independent choice; voluntary decision". At this very instant I can snap my fingers if I choose to do so. I need not consult with anyone else. It is an entirely voluntary act; I may choose to do so or not do so when I desire, whether I am told by someone else or not.

Do most of the people around you choose not to eat at various times, even if they're hungry? Do they choose to sometimes not sleep or nap, even if they feel tired? Do they perform tasks only when directly told to, or are they capable of doing things simply because they want to get something out of it? Do they do things for other people, even though they don't want to? Most humans I've seen demonstrate the ability for this kind of behavior. All of it implies that they have free will.

Now, what evidence does your sufficiently high intelligence offer which proves we are all deluding ourselves?

While I'm at it, here's another argument to further muddy the waters. In Larry Niven's Protector, the Brennan-monster frequently says that he no longer has free will since he's now too intelligent. Whenever he is presented with a choice he knows which is the best course of action to take and thus takes it. Only a mental defective would willingly take a course of action which didn't appear to be the best thing to do at the time, after all.

Now who says that AIs won't fall into that category? What if humans are smart enough to have free will but stupid enough not to lose it again? An AI like that might actually fail several of our tests for free will since it would be unable to make sub-optimal choices. We happen to regard being able to make sub-optimal choices as an example of free will but yet again we could be deluding ourselves. It could be that we're simply too dumb to make the best choice in every situation.

The best decision... for whom? By what criteria? Is it the morally best course of action? Is it the most beneficial course of action? Is it the most logical course of action? Is it the most consistent course of action?

It's an interesting example, but as long as he can act mentally defective, he hasn't lost free will. I believe that we humans are capable of knowing the best course of action but, even so, we can still decide to ignore it and do something else instead.

So this doesn't seem like an argument against free will, but rather, an argument that robots don't even need apparent free will to be a part of human society.

You seem to be arguing based on the assumption that free will is binary. You either have it or you don't. It might be nothing of the kind.

Then pay more attention to my posts where I acknowledge just that:

You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

Bear in mind you can't prove that any human on the planet won't go mad. So it would have to depend on the chance of it losing control.

That seems like a wise course of action.

Quote
Nor do we try animals for killing and eating other animals, or even humans, at least not generally. Here the reasons are less well-defined, but I think most people figure that the animal doesn't know better, it's natural behavior for the animal, and that is the way that the animal gets nourishment. Perhaps the animal is a carnivore and so it is biologically geared towards killing and eating other complex animals. But nor do we grant them rights such as freedom of speech or the right to vote. Again, we have applied some arbitrary mental test to animals and decided to exclude them from human society. Some animals are believed to be highly intelligent; but we still treat them the same as other animals.

Need I point out that there are several people involved in trying to get chimps rights on the grounds that said tests are flawed?

Quote
If we're only 50% sure that a robot has free will, but we're 99.9% sure that humans have free will, it seems kind of self-evident. We would not grant full rights to somebody unable to control themselves half the time. Most likely we would lock them up and separate them from society, if we considered them a threat to themselves or others.

I didn't say that the robot was uncontrollable 50% of the time. I said that we aren't certain whether the robot has free will or not because the tests aren't good enough. Using the example above, a free will test for an AI smarter than humanity might fail to give conclusive evidence in either direction simply because the test is wrong.

I didn't say that the robot was uncontrollable 50% of the time, either. When I wrote that I envisioned a robot that runs the risk of having a human give it orders and force it to obey them. After all, by the very definition of free will, a robot would have to be uncontrollable.

I don't see why you choose to be obsessed with the tests being wrong. Of course they'll be inaccurate. So what do you propose as an alternative?
Title: Re: Asimovian Thought Experiment
Post by: TrashMan on May 30, 2007, 05:46:36 am
Really...arguing against humans having free will makes as much sense as claiming they have no hands and breathe liquid nitrogen....

As for robots, I really don't think they will EVER be complex enough to compete with humans... for both technical and practical reasons.
Title: Re: Asimovian Thought Experiment
Post by: Colonol Dekker on May 30, 2007, 06:00:02 am
I would honestly like a Futurama robot. But it's a lottery whether we'll have R2-D2s, T-800s, Benders or Sonnys.

It'll be fun regardless......... :D
Title: Re: Asimovian Thought Experiment
Post by: Roanoke on May 30, 2007, 06:11:43 am
Doesn't the existence of laws, any laws, contradict the possibility of free will (whether it's Asimov and robots or, on a more abstract note, humans too)?

What if a robot doesn't want to "live"? A human can throw themselves off a bridge if they choose to. A robot wouldn't have the choice.

I also agree with K. though. Until we come up with a test that couldn't be cheated, we would never really know.
Title: Re: Asimovian Thought Experiment
Post by: Flaser on May 30, 2007, 06:33:54 am
I think a whole lot of you are afraid of the possibility that the soul is a phenomenon associable with the physical world. I prefer - and see the most evidence for - the Shirowian, 'Ghost in the Shell' approach.

Namely, any sufficiently complex system's behaviour will eventually reach such a chaotic state that you can't deterministically predict its exact reactions with any usable certainty.
You can still make accurate predictions about the average of said actions and long-term trends.

This is what you could call 'will', as it has characteristics (long-term trends) that make it immediately personal, but it is still not a simple program that could be run time and again to produce the same results.
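
That "trends yes, exact reactions no" behaviour doesn't even require much complexity. A minimal sketch in plain Python (the starting values and step counts are chosen purely for illustration): the logistic map is a single deterministic rule, yet two nearly identical starting states diverge completely while their long-run averages stay in close agreement.

Code
def logistic_map(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x): one deterministic arithmetic rule."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.400000, 10000)
b = logistic_map(0.400001, 10000)   # starts one millionth away

# Exact reactions: unpredictable. The tiny initial difference is
# amplified until the two trajectories are unrelated.
print(abs(a[60] - b[60]))

# Long-term trends: predictable. The time averages agree closely
# (both hover near 0.5, the mean of the map's long-run distribution).
print(sum(a) / len(a), sum(b) / len(b))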

This is a structuralist approach. Shirow goes beyond this by stating that the current structure is built upon earlier versions, and that said structures manifest themselves in base functions you could call instinct.

Philosophy of Ghost in the Shell (http://en.wikipedia.org/wiki/Ghost_in_the_Shell_%28philosophy%29)

Therefore any robot, AI or other system with enough complexity and redundant base functions would have a soul by my definition, and while its nature and evolution would be radically different from ours, I don't believe that as individuals or as a species they would have any less legitimacy or genuineness than humans in my eyes.
Title: Re: Asimovian Thought Experiment
Post by: Mefustae on May 30, 2007, 06:54:33 am
R2-D2s, T-800s, Benders or Sonnys.
Disclaimer: If you can identify all four (4) of the aforementioned robots in under 10 seconds, you qualify as a geek.
Title: Re: Asimovian Thought Experiment
Post by: TrashMan on May 30, 2007, 08:11:19 am
R2-D2 = Star Wars
T-800 = Terminator
Bender = Futurama
Sony = ???? Isn't this a company rather than a robot?


Humans can't (and won't) produce something they cannot comprehend themselves.
The human brain is so ridiculously complex that it takes a human lifetime to study it, and you are still only scratching the surface...

To manufacture a robot with such a complex "artificial" brain would require ENORMOUS amounts of knowledge, work hours, technology and, most of all, money...

So I see it as a Dyson Sphere thing - possible in theory (lol... even though the theory is on a very shaky foundation) - but something that will NEVER be built..
Title: Re: Asimovian Thought Experiment
Post by: Ghostavo on May 30, 2007, 11:22:44 am
Sonny is from the same author this thread is about; that should give you enough clues.
Title: Re: Asimovian Thought Experiment
Post by: Janos on May 30, 2007, 12:28:51 pm
Doesn't the existence of laws, any laws, contradict the possibility of free will (whether it's Asimov and robots or, on a more abstract note, humans too)?
No, because laws are sociological constructs which outline rules for living in a society, and they are frequently broken.

Quote
What if a robot doesn't want to "live"? A human can throw themselves off a bridge, if they choose to. A robot wouldn't have the choice.

Why not? Wouldn't that only depend on the flexibility of said robot's coding, which dictates its behaviour?

Quote
I also agree with K. though. Until we come up with a test that couldn't be cheated, we would never really know.
I love free will vs. determinism debates; they never go anywhere. Maybe, just maybe, because we cannot simply perform wide empirical research on humans (the same reason why biologists and sociologists often clash about human behaviour).
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 30, 2007, 12:47:12 pm
However the fact remains that you could have thousands of robots off the production lines long before you had any kind of definitive test. Bear in mind that there is no real reason to think that free will wouldn't happen as the result of an accident either. Sci-fi is full of examples of smarter and smarter computers achieving sentience.

I'm not sure what the point of this paragraph is beyond adding additional information.


It's yet another example of how a robot could be created without tests to prove free will existing to test it.

I say that the baseline is free will because by and large that is what our government, justice system, etc. is based on.


That's very circular logic. The legal system is based on the assumption of free will because only the presence of free will creates the need for a legal system.

Quote
I think most people would say that our justice system has progressed since the Salem witch trials, or when people attributed criminal behavior to demons or spirits. In that time we have held people more accountable for their actions - rather than say that they are possessed, we may diagnose them with some kind of mental illness, or find some kind of psychological quirk.

Ironically, I was about to raise a similar point myself. Why are you assuming that this is where the progress ends? What if, 400 years from now, we don't have prisons at all because humanity has proved that all criminal behaviour (and not just some of it) is due to psychological problems? We already use criminal profiling to catch murderers, and such profiling always talks about the murderer's need to do x or y. Yet when we catch them we decide that they were 100% responsible for their actions and thus have to be sent to jail for them. What if in 400 years prison is viewed as being as stupid a notion as demonic possession?

What if a large number of things attributed to free will are actually not due to free will at all?

Quote
Do most of the people around you choose to decide to eat at various times, even if they're hungry? Do they choose to sometimes not sleep or nap, even if they feel tired? Do they perform tasks only when directly told to, or are they capable of doing things simply because they want to get something out of it? Do they do things for other people, even though they don't want to? Most humans I've seen demonstrate the ability for this kind of behavior. All of it implies that they have free will.


Again with the assumption of binary free will that is either on or off. What if whether you snap your fingers is all your choice, when you sleep is 90% your choice, and whether or not you murder the pretty girl who just walked past is down to the unique set of psychological quirks you've built up during your lifetime and is something you have little control over?

Quote
Now, what evidence does your sufficiently high intelligence offer which proves we are all deluding ourselves?


In the same way that a sufficiently intelligent being could see that the Salem witch trials were bull****.

The best decision...for who? By what criteria? Is it the morally best course of action? Is it the most beneficial course of action? Is it the most logical course of action? Is it the most consistent course of action?

It's an interesting example, but as long as he can act mentally defective, he hasn't lost free will. I believe that we humans are capable of knowing the best course of action but, even so, we can still decide to ignore it and do something else instead.


But why on earth would he "act mentally deficient"? You're taking a very anthropomorphic view of the subject. Why would a highly intelligent being deliberately choose to do that? Again, I'm not assuming that AIs will be comparable to us in intelligence. What if the first AI is to us what we are to chimps? Cats? Woodlice even? Would you still expect the same systems that govern our existence to necessarily be relevant at all to such an AI?

Humans are stupid. We can choose to make the wrong choice. But that doesn't mean that every intelligent being has to be like that.

Quote
You seem to be arguing based on the assumption that free will is binary. You either have it or you don't. It might be nothing of the kind.

Then pay more attention to my posts where I acknowledge just that:

You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

Nope. As far as I can see from that, you're still assuming binary free will and simply assigning a percentage chance that the robot has it or not. I'm talking about something different. I'm talking about there being a spectrum of free will, with certain actions which are under your complete control and others being only partially or not at all under your control. A robot under Asimov's laws does not have free will. Yet this does not mean it would act uncontrollably, because we understand exactly what the limits on its free will are and have determined what danger they present.

Besides, you keep bringing up the law. And the law does make a binary distinction between free will and insanity. A man who committed a murder is either guilty or insane. There is no "well, he was mostly insane, but he could have chosen not to do x, so we give him a reduced prison sentence for that minor mistake and treat him for the insanity which is mostly to blame". Suppose we have a person who would medically qualify as a psychopath today but wouldn't have a year ago, because the tendencies weren't quite strong enough yet. Would you say that a murder committed 6 months ago was 100% free will? What about 3 months ago? Yesterday? At which point does the choice suddenly flip between insanity and free will?

Quote
I don't see why you choose to be obsessed with the tests being wrong. Of course they'll be inaccurate. So what do you propose as an alternative?

You've missed the point completely if you think I need to propose an alternative. That's like saying "Well science might not give us the right answer so are you proposing voodoo as an alternative?" I'm simply saying that science may not give you a conclusive answer. So making statements like "I'll give a robot rights based on how well I can determine if it has free will" means little if your tests are not conclusive in the first place. 

Asking me for an alternative simply proves that you're not paying attention to what I'm saying. There is no alternative (at least no sensible one). I'm asking what do you do when the tests ARE inconclusive. Not what other tests you would do instead.

I asked you what do you do with a robot that you're 50% certain has free will. What you're suggesting so far sounds a little too much like this

Man 1: I think he's dead. But I'm not a doctor. I can't tell
Man 2: How certain are you he's dead?
Man 1: I'm 50:50. He could be in a deep coma.
Man 2: Well then. If you're 50% certain he's dead we'll only bury him up to his waist.

What do you do when your 50% robot is accused of a crime?
Title: Re: Asimovian Thought Experiment
Post by: Roanoke on May 30, 2007, 01:37:56 pm
Doesn't the existence of laws, any laws, contradict the possibility of free will (whether it's Asimov and robots or, on a more abstract note, humans too)?
No, because laws are sociological constructs which outline rules for living in a society, and they are frequently broken.

Quote
What if a robot doesn't want to "live"? A human can throw themselves off a bridge, if they choose to. A robot wouldn't have the choice.

Why not? Wouldn't that only depend on the flexibility of said robot's coding, which dictates its behaviour?

Quote
I also agree with K. though. Until we come up with a test that couldn't be cheated, we would never really know.
I love free will vs. determinism debates; they never go anywhere. Maybe, just maybe, because we cannot simply perform wide empirical research on humans (the same reason why biologists and sociologists often clash about human behaviour).

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I appreciate your point, but in the Asimov text it looks pretty clear cut to me.
Title: Re: Asimovian Thought Experiment
Post by: Bobboau on May 30, 2007, 01:50:06 pm
I think a whole lot of you are afraid of the possibility that the soul is a phenomenon associable with the physical world. I prefer - and see the most evidence for...

That's very similar to what I was saying earlier; no one paid any attention to it then either.
Title: Re: Asimovian Thought Experiment
Post by: Flaser on May 31, 2007, 02:28:15 am
I think a whole lot of you are afraid of the possibility that the soul is a phenomenon associable with the physical world. I prefer - and see the most evidence for...

That's very similar to what I was saying earlier; no one paid any attention to it then either.

"Logically sound? How laughable. The only thing that people use logic for is to see what they want to see and disregard what they do not." - my sig in a lot of other forums
Title: Re: Asimovian Thought Experiment
Post by: Colonol Dekker on May 31, 2007, 03:38:51 am
Logic is all about point of view anyway.

Bear with me on this one. A species like the black widow or the praying mantis eats the male after mating, correct?

To them it seems logical because it stops the male breeding with other females. But other species bond with the male and prevent him from "throwing it about like muck" (angler fish).
Title: Re: Asimovian Thought Experiment
Post by: Janos on May 31, 2007, 04:09:57 am
Logic is all about point of view anyway.

Bear with me on this one. A species like the black widow or the praying mantis eats the male after mating, correct?

To them it seems logical because it stops the male breeding with other females. But other species bond with the male and prevent him from "throwing it about like muck" (angler fish).

Actually, the male is easy meat and a good source of protein, so hell, why not? The male's possible sex life, something that is completely unheard of here on the internet, does not interest the female in the slightest.
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 31, 2007, 04:15:04 am
Quote
I say that the baseline is free will because by and large that is what our government, justice system, etc. is based on.


That's very circular logic. The legal system is based on the assumption of free will because only the presence of free will creates the need for a legal system.

If *only* the presence of free will creates the need for a legal system, and we have a need for a legal system, what does that suggest?

Surely that's not what you meant.

Quote
I think most people would say that our justice system has progressed since the Salem witch trials, or when people attributed criminal behavior to demons or spirits. In that time we have held people more accountable for their actions - rather than say that they are possessed, we may diagnose them with some kind of mental illness, or find some kind of psychological quirk.

Ironically I was about to raise a similar point myself. Why are you assuming that this is where the progress ends?

I didn't say this was where progress ends.

What if, 400 years from now, we don't have prisons at all because humanity has proved that all criminal behaviour (and not just some of it) is due to psychological problems? We already use criminal profiling to catch murderers, and such profiling always talks about the murderer's need to do x or y. Yet when we catch them we decide that they were 100% responsible for their actions and thus have to be sent to jail for them. What if in 400 years prison is viewed as being as stupid a notion as demonic possession?

What if a large number of things attributed to free will are actually not due to free will at all?

Are you actually making the claim that all crime is due to psychological problems which are completely outside the criminal's control? Or are you just spewing extraneous crap because you don't feel like directly refuting my point?

I can elaborate. By moving the blame from unprovable entities such as demons or spirits to more provable things like psychological afflictions, we've shifted from choosing to believe that people are affected by unprovable forces that _might_ interfere with free will (which you keep relying on to support your argument) to choosing to believe that people are affected by (arguably) provable forces that we can show would interfere with free will.

Even if we were to decide that, 400 years from now, all crime is caused by psychological illness - we would still be relying on evidence.

Quote
Do most of the people around you choose to decide to eat at various times, even if they're hungry? Do they choose to sometimes not sleep or nap, even if they feel tired? Do they perform tasks only when directly told to, or are they capable of doing things simply because they want to get something out of it? Do they do things for other people, even though they don't want to? Most humans I've seen demonstrate the ability for this kind of behavior. All of it implies that they have free will.


Again with the assumption of binary free will that is either on or off. What if whether you snap your fingers is all your choice, when you sleep is 90% your choice, and whether or not you murder the pretty girl who just walked past is down to the unique set of psychological quirks you've built up during your lifetime and is something you have little control over?

A) You're not even talking about the same point I was.
B) You're not even trying to give any evidence for your "What if" statements.

Quote
Now, what evidence does your sufficiently high intelligence offer which proves we are all deluding ourselves?


In the same way that a sufficiently intelligent being could see that the Salem witch trials were bull****.

Nope. Not good enough.

Quote
The best decision...for who? By what criteria? Is it the morally best course of action? Is it the most beneficial course of action? Is it the most logical course of action? Is it the most consistent course of action?

It's an interesting example, but as long as he can act mentally defective, he hasn't lost free will. I believe that we humans are capable of knowing the best course of action but, even so, we can still decide to ignore it and do something else instead.


But why on earth would he "act mentally deficient"? You're taking a very anthropomorphic view of the subject. Why would a highly intelligent being deliberately choose to do that? Again, I'm not assuming that AIs will be comparable to us in intelligence. What if the first AI is to us what we are to chimps? Cats? Woodlice even? Would you still expect the same systems that govern our existence to necessarily be relevant at all to such an AI?

Humans are stupid. We can choose to make the wrong choice. But that doesn't mean that every intelligent being has to be like that.

Simply because a being doesn't choose to take an action doesn't mean that it can't.

If an intelligent being can't make the wrong choice, it doesn't have free will.

Nope. As far as I can see from that, you're still assuming binary free will and simply assigning a percentage chance that the robot has it or not. I'm talking about something different. I'm talking about there being a spectrum of free will, with certain actions which are under your complete control and others being only partially or not at all under your control. A robot under Asimov's laws does not have free will.

Now who's assuming binary free will?

In virtually every one of Asimov's stories, robots have displayed a certain amount of free will, even while following the laws. Bicentennial Man, for instance.

Yet this does not mean it would act uncontrollably, because we understand exactly what the limits on its free will are and have determined what danger they present.

No, we haven't. Even when the laws were known in Asimov's robot stories, robot behavior was not 100% predictable.

Besides, you keep bringing up the law. And the law does make a binary distinction between free will and insanity. A man who committed a murder is either guilty or insane.

That's not completely true. A man who committed a murder may be guilty of first degree murder, second degree murder, or (in some states) third degree murder, each with its own definition of how responsible the murderer was for the murder. (E.g. were their actions fully considered, or an emotional outburst?)

There is no "well, he was mostly insane, but he could have chosen not to do x, so we give him a reduced prison sentence for that minor mistake and treat him for the insanity which is mostly to blame". Suppose we have a person who would medically qualify as a psychopath today but wouldn't have a year ago, because the tendencies weren't quite strong enough yet. Would you say that a murder committed 6 months ago was 100% free will? What about 3 months ago? Yesterday? At which point does the choice suddenly flip between insanity and free will?

You'd have to ask someone who actually wants to argue about the fine points of mental illnesses and the proper treatment of them.

Quote
I don't see why you choose to be obsessed with the tests being wrong. Of course they'll be inaccurate. So what do you propose as an alternative?

You've missed the point completely if you think I need to propose an alternative.

If you are going to argue against determining whether a robot has free will via a scientific process, you had damn well better have an alternative. You have completely failed to provide any substantial evidence to support your point. Hell, you seem to keep changing your point. First you were saying that free will was irrelevant, then you started differentiating between the appearance of free will and actual free will, and now you're starting to complain that I haven't suitably addressed partial free will (when prior to that point, neither had you).

That's like saying "Well science might not give us the right answer so are you proposing voodoo as an alternative?" I'm simply saying that science may not give you a conclusive answer.

And I've already explicitly stated that I believe that a test for free will could give you a wrong answer. Now you're arguing that the tests might be inconclusive? Decide what you're actually objecting to. :doubt:

Asking me for an alternative simply proves that you're not paying attention to what I'm saying.

No, it just means that I have more respect for myself than you apparently do.

I've stated my solution. I've stated my supporting evidence. I've made an effort to provide substantial evidence.

You have stated no solution. You haven't come up with any supporting evidence. You've only made an effort to come up with unprovable thought experiments and somebody else's fictional characters.

There is no alternative (at least no sensible one). I'm asking what do you do when the tests ARE inconclusive. Not what other tests you would do instead.

I've acknowledged that the tests might not be right. That the tests might not be conclusive is your argument, one that I'm not participating in until you can prove your point. (And actually have a consistent point.)

I asked you what do you do with a robot that you're 50% certain has free will. What you're suggesting so far sounds a little too much like this

Man 1: I think he's dead. But I'm not a doctor. I can't tell
Man 2: How certain are you he's dead?
Man 1: I'm 50:50. He could be in a deep coma.
Man 2: Well then. If you're 50% certain he's dead we'll only bury him up to his waist.

I'm sorry that you think it sounds like that's what I'm saying. That's not what I'm saying.

What do you do when your 50% robot is accused of a crime?

You should figure out if the accusation is true or not. :)
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 31, 2007, 04:48:19 am
Yet again you're deliberately misunderstanding me. So I'm out.
Title: Re: Asimovian Thought Experiment
Post by: Bobboau on May 31, 2007, 07:09:09 am
how exactly are we defining free will in this... discussion?
Title: Re: Asimovian Thought Experiment
Post by: Wobble73 on May 31, 2007, 08:09:44 am
how exactly are we defining free will in this... discussion?

 :nervous:

What discussion??  :nervous:

What was the question again?

 :drevil:  :lol:
Title: Re: Asimovian Thought Experiment
Post by: castor on May 31, 2007, 11:35:57 am
I've got the belief that the essence of free will, if such a thing exists, is beyond the scope of human understanding. Much like the concepts of "infinity", "eternity", etc.
Heh, one can experience things that are impossible to conceptualize. Try and debate that :confused:
Title: Re: Asimovian Thought Experiment
Post by: phreak on May 31, 2007, 11:58:49 am
A robot with free will implies that there is some randomness in its decision tree. All current random number generation algorithms are deterministic, and the results will be the same if the seed is the same. So if you can manipulate the random seed, you can get the robot with free will to perform a given action 100% of the time.

http://en.wikipedia.org/wiki/Pseudo-random_number_generator
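
A quick sketch of that point, using Python's standard random module as a stand-in for a robot's "decision source" (the robot_decision function is invented for illustration):

Code
import random

def robot_decision(seed):
    # A "free" decision sequence driven entirely by a seeded PRNG.
    rng = random.Random(seed)  # deterministic for any given seed
    return [rng.randint(0, 9) for _ in range(5)]

# Same seed in, same "choices" out, on every single run:
assert robot_decision(42) == robot_decision(42)
print(robot_decision(42))  # prints the identical list every time the script runs

Swap the PRNG for a hardware entropy source and the guarantee disappears, which is why the argument hinges on the generator being pseudo-random.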

edit:  I didn't read the previous 4 pages, so this was probably brought up already.

edit2: i need to quote this :p

Quote
Farnsworth: Behold! The death clock. Simply jam your finger in the hole and this read-out tells you exactly how long you have left to live.
Leela: Does it really work?
Farnsworth: Well it's occasionally off by a few seconds. What with free will and all.
Fry: Sounds like fun. How long do I have left to live?  <He puts his finger in the hole and the clock dings>
Bender: Ooh! Dibs on his CD player!
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on May 31, 2007, 03:45:00 pm
Yet again you're deliberately misunderstanding me. So I'm out.

At the very beginning of this debate you objected to another poster's point specifically because he wasn't able to prove one of his points. You stated that for his point to be at all valid as an argument he would have to prove that point.

You, on the other hand, haven't held yourself to that same standard. You've made multiple points in this thread which rely on being prefaced with "What if?" because they're inherently unprovable. I don't see any point in arguing with someone who uses boundless limits for their objections but then demands that the other posters be constrained by what is actually provable. I will always be arguing from an unfair disadvantage.

Furthermore, you've failed to define what you're actually arguing. I've dropped a couple explicit definitions in my posts to try and make it clearer what I mean by "free will"; you haven't. I've clearly listed my reasoning behind my argument; you haven't. I've tried to make it clear what, exactly my assertion is; you haven't.

I, personally, get tired of being asked to come up with how I believe that people should respond to completely imaginary situations, and then getting attacked for my answer each and every time. You don't even bother to come up with how you think they should respond, at least not in the same detail that I have, so again it's an inherently unfair position to be arguing from.

I don't see much point to your objections; they don't provide for interesting discussion, just push the discussion into more and more philosophical territory. They aren't factual in nature so nobody is really learning anything.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on May 31, 2007, 05:16:49 pm
I have no problem with a discussion that is based on the facts. I do however see no point in discussing anything with someone who doesn't wish to understand my position and whose only interest is in seeing how they can twist my words in order to win.

The existence of free will is a philosophical point. That was my entire objection to it being used. You entered this discussion on a philosophical point right in your very first reply to me. And then you say that I need to back up my points with scientific answers and evidence? For a philosophical argument?

You have then repeatedly gone back to the same philosophical point that humans must have free will because our laws make it so. And yet I'm wrong cause I'm making philosophical arguments? :lol:
Title: Re: Asimovian Thought Experiment
Post by: Bobboau on June 01, 2007, 12:52:01 am
so was a definition dropped somewhere already?
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on June 02, 2007, 03:35:25 am
Now what is free will? Free will as defined by dictionary.com (For consistency) would be: "free and independent choice; voluntary decision".

Granted, that's only definition 1; definition 2 is:

Quote
2.   Philosophy. the doctrine that the conduct of human beings expresses personal choice and is not simply determined by physical or divine forces.
Title: Re: Asimovian Thought Experiment
Post by: Bobboau on June 02, 2007, 03:11:05 pm
The first definition is fairly useless, as it just brings more undefined terms into the fray which are more or less exactly like free will.

The second, however, I think can be used: "not simply determined by physical forces". From this I could go either way. Everything in the world is a result of physical forces, so in that respect I would say no, we wouldn't have free will by that definition. However, I don't quite agree with the word "determined": in nature there seems to be a lot of genuine chaos. At the atomic level things work in a statistical manner, not the solid form we are used to thinking about, and this boils up to the macroscopic level. Even computers make mistakes every now and then, so something as complex and imprecise as an animal brain (like ours) will likely have a much larger variance rate. So, given exactly the same situation, I would wager that though you would often get very similar results, there would be some small amount of unpredictability that would make it impossible to ever 100% accurately determine what would happen. So if full determinism is the only alternative to free will, then I'm going to have to go with free will.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on June 02, 2007, 04:24:35 pm
If it's the only alternative....I don't think anyone on this thread was saying that it was.
Title: Re: Asimovian Thought Experiment
Post by: Bobboau on June 02, 2007, 05:29:31 pm
well what are the other alternatives?
Title: Re: Asimovian Thought Experiment
Post by: Fabian on June 02, 2007, 05:41:18 pm
Quote
However, I will say that I believe it is possible to devise such a test, given that there is significant precedent that we can prove that an individual is or was incapable of acting normally (e.g. the insanity plea). I would assume that such a test would involve questions that would test the robot's ability to judge right from wrong, and its ability to interact with people and judge from those interactions what was appropriate or inappropriate in a given situation.

I also believe that such a test would involve an investigation into the robot's software and hardware, in order to determine whether there were hidden triggers which would restrict the robot's ability to act freely. Presumably the test would also interview persons who had interacted with the robot outside of the court setting, in order to gauge the robot's behavior in a more realistic environment. I wouldn't rule out long-term observation or checkups.

Through such a test, I believe that you could prove beyond reasonable doubt that a robot possessed free will comparable to a standard human.

In other words, you'd do your entire array of tests and end up exactly where I said you would: with a robot that may or may not have free will but which, to all intents and purposes, appears to have it. You still wouldn't have proved free will. You'd simply have run out of tests for it and drawn the conclusion that "as far as I can tell, it has free will".

Wow, that is fascinating. Suppose I now run the same test on a human (which our society assumes normally has "free will") and he uses his "free will", i.e. decides to fail the test.

... Uhm, wouldn't that mean that I "prove" that he has no free will, even though he has, which contradicts the whole assumption that humans have "free will"?

So now you need to prove they have no "free will". So you order one human to do everything to pass the test and show that he has free will. You needed to order him because, by assumption, he had no free will to decide that himself. So the test shows he has "free will" even though we assumed he has not.

And if he fails the test (i.e. has no "free will"), he disobeyed an order with his "free will", which again contradicts the assumption that humans have no "free will".

As both scenarios can ultimately lead to contradiction, even a test for humans' "free will" seems to be impossible with 100% accuracy.

I don't know how you ever want to make one for robots ...

The insanity plea test just proves that the person acted "insane" - whether or not this person actually is "insane" is not certain. (So a healthy person can be proven to be insane.)

A further issue with insanity is the assumption that "insane" persons do not have this free will, i.e. will definitely fail the test. So if someone passes the test => he is not insane.

The thing is, insane persons also no longer obey orders; but if they did, someone could order them to pass the test (by leaking the answers, for example ;) ).

A difference is that a robot would still obey its orders ...

cu

Fabian
Title: Re: Asimovian Thought Experiment
Post by: Fabian on June 02, 2007, 07:27:01 pm
[...]
Will is the easier part, ironically. It's simply the fact that some output results from input. "Free" is more difficult to define, because it's related to how the output is produced from the input.

[...]

I myself think that every sentient system has free will, because it's not simply always producing the same output from a certain input, but consciously affecting the process of decision. It doesn't matter that these processes are bound to matter - everything is. The electrochemical reactions going on in the brain *are* the sentience, consciousness and free will; those things are not just a by-product of the reactions. They are the same thing.

It all boils down to definitions though. If you want to say that free will doesn't exist because brains are just a bunch of matter doing its thing in the head, fine. In my opinion though, the key point is that the brain affects itself as much as the outside world affects it, and thus the freedom of will is IMHO fulfilled - since the output is not simply dependent on the input.

Note that chance has little to do with this. You can obviously make a simple computer produce varying output from the same input (a random seed generator is the prime example of this), but whether or not this has any coherence, sentience or free will is obvious - no. It's simply random...

Okay, that means a simple self-learning AI to play tic-tac-toe has free-will?

The world is quite limited, but inside this world the AI can make any decision it wants to.

The outside world _and_ its own experiences decide what it will do.

Of course we soon land at kara's point that an "uber-intelligent being" would no longer have free will, because making mistakes would be counterproductive.

I think we are missing something.

Hm, let's add some hypothetical "feelings".

After having won 10000000 times it starts to get "boring", so the AI decides to let the human win sometimes, too.

It's like a human thinking "this is too boring, I'll let the fighter escape and then chase it again" - and boom, a hidden second fighter shoots him out of the sky.

But when is something boring? I think when you don't need to think anymore, or nothing new "seems" to happen.
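
A toy sketch of that "boredom" signal (all names here - BoredAI, boredom_threshold and so on - are invented for illustration, not a real design):

Code
import random

class BoredAI:
    # Toy tic-tac-toe agent: plays the best move until winning gets boring.
    def __init__(self, boredom_threshold=10):
        self.win_streak = 0
        self.boredom_threshold = boredom_threshold

    def choose(self, best_move, legal_moves):
        # Nothing new has happened for a while -> let the human win one.
        if self.win_streak >= self.boredom_threshold:
            self.win_streak = 0
            return random.choice(legal_moves)  # deliberately suboptimal play
        return best_move

    def record_win(self):
        self.win_streak += 1

Whether a counter like win_streak deserves to be called a "feeling" is, of course, exactly what this thread is arguing about.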

And that is a very interesting point: we always have a point of view here, and it matters what the brain concentrates on at the moment.

I.e. a mission can be completely boring to me because I've played it 100 times already and think I know exactly what happens.

But it could be that (in this case due to random factors) something entirely different happens and I just don't notice it, because I'm not paying attention.

So the outside world influences me to be bored; normally it should influence me not to be bored, but as I am locked into one point of view I am not seeing this "influence".

Of course this all rather "proves" your point about the outside and inside worlds influencing actions. Feelings are also only part of the inside world.

--

I just want help building my badass AI that takes over the world by infecting old Windows computers with a virus and sending lots of SPAM mails to cover up the slow takeover :P ... or wait ...

--

Regarding free will, I think what matters for me personally is that I have "desires" and "feelings" and that I can influence those with my own actions to a certain degree. This suggests to me that I have free will, which gives me a feeling of "freedom". :) (yahoo, they programmed freedom into us ...)

On a personal thing, I think most humans just want to be "happy".

lol, that means an AI, to be like a human, must at least have:

- feelings and the desire to have good feelings and be "happy"

* Can you imagine Shivans getting angry because you killed their wingmates?
* Can you imagine a Shivan getting happy about killing humans, and thus aiming better?

I really would like to see how an emotion-driven AI would perform in an FS2 battle.

(The above assumes Shivans have free will and feelings, of course, but it could also be true only for Terran pilots, some of whom have learned to suppress emotions better than others ...)

Hell, you could even have a "virtual academy" with all the pilots created by some "genetic algorithm", and then choose them for the mission based on their ranking and the AI level.

i.e. A "Captain" scored perfect in AI school, but still might not be completely emotionally stable (he's insane! :p).

If things were done that way, they would at least no longer seem to be that predictable.

(of course humans are sometimes really predictable also; might be funny to have an AI that thinks "damn, why is this pilot so stupid like the rest of them")

On the other hand, randomness can be inserted as well - not in the inner world, but in the things happening in the outer world. I.e. a different pilot would be chosen based on level, but also on random factors (i.e. the other person could be ill ;) ) ...
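
A rough sketch of that selection idea (Pilot, pick_pilot and the 10% illness chance are all made up for illustration):

Code
import random
from dataclasses import dataclass

@dataclass
class Pilot:
    name: str
    ranking: float    # academy score, e.g. the output of a genetic-algorithm run
    stability: float  # emotional stability, 0..1

def pick_pilot(roster, ai_level):
    # Outer-world randomness: roughly 10% of pilots happen to be ill today.
    available = [p for p in roster if random.random() > 0.1]
    if not available:
        available = roster  # someone has to fly
    # Prefer the pilot whose academy ranking best matches the mission's AI level.
    return min(available, key=lambda p: abs(p.ranking - ai_level))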

I think we should just connect FS2 to Second Life and get the pilot data from there - or from the sims games. :lol:

Seriously, if their avatars had any feelings and skills attached, it would be easy to create human profiles out of them :nod:.

Okay, I'm really getting off-topic here, but I think we should first create that AI with free will, then we can worry about the consequences. ;7

And what would be better than BtRL to start such an AI in? They have a resurrection ship, after all, and seemingly (they think they have) feelings ... :-).

Which is a cool point. Unlike with us humans, you could transfer an AI (even one with free will) and thus preserve the mind, or even copy it at some point - but I guess that would be as unethical as cloning someone ...

And for the original poster:

I also don't think the laws will work to protect society, as a robot cannot properly satisfy all of them. A robot cannot even blow itself up (that breaks rule 3, and rules 1 and 2 afterwards).

There can be a situation where he has to decide against one of them:

He has a gun, the "enemy" human has a gun, and the human to be protected has no gun. The "enemy" will shoot in 5 seconds.

There is no time to do anything other than fire the gun at the enemy to protect the human, thus breaking rule 1.

If he doesn't fire, he is also breaking rule 1 (through inaction).

If he doesn't, he is also breaking rule 2.

If he blows himself up to avoid deciding, he breaks rules 3, 2 and 1.

If he goes for whatever breaks the fewest rules, it's to fire the gun.
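
A minimal sketch of that "fewest broken rules" arithmetic (the option names and the tie-breaking scheme are invented for illustration; Asimov's laws don't actually prescribe any such scoring):

Code
# Each option maps to the set of laws it would break in the scenario above.
options = {
    "shoot the attacker": {1},        # harms a human
    "do nothing":         {1, 2},     # harm through inaction, disobeys order
    "self-destruct":      {1, 2, 3},  # allows harm, disobeys, destroys itself
}

def least_bad(options):
    # Lower-numbered laws take priority, so compare the highest-priority law
    # broken first, and the number of broken laws second.
    return min(options, key=lambda o: (min(options[o]), len(options[o])))

print(least_bad(options))  # -> "shoot the attacker"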

And I also fear that overprotection might occur.

cu

Fabian

PS: Is there anyone else interested or is there already a long buried thread about a new AI for FS2?
Title: Re: Asimovian Thought Experiment
Post by: WMCoolmon on June 02, 2007, 08:15:45 pm
Wow, that is fascinating. Suppose I now run the same test on a human (which our society assumes normally has "free will") and he uses his "free will", i.e. decides to fail the test.

... Uhm, wouldn't that mean that I "prove" that he has no free will, even though he has, which contradicts the whole assumption that humans have "free will"?

So now you need to prove they have no "free will". So you order one human to do everything to pass the test and show that he has free will. You needed to order him because, by assumption, he had no free will to decide that himself. So the test shows he has "free will" even though we assumed he has not.

And if he fails the test (i.e. has no "free will"), he disobeyed an order with his "free will", which again contradicts the assumption that humans have no "free will".

As both scenarios can ultimately lead to contradiction, even a test for humans' "free will" seems to be impossible with 100% accuracy.

I don't know how you ever want to make one for robots ...

Still, that assumes that you can pass the free will test if you don't have free will. To use another test as an example, you could score a 0 on a math test by using your math knowledge (the very thing the test was supposed to test for) to pick all the wrong answers. But the assumption is that if you don't have the math knowledge, you won't have such a choice. So math tests are considered an effective way to test for math knowledge.

To take the three important parts of the definition below, here's how I would interpret them:
1) Free choice - Sounds like the same thing as free will, so I'm going to ignore this for now
2) Independent choice - Determine whether or not the robot (I'll assume it's a robot) is significantly affected by factors other than the test itself. For example, remote control via radio, or orders given to it prior to the test.
3) Voluntary decision - Determine whether or not the robot can choose to decide how to react to a given situation.

(2) would be a matter of cutting the robot off from the outside world, e.g. enclosing it in a soundproofed room with a screen mesh and no openings to the outside. However, I'm sure that there are locations on Earth which do just that and more, to prevent wiretapping and such. So this shouldn't be a problem, unless we assume that the robot is based on completely unknown technology. (Which I'm not, until we get evidence of some other technologically advanced race.)

To test for a lack of orders given to it prior to the test, you'd have to know something about the robot. For example, if you knew it was based on the Three Laws, you would have to devise some way of getting around the Second Law: invoke a First Law violation. I'll dig up an example from a short story, wherein one of Asimov's characters forces a robot into obvious mental freeze because he asserts that by lying due to the Second Law, the robot would cause irreparable harm to another human being. So that would be my first guess at how to prevent such a robot from lying.

Of course, there's always the possibility that the other side would make such an assertion as well. So it would be most important to word the hypothesis as strongly as possible, but I imagine it would go something like, "This test is designed to test for free will. It is of the utmost importance that you are as honest with your answers as possible. If you are not honest with your answers, it is completely possible that it will result in harm or death to a human being which could otherwise have been prevented."

If the other side in the argument were to make a similar argument to the robot, you would end up competing for strength - who could prove the most harm to a human being. However, in a situation such as this, a robot (in Asimov's novels) has generally displayed visible difficulty with resolving the situation (and if it believes that harm would result for a human being whether it lied or told the truth, it could go into mental freeze-out for lack of a third option).

If you didn't know the rules, and had no means of determining them outside of questioning, you would run into a problem. I'll try to address this in (3)

3) :D
Proving that a robot has voluntary action is difficult. Working from the idea that we can't figure out what a robot is thinking or measure mental activity of any kind, I immediately see two ways you could test for voluntary action.

The first is reaction speed. A robot bound by the three laws will of course automatically move as fast as possible to save a human being who is in danger. However, it will have no such deep compulsion to perform ordinary or mundane tasks, or to act on some other moral imperative (for example, stopping a human being from torturing and killing an animal). If a robot were designed to prevent any immoral action, you would see similar patterns: it could not and would not hesitate to prevent any such acts.

The second is to determine whether a robot can act differently when faced with the same situation. If a robot must prevent harm to human beings, it will be forced to prevent that harm all the time. If a robot must follow orders, it will do so all the time. Obviously it's not foolproof. If a robot were programmed to always prevent harm from 9am-5pm, and/or the first three times a given human was determined to be in a certain amount of trouble, it would be extremely difficult to prove the existence of the rule.

Via these two methods it's possible that you could find patterns that indicate hidden rules in the robot, and use them in test (2).
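
A rough sketch of that second method, repeated-situation testing (the robot.respond interface and the trial count are invented for illustration):

Code
from collections import Counter

def repeatability_test(robot, situation, trials=100):
    # Present the identical situation many times and count the distinct
    # responses (assumes responses are hashable values).
    responses = Counter(robot.respond(situation) for _ in range(trials))
    # Zero variation hints at a hard-coded rule; some variation is at best a
    # hint of voluntary action, since a hidden rule could also be conditional
    # (the 9am-5pm caveat above).
    return len(responses) > 1  # True = the robot showed behavioural variation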

Conc)
As long as you assume that you can't measure anything internal about a robot, it's going to be a test of intelligence and of how far you can eliminate the possibility of deception in the test. On the other hand, that is how any other test works with human beings.

The insanity plea test just proves that the person acted "insane" - whether or not this person actually is "insane" is not certain. (So a healthy person can be proven to be insane.)

A further issue with insanity is the assumption that "insane" persons do not have this free will, i.e. will definitely fail the test. So if someone passes the test => he is not insane.

The thing is, insane persons also no longer obey orders; but if they did, someone could order them to pass the test (by leaking the answers, for example ;) ).

A difference is that a robot would still obey its orders ...

cu

Fabian

Neither the definition of insanity used in court cases nor the medical definition contains any inherent wording that an insane individual will not follow orders - not according to Wikipedia, anyway.

If you're assuming that someone can leak the answers to a free will test, then your objection is no different from objecting to any other test you can cheat on. So you test for English skills, and somebody gets a third person to write the essays, smuggles them in, and copies them down. You test for physical fitness, and someone does steroids. You test for blood, and somebody gets a sample of blood that shows whatever they want it to show. You test for criminal background, and somebody uses bribes to erase damning evidence from their record. You test for innocence, and somebody plants (or destroys) evidence.

So objecting to a free will test on the grounds that you could cheat seems unreasonable, given that you can cheat on most of the other tests we humans use.

Although... one might argue that cheating is a sign of free will, given that it's a constant in almost every single test we humans devise to test other humans. You may be on to something here. :p It can be a sign of independent creativity.
Title: Re: Asimovian Thought Experiment
Post by: karajorma on June 03, 2007, 07:27:21 am
Wow, that is fascinating. Suppose I now run the same test on a human (which our society assumes normally has "free will") and he uses his "free will", i.e. decides to fail the test.

... Uhm, wouldn't that mean that I "prove" that he has no free will, even though he has, which contradicts the whole assumption that humans have "free will"?

You remind me of a thought I had earlier in this thread. Imagine the first supposed AI was accused of a crime (say murder). Under those conditions it might actually be in the AI's interest to fail a free will test, so that it could simply be thought of as a defective machine and be "fixed" :)
Title: Re: Asimovian Thought Experiment
Post by: TrashMan on June 03, 2007, 07:35:23 am
Free will is not the same as some randomness or unpredictability (within certain boundaries)...