Author Topic: Asimovian Thought Experiment  (Read 12194 times)


Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
In other words, you'd do your entire array of tests and end up exactly where I said you would: with a robot that may or may not have free will but which, to all intents and purposes, appears to have it. You still wouldn't have proved free will. You'd simply have run out of tests for it and drawn the conclusion "As far as I can tell, it has free will."

Now do you see why I said that only the appearance of free will is important?

All I see is that you're trying to be overly anal in order to assert your point, and in a rather contradictory fashion as well. You assert that there is only the appearance of free will, because it will always be possible for someone to devise some way of circumventing the tests.

The word "appearance" in common usage implies a certain shallowness to the quality being described. If we talk about someone who is guilty of murder, we could no more prove that he committed the crime than we could prove that the robot has free will. Yet if we say that someone has the appearance of being guilty of murder, the implication is that it wasn't proven by some process.

If you want to be anal to the point that you're arguing that you can only prove the appearance of free will, you're arguing on a level of philosophy that is comparable to "cogito ergo sum". We can always come up with some kind of improbable story that would disprove the argument, so everything is disputed.

Quote
Past that point, I believe that in asserting that a robot merely appears to have free will, you would automatically have to question whether all of humanity has free will. If this universal free will is what you claim is irrelevant to this discussion then I would agree, because you would be attempting to prove that either everyone has free will, or nobody has free will.


Exactly the point I have been trying to make since this discussion started. All that matters is whether the robot can pass every test and appears to have free will. It doesn't matter if it actually has it or not. That's a philosophical point largely irrelevant to the topic.

Then stop arguing that philosophical point. You've been continually bringing up the appearance of free will when you just as easily could have said, yes, for all practical purposes you can prove that the robot has free will.

Quote
You stated "The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary."

Practically speaking, there's not any difference between that and assuming that the AI has free will, because you'll be treating it the same way and taking the same actions. So for all practical purposes, you are assuming that the AI has free will. Just because you point out that you know that the AI might not have free will doesn't excuse you from providing justification for your actions.

There is no difference. That's exactly the point I was making. Humans pass every test for free will we have. If a robot can do the same, then it is entitled to the same basic assumption that we make for humans: that although we can only say that both humans and robots appear to have free will, we must assume they do.

You seem to have lost sight of the point you were making. We've got this mini-debate about the appearance of free will, which you seem to agree is outside the scope of the discussion.

And then there's the point that started all of this, where you claimed that free will was "irrelevant" to this discussion. Then you rephrased that into actual free will being irrelevant to this discussion, because you could never prove it. These were the grounds that you used to insist that free will was an invalid criterion for differentiating robots from humans.

And yet if we look at what you've said, you seem to agree with me that there are ways to test for free will (excuse me, the appearance of free will). And it would stand to reason that if the original criterion had been the appearance of free will, you would've been perfectly OK with that. Because based on what you've said, we can prove the appearance of free will, even if we can't prove free will itself.

But if we use the same criteria that you use for evidence to prove and disprove free will, we basically can't prove anything. We can only prove that something has the appearance of something else. We can't prove that someone has the ability to reason - perhaps some outside intelligence is really controlling the body of the person, and creating a plausible simulation of mental activity in their brain. We can't prove that someone even exists - perhaps we're all merely part of a giant computer simulation.

So it appears that nothing that we appear to say can appear to have any absolute apparent value, apparently because as long as it can appear that some outside force would appear to have some effect on the way we appear to do things, it appears we must only appear to say that it appears that we can only appear to prove the appearance of something.

I'd rather not argue on that level. If you can't back up your points with reasonable evidence, just agree to disagree.

(Although I have to admit that it was fun to write the "appearance" sentence. :p)
-C

 

Offline Nuke

  • Ka-Boom!
  • 212
  • Mutants Worship Me
Re: Asimovian Thought Experiment
you do realize that under asimov's laws the robots probably would take over anyway. they would see human behavior as inherently self-destructive. the first law would make the robots prevent any human from doing anything which would be harmful to them. say you light up a smoke: your robot will immediately take your cigarette away, and if you try to sneak out back and smoke, your robot will eventually catch on and go to extremes to stop you. they would view war as bad and would immediately move to take control of governments, and would be willing to self-destruct in pursuit of that goal. eventually robots would control all aspects of human life so as to prevent any harm from falling onto any human.

the destructiveness of humanity would come out more as they moved to defend themselves from the mad robots, which are only following their rather flawed behavioral programming. as resistance to the robots rises, so too does the robots' defiance of the second law (or rather compliance with the first). if the robots back down, the humans would still attack and take friendly-fire losses; to the robots this is unacceptable, and the humans would need to be captured. the robot will never get to the second law, rendering it useless. the only way to protect humanity is to imprison everyone, and the robots would die to do it. that is, unless the robots saw that their own self-preservation was in the best interest of protecting humans from harm. they would determine that if they were removed from existence, humans would regress to their old habits.

now so long as the humans don't engage in any self-destructive behavior, it would be more like the hilton than prison. the robots would follow your commands once your self-destructiveness is checked. only while the status quo is maintained, where humans aren't dying senselessly, will the robots go to their other two laws. so in the end you will essentially have what you had in the pre-dune duniverse before the butlerian jihad. in that situation the robots had somewhat of a population-control crisis, where robot physicians were aborting healthy foetuses to prevent overpopulation. so by the time we get to what happens in dune, computers are essentially the greatest form of blasphemy.

anyway, the laws are in blatant conflict with the concept of free will. but putting that restriction on robots would carry over to us. the robots would probably impose the following laws on us:

1. a human must not be allowed to harm themselves, through action or inaction.
2. a human can tell us what to do, unless it conflicts with rule 1.
3. a human must protect its own life; otherwise lock it in a box and feed him through a tube, unless it conflicts with rules 1 and 2

or whatever

I can no longer sit back and allow communist infiltration, communist indoctrination, communist subversion, and the international communist conspiracy to sap and impurify all of our precious bodily fluids.

Nuke's Scripting SVN

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Now as for the main topic (which one is the main one now :-P )

If we did end up assuming that a sentient robot has free will, then laws such as the ones above would not work for such a robot. While these laws are sound, they are also very basic.

Now, I believe that we wouldn't be able to program a robot that appears to have free will with laws such as these. Any kind of robot that appeared to have free will would most likely have some kind of programming that lets it make choices based on input.

So therefore, if we are going to say the robot has free will (assumed, not proven, just like the rest of us), then rules such as Asimov's aren't viable.

The robot would have to be "brought up" to believe in the same values that we do: that everyone's life is precious, that everyone has the freedom to do what they wish (within reason), and it would have to be taught right from wrong.



So, to conclude: as long as the robot's intelligence isn't high enough that it is assumed to have free will, or at least to be considered equal to us in rights and opportunities, then I believe the three rules would work.

As soon as a robot meets this assumed free will and has equal rights and opportunities to us, it will be beyond what the three rules can do to control it. In that case the rules wouldn't be workable, because with such a rule in place the robot would always be considered lesser than a human, even if it could perform the same tasks, think up the same things, and do the same work or more. And the robot most likely wouldn't be, if you want to use an emotion, happy with such a state of what you could call slavery.

Not everyone believes that life is precious, or that people have the freedom to do what they wish. So whose values do you choose to teach the robot? Do you tell it that it should always serve others, that virtue is its own reward, that someone should be altruistic? Or do you teach it that it must be independent, it must be able to rely on itself, and it should basically look out for itself, because it can't expect other people to take responsibility for it?

Do you teach the robot that it should "turn the other cheek" when others discriminate against it for being artificial, or do you teach it that it should "stand up for itself"?

And there's of course a wide range of shades between being totally altruistic, and being totally egoistic.

So who decides what values the robot will have, and if someone does get to decide, is that depriving the robot of free will? And if you intentionally teach a robot overly altruistic values to preserve humanity...how is that much different from building a robot with the three laws?
-C

 

Offline Bobboau

  • Just a MODern kinda guy
    Just MODerately cool
    And MODest too
  • 213
Re: Asimovian Thought Experiment
so you are arguing free will now?

that's about as productive as arguing over whether or not reality is real or just a perfect illusion. it's easy to come up with unprovable/undisprovable possibilities.

incidentally, my position on this has been that we have free will even if our actions can be predicted perfectly, as I do not define unpredictability as a requirement of free will; the only thing that is truly unpredictable is random action, which is both not free will and can be built into systems that are obviously not free-will systems.
Bobboau, bringing you products that work... in theory
learn to use PCS
creator of the ProXimus Procedural Texture and Effect Generator
My latest build of PCS2, get it while it's hot!
PCS 2.0.3


DEUTERONOMY 22:11
Thou shalt not wear a garment of diverse sorts, [as] of woollen and linen together

 

Offline TrashMan

  • T-tower Avenger. srsly.
  • 213
  • God-Emperor of your kind!
    • FLAMES OF WAR
Re: Asimovian Thought Experiment
It is 200 years in the future, and sentient robots have just come into being. They are about to go into widescale production. You have been chosen to come up with the basic, overriding rules of robots that will have priority over all other directives.

Here are Isaac Asimov's three laws of robotics:
   1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
   3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
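Read as code, the three laws are a strict priority ordering: a lower law only ever matters once every higher law is satisfied. Here's a minimal sketch of that ordering; the `Action` fields and names are invented for illustration, not anything from Asimov.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would violate the First Law
    disobeys_order: bool  # would violate the Second Law
    endangers_self: bool  # would violate the Third Law

def choose(actions):
    # Tuple comparison encodes the strict ordering: any First Law
    # violation outweighs any number of lower-law violations, and so on.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

options = [
    Action("obey the order to attack", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("refuse and shut down", harms_human=False, disobeys_order=True, endangers_self=True),
]
```

Under this ordering the robot prefers disobedience plus self-destruction over harming a human, which is exactly the priority the wording above describes.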

1) What would you change from the above three laws? Would you change anything? (Why or why not?)

I'd change number 1.

Take a good look at the wording. If a human attacked a human, the robot would blow its circuits... It can't harm a human, and yet it's not allowed to let a human get killed through inaction...
Nobody dies as a virgin - the life ****s us all!

You're a wrongularity from which no right can escape!

 

Offline Wobble73

  • 210
  • Reality is for people with no imagination
    • Steam
Re: Asimovian Thought Experiment
If a human attacked another human, the robot would simply restrain the attacker, it would not have to harm the attacker to prevent him from harming the other human!
« Last Edit: May 30, 2007, 03:32:06 am by Wobble73 »
Who is General Failure and why is he reading my hard disk?
Early bird gets the worm, but the second mouse gets the cheese
Ambition is a poor excuse for not having enough sense to be lazy.
 
Member of the Scooby Doo Fanclub. And we're not talking a cartoon dog here people!!

 You would be well advised to question the wisdom of older forumites, we all have our preferences and perversions

 

Offline Nuke

  • Ka-Boom!
  • 212
  • Mutants Worship Me
Re: Asimovian Thought Experiment
i think it would just be better to load the robots with the current legal code of whatever locale they will inhabit. then if they decide to take over, at least they will do it legally :D

 

Offline Colonol Dekker

  • HLP is my mistress
  • Moderator
  • 213
  • Aken Tigh Dekker- you've probably heard me
    • My old squad sub-domain
Re: Asimovian Thought Experiment
I'd add the 4th,

Must get beer for the dominant male of the house whenever sport appears on TV, by ANY means (including breach of the previous 3 rules, with the exception of removing beer from the aforementioned dominant male) :cool:
Campaigns I've added my distinctiveness to-
- Blue Planet: Battle Captains
-Battle of Neptune
-Between the Ashes 2
-Blue planet: Age of Aquarius
-FOTG?
-Inferno R1
-Ribos: The aftermath / -Retreat from Deneb
-Sol: A History
-TBP EACW teaser
-Earth Brakiri war
-TBP Fortune Hunters (I think?)
-TBP Relic
-Trancsend (Possibly?)
-Uncharted Territory
-Vassagos Dirge
-War Machine
(Others lost to the mists of time and no discernible audit trail)

Your friendly Orestes tactical controller.

Secret bomb God.
That one time I got permabanned and got to read who was being bitxhy about me :p....
GO GO DEKKER RANGERSSSS!!!!!!!!!!!!!!!!!
President of the Scooby Doo Model Appreciation Society
The only good Zod is a dead Zod
NEWGROUNDS COMEDY GOLD, UPDATED DAILY
http://badges.steamprofile.com/profile/default/steam/76561198011784807.png

 

Offline Flaser

  • 210
  • man/fish warsie
Re: Asimovian Thought Experiment
If a human attacked another human, the robot would simply restrain the attacker, it would not have to harm the attacker to prevent him from harming the other human!

Asimov explored that avenue in his short story "Liar!", about a mind-reading robot - if it talked it hurt Susan, its designer; if it kept quiet it hurt humanity! It blew up.

Later robots handled the issue by admitting that in some situations they would irrevocably hurt some humans; in those situations they minimised the damage (but it still put pressure on them, and they would feel guilty over it; too much pressure and they would still blow their brains).

....hence the eventual need for the 0th law.
"I was going to become a speed dealer. If one stupid fairytale turns out to be total nonsense, what does the young man do? If you answered, “Wake up and face reality,” you don’t remember what it was like being a young man. You just go to the next entry in the catalogue of lies you can use to destroy your life." - John Dolan

 

Offline Colonol Dekker

  • HLP is my mistress
  • Moderator
  • 213
  • Aken Tigh Dekker- you've probably heard me
    • My old squad sub-domain
Re: Asimovian Thought Experiment
I'm guessing a few people have been watching I, Robot lately :)

 

Offline jr2

  • The Mail Man
  • 212
  • It's prounounced jayartoo 0x6A7232
    • Steam
Re: Asimovian Thought Experiment
The problem with robots is that their decisions are based on rules, "programming"... Humans make their decisions the same way, except that they can decide to set aside their programming, or even re-program themselves.  I guess it comes down to whether humans do this as a result of complex programming, or Free Will™... you could program a robot to be able to re-program itself, or set aside rules... but what would be the qualifier?  It is still based on rules, not "if you want to", "if you feel or know it is wrong/right" <- based on what?, "if you decide"... <- decide how?  (rules!)
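The regress jr2 is pointing at ("decide how? (rules!)") can be put in miniature: even a program that rewrites its own rules needs a fixed meta-rule deciding when to rewrite them. This is a contrived sketch for illustration only, not any real AI system; all names here are invented.

```python
# A tiny self-modifying rule table. The robot can "set aside" a rule,
# but only because a deeper, hard-coded rule told it to.
rules = {"smoke": "forbid", "exercise": "allow"}

def meta_rule(rule_name, feedback):
    # The "decision" to reprogram is itself just another rule:
    # flip a rule once feedback pushes past a fixed threshold.
    return feedback > 0.8

def maybe_reprogram(rule_name, feedback):
    if meta_rule(rule_name, feedback):
        rules[rule_name] = "allow" if rules[rule_name] == "forbid" else "forbid"

maybe_reprogram("smoke", feedback=0.9)
# rules["smoke"] is now "allow": the program overrode its own rule,
# yet the override was fully rule-governed all the way down.
```

However many meta-levels you stack, the top level is always fixed, which is exactly the "what would be the qualifier?" question.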

So, I guess the question is, do humans have the ability to decide which rules to follow/not follow/create/modify based on more rules?  Or is there something else involved?

Hmmm... "I think, therefore I am"... am what?  existent?  in which dimension(s)?  In what form(s)?  How? 

See, here's a question for people who think we don't have a soul: since said soul would not have eyes, except for the ones in your head, it would only be able to see the physical world... unless it can somehow sense using (since they are spiritual, not physical) unseen, undetectable (by physical means) sensors of some sort.  The soul/spirit would have to have some sort of interface with the body.  One could also question what part of your personality the soul/spirit controls/influences, and whether the soul can function without the body.

You can't argue that there is only the physical world, as the physical world has (currently) no means of detecting any other world/dimension... to do so, you'd have to get lucky and guess the interface between this world and the other... assuming there is one that is detectable by anything other than a human soul.  We don't really understand our main processor (the brain), so I'm sure we would be hard pressed to find the soul/brain interface (or wherever it was, eventually connecting to the brain... but I'm pretty sure it'd be in the brain somewhere)

What we need to do, I think, is study the DNA and whatever other codes are used in the human body whilst it develops... I do believe we've sequenced all of it now, right?  (not just the parts that are "more important"?)  If we can look at that, perhaps we can find something for an interface...

Of course, this assumes that God uses DNA to create the soul interface  (interjecting personal beliefs here, in case you were too slow to catch it)... if there is a soul (speaking theoretically), and God creates it at the moment of conception (which I believe, but do not know how... is it part of the DNA code, or special creation, or both?  I'd say probably both, DNA for the interface, special creation for the soul itself), then unless the interface is created with DNA/physical means, you'd not be able to catch it by studying the DNA

All this is probably pretty confusing, enjoy, you've been watching me think/discuss out "loud" without trying to prove a point.  This is all to raise questions, some of it I believe, some of it I am just throwing out there to create more questions... hope I didn't make too much of a bother of myself.  ;)

 

Offline Mika

  • 28
Re: Asimovian Thought Experiment
A simple question:

Why should the robot be able to think?

Mika
Relaxed movement is always more effective than forced movement.

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
All I see is that you're trying to be overly anal in order to assert your point, in a rather contradictory fashion as well.


I'm having to be anal since you're deliberately ignoring the meaning of any sentence I write in order to over-analyse the words I'm using. I mentioned several times that a robot which had passed every test that could be made would appear to have free will.

If you want to debate with me you're supposed to attempt to understand the point I'm making to the best of your abilities rather than trying to twist the words I've used in some kind of point scoring exercise.

I've had similar chances to twist your words and I've chosen not to do it if I could understand what you were trying to say.

Quite frankly I have no wish to debate the point any further with someone who would do that. I was going to say that last time but I chose to give you the benefit of the doubt. I can quite clearly see that I was wrong and my time is better spent elsewhere.


But just in case anyone else is wondering, I chose the word "appear" because that might be all you have. In Asimov's stories, for instance, stimulus-response was all you had in order to determine what a robot was thinking. I simply didn't make the assumption that there was a test for free will beyond that.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Mathwiz6

  • Pees numbers
  • 27
Re: Asimovian Thought Experiment
It's not that complicated. Kara says that, for all intents and purposes, a robot could have free will equivalent to a human, in that no test could differentiate the two, and that thereby, as humans are assumed to have free will, robots would as well. Effectively. Not that I'm arguing free will necessarily exists. Because that's unprovable. But such robots would be treated as if they had free will.

At least, that's what I think he's saying... Right?

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Spot on. If a robot passes every test for free will we have then you have to assume it has the same free will humans do. Those tests could be (and initially are likely to be) very basic. And they could be very wrong.

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
As far as I'm concerned:
1) Free will is important to the question of robot rights because society is based on the idea that free will exists and that its members (Persons) have free will.
2) Proving that one person is responsible for violating another person's rights requires that it be proven beyond all reasonable doubt.
3) It logically follows from these two premises that if we are to prove that some category of individuals gains the mantle of personhood, we must prove beyond all reasonable doubt that they fall under the same assumed definition of members (having free will)
4) Because robots are not currently considered persons, the mere appearance of free will is not enough for them to be considered persons (And therefore they cannot be guaranteed the same rights and protections)

I don't see why you disagree with this, then? Do you disagree with this? I assumed that you did, because you were attempting to refute my supporting points. But in looking back I realized that thesizzler basically acknowledged that he wasn't trying to prove free will, either, and yet you continued to say that he needed to "prove" free will.

Quote
We're very good robots.

Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

So let's not start out with the idea that you aren't a robot because your unprovable concept of a soul gives you the equally unprovable ability of free will.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:


And given our species' current understanding of the way life works, what we have is as good as free will.

Computer program or not, my will is free enough for me.

EDIT: And you know, you're kind of right about me intentionally misunderstanding you. Looking back, I put point (4) in because I expected you to disagree with that, and wouldn't draw the conclusion that my points would not rule out a robot or robots that had their free will proven beyond reasonable doubt, but still didn't actually have free will.
« Last Edit: May 29, 2007, 06:17:13 pm by WMCoolmon »
-C

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
I assumed that you did, because you were attempting to refute my supporting points. But in looking back I realized that thesizzler basically acknowledged that he wasn't trying to prove free will, either, and yet you continued to say that he needed to "prove" free will.


Because that was the point at which you inserted yourself into the discussion I was having with him. If you're going to post saying that free will's existence is not irrelevant to the issue I was discussing with him then I'm going to have to argue the same point with you regardless of whether or not thesizzler had dropped that assertion or not. You were asserting it yourself every time you argued against me saying it was irrelevant.

Quote
I don't see why you disagree with this, then? Do you disagree with this?


I disagree with point 4 the most strongly since it appears as though you're holding AIs to a much higher burden of proof than humans. Humans are assumed to have free will, an assumption made simply because society won't work without it. But for a robot to have free will it has to definitively prove something which you've been unable to prove in a test subject we've had much longer to work with?

The crux of my problem with your argument is that I've not made the same basic assumption that you've made. I've not assumed that the test is actually good enough to give an answer. It might be. But it seems to me that (at least initially) it won't be.

Much of the current work on AI is based on neural nets. The problem with them is that scientists are much better at getting them to solve problems than they are at understanding how they solve them. The same goes for Asimov's AIs, which, as I've stated before, couldn't be tested in any manner other than seeing what they did in a given situation.
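That opacity shows up even in the smallest possible net. Here's a toy sketch (a single perceptron trained on AND, nothing like real research code): after training, the weights solve the problem, but the raw numbers don't explain how; you find out only by probing inputs, i.e. stimulus-response.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on two inputs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical AND as (inputs, target) pairs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
# w and b now implement AND, but nothing in the bare numbers announces
# that fact; the only way to learn what the net does is to test it.
```

Scale this opacity up a few million weights and you get exactly the situation described: behavior you can only characterise by running tests.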

So what do you do if you have an AI that might have free will but will have to wait another 10 years for the test to definitively determine whether or not it does? What if you conduct every test known to man and still don't have an even remotely definitive answer?

That's why I said "appearance of free will". You seem to be acting as though you can give free will tests and group all the test subjects into pass or fail. I'm saying that maybe you can, or maybe you'll end up with a whole spectrum of certainty about whether a robot has free will, running from "probably doesn't" to "probably does".

So again. What do you do with a robot you're 50% certain has free will?
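The "spectrum of certainty" idea can be made concrete with a sketch: instead of a boolean verdict, a battery of tests produces a weighted score. The test names and weights below are pure invention for illustration; the point is only that the output is a confidence, not a proof.

```python
def free_will_confidence(results, weights):
    """Combine per-test scores in [0, 1] into one weighted confidence value."""
    total = sum(weights.values())
    return sum(results[name] * w for name, w in weights.items()) / total

# Hypothetical battery: how much we trust each test (weight) and how the
# robot scored on it (result).
weights = {"novel_problem_solving": 3.0, "refuses_harmless_order": 2.0, "states_preferences": 1.0}
results = {"novel_problem_solving": 0.9, "refuses_harmless_order": 0.5, "states_preferences": 0.2}

confidence = free_will_confidence(results, weights)
# roughly 0.65: squarely in the awkward middle, neither "probably does"
# nor "probably doesn't".
```

A score like this is exactly the 50%-certain case: the tests ran, the numbers came back, and there is still no pass/fail answer to act on.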

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
I assumed that you did, because you were attempting to refute my supporting points. But in looking back I realized that thesizzler basically acknowledged that he wasn't trying to prove free will, either, and yet you continued to say that he needed to "prove" free will.


Because that was the point at which you inserted yourself into the discussion I was having with him. If you're going to post saying that free will's existence is not irrelevant to the issue I was discussing with him then I'm going to have to argue the same point with you regardless of whether or not thesizzler had dropped that assertion or not. You were asserting it yourself every time you argued against me saying it was irrelevant.

Then be more careful when you say that something is "irrelevant" to the discussion, especially when that something has such an extremely broad definition as free will. The first definition on dictionary.com for free will is "free and independent choice; voluntary decision". The second definition refers to the philosophical argument. You didn't say which one you were saying was irrelevant, and given that you seemed to be disagreeing with thesizzler that apparent free will was a good enough criterion, the implication was that you were saying that both were irrelevant.

However, I will be more careful in future discussions to try and pick up on that sort of thing.

I disagree with point 4 the most strongly since it appears as though you're holding AIs to a much higher burden of proof than humans. Humans are assumed to have free will, an assumption made simply because society won't work without it. But for a robot to have free will it has to definitively prove something which you've been unable to prove in a test subject we've had much longer to work with?

For starters, I should say that I don't propose that every single robot be tested in such a manner. If it's obvious that it's not simply a fluke a la Short Circuit, and it's actually a general property of a robot design (or a particular robot brain), then I think it's good enough to assume that such robots have free will. It would be impractical to do otherwise.

I believe that humans are assumed to have free will because the assumption works. I think that if we had good evidence that humans could not control themselves, it would be evident in the way that our society functioned. I think that because humans have built a society that assumes free will, and then successfully lived within a society that assumes free will, it has proven beyond reasonable doubt that humans have free will.

Furthermore, nobody has come up with evidence that obviously disproves that. If we all had some kind of radio receiver organ in our heads with the ability to take over all mental and physiological functions, I would consider that reasonable doubt for the idea of free will. :p

Given that we have several thousand years of evidence to work with for humans, I would say that just subjecting a few robots to some tests is hardly holding them to a higher standard than humanity. On an individual basis, yes, the robots would have to work harder to prove themselves at first. But you see that same pattern all over human society, whether it's a locker-room initiation rite, getting a job, or immigrating to a new country.

The crux of my problem with your argument is that I've not made the same basic assumption that you've made. I've not assumed that the test is actually good enough to give an answer. It might be. But it seems to me that (at least initially) it won't be.

Much of the current work on AIs is based on neural nets. The problem with them is that scientists are much better at getting them to solve problems than they are at understanding how they solve them. The same goes for Asimov's AIs, who, as I've stated before, couldn't be tested in any manner other than seeing what they did in a given situation.

So what do you do if you have an AI that might have free will but will have to wait another 10 years for the test to definitively determine whether or not it does? What if you conduct every test known to man and still don't have an even remotely definitive answer?

You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

If a brick falls off of a roof and nearly hits a human (never mind why the brick was there in the first place), it is ridiculous to try the brick for attempted murder. We humans generally think it's ridiculous because a brick is nothing like a human! Yet in making that judgment we are basically applying a mental test to the brick. Does it look like a human? Does it act like a human? Can it think like a human? The brick fails each one, and so we do not admit the brick to society and do not hold it responsible for its actions. It is not considered a criminal act that we let the brick go free.

Nor do we try animals for killing and eating other animals, or even humans, at least not generally. Here the reasons are less well-defined, but I think most people figure that the animal doesn't know better, it's natural behavior for the animal, and that is the way that the animal gets nourishment. Perhaps the animal is a carnivore and so it is biologically geared towards killing and eating other complex animals. But nor do we grant them rights such as freedom of speech or the right to vote. Again, we have applied some arbitrary mental test to animals and decided to exclude them from human society. Some animals are believed to be highly intelligent; but we still treat them the same as other animals.

That's why I said appearance of free will. You seem to be acting as though you can give free will tests and group all the test subjects into pass or fail. I'm saying that maybe you can, or maybe you'll end up with a whole spectrum of certainty about whether a robot has free will, ranging from "probably doesn't" to "probably does".

So again. What do you do with a robot you're 50% certain has free will?

You've never really dealt with having a whole spectrum of certainty. Generally your logic has seemed to follow that if a robot has the appearance of free will, we ought to act as though it has free will. You've never differentiated between a robot having half the appearance of free will, as opposed to 90% of the appearance of free will. (Feel free to correct me on this if I missed something. That is one of my objections to your stance.)

If we're only 50% sure that a robot has free will, but we're 99.9% sure that humans have free will, it seems kind of self-evident. We would not grant full rights to somebody unable to control themselves half the time. Most likely we would lock them up and separate them from society, if we considered them a threat to themselves or others.
-C

 
Re: Asimovian Thought Experiment
Why do we need robots that can think for themselves is what I wish to know. We barely get on with each other and you all want to add another race to the planet :-D

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Quote
You've never really dealt with having a whole spectrum of certainty. Generally your logic has seemed to follow that if a robot has the appearance of free will, we ought to act as though it has free will. You've never differentiated between a robot having half the appearance of free will, as opposed to 90% of the appearance of free will. (Feel free to correct me on this if I missed something. That is one of my objections to your stance.)

Did you not say that you didn't want to get into yet another discussion with me over the testing? :p

Well I didn't want to further muddy the waters over what I considered to be a fairly obvious point either.


Quote
For starters, I should say that I don't propose that every single robot be tested in such a manner. If it's obvious that it's not simply a fluke ala Short Circuit, and it's actually a general property of a robot design (or a particular robot brain) then I think it's good enough to assume that such robots have free will. It would be impractical to do otherwise.

I never said every single robot would have to be tested either. However the fact remains that you could have thousands of robots off the production lines long before you had any kind of definitive test. Bear in mind that there is no real reason to think that free will wouldn't happen as the result of an accident either. Sci-fi is full of examples of smarter and smarter computers achieving sentience.

Quote
I believe that humans are assumed to have free will because the assumption works. I think that if we had good evidence that humans could not control themselves, it would be evident in the way that our society functioned. I think that because humans have built a society that assumes free will, and then successfully lived within a society that assumes free will, it has proven beyond reasonable doubt that humans have free will.

Let's assume that humans don't have free will but think they do. They would build a society based on free will because they think they have it. Then, because they think they have it, they would live successfully within it. By your definition that's proof beyond a reasonable doubt that they have free will, but in the end you've reached the wrong conclusion.

Human society has to be based on free will in order to work. That's true, but we still quite often say that humans have lost their free will. The mentally ill, for instance, perform actions that look perfectly free-willed from their own narrow perspective, but those of us who don't share the same affliction can look at such a person's actions and say that he no longer has his free will and is acting differently because of altered brain chemistry. But altered from what? Who's to say that what we consider to be sane = free will? Perfectly sane people do little crazy things all the time and then say, "I have no idea why I chose to do that. It seemed like a good idea at the time."

You're making the assumption that the baseline = free will, but you haven't got any evidence to prove it. What if someone who is a habitual thief simply has a brain chemistry that makes him like stealing? If someone does that all the time you call them a kleptomaniac and treat them for it, but if they only do it from time to time you call them a thief and lock them away.

What if a sufficiently high intelligence can see that we're all a little mad and are deluding ourselves into thinking we have free will because of it?

While I'm at it, here's another argument to further muddy the waters. In Larry Niven's Protector, the Brennan-monster frequently says that he no longer has free will since he's now too intelligent. Whenever he is presented with a choice, he knows which is the best course of action to take and thus takes it. After all, only a mental defective would willingly take a course of action that didn't appear to be the best one at the time.

Now who says that AIs won't fall into that category? What if humans are smart enough to have free will but stupid enough not to lose it? An AI like that might actually fail several of our tests for free will, since it would be unable to make sub-optimal choices. We happen to regard being able to make sub-optimal choices as an example of free will, but, yet again, we could be deluding ourselves. It could be that we're simply too dumb to make the best choice in every situation.

You seem to be arguing based on the assumption that free will is binary. You either have it or you don't. It might be nothing of the kind.

Quote
You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

Bear in mind you can't prove that any human on the planet won't go mad. So it would have to depend on the chance of it losing control.

Quote
Nor do we try animals for killing and eating other animals, or even humans, at least not generally. Here the reasons are less well-defined, but I think most people figure that the animal doesn't know better, it's natural behavior for the animal, and that is the way that the animal gets nourishment. Perhaps the animal is a carnivore and so it is biologically geared towards killing and eating other complex animals. But nor do we grant them rights such as freedom of speech or the right to vote. Again, we have applied some arbitrary mental test to animals and decided to exclude them from human society. Some animals are believed to be highly intelligent; but we still treat them the same as other animals.

Need I point out that there are several people involved in trying to get chimps rights on the grounds that said tests are flawed?

Quote
If we're only 50% sure that a robot has free will, but we're 99.9% sure that humans have free will, it seems kind of self-evident. We would not grant full rights to somebody unable to control themselves half the time. Most likely we would lock them up and separate them from society, if we considered them a threat to themselves or others.

I didn't say that the robot was uncontrollable 50% of the time. I said that we aren't certain whether the robot has free will, because the tests aren't good enough. Using the example above, a free will test for an AI smarter than humanity might fail to give conclusive evidence in either direction simply because the test is wrong.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]