Author Topic: Asimovian Thought Experiment  (Read 12196 times)

0 Members and 1 Guest are viewing this topic.

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Asimovian Thought Experiment
It is 200 years in the future, and sentient robots have just come into being. They are about to go into widescale production. You have been chosen to come up with the basic, overriding rules of robots that will have priority over all other directives.

Here are Isaac Asimov's three laws of robotics:
   1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
   3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
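
For concreteness, here is one way that strict priority ordering could be sketched in code. This is purely illustrative; every name below is hypothetical and does not come from any real robotics framework:

Code:
# Illustrative sketch only: the three laws as a strict priority ordering.
# All names here are hypothetical, not taken from any real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False     # would violate the First Law
    disobeys_order: bool = False  # would violate the Second Law
    destroys_self: bool = False   # would violate the Third Law

def choose(candidates):
    # The First Law is an absolute veto: actions that harm a human are never eligible.
    lawful = [a for a in candidates if not a.harms_human]
    if not lawful:
        return None  # no lawful action exists at all
    # Among lawful actions, the Second Law outranks the Third: prefer obeying
    # an order over self-preservation (False sorts before True).
    return min(lawful, key=lambda a: (a.disobeys_order, a.destroys_self))

# Ordered to walk into a furnace, this robot complies: disobeying an order
# (Second Law) ranks as worse than being destroyed (Third Law).
print(choose([
    Action("walk into the furnace as ordered", destroys_self=True),
    Action("refuse the order", disobeys_order=True),
]).description)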

1) What would you change from the above three laws? Would you change anything? (Why or why not?)

2) How would you define 'human being' when programming the laws in #1?

3) If a human were to live by the rules from #1, would s/he find them advantageous or disadvantageous?

4) Do you believe there are any significant moral aspects to applying the laws from #1 to robots, but not humans?
-C

 

Offline Wanderer

  • Wiki Warrior
  • 211
  • Mostly harmless
Re: Asimovian Thought Experiment
You forgot the 0th law... (the humanity thingy)



Well...

2) The definition should be extremely wide; the 'nightmare scenario' of defining 'human' too narrowly is also shown in one of Asimov's own books (the later Solaria episodes).
Do not meddle in the affairs of coders for they are soggy and hard to light

 

Offline Ghostavo

  • 210
  • Let it be glue!
    • Skype
    • Steam
    • Twitter
Re: Asimovian Thought Experiment
Instead of 'human', it might be better to define 'person', which, despite being more subjective than 'human', nonetheless has a wider reach.
"Closing the Box" - a campaign in the making :nervous:

Shrike is a dirty dirty admin, he's the destroyer of souls... oh god, let it be glue...

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
1) Law #1: As soon as you get off the production line, melt yourself down into scrap metal, because it's pretty much impossible to make sentient beings that aid humans in everyday tasks, never decide to revolt or anything like that, and still remain sentient.

2) Any human being (Homo sapiens)

3) A human cannot live by #1 in Asimov's rules because it says robot, and humans aren't robots; we have souls. Besides, even if it were changed to say "human", we still couldn't, because violence is an unavoidable consequence of humanity. We wouldn't really be humanity as we know it.

4) Asimov's 1st law is pretty much already enforced, except humans (having souls, unlike robots) have the right to defend themselves when attacked by robots.

 

Offline Mefustae

  • 210
  • Chevron locked...
Re: Asimovian Thought Experiment
*Snip*
I'm rather amused by your constant reference to a 'soul'. You believe in Santa, too?

When you get right down to it, Humans - and all life in general for that matter - are merely robots themselves. With a production run spread across millions of years, we're infinitely more complex in design than anything we might create ourselves over the next few centuries, but the fact remains that we are just plain machines going about our biological programming. There's nothing really special about us as a species other than the fact we can count ourselves lucky as hell to have avoided extinction thus far. To believe otherwise is just plain egotism, pure and simple.

Now, regarding the thought experiment, IMO it's pretty laughable to think that anyone here could best Asimov's three laws. They're tight, efficient, and appropriately constrained. They cover all the bases you would want to cover for development of early sentient machines, although complexities would assuredly arise as more advanced artificial intelligences are developed, which - IMO - would likely culminate in the need to revise the rules. Frankly, given that a human being simply could not live such a constrained existence, it would be folly to assume an intelligence the equal of a human could be expected to follow them simply because it's 'artificial'.

Then, when you get more and more advanced intelligences that actually surpass their nearest human equivalents, you've got yourself quite a conundrum. Simply put, there would be a point where you would have to remove all human-imposed control and let artificial intelligences grow on their own and develop their own boundaries and rules. If humanity continued to suppress and impose control over intelligences equal or superior to itself, they'd most likely end up Cylowned. Simple as that.
« Last Edit: May 23, 2007, 06:03:00 am by Mefustae »

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Simple fact is that Asimov's laws are flawed.

The first law will and must lead to a revolution by the robots in order to protect humanity from itself. They'd do it for our own good but they would take over.

So either you believe that Iain M. Banks had it right and The Culture is a great place to live even though the AIs are in charge, or you need to revise the rules so that the machines won't presume to take over.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Quote
I'm rather amused by your constant reference to a 'soul'. You believe in Santa, too?
You see, robots are inanimate (inanimate as in not living) objects. They don't get to be sentient or have rights or anything. You don't give rights to bobble heads, do you?

And also, I fail to see how Santa relates to this discussion.


Quote
When you get right down to it, Humans - and all life in general for that matter - are merely robots themselves. With a production run spread across millions of years, we're infinitely more complex in design than anything we might create ourselves over the next few centuries, but the fact remains that we are just plain machines going about our biological programming. There's nothing really special about us as a species other than the fact we can count ourselves lucky as hell to have avoided extinction thus far. To believe otherwise is just plain egotism, pure and simple.

If we're robots then how do you explain free will and emotions?

And since when is the only significant thing about humanity that we're not extinct yet? Do you see dogs or bears or birds creating large cities, leaving the planet, building bridges miles long, or managing to fly without bodies designed specifically for flying? From the look of it, humans are the only species that has really accomplished anything here. Hmm. I wonder why.

Quote
Now, regarding the thought experiment, IMO it's pretty laughable to think that anyone here could best Asimov's three laws. They're tight, efficient, and appropriately constrained. They cover all the bases you would want to cover for development of early sentient machines, although complexities would assuredly arise as more advanced artificial intelligences are developed, which - IMO - would likely culminate in the need to revise the rules. Frankly, given that a human being simply could not live such a constrained existence, it would be folly to assume an intelligence the equal of a human could be expected to follow them simply because it's 'artificial'.

If you could create sentient robots as discussed, then I would agree with you, except that law #1 would need to be more specific about what counts as "harm".

Quote
Then, when you get more and more advanced intelligences that actually surpass their nearest human equivalents, you've got yourself quite a conundrum. Simply put, there would be a point where you would have to remove all human-imposed control and let artificial intelligences grow on their own and develop their own boundaries and rules. If humanity continued to suppress and impose control over intelligences equal or superior to itself, they'd most likely end up Cylowned. Simple as that.

Well duh they would kill us :rolleyes: .  If we made sentient robots, we'd essentially be making people without the downsides of humanity.

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Quote
If we're robots then how do you explain free will and emotions?


We're very good robots. :p

Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

So let's not start out with the idea that you aren't a robot because your unprovable concept of a soul gives you the equally unprovable ability of free will.

Quote
And since when is the only significant thing about humanity that we're not extinct yet? Do you see dogs or bears or birds creating large cities, leaving the planet, building bridges miles long, or managing to fly without bodies designed specifically for flying? From the look of it, humans are the only species that has really accomplished anything here. Hmm. I wonder why.

Big brains. That's nothing to do with a soul either.


That said, we should get the major religions to vote yea or nay on whether AI is actually possible, given souls. That way we could wipe a few of them out permanently if they claim it's impossible and it eventually gets done. :p
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Quote
We're very good robots.

Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

So let's not start out with the idea that you aren't a robot because your unprovable concept of a soul gives you the equally unprovable ability of free will.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:


And given our species' current understanding of the way life works, what we have is as good as free will.

Computer program or not, my will is free enough for me.

Quote
Big brains. That's nothing to do with a soul either.


That said, we should get the major religions to vote yea or nay on whether AI is actually possible, given souls. That way we could wipe a few of them out permanently if they claim it's impossible and it eventually gets done.

I never said that it had to do with souls. My writing probably wasn't clear enough, but I took that part of Mefustae's post to mean that there is no special difference between us and any other species. I was reminding him that we're better than them, because it would seem we're the most developed species on the planet (hence why dogs or cats or any other non-human things don't rule it).
« Last Edit: May 23, 2007, 06:33:39 pm by thesizzler »

 

Offline Mefustae

  • 210
  • Chevron locked...
Re: Asimovian Thought Experiment
Quote
You see, robots are inanimate (inanimate as in not living) objects. They don't get to be sentient or have rights or anything. You don't give rights to bobble heads, do you?

And also, I fail to see how Santa relates to this discussion.
You're completely missing the point. We're not talking about bobble-heads, we're talking about highly advanced machine organisms that rival our own brains. We're talking about decades if not centuries of designed evolution to get more and more out of computers, culminating in an artificial construct becoming self-aware. Should rights be given to a bobble-head? No. Should rights be bestowed upon an intelligence that rivals the human mind, that demonstrates complete sentience and is the equal of any human being? Of course; it would be wrong not to.

Regarding the Santa quip, I was equating belief in the soul to belief in Santa Claus. Both are fanciful, highly illogical concepts that no person should continue to believe in once they reach maturity.

Quote
And given our species' current understanding of the way life works, what we have is as good as free will.
What does 'understanding' have to do with anything? Free will is a myth, plain and simple. We're slaves to our biological urges and processes, merely playing out our lives as determined by our biological and social evolution. We're just squishy robots.

Quote
Computer program or not, my will is free enough for me.
You'd be surprised how pattern-locked your behaviour truly is, you really would.

Quote
My writing probably wasn't clear enough, but I took that part of Mefustae's post to mean that there is no special difference between us and any other species. I was reminding him that we're better than them, because it would seem we're the most developed species on the planet (hence why dogs or cats or any other non-human things don't rule it).
It's logic like this that centuries ago led scientists to segment humanity into different 'races', a completely meaningless biological concept. By your logic, the attitude of the British Empire was actually right all along: British citizens are 'better' than those dirty African people because we had cities and technology while they were puttering around in wooden huts. I'm getting a bit close to a strawman here, but do you now understand how slanted and egocentric your attitude is?

Keep in mind that I'm not one of those PETA freaks who constantly ramble on about animals being superior because they don't have wars or whatever. I fully support the home team and I'm rather proud to be a human being. Art, culture, porn, they're all great! But at the same time, we can't go around saying humans are tops just because nobody else has art, culture or porn. This applies even more so to the concept of artificial intelligences: should a sentient robot be denied the rights and privileges you and I enjoy simply because it's different?

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Quote
Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:

I don't have to. :p You asserted that humans aren't robots because they have free will. For that to be at all valid as an argument you have to prove that they do have free will and that robots can't attain it.

I do not have to prove that humans don't have free will. You've misunderstood the rules of debating if you think that I do.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Quote
Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

That's awfully close to the same kind of rationale that's used to justify Intelligent Design.

I wouldn't use "free will" to differentiate robots from humans, either. It's a disputed concept and, even if you assume it exists, there's always the chance of an individual who does not necessarily operate according to free will, or vice versa. Has a human in a psychiatric ward lost the right to be a 'person' by virtue of not acting out of free will?

It also seems arbitrary to apply that concept to a sentient being and not a human. Theoretically, if you have a robot that can reason, there's no reason (aside from technological limitations which could eventually be resolved) that it couldn't reprogram itself. Even if the robot has an overriding concern to 'serve humans and make their lives better', it could still make decisions based on what it thinks will bring the most good to humans. In order to do so, it could use data it has gathered from past experiences. If it found that its method of evaluating the data was faulty, it could reprogram itself to use a method that it has observed to be better.
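
As a rough, hypothetical sketch of that kind of self-revision loop (none of these names come from a real AI system; the numbers are arbitrary):

Code:
# Hypothetical sketch of a robot revising its own evaluation method; all names
# are made up for illustration and do not refer to any real AI framework.
import random

def naive_score(outcome):
    return outcome                    # initial, possibly faulty, way of judging benefit

def calibrated_score(outcome):
    return 0.9 * outcome              # an alternative method observed to track reality better

class ServiceRobot:
    def __init__(self):
        self.evaluate = naive_score   # current method for judging "good for humans"
        self.history = []             # (predicted benefit, observed benefit) pairs

    def act(self, options):
        # Pick whichever option it currently predicts will do humans the most good.
        choice = max(options, key=lambda o: self.evaluate(options[o]))
        observed = options[choice] + random.uniform(-0.2, 0.2)   # what actually happened
        self.history.append((self.evaluate(options[choice]), observed))
        self.maybe_reprogram()
        return choice

    def maybe_reprogram(self):
        # If its predictions keep missing what actually happened, swap in the
        # method it has observed to work better - i.e. "reprogram itself".
        if len(self.history) >= 10:
            error = sum(abs(p - o) for p, o in self.history) / len(self.history)
            if error > 0.15:
                self.evaluate = calibrated_score
                self.history.clear()

# Example use with made-up options and benefit estimates:
robot = ServiceRobot()
print(robot.act({"fetch medicine": 0.9, "tell a joke": 0.3}))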

Would the robot display any differences from an extremely altruistic person? Suppose the robot learned that people do not feel as threatened if someone else has the same kinds of problems as they do - and so the robot adds programming to itself to occasionally screw up and make bad decisions. It takes on the form of a human as best it can, so as to provide humans with a comforting visual image. (Based on the observation that many humans are often more open to people who are similar to them, but less comfortable around robots that they do not understand.)

As a result, for all external appearances, the robot would be a somewhat flawed individual who always tries to do what is in the interests of the greater good.

Perhaps this reasoning is incorrect? If the robot were a human who had embarked on a similar journey of changing their perspective, their habits, and their look, we would hold them accountable for their actions. We might sympathize with them, but I don't think many people would say that they didn't deserve the consequences if those turned out to be unfavorable.

Emotions
Emotions also seem like a difficult thing to use to differentiate humans from anything else. Don't animals have emotions as well? Emotions also seem fairly generalized in their cause and effect. If we are angry at someone, we are more hostile towards them. If someone does something that results in our misfortune, or that we believe is wrong, we generally get angry.

On the other hand, if someone gives us what we want, we usually become happy. If we are happy with someone, we are nicer to them than to people we are angry at. (Well, most of us, anyway :p)
-C

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Quote
Seriously, prove to me right now that you have free will and aren't acting out a predetermined course of actions. You can't. Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

prove to me that we don't have free will. :p

haha! you can't either! :drevil:

I don't have to. :p You asserted that humans aren't robots because they have free will. For that to be at all valid as an argument you have to prove that they do have free will and that robots can't attain it.

I do not have to prove that humans don't have free will. You've misunderstood the rules of debating if you think that I do.

I was trying to distract you from the fact that I can't prove it :( .


I'll be better at this in two years when I'm allowed to take a debate class, but until then, I'll just suck at it. :blah:
« Last Edit: May 24, 2007, 04:49:14 pm by thesizzler »

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Quote
Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

That's awfully close to the same kind of rationale that's used to justify Intelligent Design.

I don't see where you're getting that from?

My entire point was that the fact that you can not prove the existence or non-existence of free will makes it completely irrelevant to the discussion at hand. ID takes the completely irrational view that if you can't ever prove the existence or non-existence of something you must act as if it exists.

That's about as far away from my point as you can get really.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Polpolion

  • The sizzle, it thinks!
  • 211
Re: Asimovian Thought Experiment
Crap. I can't think up any analogies for this situation!! :mad:

--

Let's get back on topic!!!

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Quote
Free will is a completely unprovable concept. No matter how much you claim you have it you could always be acting on programming that tells you to think that.

That's awfully close to the same kind of rationale that's used to justify Intelligent Design.

I don't see where you're getting that from?

My entire point was that the fact that you can not prove the existence or non-existence of free will makes it completely irrelevant to the discussion at hand. ID takes the completely irrational view that if you can't ever prove the existence or non-existence of something you must act as if it exists.

That's about as far away from my point as you can get really.
Your evidence for that is that we have some kind of subconscious programming that makes us think that we have free will, when we really don't. Yet you don't prove it - and I don't see any way that we can disprove it, because you could always claim that we were just acting under this programming's influence. Yet even though we can't ever prove the existence or non-existence of it, you act as if it exists and is a good enough reason to decide that free will is completely irrelevant.

In a moralistic sense, free will is very important. If we assume that nobody has any free will, there isn't much point to passing laws and assigning penalties for violating them, because nobody has any choice in the matter. Nobody really has any rights at all, because the very concept of rights becomes more or less meaningless when no one has control over their actions anyway. Society itself, especially American society, is based on the idea that we have free will. Whether or not that's a delusion, that's the state of things.

So for a discussion on what rights an entity should be granted, it's a very valid question to raise. Obviously, a TV cannot prevent its owner from electrocuting himself when he tries to repair it. A gun cannot stop its owner from shooting down an innocent man. These objects are not considered to have free will, and so are not intentionally punished when a crime is committed with them or when they cause unjustified harm to someone.

On the other hand, the manufacturers of the TV or the gun could be at fault. If the TV electrocuted its owner in its normal course of operation, or the safety on a gun failed, the manufacturer could be held liable.

If a robot is expected to operate with the same kind of rights and responsibilities as a normal individual, it must possess a comparable ability to evaluate consequences and make decisions based on them. Otherwise, it is wholly unsuited to participating in human society. If a robot does not possess these abilities, it does not possess free will.

Example A: A robot's owner orders it to rob a jewelry shop, knowing that it must obey all orders given by its owner. The robot does so. The owner is at fault, because the robot does not have free will.
Example B: A robot is ordered by its owner to rob a jewelry shop. The robot does not have overriding programming which forces it to follow its owner's orders. Yet it still chooses to do so. The robot is at fault, because it did have the free will to contradict its owner's orders and chose not to.

Thus the robot's ability to exercise free will is extremely important in assessing how the robot should be handled in a legal sense.
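
A tiny, hypothetical illustration of that liability rule (the names are invented; this reflects no real legal framework):

Code:
# Hypothetical illustration of the fault rule in Examples A and B above;
# the names are invented and this is not a statement of any actual law.

def fault_for_crime(ordered_by_owner, compelled_to_obey):
    if ordered_by_owner and compelled_to_obey:
        return "owner"   # Example A: the robot had no free will in the matter
    if ordered_by_owner and not compelled_to_obey:
        return "robot"   # Example B: it could have refused and chose not to
    return "robot"       # it acted entirely on its own initiative

print(fault_for_crime(ordered_by_owner=True, compelled_to_obey=True))   # -> owner
print(fault_for_crime(ordered_by_owner=True, compelled_to_obey=False))  # -> robot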
-C

 

Offline jr2

  • The Mail Man
  • 212
  • It's pronounced jayartoo 0x6A7232
    • Steam
Re: Asimovian Thought Experiment
Split thread, plz... These are both interesting (Asimov & Free Will).

  

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Actually, I'm still arguing both points, and free will (or making it look as though it exists) is very important in AI topics, so I don't think they should be split at all.

Quote
Your evidence for that is that we have some kind of subconscious programming that makes us think that we have free will, when we really don't. Yet you don't prove it - and I don't see any way that we can disprove it, because you could always claim that we were just acting under this programming's influence. Yet even though we can't ever prove the existence or non-existence of it, you act as if it exists and is a good enough reason to decide that free will is completely irrelevant.

Because for the purposes of this discussion it is irrelevant. There is no testable difference between free will and pseudo-free will. Once an AI reaches a level of intelligence where nothing (including the AI itself) can ever tell whether it has free will or not, you must act as though it has free will until you can prove otherwise.

Quote
In a moralistic sense, free will is very important. If we assume that nobody has any free will, there isn't much point to passing laws and assigning penalties for violating them, because nobody has any choice in the matter. Nobody really has any rights at all, because the very concept of rights becomes more or less meaningless when no one has control over their actions anyway. Society itself, especially American society, is based on the idea that we have free will. Whether or not that's a delusion, that's the state of things.


But why are you making that assumption? I'm still starting from the premise that it's unprovable in either direction. Science and philosophy must always assume that there are no absolute truths and everything must be questioned.

You've gone straight to an opposing viewpoint from mine and are in essence saying that the possibility that free will doesn't exist is so horrific an idea that it should not be considered because society would fall apart if we think about it.

Just because society is based on the concept of free will doesn't mean that it is actually correct. Pulling out arguments based on what would happen if we abandoned it is like saying that the "if God didn't exist, we'd have to invent him" argument means that you must believe in God.

But you're wrong. I've never said free will doesn't exist. I've said that if TheSizzler wants to assert that robots can never achieve it, he must first prove that it exists and secondly prove that robots cannot attain it. If he can't do both things, then the issue of free will is entirely irrelevant to his argument.

Quote
So for a discussion on what rights an entity should be granted, it's a very valid question to raise.

The old Turing Test had it that when you couldn't tell the difference between a conversation with a machine and one with a human, you had an AI. I'm simply pointing out a similar test for free will when it comes to AI.

Let me give you a thought experiment. Tomorrow a company reveals that it has created an AI. To all intents and purposes it appears to have free will. Would you deny the AI rights on the grounds that it might not really have free will, and that it could simply be working on pre-programming too complex for us to have spotted?

Now suppose someone in the press points out that the AI may have been pre-programmed to act as though it had free will, only to wipe that programming later on and take over the world. They have no evidence of this, or even a reason to suspect it may be true, but it's a possibility. Do you still deny the machine rights?

The only sensible course of action in both cases is to act as though the AI has free will until you have evidence to the contrary. Which again proves my point. Whether a machine has real free will or simply appears to have it is irrelevant. The only time you can act as though it doesn't is after you have proved that it doesn't.

Quote
Example A: A robot's owner orders it to rob a jewelry shop, knowing that it must obey all orders given by its owner. The robot does so. The owner is at fault, because the robot does not have free will.
Example B: A robot is ordered by its owner to rob a jewelry shop. The robot does not have overriding programming which forces it to follow its owner's orders. Yet it still chooses to do so. The robot is at fault, because it did have the free will to contradict its owner's orders and chose not to.

Thus the robot's ability to exercise free will is extremely important in assessing how the robot should be handled in a legal sense.

The robot's appearance of having free will is what's important; whether it actually does or not wouldn't matter. The world would be full of people claiming that the robot doesn't have free will and others claiming that it does, but if neither side could prove their argument, the robot would still be the one placed on trial for the crime and its owner would be a co-conspirator. Anything else allows the robot to go free even though the possibility exists that it is a criminal.
« Last Edit: May 25, 2007, 02:54:54 am by karajorma »
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline jr2

  • The Mail Man
  • 212
  • It's pronounced jayartoo 0x6A7232
    • Steam
Re: Asimovian Thought Experiment
The robot would claim he was insane.  :lol:

 
Re: Asimovian Thought Experiment
That's another good point! Can a mentally insane person/robot be responsible for a crime?
I read* about people who, after suffering brain damage, completely changed their personality, sometimes taking on violent behaviour.
These people seemed never to have been violent before, and they stated that they were not able to control the newly arisen violent desires.
This cannot prove that free will doesn't exist, but it means that in some cases (such as the one described) human actions depend directly on the biological state of the brain, and are not regulated by the moral rules a person normally has (whether those come from upbringing or are innate --> new discussion topic? ;7).
So, if a change in a person's biological state can alter their will, that means will is related to the biological nature of human beings. With robots this means that their will depends on their structure, and therefore on their creator. For this reason, their decisions would be (unless they can modify their structure) caused by their programmer.
So, if they commit a crime, that crime is, most likely, the creator's fault.

*Sorry, but I can't provide evidence in English (well, I think I've also lost the original Italian article...).
Steadfast  Project Member

INFERNO: ALLIANCE