Author Topic: I, Robot Trailer. No, just no.  (Read 6813 times)


Offline Flipside

  • əp!sd!l£
  • 212
I, Robot Trailer. No, just no.
Yes, it was because of an impasse in even the Zeroth Law that Daneel needed Golan Trevize to make the decision between psychohistory and Galaxia; he had no way of determining the long-term effect of either on Mankind.

 

Offline Mr. Vega

  • Your Node Is Mine
  • 28
  • The ticket to the future is always blank
I, Robot Trailer. No, just no.
Now if they were doing The Caves of Steel, then I would be interested.
Words ought to be a little wild, for they are the assaults of thoughts on the unthinking.
-John Maynard Keynes

 

Offline Su-tehp

  • Devil in the Deep Blue
  • 210
I, Robot Trailer. No, just no.
Flipside misremembered the Three Laws; he said he couldn't recall them quite clearly. But I found them on the Internet and I've quoted them here:

Quote
1st Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2nd Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3rd Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Also, later on, the Zeroth (or 0th) Law was developed, which says that a robot may not, by action or inaction, bring harm to Mankind, even if this contradicts the First, Second, or Third Laws.
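
To make the ordering concrete, here's a rough Python sketch of how the four laws stack up as a strict priority hierarchy. The names and the "pick the least serious violation" rule are my own invention for illustration; Asimov obviously never gives an actual implementation.

Code
# Rough sketch only: the laws as an ordered list of constraints, where a
# candidate action is judged by the highest-priority law it would violate.
# All names here are invented for the example.
LAWS = [
    "Zeroth: do not harm Mankind, by action or inaction",
    "First: do not injure a human, or allow one to come to harm",
    "Second: obey orders given by humans",
    "Third: protect your own existence",
]

def worst_violation(violated_laws):
    """Index of the most serious law violated (0 = Zeroth), or len(LAWS)
    if no law is violated at all."""
    return min(violated_laws) if violated_laws else len(LAWS)

def choose(options):
    """Pick the option whose worst violation is the least serious;
    this is how a lower law yields to the one above it."""
    return max(options, key=lambda opt: worst_violation(opt["violates"]))

# Example: an order (Second Law) that would injure a human (First Law).
options = [
    {"name": "obey the order",   "violates": [1]},  # breaks the First Law
    {"name": "refuse the order", "violates": [2]},  # only breaks the Second Law
]
print(choose(options)["name"])  # -> "refuse the order"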

Karajorma had a question that I want to reply to:

Quote
Originally posted by karajorma
Why would you need the 0th law? Surely the first law covers that. (Can't think of a situation where mankind would come to harm without individual humans coming to harm too).


Karajorma might have been confused because the laws were misquoted, but if he still has the same question now that I've quoted the laws correctly, I want to respond to it.

There's a plausible scenario involving a conflict of the Three Laws of Robotics that I want to point out to all you guys, but especially Karajorma, one that shows that Asimov's "logic circle" of the Three Laws is flawed.

Imagine a family of four: a father, a mother, and two children, a teenage son and daughter. They purchase a domestic robot (programmed with the Three Laws, of course) to help with chores around the house.

Now imagine that there is a military coup that overthrows the democratic government of the nation this family lives in. The father and son decide to get involved in a local resistance cell and begin quietly planning guerrilla attacks on occupying soldiers. The robot hears this and immediately thinks, "My First Law prevents me from letting my masters kill other humans; I'm not permitted to stand by and let other humans get killed, even if they are soldiers of an occupying army."

So, naturally, the robot wants to stop the father and son from killing other humans, but what happens when the robot realizes that the dictatorship the family is living under also constitutes harm to his own human masters? A robot can't allow humans to come to harm, right? But how can one reconcile the fact that in order to prevent harm to a human, it is sometimes necessary to take human life?

If enemy soldiers come to the family's house to arrest the father and son, how can the robot protect his masters if the only way he can keep the family safe is to kill the soldiers? (Non-lethally incapacitating the soldiers is not an option; there are too many of them. Besides, it's easier to kill a group of soldiers than it is to knock them all out.) The robot can't commit any action that would kill the soldiers, but neither can he simply stand by and let the soldiers take away the family to be executed. Following either course of action violates the First Law. This is a classic case of "damned if you do, and damned if you don't."

That is one instance where the First of the Three Laws of Robotics is insufficient.

The addition of the Zeroth Law (a robot may not, by action or inaction, bring harm to Mankind) resolves this quandary.
If a dictatorship is harmful to Mankind (something I think everyone can agree on), then the robot is free to kill enemy soldiers of that dictatorship in order to protect other humans whose continued existence benefits Mankind (namely, freedom-loving and democracy-loving people).
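
To illustrate the deadlock and how the Zeroth Law breaks it, here's a minimal sketch (Python, with invented flags standing in for the robot's judgment of where the harm falls):

Code
# Minimal sketch of the dilemma above; all names and flags are invented.
options = [
    {"name": "kill the soldiers to protect the family",
     "harms_humans": True, "harms_mankind": False},
    {"name": "stand by while the family is taken and executed",
     "harms_humans": True, "harms_mankind": True},
]

# First Law alone: an option is permitted only if no human comes to harm.
print([o["name"] for o in options if not o["harms_humans"]])
# -> []  (deadlock: every available choice is forbidden)

# Zeroth Law first: harm to Mankind is ruled out before the First Law is
# even considered, so the option that spares Mankind becomes acceptable.
print([o["name"] for o in options if not o["harms_mankind"]])
# -> ['kill the soldiers to protect the family']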

Does this answer your question, Karajorma?
REPUBLICANO FACTIO DELENDA EST

Creator of the Devil and the Deep Blue campaign - Current Story Editor of the Exile campaign

"Let my people handle this, we're trained professionals. Well, we're semi-trained, quasi-professionals, at any rate." --Roy Greenhilt,
The Order of the Stick

"Let´s face it, we Freespace players may not be the most sophisticated of gaming freaks, but we do know enough to recognize a heap of steaming crap when it´s right in front of us."
--Su-tehp, while posting on the DatDB internal forum

"The meaning of life is that in the end you always get screwed."
--The Catch 42 Expression, The Lost Fleet: Beyond the Frontier: Steadfast

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
I, Robot Trailer. No, just no.
I get your post and I'd already thought it through.

Take the Second Law. Suppose a robot receives two commands at the same time. Let's say one person says "get me my pipe and slippers" while someone else says "prepare my lunch." Obviously the robot must prioritise those orders, and if there is only enough time to do one of them, the robot must pick one (unless we are saying that robots sit there whirring and clicking every time there is a conflict of orders).

Now let's look at the evil dictatorship you mention. A brutal dictatorship obviously causes a great deal more harm to humans than the taking of individual lives to stop it does. So again the robot is left with a dichotomy in that his actions or inaction will always cause harm. Now, while to us humans this is a deep moral question, to a robot it would be no bigger a logic problem than deciding which task to do in the earlier example. In fact the robot would always go down the logical path that resulted in it helping the resistance (directly or indirectly).
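
To put that in the same terms as my earlier example, here's a rough sketch of the comparison. The harm figures are completely made up; they're only there to show that the choice reduces to picking the smaller total.

Code
# Rough sketch of weighing harm under the First Law alone; the names and
# numbers are invented purely for illustration.
def total_harm(option):
    """Estimated number of humans harmed if this course of action is taken."""
    return option["humans_harmed"]

options = [
    # Helping the resistance costs some soldiers' lives...
    {"name": "help the resistance", "humans_harmed": 10},
    # ...but standing by lets the dictatorship keep harming far more people.
    {"name": "stand by and do nothing", "humans_harmed": 10000},
]

# The robot picks whichever option causes the least harm overall -- no bigger
# a logic problem than choosing between two simultaneous orders.
print(min(options, key=total_harm)["name"])  # -> "help the resistance"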

Now maybe I'm missing something, not having read the original Asimov stories, but I think this is covered by the First Law.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

  

Offline Flipside

  • əp!sd!l£
  • 212
I, Robot Trailer. No, just no.
Because by its actions it is endangering the life of the Dictator, which goes against the same law. Even the taking of one human life was impossible for a robot; even hiring someone to do it was the same as doing it yourself. This was one of the biggest flaws in what I could stomach of Foundation's Fear.
That is why 'Mankind' had to be defined, in order to allow such decisions to be made. Daneel had enhanced his brain almost to the point of the uncertainty principle by this stage, it should be remembered.

Also, a robot caught in the problem above has a simple cure: the ability to question its orders and state that it is impossible to do both at once.