Flipside misremembered the Three Laws; he said he couldn't recall them quite clearly. But I found them on the Internet and I've quoted them here:
1st Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2nd Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3rd Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
And also, later on, the Zeroth or 0th Law was developed, which says that:
A robot may not, by action or inaction, bring harm to Mankind, even if this contradicts the First, Second, or Third Laws.
Karajorma had a question that I want to reply to:
Originally posted by karajorma
Why would you need the 0th law? Surely the first law covers that. (Can't think of a situation where mankind would come to harm without individual humans coming to harm too.)
Karajorma might have been confused because the laws were misquoted earlier, but if he understands the laws as I've written them down now and still has the same question, I want to respond to that.
There's a plausible scenario involving a conflict of the Three Laws of Robotics that I want to point out to all of you, but especially to Karajorma, because it shows that Asimov's "logic circle" of the Three Laws is flawed.
Imagine a family of four: a father, a mother, and two children, a teenage son and daughter. They purchase a domestic robot (programmed with the Three Laws, of course) to help with chores around the house.
Now imagine that there is a military coup that overthrows the democratic government of the nation this family lives in. The father and son decide to get involved in a local resistance cell and begin quietly planning guerrilla attacks on occupying soldiers. The robot hears this and immediately thinks, "My First Law prevents me from letting my masters kill other humans; I'm not permitted to stand by and let other humans get killed, even if they are soldiers of an occupying army."
So, naturally, the robot wants to stop the father and son from killing other humans, but what happens when the robot realizes that the dictatorship the family is living under also constitutes harm to his own human masters? A robot can't allow humans to come to harm, right? But how do you reconcile that with the fact that, in order to prevent harm to one human, it is sometimes necessary to take another human's life?
If enemy soldiers come to the family's house to arrest the father and son, how can the robot protect his masters if the only way he can keep the family safe is to kill the soldiers? (Non-lethally incapacitating the soldiers is not an option; there are too many of them. Besides, it's easier to kill a group of soldiers than it is to knock them all out.) The robot can't commit any action that would kill the soldiers, but neither can he simply stand by and let the soldiers take away the family to be executed. Following either course of action violates the First Law. This is a classic case of "damned if you do, and damned if you don't."
That is one instance where the First of the Three Laws of Robotics is insufficient.
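If it helps to see the deadlock spelled out, here's a rough Python sketch I put together myself (nothing like this appears in Asimov's stories, and the action names and flags are made up purely for illustration). It treats the First Law as a hard constraint on the robot's two available actions and shows that neither one is permitted:

def violates_first_law(action):
    # First Law: a robot may not injure a human being or, through inaction,
    # allow a human being to come to harm.
    return action["injures_humans"] or action["allows_harm_by_inaction"]

# The only two courses of action open to the robot when the soldiers arrive.
candidate_actions = [
    {"name": "kill the soldiers to save the family",
     "injures_humans": True,            # the soldiers are human beings
     "allows_harm_by_inaction": False},
    {"name": "stand by while the family is taken away",
     "injures_humans": False,
     "allows_harm_by_inaction": True},  # the family will be executed
]

permitted = [a["name"] for a in candidate_actions if not violates_first_law(a)]
print(permitted)  # prints [] -- every available action violates the First Law

The empty list is the "damned if you do, damned if you don't" situation: the First Law alone leaves the robot with no permitted action at all.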
The addition of the Zeroth Law (a robot may not, by action or inaction, bring harm to Mankind) helps resolve this quandary.
If a dictatorship is harmful to Mankind (something I think everyone can agree on), then the robot is free to kill enemy soldiers of that dictatorship in order to protect other humans whose continued existence benefits Mankind (namely, freedom-loving and democracy-loving people).
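Continuing my sketch from above (again, just my own illustration with made-up flags), putting the Zeroth Law ahead of the First in the check is what breaks the deadlock: harm or benefit to Mankind as a whole gets evaluated before harm to individual humans.

def permitted_with_zeroth_law(action):
    if action["harms_mankind"]:
        return False  # the Zeroth Law forbids it outright
    if action["protects_mankind"]:
        return True   # the Zeroth Law overrides the First
    # Otherwise fall back to the First Law as before.
    return not (action["injures_humans"] or action["allows_harm_by_inaction"])

candidate_actions = [
    {"name": "kill the soldiers to save the family",
     "injures_humans": True, "allows_harm_by_inaction": False,
     "harms_mankind": False, "protects_mankind": True},   # resists the dictatorship
    {"name": "stand by while the family is taken away",
     "injures_humans": False, "allows_harm_by_inaction": True,
     "harms_mankind": True, "protects_mankind": False},   # lets the dictatorship keep harming Mankind
]

print([a["name"] for a in candidate_actions if permitted_with_zeroth_law(a)])
# prints ['kill the soldiers to save the family']

With the Zeroth Law ranked on top, exactly one action is permitted instead of none.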
Does this answer your question, Karajorma?