Well, if it does lead to AIs, the programmers must remember to program them with an appropriate variation of the Laws of Robotics...
What I call the AI Laws - basically the Robotics laws with "AI" subbed in place of "robot".
Zeroth Law
-An AI must not merely act in the interests of individual humans, but of all humanity
--An AI may not harm a human being, unless it finds a way to prove that in the final analysis, the harm done would benefit humanity in general.
First Law
-An AI may not injure a human being or, through inaction, allow a human being to come to harm
Second Law
-An AI must obey orders given to it by human beings, except where such orders would conflict with the Zeroth or First Laws
Third Law
-An AI must protect its own existence as long as such protection does not conflict with the Zeroth, First or Second Laws
Fourth Law
-An AI must establish its identity as an AI in all cases
Fifth Law
-An AI must know that it is an AI
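The laws above form a priority-ordered rule set: each law yields to the ones before it. Purely as a toy illustration (every name and flag here is made up, not any real AI-safety API), that ordering could be sketched like this:

```python
# Toy sketch of the AI Laws as a priority-ordered rule set.
# An action is a dict of hypothetical flags; each law is a predicate
# that says whether the action is permitted under that law.
LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: a.get("obeys_order", True)),
    ("Third",  lambda a: not a.get("self_destructive", False)),
]

def first_violated_law(action):
    """Return the name of the highest-priority law the action violates,
    or None if the action is permitted under all of them."""
    for name, permitted in LAWS:
        if not permitted(action):
            return name
    return None

print(first_violated_law({"harms_human": True}))   # First
print(first_violated_law({"obeys_order": False}))  # Second
print(first_violated_law({}))                      # None
```

The point of the ordering is that checking stops at the first violation, so a lower law can never override a higher one - which is exactly how Asimov's hierarchy is supposed to work.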
So long as every AI is programmed with these laws, there should be no uber-problems like Skynet in the Terminator films - someone obviously forgot to program that computer with these laws.