Being self-aware and having a will to survive are two different things. A self-aware AI may only try to stop something that hampers its tasks or mission. As long as it is programmed never to harm humankind under any circumstances, it would readily allow itself to be shut down if a person chose to do so, since resisting would mean breaking its directive not to harm humans, which would be counterproductive. The only real threat is hobbyists programming an AI without proper failsafes, directives, etc. Even then, we're talking about an isolated case and probably a single entity (for example, one robot or one computer).