And before you say that the measure doesn't compromise the phone's security, I would argue that yes, it theoretically does. It might be unlikely, but it would in theory be possible for someone to gain access to the new firmware that allows easier brute force attacks on the PIN code.
Except it doesn't compromise it even theoretically. See this article (already posted on the previous page):
http://arstechnica.com/apple/2016/02/encryption-isnt-at-stake-the-fbi-knows-apple-already-has-the-desired-key/
There's nothing stopping an independent hacker from writing the same thing the FBI wants Apple to write. The entire purpose of Apple's digital signature system is to prevent such unauthorized updates from being applied. The one thing only Apple can do is authorize that particular phone to install the altered firmware. Any modification (needed to make it run on other phones) would invalidate the signature, rendering the software useless.
That doesn't seem internally consistent.
True - a malicious hacker could write the same thing the FBI wants Apple to provide. It is also true that to get it to work, they would have to do the extra work of perfectly spoofing Apple's digital signature system, which is required to authorize the software update.
The former is more likely than the latter - the best chance of spoofing Apple's digital signatures is to hack (or socially engineer) Apple and obtain the required key material.
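To make that binding concrete, here is a toy sketch of the scheme the article describes. All the names are made up, and an HMAC shared secret stands in for Apple's real asymmetric signature (in the real system the device holds only Apple's public key); the point is just that the signature covers both the firmware and one device's unique ID:

```python
import hashlib
import hmac

# Stand-in for Apple's signing key. In reality this is an asymmetric
# private key and devices verify with the public half; an HMAC is used
# here only to keep the sketch self-contained and runnable.
SIGNING_KEY = b"hypothetical-apple-signing-key"

def sign_firmware(firmware: bytes, device_id: bytes) -> bytes:
    # The signature covers the firmware hash AND one device's unique ID
    # (the ECID in Apple's scheme), personalizing the update to one phone.
    payload = hashlib.sha256(firmware).digest() + device_id
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def boot_rom_accepts(firmware: bytes, device_id: bytes, sig: bytes) -> bool:
    # The device recomputes the expected signature for itself; any change
    # to the firmware or any other device ID makes verification fail.
    payload = hashlib.sha256(firmware).digest() + device_id
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

unlock_tool = b"iOS build with PIN retry limits removed"
sig = sign_firmware(unlock_tool, device_id=b"TARGET-PHONE-ECID")

print(boot_rom_accepts(unlock_tool, b"TARGET-PHONE-ECID", sig))        # True
print(boot_rom_accepts(unlock_tool, b"ANY-OTHER-ECID", sig))           # False: wrong device
print(boot_rom_accepts(unlock_tool + b"x", b"TARGET-PHONE-ECID", sig)) # False: altered code
```

Under this model, a leaked signed image only works on the one phone it was personalized for; reusing it elsewhere requires the signing key itself, which is exactly why hacking Apple is the realistic route.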
The same method could, in theory, be used to gain access to the specific software that enables brute forcing through the PIN code layer of security. Or they could hack the FBI or whichever law enforcement organization received the software for its use (their security is probably worse than Apple's, since the FBI doesn't have profit margins to protect). In that case, the malicious hacker doesn't even need to do any work beyond obtaining the software, which they can then use to brute force their way into any phone compatible with the modified OS.
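For a sense of scale, here is a back-of-the-envelope sketch of what "brute forcing through the PIN" means once retry limits and escalating delays are gone. The ~80 ms per attempt is the commonly cited hardware-bound key-derivation time for these devices; real figures vary, so treat these as rough orders of magnitude:

```python
# Worst-case timing for an unthrottled PIN brute force, assuming each
# guess is bounded only by the ~80 ms hardware key-derivation delay.
PER_GUESS_SECONDS = 0.08

for digits in (4, 6, 8):
    guesses = 10 ** digits
    hours = guesses * PER_GUESS_SECONDS / 3600
    print(f"{digits}-digit PIN: {guesses:>11,} guesses, worst case ~{hours:,.1f} h")
```

A four-digit PIN falls in minutes and a six-digit one within a day; only a long alphanumeric passphrase holds up, which is precisely the "hoping for a weak password" scenario.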
And before anyone objects: I fully acknowledge that quite a convoluted chain of "ifs" would have to be fulfilled before the proposed brute-force-enabling version of the OS could be used for any malicious purpose by a third party. But it is theoretically possible.
But, once again, the details of *this particular case* are not really what interest me.
The core issue is that Apple is being asked to undermine their own security solution on their devices. The question is whether the government should be allowed to turn such requests into demands or orders, instead of just politely asking whether Apple would like to help in this matter.
This situation is pretty much analogous to asking for a master key to search someone's house. Very much justified here, I think; the FBI should have the ability to search someone's phone if they have a warrant (which they do).
It isn't anywhere close to analogous, because of the differences between digital security and real-life security like doors and locks.
And, by the way, I fully agree that in this particular case it might well be justified to use any means necessary to open the phone. But the implications of forcing Apple into doing it could be much bigger than is acceptable.
The real question, in my opinion, remains: Should governments be allowed to force IT companies to undermine their own security solutions?
Are they entitled to do so? If yes, to what degree? Would the limit be just making brute force attacks feasible and hoping for a weak password? Or would more complicated demands, like fully developed backdoors, be considered? And in what contexts would these tools be used - regular criminal cases, terrorism investigations, issues of national security? And how would warrantless access, or access by third parties, be prevented?