I also picked up from Slashdot yesterday a pointer to this opinion piece on computer security. OK, it's more like a rant, and it's a rant that I don't entirely agree with. The author is naming his six pet peeves about the way that people implement computer security.
Let's take this point by point.
1. Default Permit. This is absolutely spot-on. This has its roots in academic UNIX, and has held on for far too long to everyone's detriment. It wasn't a big deal until the mid-'90s when the Internet boomed and all the machines got linked up together. Then everything went kablooey.
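The contrast is easy to see in miniature. Here's a hypothetical sketch (the port numbers and rule sets are purely illustrative, not from any real firewall) of the two postures:

```python
# Illustrative contrast between Default Permit and Default Deny.
# The ports and rule sets are made up for the example.

ALLOWED_PORTS = {22, 80, 443}  # services someone explicitly approved

def default_permit(port, blocked):
    # Default Permit: allow everything except a known-bad list.
    # A new, unanticipated service slips through automatically.
    return port not in blocked

def default_deny(port, allowed=ALLOWED_PORTS):
    # Default Deny: allow only what was explicitly approved.
    # Anything unanticipated is refused until someone vouches for it.
    return port in allowed

# A newly exposed service shows up on port 31337:
print(default_permit(31337, blocked={23, 135}))  # True  -- slips through
print(default_deny(31337))                       # False -- refused
```

The Default Permit side has to predict every future bad thing to stay safe; the Default Deny side only has to know its own legitimate traffic.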
2. Enumerating Badness. This is an interesting point, but I think the point is much more general about software engineering. Good designers think about simplicity -- whether it's a user interface, or in the case that the author brings up, things that might connect to a computer. Focusing on the design with the smallest number of cases, and thinking to the future about how many cases there might be then, is critical. But the case he gives isn't as much "enumerating badness" as "enumerating attackers." Badness can take lots of forms. In one sense, it's "enumerating vulnerabilities" vs. "enumerating exploiters," and while there's an upper bound on vulnerabilities, there's no upper bound on exploiters -- thus a signature-based approach to detecting exploiters is less than optimal.
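That asymmetry can be sketched in a few lines. Assume (these lists are invented for illustration) a signature scanner that enumerates badness and an allowlist that enumerates goodness:

```python
# Illustrative sketch of the asymmetry: the "good" set is finite and
# knowable; the "bad" set grows without bound. All names are made up.

KNOWN_GOOD = {"winword.exe", "outlook.exe", "firefox.exe"}   # bounded
KNOWN_BAD = {"virus_a.exe", "worm_b.exe"}                    # unbounded in practice

def signature_scan(program):
    # Enumerating badness: flags only what someone has already seen.
    # Returning False means "no signature match -- allowed to run."
    return program in KNOWN_BAD

def allowlist_check(program):
    # Enumerating goodness: anything not approved is suspect.
    # Returning False means "not approved -- blocked."
    return program in KNOWN_GOOD

novel_malware = "zero_day.exe"
print(signature_scan(novel_malware))   # False -- no signature, so it runs
print(allowlist_check(novel_malware))  # False -- not approved, so it's blocked
```

The scanner has to chase an open-ended list of exploiters; the allowlist only has to describe the machine's own legitimate software.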
3. Penetrate and Patch. OK, I was thinking about making this point on #2, but I'll make it here. There is nothing inherently wrong with Penetrate and Patch. What's wrong is when it's your only method for eliminating vulnerabilities. The author is falling prey to what I call "The Tyranny of the One Thing." My grandmother referred to it as "putting all your eggs in one basket." Anyone who relies on just one thing for their security is doomed to failure. Smart people take advantage of every practical method they can to enhance their security, and in fact aim for a diversity of methods. Microsoft uses Penetrate and Patch techniques; they also build world-class source-code analysis tools, do threat modeling, and do extensive code reviews -- among other things.
4. Hacking is Cool. Agree in part and disagree in part. I agree that lionizing hackers is counter-productive. I disagree on whether professional developers should study hacking techniques. If it helps one to get insights on how to design hack-proof systems, it can only be a good thing. I think the author is confusing concepts with specific vulnerabilities.
5. Educating Users. See "Tyranny of the One Thing" above. We should absolutely educate users. In fact, the author contradicts himself: the next generation of users who come in with a healthy skepticism about phishing, etc., will have it because they were educated about it. But the bad guys will continue to adjust too. We need many weapons, and education is absolutely one of them.
6. Action is Better than Inaction. What he's really saying is that security-conscious people are naturally in the Early Majority (in Moore's terms) -- they are not early adopters; they let others work out the bugs. Sometimes there's a good business reason to be an early adopter -- say, when competition is guaranteed to put you out of business unless you take a business risk and find a sustainable advantage. But it is a risk, including security risk, that must be calculated and managed. I do like his comment that "it's easier not to do something dumb than it is to do something smart."
As for his "minor dumbs," most of them are once again applications of the Tyranny of the One Thing. Never pass up a chance to do something to enhance your security, particularly if it increases the diversity of your efforts -- the only caveat being that you need to manage the complexity down, otherwise you will not be able to administer your own system. It's a hard tradeoff to make well.
"Let's go production with it now and secure it later" -- this is probably the biggest dumb thing on his entire list. This one drives me crazy.
And finally: "we can't stop the occasional problem." The author says "Yes you can."
No, you can't. All software has bugs. You can work very, very hard to reduce the bugs, to stay patched, to configure correctly, etc. etc. and you will STILL have problems with bugs, with DDOS attacks, with a rogue employee, and myriad other problems. But part of having a good and thorough security system is having the procedures in place to mitigate and recover from those situations as well.
Security is hard, and it requires you to be smart -- especially if you intend to live on the bleeding edge. Redundancy and diversity are your friends. Overreaction isn't.
9:47:03 PM