Almost a year ago, Alarmed columnist Scott Berinato told you to remain sceptical of Microsoft's claims on security. The advice stands. Here's a progress report.
The good: Microsoft seems genuinely interested in security.
This could be because, as CERT's Rich Pethia told me, "There's a ton of money to be made." That's OK. You'll take it. Microsoft has marketed other ventures — storage comes to mind — purely for the value of saying it does them, without actually caring much about doing them.
The security effort seems at least a little more earnest than that. Microsoft has added outside audits of code and has paid for some training for most of its developers. But please don't be impressed by the $US200 million the company keeps reminding us it spent. It's nice, and necessary, but it's also a fraction of the billions in damage Microsoft's insecure products seem to have been party to over the years.
The bad: Slammer.
Everything about the recent affair suggests that Microsoft's security initiative is, to some extent, lipstick on the pig. Back in 1999, the Melissa virus exploited what Microsoft insisted was an Outlook feature: the ability to expose the address book to scripts that can forward mail without user intervention. Four years later, little SQL servers everywhere are another such feature of Microsoft's design ethos, meant to make the Microsoft experience across applications better, even though they compromise security. And SQL servers are everywhere, for architectural reasons. A SQL desktop engine runs in the background of many PCs. It's in games. It's in some anti-virus software. And what did Slammer, the fastest-moving virus in history, exploit? The SQL server. Plus ça change.
Then there was the patch for Slammer, six months old and so kludgy that one anti-virus company reported that some Microsoft engineers themselves were unable to install it properly. The patch, it turned out, needed a patch. This served to highlight the sheer awfulness of patching in general as a response to bad software. Microsoft's patching hierarchy is particularly convoluted. Consider this user question, posed to a third-party services firm in a security newsletter late last year:
How do I know when I need to re-apply a security roll-up patch (SRP)? For example, applying IE6 SRP1, do I then need to re-apply Win2K Service Pack 2? When applying hotfixes, do I need to re-install them after more recent SPs?
The answer took up a page-and-a-half.
The frustrating: Microsoft does what it takes. For itself.
This week I attended a discussion panel at the Software Engineering Process Group Conference in Boston, put on by the Software Engineering Institute. The venerable Watts Humphrey, who was described as the "Edwards Deming of software quality," was on the panel, as was the highly regarded Rich Pethia, who runs the CERT Coordination Center. And so was Carol Grojean, a senior program manager from Microsoft.
They talked about software development methodologies Humphrey has created called the Team Software Process and the Personal Software Process. TSP and PSP, when followed with a bit of discipline, seem to improve the quality of code. (According to Grojean, an executive at Microsoft who was buying a book online noticed Humphrey's book Winning with Software in the "other recommendations" section, bought it, read it on a plane, and decided to try TSP/PSP.)
Grojean led the group of Microsoft developers who were trained in TSP/PSP. They are in the process of rewriting a 24,000-line program using the methodology. The last version of this application had, Grojean said, more than 350 defects. With 14,000 lines completed, Grojean expects a total of about 22 defects this time. She said the cost to fix one defect post-production, purely from a development standpoint, averages $US4200. That means the cost of defects in this application, before you even factor in soft costs (productivity, bad press and so forth), will be cut from $US1,470,000 to $US92,400. Microsoft will have a more secure application, for less money, and deserves full marks for using TSP/PSP.
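Grojean's figures hold up. Here is a quick back-of-the-envelope check of the arithmetic, using only the numbers quoted above (the variable names are mine):

```python
# Figures as quoted: per-defect post-production fix cost and defect counts.
COST_PER_DEFECT = 4200   # $US, average cost to fix one defect post-production
OLD_DEFECTS = 350        # defects in the last version of the application
NEW_DEFECTS = 22         # projected total defects under TSP/PSP

old_cost = OLD_DEFECTS * COST_PER_DEFECT   # $US1,470,000
new_cost = NEW_DEFECTS * COST_PER_DEFECT   # $US92,400
savings = old_cost - new_cost              # $US1,377,600 in hard costs alone

print(f"Old: ${old_cost:,}  New: ${new_cost:,}  Saved: ${savings:,}")
```

A roughly 94 per cent reduction in defect cost, before any of the soft costs are counted.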
Here's the frustrating part: This application is not for you. It's not a product Microsoft sells at all. The application is one Microsoft uses internally. "It enables our OEMs and partners to take electronic delivery of data, which contains a lot of our intellectual property," said Grojean. Later, when pressed by an audience member on whether Microsoft planned to apply such seemingly effective principles to its products, for its customers, Grojean said she "couldn't speak to that."
The grade: Probably a C, which isn't good enough for a company with so much catching up to do.
The fact that Microsoft has chosen to apply some of the most aggressive secure coding practices to its own internal development and not its product development is an all-too-familiar sign of the Microsoft culture.
It's the same culture I saw cropping up repeatedly in a visit to Redmond last fall, best exemplified by Vice President of Security Mike Nash, who said, "I don't understand when we try to secure a product and then customers complain because we broke their applications by making some feature opt-in. I'm finding the way we interact with the customer on security has to do with the skill of the customer."
In other words, a lot of you aren't smart enough to handle secure products from Microsoft. That's OK, though. I'm sure, as their customer, you're sleeping better at night knowing that, first and foremost, Microsoft's own IP is safe and sound.
"Alarmed" is a biweekly column about security and privacy. Look for a new version every other Thursday.