What have we learned from the current stampede of Windows-infecting worms with names like Zotob, Esbot, Bobax and Spybot? First lesson: If you want to raise public awareness about a tired old subject like computer worms, just gore the oxen of reporters and editors at CNN, The New York Times, The Associated Press and ABC News. There's nothing like personal pain to freshen up a story. In CNN's case, there's nothing like having it happen on live TV.
Second lesson: Uh, is there a second lesson?
Probably not. After all, we already knew that the most common security hole is a buffer that can overflow if the code filling it doesn't check for input length. That's the programming flaw that these worms exploit -- a flaw that's been around since 1988, when the notorious "Morris worm" brought a much smaller Internet to its knees with a buffer overflow attack.
We already knew that it's a good idea for vendors to release patches as soon as vulnerabilities are made public. To Microsoft's credit, it shipped a patch the day it announced the security hole. (But no points to Microsoft for shipping products with the hole in the first place.)
We already knew that stretched-thin IT staffs have a tough time applying those patches quickly, because it takes time to test and then roll them out to servers and desktop PCs.
We already knew that publishing exploit code that can easily be pasted into worm programs is not helpful. Well, it's helpful to worm writers, but not to the rest of us. Such code was reportedly published on a security Web site the day after Microsoft got its patch out the door. Three days later, the Zotob worm was in the wild, infecting Windows machines.
We already knew that worm writers both share information and compete with one another. It's no great surprise that within hours, Zotob was joined by other worms exploiting the same hole -- and hammering away at Windows users.
So maybe there just isn't a lot to learn from this round of being overrun by worms.
But isn't it time we stopped treating worm outbreaks as learning experiences?
Isn't it time for Microsoft to stop selling operating systems with buffer overflow security holes? That wouldn't require bug-free programming -- just looking for and eradicating one particular kind of bug.
Yes, Microsoft is trumpeting that Vista (née Longhorn) will be safe from buffer overflows when it ships next year. Then again, that promise was originally based on Longhorn using .Net, which automatically checks buffers every time they're accessed. But now Microsoft reportedly has replaced most uses of .Net in Longhorn/Vista with code written by hand. That's so Vista can meet its 2006 deadline -- secure or not.
And isn't it time for Microsoft's partners and competitors, whether proprietary vendors or open-source projects, to eradicate all buffer overflows too? This isn't brain surgery -- it's more like good hygiene. For new code, it's simple: Just make sure every buffer access is checked. Existing code is a bigger pain, but if we found every reference to a two-digit year during the run-up to Y2K, we can find every buffer access.
Finally, isn't it time corporate IT stopped accepting buffer overflow bugs from any programmer -- vendor, consultant or in-house? It's not impossible, or even difficult. Every programmer knows how to write software that doesn't have this bug. Every enterprise deserves software that doesn't expose the business to attacks, downtime and financial loss.
It's time to demand business-quality code -- the kind our management should expect from a business-quality IT shop.
Otherwise, we'll just be showing that when it comes to buffer overflow attacks, we've really learned nothing at all.