Ten years ago, I had a life-altering work experience. I was on the team at Microsoft trying to solve two huge problems:
Millions of computers had been infected with a self-replicating virus (AKA a 'worm') now known as Blaster.
For a time, some even attributed the Northeast power outage of August 2003 to Blaster.
Many of my former colleagues spent literally a year of their lives working with me to fix the aftermath of these problems. More friends, with whom I later worked at the Idaho National Laboratory (INL), helped me understand the breadth of the problem that Blaster uncovered, specifically the reliance of critical infrastructure upon consumer-grade technologies.
Much of my success in my career is due to the people I met (working tremendously long hours) and the lessons I learned (the VERY hard way), first during those weeks of toiling to understand the scope of the problem, and then during the months we spent over the following year attempting to fix it. It was one of the most expensive projects I've ever worked on.
Millions upon millions of dollars were spent by Microsoft to improve internal processes and technologies to prevent a similar outcome in the future.
Millions upon millions more were spent by Microsoft to help customers improve their technology infrastructures, and those customers then spent millions upon millions more so that they would be more resilient to future cybersecurity events.
But by far the greatest cost of Blaster was the personal toll it took on all of us involved in the response. Work/life balance has always been a problem for me, and when a problem of this magnitude arose, I automatically threw myself into the thick of trying to solve it. Late-night conference calls, sleeping on the floor of computer labs, rushed meals of take-out food in conference rooms, and many more hours spent at a job that had demanded a tremendous amount of my team even before Blaster all resulted in tension in my marriage. Fortunately, my dear wife Holli (with whom I am celebrating 18 years of marriage this month) brought me to my senses in a very direct conversation in November of 2003. That conversation most definitely prevented a divorce and set a path upon which she and I are still reaping personal, career and economic benefits built on the foundation of experiences like those of 2003.
I was only able to re-balance my life because of the hard work and dedication of others. The early days of security efforts were more like a volunteer fire department than a top-down operation. I was on the Microsoft Services team during the Blaster incident. We were responsible for all customer interactions, both measuring the impact of Blaster on our customers and communicating any solutions to them. The Microsoft business model relied on very few Microsoft employees and an army of partners (re-sellers, service providers, etc.). This meant that while we had direct contact with thousands of Microsoft customers (most of them threatening to sue Microsoft for damages in those first few days), we had to rely on thousands more individuals to scale the response to the millions of customers impacted by the event. The training we coordinated to prepare those partners to solve the Blaster problem effectively was an enormous effort in and of itself.
Fortunately for Microsoft and its customers, many thousands of people made incredible personal sacrifices to help organizations of all sizes recover from the effects of Blaster. For all of you, internal Microsoft staff, external partner employees, and even those super-smart Microsoft customers, who worked with me during that horrible year of 2003: thanks for sharing your expertise. Thanks for making sacrifices yourselves to help Microsoft and its customers try to make sense of the madness that was August 2003. I know many of you paid high personal prices for your efforts. Many of us quite literally lost a year of our lives because of the underlying flaws in technologies and miscreants' exploitation of those flaws for their own purposes.
There are many reasons, in my humble opinion, why we haven't seen another Blaster-level cyber event. The Microsoft team most definitely learned its lesson and spent an incredible amount of time improving the way technology is developed and deployed. But not all companies have the luxury of funding multi-million-dollar security mobilization efforts. Based upon research I have done over the last decade, I have also seen that the adversaries (the miscreants, as we called them then) have fundamentally changed the way they operate. On one occasion at INL, a team of international researchers and I saw the attackers self-policing when it came to deploying worm-like attacks. One individual on an IRC channel bragged that he could deploy a worm that day against an unpatched vulnerability. The other people on that channel immediately threatened the braggart with bodily harm should he proceed with his plan.
It makes sense when you think about it. Massive worms cause huge denial-of-service problems, thereby blinding the attackers and preventing them from exploiting the systems that they already control. Also, worms drive a news cycle which results in organizations improving their infrastructures and applications, thereby reducing the attack surface. Worms like Blaster are bad for their business, and I think that's why we haven't seen similarly sized incidents since. The underlying technology problems have not been solved. The root cause of Blaster was a vulnerability in Microsoft's operating systems. But the contributing factor which exponentially increased the impact of the worm was that Microsoft's customers were not properly managing their technology infrastructures.
When I go to conferences and speak on the topic of mobile security today, this is one of the key points I focus on: configuration management is getting WORSE, not better. A few years ago, I started playing a game I called Smartphone Bingo. It required everyone in the room to take out their smartphone or tablet, open the device's settings and find the version of the operating system. I would then start a sort of reverse auction, calling out version numbers to see who had the oldest, unpatched version. Sometimes we would limit the devices in our Bingo game to just corporate-issued devices. It is shocking to me that even the most mature organizations are completely ignoring the very hard-learned lessons about configuration management on the ever-increasing numbers of mobile devices. One organization I spoke with last year had 60,000+ smartphones and estimated that it had 20,000+ different configurations/versions of those smartphones deployed. Over the last few years, we've seen more and more evidence of attackers targeting mobile technology either for direct financial gain or to steal intellectual property for longer-term advantage. The lack of effective configuration management on enterprise-connected mobile devices makes their jobs incredibly easy.
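The kind of fleet-wide version audit described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the device IDs, versions and baselines are invented, and real fleets would pull this data from an MDM console), but it shows how quickly configuration sprawl and out-of-date devices can be surfaced:

```python
from collections import Counter

# Hypothetical device inventory: (device_id, os_name, os_version) tuples,
# as might be exported from a mobile device management (MDM) console.
inventory = [
    ("dev-001", "iOS", (6, 1, 3)),
    ("dev-002", "iOS", (5, 0, 1)),
    ("dev-003", "Android", (4, 0, 4)),
    ("dev-004", "Android", (2, 3, 7)),
    ("dev-005", "iOS", (6, 1, 3)),
]

# Minimum acceptable version per platform (illustrative values only).
baseline = {"iOS": (6, 1, 0), "Android": (4, 0, 0)}

def audit(devices, minimums):
    """Count distinct configurations and flag devices below baseline."""
    configs = Counter((os_name, version) for _, os_name, version in devices)
    out_of_date = [dev_id for dev_id, os_name, version in devices
                   if version < minimums.get(os_name, (0,))]
    return configs, out_of_date

configs, stale = audit(inventory, baseline)
print(f"{len(configs)} distinct configurations across {len(inventory)} devices")
print("below baseline:", stale)
```

Even this toy inventory of five devices yields four distinct configurations; scaled to 60,000 devices, the same arithmetic is how you end up with 20,000 variants and no idea which ones are exploitable.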
Imagine I had a time machine and went back to August of 2004. On that imaginary trip, I sit down with the CIOs/CISOs of the Microsoft customers I had just spent a year helping to recover from Blaster, and I tell them that in 2013 they will be relying on the good will of mall-kiosk employees to keep their enterprise mobile technology configured in a way that prevents system compromise. I'm sure they would laugh. How could anyone believe that we as technology and information security professionals would ever set ourselves up to fail like we did in 2003?
Unfortunately, the reality is that we ARE setting ourselves up to fail. Every unpatched Android or iOS device that you give full access to Exchange ActiveSync is an invitation to the miscreants to steal your company's email, attachments and contact lists. Every time I bring this up at a conference, someone responds, "I'm just a little company in an obscure industry! Surely the attackers are going after bigger fish than me!"
The reality is that attackers are going after targets of opportunity just as often as they are dedicating their efforts to attacking a specific organization. If you are not enforcing strict mobile technology configuration management policies, you are getting on a risk management treadmill that will grind you down, chew you up and leave you worn out. While I do not believe we will ever see another Blaster-level event impacting billions of systems, I am certain that configuration management failures are being exploited every day, both opportunistically and during targeted attacks. While helping our consulting customers, we've seen some very interesting non-persistent exploits run against iOS and Android devices that leave very few forensic traces.
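A strict policy of the kind argued for above boils down to a simple gate at sync time: compromised devices never connect, out-of-date devices are quarantined until patched, everything else is allowed. The sketch below is purely illustrative (the function, attributes and version baselines are invented for this example; in practice the enforcement lives in your MDM or Exchange ActiveSync policy, not in application code):

```python
# Illustrative minimum OS versions per platform (not vendor guidance).
MIN_OS = {"iOS": (6, 0, 0), "Android": (4, 1, 0)}

def access_decision(os_name, os_version, jailbroken):
    """Return 'block', 'quarantine' or 'allow' for a device sync request."""
    if jailbroken:
        return "block"  # rooted/jailbroken devices never get mail access
    if os_version < MIN_OS.get(os_name, (99,)):
        return "quarantine"  # hold sync until the device is updated
    return "allow"

print(access_decision("iOS", (5, 1, 1), False))      # below baseline
print(access_decision("Android", (4, 2, 2), False))  # meets baseline
print(access_decision("iOS", (6, 1, 3), True))       # jailbroken
```

Note the default for unknown platforms: anything the policy doesn't recognize falls into quarantine rather than being waved through, which is the fail-closed posture a configuration management policy needs.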
Proper mobile technology configuration management can be difficult because of the relatively limited technologies available to manage the situation at scale. But just because it's difficult doesn't give us as technologists an excuse to ignore the problem. Discipline, innovation and hard work will be required until mobile technology management platforms catch up with their server/desktop/laptop counterparts.
I find it hard to believe that it has already been a decade since Blaster. I'm saddened that we sometimes appear to have forgotten many of the hard lessons we learned in the Fall of 2003, and I'm very grateful that we haven't had a similar-magnitude event since. Here's hoping that we avoid one for a long, long time to come, that we keep applying those hard-learned lessons to new technologies as they are integrated into our enterprises, and that we keep up the good fight against those who wish to steal and misuse our information.
Security veteran Aaron Turner, a former strategist in the security division of Microsoft, is the founder and president of IntegriCell.