Although no longer in the U.S. Navy, Mary Ann Davidson has placed herself on the front lines of an escalating war. It's not a battle between nations, or over land or principle, but instead a fight for information -- perhaps yours.
As chief security officer (CSO) at software vendor Oracle, Davidson is in charge of making sure that customers' data stays safe. Even if you're not one of Oracle's customers, which include critical infrastructure providers in the government, financial services and telecommunications sectors, it's possible that some of your data is being stored in Oracle software.
With so much sensitive information to protect, Oracle has been working to diminish the number of vulnerabilities in its software while delivering fixes in a way that makes it easier for customers to manage them. But just as major software vendors like Oracle are ramping up their defenses, hackers, lured by the profit potential in selling data, are becoming even more skilled at their attacks.
Amid the growing conflict, Davidson made a stop in London last week, where she debriefed IDG News Service (IDGNS) on Oracle's plan of attack. The CSO talked not only about her latest defense -- software auditing -- but also about a little-known technology world where military history is used to plot strategy and security researchers roam like mercenaries.
What can the vendors do to make their software more secure?
One of the problems I see is that communications software has poor auditability. There isn't a standard for data capture or for the format the data is represented in. Industry won't solve this problem by itself, so we have to get standards bodies like the National Institute of Standards and Technology (NIST) to take this on; then all of a sudden industry would have something to work with. Governments and large procuring bodies could push this by making it a procurement requirement.
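To make the auditability gap Davidson describes concrete, here is a minimal sketch of what a standardized audit record might look like. The field names and schema are purely illustrative assumptions, not drawn from any NIST standard or Oracle product; the point is that if every system emitted the same shape of record, logs from different products could be correlated.

```python
import json
from datetime import datetime, timezone

def make_audit_record(actor, action, resource, outcome):
    """Build one audit event in a single, consistent schema.

    The field names here are hypothetical -- the value of a standard
    is that every product emits the same fields, so an investigator
    can follow "the bad guy" across systems.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # who performed the action
        "action": action,      # what was attempted
        "resource": resource,  # what it was attempted on
        "outcome": outcome,    # "success" or "failure"
    }

# Example: a failed read against a sensitive table.
record = make_audit_record("jdoe", "SELECT", "hr.salaries", "failure")
print(json.dumps(record, indent=2))
```

Serializing to a line-oriented format such as JSON is one common design choice for this kind of record, since it is easy for downstream analysis tools to parse without a product-specific decoder.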
Why is auditing important?
It's important for a lot of people from a regulatory standpoint to not only show they did the right thing but to prove it. And on the larger scale, I look at what the [U.S.] defense department is doing and say, look, there's no way I can figure out what the bad guy is doing because there are no audit records.
I notice from your bio you were an officer in the Navy Civil Engineer Corps. What strengths do you bring from that experience?
I read a lot of military history, and a lot of those lessons apply. One of my development teams thought I was insane recently when I asked them, 'Okay, how did the Marines take Guadalcanal?' I was talking about the strategy of identity management. What does that have to do with anything? You don't try to hold everything. If you hold everything, you defend nothing. At Guadalcanal, they took the piece that was strategic, which was the airstrip.
What's the airstrip of Oracle's database software then?
Maybe the airstrip is the database. There's a great line that's making the rounds: it's the data, stupid. That's really what you are trying to protect.
Do you see security as its own war?
Oh, it is. This is one of the reasons I got interested in this audit problem. The U.S. defense department currently has physically separate networks and that provides them with a level of security. They have three networks and none are connected to the Internet. What they want to do is to get these networks to talk to each other more easily because they want to get intelligence from an intelligence source all the way out to the war fighter in real time. It has to be very secure, and it's not very different from the business problem. You want to get the right information to the right person at the right time.
If they put those networks together, then the network becomes the battlefield. If I'm the bad guy, I'm not going to bother to pick up a gun and fight you -- I'm going to attack your network. I'm going to stop your ability to wage war by bringing your information systems down, and again, in a corporate world it's warfare. Why would I bother walking into your office and rifling through your trade secrets file if I could get it electronically?
Who's winning the security war?
I think the hackers do a lot better job at colluding. Hackers don't worry about competing with each other, I don't think, and it's now moved beyond bragging rights and become more criminally minded.
But industry is talking with each other more; we talk about the researchers who are plaguing us and how we're dealing with it.
How are the researchers plaguing you?
In one case a researcher said he knew about a vulnerability in one of our competitor's products, and he was shopping it around. I thought what am I going to do with this information? We don't run on their products, and I'm certainly not going to use that to make trouble.
Is that common?
Oh, that's extremely common. There are whole business models based on it. They traffic in inside information on vulnerabilities, they sell it on a subscription basis and they sell exploits. Trafficking in that information increases the risk to the customer. I've actually told customers that if you use this sort of service, you are the problem.
Do you feel security researchers are being responsible in terms of disclosure?
Some are and some aren't. There's tension about the degree of information researchers provide. I think it's absolutely appropriate for vendors to provide enough information for customers to know if they are at risk, and how much they are at risk, but that's a different thing than providing exact details on how to exploit it.
How long should you wait to tell customers about a vulnerability?
I'm not saying to sit on something forever and not tell anyone about it, but if you disclose too much information and there's no fix, that actually increases the risk to the customer that they are going to get whacked.
How long is it acceptable to sit on a vulnerability without giving a fix?
The uninitiated might say right away, but even if it's just a two-line code change, you have to check how many versions it exists in, whether it is operating-system specific and whether there is code like it that might have the same vulnerability. If I make a change, I want to make sure I catch all instances of it. The record is 78 patches for one vulnerability, so getting a fix into a customer's hands is different from a two-line code change.
How can we get around all this patching?
By making it easy to lock products down. One of the easiest ways to break into something is not even to look for a vulnerability, but just to look to see if someone didn't follow best practices. This is something I think you'll be seeing in the industry in general -- large customers asking for lock-down configurations. There are so many configurations for things that if you have to lock all the doors and close all the windows by hand, it's just not scalable.
I actually encourage customers to make it a procurement requirement and force vendors to do this to the extent that they don't already.
Why don't vendors do that already?
Because they are trying to serve two groups -- customers and the development community, which wants everything open. It's not easy to come up with a one-size-fits-all security configuration.
What other advice do you have for customers on security?
Push your vendor to tell you how they build their software and ask them whether they train people on secure coding practices. Also, push industry analysts on which products are more secure. Analysts tend not to do direct product comparisons, but I think they should do it on security products based on the total cost of ownership. For example, ask which product is going to cost more from a security standpoint, how many patches I am going to have to apply, and how many consultants I am going to have to hire to lock it down.