Oracle's Mary Ann Davidson is a self-professed Chief Security Officer (CSO). She lobbied hard for a CSO post, and when it was awarded to her in 2001 by the Security Steering Committee, she wrote her own job specification.
Computerworld Hong Kong's editor Winston Raj caught up with her during her short stint in Hong Kong, and asked her some secure questions.
Computerworld Hong Kong (CWHK): Why did Oracle wait till 2001 to have a CSO?
Davidson: I think the reason I was named CSO was the Security Steering Committee. So why did we take so long? Because I do not think there was a perceived need for it. We had already built security as a process into our development framework because we are customer-driven. But I kept saying, "this is really dumb, and why are we doing this, and why are we doing that," and the steering committee offered me the CSO post.
CWHK: How do you draw the line between marketing and development, especially when you have a marketing campaign called “Unbreakable” and yet databases are vulnerable to buffer overruns?
Davidson: Let me tell you this, I was not in favour of this — which security person likes the word “Unbreakable”? It made us a big target. Larry (Ellison) was the one who proposed “Unbreakable”. And it was specific to our database product. That was because we had proof points. I will tell you “Unbreakable” got so catchy that some enthusiastic marketer applied the term to products that had no proof points. It forced us to evaluate Application Server and scrutinise its development. But in the end it was brilliant, because it made us focus. And it made us internally aware as a company why security is important. Furthermore, no developer wants to be part of a group that made what's Unbreakable breakable.
Buffer overruns occur because programmers miss some fundamentals. We want to make every single person feel personally responsible and knowledgeable and aware of the things he or she is working on. It doesn't necessarily mean that we'll never have a problem again, but it's about doing a lot of things better and better. Part of it, I think, is that developers don't think like hackers, and that's why we have an ethical hacking team.
CWHK: Where do you stand on disclosure of security issues?
Davidson: The operative word here is responsible disclosure. There is a big debate going on now. We are part of a new group called the Organization for Internet Safety, which is looking at best practices for how you disclose security vulnerabilities. On one hand, you have many researchers who want to get famous. On the other hand, companies want to protect their customers and need to be given a chance to fix the problem. So we are building rigorous processes for researchers who find these security vulnerabilities and report them to us. Also, just because we have not fixed something does not mean we are malicious, insensitive or lazy. For certain vulnerabilities you may have to change the entire code base. You can only do that in a major release of a product, else you break everything. Sometimes you can make customers so secure that their systems do not work.
CWHK: How are you integrating privacy as part of your security process?
Davidson: That's an interesting question. We have a Chief Privacy Officer, and no, he does not report to me. He works in the corporate affairs department, as his function is legislative. But it is funny that you ask me about privacy as a process: one of the things we want to do is make privacy part of our development process, through privacy coding guidelines. We will train people on this. My hacking team is also looking at the privacy rules that can be breached.