Paul Mockapetris invented the Internet's core Domain Name System (DNS), a highly distributed hierarchical database that translates domain names into Internet Protocol addresses, and vice versa. Without it, the Internet as it's structured today wouldn't work. In an interview this week with Computerworld, he talked about the state of the DNS a year after the first distributed denial-of-service attack on the system.
Why is DNS security such a concern?
There was a cybersecurity report that came out of the U.S. government that said the two biggest security issues were DNS and BGP (Border Gateway Protocol). Part of it is that this is just the place where an attacker has the most leverage. ... If you can either control the traffic lights or change the street signs, you can create chaos on the road system.
Is the DNS safer than it was a year ago?
The state of it all is uneven. It is more robust at the top. But the bottom layers are a little bit less safe than they were a year ago just because the attack tools have sort of gotten better. The bad guys share their attack tools, and the good guys haven't bought any new technology to beat them, except at the root server level.
How are root servers safer today?
There are just many more copies of the root servers scattered around the world. While you may be able to do a successful denial-of-service attack against one of the servers in Virginia, the other ones in the U.S. and other countries will continue to work. So it's just much more difficult to shut down the whole system.
What is the problem when you go down the hierarchy?
If you talk to any of the top-level domains, I think they will tell you that the amount of junk and various kinds of strange packets that they see (is growing). And there's a bunch of broken implementations such that they'll receive repeated queries and so forth. There was a survey done that said a lot of the traffic to root servers is not useful, and the same thing happens for top-level domains too.
I think down at the lower levels there are sort of persistent problems that are not unlike those that you see on the Web or even e-mail (systems), where people are subject to denial-of-service attacks and the like.
What exactly is the junk traffic that you are talking about here?
There was a report that said that somewhere north of 90 percent of the traffic to root servers wasn't really useful traffic. What I mean by that is sort of the same query repeated over and over again. I used to say I was incredibly proud of the DNS (because) it was like a finely tuned machine that actually worked with a lot of sand in the gears. And the thing that I guess really surprises me is when you look at a lot of the root server traffic these days, there really is more sand than anything else. And part of the problem is who is going to do anything about it?
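The "same query repeated over and over" pattern he describes can be measured mechanically from a query log. A minimal sketch, with a toy log and invented addresses purely for illustration: given a list of (source, name, type) tuples, the fraction of entries that are exact repeats of an earlier one approximates the redundant share of traffic.

```python
from collections import Counter

def repeat_fraction(queries):
    """Fraction of log entries that exactly repeat an earlier
    (source, name, type) tuple; 0.0 for an empty log."""
    counts = Counter(queries)
    total = len(queries)
    unique = len(counts)
    return (total - unique) / total if total else 0.0

# Toy log: one client asking the same question three times.
log = [
    ("192.0.2.1", "example.com", "A"),
    ("192.0.2.1", "example.com", "A"),
    ("192.0.2.1", "example.com", "A"),
    ("198.51.100.7", "example.org", "MX"),
]
print(repeat_fraction(log))  # 0.5 -- half the traffic is repeats
```

Real measurement studies used far richer criteria (identical queries within a caching TTL, queries for nonexistent names, and so on), but the counting idea is the same.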
If you're sort of just getting random traffic, what are you going to do?
On the Internet, very often what you try and do is rather than figure out where the bogus traffic is coming from--and then asking whoever is responsible to stop--you just kind of say, "We'll figure out how to deflect the traffic or not answer it at all."
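One common way to "not answer it at all" is per-source rate limiting: answer each client up to some threshold, then silently drop the excess. A minimal sliding-window sketch, with the class name and thresholds invented for illustration:

```python
import time
from collections import defaultdict, deque

class SourceRateLimiter:
    """Refuse queries from any source that exceeds max_queries
    within a sliding window of `window` seconds."""
    def __init__(self, max_queries=10, window=1.0):
        self.max_queries = max_queries
        self.window = window
        self.history = defaultdict(deque)  # source -> recent timestamps

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        recent = self.history[source]
        # Evict timestamps that have aged out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_queries:
            return False  # deflect: do not answer
        recent.append(now)
        return True

limiter = SourceRateLimiter(max_queries=3, window=1.0)
decisions = [limiter.allow("192.0.2.1", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
print(decisions)  # [True, True, True, False]
```

Production name servers later formalized this idea as response rate limiting, but any per-source throttle captures the "deflect rather than trace" approach he describes.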
What's causing this bogus traffic?
There's a mixture of technical stuff and misconfigured servers. According to one survey, over 50 percent of (the name servers at) Fortune 500 companies are misconfigured to at least some extent, and north of 80 percent of companies are running software that has known bugs in it. Back in 1985, I wrote a document about how to configure your server in which there was a list of root servers. Some of those machines no longer exist, but they still get traffic. So some people don't actually update things.
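The stale-configuration problem he mentions (1985-era root server lists still receiving traffic) can be checked mechanically by diffing a server's configured root hints against the current authoritative list. A minimal offline sketch; the lists are hard-coded here, and the `10.0.0.53` entry is a made-up stale address for illustration:

```python
def stale_hints(configured, current):
    """Return configured hint addresses that are no longer
    in the current root server list."""
    return sorted(set(configured) - set(current))

# Illustrative data: two genuine root server addresses plus one
# hypothetical leftover from an old configuration file.
configured = ["198.41.0.4", "192.203.230.10", "10.0.0.53"]
current = ["198.41.0.4", "192.203.230.10"]
print(stale_hints(configured, current))  # ['10.0.0.53']
```

In practice the current list would come from the published root hints file rather than a hard-coded constant.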
What new technologies are being developed to improve security?
DNSSEC (DNS Security Extensions) and digital signature technologies are examples of something that could have an impact. Right now, there's a bunch of opportunities to basically forge data into the DNS, and you don't have any particular way to verify it.
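The forgery problem he points to can be illustrated with any signature scheme: an unsigned answer is accepted blindly, while a signed answer fails verification the moment its data is altered. This sketch uses an HMAC as a stand-in for DNSSEC's actual mechanism (DNSSEC uses asymmetric keys, RRSIG records, and a chain of trust from the root; only the verify-before-trust idea is shown here):

```python
import hashlib
import hmac

KEY = b"zone-signing-key"  # stand-in for a real zone's key material

def sign_record(name, rdata):
    """Produce a signature over a name/address pair."""
    msg = f"{name}|{rdata}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify_record(name, rdata, signature):
    """Accept the record only if the signature matches the data."""
    return hmac.compare_digest(sign_record(name, rdata), signature)

sig = sign_record("example.com", "192.0.2.10")
print(verify_record("example.com", "192.0.2.10", sig))   # True: authentic
print(verify_record("example.com", "203.0.113.66", sig)) # False: forged address
```

Without the verification step, a resolver has no way to tell the forged second answer from the genuine first one, which is exactly the gap he describes.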