Tal Klein, from Lakeside Software, talks about hacking regulation and legislation with CSO in a series of topical discussions with industry leaders and experts.
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. Now, this second set of discussions will examine security research, security legislation, and the difficult decision of taking researchers to court.
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A. The deadline is October 31, 2015. In addition, feel free to suggest topics for future consideration.
What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?
Tal Klein, VP of Strategy at Lakeside Software (TK): The biggest misconception nearly everyone has about cybersecurity is that it is somehow special or different from physical security. Treating it differently implies there is some magic thing that can be done to prevent all breaches.
Attackers naturally gravitate toward the easiest path in. If we establish a core set of protections, such as PCI, we raise the bar on how hard attackers have to work to get in via old-fashioned "hacking." However, PCI doesn't ensure credit card transactions aren't fraudulent; it just establishes a series of criteria that are prerequisites for handling credit card transactions.
When it comes to the way banks handle our money in America, there are similar regulations managed by a government entity, the FDIC (Federal Deposit Insurance Corporation). The main difference between the "spirit" of PCI compliance and the mission of the FDIC is that PCI doesn't provide insurance, meaning the onus of transactional risk falls on the shoulders of either the credit card issuer or the cardholder.
I think that's because in the physical world, lawmakers accept that institutional failure, fraud, and theft will happen regardless of what protections are in place; that is why we have insurance as a regulated compensating control. In the digital world, by contrast, lawmakers still seem to believe that all breaches are somehow preventable.
What advice would you give to lawmakers considering legislation that would impact security research or development?
TK: I would advise them to think broadly in terms of guidelines rather than specifics. Because technology adoption moves faster than compliance standards do, many standards are irrelevant by the time they take effect (something I call "checkboxification"). I would also encourage lawmakers to think about the business productivity impact of any regulation.
Since regulation adds friction to business, it's important to have a deep understanding of the benefit of such regulation versus its impact on the American economy. The worst thing that can happen is that too much regulation handicaps American businesses and makes them uncompetitive in the market. We're already seeing ramifications of this with the impact of the Stored Communications Act on American cloud computing companies doing business abroad, most famously in Microsoft's battle with the Federal Government over data in Ireland.
If you could add one line to existing or pending legislation, with a focus on research, hacking, or other related security topic, what would it be?
TK: The establishment of a non-partisan committee chartered to annually review and assess all such legislation, in order to ensure its relevance and benefit to the American people.
Now, given what you've said, why is this one line so important to you?
TK: Since legislators are largely out of touch with the latest trends in technology, it's important for those with knowledge to be given an opportunity to review and critique any such "cyber" legislation in order to ensure its relevance to the present landscape.
Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?
TK: I don't think we've quite figured out how to qualify the "ethicalness" of such research. However, if the research was conducted under reasonable responsible disclosure, and a trusted third party acts as an "arbiter" of such "ethics," then I think it's fair game. I think we're making progress here with companies like Bugcrowd (whose CEO was recently interviewed for this column).
What types of data (attack data, threat intelligence, etc.) should organizations be sharing with the government? What should the government be sharing with the rest of us?
TK: I think the work OASIS is doing with the STIX and TAXII frameworks is quite good, and we're building real momentum toward information sharing. It would be good to see the NSA become a contributor.
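For readers unfamiliar with STIX, the kind of shareable indicator Klein alludes to can be sketched in the later STIX 2.x JSON form, which OASIS standardized after this interview. The sketch below is illustrative only, not an official example: the field names follow the STIX 2.x indicator layout, while the ID, timestamps, name, and IP address are made-up placeholders.

```python
import json

# Illustrative STIX 2.x-style indicator, built as a plain Python dict.
# All values below are placeholders, not real threat data.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",  # example UUID
    "created": "2015-10-01T00:00:00.000Z",
    "modified": "2015-10-01T00:00:00.000Z",
    "name": "Example C2 address",
    # Pattern matches a documentation-reserved (RFC 5737) IP address.
    "pattern": "[ipv4-addr:value = '198.51.100.1']",
    "pattern_type": "stix",
    "valid_from": "2015-10-01T00:00:00.000Z",
}

# Serialize to JSON, as it would be exchanged over a TAXII channel.
print(json.dumps(indicator, indent=2))
```

The point of the format is that an indicator like this is machine-readable: any participating organization (or, as Klein suggests, a government contributor) can publish or consume it over TAXII without bespoke parsing.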