Joe Loomis of CyberSponse talks about hacking regulation and legislation with CSO in a series of topical discussions with industry leaders and experts.
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. Now, this second set of discussions will examine security research, security legislation, and the difficult decision of taking researchers to court.
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A. The deadline is October 31, 2015. In addition, feel free to suggest topics for future consideration.
What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?
Joe Loomis, CEO, CyberSponse (JL): That it’s possible to legislate cybersecurity without making tough choices. Congress has been very dysfunctional over the past few years on a wide range of issues, but it’s been particularly slow to act on cybersecurity. For the most part, this appears to be because lawmakers don’t want to step on anyone’s toes. (This is also true for the executive branch.)
The most recent example is CISA, the threat information-sharing bill, which stalled yet again in the Senate - and which addresses a key issue that lawmakers have been wrangling with for years now. Lawmakers need to recognize that real cybersecurity legislation will have to be tough; it will have to be demanding.
We can’t push for real cybersecurity reform without setting high standards for compliance and strict penalties for not meeting them. We can’t do it without challenging privacy groups on the merits of their arguments against threat data sharing between the private sector and government.
We can’t do it without implementing tough penalties against state-sponsored attacks on corporate entities. Of course, real security can’t be entirely legislated - every practitioner knows the limits of compliance standards and legal frameworks.
It has to come from within each company. But Congress can play a key role in raising the bar for everyone nationwide - however, it can’t do this with half-measures and watered-down bills.
What advice would you give to lawmakers considering legislation that would impact security research or development?
JL: We should be incentivizing security research, not restricting it. Vulnerability research, penetration testing software, and similar tools are often viewed with suspicion in Washington; lawmakers see them as a threat to security rather than a benefit to it.
Congress should be looking at how it can support this field of research, through more streamlined regulations, scaled back export controls, better access to federal research and support programs, tax incentives and more federal grants to support and encourage this research.
For instance, DARPA grants have done a great job at incentivizing original, experimental research in security. But we need more of these programs to encourage groundbreaking research in this field.
If you could add one line to existing or pending legislation, with a focus on research, hacking, or other related security topic, what would it be?
JL: Mandatory simulations and training on real-world breaches.
Now, given what you've said, why is this one line so important to you?
JL: Real-world simulations are essential for exposing weaknesses, oversights and failed planning. Without this type of training, companies, government agencies and other organizations believe their own lies.
Every company should be engaged in this type of testing - it should be a key part of any industry or government compliance standard or measurement. Additionally, we need more large-scale industry and cross-industry ‘war game’ exercises that put multiple systems and teams to the test.
Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?
JL: No, I don’t think companies should resort to legal threats in these instances. If a researcher is able to find a significant security flaw in a company’s network, service or product, then it really is the company’s fault. Instead of getting mired in a legal and PR battle with the researcher - which will only damage the company’s reputation further - the company should focus all of its energies on fixing the problem and doing the work necessary to make sure more vulnerabilities aren’t out there.
Of course, no company wants to be exposed in this way - it’s not only embarrassing, but it can hurt business and potentially expose the company to fines or other regulatory penalties. But it’s far better to have a bug discovered by a white hat than by a Russian cyber gang, and companies need to look at it that way, as hard as it may be sometimes.
Instead of antagonizing the research community, companies should try to embrace it - bug bounties are a great resource to crowdsource threat detection and more companies should take advantage of them.
That said, there are times (it doesn’t happen often, but it does happen) when researchers aren’t responsible in how they manage this process. It’s important for researchers to consider the damage their disclosures may cause to end consumers, national security, etc.
While 90 days is the generally accepted time frame from notification to publication, sometimes this isn’t enough - and researchers should be willing to give companies and agencies more time, if they genuinely need it, and to withhold specific details from the public disclosure if those would cause more harm than good.
Public disclosure sometimes walks a fine line between public benefit and public endangerment, so researchers need to be cognizant of the far-reaching impacts their findings may cause - and proceed from there.
What types of data (attack data, threat intelligence, etc.) should organizations be sharing with the government? What should the government be sharing with the rest of us?
JL: Organizations should be sharing as much data on active threats as they can with the government, including key indicators (such as new signatures, identified malicious IPs, etc.), threat actor profiles and patterns, malware samples, detected software bugs, etc.
The government should be doing likewise - and the process should be done quickly and automatically, with the capability for machine-to-machine communication. However, none of this can happen at scale until Congress passes a bill like CISA to protect companies from frivolous lawsuits. This is one area where Congress’ role is vital.
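The kind of machine-to-machine indicator exchange described above is what the STIX standard (from OASIS, typically transported over TAXII) is designed for. As a rough illustration only - the helper function, IP address, and description below are hypothetical examples, though the field names follow the STIX 2.1 indicator object - a malicious-IP indicator can be sketched like this:

```python
import json
import uuid
from datetime import datetime, timezone


def make_ip_indicator(ip_address: str, description: str) -> dict:
    """Build a minimal STIX 2.1-style indicator for a malicious IPv4 address.

    Field names follow the OASIS STIX 2.1 spec; the values are illustrative.
    """
    # STIX timestamps are UTC in RFC 3339 format with millisecond precision.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # STIX IDs are type--UUID
        "created": now,
        "modified": now,
        "name": description,
        # A STIX pattern matching traffic to/from the flagged address.
        "pattern": f"[ipv4-addr:value = '{ip_address}']",
        "pattern_type": "stix",
        "valid_from": now,
    }


# Serialize to JSON for automated exchange (e.g. pushed to a TAXII server).
# 203.0.113.5 is from the reserved documentation range, not a real threat.
indicator = make_ip_indicator("203.0.113.5", "Suspected C2 server (example)")
print(json.dumps(indicator, indent=2))
```

Because the format is standardized JSON, both private-sector tools and government systems can produce and consume these records automatically, which is the scale the CISA-style sharing regime presumes.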