Udi Yavo talks about hacking regulation and legislation with CSO in a series of topical discussions with industry leaders and experts.
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. This week CSO is posting the final submissions for the second set of discussions examining security research, security legislation, and the difficult decision of taking researchers to court.
CSO encourages everyone to take part in the Hacked Opinions series. If you have thoughts or suggestions for the third series of Hacked Opinions topics, or want to be included as a participant, feel free to email Steve Ragan directly.
What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?
Udi Yavo, CTO, enSilo (UY): A general misconception is that legislation can stop cyber-attackers. For example, the attempted Wassenaar Arrangement controls sought to classify vulnerabilities as akin to weapons in order to restrict their trade.
Such a proposal is problematic and, to say the least, ineffective. An agreement like this actually increases the price of vulnerabilities on the underground cyber-market, since it raises the researcher’s risk. Those who already sell directly to the underground market will find ways to bypass the regulation. Moreover, such changes could even lead to the eradication of third-party bug bounty programs, whose hands might be tied by such proposals. As a result, over time, “play-it-safe” researchers might be more tempted to go the underground route, ironically defeating the actual purpose of the regulation.
What advice would you give to lawmakers considering legislation that would impact security research or development?
UY: I would advise them to turn the 90-day window for vulnerability disclosure that has become industry practice into an actual regulatory requirement. The legislation should define the grace period and the consequences for violating it, giving software makers enough time to patch a vulnerability while still holding them to a deadline.
It’s important here to understand the motivation underlying those researchers who bring these vulnerabilities into the limelight. They work hard to find each vulnerability and ultimately do so in order to benefit, whether in the form of monetary compensation (by selling vulnerabilities on the underground cyber-market), or in the form of recognition and praise.
When a vendor goes silent after a researcher reports a vulnerability, the researcher is placed in a tough spot. Many do not want to sell the vulnerability they found on the underground cyber-market to reap their reward, but an unresponsive vendor leaves them without a legitimate avenue to recognition.
Implementing legislation that mandates a 90-day window for vendors to fix vulnerabilities reported by researchers would go a long way towards holding software makers accountable for imperfect code. It would also help ensure consumer safety and encourage researchers to keep finding vulnerabilities and disclosing them responsibly.
If you could add one line to existing or pending legislation, with a focus on research, hacking, or other related security topic, what would it be?
UY: I would add a clause making breach disclosure mandatory at the federal level, not just the state level.
Now, given what you've said, why is this one line so important to you?
UY: Right now, there is legislation guiding breach disclosure, but only at the state level, and the rules differ from state to state. Furthermore, many of these regulations actually frighten breached companies away from disclosing, since the consequences of admitting to a breach are not clear-cut. For instance, companies often do not know in advance whether they will be fined for sustaining a hack, even if they had a security solution in place at the time of the breach.
Vague and varied state legislation discourages forthright disclosure on the part of companies. Enacting federal legislation would go a long way towards making the process of breach disclosure more transparent and productive.
Legislation on a federal level should offer incentives (such as tax breaks) to companies that disclose breaches and follow best practices, rather than punitive measures for those who are breached.
Furthermore, there needs to be a federal initiative for dealing with cyber-attacks. For example, we are increasingly seeing targeted attacks on high-profile companies (e.g., Sony, Target). The federal government cannot reasonably demand that organizations or corporations under attack handle these incidents on their own and bear the full consequences.
Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?
UY: Let’s begin by stating that the problem is not with the researcher. The problem is a flaw – a coding or design error – in the vendor’s software.
That said, there should be a proper disclosure practice (as outlined in my answer to the second question). If a researcher violates that practice, there would be grounds for legal action.
By supporting researchers and facilitating the process of responsible disclosure, rather than antagonizing them, vendors can raise software security levels industry-wide.
What types of data (attack data, threat intelligence, etc.) should organizations be sharing with the government? What should the government be sharing with the rest of us?
UY: Organizations should be sharing more data regarding the type of malware used in attacks on their systems, results of investigations into particular cases of breach, etc.
For example, we have state regulations that require an organization to disclose a breach if it affects a resident of that state. We have become aware of many breaches through these mandatory disclosures, which in turn increases the pressure on corporations and organizations to better protect the customer data entrusted to them.
The government, too, should be sharing this type of data with the community. The government is privy to a vast array of data and tools that can be used to empower security professionals and help them protect against attacks. This is particularly significant in a case where a company’s environment is compromised by a nation-state. It’s important for a company to know that it can turn to the federal government for help.
With this in mind, we need to ensure that the data that is shared (from both parties) does not go beyond what needs to be known. In a post-Snowden world, we are all too wary of over-collecting. Collecting the data should not compromise user privacy, or be abused for other purposes. The data itself should not divulge user activity, but focus only on the details pertinent to understanding the cyber-attack.