Disclosure has two very different perspectives, according to Troy Hunt, who gave a well-attended presentation at AusCERT 2017. On one side are threat researchers like Hunt who disclose vulnerabilities; on the other are the organisations receiving reports about problems in their systems.
Hunt's focus was on the competing issues of responsibility and disclosure. He noted that while reporting security problems is usually well intentioned, it doesn't always go well.
Last year, Dave Levin was arrested following his disclosure of a flaw in an elections website. On the face of it, this seems unfair, but Hunt says you need to unravel the details to really understand what happened.
Levin and election candidate Dan Sinclair recorded a video showing how vulnerable the election officials' website was; the voting systems themselves were not affected. Using a SQL injection tool called Havij, Levin extracted username and password information from the site's database and logged in as the system administrator. He recorded all of this and posted the video to YouTube.
Hunt said Levin's actions were not very wise, as there are far less intrusive ways to determine whether a site is vulnerable to SQL injection. Simply entering a search query containing an apostrophe will, on a vulnerable site, generate a database error that indicates the flaw.
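A minimal sketch (not the election site's actual code) of why a stray apostrophe reveals the flaw: when user input is concatenated directly into a SQL string, the extra quote breaks the query's syntax and the database error leaks back to the user.

```python
import sqlite3

# Toy database standing in for the website's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (title TEXT)")
conn.execute("INSERT INTO pages VALUES ('election results')")

def vulnerable_search(term):
    # Unsafe: user input spliced directly into the SQL string,
    # the pattern Havij and the apostrophe test both exploit.
    return conn.execute(
        "SELECT title FROM pages WHERE title LIKE '%" + term + "%'"
    ).fetchall()

vulnerable_search("election")        # behaves normally
try:
    vulnerable_search("election'")   # the apostrophe breaks the query
except sqlite3.OperationalError as e:
    print("SQL error leaked to the user:", e)
```

Seeing that error is enough to confirm the vulnerability; no data needs to be extracted. The fix is parameterised queries (`WHERE title LIKE ?`), which keep user input out of the SQL text entirely.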
There was a way to disclose the issue without doing things that would get you arrested.
Hunt described other, similar cases where threat researchers got themselves into trouble by extracting large volumes of data from a system when a single record would have sufficed to notify the affected party.
In terms of response, Hunt used the example of UK fitness company Pay As U Gym. The company was threatened, via a Twitter direct message, by a hacker claiming to have access to its server and database. It ignored the message, and the hacker responded by putting the data up for sale.
Not long after, the data was released. Incredibly, Hunt said, the company was not even aware of what data it was holding. Pay As U Gym told customers their credit card data was not affected, but it was clearly visible in the leaked data.
Similarly, CloudPets, who make an IoT stuffed toy that parents can use to communicate with their children, had several vulnerabilities exposed by threat researchers. One of the problems was that the company was running a MongoDB database that was publicly accessible with no authentication. Again, warnings were given that the company did not respond to, and its database was wiped several times after it failed to pay a ransom.
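The CloudPets misconfiguration, a database reachable from the internet with no authentication, is exactly what MongoDB's own configuration options guard against. A minimal sketch of the relevant settings in a standard `mongod` YAML config file:

```yaml
# Sketch of mongod configuration avoiding the CloudPets-style exposure.
net:
  bindIp: 127.0.0.1        # listen only on localhost, not the public internet
security:
  authorization: enabled   # require clients to authenticate before reading data
```

With `bindIp` left open and `authorization` disabled, anyone scanning the default port 27017 can read, and wipe, the data.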
One of the problems with reporting vulnerabilities to companies is that they are often unwilling to deal with such reports, or simply don't know how.
Hunt contrasted this with the Australian Red Cross Blood Service breach of 2016. He was informed of the breach by a hacker who had simply been scanning IP addresses and found backups of the Red Cross donor database.
The Red Cross took the notification seriously and took immediate steps to lock down access to the data, which had been leaked by a third-party partner rather than the Red Cross directly. Disclosures were made public within 72 hours via email, text and press release, using clear language that honestly explained the issue and took responsibility for the incident, even though a third party had caused it.
One of the things companies can do is formalise the structure for reporting bugs and vulnerabilities. Hunt advocated bug bounties, as they incentivise ethical disclosure. Having a vulnerability reporting policy, supported by a defined process, is also important. Hunt pointed to Tesla as a company that publishes such a policy on its website, along with a secure communication channel for reports.