The true cost of a data breach

How can you accurately measure the cost of a data breach? Traditional approaches have some major limitations. Verizon’s latest Data Breach Investigations Report examines an improved methodology better suited to the real world.

Author: Robert Parker, Head of Security Solutions, Verizon Asia Pacific

There are many reasons for security attacks. Attackers may be looking for payment card data or other sensitive commercial information, or they may simply wish to disrupt an organisation’s operations. Whatever their motive, data breaches have a significant impact on a business. Protecting an organisation from an unwanted intrusion can save tens of millions of dollars, and help maintain customer loyalty and shareholder confidence. But can we really quantify the true cost of a data breach?

As part of Verizon’s ‘2015 Data Breach Investigations Report’ (DBIR, http://www.verizonenterprise.com/DBIR/2015/), we have built a more accurate approach to estimating the losses that result from a security incident. The approach is based on actual data and considers multiple contributing factors, not just the number of records affected.

This is important, because we have found that the cost of a data breach doesn’t follow a linear model. In reality, the cost per record falls as the number of records affected increases. So instead of using a simple average, we have modelled how the actual cost varies with the number of records, which we believe provides a much more reliable indicator. The model can be used to estimate the cost of breaches experienced by any organisation, large or small.

Analysing the true cost of a breach

In the latest DBIR, Verizon security analysts used a new assessment model for gauging the financial impact of a security breach, based on the analysis of nearly 200 cyber liability insurance claims. The model accounts for the fact that the cost of each stolen record is directly related to the type of data and the total number of records compromised, and it gives a high and low range for the cost of a lost record (e.g. a credit card number or medical health record).

For example, the model predicts that the cost of a breach involving 10 million records will fall between $2.1 million and $5.2 million 95 percent of the time, and depending on circumstances could reach as much as $73.9 million. For breaches of 100 million records, the cost will fall between $5 million and $15.6 million 95 percent of the time, and could top out at $199 million.
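To make the shape of that model concrete, here is a minimal Python sketch of a log-log cost curve of the kind described above. The intercept and slope are illustrative placeholders, not the DBIR’s fitted values; the point is that a slope below 1.0 makes total cost grow sub-linearly, so cost per record falls as breaches get larger.

```python
# Minimal sketch of a log-log breach-cost model. The coefficients are
# illustrative placeholders, NOT the DBIR's fitted values.
import math

INTERCEPT = 1.6  # hypothetical: log10(cost) for a single-record breach
SLOPE = 0.75     # hypothetical: below 1.0, so cost per record declines

def expected_cost(records):
    """Point estimate of total breach cost for a given record count."""
    return 10 ** (INTERCEPT + SLOPE * math.log10(records))

for n in (1_000, 1_000_000, 100_000_000):
    cost = expected_cost(n)
    print(f"{n:>11,} records -> ~${cost:>13,.0f} total, ${cost / n:.4f} per record")
```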

The analysis clearly shows that an organisation’s size has no independent effect on the cost of a breach. The headline-making losses reported by many larger organisations arise because those breaches involve a higher number of records. Breaches with a comparable number of records have a similar cost, regardless of the size of the organisation.

When budgeting for and operating an information security program, accurately assessing what might happen and how much it will cost is critically important. Without reliable estimates, decision making becomes guesswork, and underspending, overspending, or even useless spending invariably results.

Regrettably, there is a large and glaring gap in the security industry when it comes to quantifying losses. Verizon has built a new approach to estimating loss, based on actual data and considering multiple contributing factors (not just the number of records). This was made possible through a new DBIR contributor, NetDiligence, which partners with cyber insurance carriers to aggregate data on cyber liability insurance claims.

From the data provided, we extracted 191 insurance claims involving the loss of payment cards, personal information, and personal medical records, with sufficient detail to challenge a few existing theories and test some new ones.

The established cost-per-record figure for data breaches comes from dividing the sum of all loss estimates by the total number of records lost. That formula put the cost at $201 per record in 2014 and $188 the year before.
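In code, that established calculation is just a single pooled ratio. The figures below are invented purely to illustrate the arithmetic:

```python
# The established cost-per-record calculation: one pooled ratio across
# all breaches. The figures here are invented for illustration.
loss_estimates = [4_500_000, 1_200_000, 730_000]  # per-breach losses ($)
records_lost = [18_000, 6_500, 3_200]             # per-breach record counts

cost_per_record = sum(loss_estimates) / sum(records_lost)
print(f"${cost_per_record:.2f} per record")  # ~$232 with these toy figures
```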

This cost-per-record model is often used by organisations, but it suffers from the ‘flaw of averages’, and it excludes large breaches (those exceeding 100,000 records). It is simple to calculate and apply, but it does not accurately fit real-world loss data.

If we apply the average cost-per-record approach to the loss claims data, we get a rather surprising amount: just 58 cents per record. That’s a far cry from the $201 figure cited above.

Part of the issue is that the existing model excludes breaches of over 100,000 records while including soft costs that don’t show up in the insurance claims data. Smaller breaches average out to much more per record than larger breaches do. For large breaches that compromise 100 million records or more, the cost per record can drop to just a cent or two.
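To see how a pooled per-record rate collapses once mega-breaches enter the pool, consider this toy calculation (the claim figures are invented, not the DBIR/NetDiligence data):

```python
# Toy demonstration of the 'flaw of averages': a pooled per-record rate
# computed on small breaches collapses once a mega-breach is included.
# All figures are invented for illustration.
small_breaches = [(5_000, 900_000), (20_000, 2_400_000), (80_000, 6_000_000)]
mega_breach = [(100_000_000, 15_000_000)]  # (records, total loss in $)

def pooled_rate(breaches):
    """Sum of all losses divided by the total number of records lost."""
    total_loss = sum(loss for _, loss in breaches)
    total_records = sum(records for records, _ in breaches)
    return total_loss / total_records

print(f"small breaches only:   ${pooled_rate(small_breaches):.2f}/record")
print(f"with one mega-breach:  ${pooled_rate(small_breaches + mega_breach):.2f}/record")
```

With these toy numbers the rate drops from roughly $89 per record to about 24 cents, mirroring the gap between the $201 figure and the 58 cents observed in the claims data.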

Is the new model accurate?

It is true that there are many factors contributing to the cost of breaches besides the number of records lost. Perhaps having a robust incident response plan helps, or keeping lawyers on retainer, or having tight contracts for customer notification and credit monitoring.

This means we need broad ranges to express our confidence in the output. On top of that, our uncertainty increases exponentially as the breach gets larger. For example, the model forecasts the average loss for a breach of 1,000 records to be between $52,000 and $87,000, with 95 percent confidence. Compare that to a breach affecting 10 million records, where the average overall loss is forecast to be between $2.1 million and $5.2 million. As the record count increases, the accuracy of the overall prediction decreases.
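One way to see why the dollar ranges widen is that a roughly constant-width interval on the log scale becomes a multiplicative range once converted back to dollars, so the absolute spread grows with the point estimate. The sketch below uses hypothetical coefficients and an assumed interval width, not the DBIR’s actual fit:

```python
# Sketch: a fixed-width 95% interval on the log10 scale turns into a
# multiplicative range in dollars, so the spread widens with breach size.
# Coefficients and half-width are hypothetical, not the DBIR's fit.
import math

INTERCEPT, SLOPE = 1.6, 0.75  # hypothetical log10-scale regression fit
HALF_WIDTH = 0.2              # hypothetical interval half-width (log10 units)

def interval(records):
    centre = INTERCEPT + SLOPE * math.log10(records)
    return 10 ** (centre - HALF_WIDTH), 10 ** (centre + HALF_WIDTH)

for n in (1_000, 10_000_000):
    low, high = interval(n)
    print(f"{n:>10,} records: ${low:,.0f} to ${high:,.0f} (spread ${high - low:,.0f})")
```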

To complicate matters, there are other factors involved besides the base record count. We can test many of these: Do insiders cause more loss than outsiders? Do lost devices lead to higher impact than network intrusions?

After countless permutations, we found many significant loss factors, but every single one of them fell away when we controlled for record count. This means that each technical aspect of a breach mattered only inasmuch as it was associated with more or fewer records lost, and therefore a higher or lower total cost.
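The sketch below illustrates what ‘controlling for record count’ means in regression terms. It uses synthetic data, deliberately constructed so that a candidate factor (an insider flag, in this hypothetical) correlates with breach size but has no direct effect on cost; adding it to a model that already includes record count then explains essentially no extra variance:

```python
# Synthetic illustration of 'controlling for record count': a factor that
# merely correlates with breach size adds almost nothing to a regression
# that already includes record count. Data and effect sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_records = rng.uniform(2, 8, n)            # breaches of 100 to 100M records
insider = (log_records < 4).astype(float)     # insiders skew towards small breaches
log_cost = 1.6 + 0.75 * log_records + rng.normal(0, 0.3, n)  # cost ignores the flag

def rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

base = np.column_stack([np.ones(n), log_records])
augmented = np.column_stack([base, insider])
print(f"RSS, records only:      {rss(base, log_cost):.1f}")
print(f"RSS, records + insider: {rss(augmented, log_cost):.1f}  # barely changes")
```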

Larger organisations posted higher losses per breach, but only because they typically lost more records than smaller organisations did. Breaches with equivalent record loss had a similar total cost, independent of organisational size. This theme played through every aspect of the data breaches we analysed. In other words, everything kept pointing back to records, and technical efforts to minimise the cost of breaches should therefore focus on preventing or minimising compromised records.

We are not saying record count is all that matters; it accounts for maybe half of the story. But it is the only relevant factor among the data points we have at our disposal. We have learned that while we can already build a better model than cost per record, it could be improved further by collecting more, and different, data about the impact of breaches, not just their technical specifics.

We believe this new model for estimating the cost of a breach is ground-breaking, although there is definitely still room for refinement. And it remains the case that it is almost always less expensive to put a proper defence in place than it is to suffer a breach. In today’s world, comprehensive security is not a business luxury; it is a daily necessity.

Robert Parker leads security solutions for the Asia-Pacific region at Verizon Enterprise Solutions. In this capacity, he is responsible for helping enterprises and government organisations manage risks from a wide range of threats and compliance requirements.
