Days like Melbourne Cup Day are almost synonymous with the concept of ‘downtime.’ A day like today truly stops the nation, with almost everyone tuning out from their day-to-day to tune in to Australia’s most expensive horse race of the year.
People will be placing their bets at their local pub, and ‘clocking off’ for the afternoon will be accepted, if not encouraged, by employers.
But while the nation pauses, there would be no celebrations if our data paused in a similar way. Our increasing reliance on data availability leaves little room for horsing around when it comes to staying always on and always connected.
While the races may be all fun and games, data security and availability are not something to be gambled with. Businesses depend on stable data delivery, and anything from a temporary outage to permanent data loss can set you dangerously off course. If you want to see a nation truly come to a standstill, a data outage will do the trick.
What would happen should a major data outage strike?
Thousands of businesses across Australia depend on a steady supply of data and access to serve their customers and maintain revenue streams. Across the economy, we are continuously putting more and more trust in technology and into the availability of data and digital service.
So what happens if this availability fails? How do we ensure business data is stable? How do we restore access to services as quickly as possible to minimise cost, consequence and impact?
Usually, an outage will be the result of a simple gap between user demand and what IT can deliver.
Beyond the (not insignificant) user frustration that ‘downtime’ can cause, there are serious business consequences.
Availability shortfalls are not only extremely common, but they’re costing businesses a lot of money globally. The average global cost of ‘downtime’ for mission-critical applications reaches over $100,000 per hour. Needless to say, there aren’t many companies out there that can easily shoulder these kinds of incidental costs.
Data availability is starting to stretch beyond the world of business and IT. As we connect more and more devices, our critical data sprawls and grows ever further, and data access outages can cause significant business consequences. Just think of the recent Telstra 3G outage, which left entire businesses unable to use their terminals.
This is not a problem that appears to be going away either. A survey we conducted last year found that unexpected downtime incidents are on the rise. And beyond these costs, there are broader implications for how unplanned downtime can affect digital transformation projects. The pressure to evolve business processes to meet the demands of an increasingly digital environment has never been greater, yet a majority of the organisations we spoke with said that downtime is stifling their digital transformation initiatives.
As more businesses take steps to update their existing systems to adapt to digital-first operations, the risk (and potential cost) of unexpected downtime increases. Changing and updating systems can be a delicate procedure, but is increasingly an essential one.
Even in the context of an unavoidable natural disaster, such as major flooding, a loss of data availability from an outage becomes less a prospect of lost dollars and more something far more serious.
In these cases, the everyday value of data is put into stark perspective – but it shouldn’t take a large-scale outage or a high-profile data disaster to make IT managers properly consider the importance of availability to their digital transformation plans.
If businesses want to commit to providing an ‘always on’ service to their customers, then they need to focus on the planning and implementation of their availability solutions and avoid the pitfalls of unplanned downtime.