Cloud-based architectures may be a boon for improving large-scale backup, but a senior university technology executive warns that it’s equally important to regularly confirm that recovery tools can meet recovery time objective (RTO) guidelines to ensure a seamless user experience.
It’s a lesson that the University of Canberra has learned over time as it progressively evolved its data-centre architectures from Microsoft Hyper-V, to a mixed Hyper-V/VMware environment, to the current state in which 90 percent of the university’s core systems are running on the Nutanix Enterprise Cloud OS and Acropolis hypervisor.
This included a progressive overhaul of its backup regime, which previously relied on storing quarterly backup tapes offsite but has recently moved to the Amazon Web Services (AWS) cloud, in partnership with longtime backup vendor Commvault.
With 280TB of data, more than 370 core databases and over 100 key applications, ensuring the continuity of data and services through these “radical” changes has forced university staff to work closely with technology vendors and service partners such as Wipro, associate director of vendor and operations Justin Mason told CSO Australia, to make sure the integrity of its environment was maintained.
“You try to keep on top of security as best you can,” Mason explained, “and that means doing your due diligence to double and triple-check that you are following best practices. You know that [cloud providers’] data centre and infrastructure services are going to meet certain security standards, but it’s a matter of making sure you do your due diligence on your side as well.”
A key part of that responsibility was not taking the security of the cloud platforms for granted, no matter how effective it might be.
Increasing adoption of cloud-based platforms for core functions has surfaced new security challenges, with a spate of incidents highlighting the risks of failing to apply security rigour to cloud environments. The recent McAfee IaaS Adoption and Risk Report noted that the 1000 surveyed organisations were, on average, aware of just 37 cloud misconfiguration incidents per month – but data monitoring suggests they are actually experiencing closer to 3500 such incidents.
Maintaining a “micro data centre” for disaster recovery purposes lets the university stage critical data so it can get back up and running quickly, but providing adequate cloud integration required more than simply knowing that the backups work.
Backups not only need to work, after all, but need to be restorable within timeframes that allow the university’s 14,000 students and 5000 staff to get their data back and keep working.
“It’s a shared responsibility around anything to do with putting workloads into the cloud,” Mason explained.
“It’s not just a matter of the backups working; it’s about them working in a way that allows you to be able to restore your data at multiple points in time to be able to ensure that your staff and students can get their data back in a window that doesn’t impact their productivity.”
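The discipline Mason describes, timing test restores and checking them against a defined recovery window, can be sketched as a simple drill check. The workload names and the four-hour RTO below are illustrative assumptions, not figures from the university’s actual environment or tooling:

```python
from datetime import timedelta

# Assumed recovery time objective for this sketch; real RTOs vary per workload.
RTO = timedelta(hours=4)

def meets_rto(restore_duration: timedelta, rto: timedelta = RTO) -> bool:
    """Return True if a timed test restore completed within the RTO."""
    return restore_duration <= rto

def drill_report(results: dict, rto: timedelta = RTO) -> list:
    """Given {workload: measured restore time}, list the workloads that breached the RTO."""
    return [name for name, took in results.items() if not meets_rto(took, rto)]

# Hypothetical results from a scheduled restore drill
results = {
    "student-records-db": timedelta(hours=2, minutes=30),
    "lms-file-store": timedelta(hours=5),
}
print(drill_report(results))  # → ['lms-file-store']
```

Run regularly, a check like this turns “the backups work” into the stronger claim the article is making: that restores reliably land inside a window that doesn’t hurt productivity.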