06-19-14 | Blog Post
In my last blog post I made the case that data is money. So it’s important to have a strategy for data protection just as you would for cash management.
It helps to have a framework when developing a strategy. One such framework is the Data Protection Spectrum.
Traditionally, costs grow exponentially as you move from “Not Never” on the left of the spectrum to “Always On” at the far right. But the cloud has completely disrupted that exponential cost growth. Here’s how.
Doing absolutely no backup of your data means you aren’t even on the spectrum. I call that the “Never” scenario: if asked when services will be restored, your only honest answer is “never,” and then you’re in really big trouble. Doing the bare minimum backup or snapshot at least gets you on the data protection spectrum, but only at the “Not Never” scenario, which means your answer would be “I don’t know, but at least it’s Not Never.”
At the other end of the spectrum is data that is highly distributed and replicated in real time, with the ability to lose an entire hardware stack or application and still maintain operations with no interruption. This extreme level of resiliency is achievable, but it is exponentially more expensive than “Not Never.”
Most organizations need to be somewhere in between, and this is where the framework can really help. How much data protection is enough? Which processes can you afford to live without, and which ones do you absolutely have to have? What does it cost to lose access to one of those systems? What budget do you have to protect it? These are all questions a skilled business continuity or disaster recovery expert can help you answer, and then use to make the business case for where you should sit on the spectrum.
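To make that business case concrete, a rough back-of-the-envelope comparison helps. The sketch below uses entirely hypothetical figures (the tier names, protection costs, downtime hours and per-hour outage cost are illustrative assumptions, not numbers from this post) to compare the total expected annual cost of a few points on the spectrum.

```python
# Back-of-the-envelope comparison of points on the data protection spectrum.
# All figures are hypothetical illustrations, not benchmarks.

# (tier name, annual protection cost in $, expected downtime hours per year)
tiers = [
    ("Not Never (offsite backup only)",     5_000, 72.0),   # days to restore
    ("Warm standby in the cloud",          30_000,  8.0),   # hours to spin up
    ("Always On (multi-site, real-time)", 250_000,  0.5),   # near-zero outage
]

COST_OF_DOWNTIME_PER_HOUR = 10_000  # what an hour of outage costs the business

for name, protection_cost, downtime_hours in tiers:
    expected_loss = downtime_hours * COST_OF_DOWNTIME_PER_HOUR
    total = protection_cost + expected_loss
    print(f"{name:38s} protection ${protection_cost:>9,.0f}  "
          f"expected loss ${expected_loss:>9,.0f}  total ${total:>9,.0f}")
```

Roughly speaking, the right spot on the spectrum is the tier that minimizes protection cost plus expected loss, adjusted for how much risk the business is willing to carry.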
But one thing is clear: the advent of virtualization and the maturing of the cloud industry have dramatically shifted the cost curve for the middle of the spectrum. It’s still extremely difficult and expensive to run multiple systems of record at multiple geographically dispersed sites with sufficient global load balancing to deliver the solutions at the far right of the spectrum. And “Not Never” is already satisfied by a plethora of commodity offsite providers like Mozy, Carbonite or Druva.
But what about the organizations for whom “Not Never” isn’t good enough, but who can’t afford, and don’t need, the complexity of “Always On” multi-site, real-time production?
A big driver of cost as you move to the right on the data protection spectrum has been the redundant hardware, network connectivity and processes required, infrastructure that is seldom, and hopefully never, used. The traditional cold, warm and hot site approaches to disaster recovery, which sit on the spectrum roughly in that order from left to right, resulted in significant idle IT assets. So one has to ask: who already has idle IT assets at the ready? Well, the entire cloud service provider industry is exactly that, idle IT assets available to you as a service, with pretty rapid deployment cycles. It’s a perfect match for IT, and it lets you move to the right on the spectrum with significantly lower capital costs.
By backing up your data to an existing cloud or IT infrastructure provider, you effectively get access to their entire unused capacity (which they always have), at the ready in case of disaster, without having to pay for it. So for what used to cost only a bit more than “Not Never,” you get access to what effectively used to be a cold disaster recovery site.
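As a concrete illustration of how simple that move can be, here is a minimal offsite backup sketch, assuming an S3-compatible object store at your provider; the bucket name, source directory and schedule are hypothetical placeholders rather than any specific product’s setup.

```python
# Minimal nightly offsite backup sketch: tar a data directory and push the
# archive to an S3-compatible object store. Bucket name and paths are
# hypothetical placeholders.
import tarfile
from datetime import datetime, timezone

import boto3  # assumes provider credentials are configured in the environment

SOURCE_DIR = "/var/lib/app-data"          # the data you cannot afford to lose
BUCKET = "example-offsite-backups"        # hypothetical bucket at your provider
ARCHIVE = f"/tmp/backup-{datetime.now(timezone.utc):%Y%m%d}.tar.gz"


def make_archive() -> str:
    """Compress the source directory into a dated tarball."""
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(SOURCE_DIR, arcname="app-data")
    return ARCHIVE


def upload(path: str) -> None:
    """Ship the archive offsite; the provider's idle capacity does the rest."""
    s3 = boto3.client("s3")
    s3.upload_file(path, BUCKET, path.rsplit("/", 1)[-1])


if __name__ == "__main__":
    upload(make_archive())
```

Run on a nightly schedule, something this simple gets you past “Never,” and because the copy already lives on the provider’s own infrastructure, a cold-site style recovery is within reach when you need it.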
In the next blog post, I’ll explain how you can put together an offsite backup service with a cloud, colocation and managed service product suite to very cost effectively establish a full set of DR options.
RELATED CONTENT:
Data is money: Just as money belongs in a bank, data belongs in a data center
Disaster Recovery white paper