I recently came across a very interesting question at ITToolbox.com which inquired:
As larger data centers continue their build-outs, they begin to pose a major risk due to sheer size. It’s also becoming more apparent that a smaller, distributed infrastructure is favored among IT professionals. An April 2nd, 2009 FBI raid on a Texas data center that hosted cloud services left 50 businesses offline at 9:00 AM and over 200 businesses adversely affected before noon. In the earlier part of the last decade, mega data centers were an appropriate solution due to the emphasis on physical hardware. However, as the industry has moved further toward a services-oriented approach, including virtualization and cloud computing, we see less need for major hardware deployments. This will also shape the future of the large mega data center. What is the future of the data center, and do you prefer a large data center or a smaller distributed infrastructure that mitigates risk and affords better visibility?
It got me thinking and inspired the response below:
I’m reminded of the inherent resiliency designed into the Internet by ARPA, the agency that funded its predecessor, ARPANET. The whole concept was to have such ubiquitous connectivity that no single node failure (or even multiple failures) could disrupt operations.
That same concept seems to make sense for data centers as well. A data center is, after all, a node. So why would you want just a few mega-data centers?
One reason often cited is the per-unit cost savings of a larger facility. If you need a mega-data center and you need all of it at once, that argument may make sense. If you don’t, though, the capacity above your utilization curve is idle capital, and its cost can easily eat up any per-unit operating savings you were counting on.
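To make that trade-off concrete, here is a minimal sketch of the argument. All numbers are hypothetical assumptions for illustration (a 20% per-unit discount for the mega build, demand ramping linearly over ten years), not real data-center costs:

```python
# Hypothetical cost sketch: build full capacity on day one (with a per-unit
# discount) vs. building capacity incrementally as demand grows.
# Every figure below is an illustrative assumption, not real pricing.

def mega_build_cost(capacity_kw, capex_per_kw, opex_per_kw_year, years):
    """Pay capex for full capacity up front; opex runs on the whole facility."""
    return capacity_kw * capex_per_kw + capacity_kw * opex_per_kw_year * years

def incremental_build_cost(demand_growth_kw, capex_per_kw, opex_per_kw_year, years):
    """Add a tranche of capacity each year to match demand; opex runs only
    on capacity actually built so far."""
    total = 0
    built = 0
    for year in range(years):
        built += demand_growth_kw                 # capacity needed this year
        total += demand_growth_kw * capex_per_kw  # capex for the new tranche
        total += built * opex_per_kw_year         # opex on what exists so far
    return total

# Assume demand grows 1,000 kW/year for 10 years (10,000 kW at the end),
# and the mega build earns a 20% per-unit discount on both capex and opex.
mega = mega_build_cost(10_000, capex_per_kw=8_000, opex_per_kw_year=800, years=10)
incr = incremental_build_cost(1_000, capex_per_kw=10_000, opex_per_kw_year=1_000, years=10)
print(f"mega build:        ${mega:,.0f}")    # $160,000,000
print(f"incremental build: ${incr:,.0f}")    # $155,000,000
```

Under these assumed numbers the incremental build comes out cheaper despite paying 20% more per unit, because it never carries the cost of unused capacity above the utilization curve. Different discounts or demand ramps can flip the result; the point is only that the per-unit saving is not automatically decisive.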
And if you really do need a mega-data center, then for resiliency you need more than one, so you’re back to the same multi-node model anyway.
One caveat to the entire concept, though, is the same assumption underlying the Internet’s design: it presumes extremely high-quality, fast, and affordable connectivity. Without that, you can’t easily spread loads between data centers.
Unfortunately, very high-speed, high-quality connectivity is still very expensive. So the answer to how “distributed” a network of data centers should be becomes quite a bit more complex than a purely financial or resiliency design exercise.
At the end of the day, that’s why we believe there is such huge demand for outsourced data center services. If you’re not in the business, why would you want to spend time and risk capital on getting the right answer?
Where do you see the future of the data center going?
Otava provides the secure, compliant hybrid cloud solutions demanded by service providers, channel partners and enterprise clients in compliance-sensitive industries. By actively aggregating best-of-breed cloud companies and investing in people, tools, and processes, Otava’s global footprint continues to expand. The company provides its customers in highly regulated disciplines with a clear path to transformation through its effective solutions and broad portfolio of hybrid cloud, data protection, disaster recovery, security and colocation services, all championed by an exceptional support team. Learn more at www.otava.com.