Enterprises are quickly gaining cloud experience across a wide variety of public, private and hybrid infrastructures. But as IT learns to manage the mechanics of the cloud, other minds are beginning to ponder a broader question: now that we have a functioning cloud, how do we make the best use of it?
For many, the immediate answer is data bursting. It turns out that the cloud addresses the most pressing problem facing data centers today: finding a low-cost way to manage relentlessly increasing data loads. Through bursting, enterprises will be able to shift those loads, even for critical apps, through on-demand provisioning of cloud resources.
Critical to this process, though, is the seamless transfer of data from one set of resources to another. The most effective way to accomplish this, according to eWeek’s Cameron Sturdevant, is through a public/private bridge layer that allows both internal and third-party platforms to forge a common infrastructure. Preferably, the bridge will conform to one or more of the Open Data Center Alliance (ODCA) usage models for virtual machine portability. In this way, you can avoid latency-inducing conversions between VM formats like VMDK and AMI.
A number of bridging platforms have already hit the channel. Cloud.com, for one, recently teamed up with RightScale to provide automated, on-premise network, security and load management to enable bursting to popular public clouds like Rackspace and Amazon Web Services. At the same time, heavyweights like HP are building burst capabilities directly into their cloud platforms. The company has incorporated something it calls “dual cloud bursting” into its CloudSystem platform, described as the ability to provision and scale resources across public and on-site cloud models.
At the moment, most bursting scenarios involve relatively well-established data patterns, says Smart Data Collective’s Chris Barnes. With a known demand curve, IT managers at least have a heads-up when it comes to provisioning cloud resources. For more unpredictable loads, there is an argument for hosting the environment entirely in the cloud and then shifting it to internal infrastructure once resource requirements are better understood.
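The "known demand curve" case can be sketched in a few lines of Python. Everything here is a hypothetical illustration, not any vendor's API: the capacity figure, the forecast data and the `plan_bursts` helper exist only to show the shape of the decision IT managers get to make ahead of time.

```python
# Hypothetical sketch: given a forecast demand curve, find the windows
# where load exceeds on-premise capacity and how much overflow would
# need to burst to the public cloud. All numbers are illustrative.

ON_PREM_CAPACITY = 100  # requests/sec internal infrastructure can absorb


def plan_bursts(demand_curve, capacity=ON_PREM_CAPACITY):
    """Return (hour, overflow) pairs for each forecast point whose load
    exceeds on-premise capacity."""
    return [(hour, load - capacity)
            for hour, load in demand_curve
            if load > capacity]


# A known curve gives a heads-up: cloud resources can be provisioned
# ahead of the 09:00 and 14:00 peaks rather than in reaction to them.
forecast = [(8, 60), (9, 140), (12, 90), (14, 180), (20, 40)]
print(plan_bursts(forecast))  # -> [(9, 40), (14, 80)]
```

With an unpredictable load, no such forecast exists, which is the case for starting in the cloud and repatriating the workload later.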
The ultimate bursting goal is an environment that can perform an automatic load transfer so seamless as to be unnoticeable outside the IT department. There’s no reason to think that can’t happen, although it will probably require a bit more legwork at this point.
System integration within the enterprise infrastructure is complicated enough already. Bringing in someone else’s environment adds several orders of magnitude to that complexity.