Even though some dispute the efficacy of the private cloud, there is no denying the strong appeal the technology holds for mid-sized and large enterprises. Drawn by visions of rapidly scalable, highly flexible resource architectures, organizations are broadly rethinking their infrastructure plans to accommodate new cloud services.
A recent Novell survey highlighted the keen interest in the cloud among top IT executives. Nearly 90 percent say the private cloud is the next logical step in the virtualization process, and more than a third say mixed private/public cloud services are on the agenda for the coming year.
With that in mind, attention is turning toward the more practical aspects of private cloud computing; namely, how exactly to retool existing systems and infrastructures to prepare for a cloudy future.
One of the key components in this equation is storage. After all, if server and even desktop architectures are to meet the more fluid data requirements of the cloud, they will need much more robust access to stored data than is currently available on the typical SAN. Jim Damoulakis, CTO at GlassHouse Technologies, pinpoints a number of crucial functions that future storage platforms must provide to be cloud-ready. Among them are disk pooling, auto-tiering, replication and snapshotting, and high-bandwidth scale-up capability. The overall goals of any storage redesign should be more agile provisioning, support for multiple service profiles, and the ability to relocate data quickly, easily and, most importantly, transparently to the user.
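To make the auto-tiering idea concrete, here is a minimal sketch of the kind of policy such a platform might run behind the scenes. The tier names, thresholds and access-rate metric are all assumptions for illustration; no actual vendor policy is implied.

```python
from dataclasses import dataclass

# Hypothetical tiers, hottest first. The thresholds (accesses/day
# needed to qualify for a tier) are invented for this sketch.
TIERS = [
    ("ssd",  100.0),  # flash tier for hot data
    ("sas",   10.0),  # mid-tier spinning disk
    ("sata",   0.0),  # capacity tier; catch-all
]

@dataclass
class Volume:
    name: str
    accesses_per_day: float
    tier: str = "sata"

def retier(vol: Volume) -> str:
    """Place a volume on the hottest tier whose threshold it meets.

    In a real array this migration happens transparently to the
    user, which is the point of the 'transparent relocation' goal.
    """
    for tier, threshold in TIERS:
        if vol.accesses_per_day >= threshold:
            vol.tier = tier
            break
    return vol.tier

volumes = [Volume("db-logs", 500.0), Volume("archive", 0.5)]
placements = {v.name: retier(v) for v in volumes}
```

The essential property is that placement follows observed demand rather than a fixed assignment, so hot data migrates up and cold data drains to the capacity tier without operator intervention.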
This focus on storage for private cloud architectures has been driving some of the biggest industry moves of late. HP’s $2.4 billion purchase of storage firm 3PAR was all about bringing in the expertise needed to add flexibility to the StorageWorks platform so it can better compete against the EMC and Oracle lines that are already being optimized for virtual/cloud deployment. Some say HP overpaid, but as our own Rob Enderle pointed out recently, that purchase price will likely be only a small fraction of the revenue at stake as the entire IT industry retools for the cloud.
In the meantime, more flexible storage platforms are hitting the channel on a regular basis. Hitachi Data Systems recently unveiled what it calls a 3-D scaling system as part of the Virtual Storage Platform, with the express intent of providing massive scale-up capabilities for cloud services. Through the Hitachi Command Suite management system, users will be able to dynamically scale storage, computing and virtual resources, accommodating more than 5 million objects and 255 PB of virtual storage capacity.
But it’s not just the deep-pocketed vendors who are driving this capability. Smaller firms like CommVault are rapidly pulling together large-scale cloud platforms capable of pushing storage services across complex infrastructure. The company recently teamed up with Mezeo Software to combine the Simpana management stack with the Mezeo cloud storage platform, providing a ready-made backup and recovery solution for both public and private cloud services. The package features native REST support plus embedded deduplication and encryption designed to get cloud-based storage up and running quickly.
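The embedded deduplication mentioned above is typically built on content addressing: chunks are stored under a hash of their contents, so identical chunks are kept only once. The toy store below illustrates that general technique; it is a generic sketch, not a description of how Simpana or the Mezeo platform actually implement it.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept once.

    A generic illustration of deduplication for backup workloads;
    chunk size and hashing scheme are assumptions for this sketch.
    """
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # sha256 digest -> chunk bytes

    def put(self, data: bytes) -> list:
        """Store data, returning the list of chunk digests (a 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # no-op if already stored
            recipe.append(digest)
        return recipe

    def get(self, recipe: list) -> bytes:
        """Reassemble the original data from its chunk digests."""
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
payload = b"A" * 8192            # two identical 4 KB chunks
recipe = store.put(payload)
# The recipe references two chunks, but only one copy is stored.
```

For backup and recovery traffic, which is full of repeated data, this is why deduplicated cloud storage can cut the capacity and bandwidth bill so sharply: the second and subsequent copies of a chunk cost only a hash lookup.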
In all likelihood, the transition to cloud computing, whether public, private or hybrid, will play out over the next decade or so. The initial steps taken today, such as virtualization and high-speed networking, provide the basic framework for the advanced IT capabilities headed your way.
But as more and more cloud-based platforms hit the channel, it is clear that the big decisions will have to be made relatively quickly. If you haven’t already done so, now is the time to take stock of what you have and what you hope to accomplish in the cloud.