A private cloud can be a very attractive solution, but just like approaching that attractive person across the room, you must take care not to get shot down for your troubles.
The reason a private cloud is more attractive than, say, a public or even a hybrid cloud usually boils down to one word: trust. When push comes to shove, IT managers are not yet willing to trust their company’s data to resources that are not completely owned by their own company.
There is a problem, however, with successfully implementing private clouds: a serious misunderstanding of just exactly what constitutes a private cloud. Walk up to the average IT manager on the street (and they’re out there, rest assured) and ask them what it takes to make a private cloud.
“Virtualization!” they will answer with the confident air of someone who knows they’ve got the right answer. And, indeed they do. Virtual machines and the ability to deploy jobs and resources to said VMs are at the heart of any form of cloud computing. What else?
“Er, inside the firewall!” they will respond with slightly less gusto. They’re not as sure, and frankly they’ve got places to be. But they nailed this one as well: a cloud is private when it is completely within the corporate firewall. Typically this means on-site, though you can locate the actual physical hardware off-site and still meet the definition of a private cloud. Keep going, what else makes a private cloud?
Now, unfortunately, the conversation may go awry, and the answers will almost certainly vary. Half-hearted attempts to answer the question with “provisioning” or “management automation” may be heard, but not always. That’s too bad, since these glimmers of the concept of private cloud computing are on the right track.
What is happening right now in IT is that managers are salivating over the ease of use found with VMs, and are assuming that because it’s relatively simple to turn such machines on and off, they can easily set up a private cloud. Off they go, building an impressive architecture with hundreds of VMs, all ready to run any processing job. But when a special user request comes in and more machines are needed, it’s the IT team who will have to go in and re-provision the new resources, working with multiple consoles to get things set up.
Congratulations, they now have a virtual datacenter, little different from the physical datacenters from which they were trying to remove themselves in the first place.
What can make this worse is overcorrecting the problem. Administrators will often over-provision VMs, providing a fixed amount of memory, storage and processor cores regardless of the jobs that will be running. This will often lead to the problem of VM sprawl, which is costly in and of itself. According to “Understanding virtual sprawl,” a whitepaper from Embotics, “an environment of 150 VMs will have anywhere from $50,000 to $150,000 locked up in redundant VMs.”
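As a back-of-the-envelope illustration, the Embotics figures can be scaled linearly to other environment sizes. This is a hypothetical sketch for rough budgeting only, not a method from the whitepaper:

```python
# Rough sprawl-cost estimate based on the Embotics figures quoted above:
# $50,000 to $150,000 locked up in redundant VMs per 150-VM environment.
def sprawl_cost_range(vm_count):
    """Scale the whitepaper's 150-VM cost range linearly to vm_count VMs."""
    low = vm_count * 50_000 / 150    # about $333 per VM at the low end
    high = vm_count * 150_000 / 150  # $1,000 per VM at the high end
    return (low, high)

low, high = sprawl_cost_range(300)
print(f"300 VMs: ${low:,.0f} to ${high:,.0f} potentially locked up")
# A 300-VM shop could have six figures tied up in redundant machines.
```

Even a crude estimate like this makes the case for reclaiming idle VMs before they accumulate.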
Differentiating virtual datacenters from private clouds
It’s better, therefore, to avoid the problem in the first place, and avoid the scenario of recreating the datacenter altogether. This means planning for and implementing the rest of the toolset that differentiates a virtual datacenter from a private cloud.
First and foremost, automation must be in place. Everything in the cloud architecture must be as automated as possible: hardware configuration, network settings, VM provisioning, jobs management… if it’s involved with the cloud, ideally it should be manageable through single-console access.
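To make the single-console idea concrete, here is a minimal sketch of what end-to-end provisioning automation looks like: one entry point that runs every configuration step, including the tracking “paperwork.” The helper functions are hypothetical stand-ins for real hypervisor, network, and asset-tracking APIs, stubbed here with an in-memory inventory:

```python
# Hypothetical single-console provisioning sketch. In a real deployment,
# create_vm / attach_network / register_asset would call your hypervisor,
# network, and CMDB tooling; here they update an in-memory inventory.
inventory = {}

def create_vm(name, cpus, ram_gb):
    vm_id = f"vm-{len(inventory) + 1}"
    inventory[vm_id] = {"name": name, "cpus": cpus, "ram_gb": ram_gb}
    return vm_id

def attach_network(vm_id, vlan):
    inventory[vm_id]["vlan"] = vlan

def register_asset(vm_id, owner):
    # The "paperwork": record ownership so the VM can be tracked and
    # reclaimed throughout its lifecycle, rather than becoming sprawl.
    inventory[vm_id]["owner"] = owner

def provision(name, cpus, ram_gb, vlan, owner):
    """One entry point that automates every step, start to finish."""
    vm_id = create_vm(name, cpus, ram_gb)
    attach_network(vm_id, vlan)
    register_asset(vm_id, owner)
    return vm_id

vm = provision("build-agent", cpus=2, ram_gb=4, vlan=10, owner="dev-team")
print(vm, inventory[vm])
```

The point of the sketch is the shape, not the stubs: every step, from hardware-facing configuration to record-keeping, happens behind one call, so no one has to hop between consoles.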
“Automated delivery needs to incorporate both the configuration of the IT resources as well as any pre or post configuration steps that need to be completed to either make the IT compute resource usable for the requester or complete the “paperwork” required to monitor and track the resource throughout its life,” writes Richard Bourdeau, VP Product Marketing, DynamicOps. “Some think that it is a lot to ask to address the entire process and only seek to automate part of the process. So, many private cloud management solutions only address part of the process and focus only on the configuring of the machine vs. the end-to-end process.”
But while partial automation is better than nothing, it still will not be as efficient as a full-on automated solution.
“[The b]est way to avoid this trap is map out your process, soup to nuts,” Bourdeau adds. “Note where compromises cannot be made on automation and understand how the new zero-touch approach will affect your processes on a whole. The right vendor will address your needs and bring additional suggestions and functionality to the table.”
Besides lowering IT management overhead, automation will also enable your architecture to achieve the fourth and final private cloud requirement: letting customers self-provision machines based on their needs. That’s where the real notion of the cloud comes into play: Not only are machines easily provisioned thanks to automated tools, but users can manage much of the provisioning process themselves, taking even less of IT’s time.
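What self-provisioning looks like in practice is a small catalog of machine sizes plus automatically enforced policy, so requests never land on IT’s desk. The catalog entries and quota below are illustrative assumptions, not from any particular product:

```python
# Hypothetical self-service provisioning sketch: users pick from a catalog,
# and a per-user quota policy is enforced automatically, with no IT touch.
CATALOG = {
    "small": {"cpus": 1, "ram_gb": 2},   # illustrative sizes
    "large": {"cpus": 4, "ram_gb": 16},
}
QUOTA_VMS_PER_USER = 3  # assumed policy limit

user_vms = {}  # user -> list of VM ids they have provisioned

def self_provision(user, size):
    """Provision a cataloged VM for a user, enforcing the quota policy."""
    if size not in CATALOG:
        raise ValueError(f"unknown size {size!r}")
    owned = user_vms.setdefault(user, [])
    if len(owned) >= QUOTA_VMS_PER_USER:
        raise PermissionError(f"{user} reached the {QUOTA_VMS_PER_USER}-VM quota")
    vm_id = f"{user}-vm{len(owned) + 1}"
    owned.append(vm_id)
    return vm_id, CATALOG[size]

vm_id, spec = self_provision("alice", "small")
print(vm_id, spec)
```

Note that the quota check is exactly the kind of policy the next section argues must be planned out before users are given the keys.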
There’s a lot to think about when approaching the true private cloud, far more than picking up some cloud automation solutions, slapping them onto your infrastructure, and then letting your users have at it. Put in that context, it’s easier to imagine what must be considered: policies.
To reap the benefits of a true cloud architecture, you must make sure that all policies–from IT to the user–are clearly and succinctly planned out. Without policies in place, any cloud deployment will become a chaotic mess.
It is well worth expending the energy on planning up front, so you can attain maximum efficiency with your private cloud.