Is an 'Open Cloud' Truly in the Cards?

By Arthur Cole | May 5, 2011
The steady stream of open virtualization/open cloud technologies is quickly gaining momentum, making it appear that a universal, infinitely scalable and hyper-dynamic compute environment is just around the corner.

But on closer inspection, are the latest open technology announcements actually less than they appear? And is that necessarily a bad thing?

The top announcement this week came from Red Hat and IBM, as they vowed to work together on a set of solutions based on the Kernel-based Virtual Machine (KVM) format. The chief claim to fame here is the ability to launch multiple versions of Linux and Windows on the same physical server. This is designed to foster even greater hardware consolidation in the enterprise, as well as improve data and resource management with the ability to scale from local servers to the public cloud.
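As an illustration of that consolidation claim, a KVM host can run Linux and Windows guests side by side on one physical server. Here's a minimal sketch using libvirt's virt-install tool; the guest names, ISO paths, and disk sizes are hypothetical, not part of the Red Hat/IBM announcement:

```shell
# Provision a Linux guest on the KVM host (names and paths are illustrative)
virt-install \
  --name rhel6-guest \
  --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/rhel6.img,size=20 \
  --cdrom /isos/rhel6.iso \
  --os-variant rhel6

# A Windows guest shares the same physical hardware
virt-install \
  --name win2008-guest \
  --ram 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/win2008.img,size=40 \
  --cdrom /isos/win2008.iso \
  --os-variant win2k8

# Both guests now appear in the hypervisor's inventory
virsh list --all
```

Because both guests run as ordinary host processes under a single kernel, consolidating workloads this way is largely a matter of provisioning, not of dedicating separate boxes to each operating system.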

This is certainly a step toward openness, but does it go far enough? Wouldn't a fully open environment include the virtual layer itself, so that users could shift workloads across the three leading hypervisor platforms: VMware's ESX, Citrix's XenServer and Microsoft's Hyper-V? That seems to be part of what VMware has in mind with its Cloud Foundry PaaS solution, but even here the impact may be limited. As primarily a development tool, Cloud Foundry provides access to open source tools like Apache Tomcat, Apache HTTP Server and Hyperic, and the company assures us that it will work across VMware- and non-VMware-based public clouds. That's very different, however, from saying ESX or vSphere will now interoperate with rival virtual platforms.

On the network level, the picture is a little brighter. Brocade this week launched the CloudPlex architecture that provides an open framework for building virtualized data centers from the ground up. The system provides what the company calls Virtual Compute Blocks that allow third-party vendors to build interoperable systems to form a flexible, scalable cloud environment. The package also supports a wide range of networking protocols like virtual private LAN services (VPLS), Fibre Channel over IP (FCIP) and FICON to enable broad connectivity between and among enterprise resources. Again, though, no word on whether such openness will extend to Cisco, HP or other archrivals.

And this gets us to the crux of the problem. Despite the high-mindedness of the open source community and the broad support that vendors of all stripes give to various open source organizations, the fact is that entrenched interests generally outweigh broader concerns. InfoWorld's David Linthicum correctly points out that a fully open, interoperable cloud would essentially require virtualization vendors to support the execution of their hypervisors outside of their own technology. Ask any of them whether they are willing to give up that level of operational and licensing control.

But this is partly what I mean when I say full-scale openness and interoperability are not necessarily in the best interests of the enterprise community. Sure, a fully functioning, infinitely accessible cloud certainly has its appeal, but let's face it -- such an environment would be a messy conglomeration of systems and resources that probably would not provide the optimal experience you're looking for.

Sometimes, a little proprietariness is a good thing, especially if it helps to smooth out the rough edges of data transfer between compatible but nonetheless dissimilar platforms. In the cloud, you'll gain access to all manner of systems and resources, and ultimately that access should make it easier to scale up the proprietary platforms already at your disposal.