Adapting Legacy Networks to the Cloud

By Arthur Cole | May 18, 2012
http://www.enterprisenetworkingplanet.com/datacenter/datacenter-blog/adapting-legacy-networks-to-the-cloud.html

What is the best way to design network architectures for the cloud? That is the million-dollar question as cloud computing quickly evolves into the new normal for IT.

No network at all?

As I've mentioned before, many of the new Flash storage proponents say the best network is no network at all. Simply load servers or near-line arrays with high-capacity solid state drives (SSDs) and run it all through the PCIe interface. You get a high-speed storage infrastructure at a fraction of the cost of a full-blown SAN without all those switches, adapters and lengthy cable runs.

However, most enterprises have poured a lot of sweat and gold into their existing storage infrastructure and wouldn't mind just a little more return as the cloud era gets underway. But as the Wicked Witch once said, "These things must be done delicately … or you hurt the spell."

When it comes to traditional Ethernet SANs, the holy grail would be an infrastructure built on commodity hardware but packed with enough specialized software to enable the kind of flexibility and scalability that cloud computing requires. A start-up called Coraid is moving in this direction with the EtherDrive, a connectionless parallel storage array that delivers up to 1.8 gigabytes per second (GB/s) of throughput using raw ATA over Ethernet (AoE) architectures.

With each EtherDrive utilizing its own processor and providing addressability to any host on the network, the system can easily scale to more than 65,000 shelves, providing a mix of SSD, SATA and SAS connectivity plus multiple RAID configurations. And the company says it can deliver a much lower price point than traditional Fibre Channel, iSCSI or FCoE designs.
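That shelf count falls out of how AoE addresses its targets: each drive is identified by a 16-bit shelf (major) number and an 8-bit slot (minor) number carried in a compact header right after the Ethernet header, with no IP or TCP layer at all. As a rough sketch (based on the published AoE specification; the helper function is illustrative, not from any Coraid code):

```python
import struct

AOE_ETHERTYPE = 0x88A2  # registered EtherType for ATA over Ethernet


def aoe_header(shelf, slot, command=0, tag=0):
    """Pack the 10-byte AoE header that follows the Ethernet header.

    Layout per the AoE spec:
      ver/flags (1 byte) | error (1) | major/shelf (2) | minor/slot (1)
      | command (1) | tag (4)
    """
    ver_flags = 1 << 4  # protocol version 1 in the high nibble, no flags
    return struct.pack(">BBHBBI", ver_flags, 0, shelf, slot, command, tag)


# Targets are addressed as shelf.slot -- the 16-bit shelf field is what
# lets a single AoE fabric scale to more than 65,000 shelves.
hdr = aoe_header(shelf=7, slot=2)
```

Because the whole exchange rides directly on Ethernet frames, any host on the same layer-2 segment can reach any shelf, which is what makes the "connectionless parallel" scaling work.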

Fibre and cloud: fire and ice?

Of course, don't try to tell companies like Brocade that Fibre Channel has no place in the cloud. The company recently unveiled a new addition to its suite of cloud-optimized 16 Gbps FC solutions. The entry-level 6505 switch is available in 12- or 24-port configurations in a 1 RU footprint and adheres to the company's inter-chassis link design, which allows for flatter data fabrics when joined with higher-end systems like the 6510 switch and the DCX 8510 backbone system.

Brocade and others have long made the argument that as virtual and cloud architectures place greater demands on storage and storage networking, the need for higher-end technologies like Fibre Channel will grow, not diminish.

Still others argue that since the cloud rests on flexibility and the provision of dynamic data environments, the time has finally come for a true open source revolution. Organizations like the Open Networking Foundation, developers of the OpenFlow protocol, say that proprietary architectures simply can't keep up with the demands of cloud users. After all, it's no coincidence that early cloud providers like Google and Amazon devised their own network architectures rather than pull off-the-shelf solutions from Cisco or HP.
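The core idea OpenFlow standardizes is the match-action flow table: the switch forwards packets by consulting prioritized rules installed by an external controller, and anything that misses the table is punted back to that controller. A toy illustration of the model (the class and field names here are hypothetical, and this sketches the concept, not the OpenFlow wire protocol):

```python
class FlowTable:
    """A minimal match-action table in the OpenFlow style."""

    def __init__(self):
        self.rules = []  # list of (priority, match_dict, action) entries

    def add_rule(self, priority, match, action):
        # Keep rules sorted so the highest-priority match wins.
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])

    def lookup(self, packet):
        for priority, match, action in self.rules:
            # A rule matches if every field it names agrees with the packet;
            # an empty match dict acts as a wildcard catch-all.
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        # Table miss: in OpenFlow the packet is sent to the controller,
        # which decides what new rule (if any) to install.
        return "send-to-controller"


table = FlowTable()
table.add_rule(10, {"dst_ip": "10.0.0.5"}, "output:port2")
table.add_rule(1, {}, "drop")
```

The appeal for cloud operators is that the table's contents are programmable from outside the box, so forwarding behavior can change as fast as workloads do, without waiting on a vendor's firmware cycle.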

At the same time, some of the most notorious proprietary developers on the planet are starting to make nice with the open source community now that the cloud is dangling real dollar signs before their eyes. Witness Microsoft's recent support of the Remote Direct Memory Access (RDMA) protocol in Windows Server 2012. RDMA has received strong backing from the OpenFabrics Alliance where it provides the framework for handling large file-based workloads in virtual and cloud environments built on Ethernet and InfiniBand networks.

As part of the deal, Microsoft will enable RDMA for remote file access in Server Message Block v3 (SMB 3.0), as well as a kernel-bypass RDMA API that should enhance third-party development of OpenFabrics software (OFS) applications.

As prior generations of enterprise technicians will tell you, however, open source does not necessarily mean cheaper, better or more flexible. It all depends on the level of cooperation among supporters and the degree to which your existing infrastructure can accommodate multi-vendor environments.

By nature, the cloud is intended to provide a melting pot of solutions from which enterprises or individual users can draw the most efficient and effective solutions. Legacy infrastructure can play a very large role in this new world, but it will have to change to keep up with the times.

Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.