OCP and Networking: From Small Beginnings…
The Open Compute Project, Facebook’s bid to simplify enterprise infrastructure in preparation for the hyperscale future, is gathering steam as organizations look to reposition themselves for the fast-paced, dynamic data environments headed their way.
Networking is, of course, a key element in any scale-out infrastructure, yet until recently OCP has had little to say on the subject. The odd thing is that while it would seem natural for a highly modular, massively scalable infrastructure to adopt the most advanced networking technology available, something along the lines of InfiniBand or even PCIe, much of the activity so far has centered on storage networking over plain old Ethernet and Fibre Channel.
In large part, this is driven by a need to link new OCP infrastructure to existing network environments, which the vendor community may desire just as much as the enterprises they serve. Seagate Technology, for example, showed off a new Ethernet Drive interface and a T-card development adapter for the Kinetic open storage platform. The idea is to allow OCP developers to build applications and systems that utilize Ethernet for the rack backplane but still conform to existing SAS footprints and specifications.
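The architectural shift behind an Ethernet drive is easy to sketch: instead of addressing blocks through a SAS controller, an application talks to the drive itself over IP using a key/value interface. The snippet below is a purely illustrative, in-memory stand-in for that model; the real Kinetic protocol exchanges protocol-buffer messages over TCP, and every name, address, and method here is a hypothetical assumption, not the actual Kinetic API.

```python
# Illustrative sketch only: models the key/value semantics of a
# Kinetic-style Ethernet drive. A real drive would be reached over
# TCP at its own IP address; this class keeps data in a dict instead.

class KineticStyleDrive:
    """Hypothetical stand-in for a drive addressed by key, not by block."""

    def __init__(self, address):
        self.address = address  # e.g. an IP:port endpoint, not a SAS target
        self._store = {}        # in-memory stand-in for the drive media

    def put(self, key, value):
        # On a real drive this would be a PUT message on the wire.
        self._store[key] = value

    def get(self, key):
        # A real GET round-trips over Ethernet to the drive itself;
        # no block layer or file system sits in between.
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)


drive = KineticStyleDrive("192.0.2.10:8123")  # documentation-range IP
drive.put(b"object-001", b"payload bytes")
print(drive.get(b"object-001"))
```

The point of the model is that the drive, not a controller, owns the mapping from keys to media, which is what lets an Ethernet backplane replace the SAS fabric while the enclosure keeps its existing physical footprint.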
Some in the OCP community are already lining up behind the OCP/Kinetic combination, most likely drawn to the idea of object-based storage for the highly dynamic, sync/share-oriented applications that will populate hyperscale environments. Hyve Solutions, a unit of Synnex Corp., has already embraced the Kinetic platform for its new purpose-built 1320 storage solutions. The company says that with an Ethernet backplane, it can build to the higher densities and power efficiencies called for in the OCP spec and then tailor systems to specific workloads, such as banking and pharmaceutical research. Hyve's platform incorporates 20 hard disk drives in a 1 RU chassis, although the company has hinted that larger versions may be available soon.
At the same time, Mellanox is contributing its ConnectX-3 Pro 40 GbE NIC to the OCP platform, enabling overlay network offload functions and RDMA over Ethernet in a low-power footprint. This will allow the OCP community to establish high-speed end-to-end network environments when scaling out converged infrastructure, again letting existing infrastructure tap into the hyperscale world while laying a broad foundation for the high-volume traffic of web-facing operations.
Elsewhere in the storage networking stack, QLogic has offered key Fibre Channel technologies for organizations requiring more thorough management and security functions in their scale-out architectures. QLogic has kicked in its QOE2562 8 Gb mezzanine adapter for deployment on Quanta Stratos S215-X1M2Z servers, optimized for rapid deployment in high-volume data ecosystems. And LSI has contributed its Nytro XP6200 PCIe flash accelerator card along with a new board design that brings 12 Gbps SAS to the Open Vault storage enclosure.
Still to come, of course, is top-of-rack switching technology that will put OCP on a collision course with Cisco, Juniper and other top networking honchos. Late last fall, OCP announced plans for a new ToR spec that draws on the expertise of Broadcom, Intel and other silicon merchants.
Once that is done, probably toward the end of this year, OCP will have all the foundational elements for an end-to-end server, storage and networking infrastructure capable of broad scalability and rapid deployment. This will make it even more difficult for the CIO to justify continued investment in legacy silo-based architecture, particularly as more and more enterprise applications migrate to the cloud.
The level of openness may vary in hyperscale infrastructure, but the drive toward increased density, efficiency and modularity looks all but certain.