If the current trends affecting the data center could be summed up neatly, we would call this the Age of the Service.
First at the software level, and soon after in hardware, services are fast becoming the norm in data environments because they provide a purely opex approach to scalability and dynamic flexibility, absolving the enterprise of the complicated and expensive need to constantly deploy new hardware.
When it comes to physical infrastructure, the service-based approach is only feasible through the broad adoption of virtualization. By recreating physical hardware in logic, the enterprise is free to contract for additional processing and storage as needed, even as it enjoys the fruits of highly dynamic, virtually limitless infrastructure.
Still, it’s a bit surprising that the introduction of true virtual networking through newly developed software-defined networking (SDN) protocols has not led to a broader discussion of Networking as a Service (NaaS).
While the concept of NaaS has been around for a while, most current platforms focus on wide area services. Telecommunications firms like Aryaka tout NaaS for WAN connectivity as a means to reduce or completely eliminate MPLS charges, coupled with dramatic improvements in bandwidth flexibility, deployment/provisioning and visibility.
But that may be changing. HP is quickly lining up cloud service providers (CSPs) behind its FlexNetwork Utility Advantage Program, which provides a pay-per-use model for local area network services leased from top telecommunications providers, running on HP hardware, of course. The platform is built on the company’s FlexNetwork architecture, which seeks to enable a more convenient way to scale up network resources in support of mobile communications, Big Data and rich media applications that are expected to dramatically increase the burden on traditional infrastructure over the next decade. As with most service offerings, LAN-oriented NaaS promises a more cost-effective path to network scalability, along with access to a steady stream of state-of-the-art technology, all at levels that closely match actual data requirements and help the enterprise dial back on over-provisioning network resources.
Part of the irony, though, is that while all of these magical things are happening on the virtual layer, they are having a real impact on the design and implementation of physical infrastructure. LSI Corp.’s David Sonnier, for example, recently spelled out how NaaS, SDN and other developments are producing a need for new SoC designs that are heavy on multicore and hardware acceleration technologies. As data continues to flood the network, processors are still responsible for the heavy lifting, regardless of whether they support traditional, virtual or cloud-based networking. Expect to see chips with increasing numbers of function-specific modules to handle traffic management tasks like encryption, signal processing, transcoding and packet classification, tasks that are simply too numerous and too complex to be handled higher up the stack.
It’s been said that as data environments become more cloud-like, the barriers between local and wide-area infrastructure are coming down. In the near future, users will likely pull data and applications from any number of sources, some even half a world away, with the same ease with which they call up files from their own hard drive. In that light, it shouldn’t come as a surprise that network services are starting to enhance, and may ultimately supplant, traditional data center infrastructure.
As a function of enterprise IT, then, expect networking to soon join the list of items, like servers and storage, that no longer require a hands-on approach to management and maintenance. Rather, the focus will shift to service and performance levels, where failure does not necessarily require the repair or replacement of physical resources, but rather the isolation and reconfiguration of software.