One of the ironies of modern network infrastructure is that while high levels of customization are taking root in virtual architectures, the physical layer is becoming increasingly generic.
But even that may be about to change, now that the ability to program, and thus customize, network hardware and even the underlying silicon is starting to creep into the latest solutions.
Western Digital’s Dave Montgomery noted recently that today’s hyperconverged infrastructure (HCI), which is usually built on white-box, modular hardware, presents a number of inefficiencies on its own, particularly around scalability: because each HCI module bundles compute, storage and networking in fixed ratios, scaling up one resource means overprovisioning the others. This is why manufacturers are already turning to composable-disaggregated infrastructure (CDI), which provides the flexibility to deploy varying amounts of compute, storage and networking independently. But even this is not enough without a virtual layer that can pool these resources so they can be tailored to the consumption needs of individual workloads. A programmable fabric is crucial to this effort because it provides the necessary connectivity to selected resources within and between clusters, and it can do so at the speed of a modern, automated data environment.
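To make the idea concrete, composing a workload-specific node in a CDI environment amounts to little more than an API call against a resource pool. The sketch below is purely illustrative: the endpoint, payload schema and compose semantics are hypothetical assumptions, not any vendor’s actual interface.

```python
# A hypothetical sketch of composing a logical server from disaggregated
# resource pools. The endpoint and payload schema below are illustrative
# assumptions, not any vendor's actual API.
import requests

COMPOSER_URL = "https://composer.example.com/api/v1/nodes"  # hypothetical endpoint

def compose_node(cpus: int, memory_gb: int, storage_tb: int, nics: int) -> str:
    """Request a logical node assembled from pooled compute, storage and NICs."""
    spec = {
        "compute": {"cpus": cpus, "memory_gb": memory_gb},
        "storage": {"capacity_tb": storage_tb, "class": "nvme-of"},
        "network": {"nics": nics, "fabric": "programmable"},
    }
    resp = requests.post(COMPOSER_URL, json=spec, timeout=30)
    resp.raise_for_status()
    return resp.json()["node_id"]

if __name__ == "__main__":
    # Tailor a node to one workload's needs; the fabric wires the selected
    # resources together, and they return to the pool when released.
    print(compose_node(cpus=16, memory_gb=128, storage_tb=4, nics=2))
```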
Still, without complete control over both the software and hardware stacks, the enterprise is limited in its ability to craft a fully customized data ecosystem. Even in fully open environments that expose programmable APIs, a network operator must still write code against a controller to implement basic functions like forwarding and routing. This is what the Faucet SDN controller hopes to address. Built on the OpenFlow protocol, Faucet essentially folds these basic functions into DevOps workflows. It performs a neat sleight of hand: it ingests a YAML configuration file that maps network topology to required network functions, then preprograms every device in the path with OpenFlow, so the final DevOps-generated product has a fully programmable network environment at its disposal right from the start, with no manual coding required. The solution was recently demoed at the ACM/IEEE Supercomputing Conference in Dallas, Texas.
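For a sense of how declarative this is, a minimal faucet.yaml might look like the following. The switch, VLAN and host names here are hypothetical, but the vlans/dps structure follows Faucet’s documented configuration format:

```yaml
# Minimal illustrative faucet.yaml; names and DPIDs are hypothetical.
vlans:
  office:
    vid: 100
    description: "office hosts"
dps:
  sw1:
    dp_id: 0x1                  # datapath ID of the OpenFlow switch
    hardware: "Open vSwitch"
    interfaces:
      1:
        name: "host1"
        native_vlan: office     # untagged port on the office VLAN
      2:
        name: "host2"
        native_vlan: office
```

From a file like this, Faucet generates and pushes the corresponding OpenFlow rules itself; the operator describes the desired topology rather than hand-coding forwarding behavior.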
But even this is not the be-all and end-all of network customization. For that, the enterprise will need to either design and commission its own hardware from ODMs in Asia, the way hyperscalers like Google and Facebook do, or deploy programmable silicon like the field-programmable gate array (FPGA) across its data footprint. Tech consultant Janakiram MSV sees the trend of custom silicon taking over functions now handled by software on general-purpose chips kicking into high gear in the coming year, as workloads like virtualization, graphics and HPC push performance and efficiency requirements. Microsoft’s Project Brainwave already runs on FPGAs, and Google has gone a step further with its custom-built Cloud TPU, so there is every expectation that such silicon will eventually make its way into the enterprise data center.
In fact, FPGAs are already showing up in key networking devices. Last week, Israeli SDN solutions provider Ethernity Networks delivered an undisclosed quantity of its 100 Gbps ACE-NIC100 FPGA SmartNICs to what it described as a “major Korean OEM.” The units feature the ENET Flow Processor FPGA software, which will allow the manufacturer to incorporate a range of customizable network functions into off-the-shelf servers, including hierarchical QoS, flow classification and protocol offloading. Ethernity said its client intends to use the devices for edge solutions, but they can also be deployed in data center, cloud and telecom environments.
All of this customization and programmability should put an end to the many strictures that have inhibited data performance thus far, ushering in an entirely new era of data productivity. Going forward, when somebody asks for a new capability, whether it’s a user, a knowledge worker or the CEO, IT will no longer be able to point to the network infrastructure and say, “Sorry, that’s not possible.”
Arthur Cole is a freelance journalist with more than 25 years’ experience covering Enterprise IT, telecommunications and other hi-tech industries.