FPGAs Poised to Remake Enterprise Networking

The answer to networking needs could be in chips.

By Arthur Cole | Posted Oct 11, 2016

Chip developers are turning to the field programmable gate array (FPGA) for all manner of data processing needs, but it is arguably in networking that the technology can have the most profound impact.

As software defined networking (SDN) becomes the de facto standard for data communications infrastructure, the enterprise will likely encounter increasingly diverse networking needs – everything from bandwidth allocation and throughput to security and dynamic pathway management. While all of this is possible in software, it is often cheaper, and more effective, to implement in hardware, provided the silicon can be reprogrammed in at least a marginally dynamic fashion.
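
To make that software-versus-hardware trade-off concrete, consider the per-packet match-action lookup at the heart of most SDN data planes. The Python sketch below is purely illustrative, using invented class names rather than any vendor’s API; it is exactly the kind of simple, repetitive function that can run on a CPU today and migrate into reprogrammable logic when packet rates demand it.

```python
# Illustrative only: a toy match-action table of the kind SDN data planes use.
# Names (FlowRule, FlowTable) are hypothetical; this models the sort of
# per-packet logic that can run in software or be synthesized into an FPGA pipeline.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    dst_prefix: str   # e.g. "10.1." matches 10.1.0.0/16
    action: str       # "forward:port2", "drop", "mirror:ids"
    priority: int

class FlowTable:
    def __init__(self, rules):
        # Highest priority wins, as in typical match-action pipelines.
        self.rules = sorted(rules, key=lambda r: -r.priority)

    def lookup(self, dst_ip: str) -> str:
        for rule in self.rules:
            if dst_ip.startswith(rule.dst_prefix):
                return rule.action
        return "drop"   # default deny

table = FlowTable([
    FlowRule("10.1.", "forward:port2", priority=10),
    FlowRule("10.",   "mirror:ids",    priority=5),
])
print(table.lookup("10.1.4.7"))   # forward:port2
print(table.lookup("10.9.0.3"))   # mirror:ids
```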

Intel, natch, has been at the forefront of the FPGA wave since last year’s acquisition of Altera. The result, says Enterprise Tech’s George Leopold, is a multi-die SoC that integrates a monolithic FPGA fabric capable of supporting multiple networking protocols. Earlier this month, the company started sampling its newest FPGA, the Stratix 10, which is optimized for data-heavy applications spanning the data center and distributed IoT devices. The Stratix 10 doubles the core performance of previous chip generations, delivering 10 teraflops of single-precision floating-point performance and 10 Tbps of memory bandwidth.

The Stratix 10 also incorporates the quad-core 64-bit Cortex-A53 designed by ARM Holdings, which positions it to eventually take advantage of several new backplane technologies developed by ARM. These include the CoreLink CMN-600 coherent mesh network interconnect and the CoreLink DMC-620 dynamic memory controller, both of which are designed to help FPGAs, GPUs and other network acceleration devices handle the increasingly disparate workloads migrating to the cloud. The DMC-620, for example, supports up to eight channels of DDR4-3200 memory, plus 3D stacked DRAM, for upwards of 1 TB of capacity per channel. At the same time, the Agile System Cache intelligent data allocation system boosts data sharing between processors, accelerators and interfaces.
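
As a back-of-the-envelope check on those figures: DDR4-3200 moves 3,200 megatransfers per second across a 64-bit channel, so eight channels peak at roughly 205 GB/s of bandwidth alongside 8 TB of addressable capacity. A quick sketch of the arithmetic, using the cited numbers as inputs:

```python
# Back-of-the-envelope math for the eight-channel DDR4-3200 figure cited above.
# DDR4-3200 moves 3200 MT/s across a 64-bit (8-byte) bus per channel.
CHANNELS = 8
TRANSFERS_PER_SEC = 3200e6      # 3200 MT/s
BYTES_PER_TRANSFER = 8          # 64-bit channel
CAPACITY_PER_CHANNEL_TB = 1     # "upwards of 1 TB per channel"

per_channel_gbps = TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9    # GB/s
print(f"Per channel:  {per_channel_gbps:.1f} GB/s")                # 25.6 GB/s
print(f"Aggregate:    {per_channel_gbps * CHANNELS:.1f} GB/s")     # ~204.8 GB/s
print(f"Max capacity: {CAPACITY_PER_CHANNEL_TB * CHANNELS} TB")    # 8 TB
```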

An example of FPGAs in action is Microsoft’s Azure cloud, says PC World’s Blair Hanley Frank. The company recently installed the technology in hundreds of servers across 15 countries, where the devices support machine learning and other emerging applications. As explained by Azure CTO Mark Russinovich, the FPGAs are vital to supporting 40 Gbps networking by offloading what would otherwise be an enormous data burden on standard CPUs. And should throughput requirements truly start to push the envelope, as in the advanced neural networks that some large customers are starting to toy with, each FPGA card can communicate directly with others over standard NIC, PCIe and top-of-rack (ToR) infrastructure.
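
Some rough arithmetic shows why offload matters at 40 Gbps. Assuming minimum-size Ethernet frames and a nominal 3 GHz core, both assumptions made here for illustration rather than figures from Microsoft, a CPU would have only about 50 cycles to spend on each packet at line rate:

```python
# Rough illustration of the per-packet time budget a general-purpose core
# faces at 40 Gbps. The 3 GHz clock and minimum-size frames are assumptions
# for this example, not figures from the Azure deployment.
LINK_BPS = 40e9
CPU_HZ = 3.0e9                      # assumed core clock
# Minimum Ethernet frame on the wire: 64 B frame + 8 B preamble + 12 B gap.
WIRE_BYTES_MIN = 64 + 8 + 12

pps = LINK_BPS / (WIRE_BYTES_MIN * 8)
ns_per_packet = 1e9 / pps
cycles_per_packet = CPU_HZ / pps

print(f"{pps/1e6:.1f} Mpps at line rate")            # ~59.5 Mpps
print(f"{ns_per_packet:.1f} ns per packet")          # ~16.8 ns
print(f"{cycles_per_packet:.0f} cycles per packet")  # ~50 cycles
```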

FPGAs are also key to the development of the emerging Flexible Ethernet (FlexE) standards. A preliminary FlexE spec for 100 Gbps was put forward by the Optical Internetworking Forum earlier this year, while an upcoming 2.0 proposal is expected to incorporate 25, 200 and 400 Gbps rates. Multiple vendors are already working on FPGA solutions, says EE Times’ Rick Merritt, which should give organizations the ability to deploy next-generation ASICs and DSPs now with the knowledge that they can be reprogrammed later, once the specs are finalized. FlexE can not only define network characteristics between racks and across the data center but can also form the backbone of emerging data center interconnect (DCI) platforms running at speeds between 300 and 600 Gbps, and beyond.
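
To see how that reprogrammability maps onto the spec, FlexE 1.0 is commonly described as carving each bonded 100G PHY into twenty 5 Gbps calendar slots onto which clients are mapped, whether a 200G client bonded across several PHYs or a 25G sub-rate client. The helper below, with function names of our own invention, sketches that slot accounting:

```python
import math

# Illustrative sketch of FlexE-style bonding and sub-rating: the OIF FlexE 1.0
# scheme is commonly described as dividing each 100G PHY into 20 calendar
# slots of 5 Gbps, which bonded PHYs then share. Function names are ours.
SLOT_GBPS = 5
SLOTS_PER_100G_PHY = 20

def flexe_plan(client_rates_gbps):
    """Return slots and 100G PHYs needed to carry the given client rates."""
    slots = sum(math.ceil(rate / SLOT_GBPS) for rate in client_rates_gbps)
    phys = math.ceil(slots / SLOTS_PER_100G_PHY)
    return slots, phys

# A 200G client bonded across PHYs, plus two sub-rate 25G clients:
slots, phys = flexe_plan([200, 25, 25])
print(f"{slots} x 5G slots over {phys} x 100G PHYs")   # 50 slots over 3 PHYs
```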

Speed and throughput will always be critical in advanced data networks, but an increasing number of applications are demanding virtually unfettered flexibility as well. A chip that can be reconfigured on the fly goes a long way toward supporting streamlined, efficient networking environments for functions that require optimal hardware support in order to deliver the most value.

And the more flexibility that can be built into physical network infrastructure, the less hands-on support it will require in the face of changing application and data loads.

Arthur Cole covers networking and the data center for Enterprise Networking Planet and IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.
