First SDN, Then Disaggregation
In the SDN era, enterprises will need ultra-flexible disaggregated architectures to remain competitive.
A key element in the drive for software defined networks is the idea of disaggregation. More than simple resource distribution, disaggregation incorporates the ability to pool disparate data resources – everything from servers and storage to individual cores and interconnects – in support of large, dynamic workflows.
Understandably, this level of functionality requires a high degree of cooperation across network architectures and up and down the hardware/software stack. This is currently beyond the scope of cloud-facing, multi-platform data infrastructure. But the idea is intriguing enough to put it front and center in both the vendor- and user-driven open source communities. The question is, can all the disparate elements needed to pull off a functioning disaggregated data environment come together in the spirit of mutual cooperation?
Juniper seems to think so. The company just launched a disaggregated version of its JunOS network operating system, which Juniper says will allow deployment of third-party network applications and services directly onto Juniper systems via the Open Compute Project's (OCP) Open Network Install Environment (ONIE), developed by Cumulus Networks. At the same time, the new QFX5200 switch will ship with built-in support for disaggregation across 25 and 50 GbE architectures, allowing users to create highly flexible network automation and programming stacks using consumption-based software licensing models.
Meanwhile, Linux developers like Canonical are reworking their networking platforms with disaggregation in mind. The new "snappy" version of Ubuntu Core provides data plane control from a variety of vendor solutions, which can be deployed as applications, or "snaps." The Core maintains a common management and security framework and enables functions such as transactional updates, automatic rollback and minimal server images. The platform has already gained the support of Quanta, Agema and others and has been integrated into Microsoft Azure and Google Cloud, allowing users to essentially mix and match multiple network tools and services across distributed architectures.
Consider disaggregation the “anti-convergence” of the datacenter, says eWeek’s Chris Preimesberger. In a recently posted slideshow, he, along with Vapor IO’s Cole Crawford and Andrew Cencini, highlights the key steps needed to create a “well-connected but intentionally decoupled IT system” that will finally allow the collective capabilities of multiple disparate data resources to be harnessed in a coordinated fashion. To get there, though, designers will have to embrace concepts such as transportable environments, Open Systems Interconnection (OSI) optimization, “embarrassing parallelism” and “eventual consistency.” Done correctly, a disaggregated data environment will enable higher utilization and greater agility than even the most scaled-out datacenter can provide.
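To make the "embarrassing parallelism" concept concrete, here is a minimal, illustrative sketch (not drawn from the slideshow): when per-item work shares no state, it can be fanned out across decoupled workers with no coordination at all. The record set and checksum function are invented for the example.

```python
# Embarrassing parallelism: each record is processed independently,
# so the work distributes across worker processes with no coordination.
from multiprocessing import Pool

def checksum(record: bytes) -> int:
    # Independent per-record work: no shared state, no ordering constraints.
    return sum(record) % 256

if __name__ == "__main__":
    # Synthetic records standing in for any independently processable units.
    records = [bytes([i] * 64) for i in range(100)]
    with Pool(processes=4) as pool:
        # Each record can land on any worker, in any order.
        results = pool.map(checksum, records)
    print(len(results))  # one result per record
```

Because no worker depends on another, capacity can be scaled simply by adding workers – the same property that lets a disaggregated system pool resources across machines.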
Indeed, as the digital economy evolves, disaggregation will quickly transition from “nice-to-have” to strategic imperative, says HP India’s Vikram K. Whether it is ecommerce, mobile banking or virtually any other application, businesses that fail to engage consumers in a very dynamic and immediate fashion will quickly fall to competitors that embrace the new paradigm. To break the brittle mold of current IT infrastructure and replace it with a flexible, disaggregated architecture, the enterprise will need to concentrate on highly modular, converged computing elements coupled with fluid, composable resource pools and workload-optimized infrastructure capable of delivering the right resources at the right time. In this way, data infrastructure becomes a proactive partner in the business process, rather than a reactive, and often inadequate, technology construct.
Network managers should also be aware that disaggregation will further erode the enterprise's direct control of the networking stack – primarily the physical infrastructure, but some higher-layer functions as well. This isn't as scary as it sounds, because once connectivity and other services are charted in the abstract, the underlying support structure shouldn't matter a great deal, as long as it is constructed and maintained properly. A house's décor isn't dependent on the concrete used to pour the foundation, after all.
And once the enterprise can incorporate the best tools from data facilities across the globe, its architectures will truly be limited only by imagination.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.