Occasionally, Enterprise Networking Planet is pleased to publish articles contributed by leaders in the field. Today’s contribution comes from Rich Napolitano, CEO of Plexxi, who shares his predictions for the enterprise network to come and his advice for achieving its requirements.
The dynamics of IT are changing. Just as many analysts and pundits predicted, big data, mobile and the Internet of Things are putting an ever-increasing strain on the network. While this change empowers end users around the world, most networks are unprepared for the added stresses on already-fragile infrastructures. The rest of the datacenter has sped ahead (servers have been virtualized, the software powering them has changed), but the technology powering our networks has remained essentially unchanged.
The shift we are seeing is what many, including IDC, call the “Third Platform Era” of IT innovation. It is defined by data and applications becoming increasingly distributed. The concept is as follows: As more and more data is created and transferred between resources, the network must be increasingly resilient, dynamic and agile to adjust to application demands accordingly.
You might be wondering: If we’re already in the Third Platform Era, what were the First and Second Eras?
Even though you may not be able to instantly pinpoint them, the footprints of each era are clearly defined if you look closely. The First Era was defined by monolithic IT infrastructure. This meant that everything – storage, compute, networking, applications – was contained in one central mainframe. Following that came the Second Era, marked by the introduction and meteoric rise of the personal computer, leading to increased strain on networking systems. During this era, we saw the growth of IT giants like Cisco and EMC.
That brings us to the Third Platform Era, where we stand today. As data and applications become increasingly distributed, there is an inherent architectural dependence on the interconnect, which enables these resources to work in concert to deliver application workloads. That interconnect — the network — must undergo its own transformation to meet the new needs of the Third Platform Era.
So, what characteristics are networks currently missing? What features does the network of the future need to succeed? In my opinion, there are five characteristics each must have:
1: Agility. Application agility is meaningless if the network cannot keep pace. Keeping pace means removing complexity, simplifying operations, and embracing automation to provide a dynamic and responsive infrastructure. Conditions in and around the datacenter are perpetually changing, placing a greater focus on resource management. Infrastructure agility must match business agility, requiring that the underlying infrastructure be responsive to the applications for which it is responsible.
How to achieve agility in your network: Network agility comes largely from the ability to change the network from the “wires up,” based on new business needs, without lots of complex, manual, error-prone and labor-intensive work (the antithesis of agility!). Think about developing a long-term plan for a fully automated agile network, with interim steps to get to that point. Start by looking at a cap-and-grow strategy from the existing legacy network and build out new applications on a network that can be fully automated from the wires up. The next step is to build hooks from the self-service IT domain into the virtualized overlay. When it is time to replace the physical network, look for one that provides full Layer 1 re-configurability with higher-level automation hooks.
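One way to picture "fully automated from the wires up" is declarative configuration: operators state the desired end state, and an automation layer computes what must change, rather than hand-applying error-prone commands device by device. The sketch below is a minimal, hypothetical illustration of that idea; the port names and settings are invented for the example and do not reflect any particular vendor's API.

```python
# Minimal sketch of declarative network automation: declare the desired
# state, then compute only the deltas needed to reach it. Port names and
# settings are hypothetical.

def plan_changes(current, desired):
    """Return the per-port settings that must change to reach the desired state."""
    changes = {}
    for port, want in desired.items():
        have = current.get(port, {})
        delta = {k: v for k, v in want.items() if have.get(k) != v}
        if delta:
            changes[port] = delta
    return changes

current = {
    "eth1": {"vlan": 10, "speed": "10G"},
    "eth2": {"vlan": 20, "speed": "10G"},
}
desired = {
    "eth1": {"vlan": 10, "speed": "10G"},  # already correct -> no change
    "eth2": {"vlan": 30, "speed": "10G"},  # VLAN move
    "eth3": {"vlan": 30, "speed": "25G"},  # new port to provision
}

print(plan_changes(current, desired))
# -> {'eth2': {'vlan': 30}, 'eth3': {'vlan': 30, 'speed': '25G'}}
```

Because the plan is computed rather than typed, it can be reviewed, repeated, and driven by higher-level hooks, which is the essence of the agility argument above.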
2: Scalability. In a dynamic application environment, it’s about more than scale. The challenge with scalability is less about how large the clusters are now, and more about how to gracefully scale for future deployments. If an infrastructure is architected for small deployments, how will it evolve as the number of nodes increases? Will it require a complete re-architecture at some point? The key point to remember is that scalability is less about the absolute scale and more about developing the path to a sufficiently scaled solution.
How to achieve scalability in your network: When evaluating a new network, look at the break points that are painful to scale beyond. For example, a traditional “leaf-and-spine” network typically scales up to the capacity of its spines. Beyond that, further scaling requires re-architecture and re-cabling, a painful and costly experience. Overbuilding the spines up front to avoid this creates waste and increases the likelihood that the technology will become obsolete before it is ever used at the scale needed. Instead, look for a network that grows as incrementally as possible and in the smallest possible unit (usually a number of ports). Finally, look for one that doesn’t necessitate major re-cabling or re-architecture at break points.
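The spine break point described above can be made concrete with back-of-the-envelope arithmetic: in a classic leaf-and-spine fabric where every leaf uplinks to every spine, the spine port count caps the number of leaves, and therefore the total edge ports. All figures below are hypothetical examples, not a recommendation.

```python
# Back-of-the-envelope sketch of the leaf-and-spine break point: the spine
# port count caps the leaf count, which caps total edge capacity.
# All port counts are hypothetical.

def max_leaves(spine_ports):
    # Each leaf consumes one port on every spine, so the number of
    # ports per spine bounds how many leaves the fabric can hold.
    return spine_ports

def edge_ports(leaf_count, leaf_ports, uplinks_per_leaf):
    # Ports left over on each leaf after its spine uplinks are edge ports.
    return leaf_count * (leaf_ports - uplinks_per_leaf)

spine_ports = 32          # 32-port spines
leaf_ports, uplinks = 48, 4  # 48-port leaves, one uplink to each of 4 spines

leaves = max_leaves(spine_ports)
print(leaves, edge_ports(leaves, leaf_ports, uplinks))  # prints: 32 1408
```

Past 1,408 edge ports in this toy example, the fabric hits the re-architecture wall the paragraph warns about, which is why incremental, small-unit growth matters.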
3: Integration. The infrastructure must work in an orchestrated fashion to deliver application experience. This means that compute, storage, networking, applications and all the surrounding systems must be capable of frictionless coordination. A well-integrated network should be able to affect different behaviors via policy and have the ability to efficiently fit available network resources to the most critical business needs. All of this needs to be done in ways that can easily be automated.
How to achieve integration in your network: As you move from bespoke applications and infrastructure to self-service utility computing for both test/dev and production workloads, think about how production workloads can be configured to express their intent directly to the infrastructure. Most new orchestration systems are starting to build in the ability to encapsulate workloads with policy metadata. This makes it important for network engineers to think about how to structure that policy and define its syntax and grammar, in addition to operational considerations like change control and governance. There needs to be a common, holistic way for workloads to tell the infrastructure what they need, within the bounds of what each workload is equipped to request. Put simply, define the interface points, policies and procedures between infrastructure and applications before individual application developers start to build their own ad hoc integrations.
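A workload "expressing its intent within bounds" might look like the sketch below: the workload carries policy metadata, and the infrastructure accepts only requests drawn from a vocabulary it has defined in advance. The field names and allowed values here are illustrative assumptions, not any orchestration system's actual schema.

```python
# Hypothetical sketch of encapsulating a workload with policy metadata.
# The infrastructure defines the grammar; workloads may only request
# what that grammar allows. Vocabulary is invented for illustration.

ALLOWED = {
    "bandwidth_class": {"bulk", "standard", "low_latency"},
    "isolation": {"shared", "dedicated"},
}

def validate_intent(intent):
    """Accept only requests within the vocabulary workloads may use."""
    for key, value in intent.items():
        if key not in ALLOWED or value not in ALLOWED[key]:
            raise ValueError(f"intent not permitted: {key}={value}")
    return intent

workload = {
    "name": "orders-db",
    "intent": validate_intent({
        "bandwidth_class": "low_latency",
        "isolation": "dedicated",
    }),
}
print(workload["intent"])
```

Defining this grammar up front is the "interface point" the paragraph argues for; without it, each application team invents its own ad hoc request format.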
4: Resiliency. Distributed systems only function if the interconnect is reliable. This means the network must be fault-tolerant and resilient.
How to achieve resiliency in your network: Resiliency and cost-effective redundancy are byproducts of good design. As applications become more distributed and scaled-out, they are inherently more resilient (unless they run on a network that takes the entire application down with it). As you transition to scale-out applications, consider moving from a redundancy perspective to a resiliency perspective. Leaf-and-spine networks typically create a single point of failure at the spine layer, which means that redundancy must be engineered into the system, usually with multiple spines, even if they are not all needed for capacity. There are alternative scale-out approaches that create additional network ports at the leaf layer completely “spinelessly,” and in these systems, if a single leaf goes down, the rest of the network continues to provide its connectivity services.
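The difference between the two failure domains can be checked with a toy connectivity test: in a minimal leaf-and-spine fabric, losing the spines strands every leaf, while in a "spineless" ring of directly interconnected leaves, losing one leaf still leaves the survivors connected. Both topologies below are deliberately minimal, hypothetical examples.

```python
# Toy illustration of spine vs. "spineless" failure domains, using a
# breadth-first search to test whether surviving nodes stay connected.
# Topologies are hypothetical minimal examples.

from collections import deque

def connected(adj, nodes):
    """True if all surviving nodes are mutually reachable."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        n = queue.popleft()
        for m in adj.get(n, ()):
            if m in nodes and m not in seen:
                seen.add(m)
                queue.append(m)
    return seen == nodes

# Leaf-and-spine: leaves talk only through spines.
leaf_spine = {
    "s1": ["l1", "l2", "l3"], "s2": ["l1", "l2", "l3"],
    "l1": ["s1", "s2"], "l2": ["s1", "s2"], "l3": ["s1", "s2"],
}
# Spineless ring: each leaf links directly to its neighbors.
ring = {"l1": ["l2", "l4"], "l2": ["l1", "l3"],
        "l3": ["l2", "l4"], "l4": ["l3", "l1"]}

print(connected(leaf_spine, ["l1", "l2", "l3"]))  # False: spines failed
print(connected(ring, ["l2", "l3", "l4"]))        # True: leaf l1 failed
```

The point is architectural, not algorithmic: resiliency falls out of a topology with no single choke point, rather than being bolted on as extra redundant spines.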
5: Security. With data at the center of the network, and with more and more organizations paying close attention to big data trends, security is more important than ever, not just for the infrastructure, but also for the applications and data.
How to achieve security in your network: In many ways, security and resiliency go hand-in-hand from an architectural perspective. Security is no longer about keeping the bad guys out; it’s about assuming they are already in and limiting their potential to inflict damage. New networking approaches must provide the ability to leverage the “tells” in the data to spot anomalies, along with the levers to do something about them, including immediate quarantining of bad hosts. Look for networks that not only provide visibility into network statistics (most will provide sFlow/NetFlow data feeds), but also integrate tightly with analytics applications and provide low-level remediation controls, such as Layer 1 containment.
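Leveraging the "tells" in flow telemetry might look like the sketch below: flag hosts whose traffic volume sits far from the population norm, then hand them to a quarantine step. The data, the z-score threshold, and the quarantine placeholder are all illustrative assumptions; a real system would feed sFlow/NetFlow records into proper analytics rather than a five-host toy sample.

```python
# Rough sketch of spotting an anomalous host from flow byte counts and
# handing it to a quarantine step. Data and threshold are illustrative.

import statistics

def anomalous_hosts(flow_bytes, z_threshold=1.5):
    """Return hosts whose byte counts sit far above the population mean."""
    values = list(flow_bytes.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [h for h, b in flow_bytes.items()
            if (b - mean) / stdev > z_threshold]

def quarantine(host):
    # Placeholder: a real system would push an ACL change or
    # Layer 1 isolation here instead of returning a string.
    return f"quarantined {host}"

flows = {"10.0.0.1": 1200, "10.0.0.2": 1100, "10.0.0.3": 980,
         "10.0.0.4": 1050, "10.0.0.5": 950_000}  # .5 is wildly anomalous

for host in anomalous_hosts(flows):
    print(quarantine(host))
```

The detection here is deliberately naive; the architectural point from the paragraph is the pairing of visibility (the statistics feed) with a remediation lever (the quarantine hook).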
Tectonic shifts that fundamentally restructure the whole IT landscape are rare. But such changes do happen, perhaps once in a generation. The very nature of IT — serving as the underpinnings for many businesses — does not lend itself well to frequent, massive upheaval. We are in the midst of one of those upheavals now, and by the time the transformation is complete, IT and networks will look very different from even five years ago. To keep up, the network must adjust accordingly. The five characteristics laid out above may not be the full story, but they are a step in the right direction.
Rich Napolitano is CEO of networking software and hardware vendor Plexxi.
Photo courtesy of Shutterstock.