There are many benefits to application-centric infrastructure: streamlined architectures, lower costs, easier management, and faster, more responsive throughput and connectivity.
But app-centrism is not simply another step in the evolution of networking. It is a completely new way of looking at data and the data environment, about as closely related to today’s network as the elephant is to the hippo.
For one thing, networks will no longer be managed in terms of pathways, router/switch configurations or even layers, at least not as far as human operators are concerned. Rather, network managers will set the policies regarding resource utilization, access, communications and a host of other factors and let the applications themselves determine the best way to navigate across local and wide-area infrastructure.
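The shift from device-level configuration to policy can be sketched in miniature. The schema and enforcement function below are hypothetical, a minimal illustration of declaring intent (which tiers may talk to which, on what ports) and letting software resolve it, rather than hand-configuring paths:

```python
# Hypothetical sketch: operators declare intent as policy; software,
# not a human, decides how traffic is actually carried.

# A policy names application tiers and the contracts between them,
# instead of VLANs, ports or router hops.
policy = {
    "app": "orders",
    "tiers": ["web", "api", "db"],
    "contracts": [
        {"from": "web", "to": "api", "ports": [443]},
        {"from": "api", "to": "db",  "ports": [5432]},
    ],
}

def is_allowed(policy, src, dst, port):
    """Return True if the declared policy permits src -> dst on port."""
    return any(
        c["from"] == src and c["to"] == dst and port in c["ports"]
        for c in policy["contracts"]
    )

print(is_allowed(policy, "web", "api", 443))   # permitted by contract
print(is_allowed(policy, "web", "db", 5432))   # no direct contract, denied
```

The operator's job here is editing the `policy` structure; the underlying fabric is left to work out the forwarding details.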
And here lies the rub, because with such a dramatic change to existing systems and processes, conflicts are bound to occur that may make the goal of app-centrism seem like more trouble than it’s worth.
“The two biggest changes with application-centric infrastructure are that the operational model becomes more Dev/Ops-like,” said Ish Limkakeng, vice president of product management at Cisco. “Instead of a serial process to set up an application, networking, security, etc., you’ll see a more Dev/Ops-based model to define things, then push it into limited operations and then scaled operations.
“The other change is the pure automation of it,” he added. “Automation will stretch across the entire infrastructure, not just one piece of the stack. It probably won’t happen on Day 1, but it will definitely go that route.”
In Cisco’s case, the Application Centric Infrastructure incorporates a precise blend of legacy hardware and emerging software defined networking capabilities. The platform is designed to work on the well-known Nexus 9000 switch, which can function in either standard mode or ACI mode. This allows the enterprise to implement a powerful networking upgrade for today’s legacy applications while laying the foundation for more autonomous applications when they become available.
As with virtually everything enterprise-related these days, the biggest challenge is scale. Pushing a complex environment from the test bed to production environments is one thing, but scaling it out to thousands, if not tens of thousands, of nodes, perhaps across geographically distributed infrastructure, is quite another.
According to Andrew Clay Shafer, senior director of technology at Pivotal Software, the key to scale is a flattened network fabric, which can then be used to establish the basic pattern for applications to navigate across far-reaching infrastructure.
“By building a relatively flat L3 network, you can achieve scale while maintaining addressability through the IP layer,” he said. “It works like a domain name system (DNS) in that it can be very app-specific and applications know what is required to do their thing, like perform the dynamic discovery for the appropriate services. The goal is to stay with flexible, decoupled network architectures that provide the opportunity for both scale and portability.”
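Shafer's DNS analogy can be made concrete: on a flat L3 fabric, an application needs little more than a name and the IP layer to find its peers. A minimal sketch using Python's standard resolver (the service name is illustrative; a real deployment would use an app-specific internal name):

```python
import socket

def discover(service_name, port=None):
    """Resolve a service name to the set of IP addresses behind it.
    On a flat L3 network this is often all an application needs to
    locate its dependencies: addressability lives in the IP layer."""
    infos = socket.getaddrinfo(service_name, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# "localhost" stands in for an app-specific name such as "orders-db.internal"
addresses = discover("localhost")
print(addresses)  # the resolved addresses vary by host
```

Because discovery happens at resolution time, instances can move or scale behind the name without the calling application changing, which is the decoupling Shafer describes.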
In Pivotal’s case, the answer is to leverage emerging container technology in the Cloud Foundry platform to enable “application instances” (AIs) that serve as the basic building block of a broad, scaled-out application ecosystem. Data environments can then grow and change while maintaining a common unit of measure (the AI) for things like pricing and charge-back. The AI also defines the basic unit of scale across multiple application types, programming languages and frameworks, and offers better resource utilization and greater application density than VM-based architectures.
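The appeal of a single unit of measure is easy to illustrate. The rate and helper below are hypothetical, not Pivotal's actual chargeback model:

```python
# Hypothetical chargeback sketch: whatever the language or framework,
# every workload is metered in the same unit -- application instances.
RATE_PER_AI_HOUR = 0.05  # illustrative price, not a real quote

deployments = [
    {"app": "storefront", "language": "java",   "instances": 12},
    {"app": "reports",    "language": "python", "instances": 3},
    {"app": "gateway",    "language": "go",     "instances": 6},
]

def monthly_charge(deployment, hours=730):
    """Charge-back for one app: instances x hours x flat AI rate."""
    return deployment["instances"] * hours * RATE_PER_AI_HOUR

for d in deployments:
    print(d["app"], round(monthly_charge(d), 2))
```

A Java monolith and a Go microservice are billed identically per AI, which is what makes the unit useful for pricing across heterogeneous stacks.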
But far beyond simply building and designing app-centric infrastructure, ongoing management of this new free-wheeling environment is likely to be a challenge. For generations, the mandate of network management was to, well, manage the network. In the app-centric world, management of policies and the application programming interface (API) becomes paramount, and the network is left largely to fend for itself.
This will produce significant disruption not only within the network itself, but among the management staff, who will have to acquire entirely new skillsets in order to keep the data flowing. Programming and Dev/Ops will rise to prominence, while demand for skills like router and firewall configuration, traffic management and deep knowledge of the various network protocols will diminish.
But does that mean the world will belong to programmers? Will there be nothing left for those who specialize in physical or even virtual systems management?
Perhaps, said Mark Harris, vice president of Pluribus Networks, which is working with Dell and other network leaders to enable app-centric functionality on the Open Netvisor platform, but it won’t happen right away.
“Dev/Ops…requires a lot of intricate knowledge of the applications,” he said. “There is a huge number of enterprise-class applications that can take full advantage of the fabric approach to network programmability. These applications today must rely on complex and manual setup of VLANs and other parameters, often a delicate requirement. Those same applications are being retooled to take advantage of SDN principles, and it is just a matter of time before more and more have this inherent capability.”
But that leads to an even thornier issue: how much autonomy should a particular application have, even within a clearly defined abstract networking environment?
“I think most people will be reluctant to go fully autonomous,” said Pivotal’s Shafer. “What is happening now is that the infrastructure is becoming an application in that it is software-driven, so if you don’t do experiments in your development phase before you put something into production, then your production environment will always be in an experimental phase. In most cases, there is an extreme amount of testing; you’re not just throwing code into the real world.”
Indeed, this is the thinking at many hyperscale enterprises. Google is building advanced logic into its Borg cluster management system, while Netflix’s Chaos Monkey deliberately injects failure into its Amazon-hosted infrastructure and application layers, proving that even if a particular function crashes, it can be recovered or replaced quickly and easily without disrupting the broader data environment.
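The fail-and-recover loop behind chaos-style testing reduces to a few lines. This sketch is illustrative, not Netflix's implementation; the instance names and replacement scheme are invented:

```python
import random

def inject_failure(instances, rng):
    """Terminate one instance at random, then replace it immediately,
    mimicking the fail-and-recover loop of chaos-style testing."""
    victim = rng.choice(sorted(instances))
    instances.discard(victim)                     # the injected "failure"
    replacement = victim.rsplit("-", 1)[0] + "-r" # hypothetical naming
    instances.add(replacement)                    # automated recovery
    return victim, replacement

pool = {"web-1", "web-2", "web-3"}
rng = random.Random(0)  # seeded so the run is repeatable
killed, replaced = inject_failure(pool, rng)
print(killed, "->", replaced)
print(len(pool))  # pool size is unchanged: the failure was absorbed
```

The point of running this continuously in production, as Netflix does, is that recovery becomes a routinely exercised code path rather than a rarely tested emergency procedure.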
Few organizations have Google-scale infrastructure to work with, however. In fact, many are dealing with aging infrastructure that should ideally be integrated with app-centric deployments in order to deliver full value. Fortunately, most app-centric platforms utilize existing L2 and L3 protocols, but it pays to look deeply into any given solution to see how thoroughly it can accommodate the rest of the legacy stack.
As with most deployments of this magnitude, implementation will likely be the toughest phase, followed closely by the retraining needed to bring technicians up to speed on the new world of infrastructure management. If everything comes together as expected, there won’t be a need for separate storage, networking and server managers; everyone involved in resource provisioning and architecture configuration will possess expertise across all of these disciplines. Even then, human oversight will focus largely on policy and governance rather than actual provisioning.
No matter how difficult the transition, app-centric functionality is a must-have for enterprises that wish to remain relevant in the coming decade. Mobile communications are app-driven, and with the rising generation of knowledge workers more comfortable with their smartphones and tablets than the desktop, infrastructure that does not provide significant app flexibility will likely be avoided.
And hopefully, when all is said and done and the app-centric infrastructure is in place, data users will finally be able to look at IT as the way to achieve their goals and not an obstacle to overcome.