The virtual desktop is often cited as the cure for what ails the enterprise. As users continue to turn to mobile client devices for their data needs, the ability to centralize desktop images away from the trusty desktop PC is seen as both cost-effective and highly productive.
But is it possible that the cure could be worse than the disease?
As too many enterprises have discovered to their dismay, the true cost of VDI is much higher than expected while the benefits, at the outset at least, are limited. It all has to do with the fundamental nature of data environments, which are rapidly evolving from static, largely isolated infrastructures to a more integrated, some would say organic, whole.
Storage is usually listed as the primary unexpected cost of desktop virtualization, and indeed the need for a substantially scaled-out storage infrastructure cannot be overstated. However, networking is often overlooked in the planning stage, only to put a real crimp in performance once the number of users rises to even moderate levels.
On a basic level, this is nothing more than a good old-fashioned bandwidth problem. Once the network has gone from simple storage/server/desktop operation to a virtual world of constantly shifting elements and vastly higher resource utilization, throwing a fleet of virtual desktop images on top is usually all it takes to bring things to a crawl. At a minimum, 10 Gigabit Ethernet (10 GbE) will be needed to support a working environment, and many leading organizations are already pushing toward 40 GbE and higher.
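To see why 10 GbE is a floor rather than a ceiling, a back-of-envelope calculation helps. The per-session figures below are illustrative assumptions (typical office VDI sessions are often quoted in the low single-digit Mbps range, with much higher bursts during multimedia playback or patch storms), not vendor guidance:

```python
# Back-of-envelope VDI bandwidth sizing.
# All per-session numbers here are illustrative assumptions.

def aggregate_demand_gbps(users, avg_mbps_per_session, burst_factor=1.0):
    """Estimate aggregate network demand in Gbps for a VDI deployment."""
    return users * avg_mbps_per_session * burst_factor / 1000.0

# Assume ~2 Mbps per typical office session, with 3x bursts
# (video, login storms, patch cycles) across the user base.
steady = aggregate_demand_gbps(2000, 2.0)
peak = aggregate_demand_gbps(2000, 2.0, burst_factor=3.0)

print(f"steady: {steady:.1f} Gbps, peak: {peak:.1f} Gbps")
# steady: 4.0 Gbps, peak: 12.0 Gbps
```

Under these assumed numbers, a single 10 GbE uplink handles the steady state but not the burst case, which is exactly the moderate-user-count cliff described above.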
Wider pipes are only the beginning
As client devices become more varied, enterprises may need to start building more intelligence into their networking infrastructure. Knowing what kind of device is requesting the desktop image makes it easier to tailor the network connection for optimal service without allocating so much bandwidth that other functions suffer. This also helps limit the number of tiers needed to service all users, preferably flattening the entire infrastructure to a single tier.
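One minimal sketch of that device-aware tailoring is a lookup from device class to a per-session bandwidth cap, so a phone never gets a full desktop-grade stream. The device classes and cap values here are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical device-aware bandwidth policy: cap each VDI session's
# bandwidth according to the class of device requesting the image.
# Classes and Mbps values are illustrative assumptions.

DEVICE_CAPS_MBPS = {
    "thin_client": 8.0,
    "laptop": 6.0,
    "tablet": 3.0,
    "phone": 1.5,
}

def session_cap_mbps(device_class):
    """Return the bandwidth cap for a session, defaulting conservatively
    when the device class is unrecognized."""
    return DEVICE_CAPS_MBPS.get(device_class, 1.5)

print(session_cap_mbps("tablet"))   # 3.0
print(session_cap_mbps("unknown"))  # 1.5 (conservative fallback)
```

The design choice worth noting is the conservative fallback: an unrecognized personal device gets the smallest allocation rather than an open pipe, which keeps bring-your-own-device surprises from crowding out other traffic.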
There’s also the distinct possibility that many users will want to access their desktop on personal devices. Anything for increased productivity, right? To enable this, you’ll need to either upgrade or deploy new network access control mechanisms, preferably integrating both wired and wireless infrastructure and capable of automated patch and licensing upgrades to avoid any unpleasant surprises following every software and OS update.
Then again, all this assumes that enterprises will be content to build virtual desktop environments on the same infrastructure that has served their data and application needs so well. But that doesn’t have to be the case. As I’ve mentioned before, new SAN-less infrastructure is starting to hit the channel, usually involving PCIe-based local NAND Flash storage, which alleviates over-burdened network storage pathways and replaces slow disk drives with high-speed solid state drives (SSDs). True, you still have to get the image out to the users, but retrieval from a centralized repository is now much quicker.
If anything, VDI is the technology that keeps on trying. Make no mistake, the barriers to widespread deployment have been and remain significant, but as enterprise infrastructure continues to remake itself into a more dynamic and flexible entity, its absence may prove more costly than its presence.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.