Enterprises of all stripes have deployed solid-state drives to speed up data response times and improve overall IT productivity.
Of course, a faster drive is only as useful as the surrounding network infrastructure allows it to be. Out of necessity, most of the initial SSDs came with SAS/SATA interfaces that made for easy integration into existing storage arrays.
But even as SAS and SATA see steady throughput upgrades, with 6 Gbps technology on the cusp of giving way to new 12 Gbps systems, many systems designers are frustrated that high-speed flash storage is being hampered by a networking environment designed for spinning media. With solid-state drives capable of providing high-speed, high-capacity storage at or near the server, the transition from SAS/SATA solutions to PCIe configurations is gathering steam. After all, why limit yourself to 12 Gbps when there is a ready-made interconnect that can top several hundred Gbps when fully configured?
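To put those figures in rough perspective, the back-of-the-envelope comparison below is only a sketch: it assumes standard 8b/10b and 128b/130b line encodings and common slot widths, not the behavior of any particular product.

```python
# Rough, illustrative bandwidth comparison between a single SAS link
# and PCIe slots of various widths. Figures are per direction and
# account only for line-encoding overhead, not protocol overhead.

SAS_12G_GBPS = 12 * (8 / 10)          # 12 Gbps link, 8b/10b encoding -> ~9.6 Gbps usable

def pcie_gbps(gen, lanes):
    """Approximate usable per-direction bandwidth of a PCIe link in Gbps."""
    if gen == 2:
        per_lane = 5 * (8 / 10)       # 5 GT/s, 8b/10b encoding -> 4 Gbps per lane
    elif gen == 3:
        per_lane = 8 * (128 / 130)    # 8 GT/s, 128b/130b encoding -> ~7.9 Gbps per lane
    else:
        raise ValueError("only Gen2/Gen3 modeled here")
    return per_lane * lanes

print(f"SAS 12G link:        {SAS_12G_GBPS:6.1f} Gbps")
print(f"PCIe 2.0 x4 slot:    {pcie_gbps(2, 4):6.1f} Gbps")
print(f"PCIe 3.0 x16 slot:   {pcie_gbps(3, 16):6.1f} Gbps")
print(f"Three 3.0 x16 slots: {3 * pcie_gbps(3, 16):6.1f} Gbps")  # a 'fully configured' host
```

Even a single PCIe 3.0 x16 slot delivers roughly ten times the usable bandwidth of a 12 Gbps SAS link, and a host with several wide slots lands in the hundreds of Gbps the article alludes to.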
For example, Marvell recently released the SSD version of the Dragonfly NVRAM system: the NVDRIVE. The NVDRIVE is designed to act as a cache accelerator in high-speed enterprise applications such as web transactions, database analytics and distributed Big Data functions. The device features up to 1.5 TB of NV DRAM cache, as well as RAID and high-availability (HA) synchronous peer-to-peer mirroring. In addition, it supports all major storage networking protocols, namely SCSI, iSCSI, NFS, FCoE and FCP, through an application-transparent OS filter driver.
Meanwhile, OCZ recently showed off a pre-production PCIe version of its Vector SSD, which the company bills as a high-powered solution for content creation and data-heavy workstation applications. The drive is built around the Indilinx Barefoot 3 controller and supports up to four lanes of PCIe 2.0 in either full-height or half-height configurations. OCZ guarantees it for five years at up to 100 GB of host writes per day.
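As a rough illustration of what that endurance rating implies over the warranty period (the exact accounting is defined by OCZ's own terms, not this arithmetic):

```python
# Total host writes implied by a 100 GB/day rating over a five-year warranty.
gb_per_day = 100
years = 5
total_tb = gb_per_day * 365 * years / 1000   # ~182.5 TB of host writes
print(f"Implied total host writes: {total_tb:.1f} TB over {years} years")
```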
Still, other developers are questioning the need for an actual solid-state “drive” to begin with. Companies like Fusion-io have made no secret of their desire to foster the all-flash datacenter. They are optimizing systems like the ioMemory module for integrated computing, storage and networking components linked through the PCIe bus.
The company’s latest configuration, dubbed ioScale, packs more than 3 TB of capacity for less than $4 per GB and comes with a host of enterprise-class features such as self-healing, wear management and predictive monitoring. Because servers can boot directly from the ioScale module, there is no need for the RAID controllers or other disk-centered infrastructure that add complexity and drive up costs in traditional datacenters.
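Taking the quoted figures at face value, a ceiling estimate for a single fully populated module works out roughly as follows (the 3.2 TB capacity used below is an assumption for illustration; actual pricing depends on configuration):

```python
# Upper-bound cost of one fully populated ioScale module at the quoted figures.
capacity_gb = 3200          # "more than 3 TB"; exact capacity is an assumption here
price_per_gb = 4.00         # "less than $4 per GB" -> treat as a ceiling
print(f"<= ${capacity_gb * price_per_gb:,.0f} per module")   # roughly $12,800 at most
```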
Even proponents of standard drive protocols are looking toward a PCIe future. For instance, the Serial ATA International Organization (SATA-IO) has begun the ratification process for SATA Express, which uses PCIe as the interface for client-side storage. The idea is for SATA to coexist peacefully with PCIe so that networking tiers can more closely match the mixed flash/mechanical drive environments likely to exist in most datacenters for years to come. The spec defines new device and motherboard connectors that support both protocols simultaneously.
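As a small illustration of why awareness of those mixed tiers matters to software higher up the stack, the sketch below simply separates flash from spinning media on a Linux host using the standard sysfs rotational flag; it is illustrative only and says nothing about SATA Express itself.

```python
import os

# Classify block devices as flash or rotational using the sysfs 'rotational'
# flag (0 = non-rotational/SSD, 1 = spinning disk). A tiering or caching
# layer could use this to decide where hot data should land.
def classify_block_devices(sys_block="/sys/block"):
    tiers = {"flash": [], "rotational": []}
    for dev in sorted(os.listdir(sys_block)):
        flag_path = os.path.join(sys_block, dev, "queue", "rotational")
        try:
            with open(flag_path) as f:
                rotational = f.read().strip() == "1"
        except OSError:
            continue  # virtual or removed devices may lack the flag
        tiers["rotational" if rotational else "flash"].append(dev)
    return tiers

if __name__ == "__main__":
    for tier, devices in classify_block_devices().items():
        print(f"{tier}: {', '.join(devices) or 'none'}")
```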
If history is any guide, data environments will continue to thrive amid diversity. The needs of users and the sheer variety of data types almost guarantee that the enterprise will try to cast as wide a net as possible when it comes to standards, formats, protocols and the like, whether the data resides in server, storage or network infrastructure.
In that regard, groups like SATA-IO have the right approach: it’s far better to make peace with the world than to try to dominate it.