A Closer Look at SSD Benchmarking
Questions of performance are steadily working their way into SSD circles. As user communities gain more experience with the technology, concern is rising over many of the performance benchmarks used to compare solid-state devices to each other and to tape. With most performance claims based on vendors' own testing, can we be sure throughput and IOPS figures are accurate? And do those numbers vary greatly with the specific data environment and surrounding legacy infrastructure?
Good questions, all, and they're part of the reason organizations like the Storage Networking Industry Association (SNIA) are looking to establish some ground rules for solid-state benchmarking. The group recently unveiled the Solid State Storage Performance Test Specification (SSS PTS) as a means to cut through often-conflicting claims about performance states, read/write numbers, data transfer capabilities and other functions. A key component of the spec is that it seeks to measure performance at "steady state," the point at which enough data has been written and rewritten to flash cells to reflect true long-term performance, rather than during the initial burst of high performance right out of the box.
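The steady-state idea can be made concrete with a small sketch: keep collecting measurement rounds until the last few sit inside a narrow band around their own average. This is only illustrative — the window length and 20% excursion band here are assumptions in the spirit of the PTS, not quoted from it.

```python
# Toy steady-state check: over a window of recent measurement rounds,
# IOPS must stay within a fixed excursion band around the window average.
# Window size and band width are illustrative assumptions, not the spec.

def is_steady_state(iops_rounds, window=5, max_excursion=0.20):
    """Return True once the last `window` rounds sit within the band."""
    if len(iops_rounds) < window:
        return False
    recent = iops_rounds[-window:]
    avg = sum(recent) / window
    return all(abs(x - avg) <= max_excursion * avg for x in recent)

# A fresh-out-of-box burst, then settling toward long-term performance:
rounds = [90000, 72000, 61000, 55000, 54000, 53500, 54200, 53800]
print(is_steady_state(rounds))       # → True: the early burst has aged out
print(is_steady_state(rounds[:5]))   # → False: burst still inside the window
```

Benchmarking only after such a check fires is what separates "fresh drive" numbers from the long-term figures the spec is after.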
The inconsistencies among the current plethora of performance benchmarks are evident in the recent experiences of tech author Bill O'Brien. He recounts testing the new M4 drive from Crucial, Micron's consumer memory brand, using various benchmarks. Under the older HD Tach method, the drive produces fair-to-middling results on functions such as sequential read and burst speed. Under ATTO's Disk Benchmark, however, which uses the variable transfer sizes and read/write run lengths more commonly found in transactional databases, the drive shows a marked improvement over previous generations.
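Why do two benchmarks rate the same drive so differently? Largely because of choices like transfer size. A minimal sketch below times buffered writes through the OS at several block sizes — it measures the whole storage stack rather than the drive in isolation, but it is enough to show that the benchmark's parameters, not just the hardware, drive the headline number.

```python
# Minimal illustration of benchmark sensitivity: the same storage path
# yields different MB/s figures depending on the transfer (block) size.
# This goes through the OS page cache plus one fsync, so it is a
# whole-stack measurement, not a raw-device benchmark.
import os
import tempfile
import time

def write_throughput(block_size, total_bytes=8 * 1024 * 1024):
    """Write `total_bytes` in `block_size` chunks; return MB/s."""
    buf = b"\0" * block_size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(total_bytes // block_size):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())       # force data to the device
        elapsed = time.perf_counter() - start
        return total_bytes / elapsed / 1e6
    finally:
        os.remove(path)

for bs in (512, 4096, 65536):
    print(f"{bs:>6} B blocks: {write_throughput(bs):8.1f} MB/s")
```

Small blocks amplify per-call overhead, while large sequential blocks flatter the drive — which is exactly the gap between an HD Tach-style sequential test and a variable-transfer benchmark like ATTO's.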
And don't be fooled into thinking solid state does not have some of the same degradation problems as spinning disks, say the folks at defrag specialist Diskeeper. True, an SSD can access any location in an equal amount of time, but fragmentation of free space will still slow down rewrites, because the device must erase existing cells before it can program new data into them. The company's HyperFast add-on is designed to mitigate this problem in NTFS environments.
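The erase-before-write penalty behind Diskeeper's point can be shown with a toy cost model: a clean cell only needs a program operation, while a cell holding stale data needs an erase first. The relative costs below are made-up illustrative numbers, not measurements of any real device.

```python
# Toy model of the erase-before-write penalty on flash: a cell holding
# stale data must be erased before it can be programmed again, so
# fragmented free space incurs extra erase work that contiguous clean
# space avoids. Cost units are arbitrary, chosen only for illustration.

ERASE_COST, PROGRAM_COST = 3.0, 1.0   # relative time units (assumed)

def rewrite_cost(cells, writes):
    """cells: list of bools, True = holds stale data. Total write cost."""
    cost = 0.0
    for i in range(writes):
        if cells[i]:               # dirty cell: erase before programming
            cost += ERASE_COST
            cells[i] = False
        cost += PROGRAM_COST       # program the (now clean) cell
    return cost

clean = [False] * 8
fragmented = [True, False] * 4     # every other cell holds stale data
print(rewrite_cost(clean, 8))      # → 8.0: program-only path
print(rewrite_cost(fragmented, 8)) # → 20.0: four extra erase operations
```

Consolidating free space ahead of time — which is roughly what an SSD-aware optimizer aims for — moves writes from the expensive path to the cheap one.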
Undoubtedly, deploying solid state in key areas of your storage infrastructure will produce a net gain in performance. But before you commit yourself to an upgrade plan, some basic apples-to-apples research is warranted.
Once you have something you can rely on, you can let the front office know exactly how their investment is paying off.