Exascale Storage: Focusing on Speed and Capacity

As the relentless expansion of data volumes marches on through good times and bad, it’s always helpful to keep an eye on state-of-the-art storage technologies to see what the future holds for the rest of us.

After all, today’s HPC technology will inevitably trickle down to the masses, or, to be more precise, mainstream capacity requirements will inevitably trickle up.

To that end, it’s encouraging to note a number of key developments in what is today considered exascale storage.

One is the new IBM DCS3700, an OEM system built on the Engenio technology that NetApp acquired last month from LSI. The system packs 60 2 TB SAS drives into a 4U footprint with a dual-controller configuration, each controller sporting a pair of 6 Gbps SAS interfaces. That means you can daisy-chain two more 60-drive boxes for a total capacity of 360 TB. The system is also available with eight 8 Gbps Fibre Channel ports.
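
For those keeping score, that 360 TB figure falls straight out of the drive math. A minimal sketch, using only the numbers cited above:

```python
# Back-of-envelope check of the DCS3700 capacity math described above.
# All figures come from the article itself; nothing else is assumed.

DRIVES_PER_ENCLOSURE = 60
DRIVE_TB = 2        # 2 TB SAS drives
ENCLOSURES = 3      # the base unit plus two daisy-chained expansion boxes

raw_tb = DRIVES_PER_ENCLOSURE * DRIVE_TB * ENCLOSURES
print(f"Raw capacity: {raw_tb} TB")  # -> 360 TB, matching the article
```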

IBM has paired the system with its DS Storage Manager software, which provides functions like FlashCopy, VolumeCopy and Remote Mirroring for managing physical volumes. It also supports RAID levels 0 through 10, as well as mirrored cache and redundant, hot-swappable components.

An even more impressive solution is Xyratex’ ClusterStor 3000, a Lustre-based platform that scales into the petabyte range coupled with file system throughput that tops out at 1 TBps. The system offers a unique scale-out approach in which usually separate server, RAID controller and disk modules are consolidated into a single appliance known as the scalable storage unit. As storage demands increase, you simply add appliances to the cluster.
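
To make the scale-out idea concrete, here is a minimal sketch of how aggregate capacity and throughput grow linearly as appliances are added. The per-SSU figures are illustrative assumptions, not published Xyratex specifications:

```python
# Rough model of ClusterStor-style scale-out: each scalable storage
# unit (SSU) contributes a fixed slice of capacity and bandwidth, so
# cluster totals grow linearly with the appliance count.
# Per-SSU numbers below are hypothetical, for illustration only.

PER_SSU_TB = 500     # assumed usable capacity per appliance, in TB
PER_SSU_GBPS = 5     # assumed sustained throughput per appliance, in GB/s

def cluster_totals(num_ssus: int) -> tuple[int, int]:
    """Return (total TB, total GB/s) for a cluster of num_ssus appliances."""
    return num_ssus * PER_SSU_TB, num_ssus * PER_SSU_GBPS

for n in (1, 10, 100):
    tb, gbps = cluster_totals(n)
    print(f"{n:3d} SSUs -> {tb:,} TB, {gbps:,} GB/s")
```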

As most HPC aficionados will tell you, raw capacity is only half the battle when dealing with extremely large data sets. Panasas CTO Garth Gibson gave an illuminating interview to HPCwire recently, in which he espoused the virtues of rapid checkpoint storage, hybrid storage configurations and intelligent copying and tiering. The gist is that without increased throughput and more advanced forms of data management, large-capacity storage systems can quickly turn into vast warehouses of unusable data.
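
Gibson’s point about rapid checkpoint storage is easy to illustrate with a back-of-envelope calculation: a checkpoint is only useful if it can be written quickly relative to how often the machine fails. The system sizes below are hypothetical, chosen purely for illustration:

```python
# Why throughput matters as much as capacity: time to write a full
# checkpoint is memory size divided by aggregate write bandwidth.
# Both figures here are assumptions for a hypothetical large system.

SYSTEM_MEMORY_TB = 2_000       # memory image to checkpoint, in TB
STORAGE_BANDWIDTH_TBPS = 1.0   # aggregate write bandwidth, in TB/s

checkpoint_seconds = SYSTEM_MEMORY_TB / STORAGE_BANDWIDTH_TBPS
print(f"Checkpoint time: {checkpoint_seconds / 60:.0f} minutes")  # ~33 min

# At a tenth of that bandwidth the same checkpoint takes over five
# hours, during which the machine is effectively stalled -- capacity
# without throughput turns storage into a warehouse of unusable data.
```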

Fortunately, this lesson is not lost on the storage community. As enterprises continue to push into uncharted storage territory, users and providers alike are finding out what works and what doesn’t. It’s probably a stretch to say that the industry has devised the optimal exascale storage environment at this point, but at least things are moving in the right direction.
