Cisco or Not, InfiniBand Persists

Cisco Systems may have abandoned the InfiniBand market in favor
of Fibre Channel over Ethernet (FCoE) and 10GbE, but the rest of
the InfiniBand market appears to be carrying on just fine without the networking giant,
according to industry analysts and other vendors.

“Although InfiniBand remains a small industry compared to the Ethernet juggernaut, it
continues to grow aggressively, and this year it is growing beyond projections,” said
Jeff Boles, senior analyst and director of validation services at the Taneja Group.

When Performance Matters

In high performance computing (HPC) environments, for example, InfiniBand (IB) is
still the dominant protocol, according to a March 2009 Tabor Research report titled
“InfiniBand: Increases in Speed, Usage, Competition.” According to the report, 60 percent
of HPC organizations surveyed said they used InfiniBand as a system interconnect. And
those HPC organizations that were considering a converged fabric strategy said they were
more likely to consolidate on InfiniBand than on Ethernet.

HPC isn’t the only area where InfiniBand continues to make inroads. Oracle is
utilizing InfiniBand in the HP Oracle Database Machine, Exadata. “And we’re seeing other
database vendors following in Oracle’s footsteps and creating their own interfaces, which
will allow them to connect directly to InfiniBand,” said Gilad Shainer, director of
technical marketing at IB chipmaker Mellanox.

InfiniBand is also an attractive option for organizations with high-throughput
processing needs that are looking for a localized fabric. The protocol is gaining
traction in the virtualization and cloud computing arenas, which require high bandwidth
and low latency, as well as with organizations looking to reduce their power consumption
and real estate without giving up performance.

Because InfiniBand can be used as a unified fabric, “you can run your storage, you can
run your network, you can run your database, and you can run your management all on the
same wire,” explained Shainer. “And once you do that you can eliminate unnecessary
components and unnecessary networks in your infrastructure. So if you take out a
dedicated network from management, and you take out a dedicated network from storage, you
have less components in your data center, less cabling, less switches, less NICs, and of
course reduced consumption of power and cooling.”
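
To make the consolidation argument concrete, here is a back-of-the-envelope sketch. The per-server component counts are hypothetical illustrations of the "one wire instead of three networks" idea, not figures from Shainer or the article:

```python
# Back-of-the-envelope sketch of fabric consolidation. All counts are hypothetical
# illustrations of the "one wire instead of three networks" argument, not vendor data.

SERVERS = 64

# Separate fabrics: each server carries a dedicated adapter and cable per network,
# and each network needs its own switching tier.
separate = {
    "adapters_per_server": 3,   # storage HBA + LAN NIC + management NIC
    "cables_per_server": 3,
    "switch_tiers": 3,          # one switch fabric per network
}

# Unified fabric: one InfiniBand HCA and cable per server, one switch fabric,
# with storage, LAN and management traffic sharing the same wire.
unified = {
    "adapters_per_server": 1,
    "cables_per_server": 1,
    "switch_tiers": 1,
}

for name, fabric in (("separate", separate), ("unified", unified)):
    adapters = fabric["adapters_per_server"] * SERVERS
    cables = fabric["cables_per_server"] * SERVERS
    print(f"{name}: {adapters} adapters, {cables} cables, "
          f"{fabric['switch_tiers']} switch tier(s) for {SERVERS} servers")
```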

Indeed, compared to Gigabit Ethernet, and even 10 Gigabit Ethernet, 20Gb/s and 40Gb/s
InfiniBand provides much greater speed and system efficiency and a lower total cost of
ownership. That is not to say Ethernet speeds won’t continue to climb, however.
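
As a rough, illustrative calculation of what those headline numbers mean on the wire, the sketch below converts per-lane signaling rates into effective data rates for 4x InfiniBand links. The per-lane rates and 8b/10b encoding overhead are standard InfiniBand figures, not numbers taken from the article:

```python
# Illustrative sketch: raw signaling rate vs. effective data rate for 4x InfiniBand links.
# Assumes standard SDR/DDR/QDR per-lane rates and 8b/10b encoding (80% efficiency);
# these are generic InfiniBand figures, not drawn from the article.

ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line encoding used by SDR, DDR and QDR
LANES = 4                     # a 4x link aggregates four lanes

per_lane_gbps = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

for rate, lane_gbps in per_lane_gbps.items():
    signaling = lane_gbps * LANES                # e.g. QDR: 10 Gb/s x 4 = 40 Gb/s
    effective = signaling * ENCODING_EFFICIENCY  # e.g. QDR: 40 Gb/s x 0.8 = 32 Gb/s
    print(f"{rate}: {signaling:.0f} Gb/s signaling -> {effective:.0f} Gb/s effective")
```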

“When you’re looking at a unified solution, it is enormously cheaper to do InfiniBand
in a localized cloud pool, for example, than it is to try to provision traditional
storage and traditional networking connectivity in the same set of servers,” said Boles.
Similarly, “when compared to a single 40Gb/s QDR InfiniBand fabric, traditional fabrics
can more than double what it costs to operate and manage I/O in virtual server
infrastructures.”

However, as the cost of 10GbE continues to decline, enterprises that have already made
a significant investment in Ethernet may be less inclined to adopt InfiniBand. That’s why
some InfiniBand vendors are hedging their bets.

Hedging Bets with 10 Gigabit Ethernet

In June, Voltaire announced plans to release the Voltaire Vantage 8500, a 10 Gigabit
Ethernet switch it claims is the largest non-blocking Ethernet data center switch, with
288 wire-speed 10 Gigabit Ethernet ports in a 15U chassis, intended for enterprise data
centers and cloud computing environments. Competitor QLogic, fresh off its NetXen
acquisition, announced its new 3100 Series Intelligent Ethernet Adapters, which offer
10GbE connectivity for bandwidth-intensive applications such as virtualization, database
clustering, IP content delivery and grid computing.

Despite these moves, however, both companies continue to sell, support and release new
InfiniBand adapters and switches, at least for now, as does Mellanox, although other
vendors, most notably Cisco, no longer do.

Indeed, in a recent e-mail, Cisco spokesperson Lee Davis wrote that while Cisco felt
InfiniBand “is an excellent protocol, we believe the future lies with the new Data Center
Ethernet, which combines the ease of management and high performance of Ethernet with the
reliability of Fibre Channel (no packet loss).”

Cisco made a splash four years ago when it paid $250 million to enter the IB market
through the acquisition of Topspin Communications.

Will IB Play Nicely with Ethernet?

When asked about the future of InfiniBand, Mellanox’s Shainer was optimistic. “The
number of InfiniBand customers has been increasing. The number of market segments where
InfiniBand is being adopted is increasing. Volume has been increasing. And we’re seeing
increased demand from the market for higher performance and higher networking
capabilities,” which is no doubt why Mellanox experienced a significant increase in
sales, mainly of its 40Gb/s InfiniBand adapters and switches, in the first half of the
year, with demand showing no sign of slowing.

“InfiniBand was thought to be a networking solution for the high-performance computing
segment only, but that’s really not true,” explained Shainer. “InfiniBand enables you to
run any kind of application, any kind of transport, and can be used for a database, for
HPC, for storage, for management. And InfiniBand enables you to do the same kind of
things that you were doing with an Ethernet infrastructure … but significantly
increases your system’s efficiency and utilization. And InfiniBand is not a niche
technology. It’s fully supported by the Linux vendors, the Linux community, by Microsoft,
and by many others. It’s perfect for virtualization, for cloud computing, for any kind of
usage.”

When asked to look into his crystal ball and see what the future holds for InfiniBand –
and whether FCoE and 10GbE pose a significant threat – Boles said he didn’t see any
significant changes in the marketplace in the next three to six months – and felt both
protocols would be viable in the short and medium term.

As for the long run, “I think InfiniBand is going to find tremendous opportunity to
solve some sticky issues in the data center, especially when it comes to pockets of cloud
computing that have very high I/O demands,” Boles said. “And you’ll find InfiniBand where
there’s a bunch of localized servers and storage with high-performance demands, as well
as in traditional enterprises and businesses that don’t consider themselves HPC experts
but have high-performance I/O workloads, which are rapidly exceeding even converged
Ethernet, 10GbE.”

However, he said that ultimately, “I think we may see some convergence,” which he saw
as a good thing, “with each technology bringing its best assets to the table.”

Jennifer Lonoff Schiff is a regular contributor to Enterprise Storage Forum and
writes about technology.

Article courtesy of Enterprise Storage Forum
