10 Gig: What Took So Darn Long?

10 GbE adoption took much longer than originally expected. Henry Newman explores the reasons why.

By Henry Newman | Posted Jun 5, 2013

I had high hopes for the market success of 10 GbE. Given market needs and how long 1 GbE had been around, I expected 10 GbE to take off far sooner than it eventually did. In fact, the technology penetrated the market more slowly than any previous Ethernet product. What specific factors caused 10 GbE to take six years to become a commodity product?

PCIe performance

In a way, 10 GbE was simply ahead of its time.

Think back to 2007, when 10 GbE started to ship in volume. Back then, we were still on PCIe 1, with 250 MB/sec per lane in each direction. Most systems had slots that were either 4 or 8 lanes, and many of those slots failed to run at their rated speed, causing performance issues for Ethernet card vendors. Add to that the fact that many system motherboards still had PCI-X support, which meant two form factors for anyone wanting to build 10 GbE NICs. Most of the market, then, was split between PCIe 1 slots and PCI-X slots.

A 10 GbE port needs 1.25 GB/sec in each direction to run at line rate. An 8-lane PCIe 1 slot running at full rate delivers 2 GB/sec per direction, enough for only one 10 GbE port at line rate, while an 8-lane PCIe 2 slot delivers 4 GB/sec, enough for three 10 GbE ports running at full rate. For the most part, though, vendors make cards with 1, 2, or 4 ports, not 3 or 5. With a 4-lane slot and PCIe 1, you couldn't sustain even a single 10 GbE port at line rate, while with 4 lanes of PCIe 2 you could at least run one port at full rate, or two at reduced rate. Given the performance of PCIe 1 buses, 10 GbE demanded PCIe 2. If you only had PCI-X slots, meanwhile, your systems were old and already slated for replacement by PCIe 2 systems, since that technology also came out in 2007.
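To make that arithmetic concrete, here is a minimal sketch in Python; the lane rates and line-rate figure above are the only inputs, and real slots lose another 10 to 20 percent to PCIe protocol overhead, which this ignores. It computes how many 10 GbE ports a given slot can feed at line rate:

# Per-lane payload bandwidth in GB/s per direction, after 8b/10b encoding.
# PCIe 1.x: 2.5 GT/s -> 250 MB/s per lane; PCIe 2.0: 5.0 GT/s -> 500 MB/s.
LANE_GB_PER_S = {1: 0.25, 2: 0.50}

TEN_GBE_GB_PER_S = 1.25  # 10 Gbit/s = 1.25 GB/s per direction at line rate

def ports_at_line_rate(pcie_gen: int, lanes: int) -> int:
    """Number of 10 GbE ports a slot can feed at full line rate."""
    slot_bandwidth = LANE_GB_PER_S[pcie_gen] * lanes
    return int(slot_bandwidth // TEN_GBE_GB_PER_S)

for gen in (1, 2):
    for lanes in (4, 8):
        n = ports_at_line_rate(gen, lanes)
        print(f"PCIe {gen} x{lanes}: {n} port(s) at line rate")

Running it prints 0 ports for PCIe 1 x4, 1 for PCIe 1 x8, 1 for PCIe 2 x4, and 3 for PCIe 2 x8: exactly the mismatch between bus generations and the 1-, 2-, and 4-port cards vendors actually built.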

As you can see, PCIe 2 was necessary for wide 10 GbE adoption. And that's where the recession comes in.

Lack of investment

For a short time following PCIe 2's launch in 2007, things might have gone well for 10 GbE. Many organizations were considering moving to new processors that supported PCIe 2, with its far better mapping of bus performance to the new 10 GbE NICs. But in 2008, the stock market crashed. No one was buying new hardware; in fact, some organizations were getting rid of hardware they'd already purchased because they lacked the staff to run it. This serious lack of investment meant serious problems for 10 GbE.

More problems arose from the drop-off in vendor investment in future hardware. In the summer of 2008, industry watchers expected to see PCIe 3 buses, 40 GbE, and even 100 GbE by about 2010. This was not to be. Vendors were scaling back and going through massive layoffs, and recovery took a long time. Only slowly, over the next few years, were older systems replaced with newer PCIe 2 systems whose buses mapped PCIe lanes to 10 GbE ports far better.

But what about PCIe 3? Well, it was going to be too fast to run as an external connection hanging off the Intel QPI; Intel would have to build the PCIe controller directly into the processor. That looked to be a complex and expensive undertaking: it had to be, if not 100 percent correct, at least 100 percent workable with workarounds pretty much the first time around. The design of PCIe 3 marked a radical departure from PCIe 1 and 2, and ensuring backward compatibility given the major encoding change from 8b/10b to 128b/130b required a massive amount of work and testing. In the post-crash environment, that meant far more time than originally anticipated.
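To see why the encoding change was such a big deal, here is a short sketch in the same vein as the one above, using the published PCIe signaling rates and encodings to compare effective per-lane throughput across generations:

# Effective per-lane payload bandwidth from signaling rate and line encoding.
# PCIe 3 nearly doubles usable bandwidth over PCIe 2 while raising the
# signaling rate only from 5 GT/s to 8 GT/s: 128b/130b encoding wastes
# about 1.5 percent of the bits, where 8b/10b wastes 20 percent.
GENERATIONS = {
    # gen: (signaling rate in GT/s, payload bits, total bits per code group)
    1: (2.5, 8, 10),     # 8b/10b
    2: (5.0, 8, 10),     # 8b/10b
    3: (8.0, 128, 130),  # 128b/130b
}

for gen, (gt_per_s, payload, total) in GENERATIONS.items():
    # GT/s times encoding efficiency gives usable Gbit/s; divide by 8 for GB/s.
    gb_per_s = gt_per_s * (payload / total) / 8
    print(f"PCIe {gen}: {gb_per_s:.3f} GB/s per lane per direction")

The output works out to 0.25, 0.50, and roughly 0.985 GB/s per lane. Getting that near-doubling without doubling the signaling rate is what made PCIe 3 worth the redesign, and also what made the redesign so hard to validate.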

Motherboard support

Motherboard support for 10 GbE also created barriers to adoption. To understand why, go back to the 1990s and ask yourself what made 1 GbE take off in the market over 100BASE-T. Based on adoption rates, I believe the answer is motherboard support. It's easy to justify new technology if it comes on the motherboard. In fact, many people tell themselves that they aren't getting full value from a motherboard purchase unless they're using the latest technologies it carries.

In my opinion, this is why the Fibre Channel industry worked so hard to get motherboard support in the early 2000s. It didn't happen, and look at where Fibre Channel is today. Widespread motherboard support did happen with 1 GbE, however, and that technology became the commodity communications technology.

What 10 GbE needs to become a commodity is exactly what 1 GbE got and what Fibre Channel failed to get: support on every motherboard. The current landscape looks promising. 10 GbE is starting to appear on motherboards from every major server vendor, and I suspect that in just a few years we'll start to see it on home PC boards, with the per-port price dropping from double digits to single digits, and then even down to cents.

Final thoughts

10 GbE's market success has been a long time coming. All the way back in 2007, 10 GbE was the fastest-performing commercially available interconnect besides InfiniBand, which was faster but also highly specialized, used in those days almost exclusively for HPC. 10 GbE was supposed to be the commodity interconnect years ago, but only today has it arrived in the volumes, and at the prices, needed to make it so.

There are lessons to be learned from 10 GbE's delayed market dominance. The convergence of factors that held 10 GbE back from mass adoption could happen again, after all. Recessions are as inevitable as faster hardware, and the former may happen just when we get our hopes up about the latter. We can all learn from 10 GbE's story, and next time make our predictions with more caution.

 

Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 30 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.
