Bufferbloat: Sacrificing Latency for Throughput

By Brian Proffitt | Posted Feb 24, 2011

Recently, the situation has been exacerbated by the proliferation of wireless networks, which are almost inevitably a bottleneck, since wireless transmission speeds are usually slower than wired links. Another problem, which Gettys has repeatedly pointed out, is the retirement of Windows XP, which capped the TCP window -- and thus buffer space -- at the OS level to 64K. Newer versions of Windows -- along with OS X and Linux -- support TCP window scaling, which lets the operating system grow the buffer as needed.
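
As a rough illustration of what that scaling looks like in practice, the short Python sketch below reads the standard Linux sysctl entries that control TCP window scaling and buffer autotuning. The /proc/sys paths are standard on Linux kernels; everything else here is illustrative rather than anything Gettys prescribes.

    # Sketch: inspect the Linux TCP settings behind window scaling and
    # buffer autotuning. Values vary by kernel and distribution.
    def read_sysctl(path):
        """Return the contents of a /proc/sys entry, or None if unavailable."""
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None

    settings = {
        "tcp_window_scaling (1 = enabled)": "/proc/sys/net/ipv4/tcp_window_scaling",
        "tcp_rmem (min default max, bytes)": "/proc/sys/net/ipv4/tcp_rmem",
        "tcp_wmem (min default max, bytes)": "/proc/sys/net/ipv4/tcp_wmem",
    }

    for name, path in settings.items():
        print(f"{name}: {read_sysctl(path)}")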

Now think about bufferbloat and picture a variable-sized buffer right next to a wireless home network. The problem is not hard to imagine.

The solution, at least in a broader sense, will be.

Solving bufferbloat

What makes this problem even more difficult to solve is Gettys' assertion that buffers will only be noticeable in a network path where they sit near a saturated bottleneck. So a buffer may cause problems one day and be perfectly harmless (and invisible) the next.
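
That intermittence is easiest to see by measuring latency on an idle link and again while the link is saturated. The Python sketch below samples round-trip time with a plain TCP handshake. The target host and port are placeholders chosen for illustration, not anything from the article; run it before and during a large upload and compare the numbers.

    # Sketch: sample round-trip latency so an idle link can be compared with a
    # loaded one (e.g., while a large upload is saturating the uplink).
    import socket
    import time

    HOST, PORT = "example.com", 80   # placeholder target; use a nearby, reachable host
    SAMPLES = 10

    for _ in range(SAMPLES):
        start = time.monotonic()
        try:
            # The TCP handshake alone approximates one round trip.
            with socket.create_connection((HOST, PORT), timeout=5):
                pass
            rtt_ms = (time.monotonic() - start) * 1000
            print(f"handshake RTT: {rtt_ms:.1f} ms")
        except OSError as exc:
            print(f"connection failed: {exc}")
        time.sleep(1)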

There are, as Gettys himself points out, various ways to engineer around the problem. Gamer routers and other end-to-end congestion-avoidance workarounds can be applied, but the gains tend to be short-lived: while such systems benefit the few who use them, they can disrupt "normal" traffic and further damage the network ecosystem. For Gettys, it's an application of game theory: a short-term gain may not bring a long-term win.

Gettys' blog and the new bufferbloat website have become a touchstone for network engineers working on the problem, but solving it will be tricky. Active queue management (AQM) is often cited as a good solution, but even though AQM has been around for a while, it is not widely deployed, and where it is deployed it may not be properly configured. Wider deployment and better education will help.
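
For the curious, the sketch below shows roughly what deploying AQM on a Linux router can look like, using the long-standing RED queue discipline via the tc utility. The interface name and parameter values are illustrative assumptions based on the tc-red documentation, not a configuration from Gettys; it needs root privileges and tuning for the actual link speed.

    # Sketch: enable a RED (Random Early Detection) AQM queue discipline on a
    # Linux router interface by shelling out to the tc utility.
    import subprocess

    IFACE = "eth0"  # placeholder interface name
    subprocess.run([
        "tc", "qdisc", "add", "dev", IFACE, "root", "red",
        "limit", "400000",      # hard queue limit, in bytes
        "min", "30000",         # average queue size where marking/dropping begins
        "max", "90000",         # average queue size where drop probability peaks
        "avpkt", "1000",        # assumed average packet size, in bytes
        "burst", "55",          # allowed burst, in packets
        "bandwidth", "10Mbit",  # link rate used to estimate queue drain time
        "probability", "0.02",  # peak drop probability
        "ecn",                  # mark with ECN instead of dropping when possible
    ], check=True)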

Traffic classification may also help in the short term. Gettys has plenty of advice on his sites for tweaking Quality of Service (QoS) settings on routers to try to decrease latency.
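
Classification can also start at the application: marking packets with a DSCP value gives a router's QoS rules something to match on. The Python sketch below does exactly that for a UDP socket; the DSCP class and destination address are illustrative choices, not recommendations from the article, and the IP_TOS socket option behaves this way on Linux.

    # Sketch: mark a UDP socket's traffic with a DSCP value so router QoS rules
    # can classify it (works on Linux via the IP_TOS socket option).
    import socket

    DSCP_EF = 46                  # "Expedited Forwarding", commonly used for VoIP
    tos = DSCP_EF << 2            # DSCP occupies the upper 6 bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))  # placeholder address/port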

In the end, it may take an entirely new kind of packet-delivery protocol to solve this problem. The hardware and software deployed today abuse TCP/UDP congestion avoidance mightily, and if that can't be reined in, new protocols may have to be the answer.

Build smarter cars, in other words, and maybe you'll avoid more traffic jams.


Brian Proffitt is a technology expert who writes for a number of publications. Formerly the Community Manager for Linux.com and the Linux Foundation, he is the author of 20 consumer technology books, including the most recent Take Your iPad to Work. Follow him on Twitter at @TheTechScribe.
