A Spoonful Of CIDR Helps the Routing Tables Go Down

In 1990, things looked grim for the Internet as routers sagged under the weight of exploding routing tables and a dwindling supply of addresses. Learn how CIDR saved the day, and how it can help you deal with your own growing network.

By Michael Burton | Posted Jul 22, 2004
Page 1 of 3

In 1990, the situation for the Internet looked bleak. "The World" (www.world.std.com) became the first Internet Service Provider (ISP) open to the public. ARPANET, the project that gave birth to the Internet, was winding down, and NSFNET, the new backbone provider, was about to start allowing commercial use. Universities, government agencies, and businesses were adding their isolated campus networks to the larger interconnected Internet. E-mail was the champion of the day, with FTP close behind in popularity. Microsoft Windows 3.0 was giving businesses of all sizes access to new services, connecting even more people to the budding network. The biggest event of the year, though, was Tim Berners-Lee demonstrating "WorldWideWeb," the first GUI "Web" browser.

While all of these events pointed toward the Internet exploding in popularity, the underlying flawed structure of IP address allocation and routing was moving the Internet toward exploding in an altogether nasty way. Network engineers were predicting the total collapse of the Internet by 1995, when routing tables would be too large to be handled by the technology of the day. And besides, even if the routers could handle the load, there would be no more IP addresses to give out. The Internet of the early 1990s was simply not designed to be the Next Big Thing.

As we'll see, the story of how the Internet got into this mess and then out of it saved the 'net, and what engineers learned along the way can help you manage your network today.

It became clear back in the ARPANET days that a hierarchical routing approach would be needed for a global network. Hierarchical routing allows separate routing layers to be used, and gives engineers more flexibility in allocating resources. Think about a phone number: because the first three digits define a particular area code, the seven digits that follow can repeat in each individual area. 503-555-1234 is just as valid as 812-555-1234.

The Internet Engineering Task Force (IETF) knew that networks were going to come in various sizes, with a few very large networks and many small ones. With that in mind, the "area code" sizes of the Internet were broken down into three separate categories: Class A, Class B, and Class C. Like population distribution in most countries, the IETF expected a few very large metropolitan areas (Class A), a handful of cities (Class B), and many small towns (Class C).

In networking, the human-readable decimal IP address (10.0.0.1) is transformed into a series of four 8-bit binary strings, or octets, such as 00001010.00000000.00000000.00000001. (Eight 1s in binary equals 255, the highest number you will ever see in an octet.) These four octets in binary form are what your IP address looks like on the network. The most significant bits of the first octet were used to encode the class: Class A started with binary 0 (0.0.0.0 is reserved for special purposes), Class B started with 10, and Class C started with 110.
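The octet-by-octet conversion and the leading-bit test can be sketched in a few lines of Python. This is an illustration using only the standard library; the function names are my own:

```python
# Sketch: render a dotted-decimal IPv4 address as binary octets, and
# classify it by the most significant bits of its first octet.

def to_binary_octets(ip: str) -> str:
    """Render an IPv4 address as four 8-bit binary strings."""
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

def classful_class(ip: str) -> str:
    """Determine the classful network class from the leading bits."""
    first = int(ip.split(".")[0])
    if first & 0b10000000 == 0:           # leading bit 0   -> Class A
        return "A"
    if first & 0b11000000 == 0b10000000:  # leading bits 10 -> Class B
        return "B"
    if first & 0b11100000 == 0b11000000:  # leading bits 110 -> Class C
        return "C"
    return "D/E"                          # leading bits 111x: reserved

print(to_binary_octets("10.0.0.1"))  # 00001010.00000000.00000000.00000001
print(classful_class("10.0.0.1"))    # A
```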

This mathematically elegant approach created the following network breakdown:

| Network Class | Most Significant Bits | IP Address Range | Networks Available | Hosts Per Network | Share of Total IP Space |
|---|---|---|---|---|---|
| Class A | [0]0000000 | 1.0.0.0 to 126.0.0.0 | 126 | 16,777,214 | 50% |
| Class B | [10]000000 | 128.0.0.0 to 191.255.0.0 | 16,384 | 65,534 | 25% |
| Class C | [110]00000 | 192.0.0.0 to 223.255.255.0 | 2,097,152 | 254 | 12.5% |

(Note this accounts for only 87.5% of the total IP space. Class D, addresses whose first octet begins with binary 1110, was reserved and is now used for multicast, which is why all multicast addresses fall between 224.0.0.0 and 239.255.255.255. The remaining Class E space, beginning 1111, stayed reserved for future use.)
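The table's figures follow directly from the bit layout, which makes for a quick sanity check. A sketch in plain Python (the class definitions are from the table above; note the computed Class A count is 128 raw networks, while the table's 126 excludes the reserved networks 0 and 127):

```python
# Derive each class's network count, host count, and share of the
# 32-bit address space from its leading-bit pattern.

CLASSES = {
    # class: (leading-bit pattern, total network bits, host bits)
    "A": ("0",   8, 24),
    "B": ("10",  16, 16),
    "C": ("110", 24, 8),
}

for name, (pattern, net_bits, host_bits) in CLASSES.items():
    networks = 2 ** (net_bits - len(pattern))  # bits left after the class prefix
    hosts = 2 ** host_bits - 2                 # network and broadcast reserved
    share = 100 / 2 ** len(pattern)            # slice of the whole IPv4 space
    print(f"Class {name}: {networks:,} networks x {hosts:,} hosts = {share}% of IPv4")
```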

While this was good for an initial estimate of how addresses would be used, the reality of the Internet was moving toward many more small network "towns" than large "metropolises." And with half of the total address space wrapped up in only 126 mega networks, there was a lot of wasted space. The arbitrary network sizes made it difficult for universities and companies to get the right amount of space. If a company needed only 8,000 addresses, it faced a difficult choice between a scarce and expensive Class B network, wasting more than 57,000 addresses, or obtaining and attempting to administer 32 separate Class C networks.

If the company in the example above went with the cheaper Class C option, that company would be part of the second major issue: extremely large and unwieldy routing tables. Our company would need to "inject" 32 separate router entries into the Internet core for just one location. Border Gateway Protocol (BGP), the routing protocol behind the Internet, would need to update every router that touched the Internet whenever any of those 32 separate Class C networks went down. In the early 1990s, the volatile RAM needed to hold the large tables was extremely expensive, and the CPUs running routers were simply not up to handling the constant changes (or "route flaps") from these small networks going up and down.
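To see why those 32 entries are so wasteful, note that 32 contiguous /24 networks fit exactly inside one /19, so a single aggregate route could stand in for all of them; that aggregation is the essence of the fix described on the next page. A sketch with Python's standard-library `ipaddress` module (the 192.168.x.0 blocks are a hypothetical allocation, chosen for illustration):

```python
# Collapse 32 contiguous Class C (/24) networks into one aggregate route.
import ipaddress

# Hypothetical allocation: 192.168.0.0/24 through 192.168.31.0/24.
nets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(32)]

aggregated = list(ipaddress.collapse_addresses(nets))
print(aggregated)  # [IPv4Network('192.168.0.0/19')] -- one entry, not 32
```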

The Internet was doomed to collapse under its own weight before the first eBay auction could even be conceived.

Continued on page 2: CIDR to the Rescue
