A Spoonful Of CIDR Helps the Routing Tables Go Down - Page 2

 By Michael Burton

Continued From Page 1

In September of 1993, RFC 1519 laid out a new direction in network architecture, called Classless Inter-Domain Routing. The idea, proposed by network engineers from the Bay Area Regional Research Network (BARRNet), Cisco, and the Michigan Educational Research Information Triad (MERIT), was simply to do away with the "classful" approach to deploying networks and rely on the already well-understood subnet mask to handle the heavy lifting.

Traditionally, subnetting is the component of IP networking that allows further hierarchical breakdown of an IP space. If the leading bits of an IP address are compared to the area code of a phone number, then the subnet mask is analogous to the prefix, or the 555 portion of 503-555-1234. For example, a mask of lets us know we are dealing with a Class B network (based on the most significant bits in the first octet) and that the space is subnetted at the third octet. A Class B network has 16 host bits, giving "two to the sixteen," or roughly 65,000, possible addresses; subnetting at the third octet carves that space into 256 subnets of up to 254 usable hosts each.

If a computer has a Class B source IP address (say, and a network mask of, it knows to compare only the first three octets of a destination address against its own: a destination such as is on the same subnet, while is on a different subnet. If the destination is on the same subnet, the computer delivers the IP packet directly. If the destination is on a different subnet, the computer simply sends the IP packet along to the default gateway for the subnet. The router handles the rest.

What the creators of RFC 1519 were proposing was to permit administrators to go beyond simple subnetting and gain the ability to do "supernetting": using any subnet or supernet mask possible, not just the classic,, and This supernetting, along with variable-length subnet masking (VLSM), fixed both the IP space allocation and routing table issues.

The IP space allocation problem was diminished because the Internet Assigned Numbers Authority (IANA) was no longer constrained to classful deployment of address spaces. Instead of half of the entire IP space being reserved for massive Class A networks, those spaces could be chopped up into smaller, more usable sizes. CIDR, along with Network Address Translation (NAT, proposed in RFC 1631 in 1994), made it possible for IPv4 addressing to survive almost ten years past the point where all addresses would otherwise have been exhausted. While engineers are still concerned about address allocation and about migrating to the new and expansive IPv6 (which uses a 128-bit address scheme rather than the 32 bits of IPv4), they are no longer warning of an imminent address catastrophe.

The routing table size issue was also resolved by CIDR and VLSM. Supernetting gives administrators a mask that represents multiple networks with a single routing entry.

For example, if you ran an ISP and were allocated, you could dole out subnets to your customers ( to company A, to company B, and so on) and inject the single route into the BGP tables to represent all of your networks. VLSM also enables you to purchase only half of that space,, which covers through; the other half,, could be sold to another company. The ability to summarize multiple subnets with one supernetted address and mask drastically reduced the size of routing tables globally. While the tables are growing again, memory and CPU technology are now sufficiently advanced to handle the slower growth.

Continued on page 3: CIDR: It's Not Just for Big Companies Anymore
This article was originally published on Jul 23, 2004