Simplify Server Deployments With VLANs
Servers now frequently ship with VLAN capabilities, which means you can save time and money by using fewer switch ports per server. Many admins struggle with the concept of VLANs and never take advantage of this technology. Once configured, new servers can be deployed quickly without dedicating a switch port to a management network in addition to the normal network connection. Let us show you how.
As you probably know, a VLAN is a Virtual LAN, or put another way: a partitioned switch. Imagine slicing a switch into four virtual pieces, or even gluing together four (or more) small switches. Each grouping is a distinct broadcast domain, so devices in one VLAN cannot see broadcast traffic from the others. The glued-together analogy works well because it makes clear that you're really talking about a different layer 2 network on each virtual switch.
VLANs are extremely handy, and you're probably using a separate VLAN for your server management network. Often, a VLAN is created with the idea that no router will live on that network. This keeps others from gaining access to the network unless they are physically located on it. Common practice is to connect a server to both the management network and a regular subnet, so that after logging into that server you will have access to the management network. While this works well, it also means your server's management interface will occupy a switch port (configured to live in the management network), in addition to the server's normal network interface.
To Tag or Not to Tag
When speaking about basic VLAN concepts, which we're sure most server admins already understand, we tend to leave out important details. Those details are the missing pieces that provide the necessary background to configure VLAN interfaces.
Server admins are not normally concerned with VLAN interfaces. They live in a VLAN world, but they are generally just consumers. You ask your network dudes to place switch port 3/42 in 'the Windows server VLAN' and then connect a host when you're ready. To keep the premise of this article going, let's assume you also ask for switch port 3/43 to be placed in the management VLAN. This basic consumer relationship with VLANs can ignore VLAN tagging, but sooner or later it becomes necessary.
The network admins often link multiple switches together. To let all VLANs on a switch flow across a single link, they create an 802.1q tagged VLAN link; in Cisco terms, the port is placed in 'trunk' mode. To make this work, the sending switch wraps each Ethernet frame in a VLAN tag, so that the next device (the other switch, in this case) knows which VLAN it belongs to. The receiving switch reads the tag to deliver the frame to the right VLAN, then strips it before the frame leaves any ordinary untagged port. Take a moment to process that.
It is worth noting that this is done in hardware, so there is no performance penalty. Network cards that support 802.1q can do this too. For example, if the switch port connected to your workstation were configured as a .1q link, your workstation could bring up a network interface on whatever VLAN it wished, without ever touching the switch.
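On a Linux workstation, that looks something like the following sketch. This assumes your NIC is named eth0, that the switch port it plugs into is a .1q link that allows VLAN 20, and that the address is made up; adjust all three to your environment.

```shell
# Load 802.1q support (built into most modern kernels/distros)
modprobe 8021q

# Create a virtual interface whose outgoing frames are tagged VLAN 20
ip link add link eth0 name eth0.20 type vlan id 20
ip link set eth0.20 up

# Address it like any other interface (example address)
ip addr add 192.168.20.5/24 dev eth0.20
```

From the workstation's point of view, eth0.20 behaves like a normal interface that happens to live in VLAN 20.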
The tagging aspect comes into play when configuring these .1q links. There is also the concept of a native (untagged) VLAN: any untagged frames received on the port are assigned to that VLAN by default.
VLAN Interface Configuration
First, you should realize that both the switch and the server need to be configured. For this example, let's pretend the switch port is configured with an untagged (native) VLAN of 100, and that it allows all VLANs from 101 through 200 to pass tagged.
Any device attached to that switch port will land in VLAN 100 by default. If the device, a server in this case, supports 802.1q, it can bring up 100 virtual interfaces, one in each of the allowed VLANs. Or the server can bring up just one interface in VLAN 152, for example.
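A hypothetical Linux sketch of that exact scenario (NIC name eth0 and addresses are placeholders):

```shell
# Untagged traffic is placed in VLAN 100 by the switch, so the physical
# interface itself is effectively "in" VLAN 100 -- no tagging needed:
ip addr add 10.0.100.5/24 dev eth0

# Any of the allowed tagged VLANs can be joined with a virtual
# interface; here, VLAN 152:
ip link add link eth0 name eth0.152 type vlan id 152
ip link set eth0.152 up
ip addr add 10.0.152.5/24 dev eth0.152
```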
We think this is what trips most people up. You must know three pieces of information before you can hope to configure the .1q-enabled server properly:
- What switch port is configured as a .1q link for you
- What VLANs are allowed to pass across that link (as configured on the switch)
- What, if any, native/untagged VLAN is configured
Once you have this information, it's just a matter of configuring your server's hybrid management interface / regular network card. If the network dudes give you a port and tell you the management network is VLAN 101, your server's VLAN is 112, and there is no native VLAN, you know what to do. Most BIOS or network configuration menus have you specify exactly those pieces of information: the management and server VLAN numbers, and whether or not each is tagged.
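For an OS-level view of that example, here is a minimal Linux sketch: management VLAN 101 and server VLAN 112, both tagged, no native VLAN. The interface name and addresses are made up for illustration.

```shell
# Tagged interface for the management network (VLAN 101)
ip link add link eth0 name eth0.101 type vlan id 101
ip link set eth0.101 up
ip addr add 10.0.101.20/24 dev eth0.101

# Tagged interface for normal server traffic (VLAN 112)
ip link add link eth0 name eth0.112 type vlan id 112
ip link set eth0.112 up
ip addr add 10.0.112.20/24 dev eth0.112
```

One cable, one switch port, two networks: this is the configuration that frees up the second switch port per server.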
VLANs and Blade Servers
Blade servers are often configured the exact same way, only instead of limiting your configuration to one management VLAN and one server VLAN, blade server chassis will likely be able to access many other VLANs for the various types of servers contained within. One issue, however, is that all blades will now be sharing a 1Gb/s link unless you get fancy. Most blade chassis switches have four or more ports, so in theory you could manually partition the network traffic: one port physically connected to each VLAN, and then pray you never need more than four networks to serve all those blades. That is how this would have been done in 1995.
Now, we'll use link aggregation instead. Link aggregation, defined in 802.3ad, lets both ends dynamically balance network traffic across multiple links using the Link Aggregation Control Protocol (LACP). LACP links are often used in conjunction with .1q links, which enables blade servers to utilize all available network ports, across all available VLANs.
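The combination can be sketched on Linux like so. This assumes two NICs (eth0, eth1), that the corresponding switch ports are configured for both LACP and .1q on their side, and that the names are placeholders.

```shell
# Create an 802.3ad (LACP) bond and enslave both NICs
# (interfaces must be down before joining a bond)
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up

# Stack tagged VLAN interfaces on the aggregate, so every VLAN
# can use the combined bandwidth of both links
ip link add link bond0 name bond0.101 type vlan id 101
ip link set bond0.101 up
```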
It takes a bit of configuring, but this can also be done with a stand-alone server. Sun's Thumper storage devices are a great example, as they ship with four gigabit connections, and ZFS throughput across 48 disks can easily saturate a single gigabit link.
Now go re-think how your servers are connected. Understanding VLANs and link aggregation (where appropriate) can lead to server configurations that both save management time and use half the number of expensive Cisco switch ports.