We’re starting to see a serious uptick in interest in deploying the Nexus 1000v virtual distributed switch. I believe the latest service release calmed some fears about this new product, and having some references to point at always helps. As we perform more implementations we’re seeing some things that customers need to be aware of in the planning stages. To put it simply, the standard vSwitch network architecture lets you do some things that the Nexus 1000v won’t, and that’s normally a good thing: it stops a lot of bad habits that will get you into trouble later. So in no particular order or train of thought…here you go!
Do You Have the Knowledge In House?
One of the great features of the Nexus 1000v is that it once again gives the network team control of the network stack and takes that burden off the server administrators. But what if you don’t have a network team, or a person assigned who is generally knowledgeable about Cisco networking? Then you may want to reconsider the move. The issue is that the 1000v is a virtual Cisco switch. When you want to create a new VMware port-group you have to do it from within the virtual switch at the NX-OS (similar to IOS) command line; there is no GUI in vCenter for it. So if no one on staff has that skill set, you may be calling a partner for every little change you need, which isn’t beneficial to anyone.
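To give a sense of what that means day to day, here is a sketch of creating a new port-group from the VSM command line. The profile name and VLAN number are invented for illustration:

port-profile type vethernet VM_Web_100
  switchport mode access
  switchport access vlan 100
  vmware port-group
  no shutdown
  state enabled

Once the profile is enabled it appears in vCenter as a port-group that VMs can attach to, but creating it and changing it happens entirely at this command line.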
For those that do have the staff and knowledge, make sure those people are involved with the entire deployment of the virtual switch. I’ve often seen cases where the network staff communicates openly but isn’t involved in the planning and deployment. It needs to be stressed that they will essentially own the Nexus 1000v just like they own the other switches in the data center. Sometimes they don’t realize that, as the 1000v is often promoted as part of vSphere.
Trunking is Required
The number of sites I’ve found not trunking to their VMware hosts surprised me, and I’m not talking about 2- or 3-server clusters either, but environments several times larger. It’s very common to see Service Console, vMotion, and sometimes even VM Networks connected to access ports on the physical switches. This won’t work going forward. Everything needs to be trunked if you plan to run it through the distributed switch or to allow communication from the 1000v VSM to the VEMs, since we’ll often run the Control and Packet VLANs over the management interface.
Before you begin, go ahead and migrate the hosts and ports over to trunking and get that resolved. While you’re at it, inventory all of your VLANs to make sure you know the purpose of each, label them, and prune any from the trunks that aren’t needed. Even if you’re only carrying one VLAN on a connection it still makes sense to do it via a trunk; that way, when you decide to add something else later, you can just add it on the switch side without taking any sort of outage.
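On the physical switch side, converting a host-facing access port to a trunk is a small change. Here is a sketch for a Cisco IOS switch; the interface and VLAN numbers are made up, and the encapsulation command is only needed on platforms that also support ISL:

interface GigabitEthernet1/0/10
  description ESX host uplink
  switchport trunk encapsulation dot1q
  switchport trunk allowed vlan 10,20,30
  switchport mode trunk
  spanning-tree portfast trunk

Pruning the allowed-VLAN list while you make this change is the easiest time to do the VLAN cleanup.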
No Duplicate VLANs!
This one bit me the other week and is one thing you can do on the old vSwitch but not on the Nexus 1000v. By duplicate VLANs I mean an environment with two separate networks both using the same VLAN numbers. In the example from the other week, the site had a production/internal network and two public DMZ networks. Each was using VLAN 1 as its primary network, but they were all separate networks with no connectivity between them. This works fine in the legacy vSwitch environment because frames get tagged and untagged as they move in and out of the port-group, and a port-group exists in only one vSwitch. So a VM can have one NIC on one port-group and a second on another (production and DMZ), but since there is no relationship between those port-groups it doesn’t matter what VLAN the frame gets tagged with. Below is a simple diagram of the vSwitch arrangement.
So how is the Nexus 1000v different? It acts more like a traditional switch. In the 1000v, physical connections are configured using Ethernet uplink port-profiles. Here is a sample configuration for an uplink:
port-profile type ethernet VM_Data_Uplink
  switchport mode trunk
  switchport trunk allowed vlan 1,10-20,40,50,401,403-404
  channel-group auto mode on sub-group cdp
  system vlan 401,403-404
The way the Nexus 1000v works is that when a frame comes in from a VM to be switched, it looks at the uplinks and finds one that can service the VLAN of the originating port-group. So in the example above, a frame from a port-group on VLAN 10 would go out VM_Data_Uplink. But what if it were VLAN 5? Then it would look for another uplink that could carry it. And what if two uplinks can both service VLAN 1? That’s when you hit a problem. The 1000v has no idea that VLAN 1 on one uplink isn’t the same as VLAN 1 on another uplink; it just switches the frame and moves on. See the problem? There is no separate vSwitch for each set of uplinks. All port-groups switch through the same set of uplinks. So if you’re using overlapping VLANs, fix that soon. It’s bad practice anyway.
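To make the overlap concrete, here is the kind of configuration that gets you in trouble (the profile names and VLAN ranges are invented): two uplink profiles that both claim VLAN 1, one facing the internal network and one facing a DMZ:

port-profile type ethernet Prod_Uplink
  switchport mode trunk
  switchport trunk allowed vlan 1,10-20

port-profile type ethernet DMZ_Uplink
  switchport mode trunk
  switchport trunk allowed vlan 1,30-40

The 1000v treats VLAN 1 as a single broadcast domain no matter which uplink carries it, so frames can end up on the wrong physical network. The fix is to renumber one side, for example moving the DMZ networks off VLAN 1 entirely.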
Do I Need Cisco Switches?
The simple answer is no, you don’t need Cisco switches to take advantage of the 1000v. You won’t get some advantages like CDP (Cisco Discovery Protocol) with the upstream switches, but you’ll still get all the major functionality. Just keep in mind that you’ll want Cisco knowledge in-house, as we discussed earlier, even though you don’t need Cisco switches in your VMware environment to use it.
Can I Use Both vSwitch and dvSwitch?
Yes…you can use conventional vSwitches alongside the dvSwitch and Nexus 1000v. In fact, this is currently our recommended method if the physical NICs and ports are available. I like to keep the Service Console on a vSwitch-connected uplink just in case there is a major problem with the dvSwitch. I refer to this as a “lifeline connection”: it keeps the servers from becoming unreachable should something very bad happen, since a misconfiguration can accidentally cut off all access to the management console of the hosts. Often, once customers get accustomed to managing and configuring the 1000v, they’ll move to a purer environment, especially if they go 10Gb with CNA ports and no Gb ports.
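On classic ESX you can check the lifeline from the Service Console itself. Something like the following (the vmnic and vSwitch names are assumptions about a typical default layout) lists which NICs belong to which switch, and re-links the management uplink to the standard vSwitch if it ever needs to be restored by hand:

esxcfg-vswitch -l
esxcfg-vswitch -L vmnic0 vSwitch0

The first command shows each vSwitch and DVS with its uplinks and port-groups; the second attaches vmnic0 back to vSwitch0.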
Like I said in the beginning, these are just a few things we’ve hit along the way and wanted to pass along. Before you start implementing the Nexus 1000v, take a good look at your networking infrastructure and resolve any issues. It’s a lot easier to do that before the cutover than during it.