Port binding for multiple NICs on a standard vSwitch in vSphere 5.1, 5.5 and 6

The process of setting up port binding depends on whether hardware or software iSCSI adapters are present on the host(s). With hardware iSCSI, the host typically has two or more hardware iSCSI adapters through which the storage system can be reached via one or more switches. Alternatively, the setup may include one adapter and two storage processors (SPs), so that the adapter can use a different path to reach the storage system.

For software iSCSI and dependent hardware iSCSI, multiple NICs are used to provide failover connections between the host(s) and the iSCSI storage systems.

It is very important to understand that multipathing plugins, including the built-in Round Robin, do not have direct access to the physical NICs. You must first connect each physical NIC to a separate VMkernel port, each with its own IP address. You then use port binding to associate all of these VMkernel ports with the same iSCSI initiator (i.e. they share the same iSCSI initiator IQN). With this VMkernel port binding in place, each VMkernel port connected to a separate NIC becomes a different path that the iSCSI storage stack and its multipathing plug-in can use. Multipathing will not function if VMkernel port binding is not set up, even if more than one uplink vmnic is configured on the vSwitch and a multipathing option (e.g. Round Robin) is set in the datastore properties: without port binding only a single, fixed path along one adapter will be used, so path failover and load balancing will be unavailable.
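
Once binding is in place, you can sanity-check it from the ESXi shell. The commands below are a minimal sketch only: the adapter name vmhba33 is a placeholder for whatever software iSCSI adapter your host actually reports.

  # List the iSCSI adapters to find the software adapter's vmhba name
  esxcli iscsi adapter list

  # Show the VMkernel ports currently bound to that adapter
  esxcli iscsi networkportal list --adapter=vmhba33

  # List the paths the storage stack can see (one per bound VMkernel port per target port)
  esxcli storage core path list

If the networkportal list is empty, port binding has not been configured and the host will use a single path regardless of any Round Robin setting.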

After multipathing is set up, the VMkernel routing table is not consulted when identifying the outbound interface to use. Instead, iSCSI multipathing is managed by vSphere's multipathing modules. Please note that VMware does not recommend routing iSCSI traffic due to the latency it can incur.

Preliminary checks

It is all too common for a VMware environment to be incorrectly configured. A classic example is traffic not being separated out at the physical switch level, whether by isolated switches or by VLANs with distinct subnets, with the misconfiguration then extending into the vSphere environment through the use of a single vSwitch for management, virtual machine, storage, vMotion and possibly (unbelievably) Fault Tolerance logging traffic. Stop here immediately if your network has any of these faults and get it sorted out before you make any changes to your architecture, regardless of who put it in place.

You should also refer to VMware's documentation on the configuration maximums for your version of vSphere to verify that your environment is within VMware's supported limits. Of particular relevance for this post is the maximum number of port groups that are supported (a quick CLI check is sketched after the list below):

  • vSphere 5.1. Standard vSwitch port groups: 256. Distributed virtual network switch ports per vCenter: 60 000. Static port groups per vCenter: 10 000.
  • vSphere 5.5. Standard vSwitch port groups: 512. Static/Dynamic port groups per distributed vSwitch: 6 500.
  • vSphere 6. Standard vSwitch port groups: 512. Static/Dynamic port groups per distributed vSwitch: 10 000.
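
As a rough check against these limits, the standard vSwitch port groups on a host can be listed from the ESXi shell; this is only a sketch of one way to do it.

  # List standard vSwitch port groups on this host
  esxcli network vswitch standard portgroup list

  # Count the rows (subtract the two header lines from the total)
  esxcli network vswitch standard portgroup list | wc -l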

The rest of this post will cover how to set up port binding on a standard vSwitch. You can either configure this on an existing vSwitch (for example, a production & management vSwitch plus a separate storage vSwitch with storage port binding and separate vmnics for vMotion) or create a new vSwitch (for example, a production & management vSwitch, a storage vSwitch and a vMotion vSwitch). Finally, keep in mind throughout all of this that multipathing cannot be used with NFS storage. This still affects you if you have no dedicated NFS storage but use a backup product such as Veeam, because Veeam's Instant Recovery mounts an NFS datastore on your ESXi hosts. I therefore recommend a minimum of two vmnics per host which are not teamed and which have vMotion enabled: Veeam will mount the NFS datastore using one of those NICs, and it will also help lessen the traffic load if you decide to vMotion the Instant Recovery VM to a production datastore.

vSwitch VMkernel configuration

The steps below use the vSphere Client and Web Client; they can also be carried out from the CLI, so see the VMware document Multipathing configuration for software iSCSI using port binding if you wish to do so.

Using vSphere Client

  1. Connect to vCenter and navigate to each host in turn.
  2. Migrate all virtual machines off the host.
  3. Go to Configuration -> Networking and look at the network adapters of your current vSwitch. Make a note of which IP addresses are in use on the subnet utilised by your storage.
  4. If you have an existing port group for your storage, go into its NIC Teaming properties and move all bar one of the assigned adapters to unused adapters. There can only be one active adapter.
  5. To add new or additional port groups, click Add back in the properties of the vSwitch. Select VMkernel and click Next. Provide a name (Network Label) for the new port group and click Next. Specify a new IP address (on the same subnet as your storage) and subnet mask, click Next, then Finish. Go into its NIC Teaming properties and set Network Failover Detection, Notify Switches and the other two policies (Load Balancing & Failback) as appropriate for your environment. Tick Override switch failover order and ensure that there is one unique vmnic listed under Active Adapters; all the other vmnics should be moved to Unused Adapters. None of these port groups should have vMotion or Fault Tolerance Logging etc. enabled.
  6. Repeat Step 5 until you have as many VMkernel port groups as there are vmnics you want to participate in multipathing to your storage, while ensuring you also have separate VMkernel ports for NFS and vMotion as appropriate. (A CLI sketch of these steps follows this list.)
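
For reference, roughly the same result can be achieved from the ESXi shell. This is only a sketch: vSwitch1, the iSCSI-1 port group name, vmk2, vmnic2 and the IP address are placeholders for your own values, and you should verify in the client afterwards that the remaining vmnics show as unused for the port group.

  # Create a port group for the iSCSI VMkernel port on the storage vSwitch
  esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1

  # Create the VMkernel interface in that port group and give it a static IP on the storage subnet
  esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-1
  esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static

  # Override the failover order so the port group uses only vmnic2 as its active uplink
  esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2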

Using vSphere 6 web client

  1. Under vCenter Home, click Hosts and Clusters.
  2. Click on the host.
  3. Click Manage -> Networking -> Virtual Switches.
  4. Click the vSwitch that has your iSCSI vmkernel port group.
  5. In the lower pane with the vSwitch diagram, click the portgroup.
  6. Click the pencil icon to open the Edit Settings menu and modify the vmkernel portgroup properties.
  7. Open the Teaming and Failover section.
  8. Under Failover order check the Override check box.
  9. Select all vmnic adapters except the one you want this VMkernel port to use and click the Move Down arrow until they are in the Unused Adapters list.
  10. Repeat steps 5-9 for each iSCSI vmkernel port to ensure each vmkernel port has a unique active adapter. (A CLI check of the resulting failover policy is sketched after this list.)
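
To confirm that each port group ended up with a single unique active uplink, the failover policy can be queried from the ESXi shell; the iSCSI-1 and iSCSI-2 port group names below are placeholders.

  # Show the effective failover policy (active/standby/unused uplinks) per port group
  esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI-1
  esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI-2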

Software iSCSI adapter port binding

Using vSphere Client

  1. Connect to vCenter and navigate to each host in turn.
  2. Configuration -> Storage Adapters.
  3. Select your iSCSI Software Adapter and click Properties.
  4. Click the Network Configuration tab and click Add.
  5. All binding-compatible VMkernel adapters are listed. Select and add the relevant adapters in turn (a CLI sketch of the binding and rescan follows this list).
  6. Close the iSCSI Initiator Properties window.
  7. Select your iSCSI Software Adapter and click Rescan. Afterwards, double-check the number of paths and also that all the required VMkernel adapters are listed as active in the adapter's properties. Please note that the number of listed paths will be more than what was previously listed, e.g. if you originally had 10 paths and now have three VMkernel adapters bound, the number of paths will show as 40 (10 x 3 plus the original 10).
  8. Restart the host.
  9. Afterwards, double-check the number of paths and also that all the required VMkernel adapters are listed as active in the adapter's properties. Please note that the number of listed paths will still be more than what was originally listed before you began, e.g. if you originally had 10 paths and three VMkernel adapters bound, the number of paths will now show as 30 (10 x 3).
  10. Verify that multipathing (e.g. round-robin) is enabled in the properties of the storage datastore(s).
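
As a rough CLI equivalent of the binding and rescan above, and with vmhba33, vmk2 and vmk3 as placeholder names:

  # Bind each iSCSI VMkernel port to the software iSCSI adapter
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

  # Confirm the bindings, then rescan the adapter so the new paths are discovered
  esxcli iscsi networkportal list --adapter=vmhba33
  esxcli storage core adapter rescan --adapter=vmhba33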

Using vSphere 6 web client

  1. In the host Manage tab, click Storage.
  2. Select your iSCSI Software Adapter vmhba.
  3. In the adapter details pane click the Network Port Binding tab.
  4. Click the + symbol to bring up a list of eligible adapters.
    Note: the Network Port Binding list itself should be empty at this point, as nothing has been bound yet.
  5. Check the box beside your 2 storage vmkernel ports and click OK.
  6. Click the Rescan Adapter icon to rescan the iSCSI Software Adapter. Afterwards, double-check the number of paths and also that all the required VMkernel adapters are listed as active in the adapter's properties. Please note that the number of listed paths will be more than what was previously listed, e.g. if you originally had 10 paths and now have three VMkernel adapters bound, the number of paths will show as 40 (10 x 3 plus the original 10).
  7. Restart the host.
  8. Afterwards, double-check the number of paths and also that all the required VMkernel adapters are listed as active in the adapter's properties. Please note that the number of listed paths will still be more than what was originally listed before you began, e.g. if you originally had 10 paths and three VMkernel adapters bound, the number of paths will now show as 30 (10 x 3).
  9. Verify that multipathing (e.g. round-robin) is enabled in the properties of the storage datastore(s). (A CLI sketch of checking and setting the path selection policy follows this list.)
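
For completeness, the path selection policy can also be checked and set per device from the ESXi shell as an alternative to the client. This is a sketch only; the naa.xxxxxxxx identifier is a placeholder for the identifier of your datastore's backing LUN.

  # Show current multipathing details (including the path selection policy) for each device
  esxcli storage nmp device list

  # Set Round Robin on a specific device
  esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR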

NB: if you would first like to watch a video showing some of the above in action, check out http://wahlnetwork.com/2012/12/20/configuring-proper-vsphere-iscsi-multipathing-via-binding-vmkernel-ports-video/ .
