Introduction
This document describes the teaming and bonding options available for common operating systems when using Cisco Virtual Interface Card (VIC) adapters in Cisco Unified Computing System (UCS) servers (B-Series, C-Series Integrated, S-Series Integrated, HyperFlex Series) connected to a UCS Fabric Interconnect.
Prerequisites
Requirements
Cisco recommends that you have knowledge of these topics:
- Cisco UCS and UCS Manager (UCSM)
- Cisco VIC
- VMware ESX Versions 4.1 and later
- Microsoft Windows Server Version 2008 R2
- Microsoft Windows Server Version 2012 and later
- Microsoft Windows Server Version 2016 and later
- Linux operating systems
Components Used
The information in this document is based on these software and hardware versions:
- UCSM version 2.2(6c)
- Cisco UCS server with a VIC card
- VIC firmware Version 4.0(8b)
- VMware ESXi Version 5.5, Update 3
- Microsoft Windows Server Version 2008 R2 SP1
- Microsoft Windows Server Version 2012 R2
- Microsoft Windows Server Version 2016
- Red Hat Enterprise Linux (RHEL) 6.6
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.
Support matrix
All switch-independent teaming/bonding methods are supported in the UCS Fabric Interconnect environment. These bonding modes do not require any special configuration on the switch/UCS side.
The one restriction is that any load-balancing method used in a switch-independent configuration must send traffic for a given source MAC address through a single UCS Fabric Interconnect, except in a failover event (when the traffic is sent to the alternate Fabric Interconnect), and must not periodically move traffic to redistribute load.
Load-balancing methods that operate on mechanisms beyond the source MAC address (such as IP address hashing or TCP port hashing) can cause instability because a given MAC address flaps between UCS Fabric Interconnects. Such configurations are therefore unsupported.
Switch-dependent bonding modes require a port-channel to be configured on the switch side. The Fabric Interconnect, which is the switch in this case, cannot form a port-channel with the VIC card in the server. Furthermore, these bonding modes cause MAC flapping on the UCS and upstream switches and are therefore unsupported.
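As a switch-independent example, the supported load-balancing policy on an ESXi standard vSwitch can be set from the ESXi shell. This is a sketch; the vSwitch name (vSwitch0) is a placeholder for your environment:

```shell
# Show the current teaming/failover policy on the vSwitch
esxcli network vswitch standard policy failover get -v vSwitch0

# Set the load-balancing policy to Route Based on Originating Port ID
# ("portid"), which is supported behind UCS Fabric Interconnects.
# Avoid "iphash", which requires a switch-side port-channel.
esxcli network vswitch standard policy failover set -v vSwitch0 -l portid
```

The equivalent setting is available in the vSphere client under the vSwitch teaming and failover properties.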
This matrix applies to both the native (bare-metal) operating system and a hypervisor environment with virtual machines.
| Operating system | Supported | Not supported |
|---|---|---|
| VMware ESXi | Route Based on Originating Port ID<br>Route Based on Source MAC Hash | Route Based on IP Hash<br>Route Based on Physical NIC Load |
| Windows 2012 and later standalone NIC Teaming (using native driver)<br>Windows 2016 and later Switch Embedded Teaming (SET) | Switch-independent modes (Active/Standby and Active/Active²) when using load balancing method:<br>- Hyper-V port | Switch dependent:<br>- Static teaming<br>- LACP<br>Switch-independent modes (Active/Standby and Active/Active²) when using load balancing method:<br>- Dynamic<br>- Address Hash |
| Windows 2008 R2 SP1 (using Cisco VIC NIC teaming driver) | Active Backup (mode 1)<br>Active Backup with Failback to Active (mode 2)<br>Active Active Transmit Load Balancing (mode 3) | 802.3ad LACP (mode 4) |
| Linux operating systems¹ | active-backup (mode 1)<br>balance-tlb (mode 5)<br>balance-alb (mode 6) | balance-rr (mode 0)<br>balance-xor (mode 2)<br>broadcast (mode 3)<br>802.3ad (mode 4) |
1. fail_over_mac=1 must be used in order to avoid the limitations documented in CSCva09592.
2. When connected behind an ACI fabric, certain active/active algorithms can cause endpoints to move from one leaf switch to another. When a leaf detects too many endpoint moves, it disables learning for the endpoint's bridge domain and logs an error message.
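For Linux, a supported switch-independent bond on RHEL 6.x can be configured with ifcfg files. This is a sketch; the device names (bond0, eth0, eth1) and the IP addressing are placeholders for your environment:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
NETMASK=255.255.255.0
# mode 1 (active-backup) is supported behind UCS Fabric Interconnects;
# fail_over_mac=1 avoids the limitation documented in CSCva09592.
BONDING_OPTS="mode=active-backup miimon=100 fail_over_mac=1"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```

After restarting the network service, the bond state and active slave can be verified with `cat /proc/net/bonding/bond0`.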
Related information