Installing Cisco NFVI Hardware
Switch on the Cisco UCS C-Series or B-Series hardware before you install Cisco VIM. Depending on the pod type, you need to set up the CIMC connection or UCSM IP ahead of time; a minimal reachability pre-check sketch is given after the table and note below. The following table lists the UCS hardware options and the network connectivity protocol used with Virtual Extensible LAN (VXLAN) over a Linux bridge, VLAN over OVS, or VLAN over VPP. If Cisco Virtual Topology Services (VTS), an optional Cisco NFVI application, is installed, Virtual Topology Forwarder (VTF) is used with VXLAN for tenants and VLANs for providers on C-Series pods.
| UCS Pod Type | Compute and Controller Node | Storage Node | Network Connectivity Protocol |
|---|---|---|---|
| Rack Type | UCS C220/240 M4/M5 | UCS C240 M4 (SFF) with two internal SSDs | OVS/VLAN, VPP/VLAN (only on Intel NIC), or ACI/VLAN |
| Rack Type | Controller: UCS C220/240. Compute: HP DL360 Gen9; Quanta servers for Fullon or Edge pod | UCS C240 M4 (SFF) with two internal SSDs | OVS/VLAN |
| C-Series with Cisco VTS | UCS C220/240 M4 | UCS C240 M4 (SFF) with two internal SSDs | For tenants: VTF with VXLAN. For providers: VLAN |
| C-Series Micropod | UCS C240 M4/M5 with 12 HDD and 2 external SSDs, or UCS C220 M4/M5 with 6 HDD and 1 external SSD. Pod can be expanded to 16 computes; each compute has 2x1.2 TB HDD. | Not applicable, as storage is integrated with the compute and controller nodes | OVS/VLAN or VPP/VLAN (on Intel NIC) |
| C-Series Hyperconverged | UCS C240 M4/M5 | UCS C240 M4/M5 (SFF) with 10 HDD and two external SSDs, acts as compute node | OVS/VLAN |
| B-Series | UCS B200 M4 | UCS C240 M4 (SFF) with two internal SSDs | OVS/VLAN |
Note
The storage nodes boot off two internal SSDs. Each storage node also has four external SSDs for journaling, which gives a 1:5 SSD-to-disk ratio (assuming a chassis filled with 20 spinning disks). Each C-Series pod has either a dual-port 10 GE Cisco VIC 1227 card or a dual-port/quad-port Intel X710 card. UCS B-Series blade servers only support Cisco VIC 1340 and 1380 NICs. For more information on Cisco vNICs, see LAN and SAN Connectivity for a Cisco UCS Blade. Cisco VIM has a Micropod (based on UCS M4/M5 hardware) which works on Cisco VIC 1227 or Intel NIC 710, with OVS/VLAN or VPP/VLAN (for Intel NIC only) as the virtual network protocol. The Micropod provides a small, functional, but redundant cloud with the capability of adding standalone computes (maximum of 16) to an existing pod.
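Before you start the Cisco VIM installation, it can help to confirm that each CIMC (C-Series) or UCSM (B-Series) management IP you set up earlier actually answers on its HTTPS management port. The sketch below is illustrative only and is not part of Cisco VIM; the IP addresses are placeholders for your own management addresses, and the check is limited to basic TCP reachability on port 443.

```python
#!/usr/bin/env python3
"""Minimal pre-install check: confirm CIMC/UCSM management IPs answer on HTTPS.

The addresses below are placeholders; replace them with the CIMC IPs of your
C-Series servers or the UCSM IP of your B-Series domain.
"""
import socket

MGMT_IPS = ["10.10.10.11", "10.10.10.12", "10.10.10.13"]  # placeholder IPs
HTTPS_PORT = 443
TIMEOUT_S = 5


def is_reachable(ip, port=HTTPS_PORT, timeout=TIMEOUT_S):
    """Return True if a TCP connection to ip:port succeeds within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for ip in MGMT_IPS:
        state = "reachable" if is_reachable(ip) else "NOT reachable"
        print(f"{ip}: {state}")
```

If an address does not respond, recheck the CIMC or UCSM network settings before proceeding with the installation.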
Cisco VIM supports an M4/M5-based Micropod on a VIC/NIC system with OVS, extending SRIOV support to a 2x2-port Intel X520 or 2x40G XL710 NIC card. The same pod can be extended to include M5 computes that have a 40G Cisco VIC, with an option to use a 2x40G XL710 Intel NIC for SRIOV.
Note |
M5 can only use 2x40G XL710 for SRIOV. |
The M5-based Micropod is based on the Intel NIC 710 and supports SRIOV over the XL710, with OVS/VLAN or VPP/VLAN as the virtual network protocol. From release Cisco VIM 2.4.2 onwards, a 40G M5-based Micropod is supported on a VIC (40G)/NIC (2-XL710 for SRIOV) system.
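If you want a quick way to confirm that an X520 or XL710 port on a compute node actually reports SRIOV capability before planning the SRIOV configuration, the standard Linux sysfs attribute exposed by the PF driver can be read directly. This is an illustrative sketch under the assumption of a Linux host with the ixgbe (X520) or i40e (XL710) driver loaded; the interface names are placeholders and vary by system.

```python
#!/usr/bin/env python3
"""Report SRIOV virtual-function capability for the given interfaces.

Reads /sys/class/net/<iface>/device/sriov_totalvfs, which SRIOV-capable PF
drivers expose. The interface names below are examples only; adjust them for
your compute node.
"""
from pathlib import Path

INTERFACES = ["enp94s0f0", "enp94s0f1"]  # placeholder interface names


def total_vfs(iface):
    """Return the maximum number of VFs the PF supports, or None if the
    SRIOV attribute is not present for this interface."""
    attr = Path(f"/sys/class/net/{iface}/device/sriov_totalvfs")
    if not attr.exists():
        return None
    return int(attr.read_text().strip())


if __name__ == "__main__":
    for iface in INTERFACES:
        vfs = total_vfs(iface)
        if vfs is None:
            print(f"{iface}: no SRIOV capability reported")
        else:
            print(f"{iface}: supports up to {vfs} VFs")
```

An absent attribute generally means the port does not support SRIOV.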
In addition, the Cisco Nexus 9372, 93180YC, or 9396PX is available to serve the Cisco NFVI ToR function.
After verifying that you have the required Cisco UCS servers, blades, and Nexus 93xx switches, install the hardware by following the procedures at the following links:
The figure below shows a C-Series Cisco NFVI pod. Although the figure shows a full complement of UCS C220 compute nodes, the number of compute nodes varies depending on the implementation requirements. The UCS C220 control and compute nodes can be replaced with UCS C240 series nodes; however, in that case the number of computes fitting in one chassis system is reduced by half.
Note |
The combination of UCS C220 and UCS C240 within the compute and control nodes is not supported.
For more information on the wiring schematics of various pod configurations, see the Appendix.