Control

This chapter contains the following topics:

Fabrics

The following terms are referred to in the document:

  • Greenfield Deployments: Applicable for provisioning new VXLAN EVPN fabrics, and eBGP based Routed fabrics.

  • Brownfield Deployments: Applicable for existing VXLAN EVPN fabrics:

    • Migrate CLI configured VXLAN EVPN fabrics to DCNM using the Easy_Fabric_11_1 fabric template.

    • NFM migration to Cisco DCNM using the Easy_Fabric_11_1 fabric template.

For information about upgrades, refer to the Cisco DCNM Installation and Upgrade Guide for LAN Fabric Deployment.

This section contains the following topics:

VXLAN BGP EVPN Fabrics Provisioning

DCNM 11 introduces an enhanced “Easy” fabric workflow for unified underlay and overlay provisioning of VXLAN BGP EVPN configuration on Nexus 9000 and 3000 series of switches. The configuration of the fabric is achieved via a powerful, flexible, and customizable template-based framework. Using minimal user inputs, an entire fabric can be brought up with Cisco recommended best practice configurations, in a short period of time. The set of parameters exposed in the Fabric Settings allow users to tailor the fabric to their preferred underlay provisioning options.

Border devices in a fabric typically provide external connectivity via peering with appropriate edge/core/WAN routers. These edge/core routers may either be managed or monitored by DCNM. These devices are placed in a special fabric called the External Fabric. The same DCNM controller can manage multiple VXLAN BGP EVPN fabrics while also offering easy provisioning and management of Layer-2 and Layer-3 DCI underlay and overlay configuration among these fabrics using a special construct called a Multi-Site Domain (MSD) fabric.

Note that in this document the terms switch and device are used interchangeably.

The DCNM GUI functions for creating and deploying VXLAN BGP EVPN fabrics are as follows:

Control > Fabric Builder menu option (under the Fabrics sub menu).

Create, edit, and delete a fabric:

  • Create new VXLAN, MSD, and external VXLAN fabrics.

  • View the VXLAN and MSD fabric topologies, including connections between fabrics.

  • Update fabric settings.

  • Save and deploy updated changes.

  • Delete a fabric (if devices are removed).

Device discovery and provisioning start-up configurations on new switches:

  • Add switch instances to the fabric.

  • Provision start-up configurations and an IP address to a new switch through POAP configuration.

  • Update switch policies, save, and deploy updated changes.

  • Create intra-fabric and inter-fabric links (also called Inter-Fabric Connections [IFCs]).

Control > Interfaces menu option (under the Fabrics sub menu).

Underlay provisioning:

  • Create, deploy, view, edit and delete a port-channel, vPC switch pair, Straight Through FEX (ST-FEX), Active-Active FEX (AA-FEX), loopback, subinterface, etc.

  • Create breakout and unbreakout ports.

  • Shut down and bring up interfaces.

  • Rediscover ports and view interface configuration history.

Control > Networks and Control > VRFs menu options (under the Fabrics sub menu).

Overlay network provisioning.

  • Create new overlay networks and VRFs (from the range specified in fabric creation).

  • Provision the overlay networks and VRFs on the switches of the fabric.

  • Undeploy the networks and VRFs from the switches.

  • Remove the provisioning from the fabric in DCNM.

Control> Services menu option (under the Fabrics sub menu).

Provisioning of configuration on service leafs to which L4-7 service appliances may be attached. For more information, see L4-L7 Service Basic Workflow.

This chapter mostly covers configuration provisioning for a single VXLAN BGP EVPN fabric. EVPN Multi-Site provisioning for Layer-2/Layer-3 DCI across multiple fabrics using the MSD fabric, is documented in a separate chapter. The deployment details of how overlay Networks and VRFs can be easily provisioned from the DCNM, is covered under Creating and Deploying Networks and VRFs.

Guidelines for VXLAN BGP EVPN Fabrics Provisioning

  • For any switch to be successfully imported into DCNM, the user specified for discovery/import, should have the following permissions:

    • SSH access to the switch

    • Ability to perform SNMPv3 queries

    • Ability to run the show commands including show run, show interfaces, etc.

  • The switch discovery user need not have the ability to make any configuration changes on the switches. It is primarily used for read access.

  • When an invalid command is deployed by DCNM to a device, for example, a command with an invalid key chain due to an invalid entry in the fabric settings, an error is generated displaying this issue. This error is not cleared after correcting the invalid fabric entry. You need to manually cleanup or delete the invalid commands to clear the error.

    Note that the fabric errors related to the command execution are automatically cleared only when the same failed command succeeds in the subsequent deployment.

  • LAN credentials are required to be set of any user that needs to be perform any write access to the device. LAN credentials need to be set on the DCNM, on a per user per device basis. When a user imports a device into the Easy Fabric, and LAN credentials are not set for that device, DCNM moves this device to a migration mode. Once the user sets the appropriate LAN credentials for that device, a subsequent Save & Deploy will retrigger the device import process.

  • The Save & Deploy button triggers the intent regeneration for the entire fabric as well as a configuration compliance check for all the switches within the fabric. This button is required but not limited to the following cases:

    • A switch or a link is added, or any change in the topology

    • A change in the fabric settings that must be shared across the fabric

    • A switch is removed or deleted

    • A new vPC pairing or unpairing is done

    • A change in the role for a device

    When you click Save & Deploy, the changes in the fabric are evaluated, and the configuration for the entire fabric is generated. You can preview the generated configuration, and then deploy it at a fabric level. Therefore, Save & Deploy can take more time depending on the size of the fabric.

    When you right-click on a switch icon, you can use the Deploy Config option to deploy per switch configurations. This option is a local operation for a switch, that is, the expected configuration or intent for a switch is evaluated against it’s current running configuration, and a config compliance check is performed for the switch to get the In-Sync or Out-of-Sync status. If the switch is out of sync, the user is provided with a preview of all the configurations running in that particular switch that vary from the intent defined by the user for that respective switch.

  • Persistent configuration diff is seen for the command line: system nve infra-vlan int force . The persistent diff occurs if you have deployed this command via the freeform configuration to the switch. Although the switch requires the force keyword during deployment, the running configuration that is obtained from the switch in DCNM does not display the force keyword. Therefore, the system nve infra-vlan int force command always shows up as a diff.

    The intent in DCNM contains the line:

    system nve infra-vlan int force

    The running config contains the line:

    system nve infra-vlan int

    As a workaround to fix the persistent diff, edit the freeform config to remove the force keyword after the first deployment such that it is system nve infra-vlan int .

    The force keyword is required for the initial deploy and must be removed after a successful deploy. You can confirm the diff by using the Side-by-side Comparison tab in the Config Preview window.

    The persistent diff is also seen after a write erase and reload of a switch. Update the intent on DCNM to include the force keyword, and then you need to remove the force keyword after the first deployment.

  • When the switch contains the hardware access-list tcam region arp-ether 256 command, which is deprecated without the double-wide keyword, the below warning is displayed:

    WARNING: Configuring the arp-ether region without "double-wide" is deprecated and can result in silent non-vxlan packet drops. Use the "double-wide" keyword when carving TCAM space for the arp-ether region.

    Since the original hardware access-list tcam region arp-ether 256 command does not match the policies in DCNM, this config is captured in the switch_freeform policy. After the hardware access-list tcam region arp-ether 256 double-wide command is pushed to the switch, the original tcam command that does not contain the double-wide keyword is removed.

    You must manually remove the hardware access-list tcam region arp-ether 256 command from the switch_freeform policy. Otherwise, config compliance shows a persistent diff.

    Here is an example of the hardware access-list command on the switch:

    
    switch(config)# show run | inc arp-ether
    switch(config)# hardware access-list tcam region arp-ether 256
    Warning: Please save config and reload the system for the configuration to take effect
    switch(config)# show run | inc arp-ether
    hardware access-list tcam region arp-ether 256
    switch(config)# 
    switch(config)# hardware access-list tcam region arp-ether 256 double-wide 
    Warning: Please save config and reload the system for the configuration to take effect
    switch(config)# show run | inc arp-ether
    hardware access-list tcam region arp-ether 256 double-wide
    

    You can see that the original tcam command is overwritten.

Creating a New VXLAN BGP EVPN Fabric

This procedure shows how to create a new VXLAN BGP EVPN fabric.

This procedure contains descriptions for the IPv4 underlay. For information about IPv6 underlay, see IPv6 Underlay Support for Easy Fabric.

  1. Choose Control > Fabric Builder.

    The Fabric Builder window appears. When you log in for the first time, the Fabrics section has no entries. After you create a fabric, it is displayed on the Fabric Builder window, wherein a rectangular box represents each fabric.

    A standalone or member fabric contains Switch_Fabric (in the Type field), the AS number (in the ASN field), and mode of replication (in the Replication Mode field).

  2. Click Create Fabric, the Add Fabric screen appears.

    The fields are explained:

    Fabric Name - Enter the name of the fabric.

    Fabric Template - From the drop-down menu, choose the Easy_Fabric_11_1 fabric template. The fabric settings for creating a standalone fabric appear.

    The tabs and their fields in the screen are explained in the subsequent points. The overlay and underlay network parameters are included in these tabs.


    Note


    If you are creating a standalone fabric as a potential member fabric of an MSD fabric (used for provisioning overlay networks for fabrics that are connected through EVPN Multi-Site technology), then browse through the Multi-Site Domain for VXLAN BGP EVPN Fabrics topic before member fabric creation.


  3. The General tab is displayed by default. The fields in this tab are:

    BGP ASN: Enter the BGP AS number the fabric is associated with.

    Enable IPv6 Underlay: Enable the IPv6 underlay feature. For information, see IPv6 Underlay Support for Easy Fabric.

    Enable IPv6 Link-Local Address: Enables the IPv6 Link-Local address.

    Fabric Interface Numbering : Specifies whether you want to use point-to-point (p2p) or unnumbered networks.

    Underlay Subnet IP Mask - Specifies the subnet mask for the fabric interface IP addresses.

    Underlay Routing Protocol : The IGP used in the fabric, OSPF, or IS-IS.

    Route-Reflectors (RRs) – The number of spine switches that are used as route reflectors for transporting BGP traffic. Choose 2 or 4 from the drop-down box. The default value is 2.

    To deploy spine devices as RRs, DCNM sorts the spine devices based on their serial numbers, and designates two or four spine devices as RRs. If you add more spine devices, existing RR configuration will not change.

    Increasing the count - You can increase the route reflectors from two to four at any point in time. Configurations are automatically generated on the other two spine devices designated as RRs.

    Decreasing the count - When you reduce four route reflectors to two, remove the unneeded route reflector devices from the fabric. Follow these steps to reduce the count from 4 to 2.

    1. Change the value in the drop-down box to 2.

    2. Identify the spine switches designated as route reflectors.

      An instance of the rr_state policy is applied on the spine switch if it is a route reflector. To find out if the policy is applied on the switch, right-click the switch, and choose View/edit policies. In the View/Edit Policies screen, search rr_state in the Template field. It is displayed on the screen.

    3. Delete the unneeded spine devices from the fabric (right-click the spine switch icon and choose Discovery > Remove from fabric).

      If you delete existing RR devices, the next available spine switch is selected as the replacement RR.

    4. Click Save & Deploy in the fabric topology window.

    You can preselect RRs and RPs before performing the first Save & Deploy operation. For more information, see Preselecting Switches as Route-Reflectors and Rendezvous-Points.

    Anycast Gateway MAC : Specifies the anycast gateway MAC address.

    NX-OS Software Image Version : Select an image from the list.

    If you upload Cisco NX-OS software images through the image upload option, the uploaded images are listed in this field. If you select an image, and save the Fabric Settings, the system checks that all the switches within the fabric have the selected version. If some devices do not run the image, a warning is prompted to perform an In-Service Software Upgrade (ISSU) to the specified image. The warning is also accompanied with a Resolve button. This takes the user to the image management screen with the mismatched switches auto selected for device upgrade/downgrade to the specified NX-OS image specified in Fabric Settings. Till, all devices run the specified image, the deployment process is incomplete.

    If you want to deploy more than one type of software image on the fabric switches, don’t specify any image. If an image is specified, delete it.

  4. Click the Replication tab. Most of the fields are auto generated. You can update the fields if needed.

    Replication Mode : The mode of replication that is used in the fabric for BUM (Broadcast, Unknown Unicast, Multicast) traffic. The choices are Ingress Replication or Multicast. When you choose Ingress replication, the multicast related fields get disabled.

    You can change the fabric setting from one mode to the other, if no overlay profile exists for the fabric.

    Multicast Group Subnet : IP address prefix used for multicast communication. A unique IP address is allocated from this group for each overlay network.

    In the DCNM 11.0(1) release, the replication mode change is not allowed if a policy template instance is created for the current mode. For example, if a multicast related policy is created and deployed, you cannot change the mode to Ingress.

    Enable Tenant Routed Multicast (TRM) – Select the check box to enable Tenant Routed Multicast (TRM) that allows overlay multicast traffic to be supported over EVPN/MVPN in the VXLAN BGP EVPN fabric.

    Default MDT Address for TRM VRFs: The multicast address for Tenant Routed Multicast traffic is populated. By default, this address is from the IP prefix specified in the Multicast Group Subnet field. When you update either field, ensure that the TRM address is chosen from the IP prefix specified in Multicast Group Subnet.

    For more information, see Overview of Tenant Routed Multicast.

    Rendezvous-Points - Enter the number of spine switches acting as rendezvous points.

    RP mode – Choose from the two supported multicast modes of replication, ASM (for Any-Source Multicast [ASM]) or BiDir (for Bidirectional PIM [BIDIR-PIM]).

    When you choose ASM, the BiDir related fields are not enabled. When you choose BiDir, the BiDir related fields are enabled.


    Note


    BIDIR-PIM is supported on Cisco's Cloud Scale Family platforms 9300-EX and 9300-FX/FX2, and software release 9.2(1) onwards.


    When you create a new VRF for the fabric overlay, this address is populated in the Underlay Multicast Address field, in the Advanced tab.

    Underlay RP Loopback ID – The loopback ID used for the rendezvous point (RP), for multicast protocol peering purposes in the fabric underlay.

    The next two fields are enabled if you choose BIDIR-PIM as the multicast mode of replication.

    Underlay Primary RP Loopback ID – The primary loopback ID used for the phantom RP, for multicast protocol peering purposes in the fabric underlay.

    Underlay Backup RP Loopback ID – The secondary loopback ID used for the phantom RP, for multicast protocol peering purposes in the fabric underlay.

    Underlay Second Backup RP Loopback Id and Underlay Third Backup RP Loopback Id: Used for the second and third fallback Bidir-PIM Phantom RP.

  5. Click the vPC tab. Most of the fields are auto generated. You can update the fields if needed.

    vPC Peer Link VLAN – VLAN used for the vPC peer link SVI.

    Make vPC Peer Link VLAN as Native VLAN - Enables vPC peer link VLAN as Native VLAN.

    vPC Peer Keep Alive option – Choose the management or loopback option. If you want to use IP addresses assigned to the management port and the management VRF, choose management. If you use IP addresses assigned to loopback interfaces (and a non-management VRF), choose loopback.

    If you use IPv6 addresses, you must use loopback IDs.

    vPC Auto Recovery Time - Specifies the vPC auto recovery time-out period in seconds.

    vPC Delay Restore Time - Specifies the vPC delay restore period in seconds.

    vPC Peer Link Port Channel ID - Specifies the Port Channel ID for a vPC Peer Link. By default, the value in this field is 500.

    vPC IPv6 ND Synchronize – Enables IPv6 Neighbor Discovery synchronization between vPC switches. The check box is enabled by default. Clear the check box to disable the function.

    vPC advertise-pip - Select the check box to enable the Advertise PIP feature.

    You can enable the advertise PIP feature on a specific vPC as well. For more information, see Advertising PIP on vPC.

    Enable the same vPC Domain Id for all vPC Pairs: Enable the same vPC Domain ID for all vPC pairs. When you select this field, the vPC Domain Id field is editable.

    vPC Domain Id - Specifies the vPC domain ID to be used on all vPC pairs.

    vPC Domain Id Range - Specifies the vPC Domain Id range to use for new pairings.

    Enable Qos for Fabric vPC-Peering - Enable QoS on spines for guaranteed delivery of vPC Fabric Peering communication. For more information, see QoS for Fabric vPC-Peering.


    Note


    QoS for vPC fabric peering and queuing policies options in fabric settings are mutually exclusive.


    Qos Policy Name - Specifies QoS policy name that should be same on all fabric vPC peering spines. The default name is spine_qos_for_fabric_vpc_peering.

  6. Click the Protocols tab. Most of the fields are auto generated. You can update the fields if needed.

    Underlay Routing Loopback Id - The loopback interface ID is populated as 0 since loopback0 is usually used for fabric underlay IGP peering purposes.

    Underlay VTEP Loopback Id - The loopback interface ID is populated as 1 since loopback1 is used for the VTEP peering purposes.

    Underlay Routing Protocol Tag - The tag defining the type of network.

    OSPF Area ID – The OSPF area ID, if OSPF is used as the IGP within the fabric.


    Note


    The OSPF or IS-IS authentication fields are enabled based on your selection in the Underlay Routing Protocol field in the General tab.


    Enable OSPF Authentication – Select the check box to enable OSPF authentication. Deselect the check box to disable it. If you enable this field, the OSPF Authentication Key ID and OSPF Authentication Key fields get enabled.

    OSPF Authentication Key ID - The Key ID is populated.

    OSPF Authentication Key - The OSPF authentication key must be the 3DES key from the switch.


    Note


    Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key and enter it in this field. Refer, Retrieving the Authentication Key section for details.

    IS-IS Level - Select the IS-IS level from this drop-down list.

    Enable IS-IS Authentication - Select the check box to enable IS-IS authentication. Deselect the check box to disable it. If you enable this field, the IS-IS authentication fields are enabled.

    IS-IS Authentication Keychain Name - Enter the Keychain name, such as CiscoisisAuth.

    IS-IS Authentication Key ID - The Key ID is populated.

    IS-IS Authentication Key - Enter the Cisco Type 7 encrypted key.


    Note


    Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key and enter it in this field. Refer the Retrieving the Authentication Key section for details.


    Enable BGP Authentication - Select the check box to enable BGP authentication. Deselect the check box to disable it. If you enable this field, the BGP Authentication Key Encryption Type and BGP Authentication Key fields are enabled.


    Note


    If you enable BGP authentication using this field, leave the iBGP Peer-Template Config field blank to avoid duplicate configuration.

    BGP Authentication Key Encryption Type – Choose the 3 for 3DES encryption type, or 7 for Cisco encryption type.

    BGP Authentication Key - Enter the encrypted key based on the encryption type.


    Note


    Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key and enter it in the BGP Authentication Key field. Refer the Retrieving the Authentication Key section for details.

    Enable PIM Hello Authentication - Enables the PIM hello authentication.

    PIM Hello Authentication Key - Specifies the PIM hello authentication key.

    Enable BFD: Select the check box to enable feature bfd on all switches in the fabric. This feature is valid only on IPv4 underlay and the scope is within a fabric.

    From Cisco DCNM Release 11.3(1), BFD within a fabric is supported natively. The BFD feature is disabled by default in the Fabric Settings. If enabled, BFD is enabled for the underlay protocols with the default settings. Any custom required BFD configurations must be deployed via the per switch freeform or per interface freeform policies.

    The following config is pushed after you select the Enable BFD check box:

    feature bfd


    Note


    After you upgrade from DCNM Release 11.2(1) with BFD enabled to DCNM Release 11.3(1), the following configurations are pushed on all P2P fabric interfaces:

    
    no ip redirects
    no ipv6 redirects

    For information about BFD feature compatibility, refer your respective platform documentation and for information about the supported software images, see Compatibility Matrix for Cisco DCNM.

    Enable BFD for iBGP: Select the check box to enable BFD for the iBGP neighbor. This option is disabled by default.

    Enable BFD for OSPF: Select the check box to enable BFD for the OSPF underlay instance. This option is disabled by default, and it is grayed out if the link state protocol is ISIS.

    Enable BFD for ISIS: Select the check box to enable BFD for the ISIS underlay instance. This option is disabled by default, and it is grayed out if the link state protocol is OSPF.

    Enable BFD for PIM: Select the check box to enable BFD for PIM. This option is disabled by default, and it is be grayed out if the replication mode is Ingress.

    Here are the examples of the BFD global policies:

    
    router ospf <ospf tag>
       bfd
    
    router isis <isis tag>
      address-family ipv4 unicast
        bfd
    
    ip pim bfd
    
    router bgp <bgp asn>
      neighbor <neighbor ip>
        bfd
    

    Enable BFD Authentication: Select the check box to enable BFD authentication. If you enable this field, the BFD Authentication Key ID and BFD Authentication Key fields are editable.


    Note


    BFD Authentication is not supported when the Fabric Interface Numbering field under the General tab is set to unnumbered. The BFD authentication fields will be grayed out automatically. BFD authentication is valid for only for P2P interfaces.


    BFD Authentication Key ID: Specifies the BFD authentication key ID for the interface authentication. The default value is 100.

    BFD Authentication Key: Specifies the BFD authentication key.

    For information about how to retrieve the BFD authentication parameters, see Retrieving the Encrypted BFD Authentication Key.

    iBGP Peer-Template Config – Add iBGP peer template configurations on the leaf switches to establish an iBGP session between the leaf switch and route reflector.

    If you use BGP templates, add the authentication configuration within the template and clear the Enable BGP Authentication check box to avoid duplicate configuration.

    In the sample configuration, the 3DES password is displayed after password 3.

    router bgp 65000
        password 3 sd8478fswerdfw3434fsw4f4w34sdsd8478fswerdfw3434fsw4f4w
    

    Until Cisco DCNM Release 11.3(1), iBGP peer template for iBGP definition on the leafs or border role devices and BGP RRs were same. From DCNM Release 11.4(1), the following fields can be used to specify different configurations:

    • iBGP Peer-Template Config – Specifies the config used for RR and spines with border role.

    • Leaf/Border/Border Gateway iBGP Peer-Template Config – Specifies the config used for leaf, border, or border gateway. If this field is empty, the peer template defined in iBGP Peer-Template Config is used on all BGP enabled devices (RRs, leafs, border, or border gateway roles).

    In brownfield migration, if the spine and leaf use different peer template names, both iBGP Peer-Template Config and Leaf/Border/Border Gateway iBGP Peer-Template Config fields need to be set according to the switch config. If spine and leaf use the same peer template name and content (except for the “route-reflector-client” CLI), only iBGP Peer-Template Config field in fabric setting needs to be set. If the fabric settings on iBGP peer templates do not match the existing switch configuration, an error message is generated and the migration will not proceed.

  7. Click the Advanced tab. Most of the fields are auto generated. You can update the fields if needed.

    VRF Template and VRF Extension Template: Specifies the VRF template for creating VRFs, and the VRF extension template for enabling VRF extension to other fabrics.

    Network Template and Network Extension Template: Specifies the network template for creating networks, and the network extension template for extending networks to other fabrics.

    Site ID - The ID for this fabric if you are moving this fabric within an MSD. The site ID is mandatory for a member fabric to be a part of an MSD. Each member fabric of an MSD has a unique site ID for identification.

    Intra Fabric Interface MTU - Specifies the MTU for the intra fabric interface. This value should be an even number.

    Layer 2 Host Interface MTU - Specifies the MTU for the layer 2 host interface. This value should be an even number.

    Power Supply Mode - Choose the appropriate power supply mode.

    CoPP Profile - Choose the appropriate Control Plane Policing (CoPP) profile policy for the fabric. By default, the strict option is populated.

    VTEP HoldDown Time - Specifies the NVE source interface hold down time.

    Brownfield Overlay Network Name Format: Enter the format to be used to build the overlay network name during a brownfield import or migration. The network name should not contain any white spaces or special characters except underscore (_) and hyphen (-). The network name must not be changed once the brownfield migration has been initiated. See the Creating Networks for the Standalone Fabric section for the naming convention of the network name. The syntax is [<string> | $$VLAN_ID$$] $$VNI$$ [<string>| $$VLAN_ID$$] and the default value is Auto_Net_VNI$$VNI$$_VLAN$$VLAN_ID$$. When you create networks, the name is generated according to the syntax you specify. The following table describes the variables in the syntax.

    Variables

    Description

    $$VNI$$

    Specifies the network VNI ID found in the switch configuration. This is a mandatory keyword required to create unique network names.

    $$VLAN_ID$$

    Specifies the VLAN ID associated with the network.

    VLAN ID is specific to switches, hence DCNM picks the VLAN ID from one of the switches, where the network is found, randomly and use it in the name.

    We recommend not to use this unless the VLAN ID is consistent across the fabric for the VNI.

    <string>

    This variable is optional and you can enter any number of alphanumeric characters that meet the network name guidelines.

    Example overlay network name: Site_VNI12345_VLAN1234


    Note


    Ignore this field for greenfield deployments. The Brownfield Overlay Network Name Format applies for the following brownfield imports:

    • CLI-based overlays

    • Configuration profile-based overlay where the configuration profiles were created in Cisco DCNM Release

      10.4(2).


    Enable CDP for Bootstrapped Switch - Enables CDP on management (mgmt0) interface for bootstrapped switch. By default, for bootstrapped switches, CDP is disabled on the mgmt0 interface.

    Enable VXLAN OAM - Enables the VXLAM OAM functionality for devices in the fabric. This is enabled by default. Clear the check box to disable VXLAN OAM function.

    If you want to enable the VXLAN OAM function on specific switches and disable on other switches in the fabric, you can use freeform configurations to enable OAM and disable OAM in the fabric settings.


    Note


    The VXLAN OAM feature in Cisco DCNM is only supported on a single fabric or site.


    Enable Tenant DHCP – Select the check box to enable feature dhcp and associated configurations globally on all switches in the fabric. This is a pre-requisite for support of DHCP for overlay networks that are part of the tenant VRFs.


    Note


    Ensure that Enable Tenant DHCP is enabled before enabling DHCP related parameters in the overlay profiles.


    Enable NX-API - Specifies enabling of NX-API on HTTPS. This check box is checked by default.

    Enable NX-API on HTTP on Port - Specifies enabling of NX-API on HTTP. Enable this check box and the Enable NX-API check box to use HTTP. This check box is checked by default. If you uncheck this check box, the applications that use NX-API and supported by Cisco DCNM, such as Endpoint Locator (EPL), Layer 4-Layer 7 services (L4-L7 services), VXLAN OAM, and so on, start using the HTTPS instead of HTTP.


    Note


    If you check the Enable NX-API check box and the Enable NX-API on HTTP check box, applications use HTTP.


    Enable Policy-Based Routing (PBR) - Select this check box to enable routing of packets based on the specified policy. Starting with Cisco NX-OS Release 7.0(3)I7(1) and later releases, this feature works on Cisco Nexus 9000 Series switches with Nexus 9000 Cloud Scale (Tahoe) ASICs. This feature is used along with the Layer 4-Layer 7 service workflow. For information on Layer 4-Layer 7 service, refer the Layer 4-Layer 7 Service chapter.

    Enable Strict Config Compliance - Enable the Strict Config Compliance feature by selecting this check box. By default, this feature is disabled. For more information, refer Strict Configuration Compliance.

    Enable AAA IP Authorization - Enables AAA IP authorization, when IP Authorization is enabled in the remote authentication server. This is required to support DCNM in scenarios where customers have strict control of which IP addresses can have access to the switches.

    Enable DCNM as Trap Host - Select this check box to enable DCNM as a SNMP trap destination. Typically, for a native HA DCNM deployment, the eth1 VIP IP address will be configured as SNMP trap destination on the switches. By default, this check box is enabled.

    Greenfield Cleanup Option – Enable the switch cleanup option for switches imported into DCNM with Preserve-Config=No, without a switch reload. This option is typically recommended only for the fabric environments with Cisco Nexus 9000v Switches to improve on the switch clean up time. The recommended option for Greenfield deployment is to employ Bootstrap or switch cleanup with a reboot. In other words, this option should be unchecked.

    Enable Precision Time Protocol (PTP): Enables PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Source Loopback Id and PTP Domain Id fields are editable. For more information, see Precision Time Protocol for Easy Fabric.

    PTP Source Loopback Id: Specifies the loopback interface ID Loopback that is used as the Source IP Address for all PTP packets. The valid values range from 0 to 1023. The PTP loopback ID cannot be the same as RP, Phantom RP, NVE, or MPLS loopback ID. Otherwise, an error will be generated. The PTP loopback ID can be the same as BGP loopback or user-defined loopback which is created from DCNM.

    If the PTP loopback ID is not found during Save & Deploy, the following error is generated:

    Loopback interface to use for PTP source IP is not found. Create PTP loopback interface on all the devices to enable PTP feature.

    PTP Domain Id: Specifies the PTP domain ID on a single network. The valid values range from 0 to 127.

    Enable MPLS Handoff: Select the check box to enable the MPLS Handoff feature. For more information, see the Border Provisioning Use Case in VXLAN BGP EVPN Fabrics - MPLS SR and LDP Handoff chapter.

    Underlay MPLS Loopback Id: Specifies the underlay MPLS loopback ID. The default value is 101.

    Enable TCAM Allocation: TCAM commands are automatically generated for VXLAN and vPC Fabric Peering when enabled.

    Enable Default Queuing Policies: Check this check box to apply QoS policies on all the switches in this fabric. To remove the QoS policies that you applied on all the switches, uncheck this check box, update all the configurations to remove the references to the policies, and save and deploy. From Cisco DCNM Release 11.3(1), pre-defined QoS configurations are included that can be used for various Cisco Nexus 9000 Series Switches. When you check this check box, the appropriate QoS configurations are pushed to the switches in the fabric. The system queuing is updated when configurations are deployed to the switches. You can perform the interface marking with defined queuing policies, if required, by adding the required configuration to the per interface freeform block.

    From Cisco DCNM Release 11.4(1), the DSCP mapping for QoS 5 has changed from 40 to 46 in the policy template. For DCNM 11.3(1) deployments that have been upgraded to 11.4(1), you will see the diffs that need to be deployed.

    Review the actual queuing policies by opening the policy file in the template editor. From Cisco DCNM Web UI, choose Control > Template Library. Search for the queuing policies by the policy file name, for example, queuing_policy_default_8q_cloudscale. Choose the file and click the Modify/View template icon to edit the policy.

    See the Cisco Nexus 9000 Series NX-OS Quality of Service Configuration Guide for platform specific details.

    N9K Cloud Scale Platform Queuing Policy: Choose the queuing policy from the drop-down list to be applied to all Cisco Nexus 9200 Series Switches and the Cisco Nexus 9000 Series Switches that ends with EX, FX, and FX2 in the fabric. The valid values are queuing_policy_default_4q_cloudscale and queuing_policy_default_8q_cloudscale. Use the queuing_policy_default_4q_cloudscale policy for FEXes. You can change from the queuing_policy_default_4q_cloudscale policy to the queuing_policy_default_8q_cloudscale policy only when FEXes are offline.

    N9K R-Series Platform Queuing Policy: Choose the queuing policy from the drop-down list to be applied to all Cisco Nexus switches that ends with R in the fabric. The valid value is queuing_policy_default_r_series.

    Other N9K Platform Queuing Policy: Choose the queuing policy from the drop-down list to be applied to all other switches in the fabric other than the switches mentioned in the above two options. The valid value is queuing_policy_default_other.

    Enable MACsec - Enables MACsec for the fabric. For more information, see MACsec Support in Easy Fabric and eBGP Fabric.

    Freeform CLIs - Fabric level freeform CLIs can be added while creating or editing a fabric. They are applicable to switches across the fabric. You must add the configurations as displayed in the running configuration, without indentation. Switch level freeform configurations such as VLAN, SVI, and interface configurations should only be added on the switch. For more information, refer Enabling Freeform Configurations on Fabric Switches.

    Leaf Freeform Config - Add CLIs that should be added to switches that have the Leaf, Border, and Border Gateway roles.

    Spine Freeform Config - Add CLIs that should be added to switches with a Spine, Border Spine, Border Gateway Spine, and Super Spine roles.

    Intra-fabric Links Additional Config - Add CLIs that should be added to the intra-fabric links.

  8. Click the Resources tab.

    Manual Underlay IP Address AllocationDo not select this check box if you are transitioning your VXLAN fabric management to DCNM.

    • By default, DCNM allocates the underlay IP address resources (for loopbacks, fabric interfaces, etc) dynamically from the defined pools. If you select the check box, the allocation scheme switches to static, and some of the dynamic IP address range fields are disabled.

    • For static allocation, the underlay IP address resources must be populated into the Resource Manager (RM) using REST APIs.

      Refer the Cisco DCNM REST API Reference Guide, Release 11.2(1) for more details. The REST APIs must be invoked after the switches are added to the fabric, and before you use the Save & Deploy option.

    • The Underlay RP Loopback IP Range field stays enabled if BIDIR-PIM function is chosen for multicast replication.

    • Changing from static to dynamic allocation keeps the current IP resource usage intact. Only future IP address allocation requests are taken from dynamic pools.

    Underlay Routing Loopback IP Range - Specifies loopback IP addresses for the protocol peering.

    Underlay VTEP Loopback IP Range - Specifies loopback IP addresses for VTEPs.

    Underlay RP Loopback IP Range - Specifies the anycast or phantom RP IP address range.

    Underlay Subnet IP Range - IP addresses for underlay P2P routing traffic between interfaces.

    Underlay MPLS Loopback IP Range: Specifies the underlay MPLS loopback IP address range.

    For eBGP between Border of Easy A and Easy B, Underlay routing loopback and Underlay MPLS loopback IP range must be a unique range. It should not overlap with IP ranges of the other fabrics, else VPNv4 peering will not come up.

    Layer 2 VXLAN VNI Range and Layer 3 VXLAN VNI Range - Specifies the VXLAN VNI IDs for the fabric.

    Network VLAN Range and VRF VLAN Range - VLAN ranges for the Layer 3 VRF and overlay network.

    Subinterface Dot1q Range - Specifies the subinterface range when L3 sub interfaces are used.

    VRF Lite Deployment - Specify the VRF Lite method for extending inter fabric connections.

    The VRF Lite Subnet IP Range field specifies resources reserved for IP address used for VRF LITE when VRF LITE IFCs are auto-created. If you select Back2BackOnly, ToExternalOnly, or Back2Back&ToExternal then VRF LITE IFCs are auto-created.

    Auto Deploy Both - This check box is applicable for the symmetric VRF Lite deployment. When you select this check box, it would set the auto deploy flag to true for auto-created IFCs to turn on symmetric VRF Lite configuration.

    This check box can be selected or deselected when the VRF Lite Deployment field is not set to Manual. In the case, a user explicitly unchecks the auto-deploy field for any auto-created IFCs, then the user input is always given the priority. This flag only affects the new auto-created IFC and it does not affect the existing IFCs.

    VRF Lite Subnet IP Range and VRF Lite Subnet Mask – These fields are populated with the DCI subnet details. Update the fields as needed.

    The values shown in your screen are automatically generated. If you want to update the IP address ranges, VXLAN Layer 2/Layer 3 network ID ranges or the VRF/Network VLAN ranges, ensure the following:


    Note


    When you update a range of values, ensure that it does not overlap with other ranges. You should only update one range of values at a time. If you want to update more than one range of values, do it in separate instances. For example, if you want to update L2 and L3 ranges, you should do the following.

    1. Update the L2 range and click Save.

    2. Click the Edit Fabric option again, update the L3 range and click Save.


    Service Network VLAN Range - Specifies a VLAN range in the Service Network VLAN Range field. This is a per switch overlay service network VLAN range. The minimum allowed value is 2 and the maximum allowed value is 3967.

    Route Map Sequence Number Range - Specifies the route map sequence number range. The minimum allowed value is 1 and the maximum allowed value is 65534.

  9. Click the Manageability tab.

    The fields in this tab are:

    DNS Server IPs - Specifies the comma separated list of IP addresses (v4/v6) of the DNS servers.

    DNS Server VRFs - Specifies one VRF for all DNS servers or a comma separated list of VRFs, one per DNS server.

    NTP Server IPs - Specifies comma separated list of IP addresses (v4/v6) of the NTP server.

    NTP Server VRFs - Specifies one VRF for all NTP servers or a comma separated list of VRFs, one per NTP server.

    Syslog Server IPs – Specifies the comma separated list of IP addresses (v4/v6) IP address of the syslog servers, if used.

    Syslog Server Severity – Specifies the comma separated list of syslog severity values, one per syslog server. The minimum value is 0 and the maximum value is 7. To specify a higher severity, enter a higher number.

    Syslog Server VRFs – Specifies one VRF for all syslog servers or a comma separated list of VRFs, one per syslog server.

    AAA Freeform Config – Specifies the AAA freeform configurations.

    If AAA configurations are specified in the fabric settings, switch_freeform PTI with source as UNDERLAY_AAA and description as AAA Configurations will be created.

  10. Click the Bootstrap tab.

    Enable Bootstrap - Select this check box to enable the bootstrap feature. Bootstrap allows easy day-0 import and bring-up of new devices into an existing fabric. Bootstrap leverages the NX-OS POAP functionality.

    After you enable bootstrap, you can enable the DHCP server for automatic IP address assignment using one of the following methods:

    • External DHCP Server: Enter information about the external DHCP server in the Switch Mgmt Default Gateway and Switch Mgmt IP Subnet Prefix fields.

    • Local DHCP Server: Enable the Local DHCP Server check box and enter details for the remaining mandatory fields.

    Enable Local DHCP Server - Select this check box to initiate enabling of automatic IP address assignment through the local DHCP server. When you select this check box, the DHCP Scope Start Address and DHCP Scope End Address fields become editable.

    If you do not select this check box, DCNM uses the remote or external DHCP server for automatic IP address assignment.

    DHCP Version – Select DHCPv4 or DHCPv6 from this drop-down list. When you select DHCPv4, the Switch Mgmt IPv6 Subnet Prefix field is disabled. If you select DHCPv6, the Switch Mgmt IP Subnet Prefix is disabled.


    Note


    Cisco Nexus 9000 and 3000 Series Switches support IPv6 POAP only when switches are either Layer-2 adjacent (eth1 or out-of-band subnet must be a /64) or they are L3 adjacent residing in some IPv6 /64 subnet. Subnet prefixes other than /64 are not supported.


    DHCP Scope Start Address and DHCP Scope End Address - Specifies the first and last IP addresses of the IP address range to be used for the switch out of band POAP.

    Switch Mgmt Default Gateway - Specifies the default gateway for the management VRF on the switch.

    Switch Mgmt IP Subnet Prefix - Specifies the prefix for the Mgmt0 interface on the switch. The prefix should be between 8 and 30.

    DHCP scope and management default gateway IP address specification - If you specify the management default gateway IP address 10.0.1.1 and subnet mask 24, ensure that the DHCP scope is within the specified subnet, between 10.0.1.2 and 10.0.1.254.

    Switch Mgmt IPv6 Subnet Prefix - Specifies the IPv6 prefix for the Mgmt0 interface on the switch. The prefix should be between 112 and 126. This field is editable if you enable IPv6 for DHCP.

    Enable AAA Config – Select this check box to include AAA configurations from the Manageability tab as part of the device startup config post bootstrap.

    Bootstrap Freeform Config - (Optional) Enter additional commands as needed. For example, if you require some additional configurations to be pushed to the device and be available post device bootstrap, they can be captured in this field, to save the desired intent. After the devices boot up, they will contain the configuration defined in the Bootstrap Freeform Config field.

    Copy-paste the running-config to a freeform config field with correct indentation, as seen in the running configuration on the NX-OS switches. The freeform config must match the running config. For more information, see Resolving Freeform Config Errors in Switches.

    DHCPv4/DHCPv6 Multi Subnet Scope - Specifies the field to enter one subnet scope per line. This field is editable after you check the Enable Local DHCP Server check box.

    The format of the scope should be defined as:

    DHCP Scope Start Address, DHCP Scope End Address, Switch Management Default Gateway, Switch Management Subnet Prefix

    For example: 10.6.0.2, 10.6.0.9, 10.6.0.1, 24

  11. Click the Configuration Backup tab. The fields on this tab are:

    Hourly Fabric Backup: Select the check box to enable an hourly backup of fabric configurations and the intent.

    The hourly backups are triggered during the first 10 minutes of the hour.

    Scheduled Fabric Backup: Check the check box to enable a daily backup. This backup tracks changes in running configurations on the fabric devices that are not tracked by configuration compliance.

    Scheduled Time: Specify the scheduled backup time in a 24-hour format. This field is enabled if you check the Scheduled Fabric Backup check box.

    Select both the check boxes to enable both back up processes.

    The backup process is initiated after you click Save.

    The scheduled backups are triggered exactly at the time you specify with a delay of up to two minutes. The scheduled backups are triggered regardless of the configuration deployment status.

    The backup configuration files are stored in the following path in DCNM: /usr/local/cisco/dcm/dcnm/data/archive

    The number of archived files that can be retained is set in the # Number of archived files per device to be retained: field in the Server Properties window.


    Note


    To trigger an immediate backup, do the following:

    1. Choose Control > Fabric Builder. The Fabric Builder screen comes up.

    2. Click within the specific fabric box. The fabric topology screen comes up.

    3. From the Actions pane at the left part of the screen, click Re-Sync Fabric.


    You can also initiate the fabric backup in the fabric topology window. Click Backup Now in the Actions pane.

  12. Click ThousandEyes Agent tab. This feature is supported on Cisco DCNM Release 11.5(3) only. For more information, refer to Configuring Global Settings for ThousandEyes Enterprise Agent.

    The fields on this tab are:


    Note


    The fabric settings for ThousandEyes Agent overwrites the global settings and applies the same configuration for all the ThousandEyes Agent installed on switches in that fabric.


    • Enable Fabric Override for ThousandEyes Agent Installation: Select the check box to enable the ThousandEyes Enterprise Agent on the fabric.

    • ThousandEyes Account Group Token: Specifies ThousandEyes Enterprise Agent account group token for installation.

    • VRF on Switch for ThousandEyes Agent Collector Reachability: Specifies the VRF data which provides internet reachability.

    • DNS Domain: Specifies the switch DNS domain configuration.

    • DNS Server IPs: Specifies the comma separated list of IP addresses (v4/v6) of Domain Name System (DNS) server. You can enter a maximum of three IP addresses for the DNS Server.

    • NTP Server IPs: Specifies comma separated list of IP addresses (v4/v6) of Network Time Protocol (NTP) server. You can enter a maximum of three IP addresses for the NTP Server.

    • Enable Proxy for Internet Access: Select the check box to enable the proxy setting for NX-OS switch internet access.

    • Proxy Information: Specifies the proxy server port information.

    • Proxy Bypass: Specifies the server list for which proxy is bypassed.

  13. Click Save after filling and updating relevant information. A note appears briefly at the bottom right part of the screen, indicating that the fabric is created. When a fabric is created, the fabric page comes up. The fabric name appears at the top left part of the screen.

    (At the same time, the newly created fabric instance appears on the Fabric Builder screen. To go to the Fabric Builder screen, click the left arrow () button above the Actions pane [to the left of the screen]).

    The Actions pane allows you to perform various functions. One of them is the Add switches option to add switches to the fabric. After you create a fabric, you should add fabric devices. The options are explained:

    • Tabular View - By default, the switches are displayed in the topology view. Use this option to view switches in the tabular view.

    • Refresh topology - Allows you to refresh the topology.

    • Save Layout – Saves a custom view of the topology. You can create a specific view in the topology and save it for ease of use.

    • Delete saved layout – Deletes the custom view of the topology

    • Topology views - You can choose between Hierarchical, Random and Custom saved layout display options.

      • Hierarchical - Provides an architectural view of your topology. Various Switch Roles can be defined that draws the nodes on how you configure your CLOS topology.

      • Random - Nodes are placed randomly on the window. DCNM tries to make a guess and intelligently place nodes that belong together in close proximity.

      • Custom saved layout - You can drag nodes around to your liking. Once you have the positions as how you like, you can click Save Layout to remember the positions. Next time you come to the topology, DCNM will draw the nodes based on your last saved layout positions.

    • Restore Fabric – Allows you to restore the fabric to a prior DCNM configuration state (one month back, two months back, and so on). For more information, see the Restore Fabric section.

    • Backup Now: You can initiate a fabric backup manually by clicking Backup Now. Enter a name for the tag and click OK. Regardless of the settings you choose under the Configuration Backup tab in the Fabric Settings dialog box, you can initiate a backup using this option.

    • Resync Fabric - Use this option to resynchronize DCNM state when there is a large scale out-of-band change, or if configuration changes do not register in the DCNM properly. The resync operation does a full CC run for the fabric switches and recollects “show run” and “show run all” commands from the switches. When you initiate the re-sync process, a progress message is displayed on the window. During the re-sync, the running configuration is taken from the switches. Then, the Out-of-Sync/In-Sync status for the switch is recalculated based on the intent or expected configuration defined in DCNM versus the current running configuration that was taken from the switches.

    • Add Switches – Allows you to add switch instances to the fabric.

    • Fabric Settings – Allows you to view or edit fabric settings.

    • Cloud icon - Click the Cloud icon to display (or not display) an Undiscovered cloud.

      When you click the icon, the Undiscovered cloud and its links to the selected fabric topology are not displayed.

      Click the Cloud icon again to display the Undiscovered cloud.

SCOPE - You can toggle between fabrics by using the SCOPE drop-down box at the top right. The current fabric is highlighted. An MSD and its member fabrics are distinctly displayed, wherein the member fabrics are indented, under the MSD fabric.

Adding Switches to a Fabric

Switches in each fabric are unique, and hence, each switch can only be added to one fabric.

Click the Add Switches option from the Actions panel to add switches to the fabric created in DCNM. The Inventory Management screen comes up. The screen contains two tabs, one for discovering existing switches and the other for discovering new switches. Both options are explained.

Additionally, you can pre-provision switches and interfaces. For more information, see Pre-provisioning a Device and Pre-provisioning an Ethernet Interface.


Note


When DCNM discovers a switch with the hostname containing the period character (.), it is treated as a domain-name and truncated. Only the text prior to the period character (.) is considered as a hostname. For example:

  • If hostname is leaf.it.vxlan.bgp.org1-XYZ, DCNM shows only leaf

  • If hostname is leaf-itvxlan.bgp.org1-XYZ, DCNM shows only leafit-vxlan


Discovering Existing Switches
  1. After clicking on Add Switches, use the Discover Existing Switches tab to add one or more existing switches into the fabric. In this case, a switch with known credentials and a pre-provisioned IP address, is added to the fabric. The IP address (Seed IP), administrator username, and password (Username and Password fields) of the switch are provided as the input by a user. The Preserve Config knob is set to yes by default. This is the option that a user would select for a brownfield import of a device into the fabric. For a greenfield import where the device configuration will be cleaned up as part of the import process, the user should set the Preserve Config knob to no.


    Note


    Easy_Fabric_eBGP does not support brownfield import of a device into the fabric.


  2. Click Start discovery. The Scan Details window comes up shortly. Since the Max Hops field was populated with 2 (by default), the switch with the specified IP address (leaf-91) and switches two hops from that switch, are populated in the Scan Detailsresult.

  3. If the DCNM was able to perform a successful shallow discovery to a switch, the status will show up as Manageable. Select the check box next to the appropriate switch(es) and click Import into fabric.

    Though this example describes the discovery of one switch, multiple switches can be discovered at once.

    The switch discovery process is initiated. The Progress column displays progress for all the selected switches. It displays done for each switch on completion.


    Note


    You must not close the screen (and try to add switches again) until all selected switches are imported or an error message comes up.

    If an error message comes up, close the screen. The fabric topology screen comes up. The error messages are displayed at the top right part of the screen. Resolve the errors wherever applicable and initiate the import process again by clicking Add Switches in the Actions panel.


    DCNM discovers all the switches, and the Progress column displays done for all switches, close the screen. The Standalone fabric topology screen comes up again. The switch icons of the added switches are displayed in it.


    Note


    You will encounter the following errors during switch discovery sometimes.
  4. Click Refresh topology to view the latest topology view.

    When all switches are added and roles assigned to them, the fabric topology contains the switches and connections between them.

  5. After discovering the devices, assign an appropriate role to each device. For this purpose, tight click the device, and use the Set role option to set the appropriate role. Alternatively, the tabular view may be employed to assign the same role to multiple devices at one go.

    If you choose the Hierarchical layout for display (in the Actions panel), the topology automatically gets aligned as per role assignment, with the leaf devices at the bottom, the spine devices connected on top of them, and the border devices at the top.

    Assign vPC switch role - To designate a pair of switches as a vPC switch pair, right-click the switch and choose the vPC peer switch from the list of switches.

    AAA server password - During fabric creation, if you have entered AAA server information (in the Manageability tab), you must update the AAA server password on each switch. Else, switch discovery fails.

    When a new vPC pair is created and deployed successfully using Cisco DCNM, one of the peers might be out-of-sync for the no ip redirects CLI even if the command exists on the switch. This out-of-sync is due to a delay on the switch to display the CLI in the running configuration, which causes a diff in the configuration compliance. Re-sync the switches in the Config Deployment window to resolve the diff.

  6. Click Save & Deploy at the top right part of the screen.

    The template and interface configurations form the underlay network configuration on the switches. Also, freeform CLIs that were entered as part of fabric settings (leaf and spine switch freeform configurations entered in the Advanced tab) are deployed. For more details on freeform configurations, refer Enabling Freeform Configurations on Fabric Switches.

    Configuration Compliance: If the provisioned configurations and switch configurations do not match, the Status column displays out-of-sync. For example, if you enable a function on the switch manually through a CLI, then it results in a configuration mismatch.

    To ensure configurations provisioned from DCNM to the fabric are accurate or to detect any deviations (such as out-of-band changes), DCNM’s Configuration Compliance engine reports and provides necessary remediation configurations.

    When you click Save & Deploy, the Config Deployment window appears.

    If the status is out-of-sync, it suggests that there is inconsistency between the DCNM and configuration on the device.

    The Re-sync button is displayed for each switch in the Re-sync column. Use this option to resynchronize DCNM state when there is a large scale out-of-band change, or if configuration changes do not register in the DCNM properly. The re-sync operation does a full CC run for the switch and recollects “show run” and “show run all” commands from the switch. When you initiate the re-sync process, a progress message is displayed on the screen. During the re-sync, the running configuration is taken from the switch. The Out-of-Sync/In-Sync status for the switch is recalculated based on the intent defined in DCNM.

    Click the Preview Config column entry (updated with a specific number of lines). The Config Preview screen comes up.

    The PendingConfig tab displays the pending configurations for successful deployment.

    The Side-by-sideComparison tab displays the current configurations and expected configurations together.

    In DCNM 11, multi-line banner motd configuration is supported. Multi-line banner motd configuration can be configured in DCNM with freeform configuration policy, either per switch using switch_freeform, or per fabric using leaf/spine freeform configuration. Note that after the multi-line banner motd is configured, deploy the policy by executing the Save & Deploy option in the (top right part of the) fabric topology screen. Else, the policy may not be deployed properly on the switch. The banner policy is only to configure single-line banner configuration. Also, you can only create one banner related freeform configuration/policy. Multiple policies for configuring banner motd are not supported.

  7. Close the screen.

    In the Configuration Deployment screen, click Deploy Config at the bottom part of the screen to initiate pending configuration onto the switch. The Status column displays FAILED or SUCCESS state. For a FAILED status, investigate the reason for failure to address the issue.

    After successful configuration provisioning (when all switches display a progress of 100%), close the screen.

    The fabric topology is displayed. The switch icons turn green to indicate successful configuration.

    If a switch icon is in red color, it indicates that the switch and DCNM configurations are not in sync.When deployment is pending on a switch, the switch is displayed in blue color. The pending state indicates that there is a pending deployment or pending recomputation. You can click on the switch and review the pending deployments using Preview or Deploy Config options, or click Save & Deploy to recompute the state of the switch.


    Note


    If there are any warning or errors in the CLI execution, a notification will appear in the Fabric builder window. Warnings or errors that are auto-resolvable have the Resolve option.


When a leaf switch boots up after a switch reload or RMA operation, DCNM provisions configurations for the switch and FEX devices connected to it. Occasionally, FEX connectivity comes up after DCNM provisions FEX (host interface) configurations, resulting in a configuration mismatch. To resolve the mismatch, click Save & Deploy again in the fabric topology screen.

From Cisco NX-OS Release 11.4(1), if you uncheck the FEX check box in the Topology window, FEX devices are hidden in the Fabric Builder topology window as well. To view FEX in Fabric Builder, you need to check this check box. This option is applicable for all fabrics and it is saved per session or until you log out of DCNM. If you log out and log in to DCNM, the FEX option is reset to default, that is, enabled by default. For more information, see Show Panel.

An example of the Deploy Config option usage is for switch-level freeform configurations. Refer Enabling Freeform Configurations on Fabric Switches for details.

Discovering New Switches
  1. When a new Cisco NX-OS device is powered on, typically that device has no startup configuration or any configuration state for that matter. Consequently, it powers on with NX-OS and post initialization, goes into a POAP loop. The device starts sending out DHCP requests on all the interfaces that are up including the mgmt0 interface.

  2. As long as there is IP reachability between the device and the DCNM, the DHCP request from the device, will be forwarded to the DCNM. For easy day-0 device bring-up, the bootstrap options should be enabled in the Fabric Settings as mentioned earlier.

  3. With bootstrap enabled for the fabric, the DHCP request coming from the device will be serviced by the DCNM. The temporary IP address allocated to the device by the DCNM will be employed to learn basic information about the switch including the device model, device NX-OS version, etc.

  4. In the DCNM GUI, go to a fabric (Click Control > Fabric Builder and click a fabric). The fabric topology is displayed.

    Go to the fabric topology window and click the Add switches option from the Actions panel. The Inventory Management window comes up.

  5. Click the POAP tab.

    As mentioned earlier, DCNM retrieves the serial number, model number, and version from the device and displays them on the Inventory Management along window. Also, an option to add the IP address, hostname, and password are made available. If the switch information is not retrieved, refresh the window.


    Note


    • At the top left part of the window, export and import options are provided to export and import the .csv file that contains the switch information. You can pre-provision devices using the import option as well.


    Select the checkbox next to the switch and enter the switch credentials: IP address and host name.

    Based on the IP address of your device, you can either add the IPv4 or IPv6 address in the IP Address field.

    Beginning with Release 11.2(1), you can provision devices in advance. To pre-provision devices, refer to Pre-provisioning a Device.

  6. In the Admin Password and Confirm Admin Password fields, enter and confirm the admin password.

    This admin password is applicable for all the switches displayed in the POAP window.


    Note


    If you do not want to use admin credentials to discover switches, you can instead use the AAA authentication, that is, RADIUS or TACACS credentials for discovery only.


  7. (Optional) Use discovery credentials for discovering switches.

    1. Click the Add Discovery Credentials icon to enter the discovery credentials for switches.

    2. In the Discovery Credentials window, enter the discovery credentials such as discovery username and password.

      Click OK to save the discovery credentials.

      If the discovery credentials are not provided, DCNM uses the admin user and password to discover switches.

  8. Click Bootstrap at the top right part of the screen.

    DCNM provisions the management IP address and other credentials to the switch. In this simplified POAP process, all ports are opened up.

  9. Click Refresh Topology to get updated information. The added switch goes through the POAP cycle. Monitor and check the switch for POAP completion.

  10. After the added switch completes POAP, the fabric builder topology page is refreshed with the added switch thereby depicting its discovered physical connections. Set the appropriate role for the switch followed by a Save & Deploy operation at the fabric level. The Fabric Settings, switch role, the topology etc. are evaluated by the Fabric Builder and the appropriate intended configuration for the switch is generated as part of the Save operation. The pending configuration will provide a list of the configurations that need to be deployed to the new switch in order to bring it IN-SYNC with the intent.


    Note


    For any changes on the fabric that results in the Out-of-Sync, then you must deploy the changes. The process is the same as explained in the Discovering Existing Switches section.

    During fabric creation, if you have entered AAA server information (in the Manageability tab), you must update the AAA server password on each switch. Else, switch discovery fails.


  11. After the pending configurations are deployed, the Progress column displays 100% for all switches.

  12. Click Close to return to the fabric builder topology.

  13. Click Refresh Topology to view the update. All switches must be in green color indicating that they are functional.

  14. The switch and the link are discovered in DCNM. Configurations are built based on various policies (such as fabric, topology, and switch generated policies). The switch image (and other required) configurations are enabled on the switch.

  15. In the DCNM GUI, the discovered switches can be seen in the Standalone fabric topology. Up to this step, the POAP is completed with basic settings. You must setup interfaces through the Control > Interfaces option for any additional configurations, but not limited to the following:

    • vPC pairing.

    • Breakout interfaces.

    • Port channels, and adding members to ports.

    When you enable or disable a vPC pairing/un-pairing or the advertise-pip option, or update Multi-Site configuration, you should use the Save & Deploy operation. At the end of the operation, an error prompts you to configure the shutdown or no shutdown command on the nve interface. A sample error screenshot when you enable a vPC setup:

    To resolve, go to the Control > Interfaces screen and deploy the Shutdown operation on the nve interface followed by a No Shutdown configuration. This is depicted in the figure below where the up arrow corresponds to a No Shutdown operation while a down arrow corresponds to a Shutdown operation.

You can right-click the switch to view various options:

  • Set Role - Assign a role to the switch (Spine, Border Gateway, and so on).


    Note


    • Changing of the switch role is allowed only before executing Save & Deploy.

    • Starting from DCNM 11.1(1), switch roles can be changed if there are no overlays on the switches, but only as per the list of allowed switch role changes given at Switch Operations.


  • Modes - Maintenance and Active/Operational modes.

  • vPC Pairing - Select a switch for vPC and then select its peer.

    You can create a virtual link for a vPC pair or change the existing physical link to a virtual link for a vPC pair.

  • Manage Interfaces - Deploy configurations on the switch interfaces.

  • View/Edit Policies - See switch policies and edit them as required.

  • History - View per switch deployment and policy change history.

    The Policy Change History tab lists the history of policies along with the users who made the changes like add, update, or delete.

    Under the Policy Change History tab, for a policy, click Detailed History under the Generated Config column to view the generated config before and after.

    The following table provides the summary of generated config before and after for Policy Template Instances (PTIs).

    PTI Operations Generated Config Before Generated Config After
    Add Empty Contains the config
    Update Contains config before changes Contains config after changes
    Mark-Delete Contains the config to be removed. Contains the config to be removed with colour change.
    Delete Contains the config Empty

    Note


    When a policy or profile template is applied, an instance is created for each application of the template, which is known as Policy Template Instance or PTI.


  • Preview Config - View the pending configuration and the side-by-side comparison of the running and expected configuration.

  • Deploy Config - Deploy per switch configurations.

  • Discovery - You can use this option to update the credentials of the switch, reload the switch, rediscover the switch, and remove the switch from the fabric.

The new fabric is created, the fabric switches are discovered in DCNM, the underlay configuration provisioned on those switches, and the configurations between DCNM and the switches are synced. The remaining tasks are:

Pre-provisioning Support in DCNM 11

Cisco DCNM supports provisioning of device configuration in advance. This is specifically applicable for scenarios where devices have been procured, but not yet delivered or received by the Customers. The purchase order typically has information about the device serial number, device model and so on, which in turn can be used to prepare the device configuration in DCNM prior to the device connectivity to the Network. Pre-provisioning is supported for Cisco NX-OS devices in both Easy Fabric and External/Classic_LAN fabrics.

Pre-provisioning a Device
From Cisco DCNM Release 11.2, you can provision devices in advance.

Note


Ensure that you enter DHCP details in the Bootstrap tab in the fabric settings.
  • The pre-provisioned devices support the following configurations in DCNM:

    • Base management

    • vPC Pairing

    • Intra-Fabric links

    • Ethernet ports

    • Port-channel

    • vPC

    • ST FEX

    • AA FEX

    • Loopback

    • Overlay network configurations

  • The pre-provisioned devices do not support the following configurations in DCNM:

    • Inter-Fabric links

    • Sub-interface

    • Interface breakout configuration

  • When a device is being pre-provisioned has breakout links, you need to specify the corresponding breakout command along with the switch's model and gateway in the Data field in the Add a new device to pre-provisioning window in order to generate the breakout PTI.

    Note the following guidelines:

    • Multiple breakout commands can be separated by a semicolon (;).

    • The definitions of the fields in the data JSON object are as follows:

      • modulesModel: (Mandatory) Specifies the switch module’s model information.

      • gateway: (Mandatory) Specifies the default gateway for the management VRF on the switch. This field is required to create the intent to pre-provision devices. You must enter the gateway even if it is in the same subnet as DCNM to create the intent as part of pre-provisioning a device.

      • breakout: (Optional) Specifies the breakout command provided in the switch.

      • portMode: (Optional) Specifies the port mode of the breakout interface.

    The examples of the values in the Data field are as follows:

    • {"modulesModel": ["N9K-C93180LC-EX"], "gateway": "10.1.1.1/24"}

    • {"modulesModel": ["N9K-C93180LC-EX"],"breakout": "interface breakout module 1 port 1 map 10g-4x", "portMode": "hardware profile portmode 4x100G+28x40G", "gateway": "172.22.31.1/24" }

    • {"modulesModel": ["N9K-X9736C-EX", "N9K-X9732C-FX", "N9K-C9516-FM-E2", "N9K-C9516-FM-E2", "N9K-C9516-FM-E2", "N9K-C9516-FM-E2", "N9K-SUP-B+", "N9K-SC-A", "N9K-SC-A"], "gateway": "172.22.31.1/24"}

    • {"breakout":"interface breakout module 1 port 50 map 10g-4x" , "gateway": "172.16.1.1/24", "modulesModel": ["N9K-C93180YC-EX "]}

    • {"modulesModel": ["N9K-X9732C-EX", "N9K-X9732C-EX", "N9K-C9504-FM-E", "N9K-C9504-FM-E", "N9K-SUP-B", "N9K-SC-A", "N9K-SC-A"], "gateway": "172.29.171.1/24", "breakout":"interface breakout module 1 port 1,11,19 map 10g-4x; interface breakout module 1 port 7 map 25g-4x"}

    • {"modulesModel": ["N9K-C93180LC-EX"], "gateway": "10.1.1.1/24","breakout": "interface breakout module 1 port 1-4 map 10g-4x","portMode": "hardware profile portmode 48x25G + 2x100G + 4x40G"}

Procedure

Step 1

Click Control > Fabric Builder.

The Fabric Builder screen is displayed.

Step 2

Click within the fabric box.

Step 3

From the Actions panel, click the Add switches option.

The Inventory Management screen is displayed.

Step 4

Click the POAP tab.

Step 5

In the POAP tab, do the following:

  1. Click + from the top left part of the screen.

    The Add a new device screen comes up.

  2. Fill up the device details as shown in the screenshot.

  3. Click Save.

IP Address: Specify the IPv4 or IPv6 address of the new device.

Serial Number: The serial number for the new device. Serial number is found in the Cisco Build of Material Purchase and you can refer to these values while using the pre-provisioning feature.

For information about the Data field, see the examples provided in guidelines.

The device details appear in the POAP screen. You can add more devices for pre-provisioning.

At the top left part of the window, Export and Import icons are provided to export and import the .csv file that contains the switch information.

Using the Import option, you can pre-provision multiple devices.

Add new devices’ information in the .csv file with all the mandatory fields (SerialNumber, Model, version, IpAddress, Hostname, and Data fields [JSON Object]).

The Data column consists of the model name of the module to identify the hardware type from the fabric template. A .csv file screenshot:

Step 6

Enter the administration password in the Admin Password and Confirm Admin Password fields.

Step 7

Select the device(s) and click Bootstrap at the top right part of the screen.

The leaf1 device appears in the fabric topology.

From the Actions panel, click Tabular View. You cannot deploy the fabric till the status of all the pre-provisioned switch(es) are displayed as ok under the Discovery Status column.

Note

 

When a switch is in Unreachable discovery status, the last available information of the switch is retained in other columns.

When you connect leaf1 to the fabric, the switch is provisioned with the IP address 10.1.1.1.

Step 8

Navigate to Fabric Builder and set roles for the device.

Create intra-link policy using one of the templates:

  • int_pre_provision_intra_fabric_link to automatically generate intra fabric interface configuration with DCNM allocated IP addresses

  • int_intra_fabric_unnum_link_11_1 if you are using unnumbered links

  • int_intra_fabric_num_link_11_1 if you want to manually assign IP addresses to intra-links

Click Save & Deploy.

Configuration for the switches are captured in corresponding PTIs and can be seen in the View/Edit Policies window.

Step 9

To bring in the physical device, you can follow the manual RMA or POAP RMA procedure.

For more information, see Return Material Authorization (RMA).

If you use the POAP RMA procedure, ignore the error message of failing to put the device into maintenance mode due to no connectivity since it is expected to have no connectivity to a non-existing device.

You need to click Save & Deploy in the fabric after one or more switches are online to provision the host ports. This action must be performed before overlays are provisioned for the host port attachment.


Pre-provisioning an Ethernet Interface

From DCNM Release 11.4(1), you can pre-provision Ethernet interfaces in the Interface window. This pre-provisioning feature is supported in the Easy, External, and eBGP fabrics. You can add Ethernet interfaces to only pre-provisioned devices before they are discovered in DCNM.


Note


Before attaching a network/VRF, you must pre-provision the Ethernet interface before adding it to Port-channels, vPCs, ST FEX, AA FEX, loopback, subinterface, tunnel, ethernet, and SVI configurations.


Before you begin
Make sure that you have a preprovisioned device in your fabric. For information, see Pre-provisioning a Device.
Procedure

Step 1

Navigate to the fabric containing the pre-provisioned device from the Fabric Builder window.

Step 2

Right click the pre-provisioned device and select Manage Interfaces.

You can also navigate to the Interfaces window by selecting Control > Fabrics > Interfaces. From the Scope drop-down list, select the fabric containing the pre-provisioned device.

Step 3

Click Add.

Step 4

Enter all the required details in the Add Interface window.

Type: Select Ethernet from this drop-down list.

Select a device: Select the pre-provisioned device.

Note

 

You cannot add an Ethernet interface to an already managed device in DCNM.

Enter Interface Name: Enter a valid interface name based on the module type. For example, Ethernet1/1, eth1/1, or e1/1. The interface with same name should be available on the device after it is added.

Policy: Select a policy that should be applied on the interface.

For more information, see Adding Interfaces.

Step 5

Click Save.

Step 6

Click Preview to check the expected configuration that will be deployed to the switch after it is added.

Note

 

The Deploy button is disabled for Ethernet interfaces since the devices are pre-provisioned.


Pre-provisioning a vPC Pair
Before you begin

Ensure that you have enabled Bootstrap in the Fabric Settings.

Procedure

Step 1

Import both the devices into the fabric.

For instructions, see Pre-provisioning a Device.

The following example in the image shows two Cisco Nexus 9000 Series devices that are pre-provisioned and added to an existing Fabric. Choose Add Switches in the Action panel. On the Inventory Management screen, click PowerOn Auto Provisioning (POAP).

The devices will show up in the fabric as gray/undiscovered devices.

Step 2

Right click and select appropriate roles for these devices similar to other reachable devices.

Step 3

To create vPC pairing between the devices with physical peer-link or MCT, perform the following steps:

  1. Provision the physical Ethernet interfaces that form the peer-link.

    The vPC peer-link between leaf1-leaf2 comprises of interfaces Ethernet1/44-45 on each device. Choose Control > Fabrics > Interfaces to pre-provision ethernet interfaces.

    For instructions, see Preprovisioning an Ethernet Interface.

  2. Create a pre-provisioned link between these interfaces.

    In the Fabric Builder view, right click and select Add link or click Add(+) icon in the Links tab in the Fabric Builder Tabular view.

    Create two links, one for leaf1-Ethernet1/44 to leaf2-Ethernet1/44 and another one for leaf1-Ethernet1/45 to leaf2-Ethernet1/45.

    Ensure that you choose int_pre_provision_intra_fabric_link as link template. The Source Interface and Destination Interface field names, must match with the Ethernet interfaces pre-provisioned in the previous step.

    An example of pre-provisioned link creation is as depicted in the following image.

    After the links are created, they are listed in the Links tab under Fabric builder as shown in the following image.

  3. On Fabric topology, right click on a switch and choose vPC Pairing from the drop-down list.

    Select the vPC pair and click vPC pairing for the pre-provisionsed devices.

  4. Click Save & Deploy to generate the required intended vPC pairing configuration for the pre-provisioned devices.

    After completion, the devices will be correctly paired and the vPC pairing intent will be generated for the devices. The policies are generated as shown in the following image:

    Note

     

    Because the devices are not yet operational, Config Compliance will not return any IN-SYNC or OUT-OF-SYNC status for these devices.

    This is expected as CC requires the running configuration from the devices in order to compare that with the intent and calculate and report the compliance status.


Pre-provisioning a vPC Host Interface
Procedure

Step 1

Create physical ethernet interfaces on the pre-provisioned devices. Add a vPC host interface similar to a regular vPC pair or switches.

For instructions, see Pre-provisioning an Ethernet Interface.

For example, leaf1-leaf2 represents the pre-provisioned vPC device pair, assuming that Ethernet interfaces 1/1 is already pre-provisioned on both devices leaf1 and leaf2.

Step 2

Create a vPC host truck interface as shown in the following image.

Preview and Deploy actions doesn't yield any result, because both require the device to be present. The vPC host interface is created and displays status as Not discovered as shown in the following image.


Attaching Overlays to Pre-provisioned Devices

Overlay VRFs and Networks can be attached to pre-provisioned devices similar to any other discovered device.

The following example shows where an overlay network is attached to the pre-provisioned vPC pair of leafs (leaf1-leaf2). It is also attached to the pre-provisioned vPC host interface port-channels created on leaf1-leaf2.

Preview and Deploy operations are disabled for the pre-provisioned devices, because the devices are not reachable. After the pre-provisioned device is reachable, all operations are enabled similar to other discovered devices.

On Fabric Builder > View/Edit Policies, you can view the entire intent generated for the pre-provisioned device, including the overlay network/VRF attachment information as shown in the following image.

Precision Time Protocol for Easy Fabric

In the fabric settings for the Easy_Fabric_11_1 template, select the Enable Precision Time Protocol (PTP) check box to enable PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Loopback Id and PTP Domain Id fields are editable.

The PTP feature works only when all the devices in a fabric are cloud-scale devices. Warnings are displayed if there are non-cloud scale devices in the fabric, and PTP is not enabled. Examples of the cloud-scale devices are Cisco Nexus 93180YC-EX, Cisco Nexus 93180YC-FX, Cisco Nexus 93240YC-FX2, and Cisco Nexus 93360YC-FX2 switches.

For more information, see the Configuring PTP chapter in Cisco Nexus 9000 Series NX-OS System Management Configuration Guide and Cisco Network Insights for Resources Application for Cisco DCNM User Guide.

For LAN fabric deployments, specifically in a VXLAN EVPN based fabric deployments, you have to enable PTP globally, and also enable PTP on core-facing interfaces. The interfaces could be configured to the external PTP server like a VM or Linux-based machine. Therefore, the interface should be edited to have a connection with the grandmaster clock.

It is recommended that the grandmaster clock should be configured outside of Easy Fabric and it is IP reachable. The interfaces toward the grandmaster clock need to be enabled with PTP via the interface freeform config.

All core-facing interfaces are auto-enabled with the PTP configuration after you click Save & Deploy. This action ensures that all devices are PTP synced to the grandmaster clock. Additionally, for any interfaces that are not core-facing, such as interfaces on the border devices and leafs that are connected to hosts, firewalls, service-nodes, or other routers, the ttag related CLI must be added. The ttag is added for all traffic entering the VXLAN EVPN fabric and the ttag must be stripped when traffic is exiting this fabric.

Here is the sample PTP configuration:

feature ptp
 
ptp source 100.100.100.10 -> IP address of the loopback interface (loopback0) that is already created or user created loopback interface in the fabric settings

ptp domain 1 -> PTP domain ID specified in fabric settings

interface Ethernet1/59 -> Core facing interface
  ptp
 
interface Ethernet1/50 -> Host facing interface
  ttag
  ttag-strip

The following guidelines are applicable for PTP:

  • The PTP feature can be enabled in a fabric when all the switches in the fabric have Cisco NX-OS Release 7.0(3)I7(1) or a higher version. Otherwise, the following error message is displayed:

    PTP feature can be enabled in the fabric, when all the switches have NX-OS Release 7.0(3)I7(1) or higher version. Please upgrade switches to NX-OS Release 7.0(3)I7(1) or higher version to enable PTP in this fabric.

  • For hardware telemetry support in NIR, the PTP configuration is a prerequisite.

  • If you are adding a non-cloud scale device to an existing fabric which contains PTP configuration, the following warning is displayed:

    TTAG is enabled fabric wide, when all devices are cloud scale switches so it cannot be enabled for newly added non cloud scale device(s).

  • If a fabric contains both cloud scale and non-cloud scale devices, the following warning is displayed when you try to enable PTP:

    TTAG is enabled fabric wide, when all devices are cloud scale switches and is not enabled due to non cloud scale device(s).

Support for Super Spine Role in DCNM

Super Spine is a device that is used for interconnecting multiple spine-leaf PODs. Prior to the DCNM Release 11.3(1), it was possible to interconnect multiple VXLAN EVPN Easy fabrics via super spines. However, these super spines had to be part of an external fabric. Within each Easy Fabric, an appropriate IGP is used for underlay connectivity. eBGP between the super spine layer in the external fabric and spine layer in the Easy Fabrics would be the recommended way of interconnecting multiple VXLAN EVPN Easy Fabrics. The eBGP peering can be configured via inter-fabric links or an appropriate mix of interface and eBGP configuration on the respective switches.

From DCNM Release 11.3(1), you have an extra interconnectivity option with super spines. You can have multiple spine-leaf PODs within the same Easy Fabric that are interconnected via super spines such that the same IGP domain extends across all the PODs, including the super spines. Within such a deployment, the BGP RRs and RPs (if applicable) are provisioned on the super spine layer. The spine layer becomes a pseudo interconnect between the leafs and super spines. VTEPs may be optionally hosted on the super spines if they have the border functionality.

The following Super Spine roles are supported in DCNM:

  • Super Spine

  • Border Super Spine

  • Border Gateway Super Spine

A border super spine handles multiple functionalities including the functionalities of a super spine, RR, RP (optionally), and a border leaf. Similarly, a border gateway super spine serves a super spine, RR, RP (optional), and a border gateway. It’s not recommended to overload border functionality on the super spine or RR layer. Instead, attach border leafs or border gateways to the super spine layer for external connectivity. The super spine layer serves as the interconnect with the RR or RP functionality.

The following are the characteristics of super spine switch roles in DCNM:

  • Supported only for the Easy_Fabric_11_1 template.

  • Can only connect to spines and borders. The valid connections are:

    • Spines to super spines

    • Spines to border super spines and border gateway super spines

    • Super spines to border leafs and border gateway leafs

  • RR or RP (if applicable) functionality is always be configured on super spines if they are present in a fabric. The maximum number of 4 RRs and RPs are supported even with Super Spines.

  • Border Super Spine and Border Gateway Super Spine roles are supported for inter-fabric connections.

  • vPC configurations aren’t supported on super spines.

  • Super spines don’t support IPv6 underlay configuration.

  • During the Brownfield import of switches, if a switch has the super spine role, the following error is displayed:

    Serial number: [super spine/border super spine/border gateway superspine] Role isn’t supported with preserved configuration yes option.

Supported Topologies for Super Spine Switches

DCNM supports the following topologies with super spine switches.

Topology 1: Super Spine Switches in a Spine Leaf Topology

In this topology, leaf switches are connected to spines, and spines are then connected to Super Spines switches which can be super spines, border super spines, border gateway super spines.

Topology 2: Super Spine Switches Connected to Border

In this topology, there are four leaf switches connecting to the Spine switches, which are connected to the two Super Spine switches. These Super Spine switches are connected to the border or border gateway leaf switches.

Adding a Super Spine Switch to an Existing VXLAN BGP EVPN Fabric
Procedure

Step 1

Navigate to Control > Fabric Builder.

Step 2

From the Fabric Builder window, click Add Switches in the actions panel.

For more information, see Adding Switches to a Fabric.

Step 3

Right-click an existing switch or the newly added switch, and use the Set role option to set the appropriate super spine role.

Note

 
  • If the Super Spine role is present in the fabric, then the other super spine roles that you can assign for any new device are border super spine and border gateway super spine.

  • If Super Spine or any of its variation role is not present in the fabric, you may assign the role to any new device provided that the same is connected to a non-border spine in the fabric. After a Save & Deploy, you will receive an error that can be resolved by clicking on the Resolve button as shown in the below steps.

Step 4

Click Save & Deploy.

An error is displayed saying:

Adding new switch with Super Spine role is not allowed, if save&deploy has already been performed in the fabric without any super spine role switch.

Step 5

Click the error, and click the Resolve button.

A confirmation dialog box is displayed asking whether you want to continue. If you click Yes, the following actions are performed by DCNM:

  • Invalid connections are converted to hosts ports.

  • Removes existing BGP neighborship between spines to leafs.

  • Removes RRs or RPs from all the spine switches.

You should not add a device(s) with super spine, border super spine, or border gateway super spine role if the same will be connected to a border spine or border gateway spine that is already present in the fabric. This action will result in the below error after clicking Save & Deploy. If you want to use the existing device(s) with border spine roles, you need to remove the same and add them again with the appropriate role (spine or super spine and its variants) and valid connections.


Changing the TCAM Configuration on a Device

If you are onboarding the Cisco Nexus 9300 Series switches and Cisco Nexus 9500 Series switches with X9500 line cards using the bootstrap feature with POAP, DCNM pushes the following policies depending on the switch models:

  • Cisco Nexus 9300 Series Switches: tcam_pre_config_9300 and tcam_pre_config_vxlan

  • Cisco Nexus 9500 Series Switches: tcam_pre_config_9500 and tcam_pre_config_vxlan

Perform the following steps to change the TCAM carving of a device in DCNM.

  1. Choose Control > Fabrics > Fabric Builder.

  2. Click the fabric containing the specified switches that have been onboarded using the bootstrap feature.

  3. Click Tabular View under the Actions menu in the Fabric Builder window.

  4. Select all the specified switches and click the View/Edit Policies icon.

  5. Search for tcam_pre_config policies.

  6. If the TCAM config is incorrect or not applicable, select all these policies and click the Delete icon to delete policies.

  7. Add one or multiple tcam_config policies and provide the correct TCAM configuration. For more information about how to add a policy, see Adding PTIs for Multiple Switches.

  8. Reload the respective switches.

If the switch is used as a leaf, border leaf, border gateway leaf, border spine, or border gateway spine, add the tcam_config policy with the following command and deploy.


hardware access-list tcam region racl 1024

This config is required on the switches so that the NGOAM and VXLAN Suppress ARP features are functional.

Make sure that the priority of this tcam_config policy is higher than the tcam_pre_config_vxlan policy so that the config policy with racl 1024 is configured before the tcam_pre_config_vxlan policy.


Note


The tcam_pre_config_vxlan policy contains the config: hardware access-list tcam region arp-ether 256 double-wide.


Preselecting Switches as Route-Reflectors and Rendezvous-Points

This task shows how to preselect switches as Route-Reflectors (RRs) and Rendezvous-Points (RPs) before the first Save & Deploy operation.


Note


This scenario is applicable when you have more than 2 spines and you want to control the preselection of RRs and RPs before the first Save & Deploy operation.


Procedure

Step 1

Import switches successfully.

Step 2

Create the rr_state or rp_state policies using View/Edit Policies on the spines or super spine switches, which should be preselected as RR or RP.

Note

 
  • If there are more than 2 spines and the maximum number of RRs or RPs in the fabric settings is set to 2, then it’s recommended to distribute RR and RP on different spines.

  • If there are more than 4 spines and the maximum number of RRs or RPs in the fabric settings is set to 4, then it’s recommended to distribute RR and RP on different spines.

Step 3

Click Save & Deploy, and then click Deploy Config.

The spines that have rr_state policies become RR and spines that have rp_state policies become RP.

Step 4

After Save & Deploy, if you want to replace the preselected RRs and RPs with a new set of devices, then old RR and RP devices should be removed from the fabric before performing the same steps.


Adding a vPC L3 Peer Keep-Alive Link

This procedure shows how to add a vPC L3 peer keep-alive link.


Note


  • vPC L3 Peer Keep-Alive link is not supported with fabric vPC peering.

  • In Brownfield migration, You need to manually create a vPC pairing when the L3 keep alive is configured on the switches. Otherwise, the vPC configuration is automatically picked up from the switches.


Procedure

Step 1

From DCNM, navigate to Control > Template Library.

Step 2

Search for the vpc_serial_simulated policy, select it, and click the Edit icon.

Step 3

Edit the template properties and set the Template Sub Type to Device so that this policy appears in View/Edit Policies.

Step 4

Navigate to the Fabric Builder window and click on the fabric containing the vPC pair switches.

Step 5

Click Tabular View and select the vPC pair switches, and then click View/Edit Policies.

You can also right-click the switches individually in the topology and select View/Edit Policies.

Step 6

Click + to add policies.

Step 7

From the Policy drop-down list, select vpc_serial_simulated policy and add priority. Click Save.

Note that if both switches are selected, then this policy will be created on both vPC pair switches.

Step 8

Navigate back to Tabular View and click the Links tab.

Step 9

Select the link between vPC pair, which has to be a vPC peer keep alive and click Edit.

Step 10

From the Link Template drop-down list, select int_intra_vpc_peer_keep_alive_link_11_1.

Enter values for the remaining fields. Make sure to leave the field empty for the default VRF and click Save.

Step 11

Click Save & Deploy, and click Preview Config for one of the switches.

If VRF is non-default, use switch_freeform to create the respective VRF.

Navigate to the topology and click the vPC pair switch to see the details.


Changing the Local Authentication to AAA Authentication for Switches in a Fabric

Procedure

Step 1

Log in to DCNM and navigate to Control > Fabric Builder.

Step 2

Click the Edit icon for a fabric and add the AAA authentication commands in the AAA Freeform Config field under the Manageability tab.

Step 3

In the Fabric Builder topology window, click Add Switches. Use the AAA credentials in this window to add switches into the DCNM.

Step 4

If you are importing switches in to the fabric via POAP, you need to have the AAA configs on the switch.

Navigate to the fabric settings and add the relevant commands in Bootstrap Freeform Config.

Step 5

In the Fabric Builder topology window, click Add Switches. In the PowerON Auto Provisioning (POAP) tab, click the Add discovery credentials icon and enter the discovery credentials.

Click Save & Deploy after you complete adding switches.


IPv6 Underlay Support for Easy Fabric

From Cisco DCNM Release 11.3(1), you can create an Easy fabric with IPv6 only underlay. The IPv6 underlay is supported only for the Easy_Fabric_11_1 template. For more information, see Configuring a VXLAN Fabric with IPv6 Underlay.

Brownfield Deployment-Transitioning VXLAN Fabric Management to DCNM

DCNM supports Brownfield deployments, wherein you transition your VXLAN BGP EVPN fabric management to DCNM. The transition involves migrating existing network configurations to DCNM. For information, see Managing a Brownfield VXLAN BGP EVPN Fabric.

Creating an External Fabric

In DCNM 11.1(1) release, you can add switches to the external fabric. Generic pointers:

  • An external fabric is a monitor-only or managed mode fabric. DCNM supports only the monitor mode for Cisco IOS-XR family devices.

  • You can import, remove, and delete switches for an external fabric.

  • For Inter-Fabric Connection (IFC) cases, you can choose Cisco 9000, 7000 and 5600 Series switches as destination switches in the external fabric.

  • You can use non-existing switches as destination switches.

  • The template that supports an external fabric is External_Fabric.

  • If an external fabric is an MSD fabric member, then the MSD topology screen displays the external fabric with its devices, along with the member fabrics and their devices.

    When viewed from an external fabric topology screen, any connections to non-DCNM managed switches are represented by a cloud icon labeled as Undiscovered.

  • You can set up a Multi-Site or a VRF-lite IFC by manually configuring the links for the border devices in the VXLAN fabric or by using an automatic Deploy Border Gateway Method or VRF Lite IFC Deploy Method. If you are configuring the links manually for the border devices, we recommend using the Core Router role to set up a Multi-Site eBGP underlay from a Border Gateway device to a Core Router and the Edge Router role to set up a VRF-lite Inter-Fabric Connection (IFC) from a Border device to an Edge device.

  • If you are using the Cisco Nexus 7000 Series Switch with Cisco NX-OS Release 6.2(24a) on the LAN Classic or External fabrics, make sure to enable AAA IP Authorization in the fabric settings.

  • You can discover the following non-Nexus devices in an external fabric:

    • IOS-XE family devices: Cisco CSR 1000v, Cisco IOS XE Gibraltar 16.10.x, Cisco ASR 1000 Series routers, and Cisco Catalyst 9000 Series Switches

    • IOS-XR family devices: ASR 9000 Series Routers, IOS XR Release 6.5.2 and Cisco NCS 5500 Series Routers, IOS XR Release 6.5.3

    • Arista 4.2 (Any model)

  • Configure all the non-Nexus devices, except Cisco CSR 1000v, before adding them to the external fabric.

  • From Cisco DCNM Release 11.4(1), you can configure non-Nexus devices as borders. You can create an IFC between a non-Nexus device in an external fabric and a Cisco Nexus device in an easy fabric. The interfaces supported for these devices are:

    • Routed

    • Subinterface

    • Loopback

  • From Cisco DCNM, Release 11.4(1), you can configure a Cisco ASR 1000 Series routers and Cisco Catalyst 9000 Series switches as edge routers, set up a VRF-lite IFC and connect it as a border device with an easy fabric.

  • Before a VDC reload, discover Admin VDC in the fabric. Otherwise, the reload operation does not occur.

  • You can connect a Cisco data center to a public cloud using Cisco CSR 1000v. See the Connecting Cisco Data Center and a Public Cloud chapter for a use case.

  • In an external fabric, when you add the switch_user policy and provide the username and password, the password must be an encrypted string that is displayed in the show run command.

    For example:

    
    username admin password 5 $5$I4sapkBh$S7B7UcPH/iVTihLKH5sgldBeS3O2X1StQsvv3cmbYd1  role network-admin

    In this case, the entered password should be 5$5$I4sapkBh$S7B7UcPH/iVTihLKH5sgldBeS3O2X1StQsvv3cmbYd1.

  • For the Cisco Network Insights for Resources (NIR) Release 2.1 and later, and flow telemetry, feature lldp command is one of the required configuration.

    Cisco DCNM pushes feature lldp on the switches only for the Easy Fabric deployments, that is, for the eBGP routed fabric or VXLAN EVPN fabric.

    Therefore, NIR users need to enable feature lldp on all the switches in the following scenarios:

    • External fabric in Monitored or Managed Mode

    • LAN Classic fabric in Monitored or Managed Mode (Applicable for DCNM 11.4(1) or later)

Creating External Fabric from Fabric Builder

Follow these steps to create an external fabric from Fabric Builder.

  1. Click Control > Fabric Builder. The Fabric Builder page comes up.

  2. Click the Create Fabric button. The Add Fabric screen comes up. The fields in this screen are:

    Fabric Name - Enter the name of the external fabric.

    Fabric Template - Choose External_Fabric.

    When you choose the fabric template, the fabric creation screen for creating an external fabric comes up.

  3. Fill up the General tab as shown below.

    BGP AS # - Enter the BGP AS number.

    Fabric Monitor Mode – Clear the check box if you want DCNM to manage the fabric. Keep the check box selected to enable a monitor only external fabric. DCNM supports only the monitor mode for Cisco IOS-XR family devices.

    When you create an Inter-Fabric Connection from a VXLAN fabric to this external fabric, the BGP AS number is referenced as the external or neighbor fabric AS Number.

    When an external fabric is set to Fabric Monitor Mode Only, you cannot deploy configurations on its switches. If you click Save & Deploy in the fabric topology screen, it displays an error message.

    The configurations must be pushed for non-Nexus devices before you discover them in the fabric. You cannot push configurations in the monitor mode.

    However, the following settings (available when you right-click the switch icon) are allowed:

  4. Enter values in the fields under the Advanced tab.

    ext-fab-adv-tab

    vPC Peer Link VLAN - The vPC peer link VLAN ID is autopopulated. Update the field to reflect the correct value.

    Power Supply Mode - Choose the appropriate power supply mode.

    Enable MPLS Handoff: Select the check box to enable the MPLS Handoff feature. For more information, see the Border Provisioning Use Case in VXLAN BGP EVPN Fabrics - MPLS SR and LDP Handoff chapter.

    Underlay MPLS Loopback Id: Specifies the underlay MPLS loopback ID. The default value is 101.

    Enable AAA IP Authorization - Enables AAA IP authorization, when IP Authorization is enabled in the AAA Server

    Enable DCNM as Trap Host - Select this check box to enable DCNM as a trap host.

    Enable CDP for Bootstrapped Switch - Select the check box to enable CDP for bootstrapped switch.

    Enable NX-API - Specifies enabling of NX-API on HTTPS. This check box is unchecked by default.

    Enable NX-API on HTTP - Specifies enabling of NX-API on HTTP. This check box is unchecked by default. Enable this check box and the Enable NX-API check box to use HTTP. If you uncheck this check box, the applications that use NX-API and supported by Cisco DCNM, such as Endpoint Locator (EPL), Layer 4-Layer 7 services (L4-L7 services), VXLAN OAM, and so on, start using the HTTPS instead of HTTP.


    Note


    If you check the Enable NX-API check box and the Enable NX-API on HTTP check box, applications use HTTP.


    Inband Mgmt: For External and Classic LAN Fabrics, this knob enables DCNM to import and manage of switches with inband connectivity (reachable over switch loopback, routed, or SVI interfaces) , in addition to management of switches with out-of-band connectivity (aka reachable over switch mgmt0 interface). The only requirement is that for Inband managed switches, there should be IP reachability from DCNM to the switches via the eth2 aka inband interface. For this purpose, static routes may be needed on the DCNM, that in turn can be configured via the Administration->Customization->Network Preferences option. After enabling Inband management, during discovery, provide the IPs of all the switches to be imported using Inband Management and set maximum hops to 0. DCNM has a pre-check that validates that the Inband managed switch IPs are reachable over the eth2 interface. Once the pre-check has passed, DCNM then discovers and learns about the interface on that switch that has the specified discovery IP in addition to the VRF that the interface belongs to. As part of the process of switch import/discovery, this information is captured in the baseline intent that is populated on the DCNM. For more information, see Inband Management in External Fabrics and LAN Classic Fabrics.


    Note


    Bootstrap or POAP is only supported for switches that are reachable over out-of-band connectivity, that is, over switch mgmt0. The various POAP services on the DCNM are typically bound to the eth1 or out-of-band interface. In scenarios, where DCNM eth0/eth1 interfaces reside in the same IP subnet, the POAP services are bound to both interfaces.


    Enable Precision Time Protocol (PTP): Enables PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Source Loopback Id and PTP Domain Id fields are editable. For more information, see Precision Time Protocol for External Fabrics and LAN Classic Fabrics.

    PTP Source Loopback Id: Specifies the loopback interface ID Loopback that is used as the Source IP Address for all PTP packets. The valid values range from 0 to 1023. The PTP loopback ID cannot be the same as RP, Phantom RP, NVE, or MPLS loopback ID. Otherwise, an error will be generated. The PTP loopback ID can be the same as BGP loopback or user-defined loopback which is created from DCNM. If the PTP loopback ID is not found during Save & Deploy, the following error is generated: Loopback interface to use for PTP source IP is not found. Please create PTP loopback interface on all the devices to enable PTP feature.

    PTP Domain Id: Specifies the PTP domain ID on a single network. The valid values range from 0 to 127.

    Fabric Freeform: You can apply configurations globally across all the devices discovered in the external fabric using this freeform field. The devices in the fabric should belong to the same device-type and the fabric should not be in monitor mode. The different device types are:

    • NX-OS

    • IOS-XE

    • IOS-XR

    • Others

    Depending on the device types, enter the configurations accordingly. If some of the devices in the fabric do not support these global configurations, they will go out-of-sync or fail during the deployment. Hence, ensure that the configurations you apply are supported on all the devices in the fabric or remove the devices that do not support these configurations.

  5. Fill up the Resources tab as shown below.

    Subinterface Dot1q Range - The subinterface 802.1Q range and the underlay routing loopback IP address range are autopopulated.

    Underlay Routing Loopback IP Range - Specifies loopback IP addresses for the protocol peering.

    Underlay MPLS Loopback IP Range: Specifies the underlay MPLS SR or LDP loopback IP address range.

    The IP range should be unique, that is, it should not overlap with IP ranges of the other fabrics.

    Enable AAA IP Authorization - Enables AAA IP authorization, when IP Authorization is enabled in the AAA Server

    Enable DCNM as Trap Host - Select this check box to enable DCNM as a trap host.

  6. Fill up the Configuration Backup tab as shown below.

    The fields on this tab are:

    Hourly Fabric Backup: Select the check box to enable an hourly backup of fabric configurations and the intent.

    You can enable an hourly backup for fresh fabric configurations and the intent as well. If there is a configuration push in the previous hour, DCNM takes a backup. In case of the external fabric, the entire configuration on the switch is not converted to intent on DCNM as compared to the VXLAN fabric. Therefore, for the external fabric, both intent and running configuration are backed up.

    Intent refers to configurations that are saved in DCNM but yet to be provisioned on the switches.

    The hourly backups are triggered during the first 10 minutes of the hour.

    Scheduled Fabric Backup: Check the check box to enable a daily backup. This backup tracks changes in running configurations on the fabric devices that are not tracked by configuration compliance.

    Scheduled Time: Specify the scheduled backup time in a 24-hour format. This field is enabled if you check the Scheduled Fabric Backup check box.

    Select both the check boxes to enable both back up processes.

    The backup process is initiated after you click Save.

    The scheduled backups are triggered exactly at the time you specify with a delay of up to two minutes. The scheduled backups are triggered regardless of the configuration deployment status.

    You can also initiate the fabric backup in the fabric topology window. Click Backup Now in the Actions pane.

    Pointers for hourly and scheduled backup:

    • The backups contain running configuration and intent pushed by DCNM. Configuration compliance forces the running config to be the same as the DCNM config. Note that for the external fabric, only some configurations are part of intent and the remaining configurations are not tracked by DCNM. Therefore, as part of backup, both DCNM intent and running config from switch are captured.

  7. Click the Bootstrap tab.

    ext-fab-adv-tab

    Enable Bootstrap - Select this check box to enable the bootstrap feature. After you enable bootstrap, you can enable the DHCP server for automatic IP address assignment using one of the following methods:

    • External DHCP Server: Enter information about the external DHCP server in the Switch Mgmt Default Gateway and Switch Mgmt IP Subnet Prefix fields.

    • Local DHCP Server: Enable the Local DHCP Server check box and enter details for the remaining mandatory fields.

    Enable Local DHCP Server - Select this check box to initiate enabling of automatic IP address assignment through the local DHCP server. When you select this check box, all the remaining fields become editable.

    DHCP Version – Select DHCPv4 or DHCPv6 from this drop-down list. When you select DHCPv4, the Switch Mgmt IPv6 Subnet Prefix field is disabled. If you select DHCPv6, the Switch Mgmt IP Subnet Prefix is disabled.


    Note


    Cisco DCNM IPv6 POAP is not supported with Cisco Nexus 7000 Series Switches. Cisco Nexus 9000 and 3000 Series Switches support IPv6 POAP only when switches are either L2 adjacent (eth1 or out-of-band subnet must be a /64) or they are L3 adjacent residing in some IPv6 /64 subnet. Subnet prefixes other than /64 are not supported.


    If you do not select this check box, DCNM uses the remote or external DHCP server for automatic IP address assignment.

    DHCP Scope Start Address and DHCP Scope End Address - Specifies the first and last IP addresses of the IP address range to be used for the switch out of band POAP.

    Switch Mgmt Default Gateway - Specifies the default gateway for the management VRF on the switch.

    Switch Mgmt IP Subnet Prefix - Specifies the prefix for the Mgmt0 interface on the switch. The prefix should be between 8 and 30.

    DHCP scope and management default gateway IP address specification - If you specify the management default gateway IP address 10.0.1.1 and subnet mask 24, ensure that the DHCP scope is within the specified subnet, between 10.0.1.2 and 10.0.1.254.

    Switch Mgmt IPv6 Subnet Prefix - Specifies the IPv6 prefix for the Mgmt0 interface on the switch. The prefix should be between 112 and 126. This field is editable if you enable IPv6 for DHCP.

    Enable AAA Config - Select this check box to include AAA configs from Advanced tab during device bootup.

    Bootstrap Freeform Config - (Optional) Enter other commands as needed. For example, if you are using AAA or remote authentication-related configurations, add these configurations in this field to save the intent. After the devices boot up, they contain the intent defined in the Bootstrap Freeform Config field.

    Copy-paste the running-config to a freeform config field with correct indentation, as seen in the running configuration on the NX-OS switches. The freeform config must match the running config. For more information, see Resolving Freeform Config Errors in Switches.

    DHCPv4/DHCPv6 Multi Subnet Scope - Specifies the field to enter one subnet scope per line. This field is editable after you check the Enable Local DHCP Server check box.

    The format of the scope should be defined as:

    DHCP Scope Start Address, DHCP Scope End Address, Switch Management Default Gateway, Switch Management Subnet Prefix

    For example: 10.6.0.2, 10.6.0.9, 10.6.0.1, 24

  8. Click ThousandEyes Agent tab. This feature is supported on Cisco DCNM Release 11.5(3) only. For more information, refer to Configuring Global Settings for ThousandEyes Enterprise Agent.

    The fields on this tab are:


    Note


    The fabric settings for ThousandEyes Agent overwrites the global settings and applies the same configuration for all the ThousandEyes Agent installed on switches in that fabric.


    • Enable Fabric Override for ThousandEyes Agent Installation: Select the check box to enable the ThousandEyes Enterprise Agent on the fabric.

    • ThousandEyes Account Group Token: Specifies ThousandEyes Enterprise Agent account group token for installation.

    • VRF on Switch for ThousandEyes Agent Collector Reachability: Specifies the VRF data which provides internet reachability.

    • DNS Domain: Specifies the switch DNS domain configuration.

    • DNS Server IPs: Specifies the comma separated list of IP addresses (v4/v6) of Domain Name System (DNS) server. You can enter a maximum of three IP addresses for the DNS Server.

    • NTP Server IPs: Specifies comma separated list of IP addresses (v4/v6) of Network Time Protocol (NTP) server. You can enter a maximum of three IP addresses for the NTP Server.

    • Enable Proxy for Internet Access: Select the check box to enable the proxy setting for NX-OS switch internet access.

    • Proxy Information: Specifies the proxy server port information.

    • Proxy Bypass: Specifies the server list for which proxy is bypassed.

  9. Click Save.

    After the external fabric is created, the external fabric topology page comes up.

After creating the external fabric, add switches to it.

Add Switches to the External Fabric

  1. Click Add switches. The Inventory Management screen comes up.

    You can also add switches by clicking Tabular View > Switches > + .

  2. Enter the IP address (Seed IP) of the switch.

  3. Choose the device type from the Device Type drop-down list.

    The options are NX-OS, IOS XE, IOS XR, and Other.

    • Choose NX-OS to discover a Cisco Nexus switch.

    • Choose IOS XE to discover a CSR device.

    • Choose IOS XR to discover an ASR device.

    • Choose Other to discover non-Cisco devices.

    Click the appropriate radio button. Refer the Connecting Cisco Data Center and a Public Cloud chapter for more information on adding Cisco CSR 1000v.

    Refer the Adding non-Nexus Devices to External Fabrics section for more information on adding other non-Nexus devices.

    Config compliance is disabled for all non-Nexus devices except for Cisco CSR 1000v.

  4. Enter the administrator username and password of the switch.

  5. Click Start discovery at the bottom part of the screen. The Scan Details section comes up shortly. Since the Max Hops field was populated with 2, the switch with the specified IP address and switches two hops from it are populated.

  6. Select the check boxes next to the concerned switches and click Import into fabric.

    You can discover multiple switches at the same time. The switches must be properly cabled and connected to the DCNM server and the switch status must be manageable.

    The switch discovery process is initiated. The Progress column displays the progress. After DCNM discovers the switch, the screen closes and the fabric screen comes up again. The switch icons are seen at the centre of the fabric screen.

  7. Click Refresh topology to view the latest topology view.

  8. External Fabric Switch Settings - The settings for external fabric switches vary from the VXLAN fabric switch settings. Right-click on the switch icon and set or update switch options.

    The options are:

    Set Role – By default, no role is assigned to an external fabric switch. The allowed roles are Edge Router and Core Router. Assign the Core Router role for a Multi-Site Inter-Fabric Connection (IFC) and the Edge Router role for a VRF Lite IFC between the external fabric and VXLAN fabric border devices.


    Note


    Changing of switch role is allowed only before executing Save & Deploy.


    Modes – Active/Operational mode.

    vPC Pairing – Select a switch for vPC and then select its peer.

    Manage Interfaces – Deploy configurations on the switch interfaces.

    Straight-through FEX, Active/Active FEX, and breakout of interfaces are not supported for external fabric switch interfaces.

    View/edit Policies – Add, update, and delete policies on the switch. The policies you add to a switch are template instances of the templates available in the template library. After creating policies, deploy them on the switch using the Deploy option available in the View/edit Policies screen.

    History – View per switch deployment history.

    Preview Config - View the pending configuration and the side-by-side comparison of the running and expected configuration.

    Deploy Config – Deploy per switch configurations.

    Discovery - You can use this option to update the credentials of the switch, reload the switch, rediscover the switch, and remove the switch from the fabric.

  9. Click Save & Deploy at the top right part of the screen. The template and interface configurations form the configuration provisioning on the switches.

    When you click Save & Deploy, the Configuration Deployment screen comes up.

  10. Click Deploy Config at the bottom part of the screen to initiate pending configuration onto the switch.

  11. Close the screen after deployment is complete.


    Note


    If a switch in an external fabric does not accept default credentials, you should perform one of the following actions:

    • Remove the switch in the external fabric from inventory, and then rediscover.

    • LAN discovery uses both SNMP and SSH, so both passwords need to be the same. You need to change the SSH password to match the SNMP password on the switch. If SNMP authentication fails, discovery is stopped with authentication error. If SNMP authentication passes but SSH authentication fails, DCNM discovery continues, but the switch status shows a warning for the SSH error.


Move an External Fabric Under an MSD Fabric

You should go to the MSD fabric page to associate an external fabric as its member.

  1. Click Control > Fabric Builder to go to the Fabric Builder screen.

  2. Click within the MSD-Parent-Fabric box to go to its topology screen.

  3. In the topology screen, go to the Actions panel and click Move Fabrics.

    The Move Fabric screen comes up. It contains a list of fabrics. The external fabric is displayed as a standalone fabric.

  4. Select the radio button next to the external fabric and click Add.

    Now, in the Scope drop-down box at the top right, you can see that the external fabric appears under the MSD fabric.

  5. Click ← at the top left part of the screen to go to the Fabric Builder screen. In the MSD fabric box’s Member Fabrics field, the external fabric is displayed.

External Fabric Depiction in an MSD Fabric Topology

The MSD topology screen displays MSD member fabrics and external fabrics together. The external fabric External65000 is displayed as part of the MSD topology.


Note


When you deploy networks or VRFs for the VXLAN fabric, the deployment page (MSD topology view) shows the VXLAN and external fabrics that are connected to each other.


External Fabric Switch Operations

In the external fabric topology screen, click Tabular view option in the Actions panel, at the left part of the screen. The Switches | Links screen comes up.

The Switches tab is for managing switch operations and the Links tab is for viewing fabric links. Each row represents a switch in the external fabric, and displays switch details, including its serial number.

The buttons at the top of the table are explained, from left to right direction. Some options are also available when you right-click the switch icon. However, the Switches tab enables you to provision configurations on multiple switches (for adding and deploying policies, and so on) simultaneously.

  • Add switches to the fabric. This option is also available in the topology page (Add switches option in Actions panel).

  • Initiate the switch discovery process by DCNM afresh.

  • Update device credentials such as authentication protocol, username, and password.

  • Reload the switch.

  • Remove the switch from the fabric.

  • View/edit Policies – Add, update, and delete a policy on multiple switches simultaneously. The policies are template instances of templates in the template library. After creating a policy, deploy it on the switches using the Deploy option available in the View/edit Policies screen.


    Note


    If you select multiple switches and deploy a policy instance, then it will be deployed on all the selected switches.


  • Manage Interfaces – Deploy configurations on the switch interfaces.

  • History – View deployment history on the selected switch.

  • Deploy – Deploy switch configurations.

External Fabric Links

You can only view and delete external fabric links. You cannot create links or edit them.

To delete a link in the external fabric, do the following:

  1. Go to the topology screen and click the Tabular view option in the Actions panel, at the left part of the screen.

    The Switches | Links screen comes up.

  2. Choose one or more check boxes and click the Delete icon at the top left.

    The links are deleted.

Move Neighbor Switch to External Fabric

  1. Click Add switches. The Inventory Management screen comes up.

  2. Click Move Neighbor Switches tab.

  3. Select the switch and click Move Neighbor.

    To delete a neighbor, select a switch and click Delete Neighbor.

Discovering New Switches

To discover new switches, perform the following steps:
Procedure

Step 1

Power on the new switch in the external fabric after ensuring that it is cabled to the DCNM server.

Boot the Cisco NX-OS and setup switch credentials.

Step 2

Execute the write, erase, and reload commands on the switch.

Choose Yes to both the CLI commands that prompt you to choose Yes or No.

Step 3

On the DCNM UI, choose Control > Fabric Builder.

The Fabric Builder screen is displayed. It contains a list of fabrics wherein a rectangular box represents each fabric.

Step 4

Click Edit Fabric icon at the top right part of the fabric box.

The Edit Fabric screen is displayed.

Step 5

Click the Bootstrap tab and update the DHCP information.

Step 6

Click Save at the bottom right part of the Edit Fabric screen to save the settings.

Step 7

In the Fabric Builder screen, click within the fabric box.

The fabric topology screen appears.

Step 8

In the fabric topology screen, from the Actions panel at the left part of the screen, click Add switches.

The Inventory Management screen comes up.

Step 9

Click the POAP tab.

In an earlier step, the reload command was executed on the switch. When the switch restarts to reboot, DCNM retrieves the serial number, model number, and version from the switch and displays them on the Inventory Management along screen. Also, an option to add the management IP address, hostname, and password are made available. If the switch information is not retrieved, refresh the screen using the Refresh icon at the top right part of the screen.

Note

 
At the top left part of the screen, export and import options are provided to export and import the .csv file that contains the switch information. You can pre-provision a device using the import option too.

Select the checkbox next to the switch and add switch credentials: IP address and host name.

Based on the IP address of your device, you can either add the IPv4 or IPv6 address in the IP Address field.

Beginning with Release 11.2(1), you can provision devices in advance. To pre-provision devices, refer to Pre-provisioning a Device.

Step 10

In the Admin Password and Confirm Admin Password fields, enter and confirm the admin password.

This admin password is applicable for all the switches displayed in the POAP window.

Note

 

If you do not want to use admin credentials to discover switches, you can instead use the AAA authentication, that is, RADIUS or TACACS credentials for discovery only.

Step 11

(Optional) Use discovery credentials for discovering switches.

  1. Click the Add Discovery Credentials icon to enter the discovery credentials for switches.

  2. In the Discovery Credentials window, enter the discovery credentials such as discovery username and password.

    Click OK to save the discovery credentials.

    If the discovery credentials are not provided, DCNM uses the admin user and password to discover switches.

    Note

     
    • The discovery credentials that can be used are AAA authentication based credentials, that is, RADIUS or TACACS.

    • The discovery credential is not converted as commands in the device configuration. This credential is mainly used to specify the remote user (or other than the admin user) to discover the switches. If you want to add the commands as part of the device configuration, add them in the Bootstrap Freeform Config field under the Bootstrap tab in the fabric settings. Also, you can add the respective policy from View/Edit Policies window.

Step 12

Click Bootstrap at the top right part of the screen.

DCNM provisions the management IP address and other credentials to the switch. In this simplified POAP process, all ports are opened up.

Step 13

After the bootstrapping is complete, close the Inventory Management screen to go to the fabric topology screen.

Step 14

In the fabric topology screen, from the Actions panel at the left part of the screen, click Refresh Topology.

After the added switch completes POAP, the fabric builder topology screen displays the added switch with some physical connections.

Step 15

Monitor and check the switch for POAP completion.

Step 16

Click Save & Deploy at the top right part of the fabric builder topology screen to deploy pending configurations (such as template and interface configurations) onto the switches.

Note

 
  • If there is a sync issue between the switch and DCNM, the switch icon is displayed in red color, indicating that the fabric is Out-Of-Sync. For any changes on the fabric that results in the out-of-sync, you must deploy the changes. The process is the same as explained in the Discovering Existing Switches section.

  • The discovery credential is not converted as commands in the device configuration. This credential is mainly used to specify the remote user (or other than the admin user) to discover the switches. If you want to add the commands as part of the device configuration, add them in the Bootstrap Freeform Config field under the Bootstrap tab in the fabric settings. Also, you can add the respective policy from View/Edit Policies window.

During fabric creation, if you have entered AAA server information (in the Manageability tab), you must update the AAA server password on each switch. Else, switch discovery fails.

Step 17

After the pending configurations are deployed, the Progress column displays 100% for all switches.

Step 18

Click Close to return to the fabric builder topology.

Step 19

Click Refresh Topology to view the update.

All switches must be in green color indicating that they are functional.

The switch and the link are discovered in DCNM. Configurations are built based on various policies (such as fabric, topology, and switch generated policies). The switch image (and other required) configurations are enabled on the switch.

Step 20

Right-click and select History to view the deployed configurations.

Click the Success link in the Status column for more details. An example:

Step 21

On the DCNM UI, the discovered switches can be seen in the fabric topology.

Up to this step, the POAP is completed with basic settings. All the interfaces are set to trunk ports. You must setup interfaces through the Control > Interfaces option for any additional configurations, but not limited to the following:

  • vPC pairing.

  • Breakout interfaces

    Support for breakout interfaces is available for 9000 Series switches.

  • Port channels, and adding members to ports.

Note

 
After discovering a switch (new or existing), at any point in time you can provision configurations on it again through the POAP process. The process removes existing configurations and provision new configurations. You can also deploy configurations incrementally without invoking POAP.

Adding non-Nexus Devices to External Fabrics

You can discover non-Nexus devices in an external fabric. Refer the Cisco DCNM Compatibility Matrix to see the non-Nexus devices supported by Cisco DCNM.

Only Cisco Nexus switches support SNMP discovery by default. Hence, configure all the non-Nexus devices before adding it to the external fabric. Configuring the non-Nexus devices includes configuring SNMP views, groups, and users. See the Configuring non-Nexus Devices for Discovery section for more information.

Cisco CSR 1000v is discovered using SSH. Cisco CSR 1000v does not need SNMP support because it can be installed in clouds where SNMP is blocked for security reasons. See the Connecting Cisco Data Center and a Public Cloud chapter to see a use case to add Cisco CSR 1000v, Cisco IOS XE Gibraltar 16.10.x to an external fabric.

However, Cisco DCNM can only access the basic device information like system name, serial number, model, version, interfaces, up time, and so on. Cisco DCNM does not discover non-Nexus devices if the hosts are part of CDP or LLDP.

The settings that are not applicable for non-Nexus devices appear blank, even if you get many options when you right-click a non-Nexus device in the fabric topology window. You cannot add or edit interfaces for ASR 9000 Series Routers and Arista switches.

From Cisco DCNM, Release 11.4(1), you can add IOS-XE devices like Cisco Catalyst 9000 Series switches and Cisco ASR 1000 Series Routers as well to external fabrics.

Configuring non-Nexus Devices for Discovery

Before discovering any non-Nexus device in Cisco DCNM, configure it on the switch console.

Configuring IOS-XE Devices for Discovery

Before you discover the Cisco IOS-XE devices in DCNM, perform the following steps:

Procedure

Step 1

Run the following SSH commands on the switch console.

switch (config)# hostname <hostname>
switch (config)# ip domain name <domain_name>
switch (config)# crypto key generate rsa
switch (config)# ip ssh time-out 90
switch (config)# ip ssh version 2
switch (config)# line vty 1 4
switch (config-line)# transport input ssh
switch (config)# username admin privilege secret <password>
switch (config)# aaa new-model
switch (config)# aaa authentication login default local
switch (config)# aaa authorization exec default local none

Step 2

Run the following command in DCNM console to perform an SNMP walk.

snmpbulkwalk -v3 -u admin -A <password> -l AuthNoPriv -a MD5 ,switch-mgmt-IP>
        .1.3.6.1.2.1.2.2.1.2

Step 3

Run the following SNMP command on the switch console.

snmp-server user username group-name [remote host {v1 | v2c | v3 [encrypted] [auth {md5 | sha} auth-password]} [priv des 256 privpassword] vrf vrf-name [access access-list]

Configuring Arista Devices for Discovery

Enable Privilege Exec mode using the following command:

switch> enable
switch#

switch# show running confiruation | grep aaa        /* to view the authorization*/
aaa authorization exec default local
Run the following commands in the switch console to configure Arista devices:
switch# configure terminal
switch (config)# username dcnm privilege 15 role network-admin secret cisco123
snmp-server view view_name SNMPv2 included
snmp-server view view_name SNMPv3 included
snmp-server view view_name default included
snmp-server view view_name entity included
snmp-server view view_name if included
snmp-server view view_name iso included
snmp-server view view_name lldp included
snmp-server view view_name system included
snmp-server view sys-view default included
snmp-server view sys-view ifmib included
snmp-server view sys-view system included
snmp-server community private ro
snmp-server community public ro
snmp-server group group_name v3 auth read view_name
snmp-server user username group_name v3 auth md5 password priv aes password 

Note


SNMP password should be same as the password for username.


You can verify the configuration by running the show run command, and view the SNMP view output by running the show snmp view command.

Show Run Command
switch (config)# snmp-server engineID local f5717f444ca824448b00
snmp-server view view_name SNMPv2 included
snmp-server view view_name SNMPv3 included
snmp-server view view_name default included
snmp-server view view_name entity included
snmp-server view view_name if included
snmp-server view view_name iso included
snmp-server view view_name lldp included
snmp-server view view_name system included
snmp-server view sys-view default included
snmp-server view sys-view ifmib included
snmp-server view sys-view system included
snmp-server community private ro
snmp-server community public ro
snmp-server group group_name v3 auth read view_name
snmp-server user user_name group_name v3 localized f5717f444ca824448b00 auth md5 be2eca3fc858b62b2128a963a2b49373 priv aes be2eca3fc858b62b2128a963a2b49373
!
spanning-tree mode mstp
!
service unsupported-transceiver labs f5047577
!
aaa authorization exec default local
!
no aaa root
!
username admin role network-admin secret sha512 $6$5ZKs/7.k2UxrWDg0$FOkdVQsBTnOquW/9AYx36YUBSPNLFdeuPIse9XgyHSdEOYXtPyT/0sMUYYdkMffuIjgn/d9rx/Do71XSbygSn/
username cvpadmin role network-admin secret sha512 $6$fLGFj/PUcuJT436i$Sj5G5c4y9cYjI/BZswjjmZW0J4npGrGqIyG3ZFk/ULza47Kz.d31q13jXA7iHM677gwqQbFSH2/3oQEaHRq08.
username dcnm privilege 15 role network-admin secret sha512 $6$M48PNrCdg2EITEdG$iiB880nvFQQlrWoZwOMzdt5EfkuCIraNqtEMRS0TJUhNKCQnJN.VDLFsLAmP7kQBo.C3ct4/.n.2eRlcP6hij/ 
Show SNMP View Command
configure terminal# show snmp view
view_name SNMPv2 - included
view_name SNMPv3 - included
view_name default - included
view_name entity - included
view_name if - included
view_name iso - included
view_name lldp - included
view_name system - included
sys-view default - included
sys-view ifmib - included
sys-view system - included
leaf3-7050sx#show snmp user

User name : user_name
Security model : v3
Engine ID : f5717f444ca824448b00
Authentication : MD5
Privacy : AES-128
Group : group_name 
Configuring Cisco IOS-XR Devices for Discovery

Run the following commands in the switch console to configure IOS-XR devices:

switch# configure terminal
switch (config)# snmp-server view view_name cisco included
snmp-server view view_name mib-2 included
snmp-server group group_name v3 auth read view_name write view_name
snmp-server user user_name group_name v3 auth md5 password priv des56 password SystemOwner

Note


SNMP password should be same as password for username.


You can verify the configuration by running the show run command.

Configuration and Verification of Cisco IOS-XR Devices
RP/0/RSP0/CPU0:ios(config)#snmp-server view view_name cisco included
RP/0/RSP0/CPU0:ios(config)#snmp-server view view_name mib-2 included
RP/0/RSP0/CPU0:ios(config)#snmp-server group group_name v3 auth read view_name write view_name
RP/0/RSP0/CPU0:ios(config)#snmp-server user user_name group_name v3 auth md5 password priv des56 password SystemOwner
RP/0/RSP0/CPU0:ios(config)#commit Day MMM DD HH:MM:SS Timezone
RP/0/RSP0/CPU0:ios(config)#
RP/0/RSP0/CPU0:ios(config)#show run snmp-server Day MMM DD HH:MM:SS Timezone snmp-server user user_name group1 v3 auth md5 encrypted 10400B0F3A4640585851 priv des56 encrypted 000A11103B0A59555B74 SystemOwner
snmp-server view view_name cisco included
snmp-server view view_name mib-2 included
snmp-server group group_name v3 auth read view_name write view_name 
Discovering non-Nexus Devices in an External Fabric

To add non-Nexus devices to an external fabric in the fabric topology window, perform the following steps:

Before you begin

Ensure that the configurations are pushed for non-Nexus devices before adding them to an external fabric. You cannot push configurations in a fabric in the monitor mode.

Procedure

Step 1

Click Add switches in the Actions pane.

The Inventory Management dialog box appears.

Step 2

Enter values for the following fields under the Discover Existing Switches tab:

Field

Description

Seed IP

Enter the IP address of the switch.

You can import more than one switch by providing the IP address range. For example: 10.10.10.40-60

The switches must be properly cabled and connected to the DCNM server and the switch status must be manageable.

Device Type

  • Choose IOS XE from the drop-down list for adding Cisco CSR 1000v, Cisco ASR 1000 Series routers, or Cisco Catalyst 9000 Series Switches.

  • Choose IOS XR from the drop-down list for adding Cisco NCS 5500 Series Routers, IOS XR Release 6.5.3.

  • Choose Other from the drop-down list for adding non-Cisco devices, like Arista switches.

Username

Enter the username.

Password

Enter the password.

Note

 

An error message appears if you try to discover a device that is already discovered.

Set the password of the device in the LAN Credentials window if the password is not set. To navigate to the LAN Credentials window from the Cisco DCNM Web UI, choose Administration > LAN Credentials.

Step 3

Click Start Discovery.

The Scan Details section appears with the switch details populated.

Step 4

Check the check boxes next to the switches you want to import.

Step 5

Click Import into fabric.

The switch discovery process is initiated. The Progress column displays the progress.

Discovering devices takes some time. A pop-up message appears at the bottom-right about the device discovery after the discovery progress is 100%, or done. For example: <ip-address> added for discovery.

Step 6

Click Close.

The fabric topology window appears with the switches.

Step 7

(Optional) Click Refresh topology to view the latest topology view.

Step 8

(Optional) Click Tabular view in the Actions pane.

The switches and links window appears, where you can view the scan details. The discovery status is discovering in red with a warning icon next to it if the discovery is in progress.

Step 9

(Optional) View the details of the device.

After the discovery of the device:

  • The discovery status changes to ok in green with a check box checked next to it.

  • The value of the device under the Fabric Status column changes to In-Sync.

Note

 

When a switch is in Unreachable discovery status, the last available information of the switch is retained in other columns.


What to do next
Set the appropriate role. Right-click the device, choose Set role.

Pre-provisioning a Device

From Cisco DCNM Release 11.2, you can provision devices in advance.

Note


Ensure that you enter DHCP details in the Bootstrap tab in the fabric settings.
  • The pre-provisioned devices support the following configurations in DCNM:

    • Base management

    • vPC Pairing

    • Intra-Fabric links

    • Ethernet ports

    • Port-channel

    • vPC

    • ST FEX

    • AA FEX

    • Loopback

    • Overlay network configurations

  • The pre-provisioned devices do not support the following configurations in DCNM:

    • Inter-Fabric links

    • Sub-interface

    • Interface breakout configuration

  • When a device is being pre-provisioned has breakout links, you need to specify the corresponding breakout command along with the switch's model and gateway in the Data field in the Add a new device to pre-provisioning window in order to generate the breakout PTI.

    Note the following guidelines:

    • Multiple breakout commands can be separated by a semicolon (;).

    • The definitions of the fields in the data JSON object are as follows:

      • modulesModel: (Mandatory) Specifies the switch module’s model information.

      • gateway: (Mandatory) Specifies the default gateway for the management VRF on the switch. This field is required to create the intent to pre-provision devices. You must enter the gateway even if it is in the same subnet as DCNM to create the intent as part of pre-provisioning a device.

      • breakout: (Optional) Specifies the breakout command provided in the switch.

      • portMode: (Optional) Specifies the port mode of the breakout interface.

    The examples of the values in the Data field are as follows:

    • {"modulesModel": ["N9K-C93180LC-EX"], "gateway": "10.1.1.1/24"}

    • {"modulesModel": ["N9K-C93180LC-EX"],"breakout": "interface breakout module 1 port 1 map 10g-4x", "portMode": "hardware profile portmode 4x100G+28x40G", "gateway": "172.22.31.1/24" }

    • {"modulesModel": ["N9K-X9736C-EX", "N9K-X9732C-FX", "N9K-C9516-FM-E2", "N9K-C9516-FM-E2", "N9K-C9516-FM-E2", "N9K-C9516-FM-E2", "N9K-SUP-B+", "N9K-SC-A", "N9K-SC-A"], "gateway": "172.22.31.1/24"}

    • {"breakout":"interface breakout module 1 port 50 map 10g-4x" , "gateway": "172.16.1.1/24", "modulesModel": ["N9K-C93180YC-EX "]}

    • {"modulesModel": ["N9K-X9732C-EX", "N9K-X9732C-EX", "N9K-C9504-FM-E", "N9K-C9504-FM-E", "N9K-SUP-B", "N9K-SC-A", "N9K-SC-A"], "gateway": "172.29.171.1/24", "breakout":"interface breakout module 1 port 1,11,19 map 10g-4x; interface breakout module 1 port 7 map 25g-4x"}

    • {"modulesModel": ["N9K-C93180LC-EX"], "gateway": "10.1.1.1/24","breakout": "interface breakout module 1 port 1-4 map 10g-4x","portMode": "hardware profile portmode 48x25G + 2x100G + 4x40G"}

Procedure

Step 1

Click Control > Fabric Builder.

The Fabric Builder screen is displayed.

Step 2

Click within the fabric box.

Step 3

From the Actions panel, click the Add switches option.

The Inventory Management screen is displayed.

Step 4

Click the POAP tab.

Step 5

In the POAP tab, do the following:

  1. Click + from the top left part of the screen.

    The Add a new device screen comes up.

  2. Fill up the device details as shown in the screenshot.

  3. Click Save.

IP Address: Specify the IPv4 or IPv6 address of the new device.

Serial Number: The serial number for the new device. Serial number is found in the Cisco Build of Material Purchase and you can refer to these values while using the pre-provisioning feature.

For information about the Data field, see the examples provided in guidelines.

The device details appear in the POAP screen. You can add more devices for pre-provisioning.

At the top left part of the window, Export and Import icons are provided to export and import the .csv file that contains the switch information.

Using the Import option, you can pre-provision multiple devices.

Add new devices’ information in the .csv file with all the mandatory fields (SerialNumber, Model, version, IpAddress, Hostname, and Data fields [JSON Object]).

The Data column consists of the model name of the module to identify the hardware type from the fabric template. A .csv file screenshot:

Step 6

Enter the administration password in the Admin Password and Confirm Admin Password fields.

Step 7

Select the device(s) and click Bootstrap at the top right part of the screen.

The leaf1 device appears in the fabric topology.

From the Actions panel, click Tabular View. You cannot deploy the fabric till the status of all the pre-provisioned switch(es) are displayed as ok under the Discovery Status column.

Note

 

When a switch is in Unreachable discovery status, the last available information of the switch is retained in other columns.

When you connect leaf1 to the fabric, the switch is provisioned with the IP address 10.1.1.1.

Step 8

Navigate to Fabric Builder and set roles for the device.

Create intra-link policy using one of the templates:

  • int_pre_provision_intra_fabric_link to automatically generate intra fabric interface configuration with DCNM allocated IP addresses

  • int_intra_fabric_unnum_link_11_1 if you are using unnumbered links

  • int_intra_fabric_num_link_11_1 if you want to manually assign IP addresses to intra-links

Click Save & Deploy.

Configuration for the switches are captured in corresponding PTIs and can be seen in the View/Edit Policies window.

Step 9

To bring in the physical device, you can follow the manual RMA or POAP RMA procedure.

For more information, see Return Material Authorization (RMA).

If you use the POAP RMA procedure, ignore the error message of failing to put the device into maintenance mode due to no connectivity since it is expected to have no connectivity to a non-existing device.

You need to click Save & Deploy in the fabric after one or more switches are online to provision the host ports. This action must be performed before overlays are provisioned for the host port attachment.


Pre-provisioning an Ethernet Interface

From DCNM Release 11.4(1), you can pre-provision Ethernet interfaces in the Interface window. This pre-provisioning feature is supported in the Easy, External, and eBGP fabrics. You can add Ethernet interfaces to only pre-provisioned devices before they are discovered in DCNM.


Note


Before attaching a network/VRF, you must pre-provision the Ethernet interface before adding it to Port-channels, vPCs, ST FEX, AA FEX, loopback, subinterface, tunnel, ethernet, and SVI configurations.


Before you begin
Make sure that you have a preprovisioned device in your fabric. For information, see Pre-provisioning a Device.
Procedure

Step 1

Navigate to the fabric containing the pre-provisioned device from the Fabric Builder window.

Step 2

Right click the pre-provisioned device and select Manage Interfaces.

You can also navigate to the Interfaces window by selecting Control > Fabrics > Interfaces. From the Scope drop-down list, select the fabric containing the pre-provisioned device.

Step 3

Click Add.

Step 4

Enter all the required details in the Add Interface window.

Type: Select Ethernet from this drop-down list.

Select a device: Select the pre-provisioned device.

Note

 

You cannot add an Ethernet interface to an already managed device in DCNM.

Enter Interface Name: Enter a valid interface name based on the module type. For example, Ethernet1/1, eth1/1, or e1/1. The interface with same name should be available on the device after it is added.

Policy: Select a policy that should be applied on the interface.

For more information, see Adding Interfaces.

Step 5

Click Save.

Step 6

Click Preview to check the expected configuration that will be deployed to the switch after it is added.

Note

 

The Deploy button is disabled for Ethernet interfaces since the devices are pre-provisioned.


Creating a vPC Setup

You can create a vPC setup for a pair of switches in the external fabric. Ensure that the switches are of the same role and connected to each other.
Procedure

Step 1

Right-click one of the two designated vPC switches and choose vPC Pairing.

The Select vPC peer dialog box comes up. It contains a list of potential peer switches. Ensure that the Recommended column for the vPC peer switch is updated as true.

Note

 

Alternatively, you can also navigate to the Tabular view from the Actions pane. Choose a switch in the Switches tab and click vPC Pairing to create, edit, or unpair a vPC pair. However, you can use this option only when you choose a Cisco Nexus switch.

Step 2

Click the radio button next to the vPC peer switch and choose vpc_pair from the vPC Pair Template drop-down list. Only templates with the VPC_PAIR template sub type are listed here.

The vPC Domain and vPC Peerlink tabs appear. You must fill up the fields in the tabs to create the vPC setup. The description for each field is displayed at the extreme right.

vPC Domain tab: Enter the vPC domain details.

vPC+: If the switch is part of a FabricPath vPC + setup, enable this check box and enter the FabricPath switch ID field.

Configure VTEPs: Check this check box to enter the source loopback IP addresses for the two vPC peer VTEPs and the loopback interface secondary IP address for NVE configuration.

NVE interface: Enter the NVE interface. vPC pairing will configure only the source loopback interface. Use the freeform interface manager for additional configuration.

NVE loopback configuration: Enter the IP address with the mask. vPC pairing will only configure primary and secondary IP address for loopback interface. Use the freeform interface manager for additional configuration.

vPC Peerlink tab: Enter the vPC peer-link details.

Switch Port Mode: Choose trunk or access or fabricpath.

If you select trunk, then corresponding fields (Trunk Allowed VLANs and Native VLAN) are enabled. If you select access, then the Access VLAN field is enabled. If you select fabricpath, then the trunk and access port related fields are disabled.

Step 3

Click Save.

The fabric topology window appears. The vPC setup is created.

To update vPC setup details, do the following:

  1. Right-click a vPC switch and choose vPC Pairing.

    The vPC peer dialog box comes up.

  2. Update the field(s) as needed.

    When you update a field, the Unpair icon changes to Save.

  3. Click Save to complete the update.


Undeploying a vPC Setup

Procedure

Step 1

Right-click a vPC switch and choose vPC Pairing.

The vPC peer screen comes up.

Step 2

Click Unpair at the bottom right part of the screen.

The vPC pair is deleted and the fabric topology window appears.

Step 3

Click Save & Deploy.

The Config Deployment dialog box appears.

Step 4

(Optional) Click the value under the Preview Config column.

View the pending configuration in the Config Preview dialog box. The following configuration details are deleted on the switch when you unpair: vPC feature, vPC domain, vPC peerlink, vPC peerlink member ports, loopback secondary IPs, and host vPCs. However, the host vPCs and port channels are not removed. Delete these port channels from the Interfaces window if required.

Note

 

Resync the fabric if it is out of sync.

When you unpair, only PTIs are deleted for following features, but the configuration is not cleared on the switch during Save & Deploy: NVE configuration, LACP feature, fabricpath feature, nv overlay feature, loopback primary ID. In case of host vPCs, port channels and their member ports are not cleared. You can delete these port channels from the Interfaces window if required. You can continue using these features on the switch even after unpairing.

If you are migrating from fabricpath to VXLAN, you need to clear the configuration on the device before deploying the VXLAN configuration.


Multi-Site Domain for VXLAN BGP EVPN Fabrics

A Multi-Site Domain (MSD) is a multifabric container that is created to manage multiple member fabrics. An MSD is a single point of control for definition of overlay networks and VRFs that are shared across member fabrics. When you move fabrics (that are designated to be part of the multifabric overlay network domain) under the MSD as member fabrics, the member fabrics share the networks and VRFs created at the MSD-level. This way, you can consistently provision network and VRFs for different fabrics, at one go. It significantly reduces the time and complexity involving multiple fabric provisionings.

Since server networks and VRFs are shared across the member fabrics (as one stretched network), the new networks and VRFs provisioning function is provided at the MSD fabric level. Any new network and VRF creation is only allowed for the MSD. All member fabrics inherit any new network and VRF created for the MSD.

In DCNM 11.1(1) release, in addition to member fabrics, the topology view for the MSD fabric is introduced. This view displays all member fabrics, and how they are connected to each other, in one view.

Also, a deployment view is introduced for the MSD fabric. You can deploy overlay networks (and VRFs) on member fabrics from a single topology deployment screen, instead of visiting each member fabric deployment screen separately and deploying.


Note


  • vPC support is added for BGWs in the DCNM 11.1(1) release.

  • The MSD feature is unsupported on the switches with the Cisco NX-OS Release 7.0(3)I4(8b) and 7.0(4)I4(x) images.

  • The VXLAN OAM feature in Cisco DCNM is only supported on a single fabric or site.

  • After you unpair a BGW vPC, perform a Save & Deploy on the member fabric followed by a Save & Deploy of the MSD fabric.


A few fabric-specific terms:

  • Standalone fabric: A fabric that is not part of an MSD is referred as a standalone fabric from the MSD perspective. Before the MSD concept, all fabrics were considered standalone, though two or more such fabrics can be connected with each other.

  • Member fabrics: Fabrics that are part of an MSD are called member fabrics or members. Create a standalone fabric (of the type Easy_Fabric) first and then move it within an MSD as a member fabric.

When a standalone fabric is added to the MSD, the following actions take place:

  • The standalone fabric's relevant attributes and the network and VRF definitions are checked against that of the MSD. If there is a conflict, then the standalone fabric addition to the MSD fails. If there are no conflicts, then the standalone fabric becomes a member fabric for the MSD. If there is a conflict, the exact conflicts are logged in the pending errors log for the MSD fabric. You can remedy the conflicts and then attempt to add the standalone fabric to the MSD again.

  • All the VRFs and networks definitions from the standalone fabric that do not have presence in the MSD are copied over to the MSD and in turn inherited to each of its other existing member fabrics.

  • The VRFs (and their definitions) from the MSD (such as the MSD's VRF, and L2 and L3 VNI parameters that do not have presence in the standalone fabric) are inherited into the standalone fabric that just became a member.

Fabric and Switch Instance Variables

While the MSD provisions a global range of network and VRF values, some parameters are fabric-specific and some parameters are switch-specific. The parameters are called fabric instance and switch instance variables.

Fabric instance values can only be edited or updated in the fabric context from the VRFs and Networks window. The appropriate fabric should be selected in the SCOPE drop-down list to edit the fabric instance values. Some of the examples of fabric instance variables are BGP ASN, Multicast group per network or VRF, etc. For information about editing multicast group address, see Editing Networks in the Member Fabric.

Switch instance values can be edited on deployment of the network on the switch. For example, VLAN ID.

MSD and Member Fabric Process Flow

An MSD has multiple sites (and hence, multiple member fabrics under an MSD). VRFs and networks are created for the MSD and get inherited by the member fabrics. For example, VRF-50000 (and L3 network with ID 50000), and L2 networks with IDs 30000 and 30001 are created for the MSD, in one go.

A high-level flow chart of the MSD and member fabric creation and MSD-to-member fabric inheritance process:

The sample flow explained the inheritance from the MSD to one member. An MSD has multiple sites (and hence, multiple member fabrics under an MSD). A sample flow from an MSD to multiple members:

In this example, VRF-50000 (and L3 network with ID 50000), and L2 networks with IDs 30000 and 30001 are created in one go. Networks and VRFs are deployed on the member fabric switches, one after another, as depicted in the image.

In DCNM 11.1(1), you can provision overlay networks through a single MSD deployment screen.


Note


If you move a standalone fabric with existing networks and VRFs to an MSD, DCNM does appropriate validation. This is explained in detail in an upcoming section.


Upcoming sections in the document explain the following:

  • Creation of an MSD fabric.

  • Creation of a standalone fabric (as a potential member) and its movement under the MSD as a member.

  • Creation of networks and VRFs in the MSD and their inheritance to the member fabrics.

  • Deployment of networks and VRFs from the MSD and member fabric topology views.

  • Other scenarios for fabric movement:

    • Standalone fabric with existing networks and VRFs to an MSD fabric.

    • Member fabric from one MSD to another.

Creating an MSD Fabric and Associating Member Fabrics to It

The process is explained in two steps:

  1. Create an MSD fabric.

  2. Create a new standalone fabric and move it under the MSD fabric as a member fabric.

Creating an MSD Fabric

  1. Click Control > Fabric Builder.

    The Fabric Builder screen comes up. When you view the screen for the first time, the Fabrics section has no entries. After you create a fabric, it is displayed on the Fabric Builder screen, wherein a rectangular box represents each fabric.

    A standalone or member fabric contains Switch_Fabric in the Type field, its AS number in the ASN field and mode of replication, Multicast or Ingress Replication, in the Replication Mode field. Since no device or network traffic is associated with an MSD fabric as it is a container, it does not have these fields.

  2. Click the Create Fabric button. The Add Fabric screen comes up. The fields are:

    Fabric Name - Enter the name of the fabric.

    Fabric Template - This field has template options for creating specific types of fabric. Choose MSD_Fabric. The MSD screen comes up.

    The fields in the screen are explained:

    In the General tab, all fields are autopopulated with data. The fields consist of the Layer 2 and Layer 3 VXLAN segment identifier range, the default network and VRF templates, and the anycast gateway MAC address. Update the relevant fields as needed.

    Layer 2 VXLAN VNI Range - Layer 2 VXLAN segment identifier range.

    Layer 3 VXLAN VNI Range - Layer 3 VXLAN segment identifier range.

    VRF Template - Default VRF template.

    Network Template - Default network template.

    VRF Extension Template - Default VRF extension template.

    Network Extension Template - Default network extension template.

    Anycast-Gateway-MAC - Anycast gateway MAC address.

    Multisite Routing Loopback Id – The multicast routing loopback ID is populated in this field.

    ToR Auto-deploy Flag - Select this check box to enable automatic deployment of the networks and VRFs in the Easy Fabric to the ToR switches in the External Fabric when you click Save & Deploy in the MSD fabric.

  3. Click the DCI tab.

    The fields are:

    Multi-Site Overlay IFC Deploy Method – Choose how you will connect the data centers through the BGW, manually, in a back-to-back fashion or through a route server.

    If you choose to connect them through a route server, you should enter the route server details.

    Multi-Site Route Server List – Specify the IP addresses of the route server. If you specify more than one, separate the IP addresses by a comma.

    Multi-Site Route Server BGP ASN List – Specify the BGP AS Number of the router server. If you specify more than one route server, separate the AS Numbers by a comma.

    Multi-Site Underlay IFC Auto Deployment Flag - Check the check box to enable auto configuration. Uncheck the check box for manual configuration.

    Delay Restore Time - Specifies the Multi-Site underlay and overlay control planes convergence time. The minimum value is 30 seconds and the maximum value is 1000 seconds.

    Multi-Site CloudSec – Enables CloudSec configurations on border gateways. If you enable this field, the remaining three fields for CloudSec are editable. For more information, see Support for CloudSec in Multi-Site Deployment.

    Enable Multi-Site eBGP Password - Enables eBGP password for Multi-Site underlay/overlay IFCs.

    eBGP Password - Specifies the encrypted eBGP Password Hex String.

    eBGP Authentication Key Encryption Type - Specifies the BGP key encryption type. It is 3 for 3DES and 7 for Cisco.

  4. Click the Resources tab.

    MultiSite Routing Loopback IP Range – Specify the Multi-Site loopback IP address range used for the EVPN Multi-Site function.

    A unique loopback IP address is assigned from this range to each member fabric because each member site must have a Loopback 100 IP address assigned for overlay network reachability. The per-fabric loopback IP address is assigned on all the BGWs in a specific member fabric.

    DCI Subnet IP Range and Subnet Target Mask – Specify the Data Center Interconnect (DCI) subnet IP address and mask.

  5. Click the Configuration Backup tab.

    Scheduled Fabric Backup: Check the check box to enable a daily backup. This backup tracks changes in running configurations on the fabric devices that are not tracked by configuration compliance.

    Scheduled Time: Specify the scheduled backup time in a 24-hour format. This field is enabled if you check the Scheduled Fabric Backup check box.

    Select both the check boxes to enable both back up processes.

    The backup process is initiated after you click Save.

    The backup configuration files are stored in the following path in DCNM: /usr/local/cisco/dcm/dcnm/data/archive

  6. Click Save.

    A message appears briefly at the bottom right part of the screen, indicating that you have created a new MSD fabric. After fabric creation, the fabric page comes up. The fabric name MSD-Parent-Fabric appears at the top left part of the screen.


    Note


    From Cisco DCNM Release 11.4(1), when you update the MSD fabric settings, only switches with roles relevant to MSD are updated.


    Since the MSD fabric is a container, you cannot add a switch to it. The Add Switches button that is available in the Actions panel for member and standalone fabrics is not available for the MSD fabric.

    When a new MSD is created, the newly created MSD fabric instance appears (as a rectangular box) on the Fabric Builder page. To go to the Fabric Builder page, click the button at the top left part of the MSD-Parent-Fabric page.

    An MSD fabric is displayed as MSD in the Type field, and it contains the member fabric names in the Member Fabrics field. When no member fabric is created, None is displayed.

The steps for creation of an MSD fabric and moving member fabrics under it are:

  1. Create an MSD fabric.

  2. Create a new standalone fabric and move it under the MSD fabric as a member fabric.

Step 1 is completed. Step 2 is explained in the next section.

Creating and Moving a New Fabric Under the MSD Fabric as a Member

A new fabric is created as a standalone fabric. After you create a new fabric, you can move it under an MSD as a member. As a best practice, when you create a new fabric that is a potential member fabric (of an MSD), do not add networks and VRFs to the fabric. Move the fabric under the MSD and then add networks and VRFs for the MSD. That way, there will not be any need for validation (or conflict resolution) between the member and MSD fabric network and VRF parameters.

New fabric creation is explained in the Easy Fabric creation process. In the MSD document, fabric movement is covered. However, some pointers about a standalone (potential member) fabric:

The values that are displayed in the screen are automatically generated. The VXLAN VNI ID ranges (in the L2 Segment ID Range and L3 Partition ID Range fields) allocated for new network and VRF creation are values from the MSD fabric segment ID range. If you want to update the VXLAN VNI ranges or the VRF and Network VLAN ranges, ensure the following:

  • If you update a range of values, ensure that it does not overlap with other ranges.

  • You must update one range of values at a time. If you want to update more than one range of values, do it in separate instances. For example, if you want to update L2 and L3 ranges, you should do the following:

    1. Update the L2 range and click Save.

    2. Click the Edit Fabric option again, update the L3 range and click Save.

Ensure that the Anycast Gateway MAC, the Network Template and the VRF Template field values are the same as the MSD fabric. Else, member fabric movement to the MSD fail.

Other pointers:

  • Ensure that the Anycast Gateway MAC, the Network Template and the VRF Template field values are the same as the MSD fabric. Else, member fabric movement to the MSD fail.

  • The member fabric should have a Site ID configured and the Site ID must be unique among the members.

  • The BGP AS number should be unique for a member fabric.

  • The underlay subnet range for loopback0 should be unique.

  • The underlay subnet range for loopback1 should be unique.

After you click Save, a note appears at the bottom right part of the screen indicating that the fabric is created. When a fabric is created, the fabric page comes up. The fabric name appears at the top left part of the screen.

Simultaneously, the Fabric Builder page also displays the newly created fabric, Member1.

Simultaneously, the Fabric Builder page also displays the newly created fabric, Member1.

Moving the Member1 Fabric Under MSD-Parent-Fabric

You should go to the MSD fabric page to associate a member fabric under it.

If you are on the Fabric Builder page, click within the MSD-Parent-Fabric box to go to the MSD-Parent-Fabric page.

[If you are in the Member1 fabric page, you should go to the MSD-Parent-Fabrics-Docs fabric page. Click <- above the Actions panel. You will reach the Fabric Builder page. Click within the MSD-Parent-Fabric box].

  1. In the MSD-Parent-Fabric page, go to the Actions panel and click Move Fabrics.

    The Move Fabric screen comes up. It contains a list of fabrics.

    Member fabrics of other MSD container fabrics are not displayed here.

    The Member1 fabric is still a standalone fabric. A fabric is considered a member fabric of an MSD fabric only when you associate it with the MSD fabric. Also, each standalone fabric is a candidate for being an MSD fabric member, until you associate it to one of the MSD fabrics.

  2. Since Member1 fabric is to be associated with the MSD fabric, select the Member1 radio button. The Add button is enabled.

  3. Click Add.

    Immediately, a message appears at the top of the screen indicating that the Member1 fabric is now associated with the MSD fabric MSD-Parent-Fabric. Now, the MSD-Parent-Fabric fabric page appears again.

  4. Click the Move Fabrics option to check the fabric status. You can see that the fabric status has changed from standalone to member.

  5. Close this screen.

  6. Click above the Actions panel to go to the Fabric Builder page.

    You can see that Member1 is now added to MSD fabric and is displayed in the Member Fabrics field.

MSD Fabric Topology View Pointers

  • MSD fabric topology view - Member fabrics and their switches are displayed. A boundary defines each member fabric. All fabric devices of the fabric are confined to the boundary.

    All links are displayed, including intra-fabric links and Multi-Site (underlay and overlay), and VRF Lite links to remote fabrics.

  • Member fabric topology view - A member fabric and its switches are displayed. In addition, the connected external fabric is displayed.

  • A boundary defines a standalone VXLAN fabric, and each member fabric in an MSD fabric. A fabric’s devices are confined to the fabric boundary. You can move a switch icon by dragging it. For a better user experience, in addition to switches, DNCM 11.2(1) release allows you to move an entire fabric. To move a fabric, place the cursor within the fabric boundary (but not on a switch icon), and drag it in the desired direction.

Adding and Editing Links

To add a link, right-click anywhere in the topology and use the Add Link option. To edit a link, right-click on the link and use the Edit Link option.

Alternatively, you can use the Tabular view option in the Actions panel.

To know how to add links between border switches of different fabrics (inter-fabric links) or between switches in the same fabric (intra-fabric links), refer the Fabric Links topic.

Creating and Deploying Networks and VRFs in an MSD Fabric

In standalone fabrics, networks and VRFs are created for each fabric. In an MSD fabric, networks and VRFs should be created at the MSD fabric level. The networks and VRFs are inherited by all the member networks. You cannot create or delete networks and VRFs for member fabrics. However, you can edit them.

For example, consider an MSD fabric with two member fabrics. If you create three networks in the MSD fabric, then all three networks will automatically be available for deployment in both the member fabrics.

Though member fabrics inherit the MSD fabric's networks and VRFs, you have to deploy the networks and VRFs distinctly, for each fabric.

In DCNM 11.1(1) release, a deployment view is introduced for the MSD, in addition to the per-fabric deployment view. In this view, you can view and provision overlay networks for all member fabrics within the MSD, at once. However, you still have to apply and save network and VRF configurations distinctly, for each fabric.


Note


Networks and VRFs are the common identifiers (represented across member fabrics) that servers (or end hosts) are grouped under so that traffic can be sent between the end hosts based on the network and VRF IDs, whether they reside in the same or different fabrics. Since they have common representation across member fabrics, networks and VRFs can be provisioned at one go. As the switches in different fabrics are physically and logically distinct, you have to deploy the same networks and VRFs separately for each fabric.

For example, if you create networks 30000 and 30001 for an MSD that contains two member fabrics, the networks are automatically created for the member fabrics and are available for deployment.

In DCNM 11.1(1) release, you can deploy 30000 and 30001 on the border devices of all member fabrics through a single (MSD fabric) deployment screen. Prior to this, you had to access the first member fabric deployment screen, deploy 30000 and 300001 on the fabric's border devices, and then access the second member fabric deployment screen and deploy again.

Networks and VRFs are created in the MSD and deployed in the member fabrics. The steps are explained below:

  1. Create networks and VRFs in the MSD fabric.

  2. Deploy the networks and VRFs in the member fabric devices, one fabric at a time.

Creating Networks in the MSD Fabric

  1. Click Control > Networks (under Fabrics submenu).

    The Networks screen comes up.

  2. Choose the correct fabric from SCOPE. When you select a fabric, the Networks screen refreshes and lists networks of the selected fabric.

  3. Select MSD-Parent-Fabric from the list and click Continue at the top right part of the screen.

    The Networks page comes up. This lists the list of networks created for the MSD fabric. Initially, this screen has no entries.

  4. Click the + button at the top left part of the screen (under Networks) to add networks to the MSD fabric. The Create Network screen comes up. Most of the fields are autopopulated.

    The fields in this screen are:

    Network ID and Network Name - Specifies the Layer 2 VNI and name of the network. The network name should not contain any white spaces or special characters except underscore (_) and hyphen (-).

    VRF Name - Allows you to select the Virtual Routing and Forwarding (VRF).

    When no VRF is created, this field is blank. If you want to create a new VRF, click the + button. The VRF name should not contain any white spaces or special characters except underscore (_), hyphen (-), and colon (:).


    Note


    You can also create a VRF by clicking the VRF View button on the Networks page.


    Layer 2 Only - Specifies whether the network is Layer 2 only.

    Network Template - Allows you to select a network template.

    Network Extension Template - This template allows you to extend the network between member fabrics.

    VLAN ID - Specifies the corresponding tenant VLAN ID for the network.

    Network Profile section contains the General and Advanced tabs, explained below.

    General tab

    IPv4 Gateway/NetMask - Specifies the IPv4 address with subnet.

    IPv6 Gateway/Prefix - Specifies the IPv6 address with subnet.

    VLAN Name - Enter the VLAN name.

    If the VLAN is mapped to more than one subnet, enter the anycast gateway IP addresses for those subnets.

    Interface Description - Specifies the description for the interface.

    MTU for the L3 interface - Enter the MTU for Layer 3 interfaces.

    IPv4 Secondary GW1 - Enter the gateway IP address for the additional subnet.

    IPv4 Secondary GW2 - Enter the gateway IP address for the additional subnet.

    Advanced tab - Optionally, specify the advanced profile settings by clicking the Advanced tab. The options are:

    • ARP Suppression

    • DHCPv4 Server 1 and DHCPv4 Server 2 - Enter the DHCP relay IP address of the first and second DHCP servers.

    • DHCPv4 Server VRF - Enter the DHCP server VRF ID.

    • Loopback ID for DHCP Relay interface - Enter the loopback ID of the DHCP relay interface.

    • Routing Tag – The routing tag is autopopulated. This tag is associated with each gateway IP address prefix.

    • TRM enable – Select the check box to enable TRM.

      For more information, see Overview of Tenant Routed Multicast.

    • L2 VNI Route-Target Both Enable - Select the check box to enable automatic importing and exporting of route targets for all L2 virtual networks.


      Note


      From Cisco DCNM Release 11.5(1), the Enable L3 Gateway on Border field is not available as part of the MSD network settings. You can enable a Layer 3 gateway on the border switches at a fabric level. For more information, see Creating Networks for the Standalone Fabric.


      In the MSD fabric level, if the Enable L3 Gateway on Border check box is selected and you are upgrading to Cisco DCNM Release 11.5(1), then it is automatically removed from the MSD fabric level during upgrade.

    • A sample of the Create Network screen:

  5. Click Create Network. A message appears at the bottom right part of the screen indicating that the network is created. The new network (MyNetwork_30000) appears on the Networks page that comes up.

Editing Networks in the MSD Fabric

  1. In the Networks screen of the MSD fabric, select the network you want to edit and click the Edit icon at the top left part of the screen.

    The Edit Network screen comes up.

    You can edit the Network Profile part (General and Advanced tabs) of the MSD fabric network.

  2. Click Save at the bottom right part of the screen to save the updates.

Network Inheritance from MSD-Parent-Fabric to Member1

MSD-Parent-Fabric fabric contains one member fabric, Member1. Go to the Select a Fabric page to access the Member1 fabric.

  1. Click Control > Networks (under Fabrics submenu).

    The Networks screen comes up.

  2. Choose the correct fabric from SCOPE. When you select a fabric, the Networks screen refreshes and lists networks of the selected fabric.

Editing Networks in the Member Fabric

An MSD can contain multiple fabrics. These fabrics forward BUM traffic via Multicast or Ingress replication. Even if all the fabrics use multicast for BUM traffic, the multicast groups within these fabrics need not be the same.

When you create a network in MSD, it is inherited by all the member fabrics. However, the multicast group address is a fabric instance variable. To edit the multicast group address, you need to navigate to the member fabric and edit the network. For more information about the Multicast Group Address field, see Creating Networks for the Standalone Fabric.

  1. Select the network and click the Edit option at the top left part of the window. The Edit Network window comes up.

  2. Update the multicast group address in one of the following ways:

    • Under Network Profile, click the Generate Multicast IP button to generate a new multicast group address for the selected network, and click Save.

    • Click the Advanced tab in the Network Profile section, update the multicast group address, and click Save.


Note


The Generate Multicast IP option is only available for member fabric networks and not MSD networks.


Deleting Networks in the MSD and Member Fabrics

You can only delete networks from the MSD fabric, and not member fabrics. To delete networks and corresponding VRFs in the MSD fabric, follow this order:

  1. Undeploy the networks on the respective fabric devices before deletion.

  2. Delete the networks from the MSD fabric. To delete networks, use the delete (X) option at the top left part of the Networks screen. You can delete multiple networks at once.


    Note


    When you delete networks from the MSD fabric, the networks are automatically removed from the member fabrics too.


  3. Undeploy the VRFs on the respective fabric devices before deletion.

  4. Delete the VRFs from the MSD fabric by using the delete (X) option at the top left part of the screen. You can delete multiple VRF instances at once.

Creating VRFs in the MSD Fabric

  1. From the MSD fabric's Networks page, click the VRF View button at the top right part of the screen to create VRFs.

    1. Choose the correct fabric from SCOPE. When you select a fabric, the VRFs screen refreshes and lists VRFs of the selected fabric.

    2. Choose the MSD fabric (MSD-Parent-Fabric) from the drop-down box and click Continue. The Networks page comes up.

    3. Click VRF View at the top right part of the Networks page].

    The VRFs page comes up. This lists the list of VRFs created for the MSD fabric. Initially, this screen has no entries.

  2. Click the + button at the top left part of the screen to add VRFs to the MSD fabric. The Create VRF screen comes up. Most of the fields are autopopulated.

    The fields in this screen are:

    VRF ID and VRF Name - The ID and name of the VRF.

    The VRF ID is the VRF VNI or the L3 VNI of the tenant.


    Note


    For ease of use, the VRF creation option is also available while you create a network.


    VRF Template - This is populated with the Default_VRF template.

    VRF Extension Template - This template allows you to extend the VRF between member fabrics.

  3. General tab – Enter the VLAN ID of the VLAN associated with the VRF, the corresponding Layer 3 virtual interface, and the VRF ID.

  4. Advanced tab

    Routing Tag – If a VLAN is associated with multiple subnets, then this tag is associated with the IP prefix of each subnet. Note that this routing tag is associated with overlay network creation too.

    Redistribute Direct Route Map – Specifies the route map name for redistribution of routes in the VRF.

    Max BGP Paths and Max iBGP Paths – Specifies the maximum BGP and iBGP paths.

    TRM Enable – Select the check box to enable TRM.

    If you enable TRM, then the RP address, and the underlay multicast address must be entered.

    For more information, see Overview of Tenant Routed Multicast.

    Is RP External – Enable this checkbox if the RP is external to the fabric. If this field is unchecked, RP is distributed in every VTEP.

    RP Address – Specifies the IP address of the RP.

    RP Loopback ID – Specifies the loopback ID of the RP, if Is RP External is not enabled.

    Underlay Multicast Address – Specifies the multicast address associated with the VRF. The multicast address is used for transporting multicast traffic in the fabric underlay.

    Note


    The multicast address in the Default MDT Address for TRM VRFs field in the fabric settings screen is auto-populated in this field. You can override this field if a different multicast group address should be used for this VRF.

    Overlay Multicast Groups – Specifies the multicast group subnet for the specified RP. The value is the group range in “ip pim rp-address” command. If the field is empty, 224.0.0.0/24 is used as default.

    Enable IPv6 link-local Option - Select the check box to enable the IPv6 link-local option under the VRF SVI. If this check box is unchecked, IPv6 forward is enabled.

    Advertise Host Routes - Select the checkbox to control advertisement of /32 and /128 routes to Edge Routers.

    Advertise Default Route - Select the checkbox to control advertisement of default routes within the fabric.

    A sample screenshot:

    Advanced tab:

  5. Click Create VRF.

    The MyVRF_50000 VRF is created and appears on the VRFs page.

Editing VRFs in the MSD Fabric

  1. In the VRFs screen of the MSD fabric, select the VRF you want to edit and click the Edit icon at the top left part of the screen.

    The Edit VRF screen comes up.

    You can edit the VRF Profile part (General and Advanced tabs).

  2. Click Save at the bottom right part of the screen to save the updates.

VRF Inheritance from MSD-Parent-Fabric to Member1

MSD-Parent-Fabric contains one member fabric, Member1. Do the following to access the member fabric page.

  1. Choose the correct fabric from SCOPE. When you select a fabric, the VRFs screen refreshes and lists VRFs of the selected fabric.

  2. Click the VRF View button. On the VRFs page, you can see that the VRF created for the MSD is inherited to its member.

Deleting VRFs in the MSD and Member Fabrics

You can only delete networks from the MSD fabric, and not member fabrics. To delete networks and corresponding VRFs in the MSD fabric, follow this order:

  1. Undeploy the networks on the respective fabric devices before deletion.

  2. Delete the networks from the MSD fabric.

  3. Undeploy the VRFs on the respective fabric devices before deletion.

  4. Delete the VRFs from the MSD fabric by using the delete (X) option at the top left part of the screen. You can delete multiple VRF instances at once.


    Note


    When you delete VRFs from the MSD fabric, they are automatically removed from the member fabrics too.


Editing VRFs in the Member Fabric

You cannot edit VRF parameters at the member fabric level. Update VRF settings in the MSD fabric. All member fabrics are automatically updated.

Deleting VRFs in the Member Fabric

You cannot delete VRFs at the member fabric level. Delete VRFs in the MSD fabric. The deleted VRFs are automatically removed from all member fabrics.

Step 1 of the following is explained. Step 2 information is mentioned in the next subsection.

  1. Create networks and VRFs in the MSD fabric.

  2. Deploy the networks and VRFs in the member fabric devices, one fabric at a time.

Deployment and Undeployment of Networks and VRFs in Member Fabrics

Before you begin, ensure that you have created networks at the MSD fabric level since the member fabric inherits networks and VRFs created for the MSD fabric.


Note


The deployment (and undeployment) of networks and VRFs in member fabrics are the same as explained for standalone fabrics. Refer Creating and Deploying Networks and VRFs.


Support for CloudSec in Multi-Site Deployment

CloudSec feature allows secured data center interconnect in a multi-site deployment by supporting source-to-destination packet encryption between border gateway devices in different fabrics.

CloudSec feature is supported on Cisco Nexus 9000 Series FX2 platform with Cisco NX-OS Release 9.3(5) or later. The border gateways, border gateway spines, and border gateway superspines that are FX2 platforms, and run Cisco NX-OS Release 9.3(5) or later are referred as CloudSec capable switches.

Cisco DCNM Release 11.4(1) provides an option to enable CloudSec in an MSD fabric.


Note


The CloudSec session is point to point over DCI between border gateways (BGWs) on two different sites. All communication between sites uses Multi-Site PIP instead of VIP. Enabling CloudSec requires a switch from VIP to PIP, which could cause traffic disruption for data flowing between sites. Therefore, it is recommended to enable or disable CloudSec during a maintenance window.


You can also watch the video that demonstrates how to configure the CloudSec feature. See Video: Configuring CloudSec in Cisco DCNM.

Enabling CloudSec in MSD

Navigate to Control > Fabrics > Fabric Builder. You can either create a new MSD fabric by clicking Create Fabric or edit the existing MSD fabric by clicking Edit Fabric.

Under the DCI tab, you can specify the CloudSec configuration details.

Multi-Site CloudSec – Enables CloudSec configurations on border gateways. If you enable this field, the remaining CloudSec fields are editable.

Multi-Site CloudSec – Enables CloudSec configurations on border gateways. If you enable this field, the remaining three fields for CloudSec are editable.

When Cloudsec is enabled at MSD level, DCNM also enables dci-advertise-pip under evpn multisite border-gateway and tunnel-encryption on the uplinks for all Cloudsec capable gateways.

When you click Save & Deploy, you can verify theses configs in the Preview Config window for the border gateway switches.

Note – CloudSec isn’t supported if the border gateway has vPC or TRM is enabled on it, that is, TRM enabled on multisite overlay IFC. If CloudSec is enabled in this scenario, appropriate warning or error messages are generated.

CloudSec Key String – Specifies the hex key string. Enter a 66 hexadecimal string if you choose AES_128_CMAC or enter a 130 hexadecimal string if you choose AES_256_CMAC.

CloudSec Cryptographic Algorithm – Choose AES_128_CMAC or AES_256_CMAC.

CloudSec Enforcement – Specifies whether the CloudSec enforcement should be strict or loose.

strict – Deploys the CloudSec configuration to all the border gateways in fabrics in MSD. If there are any border gateways that don’t support CloudSec, then an error message is generated, and the configuration isn’t pushed to any switch.

If strict is chosen, the tunnel-encryption must-secure CLI is pushed to the CloudSec enabled gateways within MSD.

loose – Deploys the CloudSec configuration to all the border gateways in fabrics in MSD. If there are any border gateways that don’t support CloudSec, then a warning message is generated. In this case, the CloudSec config is only deployed to the switches that support CloudSec. If loose is chosen, the tunnel-encryption must-secure CLI is removed if it exists.


Note


There should be at least two fabrics in MSD with border gateways that support CloudSec. If there’s only one fabric with a CloudSec capable device, then the following error message is generated:

CloudSec needs to have at least 2 sites that can support CloudSec.


To remove this error, meet the criteria of having at least two sites that can support CloudSec or disable CloudSec.

CloudSec Status Report Timer – Specifies the CloudSec Operational Status periodic report timer in minutes. This value specifies how often the DCNM polls the CloudSec status data from the switch. The default value is 5 minutes and the range is from 5 to 60 minutes.

Using the CloudSec feature in DCNM, you can have all the gateways within the MSD to use the same keychain (and have only one key string) and policy. You can provide one key chain string for DCNM to form the key chain policy. DCNM forms the encryption-policy by taking all default values. DCNM pushes the same key chain policy, the same encryption-policy, and encryption-peer policies to each CloudSec capable gateways. On each gateway, there’s one encryption-peer policy for each remote gateway that is CloudSec capable, using the same keychain and same key policy.

If you don’t want to use the same key for the whole MSD fabric or want to enable CloudSec on a subset of all sites, you can use switch_freeform to manually push the CloudSec config to the switches.

Capture all the CloudSec config in switch_freeform.

For example, the below config is included in the switch_freeform policy:

feature tunnel-encryption
evpn multisite border-gateway 600
  dci-advertise-pip
tunnel-encryption must-secure-policy
tunnel-encryption policy CloudSec_Policy1
tunnel-encryption source-interface loopback20
key chain CloudSec_Key_Chain1 tunnel-encryption
  key 1000
    key-octet-string 7 075e731f1a5c4f524f43595f507f7d73706267714752405459070b0b0701585440 cryptographic-algorithm AES_128_CMA
tunnel-encryption peer-ip 192.168.0.6
  keychain CloudSec_Key_Chain1 policy CloudSec_Policy1

Add tunnel-encryption in the Freeform Config of the uplink interface policy which will generate config like the following:

interface ethernet1/13
  no switchport
  ip address 192.168.1.14/24 tag 54321
  evpn multisite dci-tracking
  tunnel-encryption
  mtu 9216
  no shutdown

For more information, see Enabling Freeform Configurations on Fabric Switches.

When CloudSec configuration is added to or removed from the switch, the DCI uplinks will flap, which will trigger multisite BGP session flapping. For multisite with existing cross site traffic, there will be traffic disruption during this transition. Therefore, it is recommended to make the transition during a maintenance window.

If you’re migrating an MSD fabric with the CloudSec configuration into DCNM, the Cloudsec related configuration is captured in switch_freeform and interface freeform config. You do not need to turn on Multi-Site Cloudsec in the MSD fabric setting. If you want to add more fabrics and establish CloudSec tunnels which share the same CloudSec policy including key as the existing one, then you can enable the CloudSec config in the MSD fabric settings. The CloudSec parameters in the MSD fabric setting need to match the existing CloudSec configuration on the switch. The CloudSec configuration is already captured in the freeform config, and enabling CloudSec in MSD will also generate config intents. Therefore, there’s a double intent. For example, if you want to change the CloudSec key in the MSD settings, you need to remove the CloudSec freeform config because DCNM won’t modify config in switch_freeform. Otherwise, the key in the MSD fabric settings is a conflict with the key in the freeform config.

Viewing CloudSec Operational State

From Cisco DCNM 11.5(1), you can use CloudSec Operational View to check the operational status of the CloudSec sessions if CloudSec is enabled on the MSD fabric.

Procedure

Step 1

Choose an MSD fabric.

The fabric topology window appears.

Step 2

Click Tabular view in the Actions pane.

Step 3

Choose the CloudSec Operational View tab.

Step 4

If CloudSec is disabled, the CloudSec Operational View tab isn’t displayed.

The Operational View tab has the following fields and descriptions.

Fields Descriptions
Fabric Name Specifies the fabrics that have a CloudSec session.
Session Specifies the fabrics and border gateway switches involved in the CloudSec session.
Link State

Specifies the status of the CloudSec session. It can be in one of the following states:

  • Up: The CloudSec session is successfully established between the switches.

  • Down: The CloudSec session isn’t operational.

Uptime Specifies the duration of the uptime for the CloudSec session. Specifically, it's the uptime since the last Rx and Tx sessions flapped, and the smaller value among the 2 sessions is displayed.
Oper Reason Specifies the down reason for the CloudSec session state.

All these columns are sortable.

Note

 

After CloudSec is enabled on a fabric, the operational status may not be available until after sessions are created, and the next status poll occurs.


Troubleshooting a CloudSec Session

If a CloudSec session is down, you can find more information about it using Programmable Report.

Procedure

Step 1

Navigate to Applications > Programmable report.

Step 2

Click the Create Report icon.

Step 3

Specify a report name, select the MSD fabric on which the report job should be run, and click Next.

Step 4

From the Template drop-down list, select fabric_cloudsec_oper_status and click Create Job.

The status will change to a green tick indicating Success after the report has been successfully generated.

Step 5

Click the report to view it. This report is similar to the CloudSec Operation View tab.

Step 6

Click View Details to view more information about the CloudSec session status.

Step 7

Click the operational status for a session to view the detailed info about the CloudSec session for each peer fabric and device.


Removing a Fabric From an MSD

To remove a fabric from an MSD fabric, perform the following steps:

Before you begin
Make sure that there are no VRFs deployed on the border switches in the fabric that you want to remove. For more information, see Deployment and Undeployment of Networks and VRFs in Member Fabrics.

Note


From Cisco DCNM Release 11.4(1), after removing an individual fabric from MSD, underlay and overlay IFCs are deleted. If IFCs are extended, an error is reported to disallow the fabric remove.


Procedure

Step 1

From the Fabric Builder window, click an MSD fabric.

Step 2

Click Move Fabric in the Actions menu.

Step 3

In the Move Fabric window, select the respective radio button of the fabric that you want to remove and click Remove.

In the fabric removal notification window, click Close.

Step 4

Click Save & Deploy for the MSD in the Fabric Builder window.

Step 5

Click Deploy Config in the Config Deployment window.

Click Close.

Step 6

Navigate to the fabric that you removed from MSD and click Save & Deploy.

Step 7

Click Deploy Config in the Config Deployment window.

Click Close.


Moving a Standalone Fabric (With Existing Networks and VRFs) to an MSD Fabric

If you move a standalone fabric with existing networks and VRFs to an MSD fabric as a member, ensure that common networks (that is, L2 VNI and L3 VNI information), anycast gateway MAC, and VRF and network templates are the same across the fabric and the MSD. DCNM validates the standalone fabric (network and VRF information) against the (network and VRF information) of the MSD fabric to avoid duplicate entries. An example of duplicate entries is two common network names with a different network ID. After validation for any conflicts, the standalone fabric is moved to the MSD fabric as a member fabric. Details:

  • The MSD fabric inherits the networks and VRFs of the standalone fabric that do not exist in the MSD fabric. These networks and VRFs are in turn inherited by the member fabrics.

  • The newly created member fabric inherits the networks and VRFs of the MSD fabric (that do not exist in the newly created member fabric).

  • If there are conflicts between the standalone and MSD fabrics, validation ensures that an error message is displayed. After the updation, when you move the member fabric to the MSD fabric, the move will be successful. A message comes up at the top of the page indicating that the move is successful.

If you move back a member fabric to standalone status, then the networks and VRFs remain as they are, but they remain relevant as in an independent fabric, outside the purview of an MSD fabric.

Managing Switches Using LAN Classic Templates

From Cisco DCNM Release 11.4(1), you can use the LAN_Classic and Fabric_Group templates to manage the switches that you used to previously manage in the DCNM Classic LAN deployment.

The LAN_Classic fabric template is a generic fabric template to manage Cisco Nexus switches.

Guidelines and Limitations

  • Fabrics using the LAN_Classic fabric template can be changed to use the External_Fabric_11_1 fabric template and then use all its associated functionalities. Note that this is the only supported fabric template conversion and it’s nonreversible.

  • The LAN_Classic fabric can be added as a member of an MSD fabric.

  • Only Cisco Nexus switches are supported in the LAN_Classic fabric.

  • The TOR Auto-Deploy functionality is supported in the LAN_Classic member fabric when a switch with the ToR role is in the fabric. For more information, see Configuring ToR Switches and Deploying Networks.

  • If you are using the Cisco Nexus 7000 Series Switch with Cisco NX-OS Release 6.2(24a) on the LAN Classic or External fabrics, make sure to enable AAA IP Authorization in the fabric settings.

  • The following features in the LAN_Classic template provide the same support as they do for the External_Fabric_11_1 template:

    The following features are supported:

    • Configuration compliance

    • Backup or restore of fabric

    • Network Insights

    • Performance monitoring

    • VMM

    • Topology view

    • Kubernetes visualization

    • RBAC

    For more information, refer to the feature specific sections.

Creating a LAN Classic Fabric

Procedure

Step 1

Navigate to Control > Fabrics > Fabric Builder.

Step 2

Click Create Fabric.

Step 3

Enter the fabric name and choose LAN_Classic from Fabric Template drop - down list.

Step 4

The General tab is displayed by default. The field in this tab is:

Fabric Monitor Mode – Uncheck the check box if you want DCNM to manage the fabric. Keep the check box selected to enable only monitoring of the fabric. In this state, you can’t deploy configurations on its switches.

The configurations must be pushed for devices before you discover them in the fabric. You can’t push configurations in the monitor mode.

Step 5

Click Advanced tab. The fields in this tab are:

vPC Peer Link VLAN - The vPC peer link VLAN ID is autopopulated. Update the field to reflect the correct value.

Power Supply Mode - Choose the appropriate power supply mode.

Enable MPLS Handoff - Select the check box to enable the MPLS Handoff feature. For more information, see the Border Provisioning Use Case in VXLAN BGP EVPN Fabrics - MPLS SR and LDP Handoff chapter.

Underlay MPLS Loopback Id: Specifies the underlay MPLS loopback ID. The default value is 101.

Enable AAA IP Authorization - Enables AAA IP authorization, when IP Authorization is enabled in the AAA Server.

Enable DCNM as Trap Host - Select this check box to enable DCNM as a trap host.

Enable CDP for Bootstrapped Switch - Enables CDP on management interface.

Enable NX-API - Specifies enabling of NX-API. This check box is unchecked by default.

Enable NX-API on HTTP port - Specifies enabling of NX-API on HTTP. This check box is unchecked by default. Enable this check box and the Enable NX-API check box to use HTTP. If you uncheck this check box, the applications that use NX-API and supported by Cisco DCNM, such as Layer 4-Layer 7 services (L4-L7 services), VXLAN OAM, and so on, start using the HTTPS instead of HTTP.

Note

 

If you check the Enable NX-API check box and the Enable NX-API on HTTP check box, applications use HTTP.

Inband Mgmt: For External and Classic LAN Fabrics, this knob enables DCNM to import and manage of switches with inband connectivity (reachable over switch loopback, routed, or SVI interfaces) , in addition to management of switches with out-of-band connectivity (aka reachable over switch mgmt0 interface). The only requirement is that for Inband managed switches, there should be IP reachability from DCNM to the switches via the eth2 aka inband interface. For this purpose, static routes may be needed on the DCNM, that in turn can be configured via the Administration->Customization->Network Preferences option. After enabling Inband management, during discovery, provide the IPs of all the switches to be imported using Inband Management and set maximum hops to 0. DCNM has a pre-check that validates that the Inband managed switch IPs are reachable over the eth2 interface. Once the pre-check has passed, DCNM then discovers and learns about the interface on that switch that has the specified discovery IP in addition to the VRF that the interface belongs to. As part of the process of switch import/discovery, this information is captured in the baseline intent that is populated on the DCNM. For more information, see Inband Management in External Fabrics and LAN Classic Fabrics.

Note

 

Bootstrap or POAP is only supported for switches that are reachable over out-of-band connectivity, that is, over switch mgmt0. The various POAP services on the DCNM are typically bound to the eth1 or out-of-band interface. In scenarios, where DCNM eth0/eth1 interfaces reside in the same IP subnet, the POAP services are bound to both interfaces.

Enable Precision Time Protocol (PTP): Enables PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Source Loopback Id and PTP Domain Id fields are editable. For more information, see Precision Time Protocol for External Fabrics and LAN Classic Fabrics.

Fabric Freeform - You can apply configurations globally across all the devices discovered in the external fabric using this freeform field.

AAA Freeform Config – Specifies the AAA freeform configs.

Step 6

Click the Resources tab. The fields in this tab are:

Subinterface Dot1q Range - The subinterface 802.1Q range and the underlay routing loopback IP address range are autopopulated.

Underlay Routing Loopback IP Range - Specifies loopback IP addresses for the protocol peering.

Underlay MPLS Loopback IP Range - Specifies the underlay MPLS SR or LDP loopback IP address range.

The IP range should be unique, that is, it shouldn’t overlap with IP ranges of the other fabrics.

Step 7

Click Configuration tab. The fields in this tab are:

Hourly Fabric Backup: Select the check box to enable an hourly backup of fabric configurations and the intent.

Scheduled Fabric Backup: Check the check box to enable a daily backup.

Scheduled Time: Specify the scheduled backup time in a 24-hour format. This field is enabled if you check the Scheduled Fabric Backup check box.

Note

 

Hourly or scheduled backup runs only after the next CC hourly run. Backup will run only after scheduled time is elapsed and whenever CC run happens after the elapsed time.

The backup and restore process is similar to that of an external fabric. For more information about backing up and restoring external fabrics, see Fabric Backup and Restore.

Step 8

Click Bootstrap tab. The fields in this tab are:

Enable Bootstrap (For NX-OS Switches Only) - Select this check box to enable the bootstrap feature for only Cisco Nexus switches.

After you enable bootstrap, you can enable the DHCP server for automatic IP address assignment using one of the following methods:

  • External DHCP Server: Enter information about the external DHCP server in Switch Mgmt Default Gateway and Switch Mgmt IP Subnet Prefix fields.

  • Local DHCP Server: Enable Local DHCP Server check box and enter details for the remaining mandatory fields.

Enable Local DHCP Server - Select this check box to initiate enabling of automatic IP address assignment through the local DHCP server. When you select this check box, all the remaining fields become editable.

DHCP Version – Select DHCPv4 or DHCPv6 from this drop-down list. When you select DHCPv4, the Switch Mgmt IPv6 Subnet Prefix field is disabled. If you select DHCPv6, the Switch Mgmt IP Subnet Prefix is disabled.

Note

 

Cisco DCNM IPv6 POAP isn’t supported with Cisco Nexus 7000 Series Switches. Cisco Nexus 9000 and 3000 Series Switches support IPv6 POAP only when switches are either L2 adjacent (eth1 or out-of-band subnet must be a /64) or they are L3 adjacent residing in some IPv6 /64 subnet. Subnet prefixes other than /64 aren’t supported.

If you don’t select this check box, DCNM uses the remote or external DHCP server for automatic IP address assignment.

DHCP Scope Start Address and DHCP Scope End Address - Specifies the first and last IP addresses of the IP address range to be used for the switch out of band POAP.

Switch Mgmt Default Gateway - Specifies the default gateway for the management VRF on the switch.

Switch Mgmt IP Subnet Prefix - Specifies the prefix for the Mgmt0 interface on the switch. The prefix should be between 8 and 30.

DHCP scope and management default gateway IP address specification - If you specify the management default gateway IP address 10.0.1.1 and subnet mask 24, ensure that the DHCP scope is within the specified subnet, between 10.0.1.2 and 10.0.1.254.

Switch Mgmt IPv6 Subnet Prefix - Specifies the IPv6 prefix for the Mgmt0 interface on the switch. The prefix should be between 112 and 126. This field is editable if you enable IPv6 for DHCP.

Enable AAA Config - Enables AAA configure. It includes AAA configs from the Advanced tab during device bootup.

Bootstrap Freeform Config - (Optional) Enter extra commands as needed. For example, if you’re using AAA or remote authentication-related configurations, add these configurations in this field to save the intent. After the devices boot up, they contain the intent defined in the Bootstrap Freeform Config field.

Copy-paste the running-config to a freeform config field with correct indentation, as seen in the running configuration on the NX-OS switches. The freeform config must match the running config. For more information, see Resolving Freeform ConfigErrors in Switches.

DHCPv4/DHCPv6 Multi Subnet Scope - Specifies the field to enter one subnet scope per line. This field is editable after you check the Enable Local DHCP Server check box.

The format of the scope should be defined as:

DHCP Scope Start Address, DHCP Scope End Address, Switch Management Default Gateway, Switch Management Subnet Prefix

For example: 10.6.0.2, 10.6.0.9, 10.6.0.1, 24

After the fabric is created, the fabric topology page comes up.

Step 9

Click ThousandEyes Agent tab. This feature is supported on Cisco DCNM Release 11.5(3) only. For more information, refer to Configuring Global Settings for ThousandEyes Enterprise Agent.

The fields on this tab are:

Note

 

The fabric settings for ThousandEyes Agent overwrites the global settings and applies the same configuration for all the ThousandEyes Agent installed on switches in that fabric.

  • Enable Fabric Override for ThousandEyes Agent Installation: Select the check box to enable the ThousandEyes Enterprise Agent on the fabric.

  • ThousandEyes Account Group Token: Specifies ThousandEyes Enterprise Agent account group token for installation.

  • VRF on Switch for ThousandEyes Agent Collector Reachability: Specifies the VRF data which provides internet reachability.

  • DNS Domain: Specifies the switch DNS domain configuration.

  • DNS Server IPs: Specifies the comma separated list of IP addresses (v4/v6) of Domain Name System (DNS) server. You can enter a maximum of three IP addresses for the DNS Server.

  • NTP Server IPs: Specifies comma separated list of IP addresses (v4/v6) of Network Time Protocol (NTP) server. You can enter a maximum of three IP addresses for the NTP Server.

  • Enable Proxy for Internet Access: Select the check box to enable the proxy setting for NX-OS switch internet access.

  • Proxy Information: Specifies the proxy server port information.

  • Proxy Bypass: Specifies the server list for which proxy is bypassed.


Adding Switches to LAN Classic Fabric

Procedure

Step 1

Click Add switches. The Inventory Management window comes up.

You can also add switches by clicking Tabular View > Switches > + .

Step 2

Enter IP address (Seed IP) of the switch.

Step 3

Enter the administrator username and password of the switch.

Step 4

Click Start discovery at the bottom part of the screen. The Scan Details section comes up shortly. Since the Max Hops field was populated with 2, the switch with the specified IP address and switches two hops from it are populated.

Step 5

Select the check boxes next to the concerned switches and click Import into fabric.

You can discover multiple switches at the same time. The switches must be properly cabled and connected to the DCNM server and the switch status must be manageable.

The switch discovery process is initiated. The Progress column displays the progress. After DCNM discovers the switch, the screen closes and the fabric screen comes up again. The switch icons are seen at the centre of the fabric screen.

Step 6

Click Refresh topology to view the latest topology view.

For more information, see:


Creating a Fabric Group and Associating Member Fabrics

This procedure shows how to a create a Fabric_Group and add LAN_Classic fabrics. The Fabric_Group template is used for grouping LAN_Classic fabrics for visualization.

The following functionalities aren't supported in a Fabric_Group:

  • Fabric backup or restore

  • VXLAN overlay or IFC deployment

  • Changing fabric template to and from any other fabric template

  • Since Fabric_Group doesn't manage any configurations, clicking Save & Deploy reports an error.

Procedure

Step 1

Navigate to Control > Fabrics > Fabric Builder.

Step 2

Click Create Fabric.

Step 3

Enter the fabric name and choose Fabric_Group from the Fabric Template drop - down list.

Step 4

Click Save.

Step 5

In the Actions panel, click Move Fabrics.

Step 6

Select a LAN_Classic fabric in the Move Fabric window.

Note

 

You can select and add only a LAN_Classic fabric in a fabric group.

Step 7

Click Add.

Similarly, you can remove a member fabric by selecting it and clicking Remove.


Support for Inter-Fabric Connection in LAN Classic Fabric Template

The LAN_Classic fabric supports VRF-Lite, Multi-Site, and MPLS IFCs with these conditions:

  • The LAN_Classic fabric as a destination for DCI/VRF-Lite and Multi-Site IFCs is supported, but you can only manually create them by providing the required information. They won’t be automatically created even when the auto deployment options are enabled in the Easy_Fabric_11_1 and MSD_Fabric_11_1 fabrics.

  • You can’t add nonexistent (meta) switches to a LAN_Classic fabric. A meta switch is a placeholder for a switch or device that DCNM can’t discover.

  • The base BGP configurations for the 'Edge Router' and 'Core Router' switch roles aren’t auto generated. Configure them using the switch_freeform policies or other suitable means.

  • If MPLS Handoff is enabled in the fabric settings, MPLS base configurations are auto generated for the 'Edge Router' and 'Core Router' switch roles.

Inband Management in External Fabrics and LAN Classic Fabrics

From Release 11.5(1), Cisco DCNM allows you to import or discover switches with inband connectivity for External and LAN Classic fabrics in Brownfield deployments only. Enable inband management, per fabric, while configuring or editing the Fabric settings. You cannot import or discover switches with inband connectivity using POAP.

After configuration, the Fabric tries to discover switches based on the VRF of the inband management. The fabric template determines the VRF of inband switch using seed IP. If there are multiple VRFs for same seed IP, then no intent will be learnt for seed interfaces. You must create intent/configuration manually.

After configuring\editing the Fabric settings, you must Save and Deploy. You cannot change the Inband Mgmt settings after you import inband managed switches to the Fabric. If you uncheck the checkbox, the following error message is generated.

Inband IP <<IP Address>> cannot be used to import the switch, 
please enable Inband Mgmt in fabric settings and retry.

After the switches are imported to the Fabric, you must manage the interfaces to create intent. Create the intent for the interfaces that you’re importing the switch. Edit\update the Interface configuration. When you try to change the Interface IP, for this inband managed switch, an error message is generated:

Interface <<interface_name>> is used as seed or next-hop egress interface 
for switch import in inband mode. 
IP/Netmask Length/VRF changes are not allowed for this interface.
While managing the interfaces, for switches imported using inband management, you cannot change the seed IP for the switch. The following error will be generated:
<<switch-name>>: Mgmt0 IP Address (<ip-address>) cannot be changed, 
when is it used as seed IP to discover the switch.

Create a policy for next-hop interfaces. Routes to DCNM from 3rd party devices may contain multiple interfaces, known as ECMP routes. Find the next-hop interface and create an intent for the switch. Interface IP and VRF changes are not allowed.

If inband management is enabled, during Image management, eth2 IP address is used to copy images on the switch, in ISSU, EPLD, RPM & SMU installations flows.

If you imported the switches using inband connectivity in the fabric, and later disable the inband Mgmt in the Fabric settings after deployment, the following error message is generated:
The fabric <<fabric name>> was updated with below message:
Fabric Settings cannot be changed for Inband Mgmt, when switches are already imported 
using inband Ip. Please remove the existing switches imported using Inband Ip from the fabric, 
then change the Fabric Setttings.

However, the same fabric can contain switches imported using both inband and out-of-band connectivity.

Precision Time Protocol for External Fabrics and LAN Classic Fabrics

From Release 11.5(1), in the fabric settings for the External_Fabric_11_1 or LAN_Classic template, select the Enable Precision Time Protocol (PTP) check box to enable PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Loopback Id and PTP Domain Id fields are editable.

The PTP feature is supported with Cisco Nexus 9000 Series cloud-scale switches, with NX-OS version 7.0(3)I7(1) or later. Warnings are displayed if there are non-cloud scale devices in the fabric, and PTP is not enabled. Examples of the cloud-scale devices are Cisco Nexus 93180YC-EX, Cisco Nexus 93180YC-FX, Cisco Nexus 93240YC-FX2, and Cisco Nexus 93360YC-FX2 switches. For more information, refer to https://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html#~products.


Note


PTP global configuration is supported with Cisco Nexus 3000 Series switches; however, PTP and ttag configurations are not supported.

For more information, see the Configuring PTP chapter in Cisco Nexus 9000 Series NX-OS System Management Configuration Guide and Cisco Network Insights for Resources Application for Cisco DCNM User Guide.

For External and LAN Classic fabric deployments, you have to enable PTP globally, and also enable PTP on core-facing interfaces. The interfaces could be configured to the external PTP server like a VM or Linux-based machine. Therefore, the interface should be edited to have a connection with the grandmaster clock. For PTP and TTAG configurations to be operational on External and LAN Classic Fabrics, you must sync up of Switch Configs to DCNM using the host_port_resync policy. For more information, see Sync up Out-of-Band Switch Interface Configurations with DCNM.

It is recommended that the grandmaster clock should be configured outside of Easy Fabric and it is IP reachable. The interfaces toward the grandmaster clock need to be enabled with PTP via the interface freeform config.

All core-facing interfaces are auto-enabled with the PTP configuration after you click Save & Deploy. This action ensures that all devices are PTP synced to the grandmaster clock. Additionally, for any interfaces that are not core-facing, such as interfaces on the border devices and leafs that are connected to hosts, firewalls, service-nodes, or other routers, the ttag related CLI must be added. The ttag is added for all traffic entering the VXLAN EVPN fabric and the ttag must be stripped when traffic is exiting this fabric.

Here is the sample PTP configuration:feature ptp

feature ptp

ptp source 100.100.100.10 -> IP address of the loopback interface (loopback0) 
that is already created, or user-created loopback interface in the fabric settings

ptp domain 1 -> PTP domain ID specified in fabric settings

interface Ethernet1/59 -> Core facing interface
  ptp
 
interface Ethernet1/50 -> Host facing interface
  ttag
  ttag-strip

The following guidelines are applicable for PTP:

  • The PTP feature can be enabled in a fabric when all the switches in the fabric have Cisco NX-OS Release 7.0(3)I7(1) or a higher version. Otherwise, the following error message is displayed:

    PTP feature can be enabled in the fabric, when all the switches have 
    NX-OS Release 7.0(3)I7(1) or higher version. Please upgrade switches to 
    NX-OS Release 7.0(3)I7(1) or higher version to enable PTP in this fabric. 
  • For hardware telemetry support in NIR, the PTP configuration is a prerequisite.

  • If you are adding a non-cloud scale device to an existing fabric which contains PTP configuration, the following warning is displayed:

    TTAG is enabled fabric wide, when all devices are cloud-scale switches 
    so it cannot be enabled for newly added non cloud-scale device(s). 
  • If a fabric contains both cloud-scale and non-cloud scale devices, the following warning is displayed when you try to enable PTP:

    TTAG is enabled fabric wide when all devices are cloud-scale switches 
    and is not enabled due to non cloud-scale device(s).
  • TTAG configuration is generated for all the devices if host configuration sync up is performed on all the devices. Ttag configuration will not be generated for any newly added devices if host configuration sync up is not performed on all newly added devices.

    If the configuration is not synced, the following warning is displayed:

    TTAG on interfaces with PTP feature can only be configured for cloud-scale devices.
    It will not be enabled on any newly added switches due to the presence of non cloud-scale devices. 
  • PTP and TTAG configurations are deployed on host interfaces.

  • PTP and TTAG Configurations are supported between switches in the same fabric (intra-fabric links). PTP is created for inter-fabric links, and ttag is created for the inter-fabric link if the other fabric (Switch) is not managed by DCNM. Inter-fabric links do not support PTP or ttag configurations if both fabrics are managed by DCNM.

  • TTAG configuration is configured by default after the breakout. After the links are discovered and connected post breakout, perform Save & Deploy to generate the correct configuration based on the type of port (host, intra-fabric link, or inter fabric link).

Sync up Out-of-Band Switch Interface Configurations with DCNM

From DCNM release 11.5(1), any interface level configuration made outside of DCNM (via CLI) can be synced to DCNM and then managed from DCNM. Also, the vPC pair configurations are automatically detected and paired. This applies to the External_Fabric_11_1 and LAN_Classic fabrics only. The vPC pairing is performed with the vpc_pair policy.


Note


When DCNM is managing switches, ensure that all configuration changes are initiated from DCNM and avoid making changes directly on the switch.


When the interface config is synced up to the DCNM intent, the switch configs are considered as the reference, that is, at the end of the sync up, the DCNM intent reflects what is present on the switch. If there were any undeployed intent on DCNM for those interfaces before the resync operation, they will be lost.

Guidelines

  • Supported in fabrics using the following templates: Easy_Fabric_11_1, External_Fabric_11_1, and LAN_Classic.

  • Supported for Cisco Nexus switches only.

  • Supported for interfaces that don’t have any fabric underlay related policy associated with them prior to the resync. For example, IFC interfaces and intra fabric links aren’t subjected to resync.

  • Supported for interfaces that do not have any custom policy (policy template that isn’t shipped with Cisco DCNM) associated with them prior to resync.

  • Supported for interfaces where the intent is not exclusively owned by a Cisco DCNM feature and/or application prior to resync.

  • Supported on switches that don’t have Interface Groups associated with them.

  • Interface mode (switchport to routed, trunk to access, and so on) changes aren’t supported with overlays attached to that interface.

The sync up functionality is supported for the following interface modes and policies:

Interface Mode Policies
trunk (standalone, po, and vPC PO)
  • int_trunk_host_11_1

  • int_port_channel_trunk_host_11_1

  • int_vpc_trunk_host_11_1

access (standalone, po, and vPC PO)
  • int_access_host_11_1

  • int_port_channel_access_host_11_1

  • int_vpc_access_host_11_1

dot1q-tunnel
  • int_dot1q_tunnel_host_11_1

  • int_port_channel_dot1q_tunnel_host_11_1

  • int_vpc_ dot1q_tunnel_host_11_1

routed

int_routed_host_11_1

loopback

int_freeform

sub-interface

int_subif_11_1

FEX (ST, AA)
  • int_port_channel_fex_11_1

  • int_port_channel_aa_fex_11_1

breakout

interface_breakout

nve int_freeform (only in External_Fabric_11_1/LAN_Classic)
SVI int_freeform (only in External_Fabric_11_1/LAN_Classic)
mgmt0 int_mgmt_11_1

In an Easy fabric, the interface resync will automatically update the network overlay attachments based on the access VLAN or allowed VLANs on the interface.

After the resync operation is completed, the switch interface intent can be managed using normal DCNM procedures.

Syncing up Switch Interface Configurations to DCNM

Before you begin
  • We recommend taking a fabric backup before attempting the interface resync.

  • In External_Fabric_11_1 and LAN_Classic fabrics, for the vPC pairing to work correctly, both the switches must be in the fabric and must be functional.

  • Ensure that the switches are In-Sync and not in Migration-mode or Maintenance-mode.

Procedure

Step 1

In DCNM, navigate to Control > Fabric Builder and click a fabric.

Step 2

Ensure that switches are present in the fabric and vPC pairings are completed, and they are shown in the Topology view. Click Tabular view in the Actions panel.

Step 3

From Tabular view, select one or more switches where the interface intent resync is needed, and click Policies.

Note

 
  • If a pair of switches is already paired with either no_policy or vpc_pair, select only one switch of the pair.

  • If a pair of switches is not paired, then select both the switches.

Step 4

In the Policies window, click the Add Policy icon.

Step 5

In the Add Policy window, select host_port_resync from the Policy drop-down list. Click Save.

Step 6

Check the Mode column for the switches to ensure that they report Migration. For a vPC pair, both switches are in the Migration-mode.

  • After this step, the switches in the Topology view are in Migration-mode.

  • Both the switches in a vPC pair are in the migration mode even if one of the switches is placed into this mode.

  • If switch(es) are unintentionally put into the resync mode, they can be moved back to the normal mode by identifying the host_port_resync policy instance and deleting it from the Policies window.

Step 7

After the configuration changes are ready to sync up to DCNM, navigate to the Tabular view, select the required switches, and click Rediscover switch to ensure that DCNM is aware of any new interfaces and other changes.

Step 8

Click Save & Deploy to start the resync process.

Note

 

This process might take some time to complete based on the size of the switch configuration and the number of switches involved.

Step 9

The Config Deployment window is displayed if no errors are detected during the resync operation. The interface intent is updated in DCNM.

Note

 

If the External_Fabric_11_1 or LAN_Classic fabric is in Monitored Mode, an error message indicating that the fabric is in the read-only mode is displayed. This error message can be ignored and doesn’t mean that the resync process has failed.

Close the Config Deployment window, and you can see that the switches are automatically moved out of the Migration-mode. Switches in a vPC pair that were not paired or paired with no_policy show up as paired and associated with the vpc_pair policy.

Note

 

The host_port_resync policy that was created for the switch is automatically deleted after the resync process is completed successfully.


What to do next

The following limitations are applicable after Syncing up Switch Interface Configurations to DCNM:

  • Changing the port channel membership (once the policy exists) is not supported.

  • Changing the interface mode (for example, trunk to access) on interfaces that have overlays attached is not supported.

  • Resync for interfaces that belong to Interface Groups is not supported.

  • The vPC pairing in External_Fabric_11_1 and LAN_Classic templates must be updated with the vpc_pair policy.

  • In Easy_Fabric fabrics, VXLAN overlay interface attachments are performed automatically based on the allowed VLANs.

MACsec Support in Easy Fabric and eBGP Fabric

From Cisco DCNM Release 11.5(1), MACsec is supported in the Easy Fabric and eBGP Fabric on intra-fabric links. To configure MACsec, enable it in the fabric settings and on each required intra-fabric link. Unlike CloudSec, auto-configuration of MACsec is not supported.

MACsec is supported on switches with minimum Cisco NX-OS Releases 7.0(3)I7(8) and 9.3(5).


Note


Support for MACsec is a preview feature in the Cisco DCNM Release 11.5(1).


Guidelines

  • If MACsec cannot be configured on the physical interfaces of the link, an error is displayed when you click Save. MACsec cannot be configured on the device or link for the following reasons:

    • The minimum NX-OS version is not met.

    • The interface is not MACsec capable.

  • MACsec global parameters in the fabric settings can be changed at any time.

  • MACsec and CloudSec can coexist on a BGW device.

  • MACsec is not supported on Border Leaf.

  • MACsec status of a link with MACsec enabled is displayed on the Links window.

  • Brownfield migration of devices with MACsec configured is supported using switch and interface freeform configs.

    For more information about MACsec configuration, which includes supported platforms and releases, see the Configuring MACsec chapter in Cisco Nexus 9000 Series NX-OS Security Configuration Guide.

The following sections show how to enable and disable MACsec in DCNM:

Enabling MACsec

Procedure

Step 1

Navigate to Control > Fabrics > Fabric Builder.

Step 2

Click Create Fabric to create a new fabric or click Edit Fabric on an existing Easy or eBGP fabric.

Step 3

Click the Advanced tab and specify the MACsec details.

Enable MACsec – Select the check box to enable MACsec for the fabric.

MACsec Primary Key String – Specify a Cisco Type 7 encrypted octet string that is used for establishing the primary MACsec session. For AES_256_CMAC, the key string length must be 130 and for AES_128_CMAC, the key string length must be 66. If these values are not specified correctly, an error is displayed when you save the fabric.

Note

 

The default key lifetime is infinite.

MACsec Primary Cryptographic Algorithm – Choose the cryptographic algorithm used for the primary key string. It can be AES_128_CMAC or AES_256_CMAC. The default value is AES_128_CMAC.

You can configure a fallback key on the device to initiate a backup session if the primary session fails.

MACsec Fallback Key String - Specify a Cisco Type 7 encrypted octet string that is used for establishing a fallback MACsec session. For AES_256_CMAC, the key string length must be 130 and for AES_128_CMAC, the key string length must be 66. If these values are not specified correctly, an error is displayed when you save the fabric.

MACsec Fallback Cryptographic Algorithm - Choose the cryptographic algorithm used for the fallback key string. It can be AES_128_CMAC or AES_256_CMAC. The default value is AES_128_CMAC.

MACsec Cipher Suite – Choose one of the following MACsec cipher suites for the MACsec policy:

  • GCM-AES-128

  • GCM-AES-256

  • GCM-AES-XPN-128

  • GCM-AES-XPN-256

The default value is GCM-AES-XPN-256.

Note

 

The MACsec configuration is not deployed on the switches after the fabric deployment is complete. You need to enable MACsec on intra-fabric links to deploy the MACsec configuration on the switch.

MACsec Status Report Timer – Specifies the MACsec operational status periodic report timer, in minutes.
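
For reference, these fabric-level settings correspond to NX-OS MACsec key chain and policy configuration along the following lines. This is a minimal, illustrative sketch only; the key chain and policy names, key IDs, and key strings are placeholders, and the exact policies that DCNM generates may differ.

feature macsec

key chain dcnm_macsec_primary macsec
  key 1000
    key-octet-string 7 <primary-key-string> cryptographic-algorithm AES_256_CMAC
key chain dcnm_macsec_fallback macsec
  key 2000
    key-octet-string 7 <fallback-key-string> cryptographic-algorithm AES_256_CMAC

macsec policy dcnm_macsec_policy
  cipher-suite GCM-AES-XPN-256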

Step 4

Click a fabric, click Tabular View in the Actions panel, and then click Links.

Step 5

Choose an intra-fabric link on which you want to enable MACsec and click Update Link.

Step 6

In the Link Management – Edit Link window, click Advanced in the Link Profile section, and select the Enable MACsec check box.

If MACsec is enabled on the intra-fabric link but not in the fabric settings, an error is displayed when you click Save.

When MACsec is configured on the link, the following configurations are generated (an illustrative example follows this list):

  • Create MACsec global policies if this is the first link that enables MACsec.

  • Create MACsec interface policies for the link.
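
For example, the interface-level policy typically applies the key chain and MACsec policy from the fabric settings on both physical interfaces of the link. The interface and names below are illustrative placeholders:

interface Ethernet1/1
  macsec keychain dcnm_macsec_primary policy dcnm_macsec_policy fallback-keychain dcnm_macsec_fallback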

Step 7

Click Save and then click Save & Deploy to deploy the MACsec configuration.


Disabling MACsec

To disable MACsec on an intra-fabric link, navigate to the Link Management – Edit Link window, unselect the Enable MACsec check box, click Save, and then click Save & Deploy. This action performs the following:

  • Deletes MACsec interface policies from the link.

  • If this is the last link where MACsec is enabled, MACsec global policies are also deleted from the device.

Disable MACsec on all intra-fabric links before you navigate to the Fabric Settings and unselect the Enable MACsec check box under the Advanced tab to disable MACsec on the fabric. If there’s an intra-fabric link in the fabric with MACsec enabled, an error is displayed when you click Save & Deploy.

Overview of Tenant Routed Multicast

Tenant Routed Multicast (TRM) enables multicast forwarding on a VXLAN fabric that uses a BGP-based EVPN control plane. TRM provides multi-tenancy aware multicast forwarding between senders and receivers within the same or different subnets, local to or across VTEPs.

With TRM enabled, multicast forwarding in the underlay is leveraged to replicate VXLAN encapsulated routed multicast traffic. A Default Multicast Distribution Tree (Default-MDT) is built per VRF. This is in addition to the existing multicast groups used for Layer-2 VNI broadcast, unknown unicast, and Layer-2 multicast replication. The individual multicast group addresses in the overlay are mapped to the respective underlay multicast address for replication and transport. The advantage of the BGP-based approach is that the VXLAN BGP EVPN fabric with TRM can operate as a fully distributed Overlay Rendezvous-Point (RP), with an RP present on every edge device (VTEP).

A multicast-enabled data center fabric is typically part of an overall multicast network. Multicast sources, receivers, and multicast rendezvous points might reside inside the data center but also might be inside the campus or externally reachable via the WAN. TRM allows a seamless integration with existing multicast networks. It can leverage multicast rendezvous points external to the fabric. Furthermore, TRM allows for tenant-aware external connectivity using Layer-3 physical interfaces or subinterfaces.

For more information, see the following:

Overview of Tenant Routed Multicast with VXLAN EVPN Multi-Site

Tenant Routed Multicast with Multi-Site enables multicast forwarding across multiple VXLAN EVPN fabrics connected via Multi-Site.

The following two use cases are supported:

  • Use Case 1: TRM provides Layer 2 and Layer 3 multicast services across sites for sources and receivers across different sites.

  • Use Case 2: Extending TRM functionality from the VXLAN fabric to sources and receivers external to the fabric.

TRM Multi-Site is an extension of the BGP-based TRM solution that enables multiple TRM sites, each with multiple VTEPs, to connect to each other and provide multicast services across sites in the most efficient way possible. Each TRM site operates independently, and the border gateway on each site allows stitching across sites. There can be multiple Border Gateways for each site. In a given site, the BGW peers with the Route Server or the BGWs of other sites to exchange EVPN and MVPN routes. On the BGW, BGP imports routes into the local VRF/L3VNI/L2VNI and then advertises those imported routes into the fabric or the WAN, depending on where the routes were learned from.

Tenant Routed Multicast with VXLAN EVPN Multi-Site Operations

The operations for TRM with VXLAN EVPN Multi-Site are as follows:

  • Each Site is represented by Anycast VTEP BGWs. DF election across BGWs ensures no packet duplication.

  • Traffic between Border Gateways uses the ingress replication mechanism. Traffic is encapsulated with a VXLAN header followed by an IP header.

  • Each Site will only receive one copy of the packet.

  • Multicast source and receiver information across sites is propagated by BGP protocol on the Border Gateways configured with TRM.

  • The BGW on each site receives the multicast packet and re-encapsulates it before sending it to the local site.

For information about guidelines and limitations for TRM with VXLAN EVPN Multi-Site, see Configuring Tenant Routed Multicast.

Configuring TRM for Single Site Using Cisco DCNM

This section assumes that a VXLAN EVPN fabric has already been provisioned using Cisco DCNM.

Procedure

Step 1

Enable TRM for the selected Easy Fabric. If the fabric template is Easy_Fabric_11_1, click the Fabric settings, navigate to the Replication tab, and check the Enable Tenant Routed Multicast (TRM) field. In addition, the default MDT multicast group field is auto-populated with a default value.

Enable Tenant Routed Multicast (TRM): Select the check box to enable Tenant Routed Multicast (TRM) that allows overlay multicast traffic to be supported over EVPN/MVPN in the VXLAN BGP EVPN fabric.

Default MDT Address for TRM VRFs: The multicast address for Tenant Routed Multicast traffic is populated. By default, this address is from the IP prefix specified in the Multicast Group Subnet field. When you update either field, ensure that the TRM address is chosen from the IP prefix specified in Multicast Group Subnet.

Click Save to save the fabric settings. At this point, all the switches turn blue because they are in the pending state. Click Save and Deploy to enable the following (an illustrative sample of the generated configuration is shown after this list):

  • Enable feature ngmvpn: Enables the Next-Generation Multicast VPN (ngMVPN) control plane for BGP peering.

  • Configure ip multicast multipath s-g-hash next-hop-based: Multipath hashing algorithm for the TRM enabled VRFs.

  • Configure ip igmp snooping vxlan: Enables IGMP Snooping for VXLAN VLANs.

  • Configure ip multicast overlay-spt-only: Enables the MVPN Route-Type 5 on all MVPN enabled Cisco Nexus 9000 switches.

  • Configure and Establish MVPN BGP AFI Peering: This is necessary for the peering between the BGP route reflector (RR) and the leaf switches.
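
The following is a minimal sketch of the kind of configuration that the Save and Deploy operation pushes for these items. The BGP AS number and neighbor address are placeholders, and the exact configuration that DCNM generates may differ.

feature ngmvpn
ip igmp snooping vxlan
ip multicast multipath s-g-hash next-hop-based
ip multicast overlay-spt-only

router bgp 65001
  neighbor 10.2.0.1
    address-family ipv4 mvpn
      send-community extended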

For a VXLAN EVPN fabric created using the Easy_Fabric_eBGP fabric template, the Enable Tenant Routed Multicast (TRM) and Default MDT Address for TRM VRFs fields can be found on the EVPN tab of the fabric settings.

Step 2

Enable TRM for the VRF.

Navigate to Control > VRFs and edit the selected VRF. Navigate to the Advanced Tab and edit the following TRM settings:

TRM Enable – Select the check box to enable TRM. If you enable TRM, then the RP address and the underlay multicast address must be entered.

Is RP External – Enable this check box if the RP is external to the fabric. If this field is unchecked, RP is distributed in every VTEP.

Note

 

If the RP is external to the fabric, enable this check box. When Is RP External is enabled, the RP Loopback ID field is greyed out.

RP Address – Specifies the IP address of the RP.

RP Loopback ID – Specifies the loopback ID of the RP, if Is RP External is not enabled.

Underlay Multicast Address – Specifies the multicast address associated with the VRF. The multicast address is used for transporting multicast traffic in the fabric underlay.

Note

 

The multicast address in the Default MDT Address for TRM VRFs field in the fabric settings screen is auto-populated in this field. You can override this field if a different multicast group address should be used for this VRF.

Overlay Multicast Groups – Specifies the multicast group subnet for the specified RP. The value is the group range in the ip pim rp-address command. If the field is empty, 224.0.0.0/24 is used as the default.

Click Save to save the settings. The switches go into the pending state and are displayed in blue. These settings enable the following (a sketch of the resulting configuration is shown after this list):

  • Enable PIM on L3VNI SVI.

  • Route-Target Import and Export for MVPN AFI.

  • RP and other multicast configuration for the VRF.

  • Loopback interface using the above RP address and RP loopback id for the distributed RP.
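
A minimal sketch of the resulting per-VRF configuration follows, assuming an illustrative VRF named MyVRF_50001, L3VNI SVI Vlan2001, RP loopback ID 254, RP address 10.254.254.1, and overlay group range 224.0.0.0/24. The loopback interface is created only for the distributed RP, that is, when Is RP External is not enabled. The exact configuration that DCNM generates may differ.

vrf context MyVRF_50001
  ip pim rp-address 10.254.254.1 group-list 224.0.0.0/24
  address-family ipv4 unicast
    route-target both auto mvpn

interface loopback254
  vrf member MyVRF_50001
  ip address 10.254.254.1/32
  ip pim sparse-mode

interface Vlan2001
  ip pim sparse-mode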

Step 3

Enable TRM for the network.

Navigate to Control > Networks. Edit the selected network and navigate to the Advanced tab. Edit the following TRM setting:

TRM enable – Select the check box to enable TRM.

Click Save to save the settings. The switches go into the pending state and are displayed in blue. The TRM settings enable the following (a sketch of the resulting configuration is shown after this list):

  • Enable PIM on the L2VNI SVI.

  • Create a PIM neighbor policy named none to avoid PIM neighborship with PIM routers within a VLAN. The none keyword is a configured route map that denies any IPv4 addresses, which prevents establishing PIM neighborship using the anycast IP.
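
A minimal sketch of the corresponding configuration on the L2VNI SVI follows. The VLAN ID is an illustrative placeholder, and the exact configuration that DCNM generates may differ.

route-map none deny 10

interface Vlan2300
  ip pim sparse-mode
  ip pim neighbor-policy none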


Configuring TRM for Multi-Site Using Cisco DCNM

This section assumes that a Multi-Site Domain (MSD) has already been deployed by Cisco DCNM and TRM needs to be enabled.

Procedure

Step 1

Enable TRM on the BGWs.

Navigate to Control > VRFs. Make sure that the right DC Fabric is selected under the Scope and edit the VRF. Navigate to the Advanced tab. Edit the TRM settings. Repeat this process for every DC Fabric and its VRFs.

TRM Enable – Select the check box to enable TRM. If you enable TRM, then the RP address and the underlay multicast address must be entered.

Is RP External – Enable this check box if the RP is external to the fabric. If this field is unchecked, RP is distributed in every VTEP.

Note

 

If the RP is external to the fabric, enable this check box. When Is RP External is enabled, the RP Loopback ID field is greyed out.

RP Address – Specifies the IP address of the RP.

RP Loopback ID – Specifies the loopback ID of the RP, if Is RP External is not enabled.

Underlay Multicast Address – Specifies the multicast address associated with the VRF. The multicast address is used for transporting multicast traffic in the fabric underlay.

Note

 

The multicast address in the Default MDT Address for TRM VRFs field in the fabric settings screen is auto-populated in this field. You can override this field if a different multicast group address should be used for this VRF.

Overlay Multicast Groups – Specifies the multicast group subnet for the specified RP. The value is the group range in the ip pim rp-address command. If the field is empty, 224.0.0.0/24 is used as the default.

Enable TRM BGW MSite - Select the check box to enable TRM on Border Gateway Multi-Site.

Click Save to save the settings. The switches go into the pending state and are displayed in blue. These settings enable the following (a sketch of the Multi-Site specific configuration is shown after this list):

  • Enable feature ngmvpn: Enables the Next-Generation Multicast VPN (ngMVPN) control plane for BGP peering.

  • Enables PIM on L3VNI SVI.

  • Configures L3VNI Multicast Address.

  • Route-Target Import and Export for MVPN AFI.

  • RP and other multicast configuration for the VRF.

  • Loopback interface for the distributed RP.

  • Enables the Multi-Site BUM ingress replication method for extending the Layer 2 VNI.
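
As an illustrative sketch, the Multi-Site specific portions typically appear on the NVE interface of the BGW. The VNIs and group address below are placeholders, and the exact configuration that DCNM generates may differ.

interface nve1
  member vni 50001 associate-vrf
    mcast-group 239.1.2.100
  member vni 30000
    multisite ingress-replication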

Step 2

Establish MVPN AFI between the BGWs.

Navigate to Control > Fabrics. Select the MSD fabric. Click Tabular view and click Links. Filter it by the policy - Overlays.

Select and edit each overlay peering to enable TRM by checking the Enable TRM check box.

Click Save to save the settings. The switches go into the pending state and are displayed in blue. The TRM settings enable the MVPN peerings between the BGWs, or between the BGWs and the Route Server.


SSH Key RSA Handling

Bootstrap scenario

If the running configuration of the switch contains the ssh key rsa command with a key-length value other than 1024, the ssh key rsa key-length force command with the required value (any value other than 1024) needs to be added to the bootstrap freeform configuration during bootstrap.

Greenfield and Brownfield scenarios

Use the ssh key rsa key-length force command to change the key-length variable to a value other than 1024.

However, on Cisco Nexus 9000 Releases 9.3(1) and 9.3(2), the ssh key rsa key-length force command fails while the device is booting up during the ASCII replay process. For more information, refer to CSCvs40704.

The configurations are considered to be in-sync when both the intent and the switch running configuration have the same command. For example, the status is considered to be in-sync when the ssh key rsa 2048 command is present in both the intent and the running configuration. However, consider a scenario in which the ssh key rsa 2040 command was pushed to the switch as an Out-Of-Band change. While the intent has a key-length value of 2048, the device has a key-length value of 2040. In such instances, the switch is marked as out-of-sync.

The diff shown in the Pending Config tab (in both Strict Config-Compliance and non-Strict Config-Compliance mode) cannot be deployed onto the switch from DCNM as the feature ssh command has to be used to disable the SSH feature before making any change to the ssh key rsa command. This would lead to a dropped connection to DCNM. In such a scenario, the diff can be resolved by modifying the intent such that there is no diff.
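
For reference, changing the RSA key directly on the switch involves a sequence along the following lines, which disables the SSH feature and therefore drops SSH sessions, including the connection from DCNM. The key length shown is only an example.

no feature ssh
ssh key rsa 2040 force
feature ssh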

With Strict Config-Compliance mode:

  • Delete the Policy Template Instance (PTI) that has the ssh key rsa 2048 force command by clicking View/Edit Policies in the Tabular View of the Fabric Builder window.

  • Create a new PTI with the ssh key rsa 2040 force command by clicking View/Edit Policies.

Without Strict Config-Compliance mode:

  • Delete the PTI with the ssh key rsa 2048 force command in the intent by clicking View/Edit Policies in the Tabular View of the Fabric Builder window.

  • Create a switch_freeform PTI with the ssh key rsa 2040 force command in the intent to match the Out-Of-Band change from the device.

Switch Operations

To view various options, right-click a switch:

Set Role: Assign a role to the switch. You can assign any one of the following roles to a switch:

  • Spine

  • Leaf (Default role)

  • Border

  • Border Spine

  • Border Gateway

  • Access

  • Aggregation

  • Edge Router

  • Core Router

  • Super Spine

  • Border Super Spine

  • Border Gateway Spine

  • ToR

Alternatively, you can also navigate to the Tabular view from the Actions pane. Choose one or more devices of the same device type and click Set Role to set roles for devices. The device types are:

  • NX-OS

  • IOS XE

  • IOS XR

  • Other


Note


Ensure that you have moved switches from maintenance mode to active mode or operational mode before setting roles.

You can change the switch role only before executing Save & Deploy.


You can assign one of the following roles for non-Nexus devices:

  • Spine

  • Leaf

  • Access (This role is available only for Cisco ASR 1000 Series routers and Cisco Catalyst 9000 Series switches).

  • Edge Router (Use this role for VRF-Lite).

  • Core Router

  • Super Spine

  • Preview Config

  • ToR (This role is available only for Cisco Catalyst 9000 series switches).

From the DCNM 11.1(1) release, you can change the switch role from the existing role to the required role if there are no overlays on the switches. Click Save and Deploy to generate the updated configuration. The following switch role changes are allowed:

  • Leaf to Border

  • Border to Leaf

  • Leaf to Border Gateway

  • Border Gateway to Leaf

  • Border to Border Gateway

  • Border Gateway to Border

  • Spine to Border Spine

  • Border Spine to Spine

  • Spine to Border Gateway Spine

  • Border Gateway Spine to Spine

  • Border Spine to Border Gateway Spine

  • Border Gateway Spine to Border Spine

You cannot change the switch role from any Leaf role to any Spine role and from any Spine role to any Leaf role.

If the switch role is not changed according to the allowed switch role changes mentioned above for Easy fabrics, the following error is displayed after you click Save and Deploy:
Switch[<serial-number>]: Role change from <switch-role> to <switch-role> is not permitted.

You can then change the switch role to the role that was set earlier, or set a new role, and configure the fabric.

If you have not created any policy template instances before clicking Save and Deploy, and there are no overlays, you can change the role of a switch to any other required role.

If you change the switch role of a vPC switch that is part of a vPC pair, the following error appears when you click Save and Deploy:
Switches role should be the same for VPC pairing. peer1 <serial-number>: [<switch-role>], peer2 <serial-number>: [<switch-role>]

To prevent this scenario, change the switch roles of both the switches in the vPC pair to the same role.

Running EXEC Mode Commands in DCNM

When you first log in, the Cisco NX-OS software places you in the EXEC mode. The commands available in the EXEC mode include the show commands that display the device status and configuration information, the clear commands, and other commands that perform actions that you do not save in the device configuration.

The following procedure shows how to run EXEC commands in DCNM:

Procedure


Step 1

From DCNM, navigate to Control > Fabrics > Fabric Builder.

Step 2

Click a fabric and then click Tabular view in the Actions menu.

Step 3

Select one or more switches and click the Play button (Execute Commands).

Step 4

From the Template drop-down list, select exec_freeform.

Step 5

Enter the commands in the Freeform CLI field.
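
For example, you can enter one or more EXEC commands, one per line. The following commands are only illustrative:

show version
show interface brief
show ip route vrf all
copy running-config startup-config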

Step 6

Click Deploy to run the EXEC commands.

Step 7

In the CLI Execution Status window, you can check the status of the deployment. Click Detailed Status under the Command column to view details.

Step 8

In the Command Execution Details window, click the info under the CLI Response column to view the output or response.


Fabric Multi Switch Operations

Click Tabular view from the Actions pane in the fabric topology window. The tabular view has the following tabs:

Tabular View - Switches

You can manage switch operations in this tab. Each row represents a switch in the fabric, and displays switch details, including its serial number.

Some of the actions that you can perform from this tab are also available when you right-click a switch in the fabric topology window. However, the Switches tab enables you to provision configurations, such as deploying policies, on multiple switches simultaneously.

The Switches tab displays the following information for every switch that you discover in the fabric:

  • Name: Specifies the switch name.

  • IP Address: Specifies the IP address of the switch.

  • Role: Specifies the role of the switch.

  • Serial Number: Specifies the serial number of the switch.

  • Fabric Name: Specifies the name of the fabric, where the switch is discovered.

  • Fabric Status: Specifies the status of the fabric, where the switch is discovered.

  • Discover Status: Specifies the discovery status of the switch.

  • Model: Specifies the switch model.

  • Software Version: Specifies the software version of the switch.

  • ThousandEyes Status: Specifies the status of the ThousandEyes Enterprise Agent.

  • Last Updated: Specifies when the switch was last updated.

  • Mode: Specifies the current mode of the switch.

  • VPC Role: Specifies the vPC role of the switch.

  • VPC Peer: Specifies the vPC peer of the switch.

The Switches tab has the following icons and buttons:

  • Add switches: Click this icon to discover existing or new switches and add them to the fabric. The Inventory Management dialog box appears.

    This option is also available in the fabric topology window. Click Add switches in the Actions pane.

    Refer to the following sections for more information:

  • Rediscover switch: Initiate the switch discovery process by DCNM afresh.

  • Update discovery credentials: Update device credentials such as authentication protocol, username and password.

  • Saving config and Reload: Save the configurations and reload the switch.


    Note


    This option is grayed out if the fabric is in freeze mode, that is, if you have disabled deployments on the fabric.


  • Copy running to startup config: From Cisco DCNM, Release 11.4(1), you can perform an on-demand copy running-configuration to startup-configuration operation for one or more switches.


    Note


    This option will be grayed out if the fabric is in freeze mode, that is, if you have disabled deployments on the fabric.


  • Remove switches: Remove the switch from the fabric.


    Note


    This option will be grayed out if the fabric is in freeze mode, that is, if you have disabled deployments on the fabric.


  • Preview: You can preview the pending configurations and the side-by-side comparison of running configurations and expected configurations.

  • Policies: Add, update and delete a policy. The policies are template instances of templates in the template library. After creating a policy, you should deploy it on the switches using the Deploy option available in the Policies window. You can select more than one policy and view them.


    Note


    If you select multiple switches and deploy a policy instance, then it will be deployed on all the selected switches.


  • ThousandEyes Agent: You can start, stop, install, or uninstall ThousandEyes Enterprise Agent on the switch. You can choose single or multiple switches and select required operation from ThousandEyes Agent drop-down list.


    Note


    When you choose multiple switches to perform a ThousandEyes Enterprise Agent action, ensure that the status of the selected switches is the same.


  • Interfaces: Deploy configurations on the switch interfaces.

  • History: View the deployment history and the policy change history using this button. Choose one or more switches and click History.

    The Policy Change History tab lists the history of policies along with the users who made the changes, such as add, update, or delete.

    Under the Policy Change History tab, for a policy, click Detailed History under the Generated Config column to view the generated config before and after.

    The following table provides the summary of generated config before and after for Policy Template Instances (PTIs).

    PTI Operations Generated Config Before Generated Config After
    Add Empty Contains the config