Fabrics

LAN Fabrics

The following terms are referred to in this document:

  • Greenfield Deployments: Applicable for provisioning new VXLAN EVPN fabrics and eBGP-based routed fabrics.

  • Brownfield Deployments: Applicable for existing VXLAN EVPN fabrics:

    • Migrate CLI-configured VXLAN EVPN fabrics to Nexus Dashboard Fabric Controller using the Easy_Fabric fabric template.

    • NFM migration to Cisco Nexus Dashboard Fabric Controller using the Easy_Fabric fabric template.

Note that in this document the terms switch and device are used interchangeably.

For information about upgrades, refer to the Cisco Nexus Dashboard Fabric Controller Installation and Upgrade Guide for LAN Controller Deployment.

The following table describes the fields that appear on LAN > Fabrics.

Field

Description

Fabric Name

Displays the name of the fabric.

Fabric Technology

Displays the fabric technology based on the fabric template.

Fabric Type

Displays the type of the fabric: Switch Fabric, LAN Monitor, or External.

ASN

Displays the ASN for the fabric.

Fabric Health

Displays the health of the fabric.

The following table describes the action items in the Actions drop-down list that appears on LAN > Fabrics.

Action Item

Description

Create Fabric

From the Actions drop-down list, select Create Fabric. For more instructions, see Create a Fabric.

Edit Fabric

Select a fabric to edit. From the Actions drop-down list, select Edit Fabric. Make the necessary changes and click Save. Click Close to discard the changes.

Delete Fabric

Select a fabric to delete. From the drop-down list, select Delete Fabric. Click Confirm to delete the fabric.

Fabric Summary

Click a fabric to open the slide-in pane. The following sections display the summary of the fabric:

  • Health - Shows the health of the fabric.

  • Alarms - Displays the alarms based on the categories.

  • Fabric Info - Provides basic information about the fabric.

  • Inventory - Provides information about Switch Configuration and Switch Health.

Click the Launch icon in the top-right corner to view the Fabric Overview.

Understanding Fabric Templates

Fabric Templates

The following table provides information about the available fabric templates:

Fabric Template Description Detailed Procedures
Easy_Fabric Fabric template for a VXLAN BGP EVPN deployment with an IGP (OSPF or IS-IS) underlay and an iBGP-based EVPN overlay for Nexus 9000 and Nexus 3000 switches. Both IPv4 and IPv6 underlays are supported. Creating a VXLAN EVPN Fabric Using the Easy_Fabric Template
Easy_Fabric_IOS_XE Fabric template for a VXLAN BGP EVPN deployment with Catalyst 9000 switches. Create Easy_Fabric for Cisco Catalyst 9000 Series Switches
Easy_Fabric_eBGP Fabric template for an eBGP based Routed Fabric deployment with Nexus 9000 and Nexus 3000 switches. This template also supports an eBGP VXLAN BGP EVPN deployment with eBGP used as both the underlay and overlay protocol. Creating VXLAN EVPN Fabric with eBGP-based Underlay
External_Fabric

Fabric template that supports Nexus and non-Nexus devices. Non-Nexus device support includes other Cisco devices (IOS-XE, IOS-XR) and third-party switches. This template can manage BGP configuration on core and edge routers.

Use cases for this template include:

  • Classic hierarchical 2-tier or 3-tier, vPC, or FabricPath-like networks

  • VXLAN EVPN deployments for Cisco Nexus switches other than Nexus 9000 or Nexus 3000

  • VRF-Lite on Core/Edge devices

  • Using NDFC in monitored mode, which is useful if you want to try monitored mode before eventually moving to managed mode

Creating an External Fabric
LAN_Classic Fabric template that allows you to monitor and manage various Cisco Nexus Greenfield and Brownfield deployments, including the traditional 2 or 3-tier, vPC, and FabricPath data center topologies. LAN Fabrics
Fabric_Group Fabric template that contains other LAN_Classic fabrics that allows visualization of groups of Classic LAN fabrics and their interconnections. LAN Fabrics
LAN_Monitor Fabric template to support Fabric Discovery persona for monitoring purposes only. Cisco Nexus Dashboard Insights (NDI) can operate on such a fabric. No configuration provisioning or image management is supported. LAN Fabrics
MSD_Fabric Fabric template for a VXLAN BGP EVPN Multi-Site Domain (MSD) deployment that can contain other VXLAN BGP EVPN fabrics with Layer-2/Layer-3 Overlay DCI Extensions. Creating the MSD_Fabric and Associating Member Fabrics
IPFM_Classic Fabric template for existing deployments of IP Fabric for Media (IPFM). IPFM enables media content providers and broadcasters to use a flexible and scalable IP-based infrastructure. Creating an IPFM Classic Fabric
Easy_Fabric_IPFM Fabric template for an easy greenfield deployment of an IP Fabric for Media (IPFM) fabric. IPFM enables media content providers and broadcasters to use a flexible and scalable IP-based infrastructure. Creating an IPFM Easy Fabric

Note


Enhanced Classic LAN is a preview feature in Nexus Dashboard Fabric Controller Release 12.1.2e. We recommend that you use this feature, which is marked as BETA, in a lab setup only. Do not use this feature in your production deployment.

To view Enhanced Classic LAN fabrics, you must enable this feature. In the Web UI, navigate to Settings > Server Settings > LAN-Fabric, and then check the Enable Preview Features check box.


Prerequisites to Creating a Fabric

  • Update the ESXi host settings in the vSphere Client to accept overriding changes in promiscuous mode. For more information, see the Overriding the Changes in Promiscuous Mode section.

  • Configure the persistent IP addresses in Cisco Nexus Dashboard. For more information, see the Cluster Configuration section in the Cisco Nexus Dashboard User Guide.

Override ESXi Networking for Promiscuous Mode

For NDFC to run on top of the virtual Nexus Dashboard (vND) instance, you must enable promiscuous mode on port groups that are associated with Nexus Dashboard interfaces where External Service IP addresses are specified. vND comprises the Nexus Dashboard management interface and data interface. By default, for LAN deployments, two External Service IP addresses are required for the Nexus Dashboard management interface subnet. Therefore, you must enable promiscuous mode for the associated port-group. If inband management or Endpoint Locator (EPL) is enabled, you must specify External Service IP addresses in the Nexus Dashboard data interface subnet, and you must also enable promiscuous mode for the Nexus Dashboard data/fabric interface port-group. For NDFC SAN Controller, promiscuous mode needs to be enabled only on the port-group associated with the Nexus Dashboard data interface. For more information, refer to the Cisco Nexus Dashboard Deployment Guide.

From Cisco NDFC Release , you can run NDFC on top of a virtual Nexus Dashboard (vND) instance with promiscuous mode disabled on the port groups that are associated with Nexus Dashboard interfaces where External Service IP addresses are specified. vND comprises the Nexus Dashboard management interface and data interface. By default, for the Fabric Controller persona, two External Service IP addresses are required for the Nexus Dashboard management interface subnet.

Before NDFC Release , if the inband management, Endpoint Locator, or POAP feature was enabled on NDFC, you also had to enable promiscuous mode for the Nexus Dashboard data or fabric interface port-group. This setting was mandatory for the traffic flow associated with these features.

Enabling promiscuous mode raises the risk of security issues in NDFC; we recommend keeping the default setting for promiscuous mode.


Note


  • Disabling promiscuous mode is supported from Cisco Nexus Dashboard Release .

  • You can disable promiscuous mode when Nexus Dashboard nodes are layer-3 adjacent on the Data network, BGP is configured, and fabric switches are reachable through the data interface.

  • You can disable promiscuous mode when Nexus Dashboard interfaces are layer-2 adjacent to switch mgmt0 interface.


If inband management or EPL is enabled, you must specify External Service IP addresses in the Nexus Dashboard data interface subnet. You can disable promiscuous mode for the Nexus Dashboard data or fabric interface port-group. For more information, refer to the Cisco Nexus Dashboard Deployment Guide.


Note


Default option for promiscuous mode is Reject.


Procedure


Step 1

Log into your vSphere Client.

Step 2

Navigate to the ESXi host.

Step 3

Right-click the host and choose Settings.

A sub-menu appears.

Step 4

Choose Networking > Virtual Switches.

All the virtual switches appear as blocks.

Step 5

Click Edit Settings of the VM Network.

Step 6

Navigate to the Security tab.

Step 7

Update the Promiscuous mode settings as follows:

  • Check the Override check box.

  • Choose Accept from the drop-down list.

Step 8

Click OK.


Create a Fabric

To create a Fabric using Cisco Nexus Dashboard Fabric Controller Web UI, perform the following steps:

Procedure


Step 1

Choose LAN > Fabrics.

Step 2

From the Actions drop-down list, select Create Fabric.

Step 3

Enter the fabric name and click Choose Template.

Step 4

Specify the values for the fabric settings and click Save.


VXLAN EVPN Fabrics Provisioning

Cisco Nexus Dashboard Fabric Controller provides an enhanced “Easy” fabric workflow for unified underlay and overlay provisioning of the VXLAN BGP EVPN configuration on Nexus 9000 and 3000 series of switches. The configuration of the fabric is achieved via a powerful, flexible, and customizable template-based framework. Using minimal user inputs, an entire fabric can be brought up with Cisco-recommended best practice configurations in a short period of time. The set of parameters exposed in the Fabric Settings allow you to tailor the fabric to your preferred underlay provisioning options.

Border devices in a fabric typically provide external connectivity via peering with appropriate edge/core/WAN routers. These edge/core routers may either be managed or monitored by Nexus Dashboard Fabric Controller. These devices are placed in a special fabric called the External Fabric. The same Nexus Dashboard Fabric Controller can manage multiple VXLAN BGP EVPN fabrics while also offering easy provisioning and management of Layer-2 and Layer-3 DCI underlay and overlay configuration among these fabrics using a special construct called a Multi-Site Domain (MSD) fabric.

The Nexus Dashboard Fabric Controller GUI functions for creating and deploying VXLAN BGP EVPN fabrics are as follows:

LAN > Fabrics > Create Fabric under the Actions drop-down list.

Create, edit, and delete a fabric:

  • Create new VXLAN, MSD, and external VXLAN fabrics.

  • View the VXLAN and MSD fabric topologies, including connections between fabrics.

  • Update fabric settings.

  • Save and deploy updated changes.

  • Delete a fabric (if devices are removed).

Device discovery and provisioning start-up configurations on new switches:

  • Add switch instances to the fabric.

  • Provision start-up configurations and an IP address to a new switch through POAP configuration.

  • Update switch policies, save, and deploy updated changes.

  • Create intra-fabric and inter-fabric links (also called Inter-Fabric Connections [IFCs]).

LAN > Interfaces > Create New Interface under the Actions drop-down list.

Underlay provisioning:

  • Create, deploy, view, edit, and delete a port-channel, vPC switch pair, Straight Through FEX (ST-FEX), Active-Active FEX (AA-FEX), loopback, subinterface, etc.

  • Create breakout and unbreakout ports.

  • Shut down and bring up interfaces.

  • Rediscover ports and view interface configuration history.

LAN > Switches > Add under the Actions drop-down list.

Overlay network provisioning:

  • Create new overlay networks and VRFs (from the range specified in fabric creation).

  • Provision the overlay networks and VRFs on the switches of the fabric.

  • Undeploy the networks and VRFs from the switches.

  • Remove the provisioning from the fabric in Nexus Dashboard Fabric Controller.

LAN > Services menu option.

Provisioning of configuration on service leafs to which L4-7 service appliances may be attached. For more information, see L4-L7 Service Basic Workflow.

This chapter mostly covers configuration provisioning for a single VXLAN BGP EVPN fabric. EVPN Multi-Site provisioning for Layer-2/Layer-3 DCI across multiple fabrics using the MSD fabric is documented in a separate chapter. The deployment details of how overlay networks and VRFs can be easily provisioned from the Fabric Controller are covered in the Creating Networks and Creating VRFs sections under Networks and VRFs.

Guidelines for VXLAN BGP EVPN Fabrics Provisioning

  • For any switch to be successfully imported into Nexus Dashboard Fabric Controller, the user specified for discovery/import should have the following permissions:

    • SSH access to the switch

    • Ability to perform SNMPv3 queries

    • Ability to run the show commands including show run, show interfaces, etc.

    • Ability to execute the guestshell commands, which are prefixed by run guestshell for the Nexus Dashboard Fabric Controller tracker.

  • The switch discovery user need not have the ability to make any configuration changes on the switches. It is primarily used for read access.

  • When an invalid command is deployed by Nexus Dashboard Fabric Controller to a device, for example, a command with an invalid key chain due to an invalid entry in the fabric settings, an error is generated displaying this issue. This error is not cleared after correcting the invalid fabric entry. You need to manually clean up or delete the invalid commands to clear the error.

    Note that the fabric errors related to the command execution are automatically cleared only when the same failed command succeeds in the subsequent deployment.

  • LAN credentials are required for any user who needs write access to the device. LAN credentials must be set in Nexus Dashboard Fabric Controller on a per-user, per-device basis. When a user imports a device into the Easy Fabric and LAN credentials are not set for that device, Nexus Dashboard Fabric Controller moves this device to a migration mode. Once the user sets the appropriate LAN credentials for that device, a subsequent Save & Deploy retriggers the device import process.

  • The Save & Deploy button triggers the intent regeneration for the entire fabric as well as a configuration compliance check for all the switches within the fabric. Use of this button is required in, but not limited to, the following cases:

    • A switch or a link is added, or any change in the topology

    • A change in the fabric settings that must be shared across the fabric

    • A switch is removed or deleted

    • A new vPC pairing or unpairing is done

    • A change in the role for a device

    When you click Recalculate Config, the changes in the fabric are evaluated, and the configuration for the entire fabric is generated. Click Preview Config to preview the generated configuration, and then deploy it at a fabric level. Therefore, Deploy Config can take more time depending on the size of the fabric.

    When you right-click on a switch icon, you can use the Deploy config to switches option to deploy per-switch configurations. This option is a local operation for a switch, that is, the expected configuration or intent for a switch is evaluated against its current running configuration, and a config compliance check is performed for the switch to get the In-Sync or Out-of-Sync status. If the switch is out of sync, the user is provided with a preview of all the configurations running in that particular switch that vary from the intent defined by the user for that respective switch.

  • A persistent configuration diff is seen for the command line system nve infra-vlan int force. The persistent diff occurs if you have deployed this command via the freeform configuration to the switch. Although the switch requires the force keyword during deployment, the running configuration that is obtained from the switch in Nexus Dashboard Fabric Controller doesn’t display the force keyword. Therefore, the system nve infra-vlan int force command always shows up as a diff.

    The intent in Nexus Dashboard Fabric Controller contains the line:

    system nve infra-vlan int force

    The running config contains the line:

    system nve infra-vlan int

    As a workaround to fix the persistent diff, edit the freeform config to remove the force keyword after the first deployment such that it is system nve infra-vlan int .

    The force keyword is required for the initial deploy and must be removed after a successful deploy. You can confirm the diff by using the Side-by-side Comparison tab in the Config Preview window.

    The persistent diff is also seen after a write erase and reload of a switch. Update the intent on Nexus Dashboard Fabric Controller to include the force keyword, and then you need to remove the force keyword after the first deployment.

  • When the switch contains the hardware access-list tcam region arp-ether 256 command, which is deprecated without the double-wide keyword, the following warning is displayed:

    WARNING: Configuring the arp-ether region without "double-wide" is deprecated and can result in silent non-vxlan packet drops. Use the "double-wide" keyword when carving TCAM space for the arp-ether region.

    Since the original hardware access-list tcam region arp-ether 256 command doesn’t match the policies in Nexus Dashboard Fabric Controller, this config is captured in the switch_freeform policy. After the hardware access-list tcam region arp-ether 256 double-wide command is pushed to the switch, the original tcam command that does not contain the double-wide keyword is removed.

    You must manually remove the hardware access-list tcam region arp-ether 256 command from the switch_freeform policy. Otherwise, config compliance shows a persistent diff.

    Here is an example of the hardware access-list command on the switch:

    
    switch(config)# show run | inc arp-ether
    switch(config)# hardware access-list tcam region arp-ether 256
    Warning: Please save config and reload the system for the configuration to take effect
    switch(config)# show run | inc arp-ether
    hardware access-list tcam region arp-ether 256
    switch(config)# 
    switch(config)# hardware access-list tcam region arp-ether 256 double-wide 
    Warning: Please save config and reload the system for the configuration to take effect
    switch(config)# show run | inc arp-ether
    hardware access-list tcam region arp-ether 256 double-wide
    

    You can see that the original tcam command is overwritten.

Creating a VXLAN EVPN Fabric Using the Easy_Fabric Template

This topic describes how to create a new VXLAN EVPN fabric using the Easy_Fabric template and contains descriptions for the IPv4 underlay. For information about the IPv6 underlay, see IPv6 Underlay Support for Easy Fabric.

  1. Navigate to the LAN Fabrics page:

    LAN > Fabrics

  2. Click Actions > Create Fabric.

    The Create Fabric window appears.

  3. Enter a unique name for the fabric in the Fabric Name field, then click Choose Fabric.

    A list of all available fabric templates is displayed.

  4. From the available list of fabric templates, choose the Easy_Fabric template, then click Select.

  5. Enter the necessary field values to create a fabric.

    The tabs and their fields in the screen are explained in the following sections. The overlay and underlay network parameters are included in these tabs.


    Note


    If you’re creating a standalone fabric as a potential member fabric of an MSD fabric (used for provisioning overlay networks for fabrics that are connected through EVPN Multi-Site technology), see Multi-Site Domain for VXLAN BGP EVPN Fabrics before creating the member fabric.


  6. When you have completed the necessary configurations, click Save.

    • Click on the fabric to display a summary in the slide-in pane.

    • Click on the Launch icon to display the Fabric Overview.

General Parameters

The General Parameters tab is displayed by default. The fields in this tab are described in the following table.

Field

Description

BGP ASN

Enter the BGP AS number that the fabric is associated with. This must be the same as the ASN of the existing fabric.

Enable IPv6 Underlay

Enable the IPv6 underlay feature. For information, see IPv6 Underlay Support for Easy Fabric.

Enable IPv6 Link-Local Address

Enables the IPv6 Link-Local address.

Fabric Interface Numbering

Specifies whether you want to use point-to-point (p2p) or unnumbered networks.

Underlay Subnet IP Mask

Specifies the subnet mask for the fabric interface IP addresses.

Underlay Subnet IPv6 Mask

Specifies the subnet mask for the fabric interface IPv6 addresses.

Underlay Routing Protocol

The IGP used in the fabric, OSPF, or IS-IS.

Route-Reflectors (RRs)

The number of spine switches that are used as route reflectors for transporting BGP traffic. Choose 2 or 4 from the drop-down box. The default value is 2.

To deploy spine devices as RRs, Nexus Dashboard Fabric Controller sorts the spine devices based on their serial numbers, and designates two or four spine devices as RRs. If you add more spine devices, existing RR configuration won’t change.

Increasing the count – You can increase the route reflectors from two to four at any point in time. Configurations are automatically generated on the other two spine devices designated as RRs.

Decreasing the count – When you reduce four route reflectors to two, remove the unneeded route reflector devices from the fabric. Follow these steps to reduce the count from 4 to 2.

  1. Change the value in the drop-down box to 2.

  2. Identify the spine switches designated as route reflectors.

    An instance of the rr_state policy is applied on the spine switch if it’s a route reflector. To find out if the policy is applied on the switch, right-click the switch, and choose View/edit policies. In the View/Edit Policies screen, search rr_state in the Template field. It is displayed on the screen.

  3. Delete the unneeded spine devices from the fabric (right-click the spine switch icon and choose Discovery > Remove from fabric).

    If you delete existing RR devices, the next available spine switch is selected as the replacement RR.

  4. Click Deploy Config in the fabric topology window.

You can preselect RRs and RPs before performing the first Save & Deploy operation. For more information, see Preselecting Switches as Route-Reflectors and Rendezvous-Points.

Anycast Gateway MAC

Specifies the anycast gateway MAC address.

Enable Performance Monitoring

Check the check box to enable performance monitoring.

Ensure that you do not clear interface counters from the Command Line Interface of the switches. Clearing interface counters can cause the Performance Monitor to display incorrect data for traffic utilization. If you must clear the counters, and the switch has both the clear counters and clear counters snmp commands (not all switches have the clear counters snmp command), ensure that you run the main command immediately followed by the SNMP command. For example, run the clear counters interface ethernet slot/port command followed by the clear counters interface ethernet slot/port snmp command, as shown in the example below. Running the commands this way can still lead to a one-time spike in the reported data.
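The following is a minimal sketch of that sequence; the interface (ethernet 1/1) is an example only:

switch# clear counters interface ethernet 1/1
switch# clear counters interface ethernet 1/1 snmp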

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Replication

The fields in the Replication tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

Replication Mode

The mode of replication that is used in the fabric for BUM (Broadcast, Unknown Unicast, Multicast) traffic. The choices are Ingress Replication or Multicast. When you choose Ingress replication, the multicast-related fields are disabled.

You can change the fabric setting from one mode to the other, if no overlay profile exists for the fabric.

Multicast Group Subnet

IP address prefix used for multicast communication. A unique IP address is allocated from this group for each overlay network.

The replication mode change isn’t allowed if a policy template instance is created for the current mode. For example, if a multicast related policy is created and deployed, you can’t change the mode to Ingress.

Enable Tenant Routed Multicast (TRM)

Check the check box to enable Tenant Routed Multicast (TRM) that allows overlay multicast traffic to be supported over EVPN/MVPN in the VXLAN BGP EVPN fabric.

Default MDT Address for TRM VRFs

The multicast address for Tenant Routed Multicast traffic is populated. By default, this address is from the IP prefix specified in the Multicast Group Subnet field. When you update either field, ensure that the TRM address is chosen from the IP prefix specified in Multicast Group Subnet.

For more information, see Overview of Tenant Routed Multicast.

Rendezvous-Points

Enter the number of spine switches acting as rendezvous points.

RP mode

Choose from the two supported multicast modes of replication, ASM (for Any-Source Multicast [ASM]) or BiDir (for Bidirectional PIM [BIDIR-PIM]).

When you choose ASM, the BiDir related fields aren’t enabled. When you choose BiDir, the BiDir related fields are enabled.

Note

 

BIDIR-PIM is supported on Cisco Cloud Scale family platforms (Cisco Nexus 9300-EX and 9300-FX/FX2), with software release 9.2(1) onwards.

When you create a new VRF for the fabric overlay, this address is populated in the Underlay Multicast Address field, in the Advanced tab.

Underlay RP Loopback ID

The loopback ID used for the rendezvous point (RP), for multicast protocol peering purposes in the fabric underlay.

Underlay Primary RP Loopback ID

Enabled if you choose BIDIR-PIM as the multicast mode of replication.

The primary loopback ID used for the phantom RP, for multicast protocol peering purposes in the fabric underlay.

Underlay Backup RP Loopback ID

Enabled if you choose BIDIR-PIM as the multicast mode of replication.

The secondary loopback ID used for the phantom RP, for multicast protocol peering purposes in the fabric underlay.

Underlay Second Backup RP Loopback Id

Used for the second fallback Bidir-PIM Phantom RP.

Underlay Third Backup RP Loopback Id

Used for the third fallback Bidir-PIM Phantom RP.

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

VPC

The fields in the VPC tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

vPC Peer Link VLAN

VLAN used for the vPC peer link SVI.

Make vPC Peer Link VLAN as Native VLAN

Enables vPC peer link VLAN as Native VLAN.

vPC Peer Keep Alive option

Choose the management or loopback option. If you want to use IP addresses assigned to the management port and the management VRF, choose management. If you use IP addresses assigned to loopback interfaces (and a non-management VRF), choose loopback.

If you use IPv6 addresses, you must use loopback IDs.
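For illustration only (not configuration generated verbatim by NDFC), choosing the management option results in a peer-keepalive definition of roughly the following form; the vPC domain ID and IP addresses are placeholders:

vpc domain 100
  peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management

With the loopback option, the destination and source addresses are taken from the chosen loopback interfaces and a non-management VRF is used instead.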

vPC Auto Recovery Time

Specifies the vPC auto recovery time-out period in seconds.

vPC Delay Restore Time

Specifies the vPC delay restore period in seconds.

vPC Peer Link Port Channel ID

Specifies the Port Channel ID for a vPC Peer Link. By default, the value in this field is 500.

vPC IPv6 ND Synchronize

Enables IPv6 Neighbor Discovery synchronization between vPC switches. The check box is enabled by default. Uncheck the check box to disable the function.

vPC advertise-pip

Select the check box to enable the Advertise PIP feature.

You can also enable the advertise PIP feature on a specific vPC.

Enable the same vPC Domain Id for all vPC Pairs

Enable the same vPC Domain ID for all vPC pairs. When you select this field, the vPC Domain Id field is editable.

vPC Domain Id

Specifies the vPC domain ID to be used on all vPC pairs.

vPC Domain Id Range

Specifies the vPC Domain Id range to use for new pairings.

Enable QoS for Fabric vPC-Peering

Enable QoS on spines for guaranteed delivery of vPC Fabric Peering communication.

Note

 

QoS for vPC fabric peering and queuing policies options in fabric settings are mutually exclusive.

QoS Policy Name

Specifies the QoS policy name, which should be the same on all fabric vPC peering spines. The default name is spine_qos_for_fabric_vpc_peering.

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Protocols

The fields in the Protocols tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

Underlay Routing Loopback Id

The loopback interface ID is populated as 0 since loopback0 is usually used for fabric underlay IGP peering purposes.

Underlay VTEP Loopback Id

The loopback interface ID is populated as 1 since loopback1 is used for the VTEP peering purposes.

Underlay Anycast Loopback Id

The loopback interface ID is greyed out and used for vPC Peering in VXLANv6 Fabrics only.

Underlay Routing Protocol Tag

The tag defining the type of network.

OSPF Area ID

The OSPF area ID, if OSPF is used as the IGP within the fabric.

Note

 

The OSPF or IS-IS authentication fields are enabled based on your selection in the Underlay Routing Protocol field in the General tab.

Enable OSPF Authentication

Select the check box to enable OSPF authentication. Deselect the check box to disable it. If you enable this field, the OSPF Authentication Key ID and OSPF Authentication Key fields get enabled.

OSPF Authentication Key ID

The Key ID is populated.

OSPF Authentication Key

The OSPF authentication key must be the 3DES key from the switch.

Note

 
Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key, and enter it in this field. Refer to the Retrieving the Authentication Key section for details; an illustrative example follows.
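For illustration, one way to obtain the encrypted key from a switch, mirroring the PIM hello key example later in this section; the interface, key ID, and cleartext password are placeholders, and the encrypted string shown is not a real key:

switch(config)# interface ethernet 1/32
switch(config-if)# ip ospf message-digest-key 127 md5 ospfAuthKey
switch(config-if)# show run interface ethernet 1/32 | include message-digest
  ip ospf message-digest-key 127 md5 3 3109a60f2a02d086

Enter the encrypted string that appears after md5 3 in the OSPF Authentication Key field.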

IS-IS Level

Select the IS-IS level from this drop-down list.

Enable IS-IS Network Point-to-Point

Enables network point-to-point on fabric interfaces which are numbered.

Enable IS-IS Authentication

Select the check box to enable IS-IS authentication. Deselect the check box to disable it. If you enable this field, the IS-IS authentication fields are enabled.

IS-IS Authentication Keychain Name

Enter the Keychain name, such as CiscoisisAuth.

IS-IS Authentication Key ID

The Key ID is populated.

IS-IS Authentication Key

Enter the Cisco Type 7 encrypted key.

Note

 

Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key, and enter it in this field. Refer to the Retrieving the Authentication Key section for details.

Set IS-IS Overload Bit

When enabled, sets the overload bit for an elapsed time after a reload.

IS-IS Overload Bit Elapsed Time

Allows you to clear the overload bit after an elapsed time in seconds.

Enable BGP Authentication

Select the check box to enable BGP authentication. Deselect the check box to disable it. If you enable this field, the BGP Authentication Key Encryption Type and BGP Authentication Key fields are enabled.

Note

 
If you enable BGP authentication using this field, leave the iBGP Peer-Template Config field blank to avoid duplicate configuration.

BGP Authentication Key Encryption Type

Choose 3 for the 3DES encryption type, or 7 for the Cisco type 7 encryption type.

BGP Authentication Key

Enter the encrypted key based on the encryption type.

Note

 
Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key, and enter it in the BGP Authentication Key field. Refer to the Retrieving the Authentication Key section for details.

Enable PIM Hello Authentication

Select this check box to enable PIM hello authentication on all the intra-fabric interfaces of the switches in a fabric. This check box is editable only for the Multicast replication mode. Note this check box is valid only for the IPv4 underlay.

PIM Hello Authentication Key

Specifies the PIM hello authentication key. For more information, see Retrieving PIM Hello Authentication Key.

To retrieve the PIM Hello Authentication Key, perform the following steps:

  1. SSH into the switch.

  2. On an unused switch interface, enable the following:

    switch(config)# interface e1/32 
    switch(config-if)# ip pim hello-authentication ah-md5 pimHelloPassword

    In this example, pimHelloPassword is the cleartext password that has been used.

  3. Enter the show run interface command to retrieve the PIM hello authentication key.

    switch(config-if)# show run interface e1/32 | grep pim 
    ip pim sparse-mode 
    ip pim hello-authentication ah-md5 3 d34e6c5abc7fecf1caa3b588b09078e0 

    In this example, d34e6c5abc7fecf1caa3b588b09078e0 is the PIM hello authentication key that should be specified in the fabric settings.

Enable BFD

Check the check box to enable feature bfd on all switches in the fabric. This feature is valid only on IPv4 underlay and the scope is within a fabric.

BFD within a fabric is supported natively. The BFD feature is disabled by default in the Fabric Settings. If enabled, BFD is enabled for the underlay protocols with the default settings. Any custom required BFD configurations must be deployed via the per switch freeform or per interface freeform policies.

The following config is pushed after you select the Enable BFD check box:

feature bfd

For information about BFD feature compatibility, refer your respective platform documentation and for information about the supported software images, see Compatibility Matrix for Cisco Nexus Dashboard Fabric Controller.

Enable BFD for iBGP

Check the check box to enable BFD for the iBGP neighbor. This option is disabled by default.

Enable BFD for OSPF

Check the check box to enable BFD for the OSPF underlay instance. This option is disabled by default, and it is grayed out if the link state protocol is ISIS.

Enable BFD for ISIS

Check the check box to enable BFD for the ISIS underlay instance. This option is disabled by default, and it is grayed out if the link state protocol is OSPF.

Enable BFD for PIM

Check the check box to enable BFD for PIM. This option is disabled by default, and it is grayed out if the replication mode is Ingress.

Following are examples of the BFD global policies:


router ospf <ospf tag>
   bfd

router isis <isis tag>
  address-family ipv4 unicast
    bfd

ip pim bfd

router bgp <bgp asn>
  neighbor <neighbor ip>
    bfd

Enable BFD Authentication

Check the check box to enable BFD authentication. If you enable this field, the BFD Authentication Key ID and BFD Authentication Key fields are editable.

Note

 

BFD authentication is not supported when the Fabric Interface Numbering field under the General tab is set to unnumbered. The BFD authentication fields are grayed out automatically. BFD authentication is valid only for P2P interfaces.

BFD Authentication Key ID

Specifies the BFD authentication key ID for the interface authentication. The default value is 100.

BFD Authentication Key

Specifies the BFD authentication key.

For information about how to retrieve the BFD authentication parameters, see the Retrieving the Authentication Key section.

iBGP Peer-Template Config

Add iBGP peer template configurations on the leaf switches to establish an iBGP session between the leaf switch and route reflector.

If you use BGP templates, add the authentication configuration within the template and uncheck the Enable BGP Authentication check box to avoid duplicate configuration.

In the sample configuration, the 3DES password is displayed after password 3.

router bgp 65000
    password 3 sd8478fswerdfw3434fsw4f4w34sdsd8478fswerdfw3434fsw4f4w

The following fields can be used to specify different configurations:

  • iBGP Peer-Template Config – Specifies the config used for RR and spines with border role.

  • Leaf/Border/Border Gateway iBGP Peer-Template Config – Specifies the config used for leaf, border, or border gateway. If this field is empty, the peer template defined in iBGP Peer-Template Config is used on all BGP enabled devices (RRs, leafs, border, or border gateway roles).

In a brownfield migration, if the spine and leaf use different peer template names, both the iBGP Peer-Template Config and Leaf/Border/Border Gateway iBGP Peer-Template Config fields need to be set according to the switch config. If the spine and leaf use the same peer template name and content (except for the route-reflector-client CLI), only the iBGP Peer-Template Config field in the fabric settings needs to be set. If the fabric settings on iBGP peer templates do not match the existing switch configuration, an error message is generated and the migration does not proceed.
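As an illustration only (not configuration generated by NDFC), a minimal peer template of the kind these fields reference might look like the following on a route reflector; the template name, ASN, and source interface are placeholders:

router bgp 65000
  template peer RR
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community
      send-community extended
      route-reflector-client

On leaf, border, or border gateway switches, the corresponding template typically omits the route-reflector-client command.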

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Advanced

The fields in the Advanced tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

VRF Template

Specifies the VRF template for creating VRFs.

Network Template

Specifies the network template for creating networks.

VRF Extension Template

Specifies the VRF extension template for enabling VRF extension to other fabrics.

Network Extension Template

Specifies the network extension template for extending networks to other fabrics.

Overlay Mode

VRF/Network configuration using config-profile or CLI, default is config-profile. For more information, see Overlay Mode.

Site ID

The ID for this fabric if you are moving this fabric within an MSD. The site ID is mandatory for a member fabric to be a part of an MSD. Each member fabric of an MSD has a unique site ID for identification.

Intra Fabric Interface MTU

Specifies the MTU for the intra fabric interface. This value should be an even number.

Layer 2 Host Interface MTU

Specifies the MTU for the layer 2 host interface. This value should be an even number.

Unshut Host Interfaces by Default

Check this check box to unshut the host interfaces by default.

Power Supply Mode

Choose the appropriate power supply mode.

CoPP Profile

Choose the appropriate Control Plane Policing (CoPP) profile policy for the fabric. By default, the strict option is populated.

VTEP HoldDown Time

Specifies the NVE source interface hold down time.

Brownfield Overlay Network Name Format

Enter the format to be used to build the overlay network name during a brownfield import or migration. The network name should not contain any white spaces or special characters except underscore (_) and hyphen (-). The network name must not be changed once the brownfield migration has been initiated. See the Creating Networks for the Standalone Fabric section for the naming convention of the network name. The syntax is [<string> | $$VLAN_ID$$] $$VNI$$ [<string>| $$VLAN_ID$$] and the default value is Auto_Net_VNI$$VNI$$_VLAN$$VLAN_ID$$. When you create networks, the name is generated according to the syntax you specify.

The following list describes the variables in the syntax:

  • $$VNI$$: Specifies the network VNI ID found in the switch configuration. This is a mandatory keyword required to create unique network names.

  • $$VLAN_ID$$: Specifies the VLAN ID associated with the network.

    The VLAN ID is specific to each switch; hence, Nexus Dashboard Fabric Controller randomly picks the VLAN ID from one of the switches where the network is found and uses it in the name.

    We recommend not using this variable unless the VLAN ID is consistent across the fabric for the VNI.

  • <string>: This variable is optional and you can enter any number of alphanumeric characters that meet the network name guidelines.

An example overlay network name: Site_VNI12345_VLAN1234

Note

 

Ignore this field for greenfield deployments. The Brownfield Overlay Network Name Format applies for the following brownfield imports:

  • CLI-based overlays

  • Configuration profile-based overlay

Enable CDP for Bootstrapped Switch

Enables CDP on the management (mgmt0) interface for the bootstrapped switch. By default, CDP is disabled on the mgmt0 interface for bootstrapped switches.
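At the switch level, this option amounts to the following interface configuration (shown here as a simple sketch):

interface mgmt0
  cdp enable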

Enable VXLAN OAM

Enables the VXLAN OAM functionality for devices in the fabric. This is enabled by default. Uncheck the check box to disable the VXLAN OAM function.

If you want to enable the VXLAN OAM function on specific switches and disable it on other switches in the fabric, you can use freeform configurations to enable OAM and disable OAM in the fabric settings.

Note

 

The VXLAN OAM feature in Cisco Nexus Dashboard Fabric Controller is only supported on a single fabric or site.

Enable Tenant DHCP

Check the check box to enable feature dhcp and associated configurations globally on all switches in the fabric. This is a pre-requisite for support of DHCP for overlay networks that are part of the tenant VRFs.

Note

 

Ensure that Enable Tenant DHCP is enabled before enabling DHCP-related parameters in the overlay profiles.
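A minimal sketch of the kind of global configuration that this option enables; the exact set of associated commands can vary by release and platform:

feature dhcp
service dhcp
ip dhcp relay
ipv6 dhcp relay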

Enable NX-API

Specifies enabling of NX-API on HTTPS. This check box is checked by default.

Enable NX-API on HTTP Port

Specifies enabling of NX-API on HTTP. Check this check box along with the Enable NX-API check box to use HTTP. This check box is checked by default. If you uncheck this check box, the applications that use NX-API and are supported by Cisco Nexus Dashboard Fabric Controller, such as Endpoint Locator (EPL), Layer 4-Layer 7 services (L4-L7 services), and VXLAN OAM, start using HTTPS instead of HTTP.

Note

 

If you check the Enable NX-API check box and the Enable NX-API on HTTP check box, applications use HTTP.
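As a hedged sketch, enabling both check boxes corresponds to switch configuration of roughly the following form; the port numbers shown are the NX-OS defaults and may differ in your deployment:

feature nxapi
nxapi http port 80
nxapi https port 443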

Enable Policy-Based Routing (PBR)

Check this check box to enable routing of packets based on the specified policy. This feature is supported on Cisco Nexus 9000 Series switches with Nexus 9000 Cloud Scale (Tahoe) ASICs running Cisco NX-OS Release 7.0(3)I7(1) and later releases. This feature is used along with the Layer 4-Layer 7 service workflow. For information on Layer 4-Layer 7 services, refer to the Layer 4-Layer 7 Service chapter.

Enable Strict Config Compliance

Enable the Strict Config Compliance feature by selecting this check box. It enables bi-directional compliance checks to flag additional configs in the running config that are not in the intent/expected config. By default, this feature is disabled.

Enable AAA IP Authorization

Enables AAA IP authorization when IP authorization is enabled on the remote authentication server. This is required to support Nexus Dashboard Fabric Controller in scenarios where customers have strict control of which IP addresses can access the switches.

Enable NDFC as Trap Host

Select this check box to enable Nexus Dashboard Fabric Controller as an SNMP trap destination. Typically, for a native HA Nexus Dashboard Fabric Controller deployment, the eth1 VIP IP address will be configured as SNMP trap destination on the switches. By default, this check box is enabled.

Anycast Border Gateway advertise-pip

Enables advertising the Anycast Border Gateway PIP as the VTEP. This setting takes effect when you perform Recalculate Config on the MSD fabric.

Greenfield Cleanup Option

Enable the switch cleanup option for switches imported into Nexus Dashboard Fabric Controller with Preserve-Config=No, without a switch reload. This option is typically recommended only for fabric environments with Cisco Nexus 9000v switches, to improve the switch cleanup time. The recommended option for greenfield deployments is to employ bootstrap or switch cleanup with a reboot; in other words, this option should be unchecked.

Enable Precision Time Protocol (PTP)

Enables PTP across a fabric. When you check this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Source Loopback Id and PTP Domain Id fields are editable. For more information, see Precision Time Protocol for Easy Fabric.

PTP Source Loopback Id

Specifies the loopback interface ID that is used as the source IP address for all PTP packets. The valid values range from 0 to 1023. The PTP loopback ID cannot be the same as the RP, phantom RP, NVE, or MPLS loopback ID. Otherwise, an error is generated. The PTP loopback ID can be the same as the BGP loopback or a user-defined loopback that is created from Nexus Dashboard Fabric Controller.

If the PTP loopback ID is not found during Deploy Config, the following error is generated:

Loopback interface to use for PTP source IP is not found. Create PTP loopback interface on all the devices to enable PTP feature.

PTP Domain Id

Specifies the PTP domain ID on a single network. The valid values range from 0 to 127.
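For illustration, a minimal sketch of the PTP configuration implied by these settings; the source address, domain ID, and core-facing interface are placeholders:

feature ptp
ptp source 10.33.0.1
ptp domain 1

interface ethernet 1/49
  ptp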

Enable MPLS Handoff

Check the check box to enable the MPLS Handoff feature. For more information, see the MPLS SR and LDP Handoff chapter in External/WAN Layer 3 Connectivity for VXLAN BGP EVPN Fabrics.

Underlay MPLS Loopback Id

Specifies the underlay MPLS loopback ID. The default value is 101.

Enable TCAM Allocation

TCAM commands are automatically generated for VXLAN and vPC Fabric Peering when enabled.
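The generated commands are of the following general form (a hedged illustration only; the actual region names and sizes depend on the platform and the features enabled):

hardware access-list tcam region ing-racl 1792
hardware access-list tcam region ing-flow-redirect 512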

Enable Default Queuing Policies

Check this check box to apply QoS policies on all the switches in this fabric. To remove the QoS policies that you applied on all the switches, uncheck this check box, update all the configurations to remove the references to the policies, and save and deploy. Pre-defined QoS configurations are included that can be used for various Cisco Nexus 9000 Series Switches. When you check this check box, the appropriate QoS configurations are pushed to the switches in the fabric. The system queuing is updated when configurations are deployed to the switches. You can perform the interface marking with defined queuing policies, if required, by adding the required configuration to the per interface freeform block.

Review the actual queuing policies by opening the policy file in the template editor. From Cisco Nexus Dashboard Fabric Controller Web UI, choose Operations > Templates. Search for the queuing policies by the policy file name, for example, queuing_policy_default_8q_cloudscale. Choose the file. From the Actions drop-down list, select Edit template content to edit the policy.

See the Cisco Nexus 9000 Series NX-OS Quality of Service Configuration Guide for platform specific details.

N9K Cloud Scale Platform Queuing Policy

Choose the queuing policy from the drop-down list to be applied to all Cisco Nexus 9200 Series Switches and the Cisco Nexus 9000 Series Switches that end with EX, FX, and FX2 in the fabric. The valid values are queuing_policy_default_4q_cloudscale and queuing_policy_default_8q_cloudscale. Use the queuing_policy_default_4q_cloudscale policy for FEXes. You can change from the queuing_policy_default_4q_cloudscale policy to the queuing_policy_default_8q_cloudscale policy only when FEXes are offline.

N9K R-Series Platform Queuing Policy

Choose the queuing policy from the drop-down list to be applied to all Cisco Nexus switches that end with R in the fabric. The valid value is queuing_policy_default_r_series.

Other N9K Platform Queuing Policy

Choose the queuing policy from the drop-down list to be applied to all other switches in the fabric other than the switches mentioned in the above two options. The valid value is queuing_policy_default_other.

Enable MACsec

Enables MACsec for the fabric. For more information, see Enabling MACsec.

Freeform CLIs - Fabric level freeform CLIs can be added while creating or editing a fabric. They are applicable to switches across the fabric. You must add the configurations as displayed in the running configuration, without indentation. Switch level freeform configurations should be added via the switch freeform on NDFC. For more information, see Enabling Freeform Configurations on Fabric Switches.
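For example (illustrative commands only), fabric-level freeform content is entered exactly as it appears in the running configuration:

feature bash-shell
clock timezone PST -8 0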

Leaf Freeform Config

Add CLIs that should be added to switches that have the Leaf, Border, and Border Gateway roles.

Spine Freeform Config

Add CLIs that should be added to switches with the Spine, Border Spine, Border Gateway Spine, and Super Spine roles.

Intra-fabric Links Additional Config

Add CLIs that should be added to the intra-fabric links.

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Resources

The fields in the Resources tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

Manual Underlay IP Address Allocation

Do not check this check box if you are transitioning your VXLAN fabric management to Nexus Dashboard Fabric Controller.

  • By default, Nexus Dashboard Fabric Controller allocates the underlay IP address resources (for loopbacks, fabric interfaces, etc) dynamically from the defined pools. If you check the check box, the allocation scheme switches to static, and some of the dynamic IP address range fields are disabled.

  • For static allocation, the underlay IP address resources must be populated into the Resource Manager (RM) using REST APIs.

  • The Underlay RP Loopback IP Range field stays enabled if BIDIR-PIM function is chosen for multicast replication.

  • Changing from static to dynamic allocation keeps the current IP resource usage intact. Only future IP address allocation requests are taken from dynamic pools.

Underlay Routing Loopback IP Range

Specifies loopback IP addresses for the protocol peering.

Underlay VTEP Loopback IP Range

Specifies loopback IP addresses for VTEPs.

Underlay RP Loopback IP Range

Specifies the anycast or phantom RP IP address range.

Underlay Subnet IP Range

IP addresses for underlay P2P routing traffic between interfaces.

Underlay MPLS Loopback IP Range

Specifies the underlay MPLS loopback IP address range.

For eBGP between the border devices of Easy Fabric A and Easy Fabric B, the Underlay Routing Loopback and Underlay MPLS Loopback IP ranges must be unique. They should not overlap with the IP ranges of the other fabrics; otherwise, VPNv4 peering will not come up.

Underlay Routing Loopback IPv6 Range

Specifies Loopback0 IPv6 Address Range

Underlay VTEP Loopback IPv6 Range

Specifies Loopback1 and Anycast Loopback IPv6 Address Range.

Underlay Subnet IPv6 Range

Specifies IPv6 Address range to assign Numbered and Peer Link SVI IPs.

BGP Router ID Range for IPv6 Underlay

Specifies BGP router ID range for IPv6 underlay.

Layer 2 VXLAN VNI Range

Specifies the overlay VXLAN VNI range for the fabric (min:1, max:16777214).

Layer 3 VXLAN VNI Range

Specifies the overlay VRF VNI range for the fabric (min:1, max:16777214).

Network VLAN Range

VLAN range for the per switch overlay network (min:2, max:4094).

VRF VLAN Range

VLAN range for the per switch overlay Layer 3 VRF (min:2, max:4094).

Subinterface Dot1q Range

Specifies the subinterface range when L3 sub interfaces are used.

VRF Lite Deployment

Specify the VRF Lite method for extending inter fabric connections.

The VRF Lite Subnet IP Range field specifies the resources reserved for IP addresses used for VRF Lite when VRF Lite IFCs are auto-created. If you select Back2Back&ToExternal, VRF Lite IFCs are auto-created.

Auto Deploy Both

This check box is applicable for symmetric VRF Lite deployment. When you select this check box, the auto-deploy flag is set to true for auto-created IFCs to turn on symmetric VRF Lite configuration.

You can check or uncheck the check box when the VRF Lite Deployment field is not set to Manual. This configuration affects only the new auto-created IFCs and does not affect the existing IFCs. You can edit an auto-created IFC and check or uncheck the Auto Generate Configuration for Peer field. That setting always takes priority.

VRF Lite Subnet IP Range and VRF Lite Subnet Mask

These fields are populated with the DCI subnet details. Update the fields as needed.

The values shown in your screen are automatically generated. If you want to update the IP address ranges, VXLAN Layer 2/Layer 3 network ID ranges or the VRF/Network VLAN ranges, ensure the following:

Note

 

When you update a range of values, ensure that it does not overlap with other ranges. You should only update one range of values at a time. If you want to update more than one range of values, do it in separate instances. For example, if you want to update L2 and L3 ranges, you should do the following.

  1. Update the L2 range and click Save.

  2. Click the Edit Fabric option again, update the L3 range and click Save.

Service Network VLAN Range

Specifies a VLAN range in the Service Network VLAN Range field. This is a per switch overlay service network VLAN range. The minimum allowed value is 2 and the maximum allowed value is 3967.

Route Map Sequence Number Range

Specifies the route map sequence number range. The minimum allowed value is 1 and the maximum allowed value is 65534.

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Manageability

The fields in the Manageability tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

Inband Management

Enabling this allows the management of the switches over their front-panel interfaces. The Underlay Routing Loopback interface is used for discovery. If enabled, switches cannot be added to the fabric over their out-of-band (OOB) mgmt0 interface. To manage Easy Fabrics through inband management, ensure that you have chosen Data in the NDFC Web UI under Settings > Server Settings > Admin. Both inband management and out-of-band connectivity (mgmt0) are supported for this setting. For more information, see Inband Management and Inband POAP in Easy Fabrics.

DNS Server IPs

Specifies the comma separated list of IP addresses (v4/v6) of the DNS servers.

DNS Server VRFs

Specifies one VRF for all DNS servers or a comma separated list of VRFs, one per DNS server.

NTP Server IPs

Specifies comma separated list of IP addresses (v4/v6) of the NTP server.

NTP Server VRFs

Specifies one VRF for all NTP servers or a comma separated list of VRFs, one per NTP server.

Syslog Server IPs

Specifies the comma-separated list of IP addresses (v4/v6) of the syslog servers, if used.

Syslog Server Severity

Specifies the comma separated list of syslog severity values, one per syslog server. The minimum value is 0 and the maximum value is 7. To specify a higher severity, enter a higher number.

Syslog Server VRFs

Specifies one VRF for all syslog servers or a comma separated list of VRFs, one per syslog server.
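As a hedged sketch, the DNS, NTP, and syslog entries above typically translate into per-server configuration of the following form on each switch; the addresses, severity, and VRF are placeholders:

ip name-server 10.64.1.53 use-vrf management
ntp server 10.64.1.10 use-vrf management
logging server 10.64.1.20 6 use-vrf management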

AAA Freeform Config

Specifies the AAA freeform configurations.

If AAA configurations are specified in the fabric settings, a switch_freeform PTI with the source as UNDERLAY_AAA and the description as AAA Configurations will be created.

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Bootstrap

The fields in the Bootstrap tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

Enable Bootstrap

Select this check box to enable the bootstrap feature. Bootstrap allows easy day-0 import and bring-up of new devices into an existing fabric. Bootstrap leverages the NX-OS POAP functionality.

Starting from Cisco NDFC Release 12.1.1e, to add more switches and for POAP capability, check the Enable Bootstrap and Enable Local DHCP Server check boxes. For more information, see Inband Management and Inband POAP in Easy Fabrics.

After you enable bootstrap, you can enable the DHCP server for automatic IP address assignment using one of the following methods:

  • External DHCP Server: Enter information about the external DHCP server in the Switch Mgmt Default Gateway and Switch Mgmt IP Subnet Prefix fields.

  • Local DHCP Server: Enable the Local DHCP Server check box and enter details for the remaining mandatory fields.

Enable Local DHCP Server

Select this check box to enable automatic IP address assignment through the local DHCP server. When you select this check box, the DHCP Scope Start Address and DHCP Scope End Address fields become editable.

If you do not select this check box, Nexus Dashboard Fabric Controller uses the remote or external DHCP server for automatic IP address assignment.

DHCP Version

Select DHCPv4 or DHCPv6 from this drop-down list. When you select DHCPv4, the Switch Mgmt IPv6 Subnet Prefix field is disabled. If you select DHCPv6, the Switch Mgmt IP Subnet Prefix is disabled.

Note

 

Cisco Nexus 9000 and 3000 Series Switches support IPv6 POAP only when switches are either Layer-2 adjacent (eth1 or out-of-band subnet must be a /64) or they are L3 adjacent residing in some IPv6 /64 subnet. Subnet prefixes other than /64 are not supported.

DHCP Scope Start Address and DHCP Scope End Address

Specifies the first and last IP addresses of the IP address range to be used for the switch out of band POAP.

Switch Mgmt Default Gateway

Specifies the default gateway for the management VRF on the switch.

Switch Mgmt IP Subnet Prefix

Specifies the prefix for the Mgmt0 interface on the switch. The prefix should be between 8 and 30.

DHCP scope and management default gateway IP address specification - If you specify the management default gateway IP address 10.0.1.1 and subnet mask 24, ensure that the DHCP scope is within the specified subnet, between 10.0.1.2 and 10.0.1.254.

Switch Mgmt IPv6 Subnet Prefix

Specifies the IPv6 prefix for the Mgmt0 interface on the switch. The prefix should be between 112 and 126. This field is editable if you enable IPv6 for DHCP.

Enable AAA Config

Select this check box to include AAA configurations from the Manageability tab as part of the device start-up config post bootstrap.

DHCPv4/DHCPv6 Multi Subnet Scope

Enter one subnet scope per line in this field. This field is editable after you check the Enable Local DHCP Server check box.

The format of the scope should be defined as:

DHCP Scope Start Address, DHCP Scope End Address, Switch Management Default Gateway, Switch Management Subnet Prefix

For example: 10.6.0.2, 10.6.0.9, 10.6.0.1, 24

Bootstrap Freeform Config

(Optional) Enter additional commands as needed. If you require additional configurations to be pushed to the device and to be available after the device bootstrap, capture them in this field to save the desired intent. After the devices boot up, they will contain the configuration defined in the Bootstrap Freeform Config field.

Copy-paste the running-config to a freeform config field with correct indentation, as seen in the running configuration on the NX-OS switches. The freeform config must match the running config. For more information, see Enabling Freeform Configurations on Fabric Switches.
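
For example, a hedged sketch of the kind of commands that could be placed in the Bootstrap Freeform Config field (hypothetical values). Use the same syntax and indentation that the commands have in the NX-OS running configuration:

ip domain-name example.com
logging logfile messages 6
line vty
  exec-timeout 30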

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Configuration Backup

The fields in the Configuration Backup tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

Hourly Fabric Backup

Select the check box to enable an hourly backup of fabric configurations and the intent.

The hourly backups are triggered during the first 10 minutes of the hour.

Scheduled Fabric Backup

Check the check box to enable a daily backup. This backup tracks changes in running configurations on the fabric devices that are not tracked by configuration compliance.

Scheduled Time

Specify the scheduled backup time in a 24-hour format. This field is enabled if you check the Scheduled Fabric Backup check box.

Select both check boxes to enable both backup processes.

The backup process is initiated after you click Save.

The scheduled backups are triggered at the specified time, with a delay of up to two minutes. The scheduled backups are triggered regardless of the configuration deployment status.

The number of fabric backups that will be retained on NDFC is determined by Settings > Server Settings > LAN Fabric > Maximum Backups per Fabric.

The number of archived files that can be retained is set in the # Number of archived files per device to be retained: field in the Server Properties window.

Note

 

To trigger an immediate backup, do the following:

  1. Choose LAN > Topology.

  2. Click within the specific fabric box. The fabric topology screen comes up.

  3. From the Actions pane at the left part of the screen, click Re-Sync Fabric.

You can also initiate the fabric backup in the fabric topology window. Click Backup Now in the Actions pane.

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Flow Monitor

The fields in the Flow Monitor tab are described in the following table. Most of the fields are automatically generated based on Cisco-recommended best practice configurations, but you can update the fields if needed.

Field

Description

Enable Netflow

Check this check box to enable NetFlow on VTEPs for this fabric. By default, NetFlow is disabled. When enabled, the NetFlow configuration is applied to all VTEPs that support NetFlow.

Note

 

When Netflow is enabled on the fabric, you can choose not to have netflow on a particular switch by having a dummy no_netflow PTI.

If netflow is not enabled at the fabric level, an error message is generated when you enable netflow at the interface, network, or vrf level. For information about Netflow support for Cisco NDFC, refer to Netflow Support.

In the Netflow Exporter area, click Actions > Add to add one or more Netflow exporters. This exporter is the receiver of the netflow data. The fields on this screen are:

  • Exporter Name – Specifies the name of the exporter.

  • IP – Specifies the IP address of the exporter.

  • VRF – Specifies the VRF over which the exporter is routed.

  • Source Interface – Enter the source interface name.

  • UDP Port – Specifies the UDP port over which the netflow data is exported.

Click Save to configure the exporter. Click Cancel to discard. You can also choose an existing exporter and select Actions > Edit or Actions > Delete to perform relevant actions.

In the Netflow Record area, click Actions > Add to add one or more Netflow records. The fields on this screen are:

  • Record Name – Specifies the name of the record.

  • Record Template – Specifies the template for the record. Enter one of the record template names. In Release 12.0.2, the following two record templates are available for use. You can create custom NetFlow record templates. Custom record templates saved in the template library are available for use here.

    • netflow_ipv4_record – to use the IPv4 record template.

    • netflow_l2_record – to use the Layer 2 record template.

  • Is Layer2 Record – Check this check box if the record is for Layer2 netflow.

Click Save to configure the record. Click Cancel to discard. You can also choose an existing record and select Actions > Edit or Actions > Delete to perform relevant actions.

In the Netflow Monitor area, click Actions > Add to add one or more Netflow monitors. The fields on this screen are:

  • Monitor Name – Specifies the name of the monitor.

  • Record Name – Specifies the name of the record for the monitor.

  • Exporter1 Name – Specifies the name of the exporter for the netflow monitor.

  • Exporter2 Name – (optional) Specifies the name of the secondary exporter for the netflow monitor.

The record name and exporters referred to in each netflow monitor must be defined in "Netflow Record" and "Netflow Exporter".

Click Save to configure the monitor. Click Cancel to discard. You can also choose an existing monitor and select Actions > Edit or Actions > Delete to perform relevant actions.
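
For reference, the exporter, record, and monitor definitions above typically translate into NX-OS NetFlow configuration along the following lines. This is a hedged sketch with hypothetical names, addresses, and UDP port; the actual CLI is rendered by NDFC from the chosen record template.

feature netflow

flow exporter EXP-TELEMETRY
  destination 192.168.10.100 use-vrf management
  source mgmt0
  transport udp 9995

flow record ipv4-record -> based on the netflow_ipv4_record template
  match ipv4 source address
  match ipv4 destination address
  match ip protocol
  collect counter bytes
  collect counter packets

flow monitor MON-FABRIC
  record ipv4-record
  exporter EXP-TELEMETRY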

What's next: Complete the configurations in another tab if necessary, or click Save when you have completed the necessary configurations for this fabric.

Configuring Fabrics with eBGP Underlay

You can use the Easy_Fabric_eBGP fabric template to create a fabric with eBGP underlay. For more information, see Configuring a Fabric with eBGP Underlay.

IPv6 Underlay Support for Easy Fabric

You can create an Easy fabric with IPv6 only underlay. The IPv6 underlay is supported only for the Easy_Fabric template. For more information, see Configuring a VXLANv6 Fabric.

Overview of Tenant Routed Multicast

Tenant Routed Multicast (TRM) enables multicast forwarding on the VXLAN fabric that uses a BGP-based EVPN control plane. TRM provides multi-tenancy aware multicast forwarding between senders and receivers within the same or different subnets, local to or across VTEPs.

With TRM enabled, multicast forwarding in the underlay is leveraged to replicate VXLAN encapsulated routed multicast traffic. A Default Multicast Distribution Tree (Default-MDT) is built per VRF. This is in addition to the existing multicast groups for Layer-2 VNI broadcast, unknown unicast, and Layer-2 multicast replication. The individual multicast group addresses in the overlay are mapped to the respective underlay multicast addresses for replication and transport. The advantage of using a BGP-based approach is that the VXLAN BGP EVPN fabric with TRM can operate as a fully distributed Overlay Rendezvous Point (RP), with RP presence on every edge device (VTEP).

A multicast-enabled data center fabric is typically part of an overall multicast network. Multicast sources, receivers, and multicast rendezvous points might reside inside the data center but also might be inside the campus or externally reachable via the WAN. TRM allows a seamless integration with existing multicast networks. It can leverage multicast rendezvous points external to the fabric. Furthermore, TRM allows for tenant-aware external connectivity using Layer-3 physical interfaces or subinterfaces.

For more information, see the following:

Overview of Tenant Routed Multicast with VXLAN EVPN Multi-Site

Tenant Routed Multicast with Multi-Site enables multicast forwarding across multiple VXLAN EVPN fabrics connected via Multi-Site.

The following two use cases are supported:

  • Use Case 1: TRM provides Layer 2 and Layer 3 multicast services across sites for sources and receivers across different sites.

  • Use Case 2: Extending TRM functionality from the VXLAN fabric to sources and receivers external to the fabric.

TRM Multi-Site is an extension of the BGP-based TRM solution that enables multiple TRM sites, each with multiple VTEPs, to connect to each other and provide multicast services across sites in the most efficient way possible. Each TRM site operates independently, and the border gateway on each site allows stitching across sites. There can be multiple Border Gateways for each site. In a given site, the BGW peers with the Route Server or the BGWs of other sites to exchange EVPN and MVPN routes. On the BGW, BGP imports routes into the local VRF/L3VNI/L2VNI and then advertises those imported routes into the fabric or the WAN, depending on where the routes were learned from.

Tenant Routed Multicast with VXLAN EVPN Multi-Site Operations

The operations for TRM with VXLAN EVPN Multi-Site are as follows:

  • Each Site is represented by Anycast VTEP BGWs. DF election across BGWs ensures no packet duplication.

  • Traffic between Border Gateways uses ingress replication mechanism. Traffic is encapsulated with VXLAN header followed by IP header.

  • Each Site will only receive one copy of the packet.

  • Multicast source and receiver information across sites is propagated by BGP protocol on the Border Gateways configured with TRM.

  • The BGW on each site receives the multicast packet and re-encapsulates the packet before sending it to the local site.

For information about guidelines and limitations for TRM with VXLAN EVPN Multi-Site, see Configuring Tenant Routed Multicast.

Configuring TRM for Single Site Using Cisco Nexus Dashboard Fabric Controller

This section assumes that a VXLAN EVPN fabric has already been provisioned using Cisco Nexus Dashboard Fabric Controller.

Procedure

Step 1

Enable TRM for the selected Easy Fabric. If the fabric template is Easy_Fabric, from the Fabric Overview Actions drop-down, choose the Edit Fabric option. Click the Replication tab. The fields on this tab are:

Enable Tenant Routed Multicast (TRM): Select the check box to enable Tenant Routed Multicast (TRM) that allows overlay multicast traffic to be supported over EVPN/MVPN in the VXLAN BGP EVPN fabric.

Default MDT Address for TRM VRFs: When you select the Enable Tenant Routed Multicast (TRM) check box, the multicast address for Tenant Routed Multicast traffic is auto populated. By default, this address is from the IP prefix specified in the Multicast Group Subnet field. When you update either field, ensure that the TRM address is chosen from the IP prefix specified in Multicast Group Subnet.

Click Save to save the fabric settings. At this point, all the switches turn blue as they are in the pending state. From the Fabric Overview Actions drop-down list, choose Recalculate Config and then choose Deploy Config to enable the following:

  • Enable feature ngmvpn: Enables the Next-Generation Multicast VPN (ngMVPN) control plane for BGP peering.

  • Configure ip multicast multipath s-g-hash next-hop-based: Multipath hashing algorithm for the TRM enabled VRFs.

  • Configure ip igmp snooping vxlan: Enables IGMP Snooping for VXLAN VLANs.

  • Configure ip multicast overlay-spt-only: Enables the MVPN Route-Type 5 on all MVPN-enabled Cisco Nexus 9000 switches.

  • Configure and Establish MVPN BGP AFI Peering: This is necessary for the peering between the BGP RR and the leaf switches.

For a VXLAN EVPN fabric created using the Easy_Fabric_eBGP fabric template, the Enable Tenant Routed Multicast (TRM) and Default MDT Address for TRM VRFs fields are on the EVPN tab.
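
For reference, a hedged sketch of the kind of fabric-level configuration that these actions push to the switches (hypothetical BGP AS number and route-reflector address); the exact CLI is generated by NDFC during Deploy Config:

feature ngmvpn
ip igmp snooping vxlan
ip multicast overlay-spt-only

router bgp 65001
  neighbor 10.2.0.1 -> BGP route reflector
    address-family ipv4 mvpn
      send-community
      send-community extended

The ip multicast multipath s-g-hash next-hop-based command is rendered for the TRM-enabled VRFs, as shown in the per-VRF sketch in Step 2.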

Step 2

Enable TRM for the VRF.

Navigate to Fabric Overview > VRFs > VRFs and edit the selected VRF. Navigate to the Advanced tab and edit the following TRM settings:

TRM Enable – Select the check box to enable TRM. If you enable TRM, then the RP address and the underlay multicast address must be entered.

Is RP External – Enable this check box if the RP is external to the fabric. If this field is unchecked, RP is distributed in every VTEP.

Note

 

If the RP is external, select this option. When the RP is external, the RP Loopback ID field is greyed out.

RP Address – Specifies the IP address of the RP.

RP Loopback ID – Specifies the loopback ID of the RP, if Is RP External is not enabled.

Underlay Mcast Address – Specifies the multicast address associated with the VRF. The multicast address is used for transporting multicast traffic in the fabric underlay.

Overlay Mcast Groups – Specifies the multicast group subnet for the specified RP. The value is used as the group range in the “ip pim rp-address” command. If the field is empty, 224.0.0.0/24 is used as the default.

Click Save to save the settings. The switches go into the pending state, that is, blue color. These settings enable the following:

  • Enable PIM on L3VNI SVI.

  • Route-Target Import and Export for MVPN AFI.

  • RP and other multicast configuration for the VRF.

  • Loopback interface using the above RP address and RP loopback id for the distributed RP.
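
For reference, a hedged sketch of the per-VRF configuration that these settings typically render (hypothetical VRF name, VLAN ID, VNI, loopback ID, and addresses); the exact CLI is generated by NDFC:

vrf context myvrf_50001
  ip pim rp-address 10.254.254.1 group-list 224.0.0.0/24
  ip multicast multipath s-g-hash next-hop-based
  address-family ipv4 unicast
    route-target both auto mvpn

interface Vlan2001 -> L3VNI SVI for the VRF
  vrf member myvrf_50001
  ip forward
  ip pim sparse-mode

interface loopback254 -> distributed RP loopback when Is RP External is not enabled
  vrf member myvrf_50001
  ip address 10.254.254.1/32
  ip pim sparse-mode

interface nve1
  member vni 50001 associate-vrf
    mcast-group 239.1.1.100 -> Underlay Mcast Address for the VRF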

Step 3

Enable TRM for the network.

Navigate to Fabric Overview > Networks > Networks. Edit the selected network and navigate to the Advanced tab. Edit the following TRM setting:

TRM Enable – Select the check box to enable TRM.

Click Save to save the settings. The switches go into the pending state, that is, the blue color. The TRM settings enable the following:

  • Enable PIM on the L2VNI SVI.

  • Create a PIM policy none to avoid PIM neighborship with PIM routers within a VLAN. The none keyword refers to a configured route map that denies any IPv4 addresses, which avoids establishing PIM neighborship using the anycast IP.


Configuring TRM for Multi-Site Using Cisco Nexus Dashboard Fabric Controller

This section assumes that a Multi-Site Domain (MSD) has already been deployed by Cisco Nexus Dashboard Fabric Controller and TRM needs to be enabled.

Procedure

Step 1

Enable TRM on the BGWs.

Navigate to Fabric Overview > VRFs > VRFs. Make sure that the right DC Fabric is selected under the Scope and edit the VRF. Navigate to the Advanced tab. Edit the TRM settings. Repeat this process for every DC Fabric and its VRFs.

TRM Enable – Select the check box to enable TRM. If you enable TRM, then the RP address and the underlay multicast address must be entered.

Is RP External – Enable this check box if the RP is external to the fabric. If this field is unchecked, RP is distributed in every VTEP.

Note

 

If the RP is external, select this option. When the RP is external, the RP Loopback ID field is greyed out.

RP Address – Specifies the IP address of the RP.

RP Loopback ID – Specifies the loopback ID of the RP, if Is RP External is not enabled.

Underlay Mcast Address – Specifies the multicast address associated with the VRF. The multicast address is used for transporting multicast traffic in the fabric underlay.

Overlay Mcast Groups – Specifies the multicast group subnet for the specified RP. The value is used as the group range in the “ip pim rp-address” command. If the field is empty, 224.0.0.0/24 is used as the default.

Enable TRM BGW MSite - Select the check box to enable TRM on Border Gateway Multi-Site.

Click Save to save the settings. The switches go into the pending state, that is, blue color. These settings enable the following:

  • Enable feature ngmvpn: Enables the Next-Generation Multicast VPN (ngMVPN) control plane for BGP peering.

  • Enables PIM on L3VNI SVI.

  • Configures L3VNI Multicast Address.

  • Route-Target Import and Export for MVPN AFI.

  • RP and other multicast configuration for the VRF.

  • Loopback interface for the distributed RP.

  • Enable the Multi-Site BUM ingress replication method for extending the Layer 2 VNI.

Step 2

Establish MVPN AFI between the BGWs.

Double-click the MSD fabric to open the Fabric Overview window. Choose Links. Filter it by the policy - Overlays.

Select and edit each overlay peering to enable TRM by checking the Enable TRM check box.

Click Save to save the settings. The switches go into the pending state, that is, the blue color. The TRM settings enable the MVPN peerings between the BGWs, or between the BGWs and the Route Server.


vPC Fabric Peering

vPC Fabric Peering provides an enhanced dual-homing access solution without the overhead of wasting physical ports for vPC Peer Link. This feature preserves all the characteristics of a traditional vPC. For more information, see Information about vPC Fabric Peering section in Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide.

You can create a virtual peer link for two switches or change the existing physical peer link to a virtual peer link. Cisco NDFC supports vPC fabric peering in both greenfield and brownfield deployments. This feature is applicable for the Easy_Fabric and Easy_Fabric_eBGP fabric templates.


Note


The Easy_Fabric_eBGP fabric does not support brownfield import.


Guidelines and Limitations

The following are the guidelines and limitations for vPC fabric peering.

  • vPC fabric peering is supported from Cisco NX-OS Release 9.2(3).

  • Only the Cisco Nexus N9K-C9332C, N9K-C9364C, and N9K-C9348GC-FXP switches, as well as the Cisco Nexus 9000 Series Switches whose model numbers end with FX or FX2, support vPC fabric peering.

  • Cisco Nexus N9K-C93180YC-FX3S and N9K-C93108TC-FX3P platform switches support vPC fabric peering.

  • Cisco Nexus 9300-EX, and 9300-FX/FXP/FX2/FX3/GX/GX2 platform switches support vPC Fabric Peering. Cisco Nexus 9200 and 9500 platform switches do not support vPC Fabric Peering. For more information, see Guidelines and Limitations for vPC Fabric Peering section in Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide.

  • If you use other Cisco Nexus 9000 Series Switches, a warning will appear during Recalculate & Deploy. A warning appears in this case because these switches will be supported in future releases.

  • If you try pairing switches that do not support vPC fabric peering, using the Use Virtual Peerlink option, a warning will appear when you deploy the fabric.

  • You can convert a physical peer link to a virtual peer link and vice-versa with or without overlays.

  • Switches with border gateway leaf roles do not support vPC fabric peering.

  • vPC fabric peering is not supported for Cisco Nexus 9000 Series Modular Chassis and FEXs. An error appears during Recalculate & Deploy if you try to pair any of these.

  • Brownfield deployments and greenfield deployments support vPC fabric peering in Cisco NDFC.

  • However, you can import switches that are connected using physical peer links and convert the physical peer links to virtual peer links after Recalculate & Deploy. To update the TCAM region during the feature configuration, use the hardware access-list tcam region ing-flow-redirect 512 command in the configuration terminal.

QoS for Fabric vPC-Peering

In the Easy_Fabric fabric settings, you can enable QoS on spines for guaranteed delivery of vPC Fabric Peering communication. Additionally, you can specify the QoS policy name.

Note the following guidelines for a greenfield deployment:

  • If QoS is enabled and the fabric is newly created:

    • If the neighbor of a spine or super spine is a virtual vPC pair, make sure the neighbor is not honored from invalid links, for example, super spine to leaf, or border to spine when a super spine is present.

    • Based on the Cisco Nexus 9000 Series Switch model, create the recommended global QoS config using the switch_freeform policy template.

    • Enable QoS on fabric links from spine to the correct neighbor.

  • If the QoS policy name is edited, make sure the policy name change is honored everywhere, that is, globally and on the links.

  • If QoS is disabled, delete all configuration related to QoS fabric vPC peering.

  • If there is no change, then honor the existing PTI.

For more information about a greenfield deployment, see Creating a VXLAN EVPN Fabric Using the Easy_Fabric Template.

Note the following guidelines for a brownfield deployment:

Brownfield Scenario 1:

  • If QoS is enabled and the policy name is specified:


    Note


    Enable this only when the policy name for the global QoS and the neighbor link service policy is the same for all the spines connected through fabric vPC peering.


    • Capture the QoS configuration from the switch based on the policy name, filter it from the unaccounted configuration based on the policy name, and put the configuration in the switch_freeform policy with the PTI description.

    • Create service policy configuration for the fabric interfaces as well.

    • Greenfield configuration should make sure to honor the brownfield configuration.

  • If the QoS policy name is edited, delete the existing policies and brownfield extra configuration as well, and follow the greenfield flow with the recommended configuration.

  • If QoS is disabled, delete all the configuration related to QoS fabric vPC peering.


    Note


    There is no cross-check for possible errors or mismatches in the user configuration, and you might see a diff.


Brownfield Scenario 2:

  • If QoS is enabled and the policy name is not specified, QoS configuration is part of the unaccounted switch freeform config.

  • If QoS is enabled from fabric settings after Recalculate & Deploy for brownfield, QoS configuration overlaps and you will see the diff if fabric vPC peering config is already present.

For more information about a brownfield deployment, see Creating a VXLAN EVPN Fabric Using the Easy_Fabric Template.

Fields and Description

To view the vPC pairing window of a switch, from the fabric topology window, right-click the switch and choose vPC Pairing. The vPC pairing window for a switch has the following fields:

Field

Description

Use Virtual Peerlink

Allows you to enable or disable the virtual peer linking between switches.

Switch name

Specifies all the peer switches in a fabric.

Note

 

When you have not paired any peer switches, you can see all the switches in a fabric. After you pair a peer switch, you can see only the peer switch in the vPC pairing window.

Recommended

Specifies if the peer switch can be paired with the selected switch. Valid values are true and false. Recommended peer switches will be set to true.

Reason

Specifies why the vPC pairing between the selected switch and the peer switches is possible or not possible.

Serial Number

Specifies the serial number of the peer switches.

You can perform the following with the vPC Pairing option:

Creating a Virtual Peer Link

To create a virtual peer link from the Cisco NDFC Web UI, perform the following steps:
Procedure

Step 1

Choose LAN > Fabrics.

The LAN Fabrics window appears.

Step 2

Choose a fabric with the Easy_Fabric or Easy_Fabric_eBGP fabric template.

Step 3

On the Topology window, right-click a switch and choose vPC Pairing from the drop-down list.

The window to choose the peer appears.

Note

 

Alternatively, you can also navigate to the Fabric Overview window. Choose a switch in the Switches tab and click on Actions > vPC Pairing to create, edit, or unpair a vPC pair. However, you can use this option only when you choose a Cisco Nexus switch.

You will get the following error when you choose a switch with the border gateway leaf role.

<switch-name> has a Network/VRF attached. Please detach the Network/VRF before vPC Pairing/Unpairing

Step 4

Check the Use Virtual Peerlink check box.

Step 5

Choose a peer switch and check the Recommended column to see if pairing is possible.

If the value is true, pairing is possible. You can pair switches even if the recommendation is false. However, you will get a warning or error during Recalculate & Deploy.

Step 6

Click Save.

Step 7

In the Topology window, choose Recalculate & Deploy.

The Deploy Configuration window appears.

Step 8

Click the field against the switch in the Preview Config column.

The Config Preview window appears for the switch.

Step 9

View the vPC link details in the pending configuration and side-by-side configuration.

Step 10

Close the window.

Step 11

Click the pending errors icon next to Recalculate & Deploy icon to view errors and warnings, if any.

If you see any warnings that are related to TCAM, click the Resolve icon. A confirmation dialog box about reloading switches appears. Click OK. You can also reload the switches from the topology window. For more information, see Guidelines and Limitations for vPC Fabric Peering and Migrating from vPC to vPC Fabric Peering sections in Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide.

The switches that are connected through vPC fabric peering are enclosed in a gray cloud.
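
For reference, the pending configuration for a virtual peer link typically includes commands along the following lines. This is a hedged sketch with hypothetical IP addresses, vPC domain ID, and port-channel number; NDFC generates the exact configuration, and the TCAM carving requires saving the configuration and reloading the switch:

hardware access-list tcam region ing-flow-redirect 512

feature vpc
vpc domain 100
  peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management
  virtual peer-link destination 10.1.1.2 source 10.1.1.1 dscp 56

interface port-channel500
  switchport
  switchport mode trunk
  vpc peer-link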


Converting a Physical Peer Link to a Virtual Peer Link

To convert a physical peer link to a virtual peer link from the Cisco NDFC Web UI, perform the following steps:
Before you begin
  • Perform the conversion from physical peer link to virtual peer link during the maintenance window of switches.

  • Ensure the switches support vPC fabric peering. Only the following switches support vPC fabric peering:

    • Cisco Nexus N9K-C9332C Switch, Cisco Nexus N9K-C9364C Switch, and Cisco Nexus N9K-C9348GC-FXP Switch.

    • Cisco Nexus 9000 Series Switches that end with FX, FX2, and FX2-Z.

    • Cisco Nexus 9300-EX, and 9300-FX/FXP/FX2/FX3/GX/GX2 platform switches. For more information, see Guidelines and Limitations for vPC Fabric Peering section in Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide.

Procedure

Step 1

Choose LAN > Fabrics.

The LAN Fabrics window appears.

Step 2

Choose a fabric with the Easy_Fabric or Easy_Fabric_eBGP fabric templates.

Step 3

On the Topology window, right-click the switch that is connected using the physical peer link and choose vPC Pairing from the drop-down list.

The window to choose the peer appears.

Note

 

Alternatively, you can also navigate to the Fabric Overview window. Choose a switch in the Switches tab and click on Actions > vPC Pairing to create, edit, or unpair a vPC pair. However, you can use this option only when you choose a Cisco Nexus switch.

You will get the following error when you choose a switch with the border gateway leaf role.

<switch-name> has a Network/VRF attached. Please detach the Network/VRF before vPC Pairing/Unpairing

Step 4

Check the Recommended column to see if pairing is possible.

If the value is true, pairing is possible. You can pair switches even if the recommendation is false. However, you will get a warning or error during Recalculate & Deploy.

Step 5

Check the Use Virtual Peerlink check box.

The Unpair icon changes to Save.

Step 6

Click Save.

Note

 

After you click Save, the physical vPC peer link is automatically deleted between the switches even without deployment.

Step 7

In the Topology window, choose Recalculate & Deploy.

The Deploy Configuration window appears.

Step 8

Click the field against the switch in the Preview Config column.

The Config Preview window appears for the switch.

Step 9

View the vPC link details in the pending configuration and the side-by-side configuration.

Step 10

Close the window.

Step 11

Click the pending errors icon next to the Recalculate & Deploy icon to view errors and warnings, if any.

If you see any warnings that are related to TCAM, click the Resolve icon. A confirmation dialog box about reloading switches appears. Click OK. You can also reload the switches from the fabric topology window.

The physical peer link between the peer switches turns red. Delete this link. The switches are connected only through a virtual peer link and are enclosed in a gray cloud.


Converting a Virtual Peer Link to a Physical Peer Link

To convert a virtual peer link to a physical peer link from the Cisco NDFC Web UI, perform the following steps:
Before you begin
Connect the switches using a physical peer link before disabling the vPC fabric peering.
Procedure

Step 1

Choose LAN > Fabrics.

The LAN Fabrics window appears.

Step 2

Choose a fabric with the Easy_Fabric or Easy_Fabric_eBGP fabric template.

Step 3

On the Topology window, right-click the switch that is connected through a virtual peer link and choose vPC Pairing from the drop-down list.

The window to choose the peer appears.

Note

 

Alternatively, you can also navigate to the Fabric Overview window. Choose a switch in the Switches tab and click on Actions > vPC Pairing to create, edit, or unpair a vPC pair. However, you can use this option only when you choose a Cisco Nexus switch.

Step 4

Uncheck the Use Virtual Peerlink check box.

The Unpair icon changes to Save.

Step 5

Click Save.

Step 6

In the Topology window, choose Recalculate & Deploy.

The Deploy Configuration window appears.

Step 7

Click the field against the switch in the Preview Config column.

The Config Preview window appears for the switch.

Step 8

View the vPC peer link details in the pending configuration and the side-by-side configuration.

Step 9

Close the window.

Step 10

Click the pending errors icon next to the Recalculate & Deploy icon to view errors and warnings, if any.

If you see any warnings that are related to TCAM, click the Resolve icon. The confirmation dialog box about reloading switches appears. Click OK. You can also reload the switches from the fabric topology window.

The virtual peer link, represented by a gray cloud, disappears and the peer switches are connected through a physical peer link.


Precision Time Protocol for Easy Fabric

In the fabric settings for the Easy_Fabric template, select the Enable Precision Time Protocol (PTP) check box to enable PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Loopback Id and PTP Domain Id fields are editable.

The PTP feature works only when all the devices in a fabric are cloud-scale devices. Warnings are displayed if there are non-cloud scale devices in the fabric, and PTP is not enabled. Examples of the cloud-scale devices are Cisco Nexus 93180YC-EX, Cisco Nexus 93180YC-FX, Cisco Nexus 93240YC-FX2, and Cisco Nexus 93360YC-FX2 switches.

For more information, see the Configuring PTP chapter in Cisco Nexus 9000 Series NX-OS System Management Configuration Guide and Cisco Nexus Dashboard Insights User Guide.

For Nexus Dashboard Fabric Controller deployments, specifically in VXLAN EVPN based fabric deployments, you have to enable PTP globally and also enable PTP on core-facing interfaces. The interfaces can be connected to an external PTP server, such as a VM or Linux-based machine. Therefore, the interface should be edited to have a connection with the grandmaster clock.

It is recommended that the grandmaster clock be configured outside of the Easy Fabric and that it is IP reachable. The interfaces toward the grandmaster clock need to be enabled with PTP via the interface freeform config.

All core-facing interfaces are auto-enabled with the PTP configuration after you click Deploy Config. This action ensures that all devices are PTP synced to the grandmaster clock. Additionally, for any interfaces that are not core-facing, such as interfaces on the border devices and leafs that are connected to hosts, firewalls, service-nodes, or other routers, the ttag related CLI must be added. The ttag is added for all traffic entering the VXLAN EVPN fabric and the ttag must be stripped when traffic is exiting this fabric.

Here is the sample PTP configuration:

feature ptp
 
ptp source 100.100.100.10 -> IP address of the loopback interface (loopback0) that is already created, or of the user-created loopback interface specified in the fabric settings

ptp domain 1 -> PTP domain ID specified in fabric settings

interface Ethernet1/59 -> Core facing interface
  ptp
 
interface Ethernet1/50 -> Host facing interface
  ttag
  ttag-strip

The following guidelines are applicable for PTP:

  • The PTP feature can be enabled in a fabric when all the switches in the fabric have Cisco NX-OS Release 7.0(3)I7(1) or a higher version. Otherwise, the following error message is displayed:

    PTP feature can be enabled in the fabric, when all the switches have NX-OS Release 7.0(3)I7(1) or higher version. Please upgrade switches to NX-OS Release 7.0(3)I7(1) or higher version to enable PTP in this fabric.

  • For hardware telemetry support in NIR, the PTP configuration is a prerequisite.

  • If you are adding a non-cloud scale device to an existing fabric which contains PTP configuration, the following warning is displayed:

    TTAG is enabled fabric wide, when all devices are cloud scale switches so it cannot be enabled for newly added non cloud scale device(s).

  • If a fabric contains both cloud scale and non-cloud scale devices, the following warning is displayed when you try to enable PTP:

    TTAG is enabled fabric wide, when all devices are cloud scale switches and is not enabled due to non cloud scale device(s).

Support for Super Spine Switch Role

Super Spine is a device that is used for interconnecting multiple spine-leaf PODs. You have an extra interconnectivity option with super spines. You can have multiple spine-leaf PODs within the same Easy Fabric that are interconnected via super spines such that the same IGP domain extends across all the PODs, including the super spines. Within such a deployment, the BGP RRs and RPs (if applicable) are provisioned on the super spine layer. The spine layer becomes a pseudo interconnect between the leafs and super spines. VTEPs may optionally be hosted on the super spines if they have the border functionality.

The following super spine switch roles are supported in NDFC:

  • Super Spine

  • Border Super Spine

  • Border Gateway Super Spine

A border super spine handles multiple functionalities including the functionalities of a super spine, RR, RP (optionally), and a border leaf. Similarly, a border gateway super spine serves as a super spine, RR, RP (optional), and border gateway. It is not recommended to overload border functionality on the super spine or RR layer. Instead, attach border leafs or border gateways to the super spine layer for external connectivity. The super spine layer serves as the interconnect with the RR or RP functionality.

The following are the characteristics of super spine switch roles in NDFC:

  • Supported with Easy Fabrics only.

  • From Cisco NDFC Release 12.1.1e, Super Spine switch role and Border Super Spine switch role are also supported with the eBGP routed fabrics for IPv6 underlay using Easy_Fabric_eBGP template.

  • Can only connect to spines and borders. The valid connections are:

    • Spines to super spines

    • Spines to border super spines and border gateway super spines

    • Super spines to border leafs and border gateway leafs.

  • RR or RP (if applicable) functionality is always configured on super spines if they are present in a fabric. A maximum of 4 RRs and RPs are supported even with super spines.

  • Border Super Spine and Border Gateway Super Spine roles are supported for inter-fabric connections.

  • vPC configurations aren’t supported on super spines.

  • Super spines don’t support IPv6 underlay configuration.

  • During the Brownfield import of switches, if a switch has the super spine role, the following error is displayed:

    Serial number: [super spine/border super spine/border gateway superspine] Role isn’t supported with preserved configuration yes option.

Supported Topologies for Super Spine Switches

NDFC supports the following topologies with super spine switches.

Topology 1: Super Spine Switches in a Spine Leaf Topology

In this topology, leaf switches are connected to spines, and spines are connected to super spine switches which can be super spines, border super spines, and border gateway super spines.

Topology 2: Super Spine Switches Connected to Border

In this topology, there are four leaf switches connecting to the spine switches, which are connected to two super spine switches. These super spine switches are connected to the border or border gateway leaf switches.

Adding a Super Spine Switch to an Existing VXLAN BGP EVPN Fabric

To add a super spine switch to an existing VXLAN BGP EVPN fabric, perform the following steps:

Procedure

Step 1

Choose LAN > Fabrics. Double-click on the required fabric.

The Fabric Overview window appears.

Step 2

On the Switches tab, click Actions > Add Switches.

For more information, see Adding Switches to a Fabric.

Step 3

Right-click on an existing switch or the newly added switch, and use the Set role option to set the appropriate super spine role.

Note

 
  • If the Super Spine role exists in the fabric, you can assign border super spine and border gateway super spine roles for any new device.

  • If the super spine role or any of its variations is not assigned, you can assign the role to any new device if it is connected to a non-border spine. After a Recalculate & Deploy, you will receive an error that you can resolve by clicking the Resolve button as shown in the following steps.

Step 4

On the Fabric Overview window, click Actions > Recalculate & Deploy.

The following error message is displayed:

Super Spine role cannot be allowed in the existing fabric as it is disruptive. Please go to 'Event Analytics' and click on the resolve button to proceed.

Step 5

Choose Event Analytics > Alarms, click on the ID.

The Alarm ID slide-in pane appears.

Step 6

Click Resolve.

The Confirm action dialog box appears.

Step 7

Click Confirm.

Step 8

On the Fabric Overview window, click Actions > Recalculate & Deploy.

Do not add a device with a super spine, border super spine, or border gateway super spine role if the device is connected to a border spine or border gateway spine. This action results in an error after you recalculate and deploy the configuration. To use existing devices with border spine roles, remove the devices and add them again with the appropriate roles.


Overlay Mode

You can create a VRF or network in CLI or config-profile mode at the fabric level. The overlay mode of member fabrics of an MSD fabric is set individually at the member-fabric level. Overlay mode can only be changed before deploying overlay configurations to the switches. After the overlay configuration is deployed, you cannot change the mode unless all the VRF and network attachments are removed.


Note


If you upgrade from Cisco DCNM Release 11.5(x), the existing config-profile mode functions the same.


If the switch has config-profile based overlays, you can import it in the config-profile overlay mode only. If you import it in the cli overlay mode, an error appears during brownfield import.

For brownfield import, if overlay is deployed as config-profile mode, it can be imported in config-profile mode only. However, if overlay is deployed as cli, it can be imported in either config-profile or cli modes.

To choose the overlay mode of VRFs or networks in a fabric, perform the following steps:

  1. Navigate to the Edit Fabric window.

  2. Go to the Advanced tab.

  3. From the Overlay Mode drop-down list, choose config-profile or cli.

    The default mode is config-profile.
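
As an illustration, in config-profile mode the overlay intent is wrapped in configure profile and apply profile commands, while in cli mode the same intent is rendered as flat commands. A hedged sketch with a hypothetical profile name, VLAN, and VNI (the actual profile contents generated by NDFC are more extensive):

Overlay rendered in config-profile mode:

configure profile MyNetwork_30000
  vlan 2300
    vn-segment 30000
apply profile MyNetwork_30000

The same overlay rendered in cli mode:

vlan 2300
  vn-segment 30000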

Sync up Out-of-Band Switch Interface Configurations

Any interface level configuration made outside of Nexus Dashboard Fabric Controller (via CLI) can be synced to Nexus Dashboard Fabric Controller and then managed from Nexus Dashboard Fabric Controller. Also, the vPC pair configurations are automatically detected and paired. This applies to the External_Fabric and LAN_Classic fabrics only. The vPC pairing is performed with the vpc_pair policy.


Note


When Nexus Dashboard Fabric Controller is managing switches, ensure that all configuration changes are initiated from Nexus Dashboard Fabric Controller and avoid making changes directly on the switch.


When the interface config is synced up to the Nexus Dashboard Fabric Controller intent, the switch configs are considered as the reference, that is, at the end of the sync up, the Nexus Dashboard Fabric Controller intent reflects what is present on the switch. If there was any undeployed intent on Nexus Dashboard Fabric Controller for those interfaces before the resync operation, it will be lost.

Guidelines

  • Supported in fabrics using the following templates: Easy_Fabric, External_Fabric, and LAN_Classic.

  • Supported for Cisco Nexus switches only.

  • Supported for interfaces that don’t have any fabric underlay related policy associated with them prior to the resync. For example, IFC interfaces and intra fabric links aren’t subjected to resync.

  • The time taken by host port resync depends on the number of switches/interfaces to be synchronized.

  • Supported for interfaces that do not have any custom policy (policy template that isn’t shipped with Cisco Nexus Dashboard Fabric Controller) associated with them prior to resync.

  • Supported for interfaces where the intent is not exclusively owned by a Cisco Nexus Dashboard Fabric Controller feature and/or application prior to resync.

  • Supported on switches that don’t have Interface Groups associated with them.

  • Interface mode (switchport to routed, trunk to access, and so on) changes aren’t supported with overlays attached to that interface.

The sync up functionality is supported for the following interface modes and policies:

Interface Mode Policies
trunk (standalone, po, and vPC PO)
  • int_trunk_host

  • int_port_channel_trunk_host

  • int_vpc_trunk_host

access (standalone, po, and vPC PO)
  • int_access_host

  • int_port_channel_access_host

  • int_vpc_access_host

dot1q-tunnel
  • int_dot1q_tunnel_host

  • int_port_channel_dot1q_tunnel_host

  • int_vpc_dot1q_tunnel_host

routed

int_routed_host

loopback

int_freeform

sub-interface

int_subif

FEX (ST, AA)
  • int_port_channel_fex

  • int_port_channel_aa_fex

breakout

interface_breakout

nve

int_freeform (only in External_Fabric/LAN_Classic)

SVI

int_freeform (only in External_Fabric/LAN_Classic)

mgmt0

int_mgmt

In an Easy fabric, the interface resync will automatically update the network overlay attachments based on the access VLAN or allowed VLANs on the interface.

After the resync operation is completed, the switch interface intent can be managed using normal Nexus Dashboard Fabric Controller procedures.
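
For example, an out-of-band trunk configuration such as the following hedged sketch (hypothetical interface and VLAN range) is captured into the int_trunk_host policy during the resync; in an Easy fabric, the allowed VLANs then drive the corresponding network overlay attachments:

interface Ethernet1/10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 2300-2301
  mtu 9216
  no shutdown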

Syncing up Switch Interface Configurations

It is recommended to deploy all switch configurations from NDFC. In some scenarios, it may be necessary to make changes to the switch interface configuration out-of-band. This causes configuration drift, and the switches are reported as Out-of-Sync.

NDFC supports syncing up the out-of-band interface configuration changes back into its intent.

Guidelines and Limitations

The following limitations are applicable after Syncing up Switch Interface Configurations to NDFC:

  • Port channel membership changes (once the policy exists) are not supported.

  • Changing the interface mode (for example, trunk to access) on interfaces that have overlays attached is not supported.

  • Resync for interfaces that belong to Interface Groups is not supported.

  • The vPC pairing in External_Fabric and LAN_Classic templates must be updated with the vpc_pair policy.

  • This feature is supported for Easy, External, and LAN Classic fabrics.

  • The resync can be performed for a set of switches and repeated as desired.

  • The time taken by host port resync depends on the number of switches/interfaces to be synchronized.

  • In Easy_Fabric fabrics, VXLAN overlay interface attachments are performed automatically based on the allowed VLANs.

Before you begin
  • We recommend taking a fabric backup before attempting the interface resync.

  • In External_Fabric and LAN_Classic fabrics, for the vPC pairing to work correctly, both the switches must be in the fabric and must be functional.

  • Ensure that the switches are In-Sync and that the switch mode is not Migration or Maintenance.

  • From the Actions drop-down list, choose Discovery > Rediscover to ensure that NDFC is aware of any new interfaces and other changes.

Procedure

Step 1

Choose LAN > Fabrics and double-click on a fabric.

The Fabric Overview window appears.

Step 2

Click the Switches tab and ensure that switches are present in the fabric and vPC pairings are completed.

Step 3

Click the Policies tab and select one or more switches where the interface intent resync is needed.

Note

 
  • If a pair of switches is already paired with either no_policy or vpc_pair, select only one switch of the pair.

  • If a pair of switches is not paired, then select both the switches.

Step 4

From the Actions drop-down list, choose Add Policy.

The Create Policy window appears.

Step 5

On the Create Policy window, choose host_port_resync from the Policy drop-down list.

Step 6

Click Save.

Step 7

Check the Mode column for the switches to ensure that they report Migration. For a vPC pair, both switches are in the Migration-mode.

  • After this step, the switches in the Topology view are in Migration-mode.

  • Both the switches in a vPC pair are in the migration mode even if one of the switches is placed into this mode.

  • If switch(es) are unintentionally put into the resync mode, they can be moved back to the normal mode by identifying the host_port_resync policy instance and deleting it from the Policies tab.

Step 8

After the configuration changes are ready to sync up to NDFC, navigate to the Switches tab and select the required switches.

Step 9

Click Recalculate & Deploy to start the resync process.

Note

 

This process might take some time to complete based on the size of the switch configuration and the number of switches involved.

The time taken by host port resync depends on the number of switches/interfaces to be synchronized.

Step 10

The Deploy Configuration window is displayed if no errors are detected during the resync operation. The interface intent is updated in NDFC.

Note

 

If the External_Fabric or LAN_Classic fabric is in Monitored Mode, an error message indicating that the fabric is in the read-only mode is displayed. This error message can be ignored and doesn’t mean that the resync process has failed.

Close the Deploy Configuration window, and you can see that the switches are automatically moved out of the Migration-mode. Switches in a vPC pair that were not paired or paired with no_policy show up as paired and associated with the vpc_pair policy.

Note

 

The host_port_resync policy that was created for the switch is automatically deleted after the resync process is completed successfully.


Configuration Compliance

The entire intent or expected configuration defined for a given switch is stored in NDFC. When you want to push this configuration down to one or more switches, the configuration compliance (CC) module is triggered. CC takes the current intent, the current running configuration, and then comes up with the set of configurations that are required to go from the current running configuration to the current expected configuration so that everything will be In-Sync.

When performing a software or firmware upgrade on the switches, the current running configuration on the switches is not changed. Post upgrade, if CC finds that the current running configuration does not have the current expected configuration or intent, it reports an Out-of-Sync status. There is no auto deployment of any configurations. You can preview the diffs that will get deployed to get one or more devices back In-Sync.

With CC, the sync is always from the NDFC to the switches. There is no reverse sync. So, if you make a change out-of-band on the switches that conflicts with the defined intent in NDFC, CC captures this diff, and indicates that the device is Out-of-Sync. The pending diffs will undo the configurations done out-of-band to bring back the device In-Sync. Note that such conflicts due to out-of-band changes are captured by the periodic CC run that occurs every 60 minutes by default, or when you click the RESYNC option either on a per fabric or per switch basis. From Cisco NDFC Release 12.1.1e, the periodic CC runs every 24 hours. You can configure the custom interval with the range of 30-3600 minutes. This configuration can be done by navigating to Server > Server Settings > LAN-Fabric. Note that you can also capture the out-of-band changes for the entire switch by using the CC REST API. For more information, see Cisco NDFC REST API Guide.

To improve ease of use and readability of deployed configurations, CC in NDFC has been enhanced with the following:

  • All displayed configurations in NDFC are easily readable and understandable.

  • Repeated configuration snippets are not displayed.

  • Pending configurations precisely show only the diff configuration.

  • Side-by-side diffs have greater readability, integrated search or copy, and diff summary functions.

Top-level configuration commands on the switch that do not have any associated NDFC intent are not checked for compliance by CC. However, CC performs compliance checks, and attempts removal, of the following commands even if there is no NDFC intent:

  • configure profile

  • apply profile

  • interface vlan

  • interface loopback

  • interface Portchannel

  • Sub-interfaces, for example, interface Ethernet X/Y.Z

  • fex

  • vlan <vlan-ids>

CC performs compliance checks, and attempts removal, of these commands only when Easy_Fabric and Easy_Fabric_eBGP templates are used. On External_Fabric and LAN_Classic templates, top-level configuration commands on the switch, including the commands mentioned above, that do not have any associated NDFC intent are not checked for compliance by CC.

We recommend using the NDFC freeform configuration template to create additional intent and deploy these commands to the switches to avoid unexpected behavior.

Now, consider a scenario in which the configuration that exists on the switch has no relationship with the configuration defined in the intent. Examples of such configurations are a new feature that has not been captured in the intent but is present on the switch or some other configuration aspect that has not been captured in the intent. Configuration compliance does not consider these configuration mismatches as a diff. In such cases, Strict Configuration Compliance ensures that every configuration line that is defined in the intent is the only configuration that exists on the switch. However, configuration such as boot string, rommon configuration, and other default configurations are ignored during strict CC checks. For such cases, the internal configuration compliance engine ensures that these config changes are not called out as diffs. These diffs are also not displayed in the Pending Config window. But, the Side-by-side diff utility compares the diff in the two text files and does not leverage the internal logic used in the diff computation. As a result, the diff in default configurations are highlighted in red in the Side-by-side Comparison window.

In NDFC, the diffs in default configurations are not highlighted in the Side-by-side Comparison window. The auto-generated default configuration that is highlighted in the Running config window is not visible in the Expected config window.

Any configurations that are shown in the Pending Config window are highlighted in red in the Side-by-side Comparison window if the configurations are seen in the Running config window but not in the Expected config window. Also, any configurations that are shown in the Pending Config window are highlighted in green in the Side-by-side Comparison window if the configurations are seen in the Expected config window but not in the Running config window. If there are no configurations displayed in the Pending Config window, no configurations are shown in red in the Side-by-side Comparison window.

All freeform configurations have to strictly match the show running configuration output on the switch and any deviations from the configuration will show up as a diff during Recalculate & Deploy. You need to adhere to the leading space indentations.

You can typically enter configuration snippets in NDFC using the following methods:

  • User-defined profile and templates

  • Switch, interface, overlay, and vPC freeform configurations

  • Network and VRF per switch freeform configurations

  • Fabric settings for Leaf, Spine, or iBGP configurations


Caution


The configuration format should be identical to the show running configuration of the corresponding switch. Otherwise, any missing or incorrect leading spaces in the configuration can cause unexpected deployment errors and unpredictable pending configurations. If any unexpected diffs or deployment errors are displayed, check the user-provided or custom configuration snippets for incorrect values.


If NDFC displays the "Out-of-Sync" status due to unexpected pending configurations, and this configuration is either unable to be deployed or stays consistent even after a deployment, perform the following steps to recover:

  1. Check the lines of config highlighted under the Pending Config tab in the Config Preview window.

  2. Check the same lines in the corresponding Side-by-side Comparison tab. This tab shows whether the diff exists in "intent", or "show run", or in both with different leading spaces. Leading spaces are highlighted in the Side-by-side Comparison tab.

  3. If the pending configurations or switch with an out-of-sync status is due to any identifiable configuration with mismatched leading spaces in "intent" and "running configuration", this indicates that the intent has incorrect spacing and needs to be edited.

  4. To edit incorrect spacing on any custom or user-defined policies, navigate to the switch and edit the corresponding policy:

    1. If the source of the policy is UNDERLAY, you will need to edit this from the Fabric settings screen and save the updated configuration.

    2. If the source is blank, it can be edited from the View/Edit policies window for that switch.

    3. If the source of the policy is OVERLAY but it is derived from a switch freeform configuration, navigate to the appropriate OVERLAY switch freeform configuration and update it.

    4. If the source of the policy is OVERLAY or a custom template, perform the following steps:

      1. Choose Settings > Server Settings, set the template.in_use.check property to false, uncheck the Template In-Use Override check box, and click Save. This allows the profiles or templates to be edited.

      2. Edit the specific profile or template in the Operations > Templates > Edit template properties window, and save the updated profile or template with the correct spacing.

      3. Click Recalculate & Deploy to recompute the diffs for the impacted switches.

      4. After the configurations are updated, check the Template In-Use Override check box again (setting the template.in_use.check property back to true) and click Save. Leaving this check disabled slows down the performance of the NDFC system, specifically for Recalculate & Deploy operations.

To confirm that the diffs have been resolved, click Recalculate & Deploy after updating the policy to validate the changes.


Note


NDFC checks only leading spaces because they imply the hierarchy of a command, especially in multi-command sequences. NDFC does not check trailing spaces in command sequences.


Example 1: Configuration Compliance in Switch Freeform Policy

Let us consider an example with incorrect spacing in the Switch Freeform Configuration field.

Create the switch freeform policy.

After deploying this policy successfully to the switch, NDFC persistently reports the diffs.

After clicking the Side-by-side Comparison tab, you can see the cause of the diff. The ip pim rp-address line has 2 leading spaces, while the running configuration has 0 leading spaces.
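
For illustration, with a hypothetical RP address and group range, the incorrect freeform intent and the corresponding running configuration might look as follows:

Freeform intent (2 leading spaces):

  ip pim rp-address 10.2.0.1 group-list 239.1.1.0/25

Running configuration (no leading spaces):

ip pim rp-address 10.2.0.1 group-list 239.1.1.0/25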

To resolve this diff, edit the corresponding Switch Freeform policy so that the spacing is correct.

After you save, you can use the Push Config or Recalculate & Deploy option to re-compute diffs.

The diffs are now resolved. The Side-by-side Comparison tab confirms that the leading spaces are updated.

Example 2: Resolving a Leading Space Error in Overlay Configurations

Let us consider an example with a leading space error that is displayed in the Pending Config tab.

In the Side-by-side Comparison tab, search for the diffs line by line to understand the context of the deployed configuration.

A matched count of 0 means that it is a special configuration that NDFC has evaluated to push to the switch.

You can see that the leading spaces are mismatched between running and expected configurations.

Navigate to the respective freeform configuration, correct the leading spaces, and save the updated configuration.

Navigate to the Fabric Overview window for the fabric and click Recalculate & Deploy.

In the Deploy Configuration window, you can see that all the devices are in-sync.

Configuration Compliance in External Fabrics

With external fabrics, any Nexus switches, Cisco IOS-XE devices, Cisco IOS XR devices, and Arista switches can be imported into the fabric, and there is no restriction on the type of deployment. It can be LAN Classic, VXLAN, FabricPath, vPC, HSRP, and so on. When switches are imported into an external fabric, the configuration on the switches is retained so that the import is non-disruptive. Only basic policies such as the switch username and mgmt0 interface are created after a switch import.

In the external fabric, for any intent that is defined in the Nexus Dashboard Fabric Controller, configuration compliance (CC) ensures that this intent is present on the corresponding switch. If this intent is not present on the switch, CC reports an Out-of-Sync status. Additionally, a Pending Config is generated to push this intent to the switch and change the status to In-Sync. Any additional configuration that is on the switch but not in the intent defined in Nexus Dashboard Fabric Controller is ignored by CC, as long as there is no conflict with anything in the intent.

When user-defined intent is added on Nexus Dashboard Fabric Controller and the switch has additional configuration under the same top-level command, as mentioned earlier, CC only ensures that the intent defined in Nexus Dashboard Fabric Controller is present on the switch. When this user-defined intent is deleted as a whole from Nexus Dashboard Fabric Controller with the intention of removing it from the switch, and the corresponding configuration exists on the switch, CC reports an Out-of-Sync status for the switch and generates a Pending Config to remove the configuration from the switch. This Pending Config includes the removal of the top-level command, which also removes any other out-of-band configuration made on the switch under that top-level command. If you want to override this behavior, the recommendation is to create a freeform policy and add the relevant top-level command to the freeform policy.

Let us see this behavior with an example.

  1. The user defines a switch_freeform policy in Nexus Dashboard Fabric Controller and deploys it to the switch.

  2. Additional configuration exists under router bgp in the Running config that does not exist in the user-defined Nexus Dashboard Fabric Controller intent (Expected config). Note that there is no Pending Config to remove the additional configuration that exists on the switch without a user-defined intent in Nexus Dashboard Fabric Controller.

  3. When the intent that was pushed earlier from Nexus Dashboard Fabric Controller is deleted by removing the switch_freeform policy that was created in Step 1, the Pending Config and the Side-by-side Comparison show the configuration that will be removed from the switch, including the top-level command.

  4. A switch_freeform policy with only the top-level router bgp command needs to be created (see the example after this list). This enables CC to generate the configuration needed to remove only the desired sub-configuration that was pushed from Nexus Dashboard Fabric Controller earlier.

  5. The removed configuration is only the subset of the configuration that was pushed earlier from Nexus Dashboard Fabric Controller.

    For interfaces on the switch in the external fabric, Nexus Dashboard Fabric Controller either manages the entire interface or does not manage it at all. CC checks interfaces in the following ways:

    • For any interface, if there is a policy defined and associated with it, then this interface is considered as managed. All configurations associated with this interface must be defined in the associated interface policy. This is applicable for both logical and physical interfaces. Otherwise, CC removes any out-of-band updates made to the interface to change the status to In-Sync.

    • Interfaces created out-of-band (this applies to logical interfaces such as port-channels, sub-interfaces, SVIs, loopbacks, and so on) are discovered by Nexus Dashboard Fabric Controller as part of the regular discovery process. However, since there is no intent for these interfaces, CC does not report an Out-of-Sync status for them.

    • For any interface, there can always be a monitor policy associated with it in Nexus Dashboard Fabric Controller. In this case, CC will ignore the interface’s configuration when it reports the In-Sync or Out-of-Sync config compliance status.
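
As an illustration of the freeform workaround described in step 4 above, such a switch_freeform policy would contain only the top-level command, shown here with a hypothetical ASN:

router bgp 65001

With this policy in place, CC retains the router bgp command on the switch and removes only the sub-configuration that was part of the deleted intent.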

Special Configuration CLIs Ignored for Configuration Compliance

The following configuration CLIs are ignored during configuration compliance checks:

  • Any CLI having 'username' along with 'password'

  • Any CLI that starts with 'snmp-server user'

Any CLIs that match the above will not show up in pending diffs, and clicking Save & Deploy in the Fabric Builder window will not push such configurations to the switch. These CLIs will not show up in the Side-by-side Comparison window either.
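
For example, CLIs similar to the following are ignored by CC; the usernames, role, and hash strings below are placeholders:

username admin password 5 <encrypted-password> role network-admin
snmp-server user admin network-admin auth md5 <auth-digest> priv <priv-digest> localizedkey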

To deploy such configuration CLIs, perform the following procedure:

Procedure

Step 1

Select LAN > Fabrics.

Double-click on the fabric name to view the Fabric Overview screen.

Step 2

On the Switches tab, double-click on the switch name to view the Switch Overview screen.

On the Policies tab, all the policies applied on the switch within the chosen fabric are listed.

Step 3

On the Policies tab, from the Actions drop-down list, select Add Policy.

Step 4

Add a Policy Template Instance (PTI) with the required configuration CLIs using the switch_freeform template and click Save.

Step 5

Select the created policy and select Push Config from the Actions drop-down list to deploy the configuration to the switch(es).


Resolving Diffs for Case Insensitive Commands

By default, all diffs generated in NDFC while comparing intent, also known as Expected Configuration, and Running Configuration, are case sensitive. However, the switch has many commands that are case insensitive, and therefore it may not be appropriate to flag these commands as differences. These are captured in the compliance_case_insensitive_clis.txt template that can be found under Operations > Templates.

From Cisco NDFC Release 12.0.1a, the compliance_case_insensitive_clis.txt file, along with the compliance_strict_cc_exclude_clis.txt and compliance_ipv6_clis.txt files, is part of the shipped templates. You can find all the templates under Operations > Templates. Templates can be modified after disabling Template In-Use Override.

There could be additional commands not included in the existing compliance_case_insensitive_clis.txt file that should be treated as case insensitive. If the pending configuration is due to case differences between the Expected Configuration in NDFC and the Running Configuration, you can configure NDFC to ignore these case differences as follows:

  1. Navigate to Settings > Server Settings > LAN-Fabric, uncheck Template In-Use Override and then click Save.

  2. Navigate to Operations > Templates and search for compliance_case_insensitive_clis.txt file.

  3. Check compliance_case_insensitive_clis.txt and choose Actions > Edit template content.

    An example of the entries in the compliance_case_insensitive_clis.txt file is displayed in the following figure.

  4. Remove the entries highlighted in the figure and click Finish.

  5. If newer patterns are detected during deployment and they are triggering pending configurations, you can add these patterns to this file. The patterns must be valid regular expressions.

  6. Navigate to Settings > Server Settings > LAN-Fabric, check Template In-Use Override and then click Save.

    This enables NDFC to treat the documented configuration patterns as case insensitive while performing comparisons.

  7. Click Recalculate & Deploy for fabrics to view the updated comparison outputs.

Resolving Configuration Compliance After Importing Switches

After importing switches in Cisco NDFC, configuration compliance for a switch can fail because of an extra space in the management interface (mgmt0) description field.

For example, before importing the switch:


interface mgmt0
  description SRC=SDS-LB-LF111-mgmt0, DST=SDS-LB-SW001-Fa0/5

After importing the switch and creating a configuration profile:


interface mgmt0
  description SRC=SDS-LB-LF111-mgmt0,DST=SDS-LB-SW001-Fa0/5

Navigate to Interface Manager and click the Edit icon after selecting the mgmt0 interface. Remove the extra space in the description.

Strict Configuration Compliance

Strict configuration compliance checks for diffs between the switch configuration and the associated intent, and generates no commands (the no form of the CLIs) for configurations that are present on the switch but not in the associated intent. When you click Recalculate and Deploy, switch configurations that are not present in the associated intent are removed. You can enable this feature by choosing the Enable Strict Config Compliance check box under the Advanced tab in the Create Fabric or Edit Fabric window. By default, this feature is disabled.

The strict configuration compliance feature is supported with the Easy Fabric templates: Easy_Fabric and Easy_Fabric_eBGP. To avoid generating diffs for commands that are auto-generated by the switch, such as vdc, rmon, and so on, CC uses a file that lists these default commands so that diffs are not generated for them. This list is maintained in the compliance_strict_cc_exclude_clis.txt template under Operations > Templates.

Example: Strict Configuration Compliance

Let us consider an example in which the feature telnet command is configured on a switch but is not present in the intent. In such a scenario, the status of the switch is displayed as Out-of-sync after a CC check is done.

Now, click Preview Config of the out-of-sync switch. As the strict configuration compliance feature is enabled, the no form of the feature telnet command appears under Pending Config in the Preview Config window.
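
In this example, the Pending Config would contain a line similar to the following:

no feature telnet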

Click the Side-by-side Comparison tab to display the differences between the running configuration and the expected configuration. The Re-sync button is also displayed at the top right corner under the Side-by-side Comparison tab in the Preview Config window. Use this option to resynchronize the NDFC state when there is a large-scale out-of-band change, or if configuration changes do not register in NDFC properly.

The re-sync operation does a full CC run for the switch and recollects the "show run" and "show run all" output from the switch. When you initiate the re-sync process, a progress message is displayed. During the re-sync, the running configuration is taken from the switch. The Out-of-Sync/In-Sync status for the switch is then recalculated based on the intent defined in NDFC.

Now, close the Preview Config window and click Recalculate and Deploy. The strict configuration compliance feature ensures that the running configuration on the switch does not deviate from the intent by pushing the no form of the feature telnet command to the switch. The diff between the configurations is highlighted. The diffs other than the feature telnet command are default switch and boot configurations and are ignored by the strict CC check.

You can right-click on a switch in the Fabric Overview window and select Preview Config to display the Preview Config window. This window displays the pending configuration that has to be pushed to the switch to achieve configuration compliance with the intent.

Custom freeform configurations can be added in NDFC to make the intended configuration on NDFC and the switch configuration identical, so that the switches are in In-Sync status. For more information on how to add custom freeform configurations on NDFC, refer to Enabling Freeform Configurations on Fabric Switches.
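
For instance, in the feature telnet example above, if the feature is actually desired on the switch, a minimal switch_freeform policy containing only that command aligns the intent with the running configuration:

feature telnet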

Enabling Freeform Configurations on Fabric Switches

In Nexus Dashboard Fabric Controller, you can add custom configurations through freeform policies in the following ways:

  1. Fabric-wide:

    • On all leaf, border leaf, and border gateway leaf switches in the fabric, at once.

    • On all spine, super spine, border spine, border super spine, border gateway spine, and border gateway super spine switches, at once.

  2. On a specific switch at the global level.

  3. On a specific switch on a per Network or per VRF level.

  4. On a specific interface on a switch.

Leaf switches are identified by the roles Leaf, Border, and Border Gateway. The spine switches are identified by the roles Spine, Border Spine, Border Gateway Spine, Super Spine, Border Super Spine, and Border Gateway Super Spine.


Note


You can deploy freeform CLIs when you create a fabric or when a fabric is already created. The following examples are for an existing fabric. However, you can use this as a reference for a new fabric.


Deploying Fabric-Wide Freeform CLIs on Leaf and Spine Switches

  1. Choose LAN > Fabrics > Fabrics.

  2. Select the Fabric, and select Edit Fabric from the Actions drop-down list.

    (If you are creating a fabric for the first time, click Create Fabric).

  3. Click the Advanced tab and update the following fields:

    Leaf Freeform Config – In this field, add configurations for all leaf, border leaf, and border gateway leaf switches in the fabric.

    Spine Freeform Config - In this field, add configurations for all Spine, Border Spine, Border Gateway Spine, Super Spine, Border Super Spine, and Border Gateway Super Spine switches in the fabric.


    Note


    Copy-paste the intended configuration with correct indentation, as seen in the running configuration on the Nexus switches. For more information, see Resolving Freeform Config Errors in Switches.


  4. Click Save. The fabric topology screen comes up.

  5. Click Deploy Config from the Actions drop-down list to save and deploy configurations.

    Configuration Compliance functionality ensures that the intended configuration as expressed by those CLIs is present on the switches. If those CLIs are removed or a mismatch occurs, it flags the mismatch and indicates that the device is Out-of-Sync.

Incomplete Configuration Compliance - On some Cisco Nexus 9000 Series switches, in spite of deploying pending switch configurations using the Deploy Config option, there could be a mismatch between the intended and switch configuration. To resolve the issue, add a switch_freeform policy to the affected switch (as explained in the Deploying Freeform CLIs on a Specific Switch section). For example, consider the following persistent pending configurations:


line vty
logout-warning 0

To bring the switch back in-sync, add the above configuration in a switch_freeform policy, save the policy, and deploy it onto the switch. After adding the configuration in the policy and saving the updates, click Deploy Config in the topology screen to complete the deployment process.

Deploying Freeform CLIs on a Specific Switch

  1. Choose LAN > Fabrics > Fabrics.

  2. Select the Fabric, and select Edit Fabric from the Actions drop-down list.

  3. Click the Policies tab. From the Actions drop-down list, choose Add Policy.

    The Create Policy screen comes up.


    Note


    To provision freeform CLIs on a new fabric, you have to create a fabric, import switches into it, and then deploy freeform CLIs.


  4. In the Priority field, the priority is set to 500 by default. You can choose a higher priority (by specifying a lower number) for CLIs that need to appear higher up during deployment. For example, a command to enable a feature should appear earlier in the list of commands.

  5. In the Description field, provide a description for the policy.

  6. From the Template Name field, select freeform_policy.

  7. Add or update the CLIs in the Freeform Config CLI box.

    Copy-paste the intended configuration with correct indentation, as seen in the running configuration on the Nexus switches. For more information, see Resolving Freeform Config Errors in Switches.

  8. Click Save.

    After the policy is saved, it gets added to the intended configurations for that switch.

  9. From the Fabric Overview window, click the Switches tab and choose the required switches.

  10. On the Switches tab, click the Actions drop-down list and choose Deploy.

Pointers for freeform_policy Policy Configuration:

  • You can create multiple instances of the policy.

  • For a vPC switch pair, create consistent freeform_policy policies on both the vPC switches.

  • When you edit a freeform_policy policy and deploy it onto the switch, you can see the changes being made (in the Side-by-side tab of the Preview option).

Freeform CLI Configuration Examples

Console line configuration

This example involves deploying some fabric-wide freeform configurations (for all leaf and spine switches) and individual switch configurations.

Fabric-wide session timeout configuration:


line console
  exec-timeout 1

Console speed configuration on a specific switch:


line console
  speed 115200

IP Prefix List/Route-map configuration

IP prefix list and route-map configurations are typically configured on border devices. These configurations are global because they can be defined once on a switch and then applied to multiple VRFs as needed. The intent for this configuration can be captured and saved in a switch_freeform policy. As mentioned earlier, note that the configuration saved in the policy should match the show run output. This is especially relevant for prefix lists where the NX-OS switch may generate sequence numbers automatically when configured on the CLI. An example snippet is shown below:

ip prefix-list prefix-list-name1 seq 5 permit 20.2.0.1/32
ip prefix-list prefix-list-name1 seq 6 permit 20.2.0.2/32
ip prefix-list prefix-list-name2 seq 5 permit 192.168.100.0/24
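
Similarly, a route-map that references these prefix lists can be captured in the same switch_freeform policy. The route-map name and sequence number below are illustrative:

route-map rm-vrf-export permit 10
  match ip address prefix-list prefix-list-name1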

ACL configuration

ACL configurations are typically configured on specific switches and not fabric-wide (that is, not on all leaf/spine switches). When you configure ACLs as freeform CLIs on a switch, you should include sequence numbers. Otherwise, there will be a mismatch between the intended and running configuration. A configuration sample with sequence numbers:

ip access-list ACL_VTY 
  10 deny tcp 172.29.171.67/32 172.29.171.36/32 
  20 permit ip any any 
ip access-list vlan65-acl 
  10 permit ip 69.1.1.201/32 65.1.1.11/32 
  20 deny ip any any 

interface Vlan65
  ip access-group vlan65-acl in 
line vty
  access-class ACL_VTY in

If you have configured ACLs without sequence numbers in a freeform_policy policy, update the policy with sequence numbers as shown in the running configuration of the switch.

After the policy is updated and saved, right-click the device and select the per-switch Deploy Config option to deploy the configuration.

Resolving Freeform Config Errors in Switches

Copy-paste the running-config to the freeform config with correct indentation, as seen in the running configuration on the NX-OS switches. The freeform config must match the running config. Otherwise, configuration compliance in Nexus Dashboard Fabric Controller marks switches as out-of-sync.

Let us see an example of the freeform config of a switch.

feature bash-shell
feature telemetry
 
clock timezone CET 1 0
# Daylight saving time is observed in Metropolitan France from the last Sunday in March (02:00 CET) to the last Sunday in October (03:00 CEST)
clock summer-time CEST 5 Sunday March 02:00 5 Sunday October 03:00 60
clock protocol ntp
 
telemetry
  destination-profile
    use-vrf management

The line about daylight saving time is a comment that is not displayed in the show running-config command output. Therefore, configuration compliance marks the switch as out-of-sync because the intent does not match the running configuration.

Let us check the running config in the switch for the clock protocol.

spine1# show run all | grep "clock protocol"
clock protocol ntp vdc 1

You can see that vdc 1 is missing from the freeform config.

In this example, let us copy-paste the running config to the freeform config.

Here is the updated freeform config:

feature bash-shell
feature telemetry
 
clock timezone CET 1 0
clock summer-time CEST 5 Sunday March 02:00 5 Sunday October 03:00 60
clock protocol ntp vdc 1
 
telemetry
  destination-profile
    use-vrf management

After you copy-paste the running config and deploy, the switch will be in-sync. When you click Recalculate Config, click the Pending Config column, and then click Side-by-Side Comparison to view information about the difference between the defined intent and the running config.

Deploying Freeform CLIs on a Specific Switch on a Per VRF/Network basis

  1. Choose LAN > Fabrics > Fabrics.

  2. Select the Fabric, and select Edit Fabric from the Actions drop-down list.

  3. Click the VRFs tab. From the Actions drop-down list, select Create.

    The Create VRF screen comes up.

  4. Select an individual switch. The VRF attachment form is displayed, listing the selected switch. In the case of a vPC pair, both switches belonging to the pair show up.

  5. Under the CLI Freeform column, select the button labeled Freeform config. This option allows a user to specify additional configuration that should be deployed to the switch along with the VRF profile configuration.

  6. Add or update the CLIs in the Free Form Config CLI box. Copy-paste the intended configuration with correct indentation, as seen in the running configuration on the Nexus switches. For more information, see Resolving Freeform Config Errors in Switches.

  7. Click Deploy Config.


    Note


    The Freeform config button will be gray when there is no per VRF per switch config specified. The button will be blue when some config has been saved by the user.


    After the policy is saved, click Save on the VRF Attachment pop-up to save the intent to deploy the VRF to that switch. Ensure that the check box on the left next to the switch is checked.

  8. Now, optionally, click Preview to look at the configuration that will be pushed to the switch.

  9. Click Deploy Config to push the configuration to the switch.

The same procedure can be used to define a per Network per Switch configuration.

MACsec Support in Easy Fabric and eBGP Fabric

MACsec is supported in the Easy Fabric and eBGP Fabric on intra-fabric links. You should enable MACsec on the fabric and on each required intra-fabric link to configure MACsec. Unlike CloudSec, auto-configuration of MACsec is not supported.

MACsec is supported on switches with minimum Cisco NX-OS Releases 7.0(3)I7(8) and 9.3(5).

Guidelines

  • If MACsec cannot be configured on the physical interfaces of the link, an error is displayed when you click Save. MACsec cannot be configured on the device and link due to the following reasons:

    • The minimum NX-OS version is not met.

    • The interface is not MACsec capable.

  • MACsec global parameters in the fabric settings can be changed at any time.

  • MACsec and CloudSec can coexist on a BGW device.

  • MACsec status of a link with MACsec enabled is displayed on the Links window.

  • Brownfield migration of devices with MACsec configured is supported using switch and interface freeform configs.

    For more information about MACsec configuration, which includes supported platforms and releases, see the Configuring MACsec chapter in Cisco Nexus 9000 Series NX-OS Security Configuration Guide.

The following sections show how to enable and disable MACsec in Nexus Dashboard Fabric Controller:

Enabling MACsec

Procedure

Step 1

Navigate to LAN > Fabrics.

Step 2

Click Actions > Create to create a new fabric or click Actions > Edit Fabric on an existing Easy or eBGP fabric.

Step 3

Click the Advanced tab and specify the MACsec details.

Enable MACsec – Select the check box to enable MACsec for the fabric.

MACsec Primary Key String – Specify a Cisco Type 7 encrypted octet string that is used for establishing the primary MACsec session. For AES_256_CMAC, the key string length must be 130 and for AES_128_CMAC, the key string length must be 66. If these values are not specified correctly, an error is displayed when you save the fabric.

Note

 

The default key lifetime is infinite.

MACsec Primary Cryptographic Algorithm – Choose the cryptographic algorithm used for the primary key string. It can be AES_128_CMAC or AES_256_CMAC. The default value is AES_128_CMAC.

You can configure a fallback key on the device to initiate a backup session if the primary session fails.

MACsec Fallback Key String – Specify a Cisco Type 7 encrypted octet string that is used for establishing a fallback MACsec session. For AES_256_CMAC, the key string length must be 130 and for AES_128_CMAC, the key string length must be 66. If these values are not specified correctly, an error is displayed when you save the fabric.

MACsec Fallback Cryptographic Algorithm – Choose the cryptographic algorithm used for the fallback key string. It can be AES_128_CMAC or AES_256_CMAC. The default value is AES_128_CMAC.

MACsec Cipher Suite – Choose one of the following MACsec cipher suites for the MACsec policy:

  • GCM-AES-128

  • GCM-AES-256

  • GCM-AES-XPN-128

  • GCM-AES-XPN-256

The default value is GCM-AES-XPN-256.

Note

 

The MACsec configuration is not deployed on the switches after the fabric deployment is complete. You need to enable MACsec on intra-fabric links to deploy the MACsec configuration on the switch.

MACsec Status Report Timer – Specifies the MACsec operational status periodic report timer, in minutes.

Step 4

Click a fabric to view the Summary in the side kick panel. Click the side kick to expand it, and then click the Links tab.

Step 5

Choose an intra-fabric link on which you want to enable MACsec and click Actions > Edit.

Step 6

In the Link Management – Edit Link window, click Advanced in the Link Profile section, and select the Enable MACsec check box.

If MACsec is enabled on the intra-fabric link but not in the fabric settings, an error is displayed when you click Save.

When MACsec is configured on the link, the following configurations are generated:

  • Create MACsec global policies if this is the first link that enables MACsec.

  • Create MACsec interface policies for the link.
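
As an illustration, the switch-level MACsec configuration generated in this way typically resembles the following NX-OS snippet. The key chain name, policy name, key string, and interface are placeholders, and the exact policies that NDFC generates may differ:

feature macsec

key chain kc1 macsec
  key 1000
    key-octet-string 7 <primary-key-string> cryptographic-algorithm AES_128_CMAC

macsec policy pol1
  cipher-suite GCM-AES-XPN-256

interface Ethernet1/1
  macsec keychain kc1 policy pol1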

Step 7

From the Fabric Actions drop-down list, select Deploy Config to deploy the MACsec configuration.


Disabling MACsec

To disable MACsec on an intra-fabric link, navigate to the Link Management – Edit Link window, unselect the Enable MACsec check box, and click Save. From the Fabric Actions drop-down list, select Deploy Config to deploy the updated configuration. This action performs the following:

  • Deletes MACsec interface policies from the link.

  • If this is the last link where MACsec is enabled, MACsec global policies are also deleted from the device.

Only after disabling MACsec on all links should you navigate to the Fabric Settings and unselect the Enable MACsec check box under the Advanced tab to disable MACsec on the fabric. If there is an intra-fabric link in the fabric with MACsec enabled, an error is displayed when you click Actions > Recalculate Config from the Fabric Actions drop-down list.

Create Easy_Fabric for Cisco Catalyst 9000 Series Switches

You can add Cisco Catalyst 9000 Series Switches to an easy fabric using the Easy_Fabric_IOS_XE fabric template. You can add only Cisco Catalyst 9000 IOS XE switches to this fabric. This fabric supports OSPF as the underlay protocol and BGP EVPN as the overlay protocol. Using this fabric template allows Nexus Dashboard Fabric Controller to manage all the configurations of a VXLAN EVPN fabric composed of Cisco Catalyst 9000 IOS-XE switches. Backing up and restoring this fabric is the same as for the Easy_Fabric.

Guidelines

  • EVPN VXLAN Distributed Anycast Gateway is supported when each SVI is configured with the same Anycast Gateway MAC.

  • StackWise Virtual switch is supported.

  • Brownfield is not supported.

  • Upgrade from earlier versions is not supported (however, it was available as a preview feature in 11.5).

  • IPv6 underlay, VXLAN Multi-Site, Anycast RP, and TRM are not supported.

  • IS-IS, ingress replication, unnumbered intra-fabric links, 4-byte BGP ASN, and Zero-Touch Provisioning (ZTP) are not supported.


Note


For information about configuration compliance, see Configuration Compliance in External Fabrics.


Creating Easy Fabric for Cisco Catalyst 9000 Series Switches

UI Navigation: Choose LAN > Fabrics.

Perform the following steps to create an easy fabric for Cisco Catalyst 9000 Series Switches:

  1. Choose Create Fabric from the Actions drop-down list.

  2. Enter a fabric name and click Choose Template.

    The Select Fabric Template dialog appears.

  3. Choose the Easy_Fabric_IOS_XE fabric template and click Select.

  4. Fill in all the required fields and click Save.


    Note


    BGP ASN is the only mandatory field.


Adding Cisco Catalyst 9000 Series Switches to IOS-XE Easy Fabrics

Cisco Catalyst 9000 series switches are discovered using SNMP. Hence, before adding them to the fabric, configure SNMP views, groups, and users on the Cisco Catalyst 9000 series switches. For more information, see the Configuring IOS-XE Devices for Discovery section.
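
As a minimal sketch, the SNMPv3 configuration on the Catalyst switch might resemble the following. The view, group, and user names and the passwords are placeholders; refer to the Configuring IOS-XE Devices for Discovery section for the exact requirements:

snmp-server view ndfc-view iso included
snmp-server group ndfc-group v3 auth read ndfc-view write ndfc-view
snmp-server user ndfc-user ndfc-group v3 auth sha <auth-password> priv aes 128 <priv-password>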

For StackWise Virtual switches, configure the StackWise Virtual-related configuration before adding them to the fabric.

UI Navigation

Choose any one of the following navigation paths to add switch(es) in the Add Switches window.

  • Choose LAN > Fabrics. Choose a fabric that uses the Easy_Fabric_IOS_XE fabric template from the list, click Actions, and choose Add Switches.

  • Choose LAN > Fabrics. Choose a fabric that uses the Easy_Fabric_IOS_XE fabric template from the list. Click the Switches tab. Click Actions and choose Add Switches.

  • Choose LAN > Switches. Click Actions and choose Add Switches. Click Choose Fabric, choose the IOS-XE VXLAN fabric, and click Select.

Before you begin

Set the default credentials for the device in the LAN Credentials Management window if the default credentials are not set. To navigate to the LAN Credentials Management window from the Cisco Nexus Dashboard Fabric Controller Web UI, choose Settings > LAN Credentials Management.

Procedure

Step 1

Enter values for the following fields:

Field

Description

Seed IP

Enter the IP address of the switch.

You can import more than one switch by providing the IP address range. For example: 10.10.10.40-60

The switches must be properly cabled and reachable to the Cisco Nexus Dashboard Fabric Controller server and the switch status must be manageable.

Authentication Protocol

Choose the authentication protocol from the drop-down list.

Username

Enter the username of the switch(es).

Password

Enter the password of the switch(es).

Note

 

You can change the Discover and LAN credentials only after discovering the switch.

Step 2

Click Discover Switches.

The switch details are populated.

Cisco Nexus Dashboard Fabric Controller supports the import of Cisco Catalyst 9500 Switches running in StackWise Virtual. The StackWise Virtual configuration to form a pair of Cisco Catalyst 9500 Switches into a virtual switch has to be in place before the import. For more information on how to configure StackWise Virtual, see the Configuring Cisco StackWise Virtual chapter in the High Availability Configuration Guide (Catalyst 9500 Switches) for the required release.

Step 3

Check the check boxes next to the switches you want to import.

You can import only switches with the manageable status.

Step 4

Click Add Switches.

The switch discovery process is initiated and the discovery status is updated under the Discovery Status column in the Switches tab.

Step 5

(Optional) View the details of the device.

After the discovery of the device, the discovery status changes to ok in green.


What to do next
  1. Set the appropriate role. The supported roles are:

    • Leaf

    • Spine

    • Border

    To set the role, choose a switch and click Actions. Choose Set role. Choose a role and click Select.


    Note


    After discovering the switch(es), Nexus Dashboard Fabric Controller usually assigns Leaf as the default role.


  2. Recalculate the configurations and deploy the configurations to the switches.

Recalculating and Deploying Configurations

To recalculate and deploy the configurations to the switch(es) in the IOS-XE easy fabric, perform the following steps:

Before you begin

Set the role of the switch(es) in the IOS-XE easy fabric.

Procedure

Step 1

Click Actions from Fabric Overview.

Step 2

Choose Recalculate Config.

Recalculation of configurations starts on the switch(es).


Creating DCI Links for Cisco Catalyst Switches in IOS-XE Easy Fabrics

You can create a VRF-Lite IFC between a Cisco Catalyst 9000 Series Switch with the border role in an IOS-XE easy fabric and another switch in a different fabric. The other switch can be a Nexus switch in an External Fabric, LAN Classic fabric, or Easy Fabric. It can also be a Catalyst 9000 switch in an External Fabric or IOS-XE Easy Fabric. The link can be created only from the IOS-XE Easy Fabric.

For more information, see Links and Templates.


Note


When creating DCI links for IOS-XE Easy Fabric, auto-deploy is supported only if the destination device is a Nexus switch.


To create links for IOS-XE Easy Fabric, perform the following procedure:

  1. Navigate to the Links tab in the fabric overview.

    The list of previously created links is displayed. The list contains intra-fabric links, which are between switches within a fabric, and inter-fabric links, which are between border switches in this fabric and switches in other fabrics.

    The inter-fabric links also support edge router switches in the External Fabric, apart from BGW and Border Leaf/Spine.

  2. Click Actions and choose Create.

    The Create Link window appears. By default, the Intra-Fabric option is chosen as the link type.

  3. From the Link Type drop-down box, choose Inter-Fabric. The fields change correspondingly.

  4. Choose VRF_LITE as the link sub-type, ext_fabric_setup template for VRF_LITE IFC, and IOS-XE fabric as the source fabric.

    Link Template: The link template is populated.

    The templates are autopopulated with corresponding pre-packaged default templates that are based on your selection. The template to use for VRF_LITE IFC is ext_fabric_setup.


    Note


    You can add, edit, or delete only the ext_routed_fabric template. For more information, see Templates.


  5. Choose the IOS-XE fabric as the source fabric from the Source Fabric drop-down list.

  6. Choose a destination fabric from the Destination Fabric drop-down list.

  7. Choose the source device and Ethernet interface that connects to the destination device.

  8. Choose the destination device and Ethernet interface that connects to the source device.

  9. Enter values in other fields accordingly.

  10. Click Save.


Note


Instead of the create action, you can also use the Edit action to create VRF-Lite IFC(s) using the existing inter-fabric link(s). Choose the VRF_Lite link sub-type. By default, if you select Edit, the data for the Link-Type, Source Fabric, Destination Fabric, Source Device, Destination Device, Source Interface, and Destination Interface fields is auto-populated in the Edit Link window.

Choose VRF_LITE as the link sub-type, ext_fabric_setup template for VRF_LITE IFC, and IOS-XE fabric as the source fabric.

To complete the procedure, repeat step 4 to step 10 mentioned above.


Creating VRFs for Cisco Catalyst 9000 Series Switches in IOS-XE Easy Fabrics

UI Navigation

  • Choose LAN > Fabrics. Click on a fabric to open the Fabric slide-in pane. Click the Launch icon. Choose Fabric Overview > VRFs > VRFs.

  • Choose LAN > Fabrics. Double-click on a fabric to open Fabric Overview > VRFs > VRFs.

You can create VRFs for IOS-XE easy fabrics.

To create VRF from the Cisco Nexus Dashboard Fabric Controller Web UI, perform the following steps:

  1. Click Actions and choose Create.

    The Create VRF window appears.

  2. Enter the required details in the mandatory fields. Some of the fields have default values.

    The fields in this window are:

    VRF Name - Specifies a VRF name automatically or allows you to enter a name for Virtual Routing and Forwarding (VRF). The VRF name should not contain any white spaces or special characters except underscore (_), hyphen (-), and colon (:).

    VRF ID - Specifies the ID for the VRF or allows you to enter an ID for the VRF.

    VLAN ID - Specifies the corresponding tenant VLAN ID for the network or allows you to enter an ID for the VLAN. If you want to propose a new VLAN for the network, click Propose Vlan.

    VRF Template - A universal template is autopopulated. This is only applicable for leaf switches. The default template for IOS_XE Easy Fabric is the IOS_XE_VRF template.

    VRF Extension Template - A universal extension template is autopopulated. This allows you to extend this network to another fabric. The default template for IOS_XE Easy Fabric is the IOS_XE_VRF template.

    The VRF profile section contains the General Parameters and Advanced tabs.

  3. The fields on the General tab are:

    VRF Description - Enter a description for the VRF.

    VRF Intf Description - Specifies the description for the VRF interface.

  4. Click the Advanced tab to optionally specify the advanced profile settings. The fields on the Advanced tab are:

    Redistribute Direct Route Map - Specifies the redistribute direct route map name.

    Max BGP Paths - Specifies the maximum BGP paths. The valid value range is between 1 and 64.

    Max iBGP Paths - Specifies the maximum iBGP paths. The valid value range is between 1 and 64.

    Advertise Host Routes - Enable this check box to control advertisement of /32 and /128 routes to Edge routers.

    Advertise Default Route - Enable this check box to control advertisement of default route internally.

    Config Static 0/0 Route - Enable this check box to control configuration of static default route.

  5. Click Create to create the VRF or click Cancel to discard the VRF.

    A message appears indicating that the VRF is created.

    The new VRF appears on the VRFs horizontal tab. The status is NA as the VRF is created but not yet deployed. Now that the VRF is created, you can create and deploy networks on the devices in the fabric.

What to do next

Attach the VRF.

Create a loopback interface and select the VRF_LITE extension.

For more information about attaching and detaching VRFs, see VRF Attachments.

Attaching VRFs on Cisco Catalyst 9000 Series Switches in IOS-XE Easy Fabrics

To attach the VRFs on the Cisco Catalyst 9000 Series Switches in the IOS-XE easy fabric, see VRF Attachments.


Note


Choose the VRF corresponding to the CAT9000 series switch by checking the check box next to it.



Note


Similarly, you can create a loopback interface, and select VRF_LITE extension.


What to do next

Deploy the configurations as follows:

  1. Click Actions in Fabric Overview.

  2. Choose Deploy config to switches.

  3. Click Deploy after the configuration preview is complete.

  4. Click Close after the deployment is complete.

Creating and Deploying Networks in IOS-XE Easy Fabrics

The next step is to create and deploy networks in IOS-XE Easy Fabrics.


Note


  • The Network Template and Network Extension Template use the default IOS_XE_Network template that was created for the IOS-XE easy fabric.


UI Navigation

The following options are applicable only for switch fabrics, easy fabrics, and MSD fabrics:

  • Choose LAN > Fabrics. Click on a fabric to open the Fabric slide-in pane. Click the Launch icon. Choose Fabric Overview > Networks.

  • Choose LAN > Fabrics. Double-click on a fabric to open Fabric Overview > Networks.

Creating Networks for IOS-XE Easy Fabrics

To create a network for an IOS-XE easy fabric from the Cisco Nexus Dashboard Fabric Controller Web UI, perform the following steps:

  1. On the Networks horizontal tab, click Actions and choose Create.

    The Create Network window appears.

  2. Enter the required details in the mandatory fields.

    The fields in this window are:

    Network ID and Network Name - Specifies the Layer 2 VNI and name of the network. The network name should not contain any white spaces or special characters except underscore (_) and hyphen (-).

    Layer 2 Only - Specifies whether the network is Layer 2 only.

    VRF Name - Allows you to select the Virtual Routing and Forwarding (VRF).

    When no VRF is created, this field appears blank. If you want to create a new VRF, click Create VRF. The VRF name should not contain any white spaces or special characters except underscore (_), hyphen (-), and colon (:).

    VLAN ID - Specifies the corresponding tenant VLAN ID for the network. If you want to propose a new VLAN for the network, click Propose VLAN.

    Network Template - A universal template is autopopulated. This is only applicable for leaf switches.

    Network Extension Template - A universal extension template is autopopulated. This allows you to extend this network to another fabric. The VRF Lite extension is supported. The template is applicable for border leaf switches.

    Generate Multicast IP - If you want to generate a new multicast group address and override the default value, click Generate Multicast IP.

    The network profile section contains the General and Advanced tabs.

  3. The fields on the General tab are:


    Note


    If the network is a non Layer 2 network, then it is mandatory to provide the gateway IP address.


    IPv4 Gateway/NetMask - Specifies the IPv4 address with subnet.

    Specify the anycast gateway IP address for transporting the L3 traffic from a server belonging to MyNetwork_30000 and a server from another virtual network. The anycast gateway IP address is the same for MyNetwork_30000 on all switches of the fabric that have the presence of the network.


    Note


    If the same IP address is configured in the IPv4 Gateway and IPv4 Secondary GW1 or GW2 fields of the network template, Nexus Dashboard Fabric Controller does not show an error, and you will be able to save this configuration.

    However, after the network configuration is pushed to the switch, it would result in a failure as the configuration is not allowed by the switch.


    IPv6 Gateway/Prefix List - Specifies the IPv6 address with subnet.

    Vlan Name - Enter the VLAN name.

    Vlan Interface Description - Specifies the description for the interface. This interface is a switch virtual interface (SVI).

    IPv4 Secondary GW1 - Enter the gateway IP address for the additional subnet.

    IPv4 Secondary GW2 - Enter the gateway IP address for the additional subnet.

  4. Click the Advanced tab to optionally specify the advanced profile settings. The fields on the Advanced tab are:

    Multicast Group Address - The multicast IP address for the network is autopopulated.

    Multicast group address is a per fabric instance variable and remains the same for all networks by default. If a new multicast group address is required for this network, you can generate it by clicking the Generate Multicast IP button.

    DHCPv4 Server 1 - Enter the DHCP relay IP address of the first DHCP server.

    DHCPv4 Server VRF - Enter the DHCP server VRF ID.

    DHCPv4 Server 2 - Enter the DHCP relay IP address of the next DHCP server.

    DHCPv4 Server2 VRF - Enter the DHCP server VRF ID.

    Loopback ID for DHCP Relay interface (Min:0, Max:1023) - Specifies the loopback ID for DHCP relay interface.

    Enable L3 Gateway on Border - Select the check box to enable a Layer 3 gateway on the border switches.

  5. Click Create.

    A message appears indicating that the network is created.

    The new network appears on the Networks page that comes up.

    The Status is NA since the network is created but not yet deployed on the switches. Now that the network is created, you can create more networks if needed and deploy the networks on the devices in the fabric.

Deploying Networks in IOS-XE Easy Fabrics

You can deploy networks in IOS-XE easy fabrics as follows:

  • The network configurations can also be deployed in the Fabric Overview window as follows:

    1. Click Actions in the fabric overview.

    2. Choose Deploy config to switches.

    3. Click Deploy after the configuration preview is complete.

    4. Click Close after the deployment is complete.

  • To deploy the network in the IOS-XE easy fabric, see Network Attachments.

External Fabrics

You can add switches to the external fabric. Generic pointers:

  • NDFC will not generate "no router bgp". If you want to update the ASN and there is no existing BGP configuration that you need to retain, go to the switch, run "no feature bgp", and then perform a re-sync.

  • The external fabric is a monitor-only or managed mode fabric.

  • From Cisco Nexus Dashboard Fabric Controller Release 12.0.1, Cisco IOS-XR family devices Cisco ASR 9000 Series Aggregation Services Routers and Cisco Network Convergence System (NCS) 5500 Series are supported in external fabric in managed mode and monitor mode. NDFC will generate and push configurations to these switches, and configuration compliance will also be enabled for these platforms.

  • From Cisco Nexus Dashboard Fabric Controller Release 12.1.1e, you can also add Cisco 8000 Series Routers to external fabrics both in managed mode and monitored mode, and configuration compliance is also supported.

  • You can import, remove, and delete switches for an external fabric.

  • For Inter-Fabric Connection (IFC) cases, you can choose Cisco Nexus 9000, 7000, and 5600 Series switches as destination switches in the external fabric.

  • You can use non-existing switches as destination switches.

  • The template that supports an external fabric is External_Fabric.

  • If an external fabric is an MSD fabric member, then the MSD topology screen displays the external fabric with its devices, along with the member fabrics and their devices.

    When viewed from an external fabric topology screen, any connections to non-Nexus Dashboard Fabric Controller managed switches are represented by a cloud icon labeled as Undiscovered.

  • You can set up a Multi-Site or a VRF-lite IFC by manually configuring the links for the border devices in the VXLAN fabric or by using an automatic Deploy Border Gateway Method or VRF Lite IFC Deploy Method. If you are configuring the links manually for the border devices, we recommend using the Core Router role to set up a Multi-Site eBGP underlay from a Border Gateway device to a Core Router and the Edge Router role to set up a VRF-lite Inter-Fabric Connection (IFC) from a Border device to an Edge device.

  • If you are using the Cisco Nexus 7000 Series Switch with Cisco NX-OS Release 6.2(24a) on the LAN Classic or External fabrics, make sure to enable AAA IP Authorization in the fabric settings.

  • You can discover the following non-Nexus devices in an external fabric:

    • IOS-XE family devices: Cisco CSR 1000v, Cisco IOS XE Gibraltar 16.10.x, Cisco ASR 1000 Series routers, and Cisco Catalyst 9000 Series Switches

    • IOS-XR family devices: Cisco ASR 9000 Series Routers (IOS XR Release 6.5.2) and Cisco NCS 5500 Series Routers (IOS XR Release 6.5.3)

    • Arista 4.2 (Any model)

  • Configure all the non-Nexus devices, except Cisco CSR 1000v, before adding them to the external fabric.

  • You can configure non-Nexus devices as borders. You can create an IFC between a non-Nexus device in an external fabric and a Cisco Nexus device in an easy fabric. The interfaces supported for these devices are:

    • Routed

    • Subinterface

    • Loopback

  • You can configure Cisco ASR 1000 Series routers and Cisco Catalyst 9000 Series switches as edge routers, set up a VRF-lite IFC, and connect them as border devices with an easy fabric.

  • Before a VDC reload, discover Admin VDC in the fabric. Otherwise, the reload operation does not occur.

  • You can connect a Cisco data center to a public cloud using Cisco CSR 1000v. See the Connecting Cisco Data Center and a Public Cloud chapter for a use case.

  • In an external fabric, when you add the switch_user policy and provide the username and password, the password must be an encrypted string that is displayed in the show run command.

    For example:

    username admin password 5 $5$I4sapkBh$S7B7UcPH/iVTihLKH5sgldBeS3O2X1StQsvv3cmbYd1  role network-admin

    In this case, the entered password should be 5$5$I4sapkBh$S7B7UcPH/iVTihLKH5sgldBeS3O2X1StQsvv3cmbYd1.

  • For Cisco Network Insights for Resources (NIR) Release 2.1 and later, and for flow telemetry, the feature lldp command is one of the required configurations.

    Cisco Nexus Dashboard Fabric Controller pushes feature lldp on the switches only for the Easy Fabric deployments, that is, for the eBGP routed fabric or VXLAN EVPN fabric.

    Therefore, NIR users need to enable feature lldp on all the switches in the following scenarios:

    • External fabric in Monitored or Managed Mode

    • LAN Classic fabric in Monitored or Managed Mode

  • Backup/restore is only supported for Nexus devices on external fabrics.


    Note


    Before you do fabric or switch restore, ensure that the target device is supported. If the target device is not supported, then per switch restore will be blocked, and the same will be shown as not supported during fabric-wide restore.


Move an External Fabric Under an MSD Fabric

You should go to the MSD fabric page to associate an external fabric as its member.

  1. On Topology, click within the MSD-Parent-Fabric. From Actions drop-down list, select Move Fabrics.

    The Move Fabric screen comes up. It contains a list of fabrics. The external fabric is displayed as a standalone fabric.

  2. Select the radio button next to the external fabric and click Add.

    Now, in the Scope drop-down box at the top right, you can see that the external fabric appears under the MSD fabric.

External Fabric Depiction in an MSD Fabric Topology

The MSD topology screen displays MSD member fabrics and external fabrics together. The external fabric External65000 is displayed as part of the MSD topology.


Note


When you deploy networks or VRFs for the VXLAN fabric, the deployment page (MSD topology view) shows the VXLAN and external fabrics that are connected to each other.


Creating an External Fabric

To create an external fabric using the Cisco Nexus Dashboard Fabric Controller Web UI, perform the following steps:

Procedure

Step 1

Choose LAN > Fabrics > Fabrics.

Step 2

From the Actions drop-down list, select Create Fabric.

Step 3

Enter a unique name for the fabric and click Choose Template.

Step 4

From the drop-down list, select the External_Fabric template.

The fields in this screen are:

BGP AS # – Enter the BGP AS number.

Fabric Monitor Mode – Clear the check box if you want Nexus Dashboard Fabric Controller to manage the fabric. Keep the check box selected to enable a monitor only external fabric.

From Cisco Nexus Dashboard Fabric Controller Release 12.1.1e, you can also add Cisco 8000 Series Routers to external fabrics both in managed mode and monitored mode.

When you create an Inter-Fabric Connection from a VXLAN fabric to this external fabric, the BGP AS number is referenced as the external or neighbor fabric AS Number.

When an external fabric is set to Fabric Monitor Mode Only, you cannot deploy configurations on its switches. If you click Deploy Config, it displays an error message.

The configurations must be pushed for non-Nexus devices before you discover them in the fabric. You cannot push configurations in the monitor mode.

Enable Performance Monitoring – Check this check box to enable performance monitoring on NX-OS switches only.

Ensure that you do not clear interface counters from the Command Line Interface of the switches. Clearing interface counters can cause the Performance Monitor to display incorrect data for traffic utilization. If you must clear the counters and the switch has both clear counters and clear counters snmp commands (not all switches have the clear counters snmp command), ensure that you run both the main and the SNMP commands simultaneously. For example, you must run the clear counters interface ethernet slot/port command followed by the clear counters interface ethernet slot/port snmp command. This can lead to a one-time spike.

Step 5

Enter values in the fields under the Advanced tab.

Power Supply Mode – Choose the appropriate power supply mode.

Enable MPLS Handoff – Select the check box to enable the MPLS Handoff feature. For more information, see the MPLS SR and LDP Handoff chapter in External/WAN Layer 3 Connectivity for VXLAN BGP EVPN Fabrics.

Underlay MPLS Loopback Id – Specifies the underlay MPLS loopback ID. The default value is 101.

Enable AAA IP Authorization – Enables AAA IP authorization, after IP authorization is enabled on the AAA server.

Enable Nexus Dashboard Fabric Controller as Trap Host – Select this check box to enable Nexus Dashboard Fabric Controller as a trap host.

Enable CDP for Bootstrapped Switch – Select the check box to enable CDP for bootstrapped switch.

Enable NX-API – Specifies enabling of NX-API on HTTPS. This check box is unchecked by default.

Enable NX-API on HTTP – Specifies enabling of NX-API on HTTP. This check box is unchecked by default. Enable this check box and the Enable NX-API check box to use HTTP. If you uncheck this check box, the applications that use NX-API and are supported by Cisco Nexus Dashboard Fabric Controller, such as Endpoint Locator (EPL), Layer 4-Layer 7 services (L4-L7 services), VXLAN OAM, and so on, start using HTTPS instead of HTTP.

Note

 

If you check the Enable NX-API check box and the Enable NX-API on HTTP check box, applications use HTTP.

Inband Mgmt – For External and Classic LAN fabrics, this setting enables Nexus Dashboard Fabric Controller to import and manage switches with inband connectivity (reachable over a switch loopback, routed interface, or SVI interface), in addition to managing switches with out-of-band connectivity (that is, reachable over the switch mgmt0 interface). The only requirement is that, for inband-managed switches, there must be IP reachability from Nexus Dashboard Fabric Controller to the switches over the Nexus Dashboard data interface, also known as the inband interface. For this purpose, static routes may be needed on the Nexus Dashboard Fabric Controller, which in turn can be configured from Administration > Customization > Network Preferences. After enabling inband management, during discovery provide the IPs of all the switches to be imported using inband management and set the maximum hops to 0. Nexus Dashboard Fabric Controller has a precheck that validates that the inband-managed switch IPs are reachable over the Nexus Dashboard data interface. After completing the precheck, Nexus Dashboard Fabric Controller discovers and learns about the interface on that switch that has the specified discovery IP, in addition to the VRF that the interface belongs to. As part of the switch import/discovery process, this information is captured in the baseline intent that is populated on the Nexus Dashboard Fabric Controller. For more information, see Inband Management in External Fabrics and LAN Classic Fabrics.

Note

 

Bootstrap or POAP is only supported for switches that are reachable over out-of-band connectivity, that is, over switch mgmt0. The various POAP services on the Nexus Dashboard Fabric Controller are typically bound to the eth1 or out-of-band interface. In scenarios, where Nexus Dashboard Fabric Controller eth0/eth1 interfaces reside in the same IP subnet, the POAP services are bound to both interfaces.

Enable Precision Time Protocol (PTP) – Enables PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. You can also edit PTP Source Loopback Id and PTP Domain Id fields. For more information, see Precision Time Protocol for External Fabrics and LAN Classic Fabrics.

PTP Source Loopback Id – Specifies the ID of the loopback interface that is used as the source IP address for all PTP packets. The valid values range from 0 to 1023. The PTP loopback ID cannot be the same as the RP, Phantom RP, NVE, or MPLS loopback ID; otherwise, an error is generated. The PTP loopback ID can be the same as the BGP loopback or a user-defined loopback that is created from Nexus Dashboard Fabric Controller. If the PTP loopback ID is not found during Save & Deploy, the following error is generated: Loopback interface to use for PTP source IP is not found. Please create PTP loopback interface on all the devices to enable PTP feature.

PTP Domain Id – Specifies the PTP domain ID on a single network. The valid values range from 0 to 127.
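For illustration only, the resulting global PTP configuration on an NX-OS switch is broadly along these lines; the source address, domain ID, and core-facing interface are placeholders and the generated configuration may differ:

feature ptp
ptp source 10.2.0.1
ptp domain 1
!
interface Ethernet1/1
  ptp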

Fabric Freeform – You can apply configurations globally across all the devices that are discovered in the external fabric using this freeform field. The devices in the fabric should belong to the same device-type and the fabric should not be in monitor mode. The different device types are:

  • NX-OS

  • IOS-XE

  • IOS-XR

  • Others

Depending on the device types, enter the configurations accordingly. If some of the devices in the fabric do not support these global configurations, they go out-of-sync or fail during the deployment. Hence, ensure that the configurations you apply are supported on all the devices in the fabric or remove the devices that do not support these configurations.
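For example, for a fabric in which all devices are NX-OS, a global freeform entry could look like the following; the server addresses are placeholders:

logging server 172.16.1.10
ntp server 172.16.1.20 use-vrf management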

AAA Freeform Config – You can apply AAA configurations globally across all devices that are discovered in the external fabric using this freeform field.

Step 6

Fill in the fields on the Resources tab as explained below.

Subinterface Dot1q Range – The subinterface 802.1Q range and the underlay routing loopback IP address range are autopopulated.

Underlay MPLS Loopback IP Range – Specifies the underlay MPLS SR or LDP loopback IP address range.

The IP range should be unique, that is, it should not overlap with IP ranges of the other fabrics.

Step 7

Fill in the fields on the Configuration Backup tab as described below.

The fields on this tab are:

Hourly Fabric Backup – Select the check box to enable an hourly backup of fabric configurations and the intent.

You can enable an hourly backup for fresh fabric configurations and the intent as well. If there is a configuration push in the previous hour, Nexus Dashboard Fabric Controller takes a backup. Unlike a VXLAN fabric, in an external fabric the entire configuration on the switch is not converted to intent on Nexus Dashboard Fabric Controller. Therefore, for the external fabric, both intent and running configuration are backed up.

Intent refers to configurations that are saved in Nexus Dashboard Fabric Controller but not yet provisioned on the switches.

The hourly backups are triggered during the first 10 minutes of the hour.

Scheduled Fabric Backup – Check the check box to enable a daily backup. This backup tracks changes in running configurations on the fabric devices that are not tracked by configuration compliance.

Scheduled Time: Specify the scheduled backup time in a 24-hour format. This field is enabled if you check the Scheduled Fabric Backup check box.

Select both the check boxes to enable both backup processes.

The backup process is initiated after you click Save.

The scheduled backups are triggered exactly at the time that you specify with a delay of up to two minutes. The scheduled backups are triggered regardless of the configuration deployment status.

You can also initiate the fabric backup in the fabric topology window. Click Backup Fabric in the Actions pane.

The backups contain running configuration and intent that is pushed by Nexus Dashboard Fabric Controller. Configuration compliance forces the running config to be the same as the Nexus Dashboard Fabric Controller config. Note that for the external fabric, only some configurations are part of intent and the remaining configurations are not tracked by Nexus Dashboard Fabric Controller. Therefore, as part of backup, both Nexus Dashboard Fabric Controller intent and running config from switch are captured.

Step 8

Click the Bootstrap tab.

Enable Bootstrap – Select this check box to enable the bootstrap feature.

After you enable bootstrap, you can enable the DHCP server for automatic IP address assignment using one of the following methods:

  • External DHCP Server: Enter information about the external DHCP server in the Switch Mgmt Default Gateway and Switch Mgmt IP Subnet Prefix fields.

  • Local DHCP Server: Enable the Local DHCP Server check box and enter details for the remaining mandatory fields.

From Cisco NDFC Release 12.1.1e, you can choose Inband POAP or out-of-band POAP for External fabrics.

Enable Inband POAP – Choose this check box to enable Inband POAP.

Note

 

You must enable Inband Mgmt on the Advanced tab to enable this option.

Enable Local DHCP Server – Select this check box to initiate enabling of automatic IP address assignment through the local DHCP server. When you choose this check box, all the remaining fields become editable.

DHCP Version – Select DHCPv4 or DHCPv6 from this drop-down list. When you select DHCPv4, the Switch Mgmt IPv6 Subnet Prefix field is disabled. If you select DHCPv6, the Switch Mgmt IP Subnet Prefix field is disabled.

Note

 

Cisco Nexus Dashboard Fabric Controller IPv6 POAP is not supported with Cisco Nexus 7000 Series Switches. Cisco Nexus 9000 and 3000 Series Switches support IPv6 POAP only when switches are either L2 adjacent (eth1 or out-of-band subnet must be a /64) or they are L3 adjacent residing in some IPv6 /64 subnet. Subnet prefixes other than /64 are not supported.

If you do not select this check box, Nexus Dashboard Fabric Controller uses the remote or external DHCP server for automatic IP address assignment.

DHCP Scope Start Address and DHCP Scope End Address – Specifies the first and last IP addresses of the IP address range to be used for the switch out-of-band POAP.

Switch Mgmt Default Gateway – Specifies the default gateway for the management VRF on the switch.

Switch Mgmt IP Subnet Prefix – Specifies the prefix for the Mgmt0 interface on the switch. The prefix range is 8-30.

DHCP scope and management default gateway IP address specification - If you specify the management default gateway IP address 10.0.1.1 and subnet mask 24, ensure that the DHCP scope is within the specified subnet, between 10.0.1.2 and 10.0.1.254.

Switch Mgmt IPv6 Subnet Prefix – Specifies the IPv6 prefix for the Mgmt0 interface on the switch. The prefix should be from 112 through 126. This field is editable if you enable IPv6 for DHCP.

Enable AAA Config – Select this check box to include AAA configs from the Advanced tab during device bootup.

Bootstrap Freeform Config - (Optional) Enter other commands as needed. For example, if you are using AAA or remote authentication-related configurations, add these configurations in this field to save the intent. After the devices boot up, they contain the intent that is defined in the Bootstrap Freeform Config field.

Copy-paste the running-config to a freeform config field with correct indentation, as seen in the running configuration on the NX-OS switches. The freeform config must match the running config. For more information, see Enabling Freeform Configurations on Fabric Switches.
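For example, a TACACS+ snippet entered in the Bootstrap Freeform Config field could look like the following, with the same indentation as in the NX-OS running configuration; the server address, key, and group name are placeholders:

feature tacacs+
tacacs-server host 10.64.5.16 key 7 "xxxxxx"
aaa group server tacacs+ AAA_TACACS
  server 10.64.5.16
  use-vrf management
  source-interface mgmt0
aaa authentication login default group AAA_TACACS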

DHCPv4/DHCPv6 Multi Subnet Scope - Enter one subnet scope per line in this field. This field is editable after you check the Enable Local DHCP Server check box.

The format of the scope should be defined as:

DHCP Scope Start Address, DHCP Scope End Address, Switch Management Default Gateway, Switch Management Subnet Prefix

for example: 10.6.0.2, 10.6.0.9, 10.6.0.1, 24
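For instance, two subnet scopes entered on separate lines would look like the following; the addresses are placeholders:

10.6.0.2, 10.6.0.9, 10.6.0.1, 24
10.7.0.2, 10.7.0.9, 10.7.0.1, 24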

Step 9

Click the Flow Monitor tab. The fields on this tab are as follows.

Enable NetFlow – Check this check box to enable NetFlow on VTEPs for this fabric. By default, NetFlow is disabled. When enabled, NetFlow configuration is applied to all VTEPs that support NetFlow.

Note: When NetFlow is enabled on the fabric, you can choose not to have NetFlow on a particular switch by having a dummy no_netflow PTI.

If NetFlow is not enabled at the fabric level, an error message is generated when you enable NetFlow at the interface, network, or VRF level. For information about NetFlow support for Cisco NDFC, see Netflow Support.

In the NetFlow Exporter area, click Actions > Add to add one or more NetFlow exporters. This exporter is the receiver of the NetFlow data. The fields on this screen are:

  • Exporter Name – Specifies the name of the exporter.

  • IP – Specifies the IP address of the exporter.

  • VRF – Specifies the VRF over which the exporter is routed.

  • Source Interface – Enter the source interface name.

  • UDP Port – Specifies the UDP port over which the NetFlow data is exported.

Click Save to configure the exporter. Click Cancel to discard. You can also choose an existing exporter and select Actions > Edit or Actions > Delete to perform relevant actions.

In the NetFlow Record area, click Actions > Add to add one or more NetFlow records. The fields on this screen are:

  • Record Name – Specifies the name of the record.

  • Record Template – Specifies the template for the record. Enter one of the record template names. In Release 12.0.2, the following two record templates are available for use. You can create custom NetFlow record templates. Custom record templates that are saved in the template library are available for use here.

    • netflow_ipv4_record – to use the IPv4 record template.

    • netflow_l2_record – to use the Layer 2 record template.

  • Is Layer 2 Record – Check this check box if the record is for Layer 2 NetFlow.

Click Save to configure the report. Click Cancel to discard. You can also choose an existing record and select Actions > Edit or Actions > Delete to perform relevant actions.

In the NetFlow Monitor area, click Actions > Add to add one or more NetFlow monitors. The fields on this screen are:

  • Monitor Name – Specifies the name of the monitor.

  • Record Name – Specifies the name of the record for the monitor.

  • Exporter1 Name – Specifies the name of the exporter for the NetFlow monitor.

  • Exporter2 Name – (optional) Specifies the name of the secondary exporter for the NetFlow monitor.

The record name and exporters referred to in each NetFlow monitor must be defined in Netflow Record and Netflow Exporter.

Click Save to configure the monitor. Click Cancel to discard. You can also choose an existing monitor and select Actions > Edit or Actions > Delete to perform relevant actions.
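For illustration only, an exporter, record, and monitor defined here translate into NX-OS configuration broadly along these lines; the names, addresses, UDP port, and match/collect fields are placeholders and depend on the record template you choose:

feature netflow
flow exporter exp1
  destination 192.168.10.100 use-vrf management
  source mgmt0
  transport udp 9995
flow record ipv4-record
  match ipv4 source address
  match ipv4 destination address
  collect counter bytes
  collect counter packets
flow monitor mon1
  record ipv4-record
  exporter exp1
! The monitor is then applied to the relevant interfaces or VLANs.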

Step 10

Click Save.

After the external fabric is created, the external fabric topology page comes up.

After creating the external fabric, add switches to it.


Adding Switches to the External Fabric

Switches in each fabric are unique, and hence, each switch can only be added to one fabric. To add switches to the external fabric, perform the following steps:

Procedure

Step 1

Choose LAN > Switches. From the Actions drop-down list, select Add Switches.

You can also add switches to a Fabric from LAN > Fabrics. Select a fabric and view the Summary. On the Switches tab, from the Actions drop-down list, select Add switches to add switches to the selected Fabric.

From Topology, right click on the Fabric and select Add Switches.

Step 2

Select Discover to discover new switches. Select Move Neighbor Switches to add existing switches to the Fabric.

Step 3

If you select Discover option, perform the following steps:

  1. Enter the IP address (Seed IP) of the switch.

  2. In the Authentication Protocol field, from the drop-down list, select the appropriate protocol to add switches to the Fabric.

  3. Choose the device type from the Device Type drop-down list.

    The options are NX-OS, IOS XE, IOS XR, and Other.

    • Select NX-OS to discover a Cisco Nexus switch.

    • Select IOS XE to discover a CSR device.

    • Select IOS XR to discover an ASR device.

    • Select Other to discover non-Cisco devices.

    Refer to the Adding Non-Nexus Devices to External Fabrics section for more information on adding other non-Nexus devices.

    Config compliance is disabled for all non-Nexus devices except for Cisco CSR 1000v.

  4. Enter the administrator username and password of the switch.

  5. Click Discover Switches at the bottom part of the screen.

The Scan Details section comes up shortly. Since the Max Hops field was populated with 2, the switch with the specified IP address and switches two hops from it are populated.

Select the check boxes next to the concerned switches and click Add Switches into fabric.

You can discover multiple switches at the same time. The switches must be properly cabled and connected to the Nexus Dashboard Fabric Controller server and the switch status must be manageable.

The switch discovery process is initiated. The Progress column displays the progress. After Nexus Dashboard Fabric Controller discovers the switch, click Close to revert to the previous screen.

Step 4

If you select Move Neighbor Switches option, select the switch and click Move Switch.

The selected switch is moved to the External Fabric.


Switch Settings for External Fabrics

External Fabric Switch Settings vary from the VXLAN fabric switch settings. Double-click on the switch to view the Switch Overview screen to edit/modify options.

The options are:

Set Role – By default, no role is assigned to an external fabric switch. You can assign the desired role to the switch. Assign the Core Router role for a Multi-Site Inter-Fabric Connection (IFC) and the Edge Router role for a VRF Lite IFC between the external fabric and VXLAN fabric border devices.


Note


Changing of switch role is allowed only before executing Deploy Config.


vPC Pairing – Select a switch for vPC and then select its peer.

Change Modes – Allows you to modify the mode of the switch from Active to Operational.

Manage Interfaces – Deploy configurations on the switch interfaces.

Straight-through FEX, Active/Active FEX, and breakout of interfaces are not supported for external fabric switch interfaces.

View/edit Policies – Add, update, and delete policies on the switch. The policies you add to a switch are template instances of the templates available in the template library. After creating policies, deploy them on the switch using the Deploy option available in the View/edit Policies screen.

History – View per switch deployment history.

Recalculate Config – View the pending configuration and the side-by-side comparison of the running and expected configuration.

Deploy Config – Deploy per switch configurations.

Discovery – You can use this option to update the credentials of the switch, reload the switch, rediscover the switch, and remove the switch from the fabric.

Click Deploy from the Actions drop-down list. The template and interface configurations form the configuration provisioning on the switches.

When you click Deploy, the Deploy Configuration screen comes up.

Click Config at the bottom part of the screen to initiate deployment of the pending configuration onto the switch. The Deploy Progress screen displays the progress and the status of configuration deployment.

Click Close after the deployment is complete.


Note


If a switch in an external fabric does not accept default credentials, you should perform one of the following actions:

  • Remove the switch in the external fabric from inventory, and then rediscover.

  • LAN discovery uses both SNMP and SSH, so both passwords need to be the same. You need to change the SSH password to match the SNMP password on the switch. If SNMP authentication fails, discovery is stopped with an authentication error. If SNMP authentication passes but SSH authentication fails, Nexus Dashboard Fabric Controller discovery continues, but the switch status shows a warning for the SSH error.


Discovering New Switches

To discover new switches, perform the following steps:
Procedure

Step 1

Power on the new switch in the external fabric after ensuring that it is cabled to the Nexus Dashboard Fabric Controller server.

Boot the Cisco NX-OS and setup switch credentials.

Step 2

Execute the write erase and reload commands on the switch.

Choose Yes for both CLI prompts that ask you to choose Yes or No.
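On NX-OS this typically looks like the following; the exact prompts can vary by release:

switch# write erase
Warning: This command will erase the startup-configuration.
Do you wish to proceed anyway? (y/n)  [n] y
switch# reload
This command will reboot the system. (y/n)?  [n] y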

Step 3

On the Nexus Dashboard Fabric Controller UI, select the External Fabric. Choose Edit Fabric from the Actions drop-down list.

The Edit Fabric screen is displayed.

Step 4

Click the Bootstrap tab and update the DHCP information.

Step 5

Click Save at the bottom right part of the Edit Fabric screen to save the settings.

Step 6

Double click on the Fabric to view the Fabric Overview.

Step 7

On Switches tab, from the Actions drop-down list, select Add Switches.

Step 8

Click the POAP tab.

In an earlier step, the reload command was executed on the switch. When the switch restarts, Nexus Dashboard Fabric Controller retrieves the serial number, model number, and version from the switch and displays them on the Inventory Management screen. An option to add the management IP address, hostname, and password is also made available. If the switch information is not retrieved, refresh the screen using the Refresh icon at the top right part of the screen.

Note

 
At the top left part of the screen, export and import options are provided to export and import the .csv file that contains the switch information. You can pre-provision a device using the import option too.

Select the checkbox next to the switch and add switch credentials: IP address and host name.

Based on the IP address of your device, you can either add the IPv4 or IPv6 address in the IP Address field.

You can provision devices in advance.

Step 9

In the Admin Password and Confirm Admin Password fields, enter and confirm the admin password.

This admin password is applicable for all the switches displayed in the POAP window.

Note

 

If you do not want to use admin credentials to discover switches, you can instead use the AAA authentication, that is, RADIUS or TACACS credentials for discovery only.

Step 10

(Optional) Use discovery credentials for discovering switches.

  1. Click the Add Discovery Credentials icon to enter the discovery credentials for switches.

  2. In the Discovery Credentials window, enter the discovery credentials such as discovery username and password.

    Click OK to save the discovery credentials.

    If the discovery credentials are not provided, Nexus Dashboard Fabric Controller uses the admin user and password to discover switches.

    Note

     
    • The discovery credentials that can be used are AAA authentication based credentials, that is, RADIUS or TACACS.

    • The discovery credential is not converted as commands in the device configuration. This credential is mainly used to specify the remote user (or other than the admin user) to discover the switches. If you want to add the commands as part of the device configuration, add them in the Bootstrap Freeform Config field under the Bootstrap tab in the fabric settings. Also, you can add the respective policy from View/Edit Policies window.

Step 11

Click Bootstrap at the top right part of the screen.

Nexus Dashboard Fabric Controller provisions the management IP address and other credentials to the switch. In this simplified POAP process, all ports are opened up.

After the added switch completes POAP, the fabric builder topology screen displays the added switch with some physical connections.

Step 12

Monitor and check the switch for POAP completion.

Step 13

Click Deploy Config from the Actions drop-down list on the Fabric Overview screen to deploy pending configurations (such as template and interface configurations) onto the switches.

Note

 
  • If there is a sync issue between the switch and Nexus Dashboard Fabric Controller, the switch icon is displayed in red, indicating that the fabric is Out-of-Sync. For any changes on the fabric that result in an out-of-sync status, you must deploy the changes. The process is the same as explained in the Discovering Existing Switches section.

  • The discovery credential is not converted as commands in the device configuration. This credential is mainly used to specify the remote user (or other than the admin user) to discover the switches. If you want to add the commands as part of the device configuration, add them in the Bootstrap Freeform Config field under the Bootstrap tab in the fabric settings. Also, you can add the respective policy from View/Edit Policies window.

During fabric creation, if you have entered AAA server information (in the Manageability tab), you must update the AAA server password on each switch. Otherwise, switch discovery fails.

Step 14

After the pending configurations are deployed, the Progress column displays 100% for all switches.

Step 15

On the Topology screen, click Refresh Topology icon to view the update.

All switches must be in green color indicating that they are functional.

The switch and the links are discovered in Nexus Dashboard Fabric Controller. Configurations are built based on various policies (such as fabric, topology, and switch-generated policies). The switch image and other required configurations are enabled on the switch.

Step 16

Right-click and select History to view the deployed configurations.

Click the Success link in the Status column for more details.

Step 17

On the Nexus Dashboard Fabric Controller UI, the discovered switches can be seen in the fabric topology.

Up to this step, POAP is completed with basic settings. All the interfaces are set to trunk ports. You must set up interfaces through the LAN > Interfaces option for any additional configurations, including but not limited to the following:

  • vPC pairing.

  • Breakout interfaces

    Support for breakout interfaces is available for 9000 Series switches.

  • Port channels, and adding members to ports.

Note

 
After discovering a switch (new or existing), at any point in time you can provision configurations on it again through the POAP process. The process removes existing configurations and provisions new configurations. You can also deploy configurations incrementally without invoking POAP.

Adding Non-Nexus Devices to External Fabrics

From Cisco Nexus Dashboard Fabric Controller Release 12.0.1a, you can add Cisco IOS-XR devices to external fabrics in managed mode as well. You can manage the following Cisco IOS-XR devices in external fabrics:

  • Cisco ASR 9000 Series Routers

  • Cisco NCS 5500 Series Routers, IOS XR Release 6.5.3

From Cisco Nexus Dashboard Fabric Controller Release 12.1.1e, you can also add Cisco 8000 Series Routers to external fabrics both in managed mode and monitored mode.

You can discover non-Nexus devices in an external fabric and perform the configuration compliance of these devices as well. For more information, see the Configuration Compliance in External Fabrics section.

Refer the Cisco Nexus Dashboard Fabric Controller Compatibility Matrix to see the non-Nexus devices supported by Cisco Nexus Dashboard Fabric Controller.

Only Cisco Nexus switches support SNMP discovery by default. Hence, configure all the non-Nexus devices before adding them to the external fabric. Configuring the non-Nexus devices includes configuring SNMP views, groups, and users. See the Configuring non-Nexus Devices for Discovery section for more information.

Cisco CSR 1000v is discovered using SSH. Cisco CSR 1000v does not need SNMP support because it can be installed in clouds where SNMP is blocked for security reasons. See the Connecting Cisco Data Center and a Public Cloud chapter to see a use case to add Cisco CSR 1000v, Cisco IOS XE Gibraltar 16.10.x to an external fabric.

However, Cisco Nexus Dashboard Fabric Controller can only access the basic device information like system name, serial number, model, version, interfaces, up time, and so on. Cisco Nexus Dashboard Fabric Controller does not discover non-Nexus devices if the hosts are part of CDP or LLDP.

The settings that are not applicable for non-Nexus devices appear blank, even if you get many options when you right-click a non-Nexus device in the fabric topology window. You cannot add or edit interfaces for ASR 9000 Series Routers and Arista switches.

You can add IOS-XE devices like Cisco Catalyst 9000 Series switches and Cisco ASR 1000 Series Routers as well to external fabrics.

Configuration Compliance in External Fabrics

With external fabrics, any Nexus switches, Cisco IOS-XE devices, Cisco IOS XR devices, and Arista switches can be imported into the fabric, and there is no restriction on the type of deployment. It can be LAN Classic, VXLAN, FabricPath, vPC, HSRP, and so on. When switches are imported into an external fabric, the configuration on the switches is retained so that the import is non-disruptive. Only basic policies such as the switch username and mgmt0 interface are created after a switch import.

In the external fabric, for any intent that is defined in Nexus Dashboard Fabric Controller, configuration compliance (CC) ensures that this intent is present on the corresponding switch. If this intent is not present on the switch, CC reports an Out-of-Sync status. Additionally, a Pending Config is generated to push this intent to the switch and change the status to In-Sync. Any additional configuration that is on the switch but not in the intent defined in Nexus Dashboard Fabric Controller is ignored by CC, as long as there is no conflict with anything in the intent.

When there is user-defined intent added on Nexus Dashboard Fabric Controller and the switch has additional configuration under the same top-level command, as mentioned earlier, CC only ensures that the intent defined in Nexus Dashboard Fabric Controller is present on the switch. When this user-defined intent on Nexus Dashboard Fabric Controller is deleted as a whole, with the intention of removing it from the switch, and the corresponding configuration exists on the switch, CC reports an Out-of-Sync status for the switch and generates a Pending Config to remove the config from the switch. This Pending Config includes the removal of the top-level command. This action also removes the other out-of-band configurations made on the switch under this top-level command. If you want to override this behavior, the recommendation is that you create a freeform policy and add the relevant top-level command to the freeform policy.
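For example, if the earlier intent added neighbor configuration under router bgp, a switch_freeform policy that carries only the top-level command prevents CC from removing the entire router bgp block when the sub-configuration intent is deleted; the AS number is a placeholder:

! switch_freeform policy content: retain the top-level command only
router bgp 65000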

Let us see this behavior with an example.

  1. A switch_freeform policy defined by the user in Nexus Dashboard Fabric Controller and deployed to the switch.

  2. Additional configuration exists under router bgp in Running config that does not exist in user-defined Nexus Dashboard Fabric Controller intent Expected config. Note that there is no Pending Config to remove the additional config that exists on the switch without a user defined intent on Nexus Dashboard Fabric Controller.

  3. The Pending Config and the Side-by-side Comparison when the intent that was pushed earlier via Nexus Dashboard Fabric Controller is deleted from Nexus Dashboard Fabric Controller by deleting the switch_freeform policy that was created in Step 1.

  4. A switch_freeform policy with the top-level router bgp command needs to be created. This enables CC to generate the configuration needed to remove only the desired sub-config which was pushed from Nexus Dashboard Fabric Controller earlier.

  5. The removed configuration is only the subset of the configuration that was pushed earlier from Nexus Dashboard Fabric Controller.

    For interfaces on the switch in the external fabric, Nexus Dashboard Fabric Controller either manages the entire interface or does not manage it at all. CC checks interfaces in the following ways:

    • For any interface, if there is a policy defined and associated with it, then this interface is considered as managed. All configurations associated with this interface must be defined in the associated interface policy. This is applicable for both logical and physical interfaces. Otherwise, CC removes any out-of-band updates made to the interface to change the status to In-Sync.

    • Interfaces created out-of-band (applies for logical interfaces such as port-channels, sub interfaces, SVIs, loopbacks, etc.), will be discovered by Nexus Dashboard Fabric Controller as part of the regular discovery process. However, since there is no intent for these interfaces, CC will not report an Out-of-Sync status for these interfaces.

    • For any interface, there can always be a monitor policy associated with it in Nexus Dashboard Fabric Controller. In this case, CC will ignore the interface’s configuration when it reports the In-Sync or Out-of-Sync config compliance status.

Special Configuration CLIs Ignored for Configuration Compliance

The following configuration CLIs are ignored during configuration compliance checks:

  • Any CLI having 'username' along with 'password'

  • Any CLI that starts with 'snmp-server user'

Any CLIs that match the above will not show up in pending diffs, and clicking Save & Deploy in the Fabric Builder window will not push such configurations to the switch. These CLIs also do not show up in the Side-by-side Comparison window.

To deploy such configuration CLIs, perform the following procedure:

Procedure

Step 1

Select LAN > Fabrics.

Double click on the fabric name to view Fabric Overview screen.

Step 2

On the Switches tab, double click on the switch name to view Switch Overview screen.

On the Policies tab, all the policies applied on the switch within the chosen fabric are listed.

Step 3

On the Policies tab, from the Actions drop-down list, select Add Policy.

Step 4

Add Policy Template Instances (PTIs) with the required configuration CLIs using the switch_freeform template, and click Save.

Step 5

Select the created policy and select Push Config from the Actions drop-down list to deploy the configuration to the switch(es).


Managing Cisco IOS-XR Devices using NDFC

In a data center fabric, workloads generally require communication with services outside of the data center domain. This includes users accessing an application and services from the internet and WAN. VXLAN EVPN fabrics with border devices are considered the handoff for north-south connectivity. These border devices peer with IOS-XR routers, which are backbone routers for WAN and internet connectivity.

In DCNM Release 11.5(x), users with an admin role can control VXLAN EVPN fabrics with capabilities such as monitoring, automation, and compliance, but can only monitor IOS-XR routers in monitored mode. Therefore, a single fabric controller is needed to manage and automate configurations between these devices and to check configuration compliance for communication between the different services.

From NDFC Release 12.0.1a, users with an admin role can manage IOS-XR routers, limited to automation and compliance checking. New templates and policies are introduced to automate and manage eBGP VRF Lite handoff between border switches and IOS-XR routers. NDFC allows you to check configuration compliance for IOS-XR devices similar to Cisco Nexus switches in external fabrics.


Note


For all non-Nexus devices, only MD5 protocol is supported for SNMPv3 authentication.


Configuring IOS-XR as Edge Router

To extend VRF Lite from a Cisco Nexus 9000 fabric with border devices to an IOS-XR edge router, refer to the VRF Lite Between Cisco Nexus 9000 Based Border and Non-Nexus Device section.

For more information, see video at Managing and Configuring ASR 9000 using NDFC.

Configuring Non-Nexus Devices for Discovery

Before discovering any non-Nexus device in Cisco Nexus Dashboard Fabric Controller, configure it on the switch console.

Configuring IOS-XE Devices for Discovery

Note


In case of failures or issues when configuring devices, contact the Cisco Technical Assistance Center (TAC).


Before you discover the Cisco IOS-XE devices in Nexus Dashboard Fabric Controller, perform the following steps:

Procedure

Step 1

Run the following SSH commands on the switch console.

switch (config)# hostname <hostname>
switch (config)# ip domain name <domain_name>
switch (config)# crypto key generate rsa
switch (config)# ip ssh time-out 90
switch (config)# ip ssh version 2
switch (config)# line vty 1 4
switch (config-line)# transport input ssh
switch (config)# username admin privilege 15 secret <password>
switch (config)# aaa new-model
switch (config)# aaa authentication login default local
switch (config)# aaa authorization exec default local none

Step 2

Before you run the SNMP commands on the switch, ensure that the IP addresses, username, and SNMP-related configurations are defined on the switch. Run the following SNMP commands on the switch console.

aaa new-model
aaa session-id common
 ip domain name cisco
username admin privilege 15 secret 0 xxxxx
snmp-server group group1 v3 auth read view1 write view1 
snmp-server view view1 mib-2 included
snmp-server view view1 cisco included
snmp-server user admin group1 v3 auth md5 xxxxx priv des xxxxx
line vty 0 4
privilege level 15
transport input all
line vty 5 15
privilege level 15
transport input all
line vty 16 31
transport input ssh

Configuring Arista Devices for Discovery

Enable Privilege Exec mode using the following command:

switch> enable
switch#

switch# show running-config | grep aaa        /* to view the authorization */
aaa authorization exec default local
Run the following commands in the switch console to configure Arista devices:
switch# configure terminal
switch (config)# username ndfc privilege 15 role network-admin secret cisco123
snmp-server view view_name SNMPv2 included
snmp-server view view_name SNMPv3 included
snmp-server view view_name default included
snmp-server view view_name entity included
snmp-server view view_name if included
snmp-server view view_name iso included
snmp-server view view_name lldp included
snmp-server view view_name system included
snmp-server view sys-view default included
snmp-server view sys-view ifmib included
snmp-server view sys-view system included
snmp-server community private ro
snmp-server community public ro
snmp-server group group_name v3 auth read view_name
snmp-server user username group_name v3 auth md5 password priv aes password 
         

Note


The SNMP password should be the same as the password for the username.


You can verify the configuration by running the show run command, and view the SNMP view output by running the show snmp view command.

Show Run Command
switch (config)# snmp-server engineID local f5717f444ca824448b00
snmp-server view view_name SNMPv2 included
snmp-server view view_name SNMPv3 included
snmp-server view view_name default included
snmp-server view view_name entity included
snmp-server view view_name if included
snmp-server view view_name iso included
snmp-server view view_name lldp included
snmp-server view view_name system included
snmp-server view sys-view default included
snmp-server view sys-view ifmib included
snmp-server view sys-view system included
snmp-server community private ro
snmp-server community public ro
snmp-server group group_name v3 auth read view_name
snmp-server user user_name 
            group_name v3 localized f5717f444ca824448b00 auth md5 be2eca3fc858b62b2128a963a2b49373 priv aes be2eca3fc858b62b2128a963a2b49373
!
spanning-tree mode mstp
!
service unsupported-transceiver labs f5047577
!
aaa authorization exec default local
!
no aaa root
!
username admin role network-admin secret sha512 $6$5ZKs/7.k2UxrWDg0$FOkdVQsBTnOquW/9AYx36YUBSPNLFdeuPIse9XgyHSdEOYXtPyT/0sMUYYdkMffuIjgn/d9rx/Do71XSbygSn/
username cvpadmin role network-admin secret sha512 $6$fLGFj/PUcuJT436i$Sj5G5c4y9cYjI/BZswjjmZW0J4npGrGqIyG3ZFk/ULza47Kz.d31q13jXA7iHM677gwqQbFSH2/3oQEaHRq08.
username ndfc privilege 15 role network-admin secret sha512 $6$M48PNrCdg2EITEdG$iiB880nvFQQlrWoZwOMzdt5EfkuCIraNqtEMRS0TJUhNKCQnJN.VDLFsLAmP7kQBo.C3ct4/.n.2eRlcP6hij/ 
Show SNMP View Command
switch# show snmp view
view_name SNMPv2 - included
view_name SNMPv3 - included
view_name default - included
view_name entity - included
view_name if - included
view_name iso - included
view_name lldp - included
view_name system - included
sys-view default - included
sys-view ifmib - included
sys-view system - included
leaf3-7050sx#show snmp user

User name : user_name
Security model : v3
Engine ID : f5717f444ca824448b00
Authentication : MD5
Privacy : AES-128
Group : group_name 
         
Configuring and Verifying Cisco IOS-XR Devices for Discovery

To configure IOS-XR devices, run the following commands on the switch console:

switch# configure terminal
switch (config)# snmp-server view view_name cisco included
snmp-server view view_name mib-2 included
snmp-server group group_name v3 auth read view_name write view_name
snmp-server user user_name 
group_name v3 auth md5 password priv des56 password SystemOwner

The following example shows the configuration of an IOS-XR device:

RP/0/RSP0/CPU0:ios(config)#snmp-server view view_name cisco included
RP/0/RSP0/CPU0:ios(config)#snmp-server view view_name mib-2 included
RP/0/RSP0/CPU0:ios(config)#snmp-server group group_name v3 auth read view_name write view_name
RP/0/RSP0/CPU0:ios(config)#snmp-server user user_name group_name v3 auth md5 password priv des56 password SystemOwner
RP/0/RSP0/CPU0:ios(config)#commit 

To verify IOS-XR devices, run the following command:

RP/0/RSP0/CPU0:ios(config)#
RP/0/RSP0/CPU0:ios(config)#show run snmp-server 
snmp-server user user_name group1 v3 auth md5 encrypted 10400B0F3A4640585851 priv des56 encrypted 000A11103B0A59555B74 SystemOwner
snmp-server view view_name cisco included
snmp-server view view_name mib-2 included
snmp-server group group_name v3 auth read view_name write view_name
Discovering Non-Nexus Devices in an External Fabric

To add non-Nexus devices to an external fabric in the fabric topology window, perform the following steps:

Before you begin

Ensure that the configurations are pushed for non-Nexus devices before adding them to an external fabric. You cannot push configurations in a fabric in the monitor mode.

Procedure

Step 1

Click Add switches in the Actions pane.

Step 2

Enter values for the following fields under the Discover Existing Switches tab:

Field

Description

Seed IP

Enter the IP address of the switch.

You can import more than one switch by providing the IP address range. For example: 10.10.10.40-60

The switches must be properly cabled and connected to the Nexus Dashboard Fabric Controller server and the switch status must be manageable.

Device Type

  • Choose IOS XE from the drop-down list for adding Cisco CSR 1000v, Cisco ASR 1000 Series routers, or Cisco Catalyst 9000 Series Switches.

  • Choose IOS XR from the drop-down list for adding ASR 9000 Series Routers, Cisco NCS 5500 Series Routers, IOS XR Release 6.5.3 or Cisco 8000 Series Routers.

    Note

     

    To add Cisco IOS XR devices in managed mode, navigate to the General Parameters tab in the fabric settings and uncheck the Fabric Monitor Mode check box.

  • Choose Other from the drop-down list for adding non-Cisco devices, like Arista switches.

Username

Enter the username.

Password

Enter the password.

Note

 

An error message appears if you try to discover a device that is already discovered.

Set the password of the device in the LAN Credentials window if the password is not set. To navigate to the LAN Credentials window from the Cisco Nexus Dashboard Fabric Controller Web UI, choose Administration > LAN Credentials.

Step 3

Click Start Discovery.

The Scan Details section appears with the switch details populated.

Step 4

Check the check boxes next to the switches you want to import.

Step 5

Click Import into fabric.

The switch discovery process is initiated. The Progress column displays the progress.

Discovering devices takes some time. After the discovery progress reaches 100%, or done, a pop-up message about the device discovery appears at the bottom right. For example: <ip-address> added for discovery.

Note

 

If you see the following error message after attempting to import the switch into the fabric:

Error while creating the (Seed interface) intent for basic switch configurations. Please retry using config Save/Deploy.

This might be because the permissions were not set properly for the switch before you tried to import it into the fabric. Set the permissions for the switch using the procedures in Configuring IOS-XE Devices for Discovery, then try importing the switch into the fabric again.

Step 6

Click Close.

The fabric topology window appears with the switches.

Step 7

(Optional) Click Refresh topology to view the latest topology view.

Step 8

(Optional) Click Fabric Overview.

The switches and links window appears, where you can view the scan details. The discovery status is discovering in red with a warning icon next to it if the discovery is in progress.

Step 9

(Optional) View the details of the device.

After the discovery of the device:

  • The discovery status changes to ok in green with a check box checked next to it.

  • The value of the device under the Fabric Status column changes to In-Sync.

Note

 

When a switch is in Unreachable discovery status, the last available information of the switch is retained in other columns. For example, if the switch was in RUNNING tracker status before it becomes unreachable, the value under the Tracker Status column for this switch will still be RUNNING despite the switch being in Unreachable discovery status.


What to do next
Set the appropriate role. Right-click the device, choose Set role.

If you added these devices under managed mode, you can add policies too.

Managing Non-Nexus Devices in External Fabrics

From Nexus Dashboard Fabric Controller 12.0.1a, IOS-XR is supported in managed mode.


Note


Configuration compliance is enabled for IOS-XE and IOS-XR switches, similar to the way the Nexus switches are handled in External Fabric. For more information, see Configuration Compliance in External Fabrics.

Nexus Dashboard Fabric Controller sends commit at the end of deployment for IOS-XR devices.

Nexus Dashboard Fabric Controller provides a few templates for IOS-XR devices. Use the ios_xr_Ext_VRF_Lite_Jython template when an IOS-XR switch acts as an edge router to establish eBGP peering with the border switch. This template creates the configuration for the VRF, the eBGP peering for the VRF, and the subinterface. Similarly, use the ios_xe_Ext_VRF_Lite_Jython template when an IOS-XE switch acts as an edge router to establish eBGP peering with the border switch.
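For illustration only, the kind of IOS-XR edge-router configuration that such a VRF Lite extension produces is broadly along these lines; the VRF name, AS numbers, interface, VLAN, and addresses are placeholders and the exact output depends on the template parameters:

vrf CORP
 address-family ipv4 unicast
  import route-target 65000:100
  export route-target 65000:100
!
interface GigabitEthernet0/0/0/1.2
 vrf CORP
 ipv4 address 10.33.0.2 255.255.255.252
 encapsulation dot1q 2
!
router bgp 65001
 vrf CORP
  rd auto
  address-family ipv4 unicast
  neighbor 10.33.0.1
   remote-as 65000
   address-family ipv4 unicast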


Creating a vPC Setup

You can create a vPC setup for a pair of switches in the external fabric. Ensure that the switches are of the same role and connected to each other.
Procedure

Step 1

Right-click one of the two designated vPC switches and choose vPC Pairing.

The Select vPC peer dialog box comes up. It contains a list of potential peer switches. Ensure that the Recommended column for the vPC peer switch is updated as true.

Note

 

Alternatively, you can also navigate to the Tabular view from the Actions pane. Choose a switch in the Switches tab and click vPC Pairing to create, edit, or unpair a vPC pair. However, you can use this option only when you choose a Cisco Nexus switch.

Step 2

Click the radio button next to the vPC peer switch and choose vpc_pair from the vPC Pair Template drop-down list. Only templates with the VPC_PAIR template sub type are listed here.

The vPC Domain and vPC Peerlink tabs appear. You must fill up the fields in the tabs to create the vPC setup. The description for each field is displayed at the extreme right.

vPC Domain tab: Enter the vPC domain details.

vPC+: If the switch is part of a FabricPath vPC+ setup, check this check box and enter the FabricPath switch ID.

Configure VTEPs: Check this check box to enter the source loopback IP addresses for the two vPC peer VTEPs and the loopback interface secondary IP address for NVE configuration.

NVE interface: Enter the NVE interface. vPC pairing will configure only the source loopback interface. Use the freeform interface manager for additional configuration.

NVE loopback configuration: Enter the IP address with the mask. vPC pairing will only configure primary and secondary IP address for loopback interface. Use the freeform interface manager for additional configuration.

vPC Peerlink tab: Enter the vPC peer-link details.

Switch Port Mode: Choose trunk, access, or fabricpath.

If you select trunk, then corresponding fields (Trunk Allowed VLANs and Native VLAN) are enabled. If you select access, then the Access VLAN field is enabled. If you select fabricpath, then the trunk and access port related fields are disabled.
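For illustration only, the resulting vPC configuration on each NX-OS peer is broadly along these lines; the domain ID, keepalive addresses, port-channel number, and VLANs are placeholders:

feature vpc
vpc domain 1
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
!
interface port-channel500
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  vpc peer-link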

Step 3