Cisco VIM supports a C-series pod running with either all Intel 710X NICs or Cisco VICs for the control and data plane. In the Intel NIC setup, M4-based and M5-based (Micropod) pods need two 4-port X710 cards and one or two 4-port X710 cards respectively, for control and data plane connectivity. The orchestrator identifies the NIC support based on the following INTEL_NIC_SUPPORT values:
To define the value, add the following line to the setup_data:
# INTEL_NIC_SUPPORT: <True or False>
The X710-based NIC redundancy is enabled by default for M4-based Intel NIC systems, but not for M5-based Intel NIC systems.
See Figure 7: UCS C-Series Intel NIC Details in UCS C-Series Network Topologies. To bring in NIC redundancy across the X710s for M5-based Intel NIC systems, define the following global parameter in the
setup_data.
# NIC_LEVEL_REDUNDANCY: <True or False> # optional and only applies when INTEL_NIC_SUPPORT is set to True
A C-series pod running with Intel NIC also supports SRIOV as an option when defined in the setup_data. To enable SRIOV, define a value in the range 1-32 (32 is the maximum) for INTEL_SRIOV_VFS: <integer>.
By default, in a C-series pod running with the 4-port Intel 710 card, one port (port c) from each of the Intel NICs is used for SRIOV. However, some VNFs need additional SRIOV ports to function. To meet this requirement, an additional variable has been introduced in the setup_data.yaml file by which you can include a second port (port d) of the Intel NIC for SRIOV.
To adjust the number of SRIOV ports, set the following option in the setup_data.yaml file:
#INTEL_SRIOV_PHYS_PORTS: <2 or 4>
The parameter INTEL_SRIOV_PHYS_PORTS is optional; if nothing is defined, a value of 2 is used. The only values the parameter takes are 2 and 4. For NCS-5500, the only supported value for INTEL_SRIOV_PHYS_PORTS is 4, and it has to be defined for SRIOV support on NCS-5500. As the M5 Micropod environment is based on X710 for the control and data plane and an additional XL710 or 2-port X710 for SRIOV, only INTEL_SRIOV_PHYS_PORTS of 2 is supported.
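Taken together, a minimal setup_data.yaml sketch of the Intel NIC and SRIOV globals described above (the values shown are illustrative placeholders, not recommendations) looks like this:
INTEL_NIC_SUPPORT: True
NIC_LEVEL_REDUNDANCY: False # optional; applies only when INTEL_NIC_SUPPORT is True
INTEL_SRIOV_VFS: 32 # optional; 1-32, enables SRIOV on the Intel NIC
INTEL_SRIOV_PHYS_PORTS: 2 # optional; 2 (default) or 4; must be 4 for NCS-5500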
SRIOV support on a Cisco VIC POD
Cisco VIM supports an M4-based C-series pod running with one 2-port Cisco VIC for the control plane and two 2-port Intel 520s or two 2-port XL710s for SRIOV (called VIC/NIC deployment). Cisco VIM also supports an M5-based C-series pod running with one 2-port Cisco VIC for the control plane and two 2-port XL710s for SRIOV.
The orchestrator identifies the VIC/NIC support based on the following CISCO_VIC_INTEL_SRIOV values:
To define the value, add the following line to the setup_data:
# CISCO_VIC_INTEL_SRIOV: <True or False>
A C-series M4 pod running Cisco VIC/Intel NIC (2x520 or 2xXL710) also supports SRIOV on the Intel NIC. To enable SRIOV, define a value in the range 1-63 (63 is the maximum for X520) or 1-32 (32 is the maximum for XL710) for INTEL_SRIOV_VFS: <integer>.
By default, in a C-series M4 pod running with Cisco VIC and Intel 520/XL710, the control plane runs on the Cisco VIC ports, and all 4 ports from the two Intel 520 NICs or two Intel XL710s are used for SRIOV.
In C-series M5 pods running with Cisco VIC and Intel XL710, the control plane runs on the Cisco VIC ports and all 4 or 8 ports from the two Intel XL710s are used for SRIOV.
In M5-based VIC/NIC pods, define INTEL_SRIOV_PHYS_PORTS: <4 or 8>, with a default value of 4, to indicate the number of ports participating in SRIOV.
In pods running with the CISCO_VIC_INTEL_SRIOV option, some computes can run with only the Cisco VIC (without the SRIOV option) if they do not have Intel NIC cards.
Define the following parameter in the setup_data yaml to set up the card type for SRIOV (only for M4-based pods).
#SRIOV_CARD_TYPE: <X520 or XL710>
Computes can support different types of cards. If SRIOV_CARD_TYPE is not provided, Cisco VIM chooses the first 2 slots from all SRIOV compute nodes. If SRIOV_CARD_TYPE is provided, Cisco VIM chooses the first 2 slots matching the target card type from each of the SRIOV compute nodes, so that a match between intent and reality exists.
For Quanta-based pods, the SRIOV slot order starts from the higher slot number; that is, for NUMA, the NIC at the higher slot has the values 0, 2. You can override this by defining the following as ascending, in which case the NIC at the higher slot has the values 1, 3.
# SRIOV_SLOT_ORDER: <ascending or descending> # Optional, applicable to Quanta-based pods
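As an illustration, the VIC/NIC SRIOV globals described above can be combined in the setup_data as follows (a sketch with placeholder values, assuming an M4 VIC/NIC pod with X520 cards; adjust to your BOM):
CISCO_VIC_INTEL_SRIOV: True
INTEL_SRIOV_VFS: 63 # 1-63 for X520; 1-32 for XL710
SRIOV_CARD_TYPE: X520 # optional; M4 pods only, X520 or XL710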
Note |
From release Cisco VIM 2.4.4 onwards, some computes can have XL710 while others have X520 for SRIOV in an M4 setting. This is achieved by defining the SRIOV_CARD_TYPE at a per-compute level (see the SERVERS section of the setup_data in the example file). From Cisco VIM 2.4.9 onwards, 40G-based M5 computes are supported.
|
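A minimal sketch of such a mixed-card M4 setup, with SRIOV_CARD_TYPE overridden per compute in the SERVERS section (hostnames, rack IDs, IPs, and slot numbers below are placeholders), is:
SERVERS:
  compute-server-1:
    cimc_info: {'cimc_ip': '<v4 or v6>'}
    rack_info: {'rack_id': 'RackA'}
    hardware_info: {'VIC_slot': '7', SRIOV_CARD_TYPE: XL710}
  compute-server-2:
    cimc_info: {'cimc_ip': '<v4 or v6>'}
    rack_info: {'rack_id': 'RackB'}
    hardware_info: {'VIC_slot': '7', SRIOV_CARD_TYPE: X520}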
Support of Third-Party Compute in Hybrid Mode (HP DL360 Gen9)
Cisco VIM 2.4 introduces the first third-party compute. The first SKU chosen is HPE ProLiant DL360 Gen9. With this support,
the Cisco VIM software is flexible enough to accommodate other SKUs. In Cisco VIM 2.4, the supported deployment is a full-on
pod, with OVS as the mechanism driver, where the management, control, and storage nodes are based on existing Cisco UCS c220/240M4
BOM, and the compute nodes are on HPE ProLiant DL360 Gen9 hardware. From Cisco VIM 2.4.5 onwards, Cisco VIM supports the same
HP SKU with both “HP” and “HPE” brand.
To minimize the changes done to the existing orchestration workflow and Insight UI, you can reuse the existing Cisco VIC+NIC
combo deployment scenario. This minimizes the changes needed for the hardware topology and the "setup_data.yaml" configuration
file. For NIC settings that need to be passed to enable HPE ProLiant DL360 Gen9 third-party compute, see "Intel NIC Support
for SRIOV only".
In the case of Quanta servers, third-party support has been extended to all nodes (servers in the control, compute, storage, and management roles).
The following table shows the port type mapping between Cisco UCS C-series, HPE ProLiant DL360, and Quanta computes:
Port Type: Control and Data Plane
- Cisco UCS c220/c240 Compute: M4: MLOM - VIC 1227; M5: MLOM - VIC 1387
- HPE ProLiant DL360 Gen9 Compute: FlexLOM - HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter
- Quanta Server: OCP 25G 2-port xxv710-based card
Port Type: SRIOV
- Cisco UCS c220/c240 Compute: M4: PCIe - Intel X520-DA2 10 Gbps or Intel XL710 DA2 40 Gbps 2-port NIC; M5: PCIe - Intel XL710 DA2 40 Gbps 2-port NIC
- HPE ProLiant DL360 Gen9 Compute: PCIe - HP Ethernet 10Gb 2-port 560SFP+ Adapter
- Quanta Server: PCIe - Intel xxv710 DA2 25 Gbps 2-port NIC
Port Type: SRIOV
- Cisco UCS c220/c240 Compute: PCIe - Intel X520-DA2 10 Gbps or Intel XL710 DA2 40 Gbps 2-port NIC
- HPE ProLiant DL360 Gen9 Compute: PCIe - HP Ethernet 10Gb 2-port 560SFP+ Adapter
- Quanta Server: PCIe - Intel xxv710 DA2 25 Gbps 2-port NIC
As this deployment does not support Auto-ToR configuration, the TOR switch needs to have a trunk configuration with native VLAN, jumbo MTU, and no LACP suspend-individual on the control and data plane switch ports.
Sample Nexus 9000 port-channel configuration is as follows:
interface port-channel30
description compute-server-hp-1 control and data plane
switchport mode trunk
switchport trunk native vlan 201
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
mtu 9216
no lacp suspend-individual
vpc 30
!
interface Ethernet1/30
description compute-server-hp-1 flexlom port 1
switchport mode trunk
switchport trunk native vlan 201
mtu 9216
channel-group 30 mode active
Once the physical connection to the top-of-rack switches and the switch ports' configuration have been completed, enable/add
the following additional variables in the VIM's "setup_data.yaml" configuration file:
CISCO_VIC_INTEL_SRIOV: True
INTEL_SRIOV_VFS: 63
Remote Registry Credentials
REGISTRY_USERNAME: '<username>'
REGISTRY_PASSWORD: '<password>'
REGISTRY_EMAIL: '<email@address.com>'
REGISTRY_NAME: <hostname of Cisco VIM software hub> # optional, only if Cisco VIM software hub is used
Common CIMC Access Information for C-series POD
CIMC-COMMON:
cimc_username: "admin"
cimc_password: <"password">
UCSM Common Access Information for B-series POD
UCSMCOMMON:
ucsm_username: "admin"
ucsm_password: <"password">
ucsm_ip: <"a.b.c.d">
ucsm_resource_prefix: <"skull"> # max of 6 chars
ENABLE_UCSM_PLUGIN: <True> #optional; if True, Cisco-UCSM is used, if not defined, default is False
MRAID_CARD: <True or False>
Note |
In Cisco VIM 3.x, UCSM plugin support is not enabled.
|
Configure Cobbler
## Cobbler specific information.
## kickstart: static values as listed below
## cobbler_username: cobbler #username to access cobbler server; static value of Cobbler; not user configurable
## admin_username: root # static value of root; not user configurable
## admin_ssh_keys: This is a generated key which is put on the hosts.
## This is needed for the next install step, using Ansible.
COBBLER:
pxe_timeout: 45 # Optional parameter (in minutes); min of 30 and max of 120, defaults to 45 mins
cobbler_username: cobbler # cobbler UI user; currently statically mapped to cobbler; not user configurable
admin_username: root # cobbler admin user; currently statically mapped to root; not user configurable
#admin_password_hash has to be the output from:
# python -c "import crypt; print crypt.crypt('<plaintext password>')"
admin_password_hash: <Please generate the admin pwd hash using the step above; verify the output starts with $6>
admin_ssh_keys: # Optional parameter
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAoMrVHLwpDJX8j2DiE55WtJ5NWdiryP5+FjvPEZcjLdtdWaWA7W
dP6EBaeskmyyU9B8ZJr1uClIN/sT6yD3gw6IkQ73Y6bl1kZxu/ZlcUUSNY4RVjSAz52/oLKs6n3wqKnn
7rQuLGEZDvXnyLbqMoxHdc4PDFWiGXdlg5DIVGigO9KUncPK cisco@cisco-server
kickstart: # not user configurable, optional
control: ucs-b-and-c-series.ks
compute: ucs-b-and-c-series.ks
block_storage: ucs-b-and-c-series.ks
Configure Network
NETWORKING:
domain_name: domain.example.com
#max of 4 NTP servers
ntp_servers:
- <1.ntp.example.com>
- <2.ntp.example2.com>
or
ntp_servers: ['2001:c5c0:1234:5678:1002::1', 15.0.0.254] <== support for IPv6 address
#max of 3 DNS servers
domain_name_servers:
- <a.b.c.d>
or
domain_name_servers: ['2001:c5c0:1234:5678:1002::5', 15.0.0.1] <== support for IPv6 address
http_proxy_server: <a.b.c.d:port> # optional, needed if install is through internet, and the pod is behind a proxy
https_proxy_server: <a.b.c.d:port> # optional, needed if install is through internet, and the pod is behind a proxy
admin_source_networks: # optional, host based firewall to white list admin's source IP (v4 or v6)
- 10.0.0.0/8
- 172.16.0.0/12
- <"2001:xxxx::/64">
Note |
External access to the management node is made through the IP address configured on the br_api interface. To provide additional
security for this connection, the optional admin_source_networks parameter is provided. When specified, access to administrator services is allowed only from the IP addresses specified in this list. Use this setting with care, since a misconfiguration can lock an administrator out of network access to the management node. Recovery can be made by logging in through the console and reconfiguring this setting.
|
Define Network Segments
networks:
- # CIMC network section is applicable only for B-series
vlan_id: <int> # between 1 and 4096
subnet: <cidr with mask> # true routable network, e.g. 10.30.115.192/28
gateway: <ip address>
pool:
- ip_address_1 to ip_address_2 in the current network segment
segments:
- cimc
-
vlan_id: <int>
subnet: <cidr with mask> # true routable network
gateway: <ipv4_address>
ipv6_gateway: <ipv6_address> <== required if IPv6 based OpenStack public API is enabled
ipv6_subnet: <v6 cidr with mask>
segments:
- api
-
vlan_id: <int>
subnet: <cidr/mask>
gateway: <ipaddress>
pool:
# specify the pool range in the form of <start_ip> to <end_ip>; an IP without the "to" is treated as an individual IP and is used for configuring
- ip_address_1 to ip_address_2 in the current network segment
ipv6_gateway: <ipv6_address> # optional, required if management_ipv6 is defined at the server level
ipv6_subnet: <v6 cidr with mask>
ipv6_pool: ['ipv6_address_1 to ipv6_address_2']
segments: #management and provisioning are always the same
- management
- provision
# OVS-VLAN requires VLAN-id as "None"
# LinuxBridge-VXLAN requires valid VLAN-id
-
vlan_id: <vlan_id or None>
subnet: <v4_cidr w/ mask>
gateway: <v4 ip address>
pool:
- ip_address_1 to ip_address_2 in the current network segment
segments:
- tenant
-
vlan_id: <vlan_id>
subnet: <v4_cidr w/ mask>
gateway: <ipv4_addr>
pool:
- ip_address_1 to ip_address_2 in the current network segment
segments:
- storage
# optional network "external"
-
vlan_id: <int>
segments:
- external
# optional network "provider"; None for C-series, vlan range for B-series
-
vlan_id: "<None or 3200-3210>" segments:
- provider
Note |
For PODTYPE: ceph, the storage segment needs to be replaced with a segment named “cluster”. Also, for a central Ceph pod, the only other segment allowed is management/provision.
|
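For reference, a filled-in management/provisioning entry under networks (all values below are illustrative placeholders, not recommendations) could look like the following:
-
vlan_id: 201
subnet: 10.30.117.0/25
gateway: 10.30.117.1
pool:
- 10.30.117.20 to 10.30.117.60
segments:
- management
- provision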
Define Server Roles
In the Roles section, add the hostname of the servers and their corresponding roles. In the case of a Micropod, specify the same server names under control, compute, and ceph. Ensure that the number of servers under each role is three for a Micropod. You can optionally expand the Micropod to include additional computes. In the case of HC (hyper-converged deployment), all storage nodes act as compute nodes, but not vice versa.
In the case of an edge pod (to support low-latency workloads without persistent storage), specify the same server names under the control (total of 3) and compute roles (there is no server with a storage role). You can optionally expand the edge pod to include additional computes. The edge pod can connect to a central Ceph cluster via its management network, so that the Ceph cluster offers the Glance image service.
The central Ceph cluster with which the edge pod communicates for the Glance image service is of the “ceph” pod type. For the pod type “ceph”, specify the same server names under the cephcontrol (total of 3) and cephosd roles. You can optionally expand the ceph pod to include additional cephosd nodes.
ROLES: -> for PODTYPE: fullon
control:
- Your-Controller-Server-1-HostName
- Your-Controller-Server-2-HostName
- Your-Controller-Server-3-HostName
compute:
- Your-Compute-Server-1-HostName
- Your-Compute-Server-2-HostName
- ……
- Your-Compute-Server-n-HostName
block_storage:
- Your-Ceph-Server-1-HostName
- Your-Ceph-Server-2-HostName
- Your-Ceph-Server-3-HostName
ROLES: -> for PODTYPE: micro
control:
- Your-Server-1-HostName
- Your-Server-2-HostName
- Your-Server-3-HostName
compute:
- Your-Server-1-HostName
- Your-Server-2-HostName
- Your-Server-3-HostName
- Your-Server-4-HostName (optional expansion of computes)
- Your-Server-5-HostName (optional expansion of computes)
block_storage:
- Your-Server-1-HostName
- Your-Server-2-HostName
- Your-Server-3-HostName
object_storage:
networker:
ROLES: -> for PODTYPE: UMHC
control:
- Your-Controller-Server-1-HostName
- Your-Controller-Server-2-HostName
- Your-Controller-Server-3-HostName
compute:
- Your-Compute-Server-1-HostName
- Your-Compute-Server-2-HostName
- Your_HC_Server-1_HostName
- Your_HC_Server-2_HostName
- Your_HC_Server-3_HostName
block_storage:
- Your_HC_Server-1_HostName
- Your_HC_Server-2_HostName
- Your_HC_Server-3_HostName
object_storage:
networker:
ROLES: -> for PODTYPE: edge
control:
- Your-Server-1-HostName
- Your-Server-2-HostName
- Your-Server-3-HostName
compute:
- Your-Server-1-HostName
- Your-Server-2-HostName
- Your-Server-3-HostName
- Your-Server-4-HostName (optional expansion of computes)
- Your-Server-5-HostName (optional expansion of computes)
ROLES: -> for PODTYPE: ceph
cephcontrol:
- Your-Server-1-HostName
- Your-Server-2-HostName
- Your-Server-3-HostName
cephosd:
- Your-Server-1-HostName
- Your-Server-2-HostName
- Your-Server-3-HostName
- Your-Server-4-HostName (optional expansion of Ceph OSD Nodes)
- Your-Server-5-HostName (optional expansion of Ceph OSD Nodes)
object_storage:
networker:
# Server common
# Provide the username (default: root)
SERVER_COMMON:
server_username: root
# Allow static override value for platform vendor instead of dynamic
# discovery at runtime, optional value.
#
# Allowed values
# CSCO - Cisco Systems Inc
# HPE - Hewlett Packard Enterprise
# QCT - Quanta Cloud Technology Inc
#
# vendor: <CSCO or QCT> <= Global level override, all servers
# control:
# hardware_info:
# vendor: <CSCO or QCT> <= Role level override, all controls
# compute:
# hardware_info:
# vendor: <CSCO, HPE, or QCT> <= Role level override, all computes
# block_storage:
# hardware_info:
# vendor: <CSCO or QCT> <= Role level override, all storages
Note |
The maximum length of a non-FQDN hostname is 32 characters. In this example, the Your-Controller-Server-1-HostName hostname is 33 characters, so reduce the hostname length to 32 characters or less in both the ROLES and SERVERS sections. The maximum length including the FQDN is 64 characters, where the hostname can only have characters that are in any combination of “A-Za-z0-9-.”, and the TLD is not all numeric. Cisco VIM does not allow “_” in the hostnames.
|
Cisco VIM introduces a new topology type called Micropod to address solutions that have requirements of high availability, but with limited compute and storage needs. In this deployment model, the control, compute, and storage services reside on each of the three nodes that constitute the pod. Cisco VIM also supports the expansion of the Micropod to accommodate additional compute nodes. Each cloud application can decide the type of pod needed based on its resource (memory, storage consumption) requirements. The Micropod option supports only OVS/VLAN (with Cisco VIC or Intel 710 NIC) or VPP/VLAN (only on Intel NIC) on a specific BOM.
To enable the Micropod option, update the setup_data as follows:
PODTYPE: micro
Cisco VIM supports two hyper-converged options: UMHC and NGENAHC. The UMHC option supports only OVS/VLAN with a combination of Cisco VIC and Intel 520 NIC on a specific BOM, while the NGENAHC option supports only VPP/VLAN with the control plane over Cisco VIC and the data plane over 2-port Intel X710.
To enable hyper-convergence with the UMHC option, update the setup_data as follows:
PODTYPE: UMHC
To enable hyper-convergence with the NGENAHC option, update the setup_data as follows:
PODTYPE: NGENAHC
On Quanta servers, you can also enable edge cloud functionality for low-latency workloads, for example, vRAN, that do not need persistent storage. To enable such a deployment, update the setup_data as follows:
PODTYPE: edge
If the edge pod is communicating with a central Ceph cluster that is managed by Cisco VIM, update the setup_data for the
respective central-ceph cluster as follows:
PODTYPE: ceph
Define Servers - Rack (C-Series, Quanta) Pod Example
Note |
The maximum host name length is 32 characters.
|
SERVERS:
Your_Controller_Server-1_HostName:
cimc_info: {'cimc_ip': '<IPv4 or IPv6>'}
rack_info: {'rack_id': 'RackA'}
#hardware_info: {'VIC_slot': '7'} # optional; only needed if vNICs need to be created on a specific slot, e.g. slot 7
#management_ip: <static_ip from management pool> #optional, if defined for one server, has to be defined for all nodes
#cimc username, password at a server level is only needed if it is different from the one defined in the CIMC-COMMON section
# management_ipv6: <Fixed ipv6 from the management_ipv6 pool> # <== optional, allow manual static IPv6 addressing, also if defined management_ip has to be defined
#storage_ip: <Fixed IP from the storage pool> # optional, but if defined for one server, then it must be defined for all, also if defined management_ip has to be defined
Your_Controller_Server-2_HostName:
cimc_info: {'cimc_ip': '<v4 or v6>', 'cimc_username': 'admin', 'cimc_password': 'abc123'}
rack_info: {'rack_id': 'RackB'}
Your_Controller_Server-3_HostName:
cimc_info: {'cimc_ip': 'v4 or v6'}
rack_info: {'rack_id': 'RackC'}
hardware_info: {'VIC_slot': '7'} # optional only if the user wants a specific VNIC to be chosen
Your_Storage_or_Compute-1_HostName:
cimc_info: {'cimc_ip': '<v4 or v6>'}
rack_info: {'rack_id': 'RackA'}
hardware_info: {'VIC_slot': '3'} # optional only if the user wants a specific VNIC to be chosen
VM_HUGHPAGE_PERCENTAGE: <0 - 100> # optional, only for compute nodes and when NFV_HOSTS is ALL and MECHANISM_DRIVER is openvswitch or ACI
VM_HUGHPAGE_SIZE: <2M or 1G> # optional, only for compute nodes and when NFV_HOSTS is ALL and MECHANISM_DRIVER is openvswitch or ACI
trusted_vf: <True or False> # optional, only for compute nodes that have SRIOV
rx_tx_queue_size: <512 or 1024> # optional, only for compute nodes
hardware_info: {'VIC_slot': '<7>', SRIOV_CARD_TYPE: <XL710 or X520>} # VIC_Slot is optional, defined for location of Cisco VIC
Your_Storage_HostName:
cimc_info: {'cimc_ip': 'v4 or v6'}
rack_info: {'rack_id': 'RackA'}
hardware_info: {osd_disk_type: <HDD or SSD>} # optional; only if the pod is multi-backend Ceph, and a minimum of three storage servers should be available for each backend type
Note |
The SRIOV_CARD_TYPE option is valid only when CISCO_VIC_INTEL_SRIOV is True, and can be defined at a per-compute level for an M4 pod. If it is not defined at a per-compute level, the global value is taken for that compute. If it is defined neither at the compute nor at the global level, the default of X520 is set. The compute can be a standalone or hyper-converged node.
|
Note |
Cisco VIM installation requires that controller node Rack IDs be unique. The intent is to indicate the physical rack location so that physical redundancy is provided within the controllers. If the controller nodes are all installed in the same rack, you must still assign a unique rack ID to each, to prepare for future Cisco NFVI releases that include rack redundancy. However, compute and storage nodes do not have rack ID restrictions.
|
Note |
For Central Ceph cluster, swap the “storage_ip” with “cluster_ip”.
|
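For example, in a setup_data for PODTYPE: ceph, a cephosd server entry would carry cluster_ip in place of storage_ip (a sketch only; the hostname and values are placeholders following the per-server layout shown above):
Your_CephOSD_Server-1_HostName:
cimc_info: {'cimc_ip': '<v4 or v6>'}
rack_info: {'rack_id': 'RackA'}
cluster_ip: <fixed IP from the cluster pool> # replaces storage_ip for the central Ceph pod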
Define Servers - B-Series Pod Example
Note |
For UCS B-Series servers, the maximum host name length is 16 characters.
|
SERVERS:
Your_Controller_Server-1_HostName:
rack_info: {'rack_id': 'rack2'}
ucsm_info: {'server_type': 'blade',
'chassis_id': 1,
'blade_id' : 1}
Your_Controller_Server-2_HostName:
rack_info: {'rack_id': 'rack3'}
ucsm_info: {'server_type': 'blade',
'chassis_id': 2,
'blade_id' : 1}
Your_Controller_Server-3_HostName:
rack_info: {'rack_id': 'rack4'}
ucsm_info: {'server_type': 'blade',
'chassis_id': 2,
'blade_id' : 4}
#management_ip: <static_ip from management pool> #optional, if defined for one server, it must be defined for all nodes
#storage_ip: <Fixed IP from the storage pool> # optional, but if defined for one server, then it must be defined for all; also, if defined, management_ip has to be defined
Your_Compute-1_HostName:
rack_info: {'rack_id': 'rack2'}
ucsm_info: {'server_type': 'blade',
'chassis_id': 2,
'blade_id' : 2}
.. add more computes as needed
Your_Storage-1_HostName:
rack_info: {'rack_id': 'rack2'}
ucsm_info: {'server_type': 'rack',
'rack-unit_id': 1}
Your_Storage-2_HostName:
rack_info: {'rack_id': 'rack3'}
ucsm_info: {'server_type': 'rack',
'rack-unit_id': 2}
Your_Storage-3_HostName:
rack_info: {'rack_id': 'rack4'}
ucsm_info: {'server_type': 'rack',
'rack-unit_id': 3}
# max # of chassis id: 24
# max # of blade id: 8
#max # of rack-unit_id: 96
Note |
Cisco VIM requires the controller Rack IDs to be unique to indicate the physical rack location and provide physical redundancy
for controllers. If your controllers are all in the same rack, you must still assign a unique rack ID to the controllers to
provide for future rack redundancy. Compute and storage nodes have no Rack ID restrictions.
|