Using the REST API
This chapter contains the following sections:
- About Getting Started with APIC Examples
- Switch Discovery with the APIC
- Creating User Accounts
- Adding Management Access
- Configuring a VMM Domain
- Creating Tenants, VRF, and Bridge Domains
- Configuring External Connectivity for Tenants
- Deploying an Application Policy
About Getting Started with APIC Examples
The steps in several examples in this guide include parameter names. These parameter names are provided as examples for convenience and ease of understanding; you are not required to use them.
Switch Discovery with the APIC
The APIC is a central point of automated provisioning and management for all the switches that are part of the ACI fabric. A single data center might include multiple ACI fabrics; each data center might have its own APIC cluster and Cisco Nexus 9000 Series switches that are part of the fabric. To ensure that a switch is managed only by a single APIC cluster, each switch must be registered with the specific APIC cluster that manages that fabric.
The APIC discovers new switches that are directly connected to any switch it currently manages. Each APIC instance in the cluster first discovers only the leaf switch to which it is directly connected. After the leaf switch is registered with the APIC, the APIC discovers all spine switches that are directly connected to the leaf switch. As each spine switch is registered, the APIC discovers all the leaf switches that are connected to that spine switch. This cascaded discovery allows the APIC to discover the entire fabric topology in a few simple steps.
- Switch Registration with the APIC Cluster
- Switch Discovery Validation and Switch Management from the APIC
- Validating the Fabric Topology
- Unmanaged Switch Connectivity in VM Management
Switch Registration with the APIC Cluster
Note | Before you begin registering a switch, make sure that all switches in the fabric are physically connected and booted in the desired configuration. For information about the installation of the chassis, see http://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/products-installation-guides-list.html. |
After a switch is registered with the APIC, the switch is part of the APIC-managed fabric inventory. With the Application Centric Infrastructure fabric (ACI fabric), the APIC is the single point of provisioning, management, and monitoring for switches in the infrastructure.
Note | The infrastructure IP address range must not overlap with other IP addresses used in the ACI fabric for in-band and out-of-band networks. |
Registering the Unregistered Switches Using the REST API
Note | The infrastructure IP address range must not overlap with other IP addresses used in the ACI fabric for in-band and out-of-band networks. |
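As a minimal sketch, the following call registers an unregistered leaf switch by creating a fabric node identity; the serial number, node ID, and name are placeholder values to replace with your own switch details:
POST https://apic-ip/api/policymgr/mo/uni/controller/nodeidentpol.xml
<fabricNodeIdentPol>
  <!-- serial, nodeId, and name are example values for the switch being registered -->
  <fabricNodeIdentP serial="FOC0000AAAA" nodeId="101" name="leaf1"/>
</fabricNodeIdentPol>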
Switch Discovery Validation and Switch Management from the APIC
After the switches are registered with the APIC, the APIC performs fabric topology discovery automatically to gain a view of the entire network and to manage all the switches in the fabric topology.
Each switch can be configured, monitored, and upgraded from the APIC without having to access the individual switches.
Validating the Registered Switches Using the REST API
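One way to validate the registered switches is to query the topSystem class, as in this sketch (apic-ip stands in for your APIC address):
GET https://apic-ip/api/node/class/topSystem.xml
Each topSystem object in the response reports a node's name and role (controller, leaf, or spine), which you can check against the expected inventory.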
Validating the Fabric Topology
After all the switches are registered with the APIC cluster, the APIC automatically discovers all the links and connectivity in the fabric and, as a result, discovers the entire topology.
Validating the Fabric Topology Using the REST API
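The discovered topology can be read back with a query such as this sketch, which lists the fabric nodes under pod 1 (the pod name is an assumption):
GET https://apic-ip/api/node/mo/topology/pod-1.xml?query-target=children&target-subtree-class=fabricNode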
Unmanaged Switch Connectivity in VM Management
Note | The ACI simulator only supports LLDP. Cisco Discovery Protocol (CDP) is not supported. |
Time Synchronization and NTP
Within the Cisco Application Centric Infrastructure (ACI) fabric, time synchronization is a crucial capability upon which many of the monitoring, operational, and troubleshooting tasks depend. Clock synchronization is important for proper analysis of traffic flows as well as for correlating debug and fault time stamps across multiple fabric nodes.
An offset present on one or more devices can hamper the ability to properly diagnose and resolve many common operational issues. In addition, clock synchronization allows for the full utilization of the atomic counter capability that is built into the ACI, upon which the application health scores depend. Nonexistent or improper configuration of time synchronization does not necessarily trigger a fault or a low health score. You should configure time synchronization before deploying a full fabric or applications so as to enable proper usage of these features. The most widely adopted method for synchronizing a device clock is to use Network Time Protocol (NTP).
Prior to configuring NTP, consider what management IP address scheme is in place within the ACI fabric. There are two options for configuring management of all ACI nodes and Application Policy Infrastructure Controllers (APICs), in-band management and/or out-of-band management. Depending upon which management option is chosen for the fabric, configuration of NTP will vary. Another consideration in deploying time synchronization is where the time source is located. The reliability of the source must be carefully considered when determining if you will use a private internal clock or an external public clock.
In-Band Management NTP
- In-Band Management NTP—When an ACI fabric is deployed with in-band management, consider the reachability of the NTP server from within the ACI in-band management network. In-band IP addressing used within the ACI fabric is not reachable from anywhere outside the fabric. To leverage an NTP server external to the fabric with in-band management, construct a policy to enable this communication.
NTP over IPv6
NTP over IPv6 addresses is supported in hostnames and peer addresses. The gai.conf file can also be set up to prefer the IPv6 address of a provider or a peer over an IPv4 address. The user can provide a hostname that resolves to an IP address (either IPv4 or IPv6, depending on the installation or preference).
Configuring NTP Using the REST API
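The following is a minimal sketch of an NTP policy, assuming a reachable provider named ntp.example.com and the default out-of-band management EPG; adjust the provider name and EPG to your deployment:
POST https://apic-ip/api/node/mo/uni/fabric/time-default.xml
<datetimePol name="default" adminSt="enabled" authSt="disabled">
  <!-- ntp.example.com is a placeholder provider; preferred="yes" marks it as the primary source -->
  <datetimeNtpProv name="ntp.example.com" preferred="yes" minPoll="4" maxPoll="6">
    <!-- reach the server over the default out-of-band management EPG -->
    <datetimeRsNtpProvToEpg tDn="uni/tn-mgmt/mgmtp-default/oob-default"/>
  </datetimeNtpProv>
</datetimePol>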
Verifying NTP Operation Using the GUI
Verifying NTP Policy Deployed to Each Node Using the CLI
Creating User Accounts
Configuring a Local User
In the initial configuration script, the admin account is configured and the admin is the only user when the system starts. The APIC supports a granular, role-based access control system where user accounts can be created with various roles including non-admin users with fewer privileges.
Configuring a Remote User
Instead of configuring local users, you can point the APIC at a centralized enterprise credential data center. The APIC supports Lightweight Directory Access Protocol (LDAP), Active Directory, RADIUS, and TACACS+.
To configure a remote user authenticated through an external authentication provider, you must meet the following prerequisites:
Configuring a Local User Using the REST API
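As a sketch, the following creates a local user under the user extension object; the user name jsmith and the role assignment are examples:
POST https://apic-ip/api/policymgr/mo/uni/userext.xml
<aaaUser name="jsmith" pwd="<password>">
  <aaaUserDomain name="all">
    <!-- grant write privileges for the aaa role across all security domains -->
    <aaaUserRole name="aaa" privType="writePriv"/>
  </aaaUserDomain>
</aaaUser>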
AV Pair on the External Authentication Server
You can add a Cisco attribute/value (AV) pair to the existing user record to propagate the user privileges to the APIC controller. The Cisco AV pair is a single string that you use to specify the Role-Based Access Control (RBAC) roles and privileges for an APIC user. An example configuration for an open RADIUS server (/etc/raddb/users) is as follows:
aaa-network-admin Cleartext-Password := "<password>" Cisco-avpair = "shell:domains = all/aaa/read-all(16001)"
- Changing the Default Behavior for Remote Users with Missing or Bad Cisco AV Pairs
- Best Practice for Assigning AV Pairs
- Configuring an AV Pair on the External Authentication Server
Changing the Default Behavior for Remote Users with Missing or Bad Cisco AV Pairs
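When a remote user's Cisco AV pair is missing or malformed, the default behavior is to reject the login. A sketch of changing this through the REST API follows; the aaaAuthRealm object and its defRolePolicy attribute are the assumed mechanism here:
POST https://apic-ip/api/node/mo/uni/userext/authrealm.xml
<!-- assign-default-role admits such users with a read-only fallback role; the default, no-login, rejects them -->
<aaaAuthRealm defRolePolicy="assign-default-role"/>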
Best Practice for Assigning AV Pairs
As a best practice, Cisco recommends that you assign unique UNIX user IDs in the range 16000-23999 for the AV pairs that are assigned to users of the bash shell (using SSH, Telnet, or Serial/KVM consoles). If the Cisco AV pair does not provide a UNIX user ID, the user is assigned the default ID of 23999 (or a similar number from the range), which makes that user's home directory, files, and processes accessible to every other remote user who was also assigned the UNIX ID 23999.
Configuring an AV Pair on the External Authentication Server
The numerical value within the parentheses in the attribute/value (AV) pair string is used as the UNIX user ID of the user who is logged in using Secure Shell (SSH) or Telnet.
Configuring a Remote User Using the REST API
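As a sketch, the following adds a RADIUS provider under the RADIUS extension object; the hostname and shared secret are placeholders:
POST https://apic-ip/api/policymgr/mo/uni/userext/radiusext.xml
<!-- radius.example.com and the key are placeholders for your RADIUS server and shared secret -->
<aaaRadiusProvider name="radius.example.com" key="<shared-secret>"/>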
Adding Management Access
In-band management access—You can configure in-band management connectivity to the APIC and the ACI fabric. You first configure the VLANs that the APIC will use to communicate with the leaf switches, and then you configure the VLANs that the VMM servers will use to communicate with the leaf switches.
Note | Do not configure the APIC selector (the set of leaf ports to which the APIC is connected) when configuring the simulator with in-band management access. |
Configuring the external management instance profile under the management tenant for in-band has no effect on the protocols that are configured under the fabric-wide communication policies. The subnets and contracts specified under the external management instance profile do not affect HTTP/HTTPS or SSH/Telnet.
IPv4/IPv6 Addresses and In-Band Policies
In-band management addresses can be provisioned on the APIC controller only through a policy (Postman REST API, NX-OS Style CLI, or GUI). Additionally, the in-band management addresses must be configured statically on each node.
Configuring Management Access
Configuring In-Band Management Access Using the REST API
Note | When using the ACI simulator, the IP addresses are automatically assigned. If you configure any IP addresses in the following steps, the IP addresses you configure will not be effective. |
IPv4 and IPv6 addresses are supported for in-band management access. IPv6 configurations are supported using static configurations (for both in-band and out-of-band). IPv4 and IPv6 dual in-band and out-of-band configurations are supported only through static configuration. For more information, see the KB article, Configuring Static Management Access in Cisco APIC.
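As a sketch of the static per-node assignment mentioned above, the following assigns an in-band address to leaf node 101 under the default in-band EPG; the VLAN, addresses, and node ID are example values:
POST https://apic-ip/api/mo/uni/tn-mgmt.xml
<fvTenant name="mgmt">
  <mgmtMgmtP name="default">
    <!-- vlan-10 is an example in-band management encapsulation -->
    <mgmtInB name="default" encap="vlan-10">
      <!-- static in-band address and gateway for node 101 -->
      <mgmtRsInBStNode tDn="topology/pod-1/node-101" addr="10.13.1.101/24" gw="10.13.1.254"/>
    </mgmtInB>
  </mgmtMgmtP>
</fvTenant>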
IPv6 Table Modifications to Mirror the Existing IP Tables Functionality
All IPv6 tables mirror the existing IP tables functionality, except for Network Address Translation (NAT).
Existing IP Tables
- Earlier, every rule in the IPv6 tables was executed one at a time, and a system call was made for every rule addition or deletion.
- Whenever a new policy was added, rules were appended to the existing IP tables file, and no extra modifications were done to the file.
- When a new source port was configured in the out-of-band policy, it added source and destination rules with the same port number.
Modifications to IP Tables
- When IP tables are created, they are first written into hash maps, which are then written into the intermediate file (iptables-new) and restored. When saved, a new IP tables file is created in the /etc/sysconfig/ folder. You can find both files at the same location. Instead of a system call for every rule, a system call is made only while restoring and saving the file.
- When a new policy is added, instead of being appended to the file, an IP table is created from scratch: default policies are loaded into the hash maps, new policies are checked for and added to the hash maps, and the hash maps are then written to the intermediate file (/etc/sysconfig/iptables-new) and saved.
- It is not possible to configure a source port alone for a rule in the out-of-band policy. Either a destination port, or a source port along with a destination port, can be added to the rules.
- When a new policy is added, a new rule is added to the IP tables file. This rule changes the access flow of the IP tables default rules:
-A INPUT -s <OOB Address Ipv4/Ipv6> -j apic-default
- When a newly added rule appears in the iptables-new file but not in the IP tables file, it signifies that there was an error in the iptables-new file. Only if the restoration is successful is the file saved and the new rules seen in the IP tables file.
Management Connectivity Modes
Establish connectivity to external entities using the out-of-band or in-band network, depending upon whether you have configured out-of-band and/or in-band management connectivity. The following two modes are available to establish connectivity to external entities such as the vCenter server:
- Layer 2 management connectivity—Use this mode when the external entities are attached to the leaf node using Layer 2.
- Layer 3 management connectivity—Use this mode when the external entities are attached to the leaf node using Layer 3 through a router. The leaf is connected to a router through which external entities can be reached.
- Configuring Layer 2 Management Connectivity Using the REST API
- Configuring Layer 3 Management Connectivity Using the REST API
- Validating Management Connectivity
Configuring Layer 2 Management Connectivity Using the REST API
Note | The name vmm is used as an example string in this task. |
The policy creates the following objects under Tenant-mgmt:
Creates a bridge domain (vmm) and the following related objects:
- Creates a subnet object with the IP prefix 192.168.64.254/18 in this bridge domain. This IP address (192.168.64.254) is assigned to the bridge domain and typically is used as the switch virtual interface (SVI) in a traditional switch configuration.
- Creates an association to the in-band network (ctx).
Creates an application profile (vmm) and a management EPG (vmmMgmt) with related objects.
Before you create a vCenter domain profile, you must establish connectivity to an external network using an in-band management network.
Make sure that the IP address range configured as part of management connectivity policy does not overlap with the infrastructure IP address range used by the ACI fabric.
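A condensed sketch of the objects listed above follows; the leaf port and VLAN encapsulation are assumptions:
POST https://apic-ip/api/mo/uni/tn-mgmt.xml
<fvTenant name="mgmt">
  <fvBD name="vmm">
    <!-- associate the bridge domain with the in-band network and define its SVI subnet -->
    <fvRsCtx tnFvCtxName="inb"/>
    <fvSubnet ip="192.168.64.254/18"/>
  </fvBD>
  <fvAp name="vmm">
    <fvAEPg name="vmmMgmt">
      <fvRsBd tnFvBDName="vmm"/>
      <!-- static path to the leaf port facing the external entity; the port and encap are examples -->
      <fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]" encap="vlan-11"/>
    </fvAEPg>
  </fvAp>
</fvTenant>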
Configuring Layer 3 Management Connectivity Using the REST API
The name vmm is used as an example string in this task.
The policy creates the following objects under Tenant-mgmt:
- Creates a routed outside policy (vmm) with the following details:
  - Creates a Layer 3 external network instance profile object (vmmMgmt).
  - Creates a route for the remote network (192.168.64.0/18) with the IP address of the next-hop router (192.168.62.2).
  - Creates a logical node profile object (borderLeaf) that is attached to leaf1.
  - Creates a port profile (portProfile1) with the routed interface 1/40 and the IP address 192.168.62.1/30.
  - Creates an association to the in-band network (ctx).
Make sure that the IP address range configured as part of the management connectivity policy does not overlap with the infrastructure IP address range used by the ACI fabric.
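A condensed sketch of the routed outside policy described above follows; the node ID and router ID are assumptions, and the other values follow the example details:
POST https://apic-ip/api/mo/uni/tn-mgmt.xml
<fvTenant name="mgmt">
  <l3extOut name="vmm">
    <l3extInstP name="vmmMgmt"/>
    <l3extLNodeP name="borderLeaf">
      <!-- leaf1 is assumed to be node 101; rtrId is an example router ID -->
      <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="192.168.62.1">
        <!-- static route to the remote network via the next-hop router -->
        <ipRouteP ip="192.168.64.0/18">
          <ipNexthopP nhAddr="192.168.62.2"/>
        </ipRouteP>
      </l3extRsNodeL3OutAtt>
      <l3extLIfP name="portProfile1">
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]" ifInstT="l3-port" addr="192.168.62.1/30"/>
      </l3extLIfP>
    </l3extLNodeP>
    <!-- association to the in-band network (ctx) -->
    <l3extRsEctx tnFvCtxName="inb"/>
  </l3extOut>
</fvTenant>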
Validating Management Connectivity
This validation process applies to both Layer 2 and Layer 3 modes and can be used to verify connectivity that is established by using the APIC GUI, REST API, or CLI.
After completing the steps to establish management connectivity, log in to the APIC console. Ping the IP address of the vCenter server that is reachable (for example, 192.168.81.2) and verify that the ping works. This action indicates that the policies have been successfully applied.
Configuring a VMM Domain
Configuring Virtual Machine Networking Policies
The APIC integrates with third-party VM managers (VMMs) (for example, VMware vCenter and SCVMM) to extend the benefits of ACI to the virtualized infrastructure. The APIC enables the ACI policies inside the VMM system to be used by its administrator.
This section provides examples of VMM integration using VMware vCenter and vShield. For details about the different modes of Cisco ACI and VMM integration, see the ACI Virtualization Guide.
About the VM Manager
Note | Information about the necessary configuration of the APIC for integration with the vCenter is described here. For instructions about configuring the VMware components, see the VMware documentation. |
The following are details of some VM manager terms:
- A VM controller is an external virtual machine management entity such as VMware vCenter and VMware vShield. The APIC communicates with the controller to publish network policies that are applied to virtual workloads. A VM controller administrator provides an APIC administrator with a VM controller authentication credential; multiple controllers of the same type can use the same credential.
- Credentials represent the authentication credentials to communicate with VM controllers. Multiple controllers can use the same credentials.
- A virtual machine mobility domain (vCenter mobility domain) is a grouping of VM controllers with similar networking policy requirements. This mandatory container holds one or more VM controllers with policies such as a VLAN pool, a server-to-network MTU policy, or a server-to-network access LACP policy. When an endpoint group gets associated with a vCenter domain, network policies get pushed to all the VM controllers in the vCenter domain.
- A pool represents a range of traffic encapsulation identifiers (for example, VLAN IDs, VNIDs, and multicast addresses). A pool is a shared resource and can be consumed by multiple domains such as VMM and Layer 4 to Layer 7 services. A leaf switch does not support overlapping VLAN pools. You must not associate different overlapping VLAN pools with the VMM domain. The two types of VLAN-based pools are as follows:
  - Dynamic pools—Managed internally by the APIC to allocate VLANs for endpoint groups (EPGs). A vCenter domain can associate only to a dynamic pool.
  - Static pools—The EPG has a relation to the domain, and the domain has a relation to the pool. The pool contains a range of encapsulated VLANs and VXLANs. For static EPG deployment, the user defines the interface and the encapsulation. The encapsulation must be within the range of a pool that is associated with a domain with which the EPG is associated.
- For a VMware vCenter to be deployed, it must operate in VLAN mode or VXLAN mode. A VMM domain must be associated with a VLAN pool, and a vShield must be associated with the vCenter.
About Attachable Entity Profile
Attach Entity Profiles
The ACI fabric provides multiple attachment points that connect through leaf ports to various external entities such as baremetal servers, hypervisors, Layer 2 switches (for example, the Cisco UCS fabric interconnect), and Layer 3 routers (for example Cisco Nexus 7000 Series switches). These attachment points can be physical ports, port channels, or a virtual port channel (vPC) on the leaf switches.
An attachable entity profile (AEP) represents a group of external entities with similar infrastructure policy requirements. The infrastructure policies consist of physical interface policies, for example, Cisco Discovery Protocol (CDP), Link Layer Discovery Protocol (LLDP), maximum transmission unit (MTU), and Link Aggregation Control Protocol (LACP).
A VM manager (VMM) domain automatically derives the physical interfaces policies from the interface policy groups that are associated with an AEP.
- An override policy at the AEP can be used to specify a different physical interface policy for a VMM domain. This policy is useful in scenarios where a hypervisor is connected to the leaf switch through an intermediate Layer 2 node, and a different policy is desired at the leaf switch and hypervisor physical ports. For example, you can configure LACP between a leaf switch and a Layer 2 node. At the same time, you can disable LACP between the hypervisor and the Layer 2 switch by disabling LACP under the AEP override policy.
An AEP is required to deploy any VLAN pools on the leaf switches. It is possible to reuse the encapsulation pools (for example, VLAN) across different leaf switches. An AEP implicitly provides the scope of the VLAN pool (associated to the domain) to the physical infrastructure.
Prerequisites for Creating a VMM Domain Profile
Custom User Account with Minimum VMware vCenter Privileges
To configure the vCenter from Cisco APIC, your credentials must allow the following minimum set of privileges within the vCenter:
These privileges allow the APIC to send VMware API commands to the vCenter for creating the DVS or AVS, creating the VMK interface (AVS), publishing port groups, and relaying all necessary alerts.
Creating a VMM Domain Profile
The examples of a VMM domain in this section use either a vCenter domain, or vCenter and vShield domains.
Creating a vCenter Domain Profile Using the REST API
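A minimal sketch follows, assuming a dynamic VLAN pool named VlanRange and a vCenter reachable at vcenter.example.com whose datacenter is named DC1; all names and credentials are examples:
POST https://apic-ip/api/policymgr/mo/.xml
<polUni>
  <infraInfra>
    <!-- dynamic VLAN pool that the APIC draws from for EPG port groups -->
    <fvnsVlanInstP name="VlanRange" allocMode="dynamic">
      <fvnsEncapBlk name="encap" from="vlan-100" to="vlan-400"/>
    </fvnsVlanInstP>
  </infraInfra>
  <vmmProvP vendor="VMware">
    <vmmDomP name="productionDC">
      <!-- vCenter credentials; use a custom account with the minimum privileges described above -->
      <vmmUsrAccP name="admin" usr="administrator" pwd="<password>"/>
      <vmmCtrlrP name="vcenter1" hostOrIp="vcenter.example.com" rootContName="DC1">
        <vmmRsAcc tDn="uni/vmmp-VMware/dom-productionDC/usracc-admin"/>
      </vmmCtrlrP>
      <!-- tDn of the VLAN pool created above; adjust to your pool's DN -->
      <infraRsVlanNs tDn="uni/infra/vlanns-[VlanRange]-dynamic"/>
    </vmmDomP>
  </vmmProvP>
</polUni>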
Creating a vCenter and a vShield Domain Profile Using the REST API
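Building on the previous sketch, a vShield domain profile additionally associates VXLAN and multicast address pools and ties the vShield controller to its vCenter; the scope attribute, object names, and pool ranges below are assumptions:
POST https://apic-ip/api/policymgr/mo/.xml
<polUni>
  <infraInfra>
    <!-- VXLAN and multicast pools consumed by vShield; the ranges are examples -->
    <fvnsVxlanInstP name="vxlan1">
      <fvnsEncapBlk name="encap" from="vxlan-6000" to="vxlan-6199"/>
    </fvnsVxlanInstP>
    <fvnsMcastAddrInstP name="multicast1">
      <fvnsMcastAddrBlk from="225.1.1.1" to="225.1.1.255"/>
    </fvnsMcastAddrInstP>
  </infraInfra>
  <vmmProvP vendor="VMware">
    <vmmDomP name="productionDC">
      <vmmUsrAccP name="vshield" usr="admin" pwd="<password>"/>
      <!-- scope="iaas" identifies the controller as a vShield manager -->
      <vmmCtrlrP name="vshield1" hostOrIp="vshield.example.com" scope="iaas">
        <vmmRsAcc tDn="uni/vmmp-VMware/dom-productionDC/usracc-vshield"/>
        <!-- tie the vShield to its vCenter controller -->
        <vmmRsVmmCtrlrP tDn="uni/vmmp-VMware/dom-productionDC/ctrlr-vcenter1"/>
        <!-- tDns assumed; adjust to the DNs of the pools created above -->
        <vmmRsVxlanNs tDn="uni/infra/vxlanns-[vxlan1]"/>
        <vmmRsMcastAddrNs tDn="uni/infra/maddrns-multicast1"/>
      </vmmCtrlrP>
    </vmmDomP>
  </vmmProvP>
</polUni>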
Creating Tenants, VRF, and Bridge Domains
Tenants Overview
- A tenant contains policies that provide qualified users with domain-based access control. Qualified users can access privileges such as tenant administration and networking administration.
- A user requires read/write privileges for accessing and configuring policies in a domain. A tenant user can have specific privileges in one or more domains.
- In a multitenancy environment, a tenant provides group user access privileges so that resources are isolated from one another (such as for endpoint groups and networking). These privileges also enable different users to manage different tenants.
Tenant Creation
A tenant contains primary elements such as filters, contracts, bridge domains, and application profiles that you can create after you first create a tenant.
VRF and Bridge Domains
You can create and specify a VRF and a bridge domain for the tenant. The defined bridge domain element subnets reference a corresponding Layer 3 context.
For details about enabling IPv6 Neighbor Discovery, see the related KB article, Creating a Tenant, VRF, and Bridge Domain with IPv6 Neighbor Discovery.
Creating the Tenant, VRF, and Bridge Domain Using the REST API
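A minimal sketch that creates a tenant, a VRF, and a bridge domain with one subnet follows; all names and the subnet are examples:
POST https://apic-ip/api/mo/uni.xml
<fvTenant name="ExampleCorp">
  <!-- the VRF (context) for the tenant -->
  <fvCtx name="pvn1"/>
  <fvBD name="bd1">
    <!-- bind the bridge domain to the VRF and define its gateway subnet -->
    <fvRsCtx tnFvCtxName="pvn1"/>
    <fvSubnet ip="10.10.100.1/24"/>
  </fvBD>
</fvTenant>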
Configuring External Connectivity for Tenants
Note | The MP-BGP route reflector and the OSPF external routed network protocols do not work if you are using the simulator. |
Before you can distribute the static route to the other leaf switches on the Application Centric Infrastructure (ACI) fabric, a multiprotocol BGP (MP-BGP) process must first be operating, and the spine switches must be configured as BGP route reflectors.
To integrate the ACI fabric into an external routed network, you can configure Open Shortest Path First (OSPF) for management tenant Layer 3 connectivity.
- Configuring an MP-BGP Route Reflector Using the REST API
- Verifying the MP-BGP Route Reflector Configuration
- Creating OSPF External Routed Network for Management Tenant Using REST API
Configuring an MP-BGP Route Reflector Using the REST API
Step 1 | Mark the spine switches as route reflectors.
Example:
POST URL: https://apic-ip/api/policymgr/mo/uni/fabric.xml
<bgpInstPol name="default">
  <bgpAsP asn="1"/>
  <bgpRRP>
    <bgpRRNodePEp id="<spine_id1>"/>
    <bgpRRNodePEp id="<spine_id2>"/>
  </bgpRRP>
</bgpInstPol> |
Step 2 | Set up the pod selector using the following post.
Example: For the FuncP setup—
POST URL: https://apic-ip/api/policymgr/mo/uni.xml
<fabricFuncP>
  <fabricPodPGrp name="bgpRRPodGrp">
    <fabricRsPodPGrpBGPRRP tnBgpInstPolName="default"/>
  </fabricPodPGrp>
</fabricFuncP>
Example: For the PodP setup—
POST URL: https://apic-ip/api/policymgr/mo/uni.xml
<fabricPodP name="default">
  <fabricPodS name="default" type="ALL">
    <fabricRsPodPGrp tDn="uni/fabric/funcprof/podpgrp-bgpRRPodGrp"/>
  </fabricPodS>
</fabricPodP> |
Verifying the MP-BGP Route Reflector Configuration
Step 1 | Verify the configuration by performing the following actions: |
Step 2 | Verify that the autonomous system number is configured in the spine switches by performing the following actions: |
Creating OSPF External Routed Network for Management Tenant Using REST API
- You must verify that the router ID and the logical interface profile IP address are different and do not overlap.
- The following steps are for creating an OSPF external routed network for a management tenant. To create an OSPF external routed network for a tenant, you must choose a tenant and create a VRF for the tenant.
- For more details, see also the KB article about Transit Routing.
Example:
POST: https://192.0.20.123/api/mo/uni/tn-mgmt.xml
<fvTenant name="mgmt">
  <fvBD name="bd1">
    <fvRsBDToOut tnL3extOutName="RtdOut"/>
    <fvSubnet ip="1.1.1.1/16"/>
    <fvSubnet ip="1.2.1.1/16"/>
    <fvSubnet ip="40.1.1.1/24" scope="public"/>
    <fvRsCtx tnFvCtxName="inb"/>
  </fvBD>
  <fvCtx name="inb"/>
  <l3extOut name="RtdOut">
    <l3extRsL3DomAtt tDn="uni/l3dom-extdom"/>
    <l3extInstP name="extMgmt"/>
    <l3extLNodeP name="borderLeaf">
      <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="10.10.10.10"/>
      <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-102" rtrId="10.10.10.11"/>
      <l3extLIfP name="portProfile">
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]" ifInstT="l3-port" addr="192.168.62.1/24"/>
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-102/pathep-[eth1/40]" ifInstT="l3-port" addr="192.168.62.5/24"/>
        <ospfIfP/>
      </l3extLIfP>
    </l3extLNodeP>
    <l3extRsEctx tnFvCtxName="inb"/>
    <ospfExtP areaId="57"/>
  </l3extOut>
</fvTenant>
Deploying an Application Policy
Three-Tier Application Deployment
A filter specifies the data protocols to be allowed or denied by a contract that contains the filter. A contract can contain multiple subjects. A subject can be used to realize unidirectional or bidirectional filters. A unidirectional filter is used in one direction, either from consumer to provider (IN) or from provider to consumer (OUT). A bidirectional filter is the same filter that is used in both directions. It is not reflexive.
Contracts are policies that enable inter-End Point Group (inter-EPG) communication. These policies are the rules that specify communication between application tiers. If no contract is attached to the EPG, inter-EPG communication is disabled by default. No contract is required for intra-EPG communication because intra-EPG communication is always allowed.
Application profiles enable you to model application requirements that the APIC then automatically renders in the network and data center infrastructure. The application profiles enable administrators to approach the resource pool in terms of applications rather than infrastructure building blocks. The application profile is a container that holds EPGs that are logically related to one another. EPGs can communicate with other EPGs in the same application profile and with EPGs in other application profiles.
To deploy an application policy, you must create the required application profiles, filters, and contracts. Typically, the APIC fabric hosts a three-tier application within a tenant network. In this example, the application is implemented by using three servers (a web server, an application server, and a database server).
The web server has the HTTP filter, the application server has the Remote Method Invocation (RMI) filter, and the database server has the Structured Query Language (SQL) filter. The application server consumes the SQL contract to communicate with the database server. The web server consumes the RMI contract to communicate with the application server. The traffic enters from the web server and communicates with the application server. The application server then communicates with the database server, and the traffic can also communicate externally.
Parameters to Create a Filter for http
The parameters to create a filter for http in this example are as follows:
Parameter Name | Filter for http
---|---
Name | http
Number of Entries | 2
Entry Name | Dport-80, Dport-443
Ethertype | IP
Protocol | tcp, tcp
Destination Port | http, https
Parameters to Create Filters for rmi and sql
The parameters to create filters for rmi and sql in this example are as follows:
Parameter Name | Filter for rmi | Filter for sql
---|---|---
Name | rmi | sql
Number of Entries | 1 | 1
Entry Name | Dport-1099 | Dport-1521
Ethertype | IP | IP
Protocol | tcp | tcp
Destination Port | 1099 | 1521
Example Application Profile Database
The application profile database in this example is as follows:
EPG | Provided Contracts | Consumed Contracts
---|---|---
web | web | rmi
app | rmi | sql
db | sql | --
Deploying an Application Policy Using the REST API
The port the EPG uses must belong to one of the VM Managers (VMM) or physical domains associated with the EPG.
Step 1 | Send this HTTP POST message to deploy the application using the XML API.
Example:
POST https://192.0.20.123/api/mo/uni/tn-ExampleCorp.xml |
Step 2 | Include this XML structure in the body of the POST message.
Example:
<fvTenant name="ExampleCorp">
  <fvAp name="OnlineStore">
    <fvAEPg name="web">
      <fvRsBd tnFvBDName="bd1"/>
      <fvRsCons tnVzBrCPName="rmi"/>
      <fvRsProv tnVzBrCPName="web"/>
      <fvRsDomAtt tDn="uni/vmmp-VMware/dom-datacenter"/>
    </fvAEPg>
    <fvAEPg name="db">
      <fvRsBd tnFvBDName="bd1"/>
      <fvRsProv tnVzBrCPName="sql"/>
      <fvRsDomAtt tDn="uni/vmmp-VMware/dom-datacenter"/>
    </fvAEPg>
    <fvAEPg name="app">
      <fvRsBd tnFvBDName="bd1"/>
      <fvRsProv tnVzBrCPName="rmi"/>
      <fvRsCons tnVzBrCPName="sql"/>
      <fvRsDomAtt tDn="uni/vmmp-VMware/dom-datacenter"/>
    </fvAEPg>
  </fvAp>
  <vzFilter name="http">
    <vzEntry dFromPort="80" name="DPort-80" prot="tcp" etherT="ip"/>
    <vzEntry dFromPort="443" name="DPort-443" prot="tcp" etherT="ip"/>
  </vzFilter>
  <vzFilter name="rmi">
    <vzEntry dFromPort="1099" name="DPort-1099" prot="tcp" etherT="ip"/>
  </vzFilter>
  <vzFilter name="sql">
    <vzEntry dFromPort="1521" name="DPort-1521" prot="tcp" etherT="ip"/>
  </vzFilter>
  <vzBrCP name="web">
    <vzSubj name="web">
      <vzRsSubjFiltAtt tnVzFilterName="http"/>
    </vzSubj>
  </vzBrCP>
  <vzBrCP name="rmi">
    <vzSubj name="rmi">
      <vzRsSubjFiltAtt tnVzFilterName="rmi"/>
    </vzSubj>
  </vzBrCP>
  <vzBrCP name="sql">
    <vzSubj name="sql">
      <vzRsSubjFiltAtt tnVzFilterName="sql"/>
    </vzSubj>
  </vzBrCP>
</fvTenant> |
In the XML structure, the first line modifies, or creates if necessary, the tenant named ExampleCorp.
<fvTenant name="ExampleCorp">
This line creates an application network profile named OnlineStore.
<fvAp name="OnlineStore">
The elements within the application network profile create three endpoint groups, one for each of the three servers. The following lines create an endpoint group named web and associate it with an existing bridge domain named bd1. This endpoint group is a consumer, or destination, of the traffic allowed by the binary contract named rmi and is a provider, or source, of the traffic allowed by the binary contract named web. The endpoint group is associated with the VMM domain named datacenter.
<fvAEPg name="web"> <fvRsBd tnFvBDName="bd1"/> <fvRsCons tnVzBrCPName="rmi"/> <fvRsProv tnVzBrCPName="web"/> <fvRsDomAtt tDn="uni/vmmp-VMware/dom-datacenter"/> </fvAEPg>
The remaining two endpoint groups, for the application server and the database server, are created in a similar way.
The following lines define a traffic filter named http that specifies TCP traffic of types HTTP (port 80) and HTTPS (port 443).
<vzFilter name="http" > <vzEntry dFromPort="80" name="DPort-80" prot="tcp" etherT="ip"/> <vzEntry dFromPort="443" name="DPort-443" prot="tcp" etherT="ip"/> </vzFilter>
The remaining two filters, for application data and database (sql) data, are created in a similar way.
The following lines create a binary contract named web that incorporates the filter named http:
<vzBrCP name="web"> <vzSubj name="web"> <vzRsSubjFiltAtt tnVzFilterName="http"/> </vzSubj> </vzBrCP>
The remaining two contracts, for rmi and sql data protocols, are created in a similar way.
The final line closes the structure:
</fvTenant>