Topology

The topology consists of an IP fabric with two QFX leaf devices and two QFX spine devices. In this example, the spines are configured as data center gateways (DC-GWs) to provide connectivity to the public network.

All servers in the topology have an attachment to the external management network. This network is used for external access to the servers, and from there to “hop” to the fabric management subnet as needed. The fabric devices and cluster servers attach to a fabric management network (the fabric device connections are not shown to reduce clutter). The fabric management network supports management access, greenfield and brownfield fabric onboarding, and pushing configuration changes to the fabric devices. The controller (shown as “Contrail Cluster”) has a connection to the fabric through Leaf 1. This control-data connection is used for overlay BGP peering to the spine-based route reflectors, and to support DHCP and DNS services for BMS and VM workloads.
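
To illustrate that overlay peering, the following is a minimal sketch of a route-reflector stanza on a spine. The controller neighbor address (10.1.11.101) follows the addressing in Figure 1; the group name, AS number, and spine loopback address are illustrative assumptions, not values from this example.

    # Hypothetical spine overlay stanza; AS number and spine loopback are assumptions
    set routing-options autonomous-system 65000
    set protocols bgp group OVERLAY type internal
    set protocols bgp group OVERLAY local-address 10.1.255.1
    set protocols bgp group OVERLAY family evpn signaling
    set protocols bgp group OVERLAY cluster 10.1.255.1
    set protocols bgp group OVERLAY neighbor 10.1.11.101 description contrail-controller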

The BMS and compute workloads are also attached to leaf switches to provide underlay and overlay connectivity. The ToR switch provides a hardware-based VTEP function to support overlay connectivity for the BMS devices. The compute nodes, in contrast, use a vRouter for software VTEP functionality that supports overlay connectivity for their VMs.
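
As a reference for what the hardware VTEP function implies on a ToR, a minimal sketch is shown below. In this example the controller pushes the equivalent configuration to the leaves; the loopback address, route distinguisher, and route target here are illustrative assumptions.

    # Hypothetical leaf VTEP stanza; all values are illustrative assumptions
    set interfaces lo0 unit 0 family inet address 10.1.255.11/32
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 10.1.255.11:1
    set switch-options vrf-target target:65000:1
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all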

Figure 1: Topology

In this example, two virtual networks (VNs) are defined, VN-Green and VN-Red. Each VN includes a BMS and a VM that is housed within a compute node. For example, the VM in Compute-1 and BMS2 are both on the red virtual network, and therefore share an (overlay) IP subnet. The green VN is assigned IP subnet 10.1.101.0/24, while the red VN is assigned 10.1.102.0/24.
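
Conceptually, each VN maps to a VXLAN VNI on the leaves. The controller provisions this automatically in this example, but a sketch of the equivalent leaf configuration is shown below; the VLAN IDs and VNI values are assumptions for illustration only.

    # Hypothetical VN-to-VNI mapping; VLAN IDs and VNIs are assumptions
    set vlans VN-Green vlan-id 101
    set vlans VN-Green vxlan vni 10101
    set vlans VN-Red vlan-id 102
    set vlans VN-Red vxlan vni 10102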

This example uses DHCP to assign the host ID portion of the overlay address for the VMs and BMS servers. This results in host ID assignments of either “.6” or “.7”, as shown in the figure.

Unlike the BMS servers, the compute nodes require underlay connectivity through the fabric to the controller. The IP subnets used to support this connectivity are assigned to the ToR ports shown in Figure 1. For example, the controller attaches to Leaf 1 on port xe-0/0/2 using the 10.1.11.0/24 subnet and is assigned a host ID of .101. The ToR end of this link is configured with a host ID of .254. The IP fabric is preconfigured with an EBGP-based underlay and the IP addressing shown in the figure. This brownfield underlay supports the connectivity needed by the Contrail cluster components, as well as the EVPN-VXLAN based overlay that provides connectivity for the attached workloads.
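
For reference, a minimal sketch of the corresponding leaf-side underlay configuration follows. Only the xe-0/0/2 addressing is taken from Figure 1; the AS numbers, spine neighbor address, and policy name are hypothetical.

    # Leaf 1 port facing the controller (addressing from Figure 1)
    set interfaces xe-0/0/2 unit 0 family inet address 10.1.11.254/24
    # EBGP underlay peering toward the spines (AS numbers and neighbor are assumptions)
    set routing-options autonomous-system 65001
    set protocols bgp group UNDERLAY type external
    set protocols bgp group UNDERLAY peer-as 65000
    set protocols bgp group UNDERLAY neighbor 10.1.1.1
    # Advertise the loopback into the underlay (policy name is an assumption)
    set policy-options policy-statement EXPORT-LO0 term 1 from protocol direct
    set policy-options policy-statement EXPORT-LO0 term 1 from interface lo0.0
    set policy-options policy-statement EXPORT-LO0 term 1 then accept
    set protocols bgp group UNDERLAY export EXPORT-LO0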

Table 1: External Management IP Addresses Used in this Example

    Role of the Server                                    IP Address
    ----------------------------------------------------  -----------------
    Contrail Command Server                               10.100.70.216
    Contrail Cluster Server                               10.100.70.217
    Insights Server                                       10.100.70.218
    Insights Flows Server                                 10.100.70.219
    Compute-1                                             10.100.70.220
    Compute-2                                             10.100.70.221
    Desktop (optional; used to connect to VMs over        10.100.70.215
    the fabric network)
    NTP server (not shown in topology; must be            x.x.x.x (or name)
    reachable through the external or fabric
    management subnets)