Underlay Network Configuration for ContrailVM

The ContrailVM can be configured in several ways for underlay (IP fabric) connectivity:

Standard Switch Setup

In the standard switch setup, the ContrailVM is provided an interface through the standard switch port group that is used for management and control data (see Figure 1).

Figure 1: Standard Switch Setup

To set up the ContrailVM in this mode, the standard switch and port group must be configured in vcenter_vars.yml.

If a switch name is not configured, the default value of vSwitch0 is used for the standard switch.

The ContrailVM supports multiple NICs for the management and control_data interfaces. The management interface must have the DHCP flag set to true; the control_data interface can have DHCP set to false. When DHCP is set to false, you must configure the IP address of the control_data interface and ensure its connectivity. Additional configuration, such as static routes and bond interfaces, must also be done by the user.

The following is an example configuration with a standard switch.
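The sketch below illustrates the idea only; the key names and structure (for example, networks, type, and port_group under contrail_vm) are assumptions, so verify them against the vcenter_vars.yml template shipped with your release.

contrail_vm:                         # per-ESXi-host section in vcenter_vars.yml
  networks:
    - type: management               # NIC from the standard switch port group
      switch_name: vSwitch0          # defaults to vSwitch0 if omitted
      port_group: contrail-mgmt-pg
      dhcp: true                     # management must use DHCP
    - type: control_data
      switch_name: vSwitch1
      port_group: contrail-ctrl-pg
      dhcp: false                    # with DHCP false, assign the address yourself
      ip: 172.16.10.5/24             # user-managed static address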

Distributed Switch Setup

A distributed switch functions as a single virtual switch across associated hosts.

In the distributed switch setup, the ContrailVM is provided an interface through the distributed switch port group that is used for management and control data (see Figure 2).

The ContrailVM can be configured to use the management and control_data NICs from the DVS. When a DVS configuration is specified, the standard switch configuration is ignored.

Figure 2: Distributed Switch Setup

To set up the ContrailVM in this mode, configure the distributed switch, port group, number of ports in the port group, and the uplink in the vcenter_servers section of vcenter_vars.yml.

Note:

The uplink can be a link aggregation group (LAG). If you use a LAG, the DVS and the LAG must be preconfigured.

The following is an example distributed switch configuration in vcenter_vars.yml.
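A hedged sketch of the relevant vcenter_servers entry follows; the dv_switch and dv_port_group key names are assumptions to be checked against the template for your release.

vcenter_servers:
  - SRV1:                            # illustrative server entry name
      dv_switch:
        dv_switch_name: dvs-control-data
      dv_port_group:
        dv_portgroup_name: dv-pg-control-data
        number_of_ports: 3
        uplink: vmnic2               # may instead name a preconfigured LAG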

PCI Pass-Through Setup

PCI pass-through is a virtualization technique in which a physical Peripheral Component Interconnect (PCI) device is directly connected to a virtual machine, bypassing the hypervisor. Drivers in the VM can directly access the PCI device, resulting in a high rate of data transfer.

In the pass-through setup, the ContrailVM is provided management and control data interfaces. Pass-through interfaces are used for control data. Figure 3 shows a PCI pass-through setup with a single control_data interface.

Figure 3: PCI Pass-Through with Single Control Data Interface

When you set up the ContrailVM with pass-through interfaces, the PCI pass-through interfaces are exposed as Ethernet interfaces in the ContrailVM when the ESXi hosts are provisioned during installation, and they are identified in the control_data device field.

The following is an example PCI pass-through configuration with a single control_data interface:
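The sketch below is illustrative only; pci_devices and the control_data device field are named in this document, but the surrounding layout is an assumption.

contrail_vm:                         # per-ESXi-host section in vcenter_vars.yml
  control_data:
    device: eth1                     # pass-through NIC exposed as an Ethernet interface
  pci_devices:
    - '0000:04:00.0'                 # PCI address of the NIC on the ESXi host (illustrative)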

Figure 4 shows a PCI pass-through setup with a bond control_data interface, which has multiple pass-through NICs.

Figure 4: PCI Pass-Through Setup with Bond Control Interface

Update the ContrailVM section in vcenter_vars.yml with pci_devices as shown in the following example:
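A hedged sketch with multiple pass-through NICs bonded into one control_data interface; as noted earlier, the bond itself must be configured by the user.

contrail_vm:
  control_data:
    device: bond0                    # user-configured bond over the exposed interfaces
  pci_devices:                       # one entry per pass-through NIC
    - '0000:04:00.0'
    - '0000:04:00.1'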

SR-IOV Setup

A single root I/O virtualization (SR-IOV) interface allows a network adapter device to separate access to its resources among various hardware functions.

In the SR-IOV setup, the ContrailVM is provided management and control data interfaces. SR-IOV interfaces are used for control data. See Figure 5.

Figure 5: SR-IOV Setup

In VMware, the port group is mandatory for SR-IOV interfaces because the ability to configure the networks is based on the active policies for the port holding the virtual machines. For more information, refer to VMware’s SR-IOV Component Architecture and Interaction.

To set up the ContrailVM with SR-IOV interfaces, all configurations used for the standard switch setup are also used for the SR-IOV setup, providing management connectivity to the ContrailVM.

To provide the control_data interfaces, configure the SR-IOV-enabled physical interfaces in the contrail_vm section, and configure the control_data in the global section of vcenter_vars.yml.

Upon provisioning ESXi hosts in the installation process, the SR-IOV interfaces are exposed as Ethernet interfaces in the ContrailVM.

Figure 6 shows an SR-IOV setup with a single control_data interface.

Figure 6: SR-IOV With Single Control Data Interface

The following is an example SR-IOV configuration, shown as the cluster portion and the server portion.

The cluster configuration:
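The following sketch shows the cluster-level control_data settings; the key names under the global section are assumptions, not a confirmed schema.

global:
  control_data:
    device: eth1                     # SR-IOV interface as exposed in the ContrailVM
    ip: 172.16.10.5/24               # illustrative static address
    gateway: 172.16.10.1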

The server configuration:
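A sketch of the per-host portion follows; sr_iov_nics is named in this document, while the enclosing layout is an assumption.

contrail_vm:                         # per-ESXi-host section in vcenter_vars.yml
  sr_iov_nics:
    - vmnic2                         # SR-IOV-enabled physical interface on the ESXi host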

Figure 7 shows an SR-IOV configuration with a bond control_data interface, which has multiple SR-IOV NICs.

Figure 7: SR-IOV With Bond Control Data Interface

For a bond interface configuration, specify multiple NICs in sr_iov_nics, and add the required multi-interface and bond configuration in vcenter_vars.yml.

The cluster configuration:
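A hedged sketch of the global portion for the bond case; bond_members is an illustrative key, and the actual bond keys may vary by release.

global:
  control_data:
    device: bond0                    # bond over the exposed SR-IOV interfaces
    bond_members: [eth1, eth2]       # illustrative; verify against your template
    ip: 172.16.10.5/24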

The server configuration:
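And the per-host portion, with multiple NICs listed in sr_iov_nics as described above; the surrounding structure is again an assumption.

contrail_vm:
  sr_iov_nics:                       # multiple NICs serve as the bond members
    - vmnic2
    - vmnic3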