
    Underlay Network Configuration for ContrailVM

    When using vCenter as compute, the ContrailVM can be configured in several different ways for the underlay (ip-fabric) connectivity:

    Standard Switch Setup

    In the standard switch setup, the ContrailVM is provided an interface through the standard switch port group that is used for management and control data; see Figure 1.

    Figure 1: Standard Switch Setup


    To set up the ContrailVM in this mode, the standard switch and port group must be configured in the contrail_vm section in testbed.py.

    If not configured, the default values of vSwitch0 and contrail-fab-pg are used for the standard switch and port group, respectively.
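
    As a rough sketch, a contrail_vm stanza that relies on those defaults simply omits the two keys; the name, MAC, and host values below are placeholders:

        'contrail_vm': {
            'name': "<vm name>",
            'mac': "<mac address>",
            'host': "root@<esxi host ip>",
            # fabric_vswitch and fabric_port_group are omitted, so the
            # defaults vSwitch0 and contrail-fab-pg are used.
        },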

    The following is an example of a standard switch and port group configuration in testbed.py.

    'contrail_vm': {
        'name': "computevm-24-6",
        'mac': "00:50:56:05:ba:ba",
        'host': "root@10.84.24.32",
        'fabric_vswitch': "vSwitch0",
        'fabric_port_group': "contrail-fab-pg",
    }

    Note: By default, the management and control_data interfaces are the same in this configuration. To have a separate path for control_data, a VMXNET3 interface must be added manually in the ContrailVM, and control_data must be configured in the global section in testbed.py.
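
    The following is a minimal sketch of that arrangement, modeled on the control_data examples later in this section; the host key (host4), the interface name eth1, and the addresses are placeholders that depend on how the added VMXNET3 interface appears inside the ContrailVM:

        # Sketch only: global control_data stanza in testbed.py that routes
        # control/data traffic over the manually added VMXNET3 interface.
        # host4, eth1, and the addresses are placeholders.
        control_data = {
            host4 : { 'ip': '<ip address>/24', 'gw': '<ip address>', 'device': 'eth1' },
        }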

    Distributed Switch Setup

    A distributed switch functions as a single virtual switch across associated hosts.

    In the distributed switch setup, the ContrailVM is provided an interface through the distributed switch port group that is used for management and control data; see Figure 2.

    Figure 2: Distributed Switch Setup


    To set up the ContrailVM in this mode, configure the distributed switch, port group, number of ports in the port group, and the uplink in the vcenter_servers section in testbed.py.

    Note: The uplink can be a link aggregation group (LAG).

    The following are required before using testbed.py to set up a distributed switch:

    • Data center
    • Cluster
    • Distributed switch
    • Host associated with the distributed switch

    The following is an example distributed switch configuration in testbed.py.

    env.vcenter = {
        'server': '<ip address>',
        'port': '443',
        'username': 'administrator@vsphere.local',
        'password': '<password>',
        'datacenter': 'kd_dc',
        'cluster': ['kd_cluster1', 'kd_cluster2'],
        'dv_switch_fab': {'dv_switch_name': 'dvs-fab'},
        'dv_port_group_fab': {
            'dv_portgroup_name': 'contrail-fab-pg',
            'number_of_ports': '3',
            'uplink': 'uplink11',
        },
    }
    
    

    Note: By default, the management and control_data interfaces are the same in this configuration. To have a separate path for control_data, the VMXNET3 interface must be added manually in the ContrailVM, and control_data must be configured in the global section of testbed.py.

    PCI Pass-Through Setup

    PCI pass-through is a virtualization technique in which a physical Peripheral Component Interconnect (PCI) device is directly connected to a virtual machine, bypassing the hypervisor. Drivers in the VM can directly access the PCI device, resulting in a high rate of data transfer.

    In the pass-through setup, the ContrailVM is provided management and control data interfaces. Pass-through interfaces are used for control data. Figure 3 shows a PCI pass-through setup with a single control_data interface.

    Figure 3: PCI Pass-Through with Single Control Data Interface


    To set up the ContrailVM with pass-through interfaces, use the same testbed.py configuration as for the standard switch setup; it provides the management connectivity to the ContrailVM.

    To provide the control_data interfaces, configure the pci_id of the pass-through interfaces in the contrail_vm section, and configure control_data in the global section of testbed.py.

    Upon provisioning ESXi hosts in the installation process, the PCI pass-through interfaces are exposed as Ethernet interfaces in the ContrailVM, and are identified in the control_data device field.

    The following is an example PCI pass-through configuration with a single control_data interface:

       'contrail_vm': {
           'name': "computevm-24-6",
           'mac': "00:50:56:05:ba:ba",
           'host': "root@10.84.24.232",  # this is mgmt intf
           'pci_devices': {
               'nic': ["04:00.0"],
           },
           'vmdk_download_path': "http://10.84.5.120/cs-shared/contrail-vcenter/vmdk/LATEST/ContrailVM-disk1.vmdk",
       }
       control_data = {
           host4 : { 'ip': '10.84.20.232/24', 'gw': '10.84.20.254', 'device': 'eth20' },
       }
    

    Figure 4 shows a PCI pass-through setup with a bond control_data interface, which has multiple pass-through NICs.

    Figure 4: PCI Pass-Through Setup with Bond Control Interface


    The following is an example PCI pass-through configuration with a bond control_data interface:

       'contrail_vm': {
           'name': "computevm-24-6",
           'mac': "00:50:56:05:ba:ba",
           'host': "root@10.84.24.232",  # this is mgmt intf
           'pci_devices': {
               'nic': ["04:00.0", "04:00.1"],
           },
           'vmdk_download_path': "http://10.84.5.120/cs-shared/contrail-vcenter/vmdk/LATEST/ContrailVM-disk1.vmdk",
       }
       control_data = {
           host4 : { 'ip': '10.84.20.232/24', 'gw': '10.84.20.254', 'device': 'bond0' },
       }
       bond = {
           host2 : { 'name': 'bond0', 'member': ['eth20', 'eth21'], 'mode': '802.3ad', 'xmit_hash_policy': 'layer3+4' },
       }
    

    SR-IOV Setup

    A single root I/O virtualization (SR-IOV) interface allows a network adapter device to separate access to its resources among various hardware functions.

    In the SR-IOV setup, the ContrailVM is provided management and control data interfaces. SR-IOV interfaces are used for control data. See Figure 5.

    Figure 5: SR-IOV Setup


    In VMware, the port-group is mandatory for SR-IOV interfaces because the ability to configure the networks is based on the active policies for the port holding the virtual machines. For more information, refer to VMware’s SR-IOV Component Architecture and Interaction.

    The port group is created as part of provisioning; however, the distributed virtual switch (DVS) for the port group must be created by the user before provisioning.

    To set up the ContrailVM with SR-IOV interfaces, use the same testbed.py configuration as for the standard switch setup; it provides the management connectivity to the ContrailVM.

    To provide the control_data interfaces, configure the SR-IOV-enabled physical interfaces in the contrail_vm section, and configure the control_data in the global section of testbed.py.

    Configure the port group (dv_port_group_sr_iov) and the DVS (dv_switch_sr_iov) in the env.vcenter_servers section of testbed.py.

    Upon provisioning ESXi hosts in the installation process, the SR-IOV interfaces are exposed as Ethernet interfaces in the ContrailVM, and are identified in the control_data device field.

    Figure 6 shows an SR-IOV setup with a single control_data interface.

    Figure 6: SR-IOV With Single Control Data Interface


    The following is an example SR-IOV configuration with a single control_data interface:

       'contrail_vm': {
           'name': "computevm-24-6",
           'mac': "<mac address>",
           'host': "host@<ip address>",  # this is mgmt intf
           'sr_iov_nics': ['vmnic0'],
           'vmdk_download_path': "http://<ip address>/cs-shared/contrail-vcenter/vmdk/LATEST/ContrailVM-disk1.vmdk",
       }
       control_data = {
           host4 : { 'ip': '<ip address>', 'gw': '<ip address>', 'device': 'eth20' },
       }
       env.vcenter = {
           'server': '<ip address>',
           'port': '443',
           'username': 'administrator@vsphere.local',
           'password': '<password>',
           ...
           'dv_switch_sr_iov': {
               'dv_switch_name': 'dvs-sriov',
           },
           'dv_port_group_sr_iov': {
               'dv_portgroup_name': 'dvs-sriov-pg',
               'number_of_ports': '2',
           },
       }
    

    Figure 7 shows an SR-IOV configuration with a bond control_data interface, which has multiple SR-IOV NICs.

    Figure 7: SR-IOV With Bond Control Data Interface


    The following is an example SR-IOV configuration with a bond control_data interface:

       'contrail_vm': {
           'name': "computevm-24-6",
           'mac': "<mac address>",
           'host': "host@<ip address>",  # this is mgmt intf
           'sr_iov_nics': ['vmnic0', 'vmnic1'],
           'vmdk_download_path': "http://<ip address>/cs-shared/contrail-vcenter/vmdk/Ubuntu-14.04/Ubuntu-14.04-disk1.vmdk",
       }
       control_data = {
           host4 : { 'ip': '<ip address>', 'gw': '<ip address>', 'device': 'bond0' },
       }
       bond = {
           host2 : { 'name': 'bond0', 'member': ['eth20', 'eth21'], 'mode': 'active-backup', 'fail_over_mac': '1' },
       }
       env.vcenter = {
           'server': '<ip address>',
           'port': '443',
           'username': 'administrator@vsphere.local',
           'password': '<password>',
           ...
           'dv_switch_sr_iov': {
               'dv_switch_name': 'dvs-sriov',
           },
           'dv_port_group_sr_iov': {
               'dv_portgroup_name': 'dvs-sriov-pg',
               'number_of_ports': '2',
           },
       }
    

    Modified: 2016-12-15