    Supporting Multiple Interfaces on Servers and Nodes

    This section describes how to set up and manage multiple interfaces.

    Support for Multiple Interfaces

    Servers and nodes with multiple interfaces should be deployed with an exclusive management network and a separate control and data network. When a server has multiple interfaces, the expectation is that the management network provides only management connectivity to the cluster, while the control and data network carries the control plane information and the guest traffic data.

    Examples of control traffic include the following:

    • XMPP traffic between the control nodes and the compute nodes.
    • BGP protocol messages across the control nodes.
    • Statistics, monitoring, and health check data collected by the analytics engine from different parts of the system.

    In Contrail Release 1.10 and later, control and data must share the same interface, configured in the testbed.py file in a section named control_data.
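
    A minimal control_data stanza might look similar to the following sketch; the host variables, addresses, and device name are placeholders, not values from a real deployment:

    control_data = {
        host1 : { 'ip': '<ip address/prefix>', 'gw': '<gateway ip>', 'device': 'eth1' },
        host2 : { 'ip': '<ip address/prefix>', 'gw': '<gateway ip>', 'device': 'eth1' },
    }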

    Number of cfgm Nodes Supported

    The Contrail system can have any number of cfgm nodes.

    Odd Number of Database Nodes Required

    In Contrail Release 1.10 and later, Apache ZooKeeper resides on the database node. Because a ZooKeeper ensemble operates most effectively with an odd number of nodes, it is required to have an odd number (3, 5, 7, and so on) of database nodes in a Contrail system.
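
    As an illustration, the database role in the env.roledefs section of the testbed.py file would list an odd number of hosts, for example three. The host variables here are placeholders:

    # hypothetical hosts; other roles omitted for brevity
    env.roledefs = {
        'database': [host1, host2, host3],   # an odd number of database nodes (3 here)
    }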

    Support for VLAN Interfaces

    A VLAN ID can also be specified in the testbed.py file under the control_data section, similar to the following example:

    control_data = {
        host1: { 'ip': '<ip address>', 'gw': '<ip address>', 'device': 'bond0', 'vlan': '20' },
        host2: { 'ip': '<ip address>', 'gw': '<ip address>', 'device': 'bond0', 'vlan': '20' }
    }

    Support for Bonding Options

    Contrail provides support for bond interface options.

    The default bond interface options are:

    miimon=100, mode=802.3ad(lacp), xmit_hash_policy=layer3+4

    In the bond section of the testbed.py file, any key other than name and member is treated as a bond interface option and provisioned as such. The following is an example:

    bond = { host1: { 'name': 'bond0', 'member': ['p2p0p2', 'p2p0p3'], 'lacp_rate': 'slow' } }
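
    The default options listed above can be overridden in the same way, by placing the desired option keys alongside name and member. The following sketch simply restates the default values as placeholders rather than recommendations:

    bond = {
        host1: { 'name': 'bond0', 'member': ['p2p0p2', 'p2p0p3'],
                 'mode': '802.3ad', 'miimon': '100', 'xmit_hash_policy': 'layer3+4' }
    }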

    Support for Static Route Options

    Contrail provides support for adding static routes on target systems. This option is useful when servers have multiple interfaces and the control and data or management connections span multiple networks.

    The following shows the use of the static_route stanza in the testbed.py file to configure static routes in host2 and host5.

    static_route = {
        host2 : [{ 'ip': '<ip address>', 'netmask': '<netmask>', 'gw': '<ip address>', 'intf': 'bond0' },
                 { 'ip': '<ip address>', 'netmask': '<netmask>', 'gw': '<ip address>', 'intf': 'bond0' }],
        host5 : [{ 'ip': '<ip address>', 'netmask': '<netmask>', 'gw': '<ip address>', 'intf': 'bond0' }],
    }

    Server Interface Examples

    In Contrail Release 1.10 and later, control and data are required to share the same interface. A set of servers can be deployed in any of the following combinations for management, control, and data:

    • mgmt=control=data -- Single interface use case
    • mgmt, control=data -- Exclusive management access, with control and data sharing a single network.

    In Contrail, the following server interface combinations are not allowed:

    • mgmt=control, data -- Dual interfaces in Layer 3 mode, management and control shared on a single network.
    • mgmt, control, data -- Complete exclusivity across management, control, and data traffic.
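
    Read together with the sample testbed.py file later in this topic, the supported dual-interface case (mgmt, control=data) can be sketched as follows: the address in the host definition is the management address, and the control_data stanza places control and data traffic on a separate device. In the single-interface case (mgmt=control=data) the control_data stanza would presumably be omitted. Addresses and the device name below are placeholders:

    host1 = 'host@<management ip address>'

    control_data = {
        host1 : { 'ip': '<control/data ip address/prefix>', 'gw': '<gateway ip>', 'device': 'eth1' },
    }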

    Interface Naming and Configuration Management

    On a standard Linux installation there is no guarantee that a physical interface will come up with the same name after a system reboot. Linux NetworkManager tries to accommodate this behavior by linking the interface configurations to the hardware addresses of the physical ports. However, Contrail avoids using hardware-based configuration files because this type of solution cannot scale when using remote provisioning and management techniques.

    The Contrail alternative is a threefold interface-naming scheme based on <bus, device, port (or function)>. As an example, on a server operating system that typically assigns interface names such as p4p0 and p4p1 for onboard interfaces, the Contrail system assigns p4p0p0 and p4p0p1, when using the optional contrail-interface-name package.

    When the contrail-interface-name package is installed, it uses the threefold naming scheme to provide consistent interface naming after reboots. The contrail-interface-name package is installed by default when a Contrail ISO image is installed. If you are using an RPM-based installation, you should install the contrail-interface-name package before doing any network configuration.

    If your system already has another mechanism for getting consistent interface names after a reboot, it is not necessary to install the contrail-interface-name package.
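
    The following Python fragment is only a rough illustration of the <bus, device, port (or function)> idea; it is not the implementation used by the contrail-interface-name package. It reads an interface's PCI address from sysfs and composes a name from the bus, device, and function numbers:

    import os

    def pci_based_name(iface):
        # Illustrative sketch only: derive a bus/device/function-based name from the
        # interface's PCI address in sysfs (for example, 0000:04:00.1 -> p4p0p1).
        # /sys/class/net/<iface>/device is a symlink to the PCI device directory.
        pci_addr = os.path.basename(os.readlink('/sys/class/net/%s/device' % iface))
        _, bus, dev_fn = pci_addr.split(':')      # '0000', '04', '00.1'
        device, function = dev_fn.split('.')      # '00', '1'
        return 'p%dp%dp%d' % (int(bus, 16), int(device, 16), int(function, 16))

    For the example PCI address 0000:04:00.1, this yields p4p0p1, matching the naming shown above.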

    Setting Up Interfaces and Installing

    As part of the provisioning scheme, there are two additional commands that the administrator can use to set up control and data interfaces.

    The fab setup_interface command creates bond interface configurations if there is a corresponding configuration in the testbed.py file (see the sample testbed.py file in Sample testbed.py File With Exclusive Interfaces).

    When you use the fab setup_interface command, the interface configurations are generated as ifcfg-* files, in the syntax required by the network service.

    The fab add_static_route command creates static routes on a node if there is a corresponding configuration in the testbed.py file (see the sample testbed.py file in Sample testbed.py File With Exclusive Interfaces).

    The following is a typical work flow for setting up a cluster with multiple interfaces:

    • Set env.interface_rename = True in the testbed.py file (meaning: install the contrail-interface-name package on compute nodes)
    • fab install_contrail (then update the testbed.py file with the renamed interface names)
    • fab setup_interface
    • fab add_static_route
    • fab setup_all

    Note: The fab setup_interface and fab add_static_route commands can be run together by using the single fab setup_network command.

    In cases where the fab setup_interface command is not used for setting up the interfaces, configurations for the data interface are migrated as part of the vrouter installation on the compute nodes.

    If the data interface is a bond interface, the bond member interfaces are reconfigured into network service based configurations using appropriate ifcfg script files.

    Sample testbed.py File With Exclusive Interfaces

    The following is a sample testbed.py definitions file that shows the configuration of exclusive interfaces: one network for management and a separate network for control and data.

    # testbed file
    from fabric.api import env
    os_username = 'admin'
    os_password = '<password>'
    os_tenant_name = 'demo'
    
    host1 = 'host@<ip address>'
    host2 = 'host@<ip address>'
    host3 = 'host@<ip address>'
    host4 = 'host@<ip address>'
    host5 = 'host@<ip address>'
    host6 = 'host@<ip address>'
    host7 = 'host@<ip address>'
    host8 = 'host@<ip address>'
    
    ext_routers = [('mx1', '<ip address>')]
    router_asn = <asn>
    public_vn_rtgt = 10003
    public_vn_subnet = '<ip address>'
    
    host_build = 'host@<ip address>'
    
    env.roledefs = {
        'all': [host1, host2, host3, host4, host5, host6, host7, host8],
        'cfgm': [host1],
        'openstack': [host6],
        'webui': [host7],
        'control': [host4, host3],
        'compute': [host2, host5],
        'collector': [host2, host3],
        'database': [host8],
        'build': [host_build],
    }
    
    env.hostnames = {
        'all': ['nodea10', 'nodea4', 'nodea2', 'nodeb2', 'nodeb12','nodea32','nodec36','nodec31']
    }
    
    bond= {
        host2 : { 'name': 'bond0', 'member': ['p2p0p0','p2p0p1','p2p0p2','p2p0p3'], 'mode':'balance-xor' },
        host5 : { 'name': 'bond0', 'member': ['p4p0p0','p4p0p1','p4p0p2','p4p0p3'], 'mode':'balance-xor' }, }
    
    control_data = {
        host1 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'eth0' },
        host2 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'p0p25p0' },
        host3 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'eth0' },
        host4 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'eth3' },
        host5 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'p6p0p1' },
        host6 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'eth0' },
        host7 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'eth1' },
        host8 : { 'ip': '<routing prefix address>', 'gw' : '<ip address>', 'device':'eth1' }, }
     
    env.password = 'secret'  # Required only for releases prior to 1.10
    
    env.passwords = {
        host1:'secret',
        host2:'secret',
        host3:'secret',
        host4:'secret',
        host5:'secret',
        host6:'secret',
        host7:'secret',
        host8:'secret',
    
        host_build: 'secret'
    }

    Modified: 2016-06-10