
    High Availability Support

    This section describes how to set up Contrail options for high availability support.

    • In Ubuntu setups, OpenStack high availability and Contrail high availability are both supported, for Contrail Release 1.10 and greater.
    • In CentOS setups, only Contrail high availability is supported, and only for Contrail Release 1.20 and greater.

    Contrail High Availability Features

    The Contrail OpenStack high availability design and implementation provides:

    • A high availability active-active implementation for scale-out of the cloud operation and for flexibility to expand the controller nodes to service the compute fabric.
    • Anytime availability of the cloud for operations, monitoring, and workload management.
    • Self-healing of services and states.
    • VIP-based access to the cloud operations API, which provides an easy way to introduce new controllers and APIs to the cluster with zero downtime.
    • Improved capital efficiencies compared with dedicated hardware implementations, by using nodes assigned to controllers and making them federated nodes in the cluster.
    • Operational load distribution across the nodes in the cluster.

    For more details about the high availability implementation in Contrail, see High Availability Support.

    Configuration Options for Enabling Contrail High Availability

    The following options are available for configuring high availability in the Contrail configuration file (testbed.py).

    internal_vip: The virtual IP of the OpenStack high availability nodes in the control data network. In a single-interface setup, the internal_vip is in the management data control network.

    external_vip: The virtual IP of the OpenStack high availability nodes in the management network. In a single-interface setup, the external_vip is not required.

    contrail_internal_vip: The virtual IP of the Contrail high availability nodes in the control data network. In a single-interface setup, the contrail_internal_vip is in the management data control network.

    contrail_external_vip: The virtual IP of the Contrail high availability nodes in the management network. In a single-interface setup, the contrail_external_vip is not required.

    nfs_server: The IP address of the NFS server that is mounted to /var/lib/glance/images on the OpenStack node. The default is env.roledefs['compute'][0].

    nfs_glance_path: The path on the NFS server where images are saved. The default is /var/tmp/glance-images/.

    manage_amqp: A flag that tells the setup_all task to provision separate RabbitMQ setups for the OpenStack services on the OpenStack nodes.
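
    The following sketch shows how these options might be combined in testbed.py. This is a minimal sketch, not a definitive configuration: the grouping of nfs_server and nfs_glance_path under env.ha is an assumption based on the option descriptions above, and all values are placeholders.

    env.ha = {
        'internal_vip': 'an-ip-in-control-data-network',                # OpenStack VIP, control data network
        'external_vip': 'an-ip-in-management-network',                  # OpenStack VIP, management network
        'contrail_internal_vip': 'another-ip-in-control-data-network',  # Contrail VIP, control data network
        'contrail_external_vip': 'another-ip-in-management-network',    # Contrail VIP, management network
        'nfs_server': 'an-ip-of-an-nfs-server',        # placement in env.ha assumed; default env.roledefs['compute'][0]
        'nfs_glance_path': '/var/tmp/glance-images/',  # placement in env.ha assumed; default shown
    }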

    Supported Cluster Topologies for High Availability

    This section describes configurations for the supported cluster topologies:

    • OpenStack and Contrail on the same highly available nodes
    • OpenStack and Contrail on different highly available nodes
    • Contrail only on highly available nodes

    Deploying OpenStack and Contrail on the Same Highly Available Nodes

    OpenStack and Contrail services can be deployed on the same set of highly available nodes by setting the internal_vip parameter in the env.ha dictionary of testbed.py.

    Because the highly available nodes are shared by both OpenStack and Contrail services, it is sufficient to specify only internal_vip. However, if the nodes have multiple interfaces, with management and data control traffic separated, then external_vip must also be set in testbed.py.

    Example

    env.ha = {
        'internal_vip': 'an-ip-in-control-data-network',
        'external_vip': 'an-ip-in-management-network',
    }
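
    In a single-interface setup, the external_vip is not required (see the options table above), so a minimal env.ha sketch reduces to the internal VIP, which in that case lies in the management data control network:

    env.ha = {
        'internal_vip': 'an-ip-in-management-data-control-network',  # single-interface setup: no external_vip
    }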

    Deploying OpenStack and Contrail on Different Highly Available Nodes

    OpenStack and Contrail services can be deployed on different highly available nodes by setting the internal_vip and contrail_internal_vip parameters in the env.ha dictionary of testbed.py.

    Because the OpenStack and Contrail services use different highly available nodes, you must separately specify internal_vip for the OpenStack nodes and contrail_internal_vip for the Contrail nodes. If the nodes have multiple interfaces, with management and data control traffic separated, then the external_vip and contrail_external_vip options must also be set in testbed.py.

    Example

    env.ha = {
        'internal_vip': 'an-ip-in-control-data-network',
        'external_vip': 'an-ip-in-management-network',
        'contrail_internal_vip': 'another-ip-in-control-data-network',
        'contrail_external_vip': 'another-ip-in-management-network',
    }

    To manage a separate RabbitMQ cluster on the OpenStack highly available nodes for the OpenStack services to communicate, specify manage_amqp in the env.openstack dictionary of testbed.py. If manage_amqp is not specified, the OpenStack services default to using the RabbitMQ cluster available on the Contrail highly available nodes.

    Example:

    env.openstack = {
        'manage_amqp': 'yes'
    }

    Deploying Contrail Only on Highly Available Nodes

    Contrail services alone can be deployed on a set of highly available nodes by setting the contrail_internal_vip parameter in the env.ha dictionary of testbed.py.

    Because the highly available nodes are used only by Contrail services, it is sufficient to specify only contrail_internal_vip. If the nodes have multiple interfaces, with management and data control traffic separated, the contrail_external_vip also needs to be set in testbed.py.

    Example

    env.ha = {
        'contrail_internal_vip': 'an-ip-in-control-data-network',
        'contrail_external_vip': 'an-ip-in-management-network',
    }

    To manage a separate RabbitMQ cluster on the OpenStack node for the OpenStack services to communicate, specify manage_amqp in the env.openstack dictionary of testbed.py. If manage_amqp is not specified, the OpenStack services default to using the RabbitMQ cluster available on the Contrail highly available nodes.

    Example:

    env.openstack = {
        'manage_amqp': 'yes'
    }

    Modified: 2015-10-01