Deploying Contrail with Red Hat OpenStack Platform Director 13

This document explains how to integrate a Contrail 5.0.1 installation with Red Hat OpenStack Platform Director 13.

Overview

Red Hat OpenStack Platform provides an installer named Director (RHOSPd). The Red Hat Director installer is based on the OpenStack project TripleO (OOO, OpenStack on OpenStack). TripleO is an open source project that uses features of OpenStack to deploy a fully functional, tenant-facing OpenStack environment.

TripleO can be used to deploy an RDO-based OpenStack environment integrated with Tungsten Fabric. Red Hat OpenStack Platform Director (RHOSPd) can be used to deploy a RHOSP-based OpenStack environment integrated with Contrail.

OSPd Features

OSPd uses the concepts of undercloud and overcloud. OSPd sets up an undercloud, an operator-facing deployment cloud that contains the OpenStack components needed to deploy and manage an overcloud, a tenant-facing cloud that hosts user workloads.

The overcloud is the deployed solution that can represent a cloud for any purpose, such as production, staging, test, and so on. The operator can select to deploy to their environment any of the available overcloud roles, such as controller, compute, and the like.

OSPd leverages existing core components of OpenStack including Nova, Ironic, Neutron, Heat, Glance, and Ceilometer to deploy OpenStack on bare metal hardware.

  • Nova and Ironic are used in the undercloud to manage the bare metal instances that comprise the infrastructure for the overcloud.

  • Neutron is used to provide a networking environment in which to deploy the overcloud.

  • Glance stores machine images.

  • Ceilometer collects metrics about the overcloud.

For more information about OSPd architecture, see the OSPd documentation.

Composable Roles

OSPd enables composable roles. Each role is a group of services that are defined in Heat templates. Composable roles give the operator the flexibility to add and modify roles as needed.

The following are the Contrail roles used for integrating Contrail into the overcloud environment:

  • Contrail Controller

  • Contrail Analytics

  • Contrail Analytics Database

  • Contrail-TSN

  • Contrail-DPDK

Figure 1 shows the relationship and components of an undercloud and overcloud architecture for Contrail.

Figure 1: undercloud and overcloud with Roles

Preparing the Environment for Deployment

The overcloud roles can be deployed to bare metal servers or to virtual machines (VMs). The compute nodes must be deployed to bare metal systems.

Ensure your environment is prepared for the Red Hat deployment. Refer to Red Hat documentation.

Preparing for the Contrail Roles

Ensure the following requirements are met for the Contrail nodes per role.

  • Non-high availability: A minimum of 4 overcloud nodes are needed for control plane roles for a non-high availability deployment:

    • 1x contrail-config (includes Contrail control)

    • 1x contrail-analytics

    • 1x contrail-analytics-database

    • 1x OpenStack controller

  • High availability: A minimum of 12 overcloud nodes are needed for control plane roles for a high availability deployment:

    • 3x contrail-config (includes Contrail control)

    • 3x contrail-analytics

    • 3x contrail-analytics-database

    • 3x OpenStack controller

    • If the control plane roles will be deployed to VMs, use 3 separate physical servers and deploy one role of each kind to each physical server.

RHOSP Director expects the nodes to be provided by the administrator. For example, if you are deploying to VMs, the administrator must create the VMs before starting the deployment.

Preparing for the Underlay Network

Refer to Red Hat documentation for planning and implementing underlay networking, including the kinds of networks used and the purpose of each.

At a high level, every overcloud node must support IPMI.

Preparing for the Provisioning Network

Ensure the following requirements are met for the provisioning network.

  • One NIC from every machine must be in the same broadcast domain as the provisioning network, and it should be the same NIC on each of the overcloud machines. For example, if you use the second NIC on the first overcloud machine, you should use the second NIC on each additional overcloud machine.

    During installation, these NICs will be referenced by a single name across all overcloud machines.

  • The provisioning network NIC should not be the same NIC that you are using for remote connectivity to the undercloud machine. During the undercloud installation, an Open vSwitch bridge will be created for Neutron, and the provisioning NIC will be bridged to the Open vSwitch bridge. Consequently, connectivity would be lost if the provisioning NIC were also used for remote connectivity to the undercloud machine.

  • The provisioning NIC on the overcloud nodes must be untagged.

  • You must have the MAC address of the NIC that will PXE boot on the provisioning network, as well as the IPMI information for the machine. The IPMI information includes such things as the IP address of the IPMI NIC and the IPMI username and password.

  • All of the networks must be available to all of the Contrail roles and computes.

Network Isolation

OSPd enables configuration of isolated overcloud networks. Using this approach, it is possible to host traffic in isolated networks for specific types of network traffic, such as tenants, storage, API, and the like. This enables assigning network traffic to specific network interfaces or bonds.

When isolated networks are configured, the OpenStack services are configured to use the isolated networks. If no isolated networks are configured, all services run on the provisioning network.

The following networks are typically used in a network isolation topology:

  • Provisioning: for the undercloud control plane

  • Internal API: for OpenStack internal APIs

  • Tenant

  • Storage

  • Storage Management

  • External

    • Floating IP: can either be merged with External or can be a separate network.

  • Management
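
The isolated networks listed above are defined in TripleO environment files and included at deployment time. The following is a minimal sketch only; the file names (network-isolation.yaml, contrail-net.yaml) are assumptions that must match the template copy prepared later in this document.

openstack overcloud deploy --templates ~/tripleo-heat-templates \
  -e ~/tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml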

Supported Combinations

The following combinations of Operating System/OpenStack/Deployer/Contrail are supported:

Table 1: Compatibility Matrix

Operating System | OpenStack | Deployer | Contrail
RHEL 7.5 | OSP13 | OSPd13 | Contrail 5.0.1
CentOS 7.5 | RDO queens/stable | tripleo queens/stable | Tungsten Fabric latest

Creating Infrastructure

There are many different ways to create the infrastructure that provides the control plane elements. In the following example, all control plane functions are hosted as virtual machines on KVM hosts.

Table 2: Control Plane Functions

KVM Host | Virtual Machines
KVM1 | undercloud
KVM2 | OpenStack Controller 1, Contrail Controller 1
KVM3 | OpenStack Controller 2, Contrail Controller 2
KVM4 | OpenStack Controller 3, Contrail Controller 3

Sample Topology

Layer 1: Physical Layer

Layer 2: Logical Layer

undercloud Configuration

Physical Switch

Use the following information to create ports and trunked VLANs.

Table 3: Physical Switch

Port | Trunked VLAN | Native VLAN
ge0 | - | -
ge1 | 700, 720 | -
ge2 | 700, 710, 720, 730, 740, 750 | -
ge3 | - | -
ge4 | 710, 730 | 700
ge5 | - | -

undercloud and overcloud KVM Host Configuration

The undercloud and overcloud KVM hosts need virtual switches and virtual machine definitions configured. You can deploy any KVM host operating system version that supports KVM and OVS. The following example shows a RHEL/CentOS-based system. If you are using RHEL, the system must be subscribed.

  • Install Basic Packages

    yum install -y libguestfs \
                   libguestfs-tools \
                   openvswitch \
                   virt-install \
                   kvm \
                   libvirt \
                   libvirt-python \
                   python-virtualbmc \
                   python-virtinst

  • Start libvirtd and ovs

    systemctl start libvirtd
    systemctl start openvswitch

  • Configure vSwitch

    Table 4: Configure vSwitch

    Bridge | Trunked VLAN | Native VLAN
    br0 | 710, 720, 730, 740, 750 | 700
    br1 | - | -

    Create bridges

    ovs-vsctl add-br br0
    ovs-vsctl add-br br1
    ovs-vsctl add-port br0 NIC1
    ovs-vsctl add-port br1 NIC2
    cat << EOF > br0.xml
    <network>
      <name>br0</name>
      <forward mode='bridge'/>
      <bridge name='br0'/>
      <virtualport type='openvswitch'/>
      <portgroup name='overcloud'>
        <vlan trunk='yes'>
          <tag id='700' nativeMode='untagged'/>
          <tag id='710'/>
          <tag id='720'/>
          <tag id='730'/>
          <tag id='740'/>
          <tag id='750'/>
        </vlan>
      </portgroup>
    </network>
EOF
    cat << EOF > br1.xml
    <network>
      <name>br1</name>
      <forward mode='bridge'/>
      <bridge name='br1'/>
      <virtualport type='openvswitch'/>
    </network>
EOF
    virsh net-define br0.xml
    virsh net-start br0
    virsh net-autostart br0
    virsh net-define br1.xml
    virsh net-start br1
    virsh net-autostart br1

  • Create overcloud VM Definitions on the overcloud KVM Hosts (KVM2-KVM4)

    Note:

    The overcloud VM definitions must be created on each overcloud KVM host.

    Note:

    Use the following format to specify the number of roles per overcloud KVM host:

    ROLES=compute:2,contrail-controller:1,control:1

    The following example defines:

    2x compute nodes
    1x Contrail controller node
    1x OpenStack controller node

    num=0
    ipmi_user=<user>
    ipmi_password=<password>
    libvirt_path=/var/lib/libvirt/images
    port_group=overcloud
    prov_switch=br0
    /bin/rm ironic_list
    IFS=',' read -ra role_list <<< "${ROLES}"
    for role in ${role_list[@]}; do
      role_name=`echo $role|cut -d ":" -f 1`
      role_count=`echo $role|cut -d ":" -f 2`
      for count in `seq 1 ${role_count}`; do
        echo $role_name $count
        qemu-img create -f qcow2 ${libvirt_path}/${role_name}_${count}.qcow2 99G
        virsh define /dev/stdin <<EOF
        $(virt-install --name ${role_name}_${count} \
          --disk ${libvirt_path}/${role_name}_${count}.qcow2 \
          --vcpus=4 \
          --ram=16348 \
          --network network=br0,model=virtio,portgroup=${port_group} \
          --network network=br1,model=virtio \
          --virt-type kvm \
          --cpu host \
          --import \
          --os-variant rhel7 \
          --serial pty \
          --console pty,target_type=virtio \
          --graphics vnc \
          --print-xml)
EOF
        vbmc add ${role_name}_${count} --port 1623${num} --username ${ipmi_user} --password ${ipmi_password}
        vbmc start ${role_name}_${count}
        prov_mac=`virsh domiflist ${role_name}_${count}|grep ${prov_switch}|awk '{print $5}'`
        vm_name=${role_name}-${count}-`hostname -s`
        kvm_ip=`ip route get 1 |grep src |awk '{print $7}'`
        echo ${prov_mac} ${vm_name} ${kvm_ip} ${role_name} 1623${num} >> ironic_list
        num=$(expr $num + 1)
      done
    done

    CAUTION:

    One ironic_list file per KVM host will be created. You need to combine all the ironic_list files from each KVM host on the undercloud.

    The following output shows the combined list from all three overcloud KVM hosts:

    52:54:00:e7:ca:9a compute-1-5b3s31 10.87.64.32 compute 16230
    52:54:00:30:6c:3f compute-2-5b3s31 10.87.64.32 compute 16231
    52:54:00:9a:0c:d5 contrail-controller-1-5b3s31 10.87.64.32 contrail-controller 16232
    52:54:00:cc:93:d4 control-1-5b3s31 10.87.64.32 control 16233
    52:54:00:28:10:d4 compute-1-5b3s30 10.87.64.31 compute 16230
    52:54:00:7f:36:e7 compute-2-5b3s30 10.87.64.31 compute 16231
    52:54:00:32:e5:3e contrail-controller-1-5b3s30 10.87.64.31 contrail-controller 16232
    52:54:00:d4:31:aa control-1-5b3s30 10.87.64.31 control 16233
    52:54:00:d1:d2:ab compute-1-5b3s32 10.87.64.33 compute 16230
    52:54:00:ad:a7:cc compute-2-5b3s32 10.87.64.33 compute 16231
    52:54:00:55:56:50 contrail-controller-1-5b3s32 10.87.64.33 contrail-controller 16232
    52:54:00:91:51:35 control-1-5b3s32 10.87.64.33 control 16233

  • Create undercloud VM Definitions on the undercloud KVM host (KVM1)

    Note:

    The undercloud VM definition needs to be created only on the undercloud KVM host.

    1. Create images directory

      mkdir ~/images
      cd images

    2. Retrieve the image

      Note:

      The image must be retrieved based on the operating system:

      • CentOS

        curl https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1802.qcow2.xz \
          -o CentOS-7-x86_64-GenericCloud-1802.qcow2.xz
        xz -d CentOS-7-x86_64-GenericCloud-1802.qcow2.xz
        cloud_image=~/images/CentOS-7-x86_64-GenericCloud-1802.qcow2

      • RHEL

        Download rhel-server-7.5-update-1-x86_64-kvm.qcow2 from the Red Hat portal to ~/images.

        cloud_image=~/images/rhel-server-7.5-update-1-x86_64-kvm.qcow2

    3. Customize the undercloud image

      undercloud_name=queensa
      undercloud_suffix=local
      root_password=<password>
      stack_password=<password>
      export LIBGUESTFS_BACKEND=direct
      qemu-img create -f qcow2 /var/lib/libvirt/images/${undercloud_name}.qcow2 100G
      virt-resize --expand /dev/sda1 ${cloud_image} /var/lib/libvirt/images/${undercloud_name}.qcow2
      virt-customize -a /var/lib/libvirt/images/${undercloud_name}.qcow2 \
        --run-command 'xfs_growfs /' \
        --root-password password:${root_password} \
        --hostname ${undercloud_name}.${undercloud_suffix} \
        --run-command 'useradd stack' \
        --password stack:password:${stack_password} \
        --run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
        --chmod 0440:/etc/sudoers.d/stack \
        --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
        --run-command 'systemctl enable sshd' \
        --run-command 'yum remove -y cloud-init' \
        --selinux-relabel

    4. Define the undercloud virsh template

      vcpus=8
      vram=32000
      virt-install --name ${undercloud_name} \
        --disk /var/lib/libvirt/images/${undercloud_name}.qcow2 \
        --vcpus=${vcpus} \
        --ram=${vram} \
        --network network=default,model=virtio \
        --network network=br0,model=virtio,portgroup=overcloud \
        --virt-type kvm \
        --import \
        --os-variant rhel7 \
        --graphics vnc \
        --serial pty \
        --noautoconsole \
        --console pty,target_type=virtio

    5. Start the undercloud VM

      virsh start ${undercloud_name}

    6. Retrieve the undercloud IP

      It might take several seconds before the IP is available.

      undercloud_ip=`virsh domifaddr ${undercloud_name} |grep ipv4 |awk '{print $4}' |awk -F"/" '{print $1}'`
      ssh-copy-id ${undercloud_ip}

undercloud Configuration

  1. Log in to the undercloud VM from the undercloud KVM host

    ssh ${undercloud_ip}

  2. Configure Hostname

    undercloud_name=`hostname -s`
    undercloud_suffix=`hostname -d`
    hostnamectl set-hostname ${undercloud_name}.${undercloud_suffix}
    hostnamectl set-hostname --transient ${undercloud_name}.${undercloud_suffix}

    Note:

    Make sure to set the undercloud IP in the hosts file located at /etc/hosts.

    The commands are as follows, assuming the management NIC is eth0:

    undercloud_ip=`ip addr sh dev eth0 |grep "inet " |awk '{print $2}' |awk -F"/" '{print $1}'`
    echo ${undercloud_ip} ${undercloud_name}.${undercloud_suffix} ${undercloud_name} >> /etc/hosts

  3. Setup Repositories

    Note:

    The repository must be set up based on the operating system:

    • CentOS

      tripleo_repos=`python -c 'import requests;r = requests.get("https://trunk.rdoproject.org/centos7-queens/current"); print r.text ' |grep python2-tripleo-repos|awk -F"href=\"" '{print $2}'|awk -F"\"" '{print $1}'`
      yum install -y https://trunk.rdoproject.org/centos7-queens/current/${tripleo_repos}
      tripleo-repos -b queens current

    • RHEL

      # Register with Satellite (can be done with CDN as well)
      satellite_fqdn=device.example.net
      act_key=xxx
      org=example
      yum localinstall -y http://${satellite_fqdn}/pub/katello-ca-consumer-latest.noarch.rpm
      subscription-manager register --activationkey=${act_key} --org=${org}

  4. Install Tripleo Client

    yum install -y python-tripleoclient tmux

  5. Copy undercloud.conf

    su - stack
    cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

undercloud Installation

Run the following command to install the undercloud:

openstack undercloud install
source stackrc

undercloud Post Configuration

Complete the following configuration tasks after the undercloud installation:

  • Configure forwarding:

    sudo iptables -A FORWARD -i br-ctlplane -o eth0 -j ACCEPT
    sudo iptables -A FORWARD -i eth0 -o br-ctlplane -m state --state RELATED,ESTABLISHED -j ACCEPT
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

  • Add external API interface:

    sudo ip link add name vlan720 link br-ctlplane type vlan id 720
    sudo ip addr add 10.2.0.254/24 dev vlan720
    sudo ip link set dev vlan720 up

  • Add stack user to the docker group:

    newgrp docker
    exit
    su - stack
    source stackrc

overcloud Configuration

Configuration

  • Configure nameserver for overcloud nodes

    undercloud_nameserver=8.8.8.8
    openstack subnet set `openstack subnet show ctlplane-subnet -c id -f value` --dns-nameserver ${undercloud_nameserver}

  • overcloud images

    1. Create image directory

      mkdir images
      cd images

    2. Get overcloud images

      • TripleO

        curl -O https://images.rdoproject.org/queens/rdo_trunk/current-tripleo-rdo/ironic-python-agent.tar
        curl -O https://images.rdoproject.org/queens/rdo_trunk/current-tripleo-rdo/overcloud-full.tar
        tar xvf ironic-python-agent.tar
        tar xvf overcloud-full.tar

      • OSP13

        sudo yum install -y rhosp-director-images rhosp-director-images-ipa
        for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; do
          tar -xvf $i
        done

    3. Upload overcloud images

      cd
      openstack overcloud image upload --image-path /home/stack/images/

  • Prepare Ironic

    OpenStack bare metal provisioning, also known as Ironic, is an integrated OpenStack program that provisions bare metal machines instead of virtual machines. It was forked from the Nova bare metal driver and is best thought of as a bare metal hypervisor API plus a set of plugins that interact with the bare metal hypervisors.

    Note:

    Make sure to combine ironic_list files from the three overcloud KVM hosts.

    1. Add the overcloud VMs to Ironic
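
      The following is a minimal sketch of one way to perform this step: build an instackenv.json from the combined ironic_list and import it with the standard TripleO command. The IPMI credentials and the profile-to-role mapping are assumptions.

      # Build instackenv.json from the combined ironic_list (MAC, name, KVM IP, role, IPMI port per line)
      ipmi_user=<user>          # placeholder
      ipmi_password=<password>  # placeholder
      nodes=""
      while read mac name kvm_ip role ipmi_port; do
        node="{\"name\": \"${name}\", \"pm_type\": \"ipmi\", \"pm_addr\": \"${kvm_ip}\","
        node="${node} \"pm_port\": \"${ipmi_port}\", \"pm_user\": \"${ipmi_user}\", \"pm_password\": \"${ipmi_password}\","
        node="${node} \"mac\": [\"${mac}\"], \"capabilities\": \"profile:${role},boot_option:local\"}"
        nodes="${nodes}${node},"
      done < ironic_list
      echo "{\"nodes\": [${nodes%,}]}" > instackenv.json
      # Import the nodes into Ironic
      openstack overcloud node import instackenv.json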

    2. Introspect overcloud node
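
      For example, using the standard TripleO introspection command (individual nodes can also be introspected by UUID):

      openstack overcloud node introspect --all-manageable --provide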

    3. Add Baremetal Server (BMS) to Ironic

      • Automated profiling

        Evaluate the attributes of the physical server. The server will be automatically profiled based on the rules.

        The following example shows how to create a rule that matches a system manufacturer of “Supermicro” and memory greater than or equal to 128 GB.
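
        A minimal sketch of such an ironic-inspector rule, saved to rules.json; the profile value (contrail-controller) is an assumption and should match your flavors:

        cat << 'EOF' > rules.json
        [
          {
            "description": "Profile Supermicro servers with at least 128 GB RAM",
            "conditions": [
              {"op": "eq", "field": "data://inventory.system_vendor.manufacturer", "value": "Supermicro"},
              {"op": "ge", "field": "data://memory_mb", "value": 131072}
            ],
            "actions": [
              {"action": "set-capability", "name": "profile", "value": "contrail-controller"}
            ]
          }
        ]
EOF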

        You can import the rule as follows:
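
        For example, with the ironic-inspector client (rules.json as sketched above):

        openstack baremetal introspection rule import rules.json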

      • Scanning of BMC ranges

        Scan the BMC IP range and automatically add new servers matching the above rule as follows:
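
        One possible approach is the tripleoclient discovery command; its availability and options depend on your tripleoclient version, so verify before use:

        openstack overcloud node discover --range <bmc_cidr> \
          --credentials <ipmi_user>:<ipmi_password> --introspect --provide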

  • Create Flavor
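
    A minimal example of creating a flavor and tying it to an Ironic profile (repeat per profile, for example compute, control, and contrail-controller; the values shown are assumptions):

    openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 contrail-controller
    openstack flavor set --property "capabilities:boot_option"="local" \
      --property "capabilities:profile"="contrail-controller" contrail-controller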

  • Create TripleO-Heat-Template Copy
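
    A minimal sketch; the Contrail template repository and branch are assumptions to verify for your release:

    cp -r /usr/share/openstack-tripleo-heat-templates ~/tripleo-heat-templates
    # Contrail-specific templates (assumed location and branch)
    git clone -b stable/queens https://github.com/Juniper/contrail-tripleo-heat-templates
    cp -r contrail-tripleo-heat-templates/* ~/tripleo-heat-templates/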

  • Create and Upload Containers

    • OpenStack Containers

      1. Create OpenStack container file

        Note:

        The container file must be created based on the OpenStack distribution:

        • TripleO

        • OSP13
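
          An illustrative OSP13 example (for TripleO, a different namespace such as docker.io/tripleoqueens would be used); additional -e options are typically required for the enabled services:

          openstack overcloud container image prepare \
            --namespace registry.access.redhat.com/rhosp13 \
            --prefix openstack- \
            --tag-from-label {version}-{release} \
            --push-destination <undercloud_ip>:8787 \
            --output-env-file ~/overcloud_images.yaml \
            --output-images-file ~/local_registry_images.yaml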

      2. Upload OpenStack Containers
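
        For example, assuming the images file generated in the previous step:

        openstack overcloud container image upload --config-file ~/local_registry_images.yaml --verbose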

    • Contrail Containers

      1. Create Contrail container file

        Note:

        This step is optional. The Contrail containers can be downloaded from external registries later.

        Here are a few examples of importing Contrail containers from different sources:

        • Import from password protected public registry:

        • Import from Dockerhub:

        • Import from private secure registry:

        • Import from private insecure registry:

      2. Upload Contrail containers to undercloud registry
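
        A minimal sketch of pulling a Contrail container from the Juniper registry, retagging it for the undercloud registry, and pushing it; the image name and tag are illustrative only:

        docker login --username <user> --password <password> hub.juniper.net
        docker pull hub.juniper.net/contrail/contrail-controller-config-api:5.0.1
        docker tag hub.juniper.net/contrail/contrail-controller-config-api:5.0.1 \
          <undercloud_ip>:8787/contrail/contrail-controller-config-api:5.0.1
        docker push <undercloud_ip>:8787/contrail/contrail-controller-config-api:5.0.1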

Templates

Different YAML templates can be used to customize the overcloud.

  • Contrail Services customization

  • Contrail registry settings

    Here are a few examples of default values for various registries:

    • Public Juniper registry

    • Insecure registry

    • Private secure registry

  • Contrail Container image settings
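
    A minimal sketch of a registry and image settings environment file; the parameter names (ContrailRegistry, ContrailRegistryUser, ContrailRegistryPassword, ContrailImageTag) are assumptions to verify against environments/contrail/contrail-services.yaml in your template copy:

    cat << EOF > ~/contrail-registry-settings.yaml
    parameter_defaults:
      ContrailRegistry: hub.juniper.net/contrail
      ContrailRegistryUser: <user>
      ContrailRegistryPassword: <password>
      ContrailImageTag: <contrail_container_tag>
EOF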

  • Network customization

    To customize the network, define the different networks and configure the overcloud nodes' NIC layout. TripleO supports a flexible way of customizing the network.

    The following network customization example uses these networks:

    Table 5: Network Customization

    Network | VLAN | overcloud Nodes
    provisioning | - | All
    internal_api | 710 | All
    external_api | 720 | OpenStack CTRL
    storage | 740 | OpenStack CTRL, Computes
    storage_mgmt | 750 | OpenStack CTRL
    tenant | - | Contrail CTRL, Computes

    • Network activation in roles_data

      The networks must be activated per role in the roles_data file; a generic example follows the list of roles below:

      • OpenStack Controller

      • Compute Node

      • Contrail Controller

      • Compute DPDK

      • Compute SRIOV

      • Compute CSN
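
      An illustrative fragment only (written to a scratch file, not applied automatically): each role entry in roles_data.yaml carries a networks list. The role and network names below are assumptions that should be adapted to match Table 5.

      cat << EOF > ~/roles_data_networks_example.yaml
      - name: ContrailController
        networks:
          - InternalApi
          - Tenant
      - name: Compute
        networks:
          - InternalApi
          - Tenant
          - Storage
EOF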

    • Network parameter configuration

    • Network interface configuration

      There are NIC configuration files per role.

      • OpenStack Controller

      • Contrail Controller

      • Compute Node

    • Advanced Network Configuration

      • Advanced vRouter Kernel Mode Configurations

        In addition to the standard NIC configuration, the vRouter kernel mode supports the following modes:

        • VLAN

        • Bond

        • Bond + VLAN

        NIC Template Configurations

        The snippets below only show the relevant section of the NIC configuration for each mode.

        • VLAN

        • Bond

        • Bond + VLAN

      • Advanced vRouter DPDK Mode Configurations

        In addition to the standard NIC configuration, the vRouter DPDK mode supports the following modes:

        • Standard

        • VLAN

        • Bond

        • Bond + VLAN

        Network Environment Configuration

        Configure the number of hugepages:
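
        A minimal sketch; the role parameter name (ContrailDpdkParameters) and the kernel argument values are assumptions that must be verified for your roles and NIC hardware:

        cat << EOF > ~/contrail-dpdk-hugepages.yaml
        parameter_defaults:
          ContrailDpdkParameters:
            KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1GB hugepagesz=1G hugepages=4"
EOF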

        NIC Template Configurations

        • Standard

        • VLAN

        • Bond

        • Bond + VLAN

      • Advanced vRouter SRIOV Mode Configurations

        vRouter SRIOV can be used in the following combinations:

        • SRIOV + Kernel mode

          • Standard

          • VLAN

          • Bond

          • Bond + VLAN

        • SRIOV + DPDK mode

          • Standard

          • VLAN

          • Bond

          • Bond + VLAN

        Network environment configuration

        Configure the number of hugepages:

        • SRIOV + Kernel mode

        • SRIOV + DPDK mode

        SRIOV PF/VF settings

        NIC template configurations:

        The SRIOV NICs are not configured in the NIC templates. However, vRouter NICs must still be configured.

        See following NIC Template Configurations for vRouter kernel mode.

        The snippets below only show the relevant section of the NIC configuration for each mode.

        • VLAN

        • Bond

        • Bond + VLAN

        See following NIC Template Configurations for vRouter DPDK mode:

        • Standard

        • VLAN

        • Bond

        • Bond + VLAN

  • Advanced Scenarios

    Remote Compute

    Remote Compute extends the data plane to remote locations (POPs) while keeping the control plane central. Each POP has its own set of Contrail control services, which run in the central location. The difficulty is to ensure that the compute nodes of a given POP connect to the Control nodes assigned to that POP. The Control nodes must have predictable IP addresses, and the compute nodes have to know these IP addresses. To achieve this, the following methods are used:

    • Custom Roles

    • Static IP assignment

    • Precise Node placement

    • Per Node hieradata

    Each overcloud node has a unique DMI UUID. This UUID is known on the undercloud node as well as on the overcloud node, so it can be used to map node-specific information. For each POP, a Control role and a Compute role have to be created.

    Overview

    Mapping Table

    Table 6: Mapping Table

    Nova Name | Ironic Name | Ironic UUID | DMI UUID | KVM | IP Address | POP
    overcloud-contrailcontrolonly-0 | control-only-1-5b3s30 | 7d758dce-2784-45fd-be09-5a41eb53e764 | 73F8D030-E896-4A95-A9F5-E1A4FEBE322D | 5b3s30 | 10.0.0.11 | POP1
    overcloud-contrailcontrolonly-1 | control-only-2-5b3s30 | d26abdeb-d514-4a37-a7fb-2cd2511c351f | 14639A66-D62C-4408-82EE-FDDC4E509687 | 5b3s30 | 10.0.0.14 | POP2
    overcloud-contrailcontrolonly-2 | control-only-1-5b3s31 | 91dd9fa9-e8eb-4b51-8b5e-bbaffb6640e4 | 28AB0B57-D612-431E-B177-1C578AE0FEA4 | 5b3s31 | 10.0.0.12 | POP1
    overcloud-contrailcontrolonly-3 | control-only-2-5b3s31 | 09fa57b8-580f-42ec-bf10-a19573521ed4 | 09BEC8CB-77E9-42A6-AFF4-6D4880FD87D0 | 5b3s31 | 10.0.0.15 | POP2
    overcloud-contrailcontrolonly-4 | control-only-1-5b3s32 | 4766799-24c8-4e3b-af54-353f2b796ca4 | 3993957A-ECBF-4520-9F49-0AF6EE1667A7 | 5b3s32 | 10.0.0.13 | POP1
    overcloud-contrailcontrolonly-5 | control-only-2-5b3s32 | 58a803ae-a785-470e-9789-139abbfa74fb | AF92F485-C30C-4D0A-BDC4-C6AE97D06A66 | 5b3s32 | 10.0.0.16 | POP2

    ControlOnly preparation

    Add ControlOnly overcloud VMs to overcloud KVM host

    Note:

    This has to be done on the overcloud KVM hosts.

    Two ControlOnly overcloud VM definitions will be created on each of the overcloud KVM hosts.

    Note:

    The generated ironic_list will be needed on the undercloud to import the nodes to Ironic.

    Get the ironic_lists from the overcloud KVM hosts and combine them.

    Import:

    ControlOnly node introspection

    Get the ironic UUID of the ControlOnly nodes

    The first ControlOnly node on each of the overcloud KVM hosts will be used for POP1, the second for POP2, and so forth.

    Get the ironic UUID of the POP compute nodes:

    The first two compute nodes belong to POP1; the second two compute nodes belong to POP2.

    Create an input YAML using the ironic UUIDs:

    Note:

    Only control_nodes, compute_nodes, dpdk_nodes and sriov_nodes are supported.

    Generate subcluster environment:

    Check subcluster environment file:

    Deployment

    Add contrail-subcluster.yaml, contrail-ips-from-pool-all.yaml and contrail-scheduler-hints.yaml to the OpenStack deploy command:

overcloud Installation

Deployment:
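
The exact command depends on the roles and environment files prepared earlier; the following is a generic example only, and every file name in it is an assumption to adapt:

openstack overcloud deploy --templates ~/tripleo-heat-templates \
  --roles-file ~/tripleo-heat-templates/roles_data.yaml \
  -e ~/overcloud_images.yaml \
  -e ~/tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml \
  -e ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml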

Validation Test:
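
A few generic post-deployment checks (not an exhaustive validation procedure):

openstack stack list                 # the overcloud stack should reach CREATE_COMPLETE
source ~/overcloudrc
openstack catalog list               # overcloud API endpoints are reachable
openstack hypervisor list            # compute nodes are registered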