Deploy Contrail Cloud

Prerequisites for Contrail Cloud Deployment

Before you deploy Contrail Cloud, ensure that your system meets the following prerequisites:

Infrastructure Networking

  • Every system must have access to the Contrail Cloud repository satellite. The satellite is used to distribute packages and control software versions.

  • The Contrail Cloud jump host must have access to the Intelligent Platform Management Interface (IPMI) of every managed server.

  • The jump host must be in the same broadcast domain as each managed server’s management interface to allow Preboot Execution Environment (PXE) booting.

    Note:

    When running multiple networks that use different switching devices per rack, PXE booting is accomplished by stretching a VLAN across the interfaces. BOOTP forwarding in the network fabric is not supported. The undercloud is the only DHCP server in this network.

  • You must set the jump host hostname to its fully qualified domain name (FQDN).

  • You must also add a proper /etc/hosts entry on the jump host for that FQDN.

  • The jump host FQDN should also be resolvable by DNS, returning an IP address that is reachable and routable from the entire cloud environment.

Contrail Cloud Jump Host Setup

The Red Hat OpenStack Platform Director (also known as the undercloud) is deployed as a virtual machine on the Contrail Cloud jump host, which runs the Linux Kernel-based Virtual Machine (KVM) hypervisor. You must ensure that the KVM host OS:

  • Runs Red Hat Enterprise Linux 8.2 or earlier with only base packages installed. Contrail Cloud installs RHEL 8.2 and all necessary packages as part of the installation process.

  • Is not running other virtual machines.

  • Has a network connection that can reach the Contrail Cloud Repository Satellite and has IPMI access to physical hardware.

  • Has a network connection that can be used for provisioning other infrastructure resources.

  • Has at least 500 GB space in the /var directory to host virtual machines, packages, and images.

  • Has at least 40 GB RAM and 24 vCPUs.

  • Has a user, such as the root user, with password-less sudo permissions.

  • Provides password-less SSH access over the loopback interface for users with sudo permissions.

  • Resolves Internet and satellite sites with DNS.

  • Has the time synchronized with an NTP source.
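The following quick checks are one way to confirm several of these requirements on the intended KVM host (a sketch using standard Linux tools; adjust to your environment):

  df -h /var                 # at least 500 GB available for VMs, packages, and images
  free -g                    # at least 40 GB of RAM
  nproc                      # at least 24 vCPUs
  timedatectl | grep -i ntp  # time synchronized with an NTP source
  sudo ssh localhost true    # password-less SSH in loopback for a sudo-capable user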

Deployment Sequence for Contrail Cloud Deployment

The following deployment sequence describes how to install, configure, and deploy Contrail Cloud.

Note:

If you encounter an error in any step of the sequence, you can undo the step with the clean-up feature. You reverse the installation sequence by running each script with the “-c” argument until you reach the desired state in the sequence. For example, to redeploy the Contrail Cloud and OpenStack clusters:
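A sketch of one such sequence (clean in reverse of the deployment order, then redeploy both clusters; adapt this to how far your deployment progressed):

  /var/lib/contrail_cloud/scripts/k8s-tf-operator-deploy.sh -c
  /var/lib/contrail_cloud/scripts/openstack-deploy.sh -c
  /var/lib/contrail_cloud/scripts/openstack-deploy.sh
  /var/lib/contrail_cloud/scripts/k8s-tf-operator-deploy.sh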

When you clean the deployment performed by the k8s-tf-operator-deploy.sh script, you must also clean the openstack-deploy.sh deployment and then redeploy both to ensure that each has a consistent state.

Install Contrail Cloud Installer on the Jump Host

The jump host is the Contrail Cloud host and is the starting point for deploying Contrail Cloud. Before you begin the installation, do the following:

  1. Send a request to contrail_cloud_subscriptions@juniper.net to obtain the activation keys for Contrail Cloud. You will receive an email containing:
    • A unique satellite activation key

    • The satellite DNS name

    • The satellite organization

    Note:

    Contrail Cloud Satellite is the repository that contains the bundle for Contrail Cloud.

  2. Create new SSH keys. Verify that the root user has SSH keys before performing the installation.
  3. Create a passphrase-protected key.

    If a passphrase is set on the SSH key, you can use the ssh-agent to cache the passphrase. For example, as the contrail user on the jump host:
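    A minimal sketch, assuming the key is at the default path ~/.ssh/id_rsa:

      eval "$(ssh-agent -s)"
      ssh-add ~/.ssh/id_rsa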

  4. Ensure that the root user can connect through SSH to the localhost without a password. To authorize access, a password might be required the first time.
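    One way to authorize and verify this access, run as root on the jump host (a sketch; the key path is an assumption):

      ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
      ssh localhost true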

Install Contrail Cloud

  1. Untar the contrail_cloud_installer.sh on the jump host.

    You can download the installer at: Juniper Networks Contrail Cloud Download Site.
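    For example, a sketch of unpacking the download (the archive name below is a placeholder for the file you downloaded):

      tar xzf contrail-cloud-installer.tar.gz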

  2. Specify the activation key by setting the environment variables.

    For example:
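    The variable names below are hypothetical and shown for illustration only; use the names expected by the installer, with the values from your Contrail Cloud activation email:

      export SATELLITE_KEY="<your-activation-key>"
      export SATELLITE_FQDN="<satellite-dns-name>"
      export SATELLITE_ORG="<satellite-organization>"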

  3. Verify that the installer script has the required permissions to install the packages. The packages are installed in the /var/lib/contrail_cloud directory.

  4. Define site-specific information in the Ansible variables:

    1. Change the directory to /var/lib/contrail_cloud/config.

    2. Copy the sample /var/lib/contrail_cloud/samples/*.yml configuration files to the /var/lib/contrail_cloud/config directory.

      Note:

      You can skip this step if you have existing configuration files in the config directory.
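      For example (a direct translation of the previous two sub-steps):

        cd /var/lib/contrail_cloud/config
        cp /var/lib/contrail_cloud/samples/*.yml .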

    3. Add your activation key (satellite organization and FQDN) to the site.yml configuration file.

    4. Customize the site.yml configuration file (/var/lib/contrail_cloud/config/site.yml) with site-specific settings for your environment. Ensure that the following fields are changed for each site:

    Note:

    If you are deploying DPDK on an Intel X710 NIC, set the DPDK driver to vfio-pci in the site.yml configuration file as follows:

    For a complete matrix of supported NIC and driver mappings, see the Contrail Networking NIC Support Matrix.

  5. Prepare the Ansible Vault. The vault allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plaintext in playbooks or roles.

    1. Customize the vault-data.yml configuration file:

    2. Change the password for the vault-encrypted file. The default password is c0ntrail123.

    Note:

    Store the vault password in a plain-text password file. Using a password file prevents Ansible from asking you for the password every time. When creating the file for the contrail user, make sure it is read-only. We recommend that you delete the file after the deployment completes.
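    A sketch using standard ansible-vault tooling (the password-file path ~/.vault_pass is an assumption):

      # Change the vault password from the default.
      ansible-vault rekey /var/lib/contrail_cloud/config/vault-data.yml
      # Store the new password in a read-only file so Ansible does not prompt
      # for it each time; delete this file after the deployment completes.
      echo '<new-vault-password>' > ~/.vault_pass
      chmod 400 ~/.vault_pass
      export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass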

  6. Add the satellite key to the vault by using the ansible-vault edit config/vault-data.yml command.

  7. Run the Ansible provisioning. The provisioning includes setting up the jump host, RHV Manager, and the undercloud (RHOSPd).

    1. Make sure that you can establish an SSH connection without specifying a password:

      sudo ssh localhost true

    2. Install the automation scripts:

      sudo /var/lib/contrail_cloud/scripts/install_contrail_cloud_manager.sh

      When the provisioning is finished, a new user with the username contrail is created on the jump host and a new set of SSH keys is generated that gives the user access to the undercloud VM and the control hosts. The overcloud nodes, including the Contrail Insights nodes, are accessible by the heat-admin user and use a separate pair of keys stored on the undercloud VM, by default.

      Make sure that you change the default password in your vault-data.yml file as discussed above.

Contrail Cloud adds entries to the /home/contrail/.ssh/config file that include the username used for each of the overcloud nodes (and the undercloud). This means you can use ssh undercloud or ssh <address> without specifying a user.

You can authorize the contrail user keys for the heat-admin user by defining them in the site.yml configuration file:

Use the contrail user to run all subsequent operations in Contrail Cloud from the /var/lib/contrail_cloud/scripts directory:
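One way to switch to that user and directory (a sketch):

  sudo su - contrail
  cd /var/lib/contrail_cloud/scripts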

Note:

The contrail user’s SSH keys are authorized for the root user. This means that the contrail user can use SSH to log in as root on the jump host.

Prepare the Configuration Files

Table 1 describes the configuration files that you use in Contrail Cloud. See Appendix A for the corresponding sample YAML files.

You can validate your configuration files at any time by running the /var/lib/contrail_cloud/scripts/node-configuration.py script. This script loads all the configuration files, checks the syntax, and verifies that the structures and values conform to the schema. You can use different arguments with the Python script depending on the results you are looking for.

A secured registry is used as of Contrail Cloud Release 16.3. You must provide your container image registry credentials in your vault-data.yml file. From the jump host, see /var/lib/contrail_cloud/samples/unencrypted-vault-data.yml for more information and a vault data example. Always update your vault-data.yml file with your most recent credentials before performing any deployment activities.

Note:

You can copy the sample files from the /var/lib/contrail_cloud/samples/ directory on your jump host.

Table 1: Contrail Cloud Configuration Files

Each entry in the table lists the configuration settings, the filename and location (with a corresponding sample file), and a description.

Site settings

  • site.yml (/var/lib/contrail_cloud/config/site.yml)

  • sample file: sample site.yml

Defines the properties for your deployment environment. The properties in this file are unique for every deployment and need to be customized.

Inventory settings

  • inventory.yml (/var/lib/contrail_cloud/config/inventory.yml)

  • sample file: sample inventory.yml

Defines all servers used by Contrail Cloud.

Control hosts settings

  • control-host-nodes.yml (/var/lib/contrail_cloud/config/control-host-nodes.yml)

  • sample file: sample control-host-nodes.yml

Defines the server and network properties for each control host. To ensure high availability of the control functions, you must define three control hosts in this file and also in the inventory.yml configuration file.

Each control host runs virtual machines for all Contrail Cloud control functions. The VMs created on the control hosts include:

  • OpenStack and Ceph Controller

  • K8s host nodes

  • Contrail Insights Controller

To host the control VMs, the control host must meet the following minimum specifications:

  • 256 GB RAM

  • Minimum 100 GB first disk for the operating system

  • Minimum 1 TB hard disk for VM storage (multiple SSDs with RAID is recommended)

  • Hardware RAID controller set to the right RAID level for your operating environment. The operating environment includes: operating system disk, VM storage, and VM journals.

Kubernetes host settings

  • k8s-host-nodes.yml (/var/lib/contrail_cloud/config/k8s-host-nodes.yml)

  • sample file: sample k8s-host-nodes.yml

Defines the host nodes for the Kubernetes VMs.

Overcloud network settings

  • overcloud-nics.yml (/var/lib/contrail_cloud/config/overcloud-nics.yml)

  • sample file: sample overcloud-nics.yml

Defines the network layout for each of the roles that are deployed to the OpenStack and Contrail Insights VMs.

Compute node settings

  • compute-nodes.yml (/var/lib/contrail_cloud/config/compute-nodes.yml)

  • sample file: sample compute-nodes.yml

Defines the compute resources and host aggregates. You also manage host aggregates and match them with availability zones in this file.

You must also define the compute nodes in the inventory.yml configuration file.

Storage node settings

  • storage-nodes.yml (/var/lib/contrail_cloud/config/storage-nodes.yml)

  • sample file: sample storage-nodes.yml

Defines the storage nodes that run Ceph storage services.

You must define a minimum of three storage hosts to ensure high availability of the storage functions. You must also define the storage nodes in the inventory.yml configuration file.

Vault data settings

  • vault-data.yml (/var/lib/contrail_cloud/config/vault-data.yml)

  • sample file: sample vault-data.yml

Encrypted file that holds all sensitive data, such as passwords, product keys, user data, and secured registry information.

Add Nodes to the OpenStack Ironic Inventory

The /var/lib/contrail_cloud/scripts/inventory-assign.sh script adds all nodes you define in the inventory.yml file to the ironic inventory. The nodes added to the ironic inventory are managed by Contrail Cloud.

To add nodes to the ironic inventory:

  1. Log in to the jump host with the username contrail and password c0ntrail123.
  2. Run the inventory-assign.sh script.

    /var/lib/contrail_cloud/scripts/inventory-assign.sh

  3. Generate a report of the available resource properties.

    These details are helpful when configuring roles, disk devices, and network interfaces. Nodes must be loaded into the Ironic inventory before running the node-configuration.py script. This script is also used to validate configurations against the schema, and can be used after editing any of the configuration files. The report can be generated by running:

    /var/lib/contrail_cloud/scripts/node-configuration.py group

    You can generate more detailed reports for a specific resource (where <resource> is the inventory resource name) as follows:

Deploy Control Hosts

A control host is a hypervisor running on a server that hosts virtualized control functions as controller nodes. Controller nodes are VMs responsible for managing server functions. The control-hosts-deploy.sh script assigns all nodes that are defined in the /var/lib/contrail_cloud/config/control-host-nodes.yml file as control hosts. The hosts are imaged, booted, configured, and prepared to host the overcloud control plane VMs.

To deploy control host roles to the inventory:

  1. Log in to the jump host with the username contrail and password c0ntrail123.
  2. Run the control-hosts-deploy.sh script.

    /var/lib/contrail_cloud/scripts/control-hosts-deploy.sh

Create VMs for all Control Roles

The control-vms-deploy.sh script imports VM details into the ironic inventory.

To create VMs for control roles:

  1. Log in to the jump host with the username contrail and password c0ntrail123.
  2. Run the control-vms-deploy.sh script.

    /var/lib/contrail_cloud/scripts/control-vms-deploy.sh

Assign Compute Nodes

A compute node is a server that hosts virtual machines that provide services over the network. The compute-nodes-assign.sh script assigns the Nova compute role for all nodes that you define in the compute-nodes.yml configuration file (/var/lib/contrail_cloud/config/compute-nodes.yml).

To assign compute nodes:

  1. Log in to the jump host with the username contrail and password c0ntrail123.
  2. Run the compute-nodes-assign.sh script.

    /var/lib/contrail_cloud/scripts/compute-nodes-assign.sh

Assign Storage Nodes

Storage nodes are servers whose purpose is to store data. In Contrail Cloud, storage nodes run Red Hat Ceph storage software. The storage-nodes-assign.sh script assigns the Ceph storage role for all nodes that are defined in the storage-nodes.yml file (/var/lib/contrail_cloud/config/storage-nodes.yml).

To assign storage nodes:

  1. Log in to the jump host with the username contrail and password c0ntrail123.
  2. Run the storage-nodes-assign.sh script.

    /var/lib/contrail_cloud/scripts/storage-nodes-assign.sh

Deploy the Kubernetes Cluster

The Kubernetes (k8s) cluster provides the infrastructure for the Contrail control plane, which is deployed adjacent to (outside of) the overcloud OpenStack cluster. The k8s-cluster-deploy.sh script initiates the deployment of the Kubernetes cluster.

To deploy the Kubernetes cluster:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the k8s-cluster-deploy.sh script.

    /var/lib/contrail_cloud/scripts/k8s-cluster-deploy.sh

Deploy the OpenStack Cluster

The openstack-deploy.sh script deploys the OpenStack overcloud with all control functions and all compute and storage resources that were defined in the previous playbooks.

To deploy the OpenStack cluster:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the validate-node.sh script to verify that the environment is set correctly.

    Run the script on the jump host in /var/lib/contrail_cloud/scripts to validate the YAML configuration files for:

    • Network for OpenStack Controllers

    • Networking for controller hosts and compute hosts

    • Disk resource and configuration validation
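    For example, from the scripts directory noted above:

      /var/lib/contrail_cloud/scripts/validate-node.sh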

  3. Run the openstack-deploy.sh script.

    /var/lib/contrail_cloud/scripts/openstack-deploy.sh

Deploy the Contrail Cloud Control Plane

The k8s-tf-operator-deploy.sh script deploys the services that make up the Contrail control plane into Kubernetes.

Note:

This deployment is best done in parallel with the openstack-deploy.sh script so that both clusters can synchronize with each other more efficiently.

To deploy the control plane:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the k8s-tf-operator-deploy.sh script.

    /var/lib/contrail_cloud/scripts/k8s-tf-operator-deploy.sh

Validate the OpenStack Environment

You can validate and check that the environment is working as expected. By default, tests that require floating IPs (FIPs) are skipped. You can run the provision-sdn-gateway.sh script before validation to provision SDN gateways and an external network that Tempest can use. Tempest is a set of integration tests that run against a live OpenStack cluster. You can find examples of the object definitions in the site.yml file (/var/lib/contrail_cloud/samples/features/provision-sdn-gateway/site.yml).

Use the overcloud-validation.sh script to run Tempest test collections in newly deployed environments. The script downloads a CirrOS VM image, uploads it to the overcloud, and creates new flavors. After the script runs, you can find the test results in the undercloud home directory, where two files are created:

  • tempest-subunit-smoke.xml

  • tempest-subunit-full.xml

The first line of each file shows the number of failures and the total count of tests that were run.
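A sketch of running the validation and checking the summary afterward (the script path is assumed to follow the same pattern as the other Contrail Cloud scripts):

  /var/lib/contrail_cloud/scripts/overcloud-validation.sh
  # Afterward, on the undercloud VM, the first line of each result file
  # summarizes the failures and the total number of tests:
  head -n 1 ~/tempest-subunit-smoke.xml ~/tempest-subunit-full.xml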

Install VNF Images and Templates

You can use Horizon or OpenStack command-line clients to install Glance images and Heat templates for the VNF services.
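For example, with the OpenStack command-line client (the image and template names here are placeholders):

  # Upload a VNF image to Glance.
  openstack image create --disk-format qcow2 --container-format bare \
    --file my-vnf.qcow2 my-vnf-image
  # Launch a VNF from a Heat template.
  openstack stack create -t my-vnf-template.yaml my-vnf-stack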

Add New Compute and Storage Nodes

To add new compute and storage nodes to an existing environment:

  1. Update the inventory.yml configuration file and run the inventory-assign.sh script.

  2. Update the compute-nodes.yml configuration file with the new nodes, and run the compute-nodes-assign.sh script.

  3. Update the storage-nodes.yml configuration file with the new nodes, and run the storage-nodes-assign.sh script.

  4. Rerun the openstack-deploy.sh script.
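Taken together, a sketch of the sequence (run after updating the corresponding configuration files):

  /var/lib/contrail_cloud/scripts/inventory-assign.sh
  /var/lib/contrail_cloud/scripts/compute-nodes-assign.sh
  /var/lib/contrail_cloud/scripts/storage-nodes-assign.sh
  /var/lib/contrail_cloud/scripts/openstack-deploy.sh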

Gather Logs

You can run a script that gathers important log, configuration, and status data from your deployed nodes into one place. This script is useful if you need specific information for troubleshooting or when making support calls.

We recommend that you use the script after a successful deployment to provide a baseline that can be compared against future upgrades or failures. To archive the configuration, status, and logs from the deployment:
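A sketch (assuming collect_data.sh is located with the other Contrail Cloud scripts):

  /var/lib/contrail_cloud/scripts/collect_data.sh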

The usage description of the collect_data.sh script is as follows: