
Deploying Contrail Cloud

Use the following procedure to deploy Contrail Cloud Release 10.0.1.

Prerequisites for Deploying Contrail Cloud

Before you deploy Contrail Cloud 10.0.1, ensure that your system meets the following prerequisites:

  • Infrastructure Networking

    • Every system must have access to the Contrail Cloud repository satellite. The satellite is used to distribute packages and control software versioning.

    • The undercloud host must have access to the Intelligent Platform Management Interface (IPMI) of every managed server.

    • The undercloud host must be in the same broadcast domain as each managed server's management interface to allow PXE booting. When racks use separate switching, you can meet this requirement by stretching a VLAN across the management interfaces, because BOOTP forwarding in the network fabric is currently not supported. The undercloud must be the only DHCP server in this network.

      Additional networks are created for the control plane, tenant traffic, storage access, and the storage back end, as described in the Red Hat OpenStack Platform director (OSPd) installation and usage documentation.

  • Undercloud Host Setup

    The undercloud is deployed as a virtual machine on a Linux kernel-based virtual machine (KVM) host. You must ensure that the KVM host (see the verification sketch after this list):

    • Runs Red Hat Enterprise Linux (RHEL) 7.4 with only base packages installed.

    • Does not run other virtual machines.

    • Has a network connection that can reach the Contrail Cloud Satellite and has IPMI access to physical hardware.

    • Has a network connection that can be used for provisioning other infrastructure resources.

    • Has at least 500 GB of space in the /var directory to host virtual machines, packages, and images.

    • Has at least 32 GB RAM and 16 vCPUs.

    • Supports a root user with password-free sudo privileges.

    • Provides password-free SSH access over the loopback interface (localhost) for the user with sudo privileges.

    • Resolves Internet and satellite sites with DNS.

    • Has time synchronized with an NTP source.
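
    The following is a minimal verification sketch for these requirements, assuming a RHEL 7.4 KVM host that uses either chronyd or ntpd for time synchronization; adjust the satellite hostname to the value you received with your activation key.

      cat /etc/redhat-release      # expect Red Hat Enterprise Linux Server release 7.4
      df -h /var                   # at least 500 GB available in /var
      free -g; nproc               # at least 32 GB RAM and 16 vCPUs
      sudo -n true && echo "password-free sudo OK"
      ssh -o BatchMode=yes localhost true && echo "password-free loopback SSH OK"
      getent hosts contrail-cloud-satellite.juniper.net    # DNS resolution of the satellite
      chronyc tracking || ntpstat  # time synchronization, depending on the time daemon in use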

Deployment Sequence for Contrail Cloud

The following sections describe the Contrail Cloud deployment sequence in detail:

Install Contrail Cloud Installer on the Contrail Cloud Manager Host

Download the Contrail Cloud installer, which is provided as a .sh script. You then configure site settings to use the Juniper Satellite as the repository from which the Red Hat OpenStack Platform, Ceph Storage, Contrail Networking, and AppFormix packages are pulled to install Contrail Cloud.

Complete the following steps to perform the installation:

  1. Download the Contrail Cloud Installer script from the Contrail Cloud — Download Software page and host the script on the Contrail Cloud Manager.

  2. Specify the Contrail Cloud activation key by setting the environment variables as shown in the following example:

    SATELLITE="contrail-cloud-satellite.juniper.net"
    SATELLITE_KEY="ak-my-account-key"
    SATELLITE_ORG="Contrail"
    Note

    You can request Contrail Cloud activation keys by sending an e-mail to contrail_cloud_subscriptions@juniper.net. You then receive an e-mail that contains a unique satellite activation key, the satellite host, and the satellite organization information.

  3. Ensure that the Contrail Cloud installer script has execute permissions, and then run the script to install the Contrail Cloud packages.

    The Contrail Cloud packages are installed in the /var/lib/contrail_cloud directory.
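
    A minimal sketch of this step is shown below; the installer filename is a placeholder, so substitute the script that you downloaded in step 1.

      # Placeholder filename; substitute the installer script downloaded in step 1
      chmod +x ./contrail_cloud_installer.sh
      ./contrail_cloud_installer.sh    # installs the Contrail Cloud packages into /var/lib/contrail_cloud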

  4. Define site-specific information in the Ansible variables (a command sketch follows the field table below):

    • Change directory to /var/lib/contrail_cloud/config.

    • Copy the sample /var/lib/contrail_cloud/samples/all.yml variables file to the /var/lib/contrail_cloud/ansible/playbooks/inventory/group_vars/all.yml file.

    • Copy the sample /var/lib/contrail_cloud/samples/*.yml variables files to the /var/lib/contrail_cloud/config directory if they are not present.

    • Customize the /var/lib/contrail_cloud/config/site.yml file with site-specific settings that reflect your environment. Ensure that the following fields are changed for each site:

    Field                              Description
    SATELLITE_FQDN                     Satellite host
    SATELLITE_KEY                      Satellite activation key
    SATELLITE_ORG                      Satellite organization
    ccd_host_ip                        Host IP for the Contrail Cloud jumphost
    cluster_domain                     Unique DNS domain name for Contrail Cloud
    ntp_servers                        NTP time sources
    ExternalNetCidr                    A routable subnet to be used for external access to overcloud infrastructure
    ExternalInterfaceDefaultRoute      Route for the external network
    ExternalAllocationPoolsStart       DHCP range start
    ExternalAllocationPoolsEnd         DHCP range end
    ExternalNetworkVlanID              VLAN ID for the external network
    InternalApiNetworkVlanID           VLAN ID for the internal_api network
    TenantNetworkVlanID                VLAN ID for the tenant network
    StorageNetworkVlanID               VLAN ID for the storage network
    StorageMgmtNetworkVlanID           VLAN ID for the storage management network
    ManagementNetworkVlanID            VLAN ID for the management network
    PublicVirtualFixedIPs              External VIP
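
    A minimal command sketch for the copy steps above, using the paths given in this procedure; substitute your preferred editor for vi.

      cd /var/lib/contrail_cloud/config
      # Copy the sample group variables file into the Ansible group_vars directory
      cp /var/lib/contrail_cloud/samples/all.yml /var/lib/contrail_cloud/ansible/playbooks/inventory/group_vars/all.yml
      # Copy the remaining sample variables files only if they are not already present (-n avoids overwriting)
      cp -n /var/lib/contrail_cloud/samples/*.yml /var/lib/contrail_cloud/config/
      # Edit the site-specific settings, including the fields listed in the table above
      vi site.yml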

  5. Run the Contrail Cloud Ansible provisioning:

    • Verify that you can establish an SSH connection without specifying a password.

      sudo ssh localhost true

    • Run the following script with sudo so that it runs with root privileges.

      sudo /var/lib/contrail_cloud/scripts/install_contrail_cloud_manager.sh

      A new user with the user name contrail is created. The default password is c0ntrail123. Use this user name to run all subsequent operations in Contrail Cloud from the /var/lib/contrail_cloud/scripts directory.

Prepare the Deployment Templates

  • Inventory Settings

    The inventory defines all the servers that are used by Contrail Cloud. The /var/lib/contrail_cloud/config/inventory.yml file describes the entire inventory. You can copy a sample inventory file from /var/lib/contrail_cloud/samples/.

    Sample inventory file

    The parameter status is optional. When status is not defined or is set to creating, the nodes are imported into the ironic inventory and used for overcloud roles. When status is set to deleting, the node is removed from the ironic inventory.

  • Control Hosts Settings

    The control hosts run virtual machines for all Contrail Cloud control functions. The following are the various Contrail Cloud control VMs that will be created on the control hosts:

    • OpenStack Controller

    • Contrail Controller

    • Contrail Analytics

    • Contrail Analytics Database

    • AppFormix Controller

    The /var/lib/contrail_cloud/config/control-host-nodes.yml file defines the server and network properties for each control host. To ensure high availability of the control functions, three control hosts must be defined. The control hosts reference nodes defined in the inventory.yml file. You can copy a sample control-host-nodes.yml file from the /var/lib/contrail_cloud/samples/ directory.

    Note

    The control host systems must have sufficient resources to host the control VMs. Ensure the following resources are available:

    • 256 GB RAM

    • Minimum 100 GB first disk for the operating system. The first disk is the first physically connected disk device.

    • Minimum 1 TB hard disk for VM storage (multiple disks are recommended)

    • Minimum 200 GB SSD drive for VM journals

    Sample control hosts file

  • Storage Node Settings

    The /var/lib/contrail_cloud/config/storage-nodes.yml file defines the storage nodes that run Ceph storage services. You need to define a minimum of three storage hosts to ensure high availability of the storage functions. Nodes must also be defined in the inventory.yml file. You can copy a sample storage-nodes.yml file from /var/lib/contrail_cloud/samples/.

    Sample storage node file

  • Compute Node Settings

    The compute nodes are used for Nova compute resources. The /var/lib/contrail_cloud/config/compute-nodes.yml file defines the compute resources. Nodes must also be defined in the inventory.yml file.

    Sample compute nodes file

  • AppFormix Hosts Settings

    AppFormix hosts are used for AppFormix controllers.

    The /var/lib/contrail_cloud/config/appformix-nodes.yml file defines the AppFormix controller resources.

    Sample appformix-hosts file
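
Before you continue, you can confirm that the template files named above are present in the configuration directory. The following is a minimal check; the exact file list varies with your deployment.

  ls -l /var/lib/contrail_cloud/config/
  # Expect at least site.yml, inventory.yml, control-host-nodes.yml,
  # storage-nodes.yml, compute-nodes.yml, and appformix-nodes.yml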

Provision the Contrail Cloud Jumphost

The following example shows the commands used to inspect the disk partitioning and storage layout on the jumphost:

[contrail@csgsnc049 ~]$ sudo vgscan    # scan for LVM volume groups
[contrail@csgsnc049 ~]$ sudo lvscan    # list LVM logical volumes
[contrail@csgsnc049 ~]$ lsblk          # list block devices and their partitions
[contrail@csgsnc049 ~]$ df -kh         # show mounted filesystems and usage

Adding Nodes to the Inventory

The /var/lib/contrail_cloud/scripts/inventory-assign.sh script adds all nodes defined in the /var/lib/contrail_cloud/config/inventory.yml file to the ironic inventory. The nodes added to the ironic inventory are managed by Contrail Cloud.

To add nodes to the ironic inventory:

  1. Log in to the Contrail Cloud host with the user name contrail and password c0ntrail123.
  2. Run the inventory-assign.sh script.

    /var/lib/contrail_cloud/scripts/inventory-assign.sh

Assign Control Host Roles to the Inventory

The control-hosts-deploy.sh script assigns all nodes defined in the /var/lib/contrail_cloud/config/control-host-nodes.yml file as control hosts. The hosts are then imaged and booted.

To assign control host roles to the inventory:

  1. Log in to the Contrail Cloud host with the user name contrail and password c0ntrail123.
  2. Run the control-hosts-deploy.sh script.

    /var/lib/contrail_cloud/scripts/control-hosts-deploy.sh

Create VMs for all Control Roles

The control-vms-deploy.sh script creates VMs for every overcloud control role and imports the VM details into the ironic inventory.

To create VMs for control roles:

  1. Log in to the Contrail Cloud host with the user name contrail and password c0ntrail123.
  2. Run the control-vms-deploy.sh script.

    /var/lib/contrail_cloud/scripts/control-vms-deploy.sh

Assign Compute Hosts

The compute-nodes-assign.sh script assigns the Nova compute role for all nodes defined in the /var/lib/contrail_cloud/config/compute-nodes.yml file.

To assign compute hosts:

  1. Log in to the Contrail Cloud host with the user name contrail and password c0ntrail123.
  2. Run the compute-nodes-assign.sh script.

    /var/lib/contrail_cloud/scripts/compute-nodes-assign.sh

Assign Storage Hosts

The storage-nodes-assign.sh script assigns the Ceph storage role for all nodes defined in the /var/lib/contrail_cloud/config/storage-nodes.yml file.

To assign storage hosts:

  1. Log in to the Contrail Cloud host with the user name contrail and password c0ntrail123.
  2. Run the storage-nodes-assign.sh script.

    /var/lib/contrail_cloud/scripts/storage-nodes-assign.sh

Deploy the OpenStack Cluster

The openstack-deploy.sh script deploys the OpenStack overcloud with all control functions and all compute and storage resources that have been defined in the previous playbooks.

To deploy the OpenStack cluster:

  1. Log in to the Contrail Cloud host with the user name contrail and password c0ntrail123.
  2. Run the openstack-deploy.sh script.

    /var/lib/contrail_cloud/scripts/openstack-deploy.sh

Deploy the AppFormix Cluster

The appformix-deploy.sh script deploys the AppFormix controllers based on the servers defined in the appformix-nodes.yml file.

Copy the AppFormix license file to /var/lib/contrail_cloud/appformix/appformix.sig.

To deploy the AppFormix cluster:

  1. Log in to the Contrail Cloud host with the user name contrail and password c0ntrail123.
  2. Run the appformix-deploy.sh script.

    /var/lib/contrail_cloud/scripts/appformix-deploy.sh

Install VNF Images and Templates

You can use Horizon or OpenStack command line clients to install Glance images and Heat templates for the VNF services.
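
For example, with the OpenStack command-line clients installed and the overcloud credentials loaded, you might upload a VNF image to Glance and launch a Heat stack as shown in the following sketch; the file, image, and stack names are placeholders.

  # Upload a VNF disk image to Glance (file and image names are placeholders)
  openstack image create --disk-format qcow2 --container-format bare \
    --file vnf-disk.qcow2 vnf-image

  # Launch a Heat stack from a VNF template (template file and stack names are placeholders)
  openstack stack create -t vnf-template.yaml vnf-stack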

Adding New Compute and Storage Nodes

To add new compute and storage nodes (a consolidated command sketch follows these steps):

  1. Update the inventory.yml file.

  2. Run the inventory-assign.sh script.

  3. Update compute-nodes.yml with the new nodes, and run the compute-nodes-assign.sh script.

  4. Update storage-nodes.yml with the new nodes, and run the storage-nodes-assign.sh script.

  5. Finally, rerun the openstack-deploy.sh script.
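
The following is a consolidated sketch of these steps, using the script paths given earlier in this document; edit the .yml files in /var/lib/contrail_cloud/config before you run the scripts.

  # Steps 1 and 2: import the new nodes into the ironic inventory
  /var/lib/contrail_cloud/scripts/inventory-assign.sh

  # Step 3: assign the Nova compute role to the new compute nodes
  /var/lib/contrail_cloud/scripts/compute-nodes-assign.sh

  # Step 4: assign the Ceph storage role to the new storage nodes
  /var/lib/contrail_cloud/scripts/storage-nodes-assign.sh

  # Step 5: redeploy the overcloud to bring the new nodes into service
  /var/lib/contrail_cloud/scripts/openstack-deploy.sh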