How to Install Contrail Networking and Red Hat OpenShift 4.6

Note:

This topic covers Contrail Networking in Red Hat OpenShift environments that are using Contrail Networking Release 21-based releases.

Starting in Release 22.1, Contrail Networking evolved into Cloud-Native Contrail Networking. Cloud-Native Contrail offers significant enhancements to optimize networking performance in Kubernetes-orchestrated environments. Cloud-Native Contrail supports Red Hat OpenShift, and we strongly recommend using Cloud-Native Contrail for networking in environments using Red Hat OpenShift.

For general information about Cloud-Native Contrail, see the Cloud-Native Contrail Networking Techlibrary homepage.

Starting in Contrail Networking Release 2011.L1, you can install Contrail Networking with Red Hat OpenShift 4.6 in multiple environments.

This document shows one method of installing Red Hat Openshift 4.6 with Contrail Networking in two separate contexts—on a VM running in a KVM module and within Amazon Web Services (AWS).

There are many implementation and configuration options available for installing and configuring Red Hat OpenShift 4.6; covering all of them is beyond the scope of this document. For additional information on Red Hat OpenShift 4.6 implementation options, see the OpenShift Container Platform 4.6 Documentation from Red Hat.

This document includes the following sections:

How to Install Contrail Networking and Red Hat OpenShift 4.6 using a VM Running in a KVM Module

This section illustrates how to install Contrail Networking with Red Hat OpenShift 4.6 orchestration, where Contrail Networking and Red Hat OpenShift are running on virtual machines (VMs) in a Kernel-based Virtual Machine (KVM) module.

This procedure can also be performed to configure an environment where Contrail Networking and Red Hat OpenShift 4.6 are running in an environment with bare metal servers. You can, for instance, use this procedure to establish an environment where the master nodes host the VMs that run the control plane on KVM while the worker nodes operate on physical bare metal servers.

When to Use This Procedure

This procedure is used to install Contrail Networking and Red Hat OpenShift 4.6 orchestration on a virtual machine (VM) running in a Kernel-based Virtual Machine (KVM) module. Support for Contrail Networking installations onto VMs in Red Hat OpenShift 4.6 environments is introduced in Contrail Networking Release 2011.L1. See Contrail Networking Supported Platforms.

You can also use this procedure to install Contrail Networking and Red Hat OpenShift 4.6 orchestration on a bare metal server.

You cannot incrementally upgrade from an environment using an earlier version of Red Hat OpenShift and Contrail Networking to an environment using Red Hat OpenShift 4.6. You must use this procedure to install Contrail Networking with Red Hat OpenShift 4.6.

This procedure should work with all versions of OpenShift 4.6.

Prerequisites

This document makes the following assumptions about your environment:

  • the KVM environment is operational.

  • the server meets the platform requirements for the Contrail Networking installation. See Contrail Networking Supported Platforms.

  • Minimum server requirements:

    • Master nodes: 8 CPU, 40GB RAM, 250GB SSD storage

      Note:

      In this document, the term master node refers to the nodes that make up the control plane.

    • Worker nodes: 4 CPU, 16GB RAM, 120GB SSD storage

      Note:

      In this document, the term worker node refers to the nodes that run compute services in the data plane.

    • Helper node: 4 CPU, 8GB RAM, 30GB SSD storage

  • In single node deployments, do not use spinning disk arrays with low Input/Output Operations Per Second (IOPS) when using Contrail Networking with Red Hat OpenShift. Higher IOPS disk arrays are required because the control plane always operates as a high availability setup in single node deployments.

    IOPS requirements vary by environment due to multiple factors beyond Contrail Networking and Red Hat OpenShift. We therefore provide this guideline but do not provide specific IOPS requirements.

Install Contrail Networking and Red Hat OpenShift 4.6

Perform these steps to install Contrail Networking and Red Hat OpenShift 4.6 using a VM running in a KVM module:

Create a Virtual Network or a Bridge Network for the Installation

To create a virtual network or a bridge network for the installation:

  1. Log onto the server that will host the VM that will run Contrail Networking.

    Download the virt-net.xml virtual network configuration file from the Red Hat repository.
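    A minimal sketch that downloads the file with wget (the URL is a placeholder; use the virt-net.xml path published in the Red Hat helper-node repository):

      # Download the example virtual network definition (URL is a placeholder)
      wget <virt-net.xml-URL> -O virt-net.xml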

  2. Create a virtual network using the virt-net.xml file.

    You may need to modify your virtual network for your environment.

    Example:
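    A minimal sketch, assuming the virtual network defined in virt-net.xml is named openshift4:

      # Define the virtual network from the downloaded file
      virsh net-define virt-net.xml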

  3. Set the OpenShift 4 virtual network to autostart on bootup:
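    For example, assuming the virtual network is named openshift4:

      # Autostart the network on boot and start it now
      virsh net-autostart openshift4
      virsh net-start openshift4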
    Note:

    If the worker nodes are running on physical bare metal servers in your environment, this virtual network will be a bridge network with IP address allocations within the same subnet. This addressing scheme is similar to the scheme for the KVM server.

Create a Helper Node with a Virtual Machine Running CentOS 7 or 8

This procedure requires a helper node with a virtual machine that is running either CentOS 7 or 8.

To create this helper node:

  1. Download the Kickstart file for the helper node from the Red Hat repository:

    CentOS 8

    CentOS 7

  2. If you haven’t already configured a root password and the NTP server on the helper node, enter the following commands:

    Example Root Password

    Example NTP Configuration

  3. Edit the helper-ks.cfg file for your environment and use it to install the helper node.

    The following examples show how to install the helper node without having to take further actions:

    CentOS 8

    CentOS 7
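    A sketch of the virt-install command for the helper VM, shown for CentOS 7 (the VM name, disk path, ISO location, and kickstart argument are illustrative; adjust --os-variant and the ISO for CentOS 8):

      virt-install --name="ocp4-aHelper" --vcpus=4 --ram=8192 \
        --disk path=/var/lib/libvirt/images/ocp4-aHelper.qcow2,bus=virtio,size=30 \
        --os-variant centos7.0 --network network=openshift4,model=virtio \
        --location /var/lib/libvirt/ISO/CentOS-7-x86_64-Minimal.iso \
        --initrd-inject helper-ks.cfg --extra-args "inst.ks=file:/helper-ks.cfg" \
        --noautoconsole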

    The helper node is installed with the following settings, which are pulled from the virt-net.xml file:

    • HELPER_IP: 192.168.7.77

    • NetMask: 255.255.255.0

    • Default Gateway: 192.168.7.1

    • DNS Server: 8.8.8.8

  4. Monitor the helper node installation progress in the viewer:
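    For example, assuming the helper VM is named ocp4-aHelper:

      virt-viewer ocp4-aHelper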

    When the installation process is complete, the helper node shuts off.

  5. Start the helper node:
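    For example, assuming the helper VM is named ocp4-aHelper:

      virsh start ocp4-aHelper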

Prepare the Helper Node

To prepare the helper node after the helper node installation:

  1. Login to the helper node:
    Note:

    The default HELPER_IP, which was pulled from the virt-net.xml file, is 192.168.7.77.
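    For example, using the default helper IP address:

      ssh root@192.168.7.77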

  2. Install EPEL (Extra Packages for Enterprise Linux) and update CentOS.
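    A minimal sketch:

      yum -y install epel-release
      yum -y update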
  3. Install Ansible and Git and clone the helpernode repository onto the helper node.
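    A minimal sketch; the repository URL shown is the commonly referenced Red Hat helper-node repository, so verify it against the documentation for your release:

      yum -y install ansible git
      git clone https://github.com/RedHatOfficial/ocp4-helpernode
      cd ocp4-helpernode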
  4. Copy the vars.yaml file into the top-level directory:
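    For example, assuming the sample file ships in the repository's docs/examples directory:

      cp docs/examples/vars.yaml .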

    Review the vars.yaml file and change any values that need to be modified for your environment.

    The following values should be reviewed especially carefully:

    • The domain name, which is defined using the domain: parameter in the dns: hierarchy. If you are using local DNS servers, modify the forwarder parameters—forwarder1: and forwarder2: are used in this example—to connect to these DNS servers.

    • Hostnames for master and worker nodes. Hostnames are defined using the name: parameter in either the primaries: or workers: hierarchies.

    • IP and DHCP settings. If you are using a custom bridge network, modify the IP and DHCP settings accordingly.

    • VM and BMS settings.

      If you are using a VM, set the disk: parameter as disk: vda.

      If you are using a BMS, set the disk: parameter as disk: sda.

    A sample vars.yaml file:
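    An abbreviated, illustrative excerpt that covers only the parameters discussed above (all values must be replaced with values from your environment):

      disk: vda
      dns:
        domain: "example.com"
        forwarder1: "8.8.8.8"
        forwarder2: "8.8.4.4"
      primaries:
        - name: "master0"
        - name: "master1"
        - name: "master2"
      workers:
        - name: "worker0"
        - name: "worker1"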

    Note:

    If you are using physical servers to host worker nodes, change the provisioning interface for the worker nodes to the MAC address.

  5. Review the vars/main.yml file to ensure that it reflects the correct version of Red Hat OpenShift, and change the version in the file if needed.

    In the following sample main.yml file, Red Hat OpenShift 4.6 is installed:

  6. Run the playbook to set up the helper node:
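    For example, from the top-level directory of the helpernode repository:

      ansible-playbook -e @vars.yaml tasks/main.yml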
  7. After the playbook is run, gather information about your environment and confirm that all services are active and running:
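    A sketch, assuming the playbook installed the helpernodecheck utility (otherwise, check the DNS, HAProxy, web, and DHCP services directly with systemctl):

      /usr/local/bin/helpernodecheck services
      systemctl is-active named haproxy httpd dhcpd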

Create the Ignition Configurations

To create Ignition configurations:

  1. On your hypervisor and helper nodes, check that your NTP server is properly configured in the /etc/chrony.conf file:
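    For example:

      grep -E '^(server|pool)' /etc/chrony.conf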

    The installation fails with an X509: certificate has expired or is not yet valid message when NTP is not properly configured.

  2. Create a location to store your pull secret objects:
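    For example:

      mkdir -p ~/.openshift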
  3. From the Get Started with OpenShift website, download your pull secret and save it as the file ~/.openshift/pull-secret.
  4. (Contrail containers in password protected registries only) If the Contrail containers in your environment are in password protected registries, also add the authentication information for those registries to the /root/.openshift/pull-secret file.
  5. An SSH key is created for you at ~/.ssh/helper_rsa after completing the previous step. You can use this key or create a unique key for authentication.
  6. Create an installation directory.
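    For example, using the ~/ocp4 directory that is referenced later in this procedure:

      mkdir ~/ocp4
      cd ~/ocp4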
  7. Create an install-config.yaml file.

    An example file:
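    An illustrative sketch of an install-config.yaml file for this type of installation (the domain, cluster name, replica counts, network CIDRs, pull secret, and SSH key must all be replaced with values from your environment; networkType is set to Contrail for Contrail Networking):

      apiVersion: v1
      baseDomain: example.com
      compute:
      - hyperthreading: Enabled
        name: worker
        replicas: 0
      controlPlane:
        hyperthreading: Enabled
        name: master
        replicas: 3
      metadata:
        name: ocp4
      networking:
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        networkType: Contrail
        serviceNetwork:
        - 172.30.0.0/16
      platform:
        none: {}
      pullSecret: '<contents of ~/.openshift/pull-secret>'
      sshKey: '<contents of ~/.ssh/helper_rsa.pub>'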

  8. Create the installation manifests:
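    For example:

      openshift-install create manifests --dir ~/ocp4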
  9. Set the mastersSchedulable: variable to false in the manifests/cluster-scheduler-02-config.yml file.

    A sample cluster-scheduler-02-config.yml file after this configuration change:
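    A sketch of the file after the change, matching the structure that openshift-install generates:

      apiVersion: config.openshift.io/v1
      kind: Scheduler
      metadata:
        creationTimestamp: null
        name: cluster
      spec:
        mastersSchedulable: false
        policy:
          name: ""
      status: {}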

    This configuration change is needed to prevent pods from being scheduled on control plane machines.

  10. Download the tf-openshift installer (tf-openshift-release-tag.tgz) and the tf-operator (tf-operator-release-tag.tgz) for your release from the Contrail Networking Software Download Site.

    See the README Access to Contrail Registry 20XX to obtain the release tags for the installer for your version of Contrail Networking.
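    For example, to extract both packages after downloading them (the release tags are placeholders):

      tar -xzvf tf-openshift-<release-tag>.tgz
      tar -xzvf tf-operator-<release-tag>.tgz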

  11. Install the YAML files to apply the Contrail configuration:

    Configure the YAML file for your environment, paying particular attention to the registry, container tag, cluster name, and domain fields.

    The container tag for any R2011 and R2011.L release can be retrieved from README Access to Contrail Registry 20XX.

  12. NTP synchronization on all master and worker nodes is required for proper functioning.

    If your environment has to use a specific NTP server, set the environment using the steps in the Openshift 4.x Chrony Configuration document.

  13. Generate the Ignition configurations:
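    For example:

      openshift-install create ignition-configs --dir ~/ocp4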
  14. Copy the Ignition files to the Ignition directory on the webserver:
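    A sketch, assuming the helper node serves Ignition files from /var/www/html/ignition (adjust the path to match your webserver configuration):

      cp ~/ocp4/*.ign /var/www/html/ignition/
      chmod o+r /var/www/html/ignition/*.ign
      restorecon -vR /var/www/html/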

Launch the Virtual Machines

To launch the virtual machines:

  1. From the hypervisor, use PXE booting to launch the virtual machine or machines. If you are using a bare metal server, use PXE booting to boot the servers.
  2. Launch the bootstrap virtual machine:
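    A sketch of the virt-install command for the bootstrap VM (the VM name, disk path, and resource sizes are illustrative; the VM PXE boots from the helper node because its disk is empty on first boot):

      virt-install --name ocp4-bootstrap --vcpus 4 --ram 16384 \
        --disk path=/var/lib/libvirt/images/ocp4-bootstrap.qcow2,bus=virtio,size=120 \
        --os-variant rhel8.0 --network network=openshift4,model=virtio \
        --boot hd,network --noautoconsole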

    The following actions occur as a result of this step:

    • A bootstrap node virtual machine is created.

    • The bootstrap node VM is connected to the PXE server. The PXE server is our helper node.

    • An IP address is assigned from DHCP.

    • A Red Hat Enterprise Linux CoreOS (RHCOS) image is downloaded from the HTTP server.

    The ignition file is embedded at the end of the installation process.

  3. Use SSH with the helper_rsa key to log in to the bootstrap node:
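    For example (the bootstrap node address is a placeholder; use the address assigned by the helper node's DHCP server):

      ssh -i ~/.ssh/helper_rsa core@<bootstrap-node-IP>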
  4. Review the logs:
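    A sketch, run on the bootstrap node, that follows the bootstrap services:

      journalctl -b -f -u release-image.service -u bootkube.service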
  5. On the bootstrap node, temporary etcd and bootkube services are created.

    You can monitor these services when they are running by entering the sudo crictl ps command.

    Note:

    Output modified for readability.

  6. From the hypervisor, launch the VMs on the master nodes:
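    A sketch that launches three master VMs sized to the minimum requirements listed earlier (VM names and disk paths are illustrative):

      for i in 0 1 2; do
        virt-install --name ocp4-master-${i} --vcpus 8 --ram 40960 \
          --disk path=/var/lib/libvirt/images/ocp4-master-${i}.qcow2,bus=virtio,size=250 \
          --os-variant rhel8.0 --network network=openshift4,model=virtio \
          --boot hd,network --noautoconsole
      done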

    You can login to the master nodes from the helper node after the master nodes have been provisioned:
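    For example (the master node address is a placeholder):

      ssh -i ~/.ssh/helper_rsa core@<master-node-IP-or-hostname>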

    Enter the sudo crictl ps command at any point to monitor pod creation as the VMs are launching.

Monitor the Installation Process and Delete the Bootstrap Virtual Machine

To monitor the installation process:

  1. From the helper node, navigate to the ~/ocp4 directory.
  2. Track the install process log:
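    For example, using the installer's built-in wait command, which emits the DEBUG and INFO messages described below:

      openshift-install wait-for bootstrap-complete --dir ~/ocp4 --log-level debug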

    Look for the DEBUG Bootstrap status: complete and the INFO It is now safe to remove the bootstrap resources messages to confirm that the installation is complete.

    Do not proceed to the next step until you see these messages.

  3. From the hypervisor, delete the bootstrap VM and launch the worker nodes.
    Note:

    If you are using physical bare metal servers as worker nodes, skip this step.

    Boot the bare metal servers using PXE instead.
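    For VM-based worker nodes, a sketch of this step (VM names, disk paths, and the worker count are illustrative; worker sizes follow the minimum requirements listed earlier):

      # Remove the bootstrap VM
      virsh destroy ocp4-bootstrap
      virsh undefine ocp4-bootstrap --remove-all-storage
      # Launch the worker VMs
      for i in 0 1; do
        virt-install --name ocp4-worker-${i} --vcpus 4 --ram 16384 \
          --disk path=/var/lib/libvirt/images/ocp4-worker-${i}.qcow2,bus=virtio,size=120 \
          --os-variant rhel8.0 --network network=openshift4,model=virtio \
          --boot hd,network --noautoconsole
      done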

Finish the Installation

To finish the installation:

  1. Login to your Kubernetes cluster:
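    For example, using the kubeconfig generated by the installer:

      export KUBECONFIG=~/ocp4/auth/kubeconfig
      oc whoami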
  2. Your installation might be waiting for the worker nodes' certificate signing requests (CSRs) to be approved. The machineconfig node approval operator typically handles CSR approval.

    CSR approval, however, sometimes has to be performed manually.

    To check pending CSRs:
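    For example:

      oc get csr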

    To approve all pending CSRs:
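    One approach, taken from the standard OpenShift node-addition workflow, approves every CSR that does not yet have a status:

      oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
        | xargs --no-run-if-empty oc adm certificate approve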

    You may have to approve all pending CSRs multiple times, depending on the number of worker nodes in your environment and other factors.

    To monitor incoming CSRs:
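    For example:

      watch -n 5 oc get csr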

    Do not move to the next step until incoming CSRs have stopped.

  3. Set your cluster management state to Managed:
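    A sketch, assuming this step refers to the Image Registry Operator, which must be changed from Removed to Managed before you configure registry storage in the next step:

      oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
        --patch '{"spec":{"managementState":"Managed"}}'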
  4. Setup your registry storage.

    For most environments, see Configuring registry storage for bare metal in the Red Hat Openshift documentation.

    For proof of concept labs and other smaller environments, you can set storage to emptyDir.
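    For example, to use ephemeral emptyDir storage (non-production environments only):

      oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
        --patch '{"spec":{"storage":{"emptyDir":{}}}}'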

  5. If you need to make the registry accessible:
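    One common approach is to expose the registry's default route:

      oc patch configs.imageregistry.operator.openshift.io/cluster \
        --type merge --patch '{"spec":{"defaultRoute":true}}'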
  6. Wait for the installation to finish:
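    For example:

      openshift-install wait-for install-complete --dir ~/ocp4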
  7. Add a user to the cluster. See How to Add a User After Completing the Installation.

How to Install Contrail Networking and Red Hat OpenShift 4.6 on Amazon Web Services

Follow these procedures to install Contrail Networking and Red Hat OpenShift 4.6 on Amazon Web Services (AWS):

When to Use This Procedure

This procedure is used to install Contrail Networking and Red Hat OpenShift 4.6 orchestration in AWS. Support for Contrail Networking and Red Hat OpenShift 4.6 environments is introduced in Contrail Networking Release 2011.L1. See Contrail Networking Supported Platforms.

Prerequisites

This document makes the following assumptions about your environment:

  • the server meets the platform requirements for the Contrail Networking installation. See Contrail Networking Supported Platforms.

  • You have the OpenShift binary files, version 4.4.8 or later. See the OpenShift Installation site if you need to update your binary files.

  • You can access OpenShift image pull secrets. See Using image pull secrets from Red Hat.

  • You have an active AWS account.

  • AWS CLI is installed. See Installing the AWS CLI from AWS.

  • You have an SSH key that you can generate or provide on your local machine during the installation.

Configure DNS

A DNS zone must be created and available in Route 53 for your AWS account before starting this installation. You must also register a domain for your Contrail cluster in AWS Route 53. All entries created in AWS Route 53 are expected to be resolvable from the nodes in the Contrail cluster.

For information on configuring DNS zones in AWS Route 53, see the Amazon Route 53 Developer Guide from AWS.

Configure AWS Credentials

The installer used in this procedure creates multiple resources in AWS that are needed to run your cluster. These resources include Elastic Compute Cloud (EC2) instances, Virtual Private Clouds (VPCs), security groups, IAM roles, and other necessary network building blocks.

AWS credentials are needed to access these resources and should be configured before starting this installation.

To configure AWS credentials, see the Configuration and credential file settings section of the AWS Command Line Interface User Guide from AWS.

Download the OpenShift Installer and the Command Line Tools

To download the installer and the command line tools:

  1. Check which versions of the OpenShift installer are available:
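    One way to list the released 4.6 versions from the public OpenShift mirror (the mirror publishes one directory per release):

      curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/ | grep -oE '4\.6\.[0-9]+' | sort -Vu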
  2. Set the version and download the OpenShift installer and the CLI tool.

    In this example, the OpenShift version is 4.6.12.
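    A sketch that downloads and installs the installer and CLI from the public mirror (verify the file-name pattern against the mirror listing for your version):

      VERSION=4.6.12
      wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${VERSION}/openshift-install-linux-${VERSION}.tar.gz
      wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${VERSION}/openshift-client-linux-${VERSION}.tar.gz
      tar -xzf openshift-install-linux-${VERSION}.tar.gz
      tar -xzf openshift-client-linux-${VERSION}.tar.gz
      sudo mv openshift-install oc kubectl /usr/local/bin/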

Deploy the Cluster

To deploy the cluster:

  1. Generate an SSH private key and add it to the agent:
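    For example:

      ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
      eval "$(ssh-agent -s)"
      ssh-add ~/.ssh/id_rsa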
  2. Create a working folder:

    In this example, a working folder named aws-ocp4 is created and the user is then moved into the new directory.
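    For example:

      mkdir ~/aws-ocp4
      cd ~/aws-ocp4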

  3. Create an installation configuration file. See Creating the installation configuration file section of the Installing a cluster on AWS with customizations document from Red Hat OpenShift.

    An install-config.yaml file needs to be created and added to the current directory. A sample install-config.yaml file is provided below.

    Be aware of the following factors while creating the install-config.yaml file:

    • The networkType field is usually set as OpenShiftSDN in the YAML file by default.

      For configuration pointing at Contrail cluster nodes, the networkType field needs to be configured as Contrail.

    • OpenShift master nodes need larger instances. We recommend setting the type to m5.2xlarge or larger for OpenShift master nodes.

    • Most OpenShift worker nodes can use the default instance sizes. You should consider using larger instances, however, for high demand performance workloads.

    • Many of the installation parameters in the YAML file are described in more detail in the Installation configuration parameters section of the Installing a cluster on AWS with customizations document from Red Hat OpenShift.

    • You may want to add the credentials to the Contrail secured registry at hub.juniper.net at this point of the procedure.

    A sample install-config.yaml file:
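    An illustrative sketch of an install-config.yaml file for AWS (the domain, cluster name, region, replica counts, CIDRs, pull secret, and SSH key are placeholders; note networkType: Contrail and the larger master instance type):

      apiVersion: v1
      baseDomain: example.com
      compute:
      - hyperthreading: Enabled
        name: worker
        replicas: 3
      controlPlane:
        hyperthreading: Enabled
        name: master
        platform:
          aws:
            type: m5.2xlarge
        replicas: 3
      metadata:
        name: mycluster
      networking:
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        machineNetwork:
        - cidr: 10.0.0.0/16
        networkType: Contrail
        serviceNetwork:
        - 172.30.0.0/16
      platform:
        aws:
          region: us-east-1
      pullSecret: '<your pull secret>'
      sshKey: '<your public SSH key>'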

  4. Create the installation manifests:
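    For example, from the working folder:

      openshift-install create manifests --dir .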
  5. Download the tf-openshift installer (tf-openshift-release-tag.tgz) and the tf-operator (tf-operator-release-tag.tgz) for your release from the Contrail Networking Software Download Site.

    See the README Access to Contrail Registry 20XX to obtain the release tags for the installer for your version of Contrail Networking.

  6. Install the YAML files to apply the Contrail configuration.

    Configure the YAML file for your environment, paying particular attention to the registry, container tag, cluster name, and domain fields.

    The container tag for any R2011 and R2011.L release can be retrieved from README Access to Contrail Registry 20XX.

  7. Modify the YAML files for your environment.

    A description of every potential configuration change is beyond the scope of this document.

    Common configuration changes include:

    • If you are using non-default network-CIDR subnets for your pods or services, open the deploy/openshift/manifests/cluster-network-02-config.yml file and update the CIDR values.

    • The default number of master nodes in a Kubernetes cluster is 3. If you are using a different number of master nodes, modify the deploy/openshift/manifests/00-contrail-09-manager.yaml file and set the spec.commonConfiguration.replicas field to the number of master nodes.
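    For example, a sketch of the relevant portion of deploy/openshift/manifests/00-contrail-09-manager.yaml for a cluster with five master nodes (only the replicas field is shown):

      spec:
        commonConfiguration:
          replicas: 5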

  8. Create the cluster:
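    For example, from the working folder (this command can take a significant amount of time to complete):

      openshift-install create cluster --dir . --log-level info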
    • Contrail Networking needs to open some networking ports for operation within AWS. These ports are opened by adding rules to security groups.

      Follow this procedure to add rules to security groups when AWS resources are manually created:

      1. Build the Contrail CLI tool for managing security group ports on AWS. This tool allows you to automatically open the ports that Contrail requires in the AWS security groups that are attached to Contrail cluster resources.

        To build this tool:

        1. Clone the tool operator into AWS. In this sample output, the operator is cloned for Contrail Networking Release 2011:

        2. Build the operator tool:

        3. Start the tool:

          After entering this command, you should be in the tf-sc-open tool in your directory. This interface is the compiled tool.

      2. Verify that the service has been created:
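        For example, assuming the service being checked is router-default in the openshift-ingress namespace (referenced in the next step):

          oc -n openshift-ingress get service router-default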

        Proceed to the next step after confirming the service was created.

  9. When the service router-default is created in openshift-ingress, use the following command to patch the configuration:

  10. Monitor the screen messages.

    Look for the INFO Install complete! message.

    The final messages from a sample successful installation:

  11. Access the cluster:
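    For example, using the kubeconfig generated in the working folder:

      export KUBECONFIG=$(pwd)/auth/kubeconfig
      oc get nodes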
  12. Add a user to the cluster. See How to Add a User After Completing the Installation.

How to Add a User After Completing the Installation

The process for adding an OpenShift user is identical on KVM and on AWS.

Red Hat OpenShift 4.6 supports a single kubeadmin user by default. This kubeadmin user is used to deploy the initial cluster configuration.

You can use this procedure to create a Custom Resource (CR) to define an HTPasswd identity provider.

  1. Generate a flat file that contains the user names and passwords for your cluster by using the htpasswd utility:
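    A sketch that creates the file with one user (the user name and password are illustrative):

      htpasswd -c -B -b users.httpasswd admin MyPassword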

    A file called users.httpasswd is created.

  2. Define a secret that contains the HTPasswd user file:
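    For example, following the standard OpenShift HTPasswd workflow (the secret name htpass-secret is illustrative):

      oc create secret generic htpass-secret \
        --from-file=htpasswd=users.httpasswd -n openshift-config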

    This custom resource shows the parameters and acceptable values for an HTPasswd identity provider.
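    A sketch of the CR, matching the standard OpenShift HTPasswd example (the provider name is illustrative; fileData.name must match the secret created above):

      apiVersion: config.openshift.io/v1
      kind: OAuth
      metadata:
        name: cluster
      spec:
        identityProviders:
        - name: htpasswd_provider
          mappingMethod: claim
          type: HTPasswd
          htpasswd:
            fileData:
              name: htpass-secret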

  3. Apply the defined custom resource:
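    For example, assuming the CR above is saved as htpasswd-cr.yaml (the file name is illustrative):

      oc apply -f htpasswd-cr.yaml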
  4. Add the user and assign the cluster-admin role:
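    For example, for the illustrative admin user created above:

      oc adm policy add-cluster-role-to-user cluster-admin admin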
  5. Login using the new user credentials:
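    For example:

      oc login -u admin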

    The kubeadmin user can now safely be removed. See the Removing the kubeadmin user document from Red Hat OpenShift.

How to Install Earlier Releases of Contrail Networking and Red Hat OpenShift

If you need to install Contrail Networking with an earlier version of Red Hat OpenShift, earlier versions of Contrail Networking are also supported with Red Hat OpenShift versions 4.5, 4.4, and 3.11.

For information on installing Contrail Networking with Red Hat OpenShift 4.5, see How to Install Contrail Networking and Red Hat OpenShift 4.5.

For information on installing Contrail Networking with Red Hat OpenShift 4.4, see How to Install Contrail Networking and Red Hat OpenShift 4.4.

For information on installing Contrail Networking with Red Hat OpenShift 3.11, see the following documentation:

Note:

Session affinity by client IP with the ClusterIP service is not supported. The Contrail Networking implementation of the ClusterIP service uses an ECMP load balancer and supports stickiness per flow, not per client IP address.