Installing a Standalone Red Hat OpenShift Container Platform 3.11 Cluster with Contrail Using Contrail OpenShift Deployer

 

You can install Contrail Networking together with a standalone Red Hat OpenShift Container Platform 3.11 cluster using Contrail OpenShift deployer. Consider the topology illustrated here.

Figure 1: Sample installation topology

Prerequisites

The recommended system requirements are:

System Requirements

  Master Node: 8 vCPU, 16 GB RAM, 100 GB disk

  Infrastructure Node: 16 vCPU, 64 GB RAM, 250 GB disk

  Compute Node: As per OpenShift recommendations

Note

If you use NFS mount volumes, check the disk capacity and the mounts. Using openshift-logging with NFS is not recommended.

Perform the following steps to install a standalone OpenShift 3.11 cluster along with Contrail Networking using contrail-openshift-deployer.

  1. Set up environment nodes for RHEL OpenShift enterprise installations:

    1. Subscribe to RHEL.

(all-nodes)# subscription-manager register --username <username> --password <password> --force

    2. From the list of available subscriptions, find and attach the pool ID for the OpenShift Container Platform subscription.

      (all-nodes)# subscription-manager attach --pool=pool-ID

    3. Disable all yum repositories.

      (all-nodes)# subscription-manager repos --disable="*"

    4. Enable only the required repositories.
    5. Install required packages, such as python-netaddr, iptables-services, and so on.

      (all-nodes)# yum install -y tcpdump wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct python-netaddr openshift-ansible

    Note

    CentOS OpenShift Origin installations are not supported.
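Step 1.4 does not name the repositories. For an OpenShift Container Platform 3.11 installation on RHEL 7, the repositories to enable are typically the following; the repository IDs shown are the standard Red Hat ones, so confirm them against your subscription before enabling:

```shell
(all-nodes)# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.11-rpms" \
    --enable="rhel-7-server-ansible-2.6-rpms"
```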

  2. Get the files from the latest tarball. Download the OpenShift Container Platform install package from the Juniper software download site and modify the contents of the openshift-ansible inventory file.
    1. Download the OpenShift deployer (contrail-openshift-deployer-release-tag.tgz) installer from the Juniper software download site, https://www.juniper.net/support/downloads/?p=contrail#sw. See README Access for Contrail Networking Registry 19xx for the appropriate release tags.
    2. Copy the install package to the node from where Ansible is deployed. Ensure that the node has password-free access to the OpenShift master and slave nodes.

      scp contrail-openshift-deployer-release-tag.tgz openshift-ansible-node:/root/

    3. Log in to the Ansible node and untar the contrail-openshift-deployer-release-tag.tgz package.

      tar -xzvf contrail-openshift-deployer-release-tag.tgz -C /root/

    4. Verify the contents of the openshift-ansible directory.

      cd /root/openshift-ansible/

    5. Modify the inventory/ose-install file to match your OpenShift environment.

      Populate the inventory/ose-install file with the Contrail configuration parameters specific to your system. The following mandatory parameters must be set.

      Note

      The contrail_container_tag value for this release can be found in the README Access to Contrail Registry 19XX file.

      Juniper Networks recommends that you obtain the Ansible source files from the latest release.

    This procedure assumes that there is one master node, one infrastructure node, and one compute node.
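For reference, the mandatory Contrail variables in inventory/ose-install typically take the following form; the registry URL, credentials, and tag shown here are illustrative placeholders, and the actual contrail_container_tag must come from the README file mentioned above:

```
[OSEv3:vars]
contrail_version=19xx
contrail_container_tag=<container_tag>
contrail_registry=<registry>
contrail_registry_username=<username>
contrail_registry_password=<password>
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name='cni'
```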

  3. Edit /etc/hosts to include the information for all the nodes.
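For the one-master, one-infrastructure, one-compute topology assumed here, the /etc/hosts entries might look like the following; the host names and addresses are examples only:

```
10.0.0.1   master.example.com    master
10.0.0.2   infra.example.com     infra
10.0.0.3   compute.example.com   compute
```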
  4. Set up password-free SSH access from the Ansible node to itself and to all the nodes.
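A common way to set this up is to generate a key pair on the Ansible node and copy it to each node; the host names below are examples only:

```shell
(ansible-node)# ssh-keygen -t rsa
(ansible-node)# ssh-copy-id root@master.example.com
(ansible-node)# ssh-copy-id root@infra.example.com
(ansible-node)# ssh-copy-id root@compute.example.com
```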
  5. Run the Ansible playbooks to install OpenShift Container Platform with Contrail. Before you run the playbooks, ensure that you have edited the inventory/ose-install file.

    For a sample inventory/ose-install file, see Sample inventory/ose-install File.
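With openshift-ansible 3.11, the installation is typically run in two phases from the openshift-ansible directory; the playbook paths below are the standard ones shipped with the openshift-ansible package:

```shell
(ansible-node)# cd /root/openshift-ansible
(ansible-node)# ansible-playbook -i inventory/ose-install playbooks/prerequisites.yml
(ansible-node)# ansible-playbook -i inventory/ose-install playbooks/deploy_cluster.yml
```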

  6. Create a password for the admin user to log in to the UI from the master node.
    Note

    If you are using a load balancer, you must manually copy the htpasswd file into all your master nodes.
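Assuming the default htpasswd identity provider with its file at /etc/origin/master/htpasswd, the password can be created with:

```shell
(master-node)# htpasswd /etc/origin/master/htpasswd admin
```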

  7. Assign cluster-admin role to admin user.
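The cluster-admin role is assigned with the standard oc command, run as a user with system:admin privileges on the master node:

```shell
(master-node)# oc adm policy add-cluster-role-to-user cluster-admin admin
```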
  8. Open a Web browser and enter the fully qualified domain name (FQDN) of your master node or load balancer node, followed by :8443/console.

    Use the user name and password created in step 6 to log in to the Web console.

    Your DNS should resolve the host name for access. If the host name does not resolve, add an entry for the host to the /etc/hosts file.

Note

OpenShift 3.11 cluster upgrades are not supported.

Sample inventory/ose-install File
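The full sample file is not reproduced here; a minimal sketch of its structure for the one-master, one-infrastructure, one-compute topology follows. The host names are placeholders, the Contrail variable values must match your environment and the README release tags, and any variables your deployment needs beyond these must still be added:

```
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
openshift_release=v3.11
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name='cni'
contrail_version=19xx
contrail_container_tag=<container_tag>
contrail_registry=<registry>
contrail_registry_username=<username>
contrail_registry_password=<password>
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_group_name='node-config-master'
infra.example.com openshift_node_group_name='node-config-infra'
compute.example.com openshift_node_group_name='node-config-compute'
```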

Note

The /etc/resolv.conf file must be writable.

Caveats and Troubleshooting Instructions

  • If a Java error occurs, install the java-1.8.0-openjdk-devel.x86_64 package (yum install -y java-1.8.0-openjdk-devel.x86_64) and rerun the deploy_cluster playbook.

  • If the service_catalog component fails to deploy but the cluster is otherwise operational, check whether /etc/resolv.conf contains cluster.local in its search line and the host IP address as a nameserver.

  • NTP is installed by OpenShift but must be synchronized by the user. This does not affect any Contrail functionality, but the synchronization status is displayed in the contrail-status output.

  • If the ansible_service_broker component of OpenShift is not up and ansible_service_broker_deploy displays an error, the ansible_service_broker pod did not come up properly. The most likely reason is that the pod failed its liveness and readiness checks. Modify the liveness and readiness checks of this pod when it is brought online to make it operational. Also, verify that the ansible_service_broker pod uses the correct URL from Red Hat.