Note: This topic covers Contrail Networking in Red Hat OpenShift
environments that are using Contrail Networking Release 21-based releases.
Starting in Release 22.1, Contrail Networking evolved into Cloud-Native
Contrail Networking. Cloud-Native Contrail offers significant enhancements
to optimize networking performance in Kubernetes-orchestrated environments.
Cloud-Native Contrail supports Red Hat OpenShift, and we strongly recommend
using Cloud-Native Contrail for networking in environments using Red
Hat OpenShift.
For general information about Cloud-Native Contrail, see the Cloud-Native Contrail Networking Techlibrary homepage.
Starting in Contrail Networking Release 2011.L1, you can install
Contrail Networking with Red Hat OpenShift 4.6 in multiple environments.
This document shows one method of installing Red Hat OpenShift
4.6 with Contrail Networking in two separate contexts: on a VM
running in a KVM module and within Amazon Web Services (AWS).
Many implementation and configuration options are available
for installing and configuring Red Hat OpenShift 4.6, and covering every
option is beyond the scope of this document. For additional information
on Red Hat OpenShift 4.6 implementation options, see the OpenShift Container Platform 4.6 Documentation from Red Hat.
This document includes the following
sections:
How to Install Contrail Networking and Red Hat OpenShift 4.6
using a VM Running in a KVM Module
This section illustrates how to install Contrail
Networking with Red Hat OpenShift 4.6 orchestration, where Contrail
Networking and Red Hat OpenShift are running on virtual machines (VMs)
in a Kernel-based Virtual Machine (KVM) module.
You can also use this procedure to configure an environment
where Contrail Networking and Red Hat OpenShift 4.6 run on
bare metal servers. For instance, you can use
this procedure to establish an environment where the master nodes
host the VMs that run the control plane on KVM while the worker nodes
run on physical bare metal servers.
When to Use This Procedure
This procedure is used to install Contrail Networking and Red
Hat OpenShift 4.6 orchestration on a virtual machine (VM) running
in a Kernel-based Virtual Machine (KVM) module. Support for Contrail
Networking installations onto VMs in Red Hat OpenShift 4.6 environments
is introduced in Contrail Networking Release 2011.L1. See Contrail Networking Supported Platforms.
You can also use this procedure to install Contrail Networking
and Red Hat OpenShift 4.6 orchestration on a bare metal server.
You cannot incrementally upgrade from an environment using an
earlier version of Red Hat OpenShift and Contrail Networking to an
environment using Red Hat OpenShift 4.6. You must use this procedure
to install Contrail Networking with Red Hat OpenShift 4.6.
This procedure should work with all versions of OpenShift 4.6.
Prerequisites
This document makes the following assumptions about your environment:
The KVM environment is operational.
The server meets the platform requirements for the Contrail
Networking installation. See Contrail Networking Supported Platforms.
Minimum server requirements:
Master nodes: 8 CPU, 40GB RAM, 250GB SSD storage
Note: In this document, the term master node refers to the
nodes that build the control plane.
Worker nodes: 4 CPU, 16GB RAM, 120GB SSD storage
Note: In this document, the term worker node refers to the nodes
that run compute services using the data plane.
Helper node: 4 CPU, 8GB RAM, 30GB SSD storage
In single node deployments, do not use spinning disk arrays
with low Input/Output Operations Per Second (IOPS) when using Contrail
Networking with Red Hat OpenShift. Higher IOPS disk arrays are required
because the control plane always operates as a high availability setup
in single node deployments.
IOPS requirements vary by environment due to multiple factors
beyond Contrail Networking and Red Hat OpenShift. We therefore provide
this guideline but do not provide direct guidance around IOPS requirements.
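If you want to measure the IOPS a disk can deliver before deploying, a quick benchmark with fio is one option. This is a minimal sketch, assuming the fio package is available and that /var/tmp resides on the disk under test; the test file name and parameters are illustrative:
# yum -y install fio
# fio --name=iops-test --filename=/var/tmp/fio-iops-test --size=1G --rw=randread --bs=4k --ioengine=libaio --direct=1 --numjobs=4 --runtime=60 --time_based --group_reporting
# rm -f /var/tmp/fio-iops-test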
Install Contrail Networking and Red Hat OpenShift 4.6
Perform these steps to install Contrail Networking
and Red Hat OpenShift 4.6 using a VM running in a KVM module:
Create a Virtual Network or a Bridge Network for the Installation
To create a virtual network or a bridge network for the
installation:
- Log in to the server that will host the VM that will run
Contrail Networking.
- Download the virt-net.xml virtual network
configuration file from the Red Hat repository:
# wget https://raw.githubusercontent.com/RedHatOfficial/ocp4-helpernode/master/docs/examples/virt-net.xml
- Create a virtual network using the virt-net.xml file.
You may need to modify your virtual network for your environment.
Example:
# virsh net-define --file virt-net.xml
- Set the OpenShift 4 virtual network to autostart on bootup:
# virsh net-autostart openshift4
# virsh net-start openshift4
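You can verify that the network is defined, active, and set to autostart:
# virsh net-list --all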
Note: If the worker nodes in your environment run on physical bare metal
servers, this virtual network will be a bridge
network with IP address allocations within the same subnet. This addressing
scheme is similar to the scheme used for the KVM server.
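If the worker nodes are physical servers, the network definition can instead point at an existing host bridge. The following is a minimal sketch, assuming a preconfigured Linux bridge named br0 (the bridge and file names are illustrative):
# cat <<EOF > bridge-net.xml
<network>
  <name>openshift4</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF
# virsh net-define --file bridge-net.xml
# virsh net-autostart openshift4
# virsh net-start openshift4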
Create a Helper Node with a Virtual Machine Running CentOS
7 or 8
This procedure requires a helper node with a virtual
machine that is running either CentOS 7 or 8.
To create this helper node:
- Download the Kickstart file for the helper node from the
Red Hat repository:
CentOS 8
# wget https://raw.githubusercontent.com/RedHatOfficial/ocp4-helpernode/master/docs/examples/helper-ks8.cfg -O helper-ks.cfg
CentOS 7
# wget https://raw.githubusercontent.com/RedHatOfficial/ocp4-helpernode/master/docs/examples/helper-ks.cfg -O helper-ks.cfg
- If you haven't already configured a root password
and NTP servers for the helper node, add lines like the following to the helper-ks.cfg Kickstart file:
Example Root Password
rootpw --plaintext password
Example NTP Configuration
timezone America/Los_Angeles --isUtc --ntpservers=0.centos.pool.ntp.org,1.centos.pool.ntp.org,2.centos.pool.ntp.org,3.centos.pool.ntp.org
- Edit the helper-ks.cfg file for your
environment and use it to install the helper node.
The following examples show how to install the helper node without
requiring any further manual intervention:
CentOS 8
# virt-install --name="ocp4-aHelper" --vcpus=2 --ram=4096 \
--disk path=/var/lib/libvirt/images/ocp4-aHelper.qcow2,bus=virtio,size=50 \
--os-variant centos8 --network network=openshift4,model=virtio \
--boot hd,menu=on --location /var/lib/libvirt/iso/CentOS-8.2.2004-x86_64-dvd1.iso \
--initrd-inject helper-ks.cfg --extra-args "inst.ks=file:/helper-ks.cfg" --noautoconsole
CentOS 7
# virt-install --name="ocp4-aHelper" --vcpus=2 --ram=4096 \
--disk path=/var/lib/libvirt/images/ocp4-aHelper.qcow2,bus=virtio,size=30 \
--os-variant centos7.0 --network network=openshift4,model=virtio \
--boot hd,menu=on --location /var/lib/libvirt/iso/CentOS-7-x86_64-Minimal-2003.iso \
--initrd-inject helper-ks.cfg --extra-args "inst.ks=file:/helper-ks.cfg" --noautoconsole
The helper node is installed with settings pulled from the virt-net.xml file.
- Monitor the helper node installation progress in the viewer:
# virt-viewer --domain-name ocp4-aHelper
When the installation process is complete, the helper node shuts
off.
- Start the helper node:
# virsh start ocp4-aHelper
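You can confirm that the helper VM is running before you log in:
# virsh list --all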
Prepare the Helper Node
To prepare the helper node after the helper node installation:
- Log in to the helper node:
# ssh -l root HELPER_IP
Note: The default HELPER_IP, which was
pulled from the virt-net.xml file, is 192.168.7.77.
- Install the EPEL (Extra Packages for Enterprise Linux) repository, update CentOS, and reboot:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-$(rpm -E %rhel).noarch.rpm
# yum -y update
# reboot
- Install Ansible and Git and clone the helpernode repository onto the helper node.
# yum -y install ansible git
# git clone https://github.com/RedHatOfficial/ocp4-helpernode
# cd ocp4-helpernode
- Copy the vars.yaml file into the top-level directory:
# cp docs/examples/vars.yaml .
Review the vars.yaml file and change any value that needs to be
adjusted for your environment.
The following values should be reviewed especially carefully:
The domain name, which is defined using the domain: parameter in the dns: hierarchy.
If you are using local DNS servers, modify the forwarder parameters—forwarder1: and forwarder2: are used
in this example—to connect to these DNS servers.
Hostnames for master and worker nodes. Hostnames are defined
using the name: parameter in either the masters: or workers: hierarchies.
IP and DHCP settings. If you are using a custom bridge
network, modify the IP and DHCP settings accordingly.
VM and BMS settings.
If you are using a VM, set the disk: parameter
as disk: vda.
If you are using a BMS, set the disk: parameter
as disk: sda.
A sample vars.yaml file:
disk: vda
helper:
  name: "helper"
  ipaddr: "192.168.7.77"
dns:
  domain: "example.com"
  clusterid: "ocp4"
  forwarder1: "8.8.8.8"
  forwarder2: "8.8.4.4"
dhcp:
  router: "192.168.7.1"
  bcast: "192.168.7.255"
  netmask: "255.255.255.0"
  poolstart: "192.168.7.10"
  poolend: "192.168.7.30"
  ipid: "192.168.7.0"
  netmaskid: "255.255.255.0"
bootstrap:
  name: "bootstrap"
  ipaddr: "192.168.7.20"
  macaddr: "52:54:00:60:72:67"
masters:
  - name: "master0"
    ipaddr: "192.168.7.21"
    macaddr: "52:54:00:e7:9d:67"
  - name: "master1"
    ipaddr: "192.168.7.22"
    macaddr: "52:54:00:80:16:23"
  - name: "master2"
    ipaddr: "192.168.7.23"
    macaddr: "52:54:00:d5:1c:39"
workers:
  - name: "worker0"
    ipaddr: "192.168.7.11"
    macaddr: "52:54:00:f4:26:a1"
  - name: "worker1"
    ipaddr: "192.168.7.12"
    macaddr: "52:54:00:82:90:00"
Note: If you are using physical servers to host worker nodes,
set the macaddr: value for each worker node to the MAC address of
the server's provisioning interface.
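To find the MAC address of the provisioning interface on a physical server, you can list the server's interfaces; identifying which interface is used for provisioning depends on your environment:
# ip -br link show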
- Review the vars/main.yml file to
ensure that it reflects the correct version of Red Hat OpenShift.
Change the Red Hat OpenShift version in the file if needed.
In the following sample main.yml file,
Red Hat OpenShift 4.6 is installed:
ssh_gen_key: true
install_filetranspiler: false
staticips: false
force_ocp_download: false
remove_old_config_files: false
ocp_bios: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-live-rootfs.x86_64.img"
ocp_initramfs: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-live-initramfs.x86_64.img"
ocp_install_kernel: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-live-kernel-x86_64"
ocp_client: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.6.12/openshift-client-linux-4.6.12.tar.gz"
ocp_installer: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.6.12/openshift-install-linux-4.6.12.tar.gz"
helm_source: "https://get.helm.sh/helm-v3.5.0-linux-amd64.tar.gz"
chars: (\\_|\\$|\\\|\\/|\\=|\\)|\\(|\\&|\\^|\\%|\\$|\\#|\\@|\\!|\\*)
ppc64le: false
uefi: false
chronyconfig:
  enabled: false
setup_registry:
  deploy: false
  autosync_registry: false
  registry_image: docker.io/library/registry:2
  local_repo: "ocp4/openshift4"
  product_repo: "openshift-release-dev"
  release_name: "ocp-release"
  release_tag: "4.6.1-x86_64"
- Run the playbook to set up the helper node:
# ansible-playbook -e @vars.yaml tasks/main.yml
- After the playbook is run, gather information about your
environment and confirm that all services are active and running:
# /usr/local/bin/helpernodecheck services
Status of services:
===================
Status of dhcpd svc -> Active: active (running) since Mon 2020-09-28 05:40:10 EDT; 33min ago
Status of named svc -> Active: active (running) since Mon 2020-09-28 05:40:08 EDT; 33min ago
Status of haproxy svc -> Active: active (running) since Mon 2020-09-28 05:40:08 EDT; 33min ago
Status of httpd svc -> Active: active (running) since Mon 2020-09-28 05:40:10 EDT; 33min ago
Status of tftp svc -> Active: active (running) since Mon 2020-09-28 06:13:34 EDT; 1s ago
Status of local-registry svc -> Unit local-registry.service could not be found.
Create the Ignition Configurations
To create Ignition configurations:
- On your hypervisor and helper nodes, check that your NTP
server is properly configured in the /etc/chrony.conf file:
# chronyc tracking
The installation fails with an X509: certificate has
expired or is not yet valid message when NTP is not properly
configured.
- Create a location to store your pull secret objects:
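For example, assuming the pull secret will be stored under ~/.openshift as shown in the next step:
# mkdir -p ~/.openshift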
- From the Get Started
with OpenShift website, download your pull secret and save it
as the ~/.openshift/pull-secret file.
# ls -1 ~/.openshift/pull-secret
/root/.openshift/pull-secret
- (Contrail containers in password-protected registries
only) If the Contrail containers in your environment are in password-protected
registries, also add the authentication information for
those registries to the ~/.openshift/pull-secret file.
# cat ~/.openshift/pull-secret
{
  "auths": {
    "hub.juniper.net": {
      "email": "example@example.com",
      "auth": "<base64 encoded concatenated line username:password>"
    },
    "cloud.openshift.com": {
      "auth": "…",
      …
    },
    …
  }
}
- An SSH key was created for you at ~/.ssh/helper_rsa during the helper node setup (ssh_gen_key: true in vars/main.yml). You can use this key
or create a unique key for authentication.
# ls -1 ~/.ssh/helper_rsa
/root/.ssh/helper_rsa
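If you prefer to use a unique key instead, you can generate one; the file name below is illustrative, and the matching public key must then be referenced in the sshKey: field of the install-config.yaml file created later in this procedure:
# ssh-keygen -t rsa -b 4096 -f ~/.ssh/openshift_rsa -N ""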
- Create an installation directory.
# mkdir ~/ocp4
# cd ~/ocp4
- Create an install-config.yaml file.
An example file:
# cat <<EOF > install-config.yaml
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetworks:
  - cidr: 10.254.0.0/16
    hostPrefix: 24
  networkType: Contrail
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '$(< ~/.openshift/pull-secret)'
sshKey: '$(< ~/.ssh/helper_rsa.pub)'
EOF
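The openshift-install command consumes the install-config.yaml file when it creates the manifests, so you may want to keep a backup copy:
# cp install-config.yaml install-config.yaml.bak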
- Create the installation manifests:
# openshift-install create manifests
- Set the mastersSchedulable: variable to false in the manifests/cluster-scheduler-02-config.yml file.
# sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
A sample cluster-scheduler-02-config.yml file after this configuration change:
# cat manifests/cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}
This configuration change is needed to prevent pods from being
scheduled on control plane machines.
- Download the
tf-openshift installer (tf-openshift-release-tag.tgz) and the tf-operator
(tf-operator-release-tag.tgz) for your release from the Contrail
Networking Software Download Site.
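The archives must be extracted before the commands in the next step can reference the ./tf-openshift and ./tf-operator directories. A minimal sketch, assuming both archives were downloaded into the installation directory and using a placeholder release tag:
# cd ~/ocp4
# tar -xzvf tf-openshift-<release-tag>.tgz
# tar -xzvf tf-operator-<release-tag>.tgz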
- Install the YAML files to apply the Contrail configuration:
Configure the YAML file for your environment, paying particular
attention to the registry, container tag, cluster name, and domain
fields.
The container tag for any R2011 or R2011.L release can be retrieved
from README Access to Contrail Registry 20XX.
# Install the tools needed to render the manifests.
yum -y install git jq python3
python3 -m pip install jinja2

# Apply the tf-openshift install manifests into the OpenShift installation directory.
export INSTALL_DIR=$PWD
./tf-openshift/scripts/apply_install_manifests.sh $INSTALL_DIR

# Set the registry, container tag, and cluster details for your environment.
export CONTRAIL_CONTAINER_TAG="2011.L1.297"
export CONTAINER_REGISTRY="hub.juniper.net/contrail"
export DEPLOYER="openshift"
export KUBERNETES_CLUSTER_NAME="ocp4"
export KUBERNETES_CLUSTER_DOMAIN="example.com"
export CONTRAIL_REPLICAS=3

# Render the tf-operator manifests and copy them into the manifests directory.
./tf-operator/contrib/render_manifests.sh
for i in $(ls ./tf-operator/deploy/crds/) ; do
  cp ./tf-operator/deploy/crds/$i $INSTALL_DIR/manifests/01_$i
done
for i in namespace service-account role cluster-role role-binding cluster-role-binding ; do
  cp ./tf-operator/deploy/kustomize/base/operator/$i.yaml $INSTALL_DIR/manifests/02-tf-operator-$i.yaml
done
oc kustomize ./tf-operator/deploy/kustomize/operator/templates/ | sed -n 'H; /---/h; ${g;p;}' > $INSTALL_DIR/manifests/02-tf-operator.yaml
oc kustomize ./tf-operator/deploy/kustomize/contrail/templates/ > $INSTALL_DIR/manifests/03-tf.yaml
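Optionally, confirm that the rendered Contrail manifests now sit alongside the OpenShift manifests:
# ls $INSTALL_DIR/manifests/ | grep -E '^0[123]'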
- NTP synchronization on all master and worker nodes is
required for proper functioning.
- Generate the Ignition configurations:
# openshift-install create ignition-configs
- Copy the Ignition files to the ignition directory on
the webserver:
# cp ~/ocp4/*.ign /var/www/html/ignition/
# restorecon -vR /var/www/html/
# restorecon -vR /var/lib/tftpboot/
# chmod o+r /var/www/html/ignition/*.ign
Launch the Virtual Machines
To launch the virtual machines:
- From the hypervisor, use PXE booting to launch the virtual
machine or machines. If you are using a bare metal server, use PXE
booting to boot the servers.
- Launch the bootstrap virtual machine:
# virt-install --pxe --network bridge=openshift4 --mac=52:54:00:60:72:67 --name ocp4-bootstrap --ram=16384 --vcpus=4 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-bootstrap.qcow2,size=120 --vnc
The following actions occur as a result of this step:
A bootstrap node virtual machine is created.
The bootstrap node VM is connected to the PXE server, which
is the helper node.
An IP address is assigned from DHCP.
A Red Hat Enterprise Linux CoreOS (RHCOS) image is downloaded
from the HTTP server.
The Ignition file is embedded at the end of the installation
process.
- Use SSH with the helper_rsa key to log in to the bootstrap node:
# ssh -i ~/.ssh/helper_rsa core@192.168.7.20
- Review the logs:
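On the bootstrap node, the bootstrap progress can typically be followed with journalctl. The service names below are the standard RHCOS bootstrap units and are given here as an assumption, not taken from this procedure:
[core@bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service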
- On the bootstrap node, temporary etcd and bootkube services are
created.
You can monitor these services when they are running by entering
the sudo crictl ps command.
[core@bootstrap ~]$ sudo crictl ps
CONTAINER IMAGE CREATED STATE NAME POD ID
33762f4a23d7d 976cc3323... 54 seconds ago Running manager 29a...
ad6f2453d7a16 86694d2cd... About a minute ago Running kube-apiserver-insecure-readyz 4cd...
3bbdf4176882f quay.io/... About a minute ago Running kube-scheduler b3e...
57ad52023300e quay.io/... About a minute ago Running kube-controller-manager 596...
a1dbe7b8950da quay.io/... About a minute ago Running kube-apiserver 4cd...
5aa7a59a06feb quay.io/... About a minute ago Running cluster-version-operator 3ab...
ca45790f4a5f6 099c2a... About a minute ago Running etcd-metrics 081...
e72fb8aaa1606 quay.io/... About a minute ago Running etcd-member 081...
ca56bbf2708f7 1ac19399... About a minute ago Running machine-config-server c11...
Note: Output modified for readability.
- From the hypervisor, launch the VMs on the master nodes:
# virt-install --pxe --network bridge=openshift4 --mac=52:54:00:e7:9d:67 --name ocp4-master0 --ram=40960 --vcpus=8 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-master0.qcow2,size=250 --vnc
# virt-install --pxe --network bridge=openshift4 --mac=52:54:00:80:16:23 --name ocp4-master1 --ram=40960 --vcpus=8 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-master1.qcow2,size=250 --vnc
# virt-install --pxe --network bridge=openshift4 --mac=52:54:00:d5:1c:39 --name ocp4-master2 --ram=40960 --vcpus=8 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-master2.qcow2,size=250 --vnc
You can log in to the master nodes from the helper node after
the master nodes have been provisioned:
# ssh -i ~/.ssh/helper_rsa core@192.168.7.21
# ssh -i ~/.ssh/helper_rsa core@192.168.7.22
# ssh -i ~/.ssh/helper_rsa core@192.168.7.23
Enter the sudo crictl ps command at any point
to monitor pod creation as the VMs are launching.
Monitor the Installation Process and Delete the Bootstrap Virtual
Machine
To monitor the installation process:
- From the helper node, navigate to the ~/ocp4 directory.
- Track the install process log:
# openshift-install wait-for bootstrap-complete --log-level debug
Look for the DEBUG Bootstrap status: complete and the INFO It is now safe to remove the bootstrap resources messages to confirm that the installation is complete.
INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp4.example.com:6443...
INFO API v1.13.4+838b4fa up
INFO Waiting up to 30m0s for bootstrapping to complete...
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources
Do not proceed to the next step until you see these messages.
- From the hypervisor, delete the bootstrap VM (a sample cleanup
is shown after the commands below) and launch the worker nodes.
Note: If you are using physical bare metal servers as worker
nodes, skip the worker VM launch commands below and
boot the bare metal servers using PXE instead.
# virt-install --pxe --network bridge=openshift4 --mac=52:54:00:f4:26:a1 --name ocp4-worker0 --ram=16384 --vcpus=4 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-worker0.qcow2,size=120 --vnc
# virt-install --pxe --network bridge=openshift4 --mac=52:54:00:82:90:00 --name ocp4-worker1 --ram=16384 --vcpus=4 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-worker1.qcow2,size=120 --vnc
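A typical libvirt cleanup of the bootstrap VM, assuming the VM name ocp4-bootstrap used earlier in this procedure, looks like this:
# virsh destroy ocp4-bootstrap
# virsh undefine ocp4-bootstrap --remove-all-storage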
Finish the Installation
To finish the installation:
- Log in to your Kubernetes cluster:
# export KUBECONFIG=/root/ocp4/auth/kubeconfig
- Your installation might be waiting for the worker nodes'
certificate signing requests (CSRs) to be approved. The machine config
node approval operator typically handles CSR approval.
CSR approval, however, sometimes has to be performed manually.
To check pending CSRs:
# oc get csr
To approve all pending CSRs:
# oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
You may have to approve all pending CSRs multiple times, depending
on the number of worker nodes in your environment and other factors.
To monitor incoming CSRs:
# watch -n5 oc get csr
Do not move to the next step until incoming CSRs have stopped.
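Once incoming CSRs have stopped, you can confirm that the worker nodes have joined the cluster and are in the Ready state:
# oc get nodes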
- Set the image registry operator's management state to Managed:
# oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
- Set up your registry storage.
For most environments, see Configuring registry storage for bare metal in the Red Hat
OpenShift documentation.
For proof-of-concept labs and other smaller environments, you
can set the storage to emptyDir:
# oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
- If you need to make the registry accessible externally, enable its default route:
# oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'
- Wait for the installation to finish:
# openshift-install wait-for install-complete
INFO Waiting up to 30m0s for the cluster at https://api.ocp4.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ocp4/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp4.example.com
INFO Login to the console with user: kubeadmin, password: XXX-XXXX-XXXX-XXXX
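After the installation completes, you can optionally confirm that the cluster operators are available and that the Contrail pods deployed by the tf-operator are running; the exact namespace depends on the manifests rendered earlier:
# oc get clusteroperators
# oc get pods -A | grep -iE 'contrail|tf-'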
- Add a user to the cluster. See How to Add a User After Completing the Installation.