Provisioning Red Hat OpenShift Container Platform Clusters Using Ansible Deployer
Contrail Release 5.0.2 supports the following ways of installing and provisioning standalone and nested Red Hat OpenShift Container Platform version 3.9 clusters. These instructions apply to systems running on Microsoft Azure, Amazon Web Services (AWS), or bare metal servers (BMS).
Installing a Standalone OpenShift Cluster Using Ansible Deployer
Prerequisites
Ensure that the following system requirements are met.
Master Node (x1 or x3 for high availability)
Image: RHEL 7.5
CPU/RAM: 4 CPU, 32 GB RAM
Disk: 250 GB
Security Group: Allow all traffic from everywhere
Slave Node (xn)
Image: RHEL 7.5
CPU/RAM: 8 CPU, 64 GB RAM
Disk: 250 GB
Security Group: Allow all traffic from everywhere
Load Balancer Node (x1, only when using high availability. Not needed for single master node installation.)
Image: RHEL 7.5
CPU/RAM: 2 CPU, 16 GB RAM
Disk: 100 GB
Security Group: Allow all traffic from everywhere
Ensure that you launch the instances in the same subnet.
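If you are deploying on AWS, you can launch the instances from the CLI. The following is a minimal sketch for one master node; the AMI, subnet, security group, and key pair names are placeholders that you must replace, and the instance type is only an example that meets the CPU/RAM minimums.

# Launch one RHEL 7.5 master node in the shared subnet. All IDs below are
# placeholders; m4.2xlarge (8 vCPU, 32 GB RAM) is one size that satisfies
# the master minimums of 4 CPU / 32 GB RAM.
aws ec2 run-instances \
    --image-id ami-XXXXXXXX \
    --instance-type m4.2xlarge \
    --subnet-id subnet-XXXXXXXX \
    --security-group-ids sg-XXXXXXXX \
    --key-name my-keypair \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":250}}]' \
    --count 1

Launch the slave and (for HA) load balancer nodes the same way, adjusting the instance type and volume size to the requirements listed above.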
Installing a standalone OpenShift cluster using Ansible deployer
Perform the following steps to install a standalone OpenShift cluster with Contrail as the networking provider, and provision the cluster using contrail-ansible-deployer.
You can test the system by launching pods, services, namespaces, network policies, ingress, and so on. For more information, see the examples listed in https://github.com/juniper/openshift-contrail/tree/master/openshift/examples.
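The detailed procedure is in the guide linked at the end of this section. As a minimal sketch, the flow looks like the following, assuming Ansible 2.5 on the deployment node and the Juniper fork of openshift-ansible; the repository URL and branch name here are assumptions based on that guide.

# On the deployment node, install Ansible and git.
yum install -y ansible git

# Fetch the OpenShift playbooks; the Juniper fork carries the Contrail roles.
# (Repository URL and branch name are assumptions.)
git clone https://github.com/Juniper/openshift-ansible.git -b release-3.9-contrail

# Run the prerequisite and deployment playbooks against your ose-install inventory.
ansible-playbook -i ose-install openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i ose-install openshift-ansible/playbooks/deploy_cluster.yml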
Sample ose-install File
Use the following sample ose-install file for reference.
[OSEv3:children]
masters
nodes
etcd
openshift_ca

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=yes
debug_level=2
deployment_type=origin          # openshift-enterprise for Red Hat
openshift_release=v3.9
#openshift_repos_enable_testing=true
containerized=false
openshift_install_examples=true
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
osm_cluster_network_cidr=10.32.0.0/12
openshift_portal_net=10.96.0.0/12
openshift_use_dnsmasq=true
openshift_clock_enabled=true
openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false
openshift_enable_service_catalog=false
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name='cni'
openshift_disable_check=memory_availability,package_availability,disk_availability,package_version,docker_storage
openshift_docker_insecure_registries=opencontrailnightly
openshift_web_console_install=false
#openshift_web_console_nodeselector={'region':'infra'}
openshift_web_console_contrail_install=true
openshift_use_contrail=true
nested_mode_contrail=false
contrail_version=5.0
contrail_container_tag=queens-5.0-156
contrail_registry=opencontrailnightly

# Username/password for private Docker registries
#contrail_registry_username=test
#contrail_registry_password=test

# The option below takes precedence over the Contrail masters if set
#vrouter_physical_interface=ens160

#docker_version=1.13.1

ntpserver=10.1.1.1              # a reachable NTP server is required for Contrail

# Contrail variables
# The variables below are used by contrail-kube-manager to configure the
# cluster. All values shown are defaults and can be modified.
#kubernetes_api_server=10.84.13.52    # defaults to the master, as in this example
#kubernetes_api_port=8080
#kubernetes_api_secure_port=8443
#cluster_name=myk8s
#cluster_project={}
#cluster_network={}
#pod_subnets=10.32.0.0/12
#ip_fabric_subnets=10.64.0.0/12
#service_subnets=10.96.0.0/12
#ip_fabric_forwarding=false
#ip_fabric_snat=false
#public_fip_pool={}
#vnc_endpoint_ip=20.1.1.1
#vnc_endpoint_port=8082

[masters]
10.84.13.52 openshift_hostname=openshift-master

[etcd]
10.84.13.52 openshift_hostname=openshift-master

[nodes]
10.84.13.52 openshift_hostname=openshift-master
10.84.13.53 openshift_hostname=openshift-compute
10.84.13.54 openshift_hostname=openshift-infra openshift_node_labels="{'region': 'infra'}"

[openshift_ca]
10.84.13.52 openshift_hostname=openshift-master
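After the playbooks complete, a quick sanity check might look like the following sketch; the pod name and image are arbitrary examples.

# On the master, confirm that all nodes registered and are Ready.
oc get nodes -o wide

# On any Contrail node, confirm that the Contrail containers are up.
contrail-status

# Smoke test: launch a pod and verify that it receives a pod-network IP.
oc run test-nginx --image=nginx --restart=Never
oc get pod test-nginx -o wide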
Sample ose-install File for an HA Setup
Use the following sample ose-install file for reference.
[OSEv3:children]
masters
nodes
etcd
lb
openshift_ca

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=yes
debug_level=2
deployment_type=openshift-enterprise
openshift_release=v3.9
openshift_repos_enable_testing=true
containerized=false
openshift_install_examples=true
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
osm_cluster_network_cidr=10.32.0.0/12
openshift_portal_net=10.96.0.0/12
openshift_use_dnsmasq=true
openshift_clock_enabled=true
openshift_enable_service_catalog=false
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name='cni'
openshift_disable_check=disk_availability,package_version,docker_storage
openshift_docker_insecure_registries=ci-repo.englab.juniper.net:5010
openshift_web_console_install=false
openshift_web_console_contrail_install=true
openshift_web_console_nodeselector={'region':'infra'}
openshift_hosted_manage_registry=true
openshift_hosted_registry_selector="region=infra"
openshift_hosted_manage_router=true
openshift_hosted_router_selector="region=infra"
ntpserver=10.84.5.100

# OpenShift HA
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb
openshift_master_cluster_public_hostname=lb

# Below are Contrail variables. Comment them out if you do not want to
# install Contrail through ansible-playbook.
contrail_version=5.0
openshift_use_contrail=true
#rhel-queens-5.0-latest
#contrail_container_tag=rhel-queens-5.0-319
#contrail_registry=ci-repo.englab.juniper.net:5010
contrail_registry=hub.juniper.net/contrail
contrail_registry_username=JNPR-Customer200
contrail_registry_password=F********************f
contrail_container_tag=5.0.2-0.309-rhel-queens
contrail_nodes=[10.0.0.7, 10.0.0.8, 10.0.0.13]
vrouter_physical_interface=eth0

[masters]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3

[lb]
10.0.0.5 openshift_hostname=lb

[etcd]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3

[nodes]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3
10.0.0.10 openshift_hostname=slave1
10.0.0.4 openshift_hostname=slave2
10.0.0.6 openshift_hostname=infra1 openshift_node_labels="{'region': 'infra'}"
10.0.0.11 openshift_hostname=infra2 openshift_node_labels="{'region': 'infra'}"
10.0.0.12 openshift_hostname=infra3 openshift_node_labels="{'region': 'infra'}"

[openshift_ca]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3
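Both sample files configure the HTPasswdPasswordIdentityProvider, so you must create at least one login before you can authenticate against the cluster. A minimal sketch, run on a master node; the user name and password are examples.

# Add a user to the htpasswd file referenced by openshift_master_identity_providers.
htpasswd -b /etc/origin/master/htpasswd admin MySecretPassword

# Optionally grant that user full cluster administration rights.
oc adm policy add-cluster-role-to-user cluster-admin admin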
Provisioning of Nested OpenShift Clusters Using Ansible Deployer (Beta)
When Contrail provides networking for an OpenShift cluster that is provisioned on a Contrail-OpenStack cluster, it is called a nested OpenShift cluster. Contrail components are shared between the two clusters.
The following steps describe how to provision a nested OpenShift cluster.
Provisioning of nested OpenShift clusters is supported only as a Beta feature. Ensure that you have an operational Contrail-OpenStack cluster based on Contrail Release 5.0 before provisioning a nested OpenShift cluster.
- Configure network connectivity to Contrail configuration and data plane functions
- Installing a Nested OpenShift Cluster Using Ansible Deployer
Configure network connectivity to Contrail configuration and data plane functions
A nested OpenShift cluster is managed by the same Contrail control processes that manage the underlying OpenStack cluster. The nested OpenShift cluster needs IP reachability to the Contrail control processes. Because the OpenShift cluster is actually an overlay on the OpenStack cluster, you can use the link local service feature or a combination of link local service with fabric Source Network Address Translation (SNAT) feature of Contrail to provide IP reachability to and from the OpenShift cluster on the overlay and the OpenStack cluster.
Use one of the following options to create link local services.
Fabric SNAT with link local service
To provide IP reachability to and from the Kubernetes cluster using fabric SNAT with a link local service, perform the following steps.
Enable fabric SNAT on the virtual network of the VMs.
The fabric SNAT feature must be enabled on the virtual network of the virtual machines on which the Kubernetes master and minions are running.
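You can enable fabric SNAT from the Contrail Web UI (edit the virtual network and set the SNAT option) or through the VNC REST API. The following is a hedged sketch of the API call; the config node address, the virtual network UUID, and the fabric_snat property name are assumptions.

# Enable fabric SNAT on the VM virtual network (UUID and API address are
# placeholders; the fabric_snat property name is an assumption for R5.0).
VN_UUID=<uuid-of-the-vm-virtual-network>
curl -X PUT "http://<config-node-ip>:8082/virtual-network/${VN_UUID}" \
     -H "Content-Type: application/json" \
     -d '{"virtual-network": {"fabric_snat": true}}'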
Using the Contrail GUI, create one link local service that enables the Container Network Interface (CNI) to communicate with the vRouter on its node.
The following link local service is required:

| Contrail Process | Service IP | Service Port | Fabric IP | Fabric Port |
| --- | --- | --- | --- | --- |
| vRouter | Service_IP for the active node | 9091 | 127.0.0.1 | 9091 |
Note: The fabric IP address is 127.0.0.1 because the CNI must communicate with the vRouter on its own underlay node.
For example, the following link local service must be created:

| Link Local Service Name | Service IP | Service Port | Fabric IP | Fabric Port |
| --- | --- | --- | --- | --- |
| K8s-cni-to-agent | 10.10.10.5 | 9091 | 127.0.0.1 | 9091 |
Note: Here, 10.10.10.5 is the service IP address that you chose. It can be any unused IP address in the cluster; it is used only to identify link local traffic and has no other significance.
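Instead of the GUI, you can also create the entry with the provisioning helper shipped with Contrail config nodes. The script path and flags below are assumptions based on the standard Contrail provisioning utilities.

# Create the K8s-cni-to-agent link local service from a config node
# (script path, flags, and credentials are assumptions).
python /opt/contrail/utils/provision_linklocal.py \
    --api_server_ip <config-node-ip> \
    --api_server_port 8082 \
    --linklocal_service_name K8s-cni-to-agent \
    --linklocal_service_ip 10.10.10.5 \
    --linklocal_service_port 9091 \
    --ipfabric_service_ip 127.0.0.1 \
    --ipfabric_service_port 9091 \
    --admin_user admin --admin_password <password> --admin_tenant_name admin \
    --oper add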
Link local only
To configure a link local service, you need a service IP address and a fabric IP address. The fabric IP address is the IP address of the node on which the Contrail processes run. The data plane uses the service IP address, together with the port number, to identify the fabric IP address. The service IP address must be a unique, unused IP address in the entire OpenStack cluster. Identify one service IP address for each node of the OpenStack cluster.
The following link local services are required:

| Contrail Process | Service IP | Service Port | Fabric IP | Fabric Port |
| --- | --- | --- | --- | --- |
| Contrail Config | Service_IP for the active node | 8082 | Node_IP for the active node | 8082 |
| Contrail Analytics | Service_IP for the active node | 8086 | Node_IP for the active node | 8086 |
| Contrail Msg Queue | Service_IP for the active node | 5673 | Node_IP for the active node | 5673 |
| Contrail VNC DB | Service_IP for the active node | 9161 | Node_IP for the active node | 9161 |
| Keystone | Service_IP for the active node | 35357 | Node_IP for the active node | 35357 |
| vRouter | Service_IP for the active node | 9091 | 127.0.0.1 | 9091 |
For example, consider the following hypothetical OpenStack cluster:
Contrail Config   : 192.168.1.100
Contrail Analytics: 192.168.1.100, 192.168.1.101
Contrail Msg Queue: 192.168.1.100
Contrail VNC DB   : 192.168.1.100, 192.168.1.101, 192.168.1.102
Keystone          : 192.168.1.200
vRouter           : 192.168.1.300, 192.168.1.400, 192.168.1.500
This cluster is made of seven nodes. You must allocate seven unused IP addresses for these nodes:
192.168.1.100 --> 10.10.10.1
192.168.1.101 --> 10.10.10.2
192.168.1.102 --> 10.10.10.3
192.168.1.200 --> 10.10.10.4
192.168.1.300 --> 10.10.10.5
192.168.1.400 --> 10.10.10.6
192.168.1.500 --> 10.10.10.7
The following link local services must be created:
| Link Local Service Name | Service IP | Service Port | Fabric IP | Fabric Port |
| --- | --- | --- | --- | --- |
| Contrail Config | 10.10.10.1 | 8082 | 192.168.1.100 | 8082 |
| Contrail Analytics | 10.10.10.1 | 8086 | 192.168.1.100 | 8086 |
| Contrail Analytics 2 | 10.10.10.2 | 8086 | 192.168.1.101 | 8086 |
| Contrail Msg Queue | 10.10.10.1 | 5673 | 192.168.1.100 | 5673 |
| Contrail VNC DB 1 | 10.10.10.1 | 9161 | 192.168.1.100 | 9161 |
| Contrail VNC DB 2 | 10.10.10.2 | 9161 | 192.168.1.101 | 9161 |
| Contrail VNC DB 3 | 10.10.10.3 | 9161 | 192.168.1.102 | 9161 |
| Keystone | 10.10.10.4 | 35357 | 192.168.1.200 | 35357 |
| VRouter-192.168.1.300 | 10.10.10.5 | 9091 | 127.0.0.1 | 9091 |
| VRouter-192.168.1.400 | 10.10.10.6 | 9091 | 127.0.0.1 | 9091 |
| VRouter-192.168.1.500 | 10.10.10.7 | 9091 | 127.0.0.1 | 9091 |
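Creating this many entries by hand is error prone. The following hedged sketch scripts the whole table with the same provisioning helper; the script path, flags, and credentials are assumptions, and the service names are hyphenated for shell friendliness. Note that the service port and fabric port are identical in every row, so one port variable serves both.

# Create every link local service from the table above (run from a config node).
while read name svc_ip port fab_ip; do
    python /opt/contrail/utils/provision_linklocal.py \
        --api_server_ip <config-node-ip> --api_server_port 8082 \
        --linklocal_service_name "$name" \
        --linklocal_service_ip "$svc_ip" --linklocal_service_port "$port" \
        --ipfabric_service_ip "$fab_ip" --ipfabric_service_port "$port" \
        --admin_user admin --admin_password <password> --admin_tenant_name admin \
        --oper add
done <<'EOF'
Contrail-Config 10.10.10.1 8082 192.168.1.100
Contrail-Analytics 10.10.10.1 8086 192.168.1.100
Contrail-Analytics-2 10.10.10.2 8086 192.168.1.101
Contrail-Msg-Queue 10.10.10.1 5673 192.168.1.100
Contrail-VNC-DB-1 10.10.10.1 9161 192.168.1.100
Contrail-VNC-DB-2 10.10.10.2 9161 192.168.1.101
Contrail-VNC-DB-3 10.10.10.3 9161 192.168.1.102
Keystone 10.10.10.4 35357 192.168.1.200
VRouter-192.168.1.300 10.10.10.5 9091 127.0.0.1
VRouter-192.168.1.400 10.10.10.6 9091 127.0.0.1
VRouter-192.168.1.500 10.10.10.7 9091 127.0.0.1
EOF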
Installing a Nested OpenShift Cluster Using Ansible Deployer
Perform the steps in Installing a Standalone OpenShift Cluster Using Ansible Deployer to continue installing and provisioning the OpenShift cluster.
Sample ose-install File
Add the following information to the sample ose-install file shown earlier in this topic.
# Nested mode variables
nested_mode_contrail=true
auth_mode=keystone
keystone_auth_host=192.168.24.12
keystone_auth_admin_tenant=admin
keystone_auth_admin_user=admin
keystone_auth_admin_password=MAYffWrX7ZpPrV2AMAa9zAUvG
keystone_auth_admin_port=35357
keystone_auth_url_version=/v3

# k8s_nested_vrouter_vip is the link local service IP that you configured
# for the node above
k8s_nested_vrouter_vip=10.10.10.5

# k8s_vip is the Kubernetes API server IP
k8s_vip=192.168.1.3

# cluster_network is the virtual network to which the VM network belongs
cluster_network="{'domain': 'default-domain', 'project': 'admin', 'name': 'net1'}"
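With these variables appended, the nested install is driven by the same playbooks as the standalone case; the paths below match the earlier sketch, which assumed the Juniper openshift-ansible fork.

# Rerun the playbooks with the extended inventory to deploy the nested cluster.
ansible-playbook -i ose-install openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i ose-install openshift-ansible/playbooks/deploy_cluster.yml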
For more information, see https://github.com/Juniper/contrail-kubernetes-docs/tree/master/install/openshift/3.9.