Installing cRPD on Kubernetes
Kubernetes is an open-source platform for managing containerized workloads and services. Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime; for example, if a container goes down, another container needs to start. Kubernetes provides you with a framework to run distributed systems resiliently, and a platform for deployment automation, scaling, and operation of application containers across clusters of hosts.
Prerequisite
To install Kubernetes on a Linux system and to deploy Kubernetes on a two-node Linux cluster, see Kubernetes Installation.
When you deploy Kubernetes, you get a cluster. A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. The worker node(s) host the pods that are the components of the application.
This section outlines the steps to create the cRPD Docker image on Kubernetes.
Installing Kubernetes
To install Kubernetes:
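The installation steps themselves are covered in the linked Kubernetes Installation guide. As a rough sketch only (the repository URL, the v1.29 version, and the pod network CIDR below are illustrative assumptions for an Ubuntu host, not values from this document), the core packages can be installed like this:

```shell
# Sketch: install kubeadm, kubelet, and kubectl on an Ubuntu host
# (repository version v1.29 is an illustrative choice).
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key |
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" |
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# On the primary node, initialize the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

Worker nodes then join the cluster with the `kubeadm join` command that `kubeadm init` prints on completion.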
Kubernetes Cluster
Kubernetes coordinates a cluster of computers that are connected to work as a single unit. Kubernetes automates the deployment and scheduling of cRPD across a cluster in an efficient way.
A Kubernetes cluster consists of two types of resources:
- The Primary coordinates the cluster
- Nodes are the workers that run applications
The Primary is responsible for managing the cluster. The primary coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes primary. The node should also have tools for handling container operations, such as Docker or rkt. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
When you deploy cRPD on Kubernetes, the primary starts the application containers. The primary schedules the containers to run on the cluster's nodes. The nodes communicate with the primary using the Kubernetes API, which the primary exposes. End users can also use the Kubernetes API directly to interact with the cluster.
A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Primary. A Node can have multiple Pods, and the primary automatically handles scheduling the Pods across the Nodes in the cluster.
Every Kubernetes Node runs at least:
- Kubelet, a process responsible for communication between the Kubernetes primary and the Node; it manages the Pods and the containers running on a machine.
- A container runtime (like Docker or rkt) responsible for pulling the container image from a registry, unpacking the container, and running the application.
To create a minikube cluster:
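Assuming minikube and kubectl are already installed, a single-node cluster can be brought up along these lines (the driver choice is an illustrative assumption):

```shell
# Start a local single-node Kubernetes cluster using the Docker driver.
minikube start --driver=docker
# Verify that the node is registered and Ready.
kubectl get nodes
# Show the cluster control-plane endpoints.
kubectl cluster-info
```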
Download cRPD Docker Image
- Before you import the cRPD software, ensure that Docker is installed on the Linux host and that the Docker Engine is running.
- Ensure that you register with Juniper Support before you download the cRPD software.
To download the Docker image:
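Once the image archive is downloaded from the Juniper support site, it can be imported into Docker; the filename below is a placeholder for the version you actually downloaded:

```shell
# Import the downloaded cRPD image archive into the local Docker store.
docker load -i junos-routing-crpd-docker-<version>.tgz
# Confirm that the cRPD image is now available locally.
docker images | grep crpd
```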
Creating a cRPD Pod using Deployment
A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of administration and networking. A Kubernetes Deployment checks on the health of your Pod and restarts the Pod’s Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
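As a sketch of what such a Deployment might look like (the Deployment name, labels, and image tag are illustrative assumptions, not values from this document):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crpd                 # illustrative Deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crpd
  template:
    metadata:
      labels:
        app: crpd
    spec:
      containers:
      - name: crpd
        image: crpd:latest   # replace with your imported cRPD image tag
```

Apply the manifest with kubectl apply -f crpd_deployment.yaml and verify it with kubectl get deployments.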
Creating a cRPD Pod using YAML
A Pod is the basic execution unit of a Kubernetes application–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources. Docker is the most common container runtime used in a Kubernetes Pod.
You can directly create a Pod or indirectly using a Controller in Kubernetes. A Controller can create and manage multiple Pods. Controllers use a Pod template that you provide to create the Pods. Pod templates are pod specifications which are included in other objects, such as Replication Controllers, Jobs, and DaemonSets.
To create the cRPD Pod using the YAML file:
Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as replication.
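A bare Pod manifest for cRPD might be sketched as follows (the Pod name, label, and image tag are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crpd-pod
  labels:
    app: crpd
spec:
  containers:
  - name: crpd
    image: crpd:latest   # replace with your imported cRPD image tag
```

kubectl apply -f crpd.yaml creates the Pod; kubectl get pods shows its status.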
See Also
Creating a cRPD Pod using Job Resource
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. When a specified number of successful completions is reached, the task is complete. You can also use a Job to run multiple Pods in parallel. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again. To create the cRPD Pod using the crpd_job.yaml file:
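The crpd_job.yaml file itself is not reproduced in this document; as a hypothetical sketch, such a Job manifest might contain:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: crpd-job
spec:
  completions: 1       # number of successful Pod completions required
  backoffLimit: 4      # retries before the Job is marked failed
  template:
    spec:
      containers:
      - name: crpd
        image: crpd:latest   # illustrative image tag
      restartPolicy: Never
```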
Creating a cRPD Pod using DaemonSet
DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
To create the cRPD Pod using the crpd_daemonset.yaml file:
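The crpd_daemonset.yaml file is not reproduced here; as a hypothetical sketch, a DaemonSet manifest for cRPD might look like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: crpd-daemonset
spec:
  selector:
    matchLabels:
      app: crpd
  template:
    metadata:
      labels:
        app: crpd
    spec:
      containers:
      - name: crpd
        image: crpd:latest   # illustrative image tag
```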
See Also
Scaling of cRPD
You can create multiple instances of cRPD based on demand, using the --replicas parameter for the kubectl run command. A Deployment is an object that can own and manage its ReplicaSets. At least one Pod must exist before you scale.
To scale up:
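Using the crpdref Deployment from the scale-down example, scaling up might look like this (the replica count of 4 is illustrative):

```shell
# Scale the crpdref Deployment up to 4 replicas.
kubectl scale deployments crpdref --replicas=4
# Watch the additional Pods get scheduled.
kubectl get pods -o wide
```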
To scale down:
- Run the following command to scale down the Deployment to 2 replicas:
root@kubernetes-master:~# kubectl scale deployments crpdref --replicas=2
deployment.apps/crpdref scaled
- Run the following command to list the deployments:
root@kubernetes-master:~# kubectl get deployments
- Run the following command to list the Pods. You can see that 2 Pods were terminated:
root@kubernetes-master:~# kubectl get pods -o wide
Rolling Update of cRPD Deployment
You can update Pod instances with new versions. Rolling updates let a Deployment update take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources. Rolling updates also let you promote an application from one environment to another, supporting continuous integration and continuous delivery of applications with zero downtime. In Kubernetes, updates are versioned, and any Deployment update can be reverted to a previous stable version.
To update cRPD deployment with new image and preserve the configuration after update:
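As a sketch (the Deployment name reuses the crpdref example; the container name and new image tag are assumptions), a rolling update and its rollback look like this:

```shell
# Replace the container image; "crpd" must match the container name
# in the Deployment spec (illustrative here).
kubectl set image deployments/crpdref crpd=crpd:new-version
# Follow the incremental replacement of Pods.
kubectl rollout status deployments/crpdref
# Revert to the previous stable revision if needed.
kubectl rollout undo deployments/crpdref
```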
cRPD Pod Deployment with Allocated Resources
Pods provide two kinds of shared resources for their containers: networking and storage. When containers in a Pod communicate with entities outside the Pod, they must coordinate how they use the shared network resources (such as ports). Within a Pod, containers communicate through localhost using an IP address and port. Containers within a Pod see the system hostname as the configured name of the Pod.
Any container in a Pod can enable privileged mode, using the privileged flag on the container spec. This is useful for containers that use operating system administrative capabilities such as manipulating the network stack or accessing hardware devices. Processes within a privileged container get almost the same privileges that are available to processes outside a container.
To view the Pod deployment with resources:
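Combining the privileged flag described above with CPU and memory requests and limits, a container spec might be sketched as follows (all names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crpd-resources
spec:
  containers:
  - name: crpd
    image: crpd:latest        # illustrative image tag
    securityContext:
      privileged: true        # grants OS administrative capabilities
    resources:
      requests:
        memory: "256Mi"
        cpu: "500m"
      limits:
        memory: "512Mi"
        cpu: "1"
```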
cRPD Pod Deployment using Mounted Volume
An emptyDir volume is one of the several types of volumes supported on K8s. It is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, the emptyDir volume is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.
To view cRPD Pod deployment by mounting the storage path on Kubernetes:
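An emptyDir mount in a Pod spec might be sketched as follows (the volume name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crpd-emptydir
spec:
  containers:
  - name: crpd
    image: crpd:latest            # illustrative image tag
    volumeMounts:
    - name: scratch
      mountPath: /var/tmp/scratch # mount path inside the container
  volumes:
  - name: scratch
    emptyDir: {}                  # deleted when the Pod leaves the node
```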