So far you’ve been reading about the master and node and the main processes running in each. Now it’s time to visualize how things work together, as shown in Figure 1.
At the top of Figure 1, you talk to the Kubernetes master via kubectl commands, and the master manages the two node boxes on the right. Kubectl interacts with the master process kube-apiserver via its REST API, which is exposed to the user and to other processes in the system.
Let's send some kubectl commands, something like kubectl create x, to spawn a new container. You can provide details about the container to be spawned along with its runtime behaviors, and those specifications can be given either as kubectl command-line parameters, or as options and values defined in a configuration file (an example appears shortly).
The workflow is as follows:
The kubectl client first translates your CLI command into one or more REST API calls and sends them to kube-apiserver.
After validating these REST API calls, kube-apiserver determines the task and asks the kube-scheduler process to select one node from the available ones on which to execute the job. This is the scheduling procedure.
Once kube-scheduler returns the target node, kube-apiserver dispatches the task with all of the details describing it.
The kubelet process on the target node receives the task and talks to the container engine (for example, the Docker engine in Figure 1) to spawn a container with all of the provided parameters.
The job and its specification are recorded in etcd, a centralized database whose role is to preserve and provide access to all data in the cluster.
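As a sketch of the configuration-file option mentioned above, here is a minimal Pod manifest; the pod name and image are illustrative placeholders:

```yaml
# Minimal Pod manifest (name and image are illustrative).
# kubectl translates this file into a POST request against
# kube-apiserver's REST API (e.g. /api/v1/namespaces/default/pods).
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx
```

Submitting it with something like kubectl create -f pod.yaml kicks off the exact workflow described in the steps above.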
In fact, a master can also be a fully featured node and carry pod workloads just as a node does. Therefore, the kubelet and kube-proxy components that exist on a node can also exist on the master.
In Figure 1, we
didn't include these components in the master, in order to provide
a simplified conceptual separation of master and node. In your setup
you can use the command
kubectl get pods --all-namespaces
-o wide to list all pods along with their locations. Pods spawned
on the master usually run as part of the Kubernetes system
itself, typically within the kube-system namespace. Kubernetes
namespaces are discussed in Chapter 3.
Of course, this is a simplified workflow, but you should get the basic idea. In fact, with the power of Kubernetes, you rarely need to work directly with containers. You work with higher-level objects that hide most of the low-level operational details.
For example, in Figure 1, when you give the task of spawning containers, instead of saying "create two containers and make sure to spawn new ones if either one fails," in practice you just say "create an RC object (replication controller) with a replica count of two."
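The RC just described can be expressed as a configuration file. The sketch below is a minimal, illustrative example; the name, labels, and image are placeholders:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc            # illustrative name
spec:
  replicas: 2             # desired number of pod replicas
  selector:
    app: web              # pods matching this label are managed by the RC
  template:               # pod template used to spawn replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx      # illustrative image
```

Creating this object (for example, with kubectl create -f rc.yaml) asks Kubernetes to keep two such pods running at all times, rather than asking you to manage each container yourself.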
Once the two Docker containers are up and running, kube-apiserver interacts with kube-controller-manager to keep monitoring the job status and take any actions necessary to ensure the running state matches the defined state. For example, if either of the Docker containers goes down, a new container is automatically spawned and the broken one is removed.
The RC in this example is one of the objects provided by the Kubernetes kube-controller-manager process. Kubernetes objects provide an extra layer of abstraction that gets the same (and usually more) work done under the hood, in a simpler and cleaner way. And because you are working at a higher level and staying away from the low-level details, Kubernetes objects sharply reduce your overall deployment time, mental effort, and troubleshooting pains. Let's take a closer look.