
Service

When a pod is instantiated, terminated, or moved from one node to another, and in so doing changes its IP address, how do we keep track of it so that the pod’s functionality remains uninterrupted? And even if a pod isn’t moving, how does traffic reach a group of pods through a single entity?

The answer to both questions is the Kubernetes service.

A service is an abstraction that defines a logical set of pods and a policy by which you can access them. Think of a service as your waiter in a big restaurant: the waiter isn’t doing the cooking; instead, he is an abstraction of everything happening in the kitchen, and you only have to deal with this single waiter.

A service is a Layer 4 load balancer that exposes pod functionality via a specific IP and port. The service and its pods are linked via labels, just as an rs (ReplicaSet) is linked to its pods. There are three different types of services:

  • ClusterIP

  • NodePort

  • LoadBalancer

ClusterIP Service

The clusterIP service is the simplest service, and the default type if the ServiceType is not specified. Figure 1 illustrates how the clusterIP service works.

Figure 1: ClusterIP Service

You can see that the ClusterIP service is exposed on a clusterIP and a service port. When client pods need to access the service, they send requests toward this clusterIP and service port. This model works great if all requests come from inside the same cluster, but the nature of the clusterIP limits the scope of the service to within the cluster: by default, the clusterIP is not externally reachable.

Create ClusterIP Service

The YAML file looks pretty simple and self-explanatory. It defines a service/service-web-clusterip with the service port 8888, mapping to a targetPort, which here means container port 80 in a pod. The selector indicates that any pod with the label app: webserver will be a backend pod responding to service requests.
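
A sketch of what such a manifest might look like, reconstructed from the fields just described (the exact file from the original example isn’t reproduced here):

    apiVersion: v1
    kind: Service
    metadata:
      name: service-web-clusterip
    spec:
      selector:
        app: webserver          # any pod carrying this label becomes a backend
      ports:
      - port: 8888              # service port
        targetPort: 80          # container port in the backend pod
    # type is omitted, so it defaults to ClusterIP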

Okay, now generate the service:
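
For example, assuming the manifest above is saved as service-web-clusterip.yaml (a hypothetical filename):

    kubectl apply -f service-web-clusterip.yaml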

Use kubectl commands to quickly verify the service and backend pod objects:
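
One quick way to do this (the exact commands and output from the original aren’t reproduced here):

    kubectl get svc -o wide
    kubectl get pod -o wide -l app=webserver
    kubectl get endpoints service-web-clusterip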

The service is created successfully, but there are no pods for the service. This is because there is no pod with a label matching the selector in the service. So you just need to create a pod with the proper label.

Now, you can define a pod directly, but given the benefits of an rc (ReplicationController) and a Deployment over bare pods, as discussed earlier, using an rc or a Deployment is more practical (you’ll soon see why).

As an example, let’s define a Deployment object named webserver:
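
Here is a sketch of such a Deployment; the container image is a placeholder (the book later switches to a customized web server image):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webserver
    spec:
      replicas: 1               # launch only one pod for now
      selector:
        matchLabels:
          app: webserver
      template:
        metadata:
          labels:
            app: webserver      # matches the selector in service-web-clusterip
        spec:
          containers:
          - name: webserver
            image: nginx        # placeholder image running a web server on port 80
            ports:
            - containerPort: 80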

The Deployment webserver has a label app: webserver, matching the selector defined in our service. The replicas: 1 instructs the controller to launch only one pod at the moment. Let’s see:
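
Assuming the Deployment manifest is saved as webserver-deploy.yaml (a hypothetical filename), the check could look like this:

    kubectl apply -f webserver-deploy.yaml
    kubectl get pod -o wide -l app=webserver
    kubectl get endpoints service-web-clusterip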

And immediately the pod is chosen to be the backend.

Here are some other brief observations about the previous kubectl get svc command output:

  • The service got a clusterIP, or service IP, of 10.101.150.135 allocated from the service IP pool.

  • The service port is 8888, as defined in the YAML file.

  • By default, the protocol type is TCP if not declared in the YAML file. You can use protocol: UDP to declare a UDP service.

  • The backend pod can be located with the label selector.

Tip

The example shown here uses an equality-based selector (-l) to locate the backend pod, but you can also use set-based syntax to achieve the same effect. For example: kubectl get pod -o wide -l 'app in (webserver)'.

Verify ClusterIP Service

To verify that the service actually works, let’s start another pod as a client and have it initiate an HTTP request toward the service. For this test, we’ll launch and log in to a client pod and use the curl command to send the request. You’ll see the same pod being used as a client to send requests throughout this book:

Create the client pod:
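
A minimal way to do this, assuming the same placeholder image as in the Deployment sketch above (the book’s examples use their own image):

    kubectl run client --image=nginx
    kubectl get pod client -o wide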

Tip

The client pod is just another spawned pod, based on the exact same image as the one used by the webserver Deployment and its pods. This is the same as with physical servers and VMs: nothing stops a server from doing a client’s job:
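
The request itself might look like this, using the clusterIP 10.101.150.135 and service port 8888 from the earlier kubectl get svc output (this assumes curl is available in the client pod’s image):

    kubectl exec -it client -- curl http://10.101.150.135:8888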

The HTTP request toward the service reaches a backend pod running the web server application, which responds with an HTML page.

To better demonstrate which pod is providing the service, let’s set up a customized pod image that runs a simple web server. The web server is configured in such a way that, when it receives a request, it returns a simple HTML page with the local pod IP and hostname embedded. This way the curl output is more meaningful in our test.
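
For illustration only (the exact HTML emitted by the custom image isn’t reproduced in this excerpt), the exchange looks roughly like this:

    kubectl exec -it client -- curl -s http://10.101.150.135:8888
    # the response now embeds the serving pod's IP and hostname,
    # e.g. the pod IP 10.47.255.238 and pod name webserver-7c7c458cc5-vl6zs cited below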

The returned HTML is readable as-is, but there is a way to make it even easier to read:
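
One way to do that, assuming w3m is installed on the host where you run kubectl (as explained below), is to pipe the HTML through w3m’s text renderer:

    kubectl exec client -- curl -s http://10.101.150.135:8888 | w3m -T text/html -dump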

The w3m tool is a lightweight console-based web browser installed on the host. With w3m you can render an HTML webpage as plain text, which is more readable than the raw HTML.

Now the service is verified: requests to the service have been redirected to the correct backend pod, with a pod IP of 10.47.255.238 and a pod name of webserver-7c7c458cc5-vl6zs.

Specify a ClusterIP

If you want a specific clusterIP, you can specify it in the spec. The IP address must be within the service IP pool.

Here’s some sample YAML with specific clusterIP:
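
A sketch, reusing the earlier service definition (the address shown is only an example and must fall inside your cluster’s service CIDR):

    apiVersion: v1
    kind: Service
    metadata:
      name: service-web-clusterip
    spec:
      clusterIP: 10.101.150.135   # example address from the service IP pool
      selector:
        app: webserver
      ports:
      - port: 8888
        targetPort: 80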

NodePort Service

The second general type of service, NodePort, exposes a service on each node’s IP at a static port. It maps the static port on each node to a port of the application in the pod, as shown in Figure 2.

Figure 2: NodePort Service
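
The NodePort manifest itself isn’t reproduced in this excerpt; a sketch consistent with the fields discussed below (reusing service port 8888 and targetPort 80 from the earlier example, and the nodePort 32001 referenced later in this section) might be:

    apiVersion: v1
    kind: Service
    metadata:
      name: service-web-nodeport  # hypothetical name for this example
    spec:
      type: NodePort
      selector:
        app: webserver            # selects the same backend pods as before
      ports:
      - port: 8888                # service port
        targetPort: 80            # container port of the web server
        nodePort: 32001           # static port opened on every node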

Here are some highlights of this service’s YAML file:

  • selector: The label selector that determines which set of pods is targeted by this service; here, any pod with the label app: webserver will be selected by this service as the backend.

  • Port: This is the service port.

  • TargetPort: The actual port used by the application in the container. Here, it’s port 80, as we are planning to run a web server.

  • NodePort: The port on the host of each node in the cluster.

Let’s create the service:
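
Assuming the manifest above is saved as service-web-nodeport.yaml (a hypothetical filename):

    kubectl apply -f service-web-nodeport.yaml
    kubectl get svc service-web-nodeport -o wide
    kubectl describe svc service-web-nodeport   # shows Type, NodePort, and Endpoints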

  • Type: The default service type is ClusterIP. In this example, we set the type to NodePort.

  • NodePort: If it is not specified in the spec, Kubernetes allocates a node port from the 30000-32767 range by default. This range can be changed using the --service-node-port-range flag. The nodePort value can also be set explicitly, but make sure it’s within the configured range.

  • Endpoints: The podIP and the exposed container port. Requests toward the service IP and service port will be directed here; 10.47.255.252:80 indicates that we have created a pod with a label matching the service, so its IP is selected as one of the backends.

Note

For this test, make sure there is at least one pod with the label app: webserver running. The pods in previous sections were all created with this label, so recreating them suffices if you’ve removed them already.

Now we can test this by using the curl command to trigger an HTTP request toward any node IP address:
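
The node addresses can be listed first (the exact output isn’t reproduced here):

    kubectl get node -o wide    # the INTERNAL-IP column shows each node's IP address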

With the power of the NodePort service, you can access the web server running in the pod from any node via the nodePort 32001:
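
For instance, with the node IPs found above (placeholders here; substitute your own nodes’ addresses):

    curl http://<node1-ip>:32001
    curl http://<node2-ip>:32001   # the same service answers via any node in the cluster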

Load Balancer Service

The third service, the load balancer service, goes one step beyond the NodePort service by exposing the service externally using a cloud provider’s load balancer. The load balancer service by its nature automatically includes all the features and functions of NodePort and ClusterIP services.

Kubernetes clusters running on cloud providers support automatic provisioning of a load balancer. The only difference between the three service types is the type value. To reuse the same NodePort service YAML file and create a load balancer service, just set the type to LoadBalancer:
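
A sketch, changing only the type relative to the NodePort example (the commented loadBalancerIP line is optional and honored only where the cloud provider supports it):

    apiVersion: v1
    kind: Service
    metadata:
      name: service-web-lb        # hypothetical name for this example
    spec:
      type: LoadBalancer
      # loadBalancerIP: 203.0.113.10   # optional, provider-dependent (example address)
      selector:
        app: webserver
      ports:
      - port: 8888
        targetPort: 80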

The cloud provider will see this keyword and create a load balancer. Meanwhile, an external public loadBalancerIP is allocated to serve as the frontend virtual IP. Traffic coming to this loadBalancerIP will be redirected to the service’s backend pods. Keep in mind that this redirection is purely a transport layer operation: the loadBalancerIP and port are translated to the private backend clusterIP and its targetPort. It does not involve any application layer activity; there is no URL parsing or HTTP request proxying, as there would be in an HTTP proxying process. Because the loadBalancerIP is publicly reachable, any Internet host that has access to it (and the service port) can access the service provided by the Kubernetes cluster.

From an Internet host’s perspective, when it requests the service, it refers to this public external loadBalancerIP plus the service port, and the request will reach a backend pod. The loadBalancerIP acts as a gateway between the service inside the cluster and the outside world.

Some cloud providers allow you to specify the loadBalancerIP. In those cases, the load balancer is created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, the load balancer is set up with an ephemeral IP address. If you specify a loadBalancerIP but your cloud provider does not support the feature, the loadBalancerIP field that you set is ignored.

How a load balancer is implemented in the load balancer service is vendor-specific. A GCE load balancer may work in a totally different way from an AWS load balancer. There is a detailed demonstration of how the load balancer service works in a Contrail Kubernetes environment in Chapter 4.

External IPs

Exposing a service outside of the cluster can also be achieved via the externalIPs option. Here’s an example:
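
A sketch (the external address is only an example; it must be routed to a cluster node and is managed outside Kubernetes):

    apiVersion: v1
    kind: Service
    metadata:
      name: service-web-external  # hypothetical name for this example
    spec:
      externalIPs:
      - 198.51.100.20             # example address managed by the cluster administrator
      selector:
        app: webserver
      ports:
      - port: 8888
        targetPort: 80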

In the Service spec, externalIPs can be specified along with any of the service types. External IPs are not managed by Kubernetes and are the responsibility of the cluster administrator.

Note

External IPs are different from the loadBalancerIP: external IPs are assigned and managed by the cluster administrator, while the loadBalancerIP comes with the load balancer created by a cloud provider that supports it.

Service Implementation: Kube-proxy

By default, Kubernetes uses the kube-proxy module for services, but CNI providers can have their own implementations for services.

Kube-proxy can be deployed in one of three modes:

  • user-space proxy-mode

  • iptables proxy-mode

  • ipvs proxy-mode

When traffic hits the node, it’s forwarded to one of the backend pods via a deployed kube-proxy forwarding plane. Detailed explanations and comparisons of these three modes are not covered in this book, but you can check the official Kubernetes website for more information. Chapter 4 illustrates how Juniper Contrail, as a Container Network Interface (CNI) provider, implements the service.
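
If you are curious which mode your kube-proxy is using, one common way to check (assuming a kubeadm-style cluster where kube-proxy reads its configuration from a ConfigMap in kube-system) is:

    kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
    # an empty mode value means kube-proxy falls back to its default (iptables on Linux)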