Contrail Service

Chapter 3 introduced Kubernetes' default implementation of service through kube-proxy, and mentioned that CNI providers can supply their own implementations. In Contrail, the nodePort service is still implemented by kube-proxy, while the clusterIP and loadbalancer services are implemented by Contrail's loadbalancer (LB).
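To ground the discussion, here is a minimal sketch that creates a clusterIP service with the official Kubernetes Python client. The service name, label selector, and ports (webservice, app: webserver, 8888/80) are placeholders, not values prescribed by Contrail:

from kubernetes import client, config

# Load credentials from the local kubeconfig.
config.load_kube_config()
core = client.CoreV1Api()

# A clusterIP service: under Contrail this is realized by Contrail's LB,
# while a nodePort service would still be handled by kube-proxy.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="webservice"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",
        selector={"app": "webserver"},
        ports=[client.V1ServicePort(port=8888, target_port=80)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)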

Before diving into the details of Kubernetes services in Contrail, let's review Contrail's legacy OpenStack-based load balancer concept.

Tip

For brevity, loadbalancer is sometimes referred to as LB.

Contrail OpenStack Load Balancer

The Contrail load balancer is a foundational feature that has been supported since Contrail's first release. It enables the creation of a pool of VMs serving an application, sharing one virtual IP (VIP) as the front-end IP towards clients. Figure 1 illustrates the Contrail load balancer and its components.

Figure 1: Contrail OpenStack Load Balancer

Some highlights of Figure 1 are listed below, followed by a sketch of how these objects are created through the API:

  • The LB is created with an internal VIP 30.1.1.1. An LB listener is also created for each listening port.

  • Together, all back-end VMs compose a pool on subnet 30.1.1.0/24, the same subnet as the LB's internal VIP.

  • Each back-end VM in the pool, also called a member, is allocated an IP from the pool subnet 30.1.1.0/24.

  • To expose the LB to the external world, it is allocated a second VIP, the external VIP 20.1.1.1.

  • A client sees only the one external VIP 20.1.1.1, representing the whole service.

  • When the LB sees a request coming from the client, it does TCP connection proxying: it establishes the TCP connection with the client, extracts the client's HTTP/HTTPS requests, creates a new TCP connection towards one of the back-end VMs in the pool, and sends the request over that new connection. (A minimal sketch of this proxying behavior follows the tip below.)

  • When the LB gets the response from the VM, it forwards the response to the client.

  • And when the client closes the connection to the LB, the LB may also close its connection with the back-end VM.
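As a rough illustration of how these objects fit together, the following sketch creates the loadbalancer, listener, pool, and member objects through openstacksdk's load-balancer API. Contrail itself is typically driven through the Neutron LBaaS plugin rather than this exact API, and the cloud name, subnet ID, and addresses below are placeholders:

import openstack

# "mycloud" and the subnet UUID are placeholders.
conn = openstack.connect(cloud="mycloud")
INTERNAL_SUBNET_ID = "<uuid-of-30.1.1.0/24-subnet>"

# The LB with its internal VIP 30.1.1.1.
lb = conn.load_balancer.create_load_balancer(
    name="web-lb", vip_subnet_id=INTERNAL_SUBNET_ID, vip_address="30.1.1.1")

# One listener per listening port.
listener = conn.load_balancer.create_listener(
    name="web-listener", protocol="HTTP", protocol_port=80,
    load_balancer_id=lb.id)

# The pool that groups the back-end members.
pool = conn.load_balancer.create_pool(
    name="web-pool", protocol="HTTP", lb_algorithm="ROUND_ROBIN",
    listener_id=listener.id)

# Each back-end VM joins the pool as a member with an IP from 30.1.1.0/24.
for address in ("30.1.1.3", "30.1.1.4"):
    conn.load_balancer.create_member(
        pool, address=address, protocol_port=80,
        subnet_id=INTERNAL_SUBNET_ID)

Exposing the LB through the external VIP 20.1.1.1 would be an additional step and is omitted here.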

Tip

When the client closes its connection to the LB, the LB may or may not close its connection to the back-end VM. Depending on performance or other considerations, it may wait for a timeout before tearing down the session.
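To make the proxying behavior concrete, here is a deliberately minimal TCP proxy sketch in Python. It only shuttles bytes between the two connections; a real haproxy-based LB additionally parses HTTP/HTTPS, balances per request, and applies the timeouts mentioned above. All addresses and ports are placeholders:

import socket
import threading

BACKENDS = ["30.1.1.3", "30.1.1.4"]   # placeholder pool members
LISTEN = ("0.0.0.0", 8080)            # placeholder listener address

def pipe(src, dst):
    # Copy bytes one way until the peer closes or either socket errors.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client_sock, backend_addr):
    # A separate TCP connection is opened towards the chosen pool member.
    backend = socket.create_connection(backend_addr)
    threading.Thread(target=pipe, args=(client_sock, backend), daemon=True).start()
    pipe(backend, client_sock)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN)
server.listen()

count = 0
while True:
    conn, _ = server.accept()
    member = (BACKENDS[count % len(BACKENDS)], 80)  # naive round-robin
    count += 1
    threading.Thread(target=handle, args=(conn, member), daemon=True).start()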

You can see that this load balancer model is very similar to the Kubernetes service concept:

  • The VIP corresponds to the service IP.

  • The back-end VMs become the back-end pods.

  • The members are added by Kubernetes instead of OpenStack, as the sketch after this list shows.
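Assuming the placeholder webservice service from the earlier sketch, the mapping can be seen directly through the Kubernetes API: the service's clusterIP plays the VIP role, and the endpoints list holds the members, which Kubernetes maintains as pods come and go:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The service IP plays the role of the LB's VIP.
svc = core.read_namespaced_service(name="webservice", namespace="default")
print("service IP (VIP):", svc.spec.cluster_ip)

# The endpoints are the pool members, maintained by Kubernetes itself.
eps = core.read_namespaced_endpoints(name="webservice", namespace="default")
for subset in eps.subsets or []:
    for addr in subset.addresses or []:
        print("member (pod IP):", addr.ip)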

In fact, Contrail reuses a good part of this model in its Kubernetes service implementation. To support service load balancing, Contrail extends the load balancer with a new driver. With this driver, a service is implemented as an equal-cost multipath (ECMP) load balancer working at Layer 4 (the transport layer). This is the primary difference from the proxy mode used by the OpenStack load balancer type.
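The practical difference is that an ECMP load balancer never terminates the client's TCP connection: the vRouter hashes each flow and forwards its packets unchanged to one backend. The sketch below illustrates the idea with a simple 5-tuple hash; Contrail's actual hashing is implementation-specific, and the addresses are placeholders:

import hashlib

def ecmp_pick(src_ip, src_port, dst_ip, dst_port, proto, backends):
    # Hash the flow 5-tuple so every packet of the same flow picks the
    # same backend; no per-connection proxy state is needed.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

# Placeholder pod IPs standing in for service backends.
backends = ["10.47.255.1", "10.47.255.2"]
print(ecmp_pick("20.1.1.100", 51514, "30.1.1.1", 80, "tcp", backends))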

  • Actually, any load balancer can be integrated with Contrail via the Contrail component contrail-svc-monitor.

  • Each load balancer has a load balancer driver that is registered to Contrail with a loadbalancer_provider type.

  • The contrail-svc-monitor listens to Contrail loadbalancer, listener, pool, and member objects, and calls the registered load balancer driver to do the necessary work based on the loadbalancer_provider type. (A sketch of this dispatch pattern follows this list.)

  • By default, Contrail provides an ECMP load balancer (loadbalancer_provider is native) and a haproxy load balancer (loadbalancer_provider is opencontrail).

  • The OpenStack load balancer uses the haproxy load balancer.

  • Ingress, on the other hand, is conceptually even closer to the OpenStack load balancer in the sense that both are Layer 7 (application layer) proxy-based. Ingress is discussed in more detail in later sections.
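As a rough sketch of the driver pattern described in this list, the following shows how a svc-monitor style component might dispatch to a registered driver based on the loadbalancer_provider type. All class and method names here are hypothetical, not Contrail's actual internal API:

from abc import ABC, abstractmethod

class LoadbalancerDriver(ABC):
    # What a svc-monitor style component expects from a registered driver.
    @abstractmethod
    def create_loadbalancer(self, lb): ...

    @abstractmethod
    def delete_loadbalancer(self, lb): ...

class EcmpDriver(LoadbalancerDriver):      # loadbalancer_provider: native
    def create_loadbalancer(self, lb):
        print(f"programming ECMP next hops for VIP {lb['vip']}")

    def delete_loadbalancer(self, lb):
        print(f"removing ECMP next hops for VIP {lb['vip']}")

class HaproxyDriver(LoadbalancerDriver):   # loadbalancer_provider: opencontrail
    def create_loadbalancer(self, lb):
        print(f"spawning a haproxy instance for VIP {lb['vip']}")

    def delete_loadbalancer(self, lb):
        print(f"stopping the haproxy instance for VIP {lb['vip']}")

# The monitor dispatches on the loadbalancer_provider type.
DRIVERS = {"native": EcmpDriver(), "opencontrail": HaproxyDriver()}

def on_loadbalancer_create(lb):
    DRIVERS[lb["provider"]].create_loadbalancer(lb)

on_loadbalancer_create({"provider": "native", "vip": "30.1.1.1"})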