Contrail Floating IP


So far we have discussed and tested communication between pods in the same or different namespaces, but always within a single cluster. What about communication with devices outside of the cluster?

You may already know that in the traditional (OpenStack) Contrail environment, there are many ways for the overlay entities (typically a VM) to access the Internet. The three most frequent methods are:

  • Floating IP

  • Fabric SNAT

  • Logical router

The preferred Kubernetes solution is to expose applications via service and ingress objects, which you’ve read about in Chapter 3. In the Contrail Kubernetes environment, floating IP is used in the service and ingress implementations to expose them outside of the cluster. Later this chapter discusses each of these two objects. But first, let’s review the floating IP basics and look at how it works with Kubernetes.


Fabric SNAT and the logical router are used by overlay workloads (VMs and pods) to reach the Internet, but initiating communication from the reverse direction is not possible. Floating IP, however, supports traffic initiated from both directions – you can configure it to support ingress traffic, egress traffic, or both; the default is bi-directional. This book focuses only on floating IP. Refer to the Contrail documentation for detailed information about fabric SNAT and the logical router: documentation/en_US/contrail5.0/information-products/pathway-pages/contrail-feature-guide-pwp.html.

Floating IP and Floating IP Pool

The floating IP, or FIP for short, is a traditional concept that Contrail has supported since its very early releases. Essentially, it’s an OpenStack concept: map a VM IP, which is typically a private IP address, to a public IP (the floating IP in this context) that is reachable from outside of the cluster. Internally, the one-to-one mapping is implemented by NAT. Whenever a vRouter receives a packet from outside of the cluster destined to the floating IP, it translates the destination to the VM’s private IP and forwards the packet to the VM. Similarly, it performs the translation in the reverse direction. As a result, the VM and an Internet host can talk to each other, and either side can initiate the communication.


The vRouter is the Contrail forwarding plane that resides in each compute node and handles workload traffic.

Figure 1 illustrates the basic workflow of floating IP.

Figure 1: Floating IP Workflow

Here are some highlights regarding floating IP to keep in mind:

  • A floating IP is associated with a VM’s port, or a VMI (Virtual Machine Interface).

  • A floating IP is allocated from a FIP pool.

  • A floating IP pool is created based on a virtual network (FIP-VN).

  • The FIP-VN is made reachable from outside of the cluster by setting route-target (RT) attributes that match those of the gateway router’s VRF table.

  • When a gateway router sees a route whose RT matches its VRF import policy, it loads the route into that VRF table. All remote clients connected to the VRF will then be able to communicate with the floating IP.
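On a Junos gateway router, the RT-matching step above typically corresponds to a VRF configured along these lines. This is a sketch for illustration only: the VRF name, interface, and RT value are assumptions and must match whatever RT you set on the FIP-VN.

```
routing-instances {
    contrail-fip {                       /* VRF facing the cluster; name is arbitrary */
        instance-type vrf;
        interface ge-0/0/1.0;            /* interface toward the remote clients */
        vrf-target target:64512:10000;   /* must match the RT set on the FIP-VN */
        vrf-table-label;
    }
}
```

Any route advertised by Contrail with a matching RT of target:64512:10000 is then imported into contrail-fip.inet.0 on the gateway.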

There is nothing new in the Contrail Kubernetes environment regarding the floating IP concept and role. But the use of floating IP has been extended in Kubernetes service and ingress object implementation, and it plays an important role for accessing Kubernetes service and ingress externally. You can check later sections in this chapter for more details.

Create FIP Pool

Let’s create a floating IP pool in a three-step process:

  1. Create a public floating IP-VN.

  2. Set RT (route-target) for the virtual network so it can be advertised and imported into the gateway router’s VRF table.

  3. Create a floating IP pool based on the public floating IP-virtual network.

Again, there is nothing new here. The same steps would be required in other Contrail environments without Kubernetes. However, as you’ve learned in previous sections, with Contrail Kubernetes integration a floating IP-virtual network can now be created in Kubernetes style.

Create a Public Floating IP-Virtual Network Named vn-ns-default
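Since the original YAML is not shown here, the following is a minimal sketch of creating the FIP-VN "in Kubernetes style" via a NetworkAttachmentDefinition. The annotation key, subnet, and namespace are assumptions modeled on Contrail CNI conventions, not copied from the original example:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vn-ns-default
  namespace: ns-user-1                 # assumed namespace
  annotations:
    # Assumed Contrail annotation: the public subnet for the FIP-VN
    "opencontrail.org/cidr": ""
spec:
  config: '{ "cniVersion": "0.3.1", "type": "contrail-k8s-cni" }'
```

Applying this manifest with kubectl should cause contrail-kube-manager to create the corresponding virtual network in Contrail.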

Now set the routing target.

If you need the floating IP to be reachable from the Internet through the gateway router, you’ll need to set a route target for the virtual network prefix getting imported in the gateway router’s VRF table (see Figure 2). This step is necessary whenever Internet access is required.

The UI navigation path to set the RT is: Contrail Command > Main Menu > Overlay > Virtual Networks > k8s-vn-ns-default-pod-network > Edit > Routing, Bridging and Policies.

Figure 2: Contrail Command: Setting RT

Now let’s create a floating IP pool based on the public virtual network.

This is the final step. From the Contrail Command UI, create a floating IP pool based on the public virtual network. The UI navigation path for this setting shown in Figure 3 is: Contrail Command > Main Menu > Overlay > Floating IP > Create.


The Contrail UI also allows you to set the external flag in the virtual network’s advanced options, so that a floating IP pool named public is automatically created.

Figure 3: Contrail Command: Create a Floating IP Pool

Floating IP Pool Scope

There are different ways you can refer to a floating IP pool in the Contrail Kubernetes environment, and correspondingly the scope of the pools will also be different. The three possible levels with descending priority are:

  • Object specific

  • Namespace level

  • Global level

Object Specific

This is the most specific level of scope. An object-specific floating IP pool binds only to the object you specify; it does not affect any other objects in the same namespace or cluster. For example, you can specify that a service object web gets its floating IP from pool pool1, a service object dns from another pool pool2, and so on. This gives the most granular control over which pool an object’s floating IP is allocated from – the cost is that you need to explicitly specify the pool in the YAML file of every object.

Namespace Level

In a multi-tenancy environment, each namespace would be associated with a tenant, and each tenant would have a dedicated floating IP pool. In that case, it is better to have an option to define a floating IP pool at the NS level, so that all objects created in that namespace get their floating IP assignments from that pool. With the namespace-level pool defined (for example, pool-ns-default), there is no need to specify the floating IP pool name in each object’s YAML file any more. You can still give a different pool name, say my-webservice-pool, in an object webservice. In that case, object webservice will get its floating IP from my-webservice-pool instead of from the namespace-level pool pool-ns-default, because the former is more specific.

Global Level

The scope of the global level pool would be the whole cluster. Objects in any namespaces can use the global floating IP pool.

You can combine all three methods to take advantage of their combined flexibility. Here’s a practical example:

  • Define a global pool pool-global-default, so any object in a namespace that has no namespace-level or object-level pool defined will get a floating IP from this pool.

  • For ns dev, define a floating IP pool pool-dev, so all objects created in ns dev will by default get floating IP from pool-dev.

  • For ns sales, define a floating IP pool pool-sales, so all objects created in ns sales will by default get floating IP from pool-sales.

  • For ns test-only, do not define any namespace-level pool, so by default objects created in it will get floating IP from the pool-global-default.

  • When a service dev-webservice in ns dev needs a floating IP from pool-sales instead of pool-dev, specifying pool-sales in dev-webservice object YAML file will achieve this goal.


Just keep in mind the rule of thumb: the most specific scope always prevails.

Object Floating IP Pool

Let’s first take a look at the object-specific floating IP pool:
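The example manifest is not reproduced here, but it likely resembles the following sketch. The opencontrail.org/fip-pool annotation format is an assumption modeled on Contrail’s fully-qualified pool naming (domain > project > network > name); the selector and port are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-web-lb-pool-public-1
  namespace: ns-user-1
  annotations:
    # Assumed annotation: fully-qualified reference to the object-level FIP pool
    "opencontrail.org/fip-pool": "{'domain': 'default-domain', 'project': 'k8s-ns-user-1', 'network': 'vn-public-1', 'name': 'pool-public-1'}"
spec:
  type: LoadBalancer      # external exposure triggers floating IP allocation
  selector:
    app: web
  ports:
  - port: 80
    protocol: TCP
```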

In this example, service service-web-lb-pool-public-1 gets a floating IP from pool pool-public-1, which is created based on virtual network vn-public-1 under the current project k8s-ns-user-1. The corresponding Kubernetes namespace is ns-user-1. Since an object-level floating IP pool applies to that specific object only, with this method each new object needs to be explicitly assigned a floating IP pool.

NS Floating IP Pool

The next floating IP pool scope is at the namespace level. Each namespace can define its own floating IP pool. In the same way that the Kubernetes annotations object is used to give a subnet to a virtual network, it is also used to specify a floating IP pool. The YAML file looks like this:
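A sketch of such a namespace manifest follows; the annotation key and fully-qualified pool reference are assumptions consistent with the object-level example, not copied from the original:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-user-1
  annotations:
    # Assumed annotation: namespace-level default FIP pool for all objects in this NS
    "opencontrail.org/fip-pool": "{'domain': 'default-domain', 'project': 'k8s-ns-user-1', 'network': 'vn-ns-default', 'name': 'pool-ns-default'}"
```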

Here ns-user-1 is given a namespace-level floating IP pool named pool-ns-default, and the corresponding virtual network is vn-ns-default. Once ns-user-1 is created with this YAML file, any new service that requires a floating IP, if not created with an object-specific pool name in its YAML file, will get a floating IP allocated from this pool. In practice, most namespaces (especially isolated namespaces) need their own namespace default pool, so you will see this type of configuration very often in the field.

Global Floating IP Pool

To specify a global-level floating IP pool, you need to give the fully-qualified pool name (domain > project > network > name) in the contrail-kube-manager (KM) container’s configuration file (/etc/contrail/contrail-kubernetes.conf). This file is automatically generated by the container at bootup based on its ENV parameters, which can be found in the /etc/contrail/common_kubemanager.env file on the master node:

As you can see, this .env file contains important environmental parameters about the setup. To specify a global FIP pool, add the following line:

KUBERNETES_PUBLIC_FIP_POOL={'domain': 'default-domain','name': 'pool-global-default','network': 'vn-global-default','project': 'k8s-ns-user-1'}

It reads: the global floating IP pool is called pool-global-default and it is defined based on a virtual network vn-global-default under project k8s-ns-user-1. This indicates that the corresponding Kubernetes namespace is ns-user-1.

Now with that piece of configuration in place, you can re-compose the contrail-kube-manager container to make the change take effect. Essentially you need to tear it down and then bring it back up:
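The re-compose can be sketched as follows; the compose file directory is an assumption (it varies by release), so adjust the path for your deployment:

```
# Tear down and re-create the contrail-kube-manager container
# (compose file path is an assumption; adjust for your deployment)
cd /etc/contrail/kubemanager
docker-compose down
docker-compose up -d
```

After the container restarts, it regenerates /etc/contrail/contrail-kubernetes.conf from the updated .env file, and the global pool takes effect.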

Now the global floating IP pool is specified for the cluster.


In all three scopes, a floating IP is automatically allocated and associated only to service and ingress objects. If a floating IP has to be associated with a pod, it has to be done manually. We’ll talk about this in the next section.

Floating IP for Pods

Once a floating IP pool is created and available, a floating IP can be allocated from it for the pods that require one. This is done by associating a floating IP with a VMI (VM, or pod, interface). You can manually create a floating IP out of a floating IP pool in the Contrail UI, and then associate it with a pod VMI, as shown in Figure 4 and Figure 5.

Figure 4: Create Floating IP
Figure 5: Associate a Floating IP in a Pod Interface

Make sure the floating IP pool is shared with the project where the floating IP is going to be created.

Advertising Floating IP

Once a floating IP is associated with a pod interface, it is advertised to the MP-BGP peers, which are typically gateway routers. Figures 6, 7, and 8 show how to add and edit a BGP peer.

Figure 6: Contrail Command: Select Main-Menu > INFRASTRUCTURE: Cluster > Advanced Options
Figure 7: Contrail Command: Select BGP Router > Create
Figure 8: Edit BGP Peer Parameters

Input all the BGP peer information and don’t forget to associate the controller(s), as shown in Figure 9.

Figure 9: Associate the Peer to a Controller

From the peer dropdown under Associated Peers, select the controller(s) to peer with the new BGP router you are adding. Click Save when done. A new BGP peer with ROUTER TYPE router will appear, as shown in Figure 10.

Figure 10: A New BGP Router in the BGP Router List

Now we’ve added a peer BGP router of type router. For the local BGP speaker, which is of type control-node, you just need to double-check the parameters by clicking the Edit button. In this test we want to build an MP-IBGP neighborship between the Contrail controller and the gateway router, so make sure the ASN and Address Families fields match on both ends; refer to Figure 11.

Figure 11: Contrail Controller BGP Parameters: ASN

Now you can check BGP neighborship status in the gateway router:
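On a Junos gateway router the check typically uses the standard BGP operational commands; the hostname prompt and peer address placeholder here are illustrative, and the actual output (omitted in this excerpt) depends on your setup:

```
labroot@gw-router> show bgp summary
labroot@gw-router> show bgp neighbor <contrail-controller-ip>
```

Look for the peer state Established and for the inet-vpn address family in the negotiated NLRI list.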

Once the neighborship is established, BGP routes are exchanged between the two speakers, and that is when we’ll see that the floating IP assigned to the Kubernetes object is advertised by the master node and learned by the gateway router:

The detail version of the same command tells more: the floating IP route is reflected from the Contrail controller, but the protocol next hop being the compute node indicates that the floating IP is assigned to a compute node. One entity currently running in that compute node owns the floating IP:

The dynamic soft GRE configuration makes the gateway router automatically create a soft GRE tunnel interface:
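Dynamic soft GRE on a Junos gateway is typically enabled with a dynamic-tunnels stanza along these lines. This is a sketch: the profile name, source address, and destination prefix are assumptions and must reflect your gateway’s local address and the compute-node subnet:

```
routing-options {
    dynamic-tunnels {
        contrail {                        /* arbitrary tunnel profile name */
            source-address;      /* gateway local/loopback address (assumed) */
            gre;
            destination-networks {;           /* compute-node subnet (assumed) */
```

With this in place, the router creates a gr- tunnel interface on demand toward any protocol next hop that falls inside the destination network.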

The IP-Header indicates the GRE outer IP header, so the tunnel is built from the current gateway router (using its BGP local address) to the remote node, in this case one of the Contrail compute nodes. The floating IP advertisement process is illustrated in Figure 12.

Figure 12: Floating IP Advertisement