Contrail as a CNI

 

In container technology, a veth (virtual Ethernet) pair functions much like a virtual cable connecting two network namespaces: one end is plugged into the container, and the other end sits in the host or Docker bridge namespace.

The Contrail CNI plugin is responsible for inserting the network interface (that is, one end of the veth pair) into the container network namespace. It also makes all the necessary changes on the host, such as attaching the other end of the veth pair to a bridge, assigning IP addresses, configuring routes, and so on.

Many such CNI plugin implementations are publicly available today. Contrail is one of them, and it is our favorite. For a comprehensive list, check https://github.com/containernetworking/cni.

Figure 1: The Container and veth Pair

Another CNI plugin, multus-cni, enables you to attach multiple network interfaces to pods. Multus accomplishes this multiple-network support by calling multiple other CNI plugins. Because each plugin creates its own network, combining several plugins gives the pod several networks. One of the main advantages that Contrail provides, compared to multus-cni and all other current implementations in the industry, is that Contrail can attach multiple network interfaces to a Kubernetes pod by itself, without having to call any other plugins. This brings support for a truly multi-homed pod.
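Regardless of which plugin provides them, additional interfaces are requested through the standard `k8s.v1.cni.cncf.io/networks` pod annotation defined by the Network Plumbing Working Group convention. Here is an illustrative sketch (the pod and network names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-if-pod
  annotations:
    # Request two additional interfaces on top of the default one;
    # each name refers to an additional network defined in the
    # pod's namespace (names here are placeholders).
    k8s.v1.cni.cncf.io/networks: vn-left,vn-right
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```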

Network Attachment Definition CRD

Contrail CNI follows the Kubernetes CRD (Custom Resource Definition) Network Attachment Definition to provide a standardized method of specifying the configurations for additional network interfaces. There is no change to the standard upstream Kubernetes APIs, which makes this implementation highly compatible.

In Contrail, the Network Attachment Definition CRD is created by contrail-kube-manager (KM). When it boots up, KM checks whether a network CRD named network-attachment-definitions.k8s.cni.cncf.io exists in the Kubernetes API server, and creates one if it doesn't.

Here is a CRD object YAML file:
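The manifest below is an illustrative sketch of what this CRD typically looks like, following the Network Plumbing Working Group convention; the exact manifest created by contrail-kube-manager may differ in API version and details:

```yaml
# Sketch of the Network Attachment Definition CRD (illustrative).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  version: v1
  scope: Namespaced
  names:
    plural: network-attachment-definitions
    singular: network-attachment-definition
    kind: NetworkAttachmentDefinition
    shortNames:
      - net-attach-def
```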

In the Contrail Kubernetes setup, the CRD has been created and can be verified:
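For example, you can query the API server for the CRD by name (output formatting varies by kubectl version):

```shell
# Confirm the CRD was created by contrail-kube-manager.
kubectl get crd network-attachment-definitions.k8s.cni.cncf.io

# Inspect its details.
kubectl describe crd network-attachment-definitions.k8s.cni.cncf.io
```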

Using the new NetworkAttachmentDefinition kind created from the above CRD, we can now create a virtual network in Contrail Kubernetes environments.

To create a virtual network from Kubernetes, use a YAML template like this:
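A sketch of such a template is shown below. The object name, namespace, and subnet are placeholders; the annotations are the Contrail-specific ones described next:

```yaml
# Illustrative NetworkAttachmentDefinition defining a Contrail
# virtual network (names and subnet are placeholders).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vn-left
  namespace: default
  annotations:
    opencontrail.org/cidr: "10.10.10.0/24"
    opencontrail.org/ip_fabric_forwarding: "false"
    opencontrail.org/ip_fabric_snat: "false"
```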

Like many other standard Kubernetes objects, you specify the virtual network name and the namespace under metadata, plus annotations that carry additional information about the network. In Contrail, the following annotations are used in the NetworkAttachmentDefinition CRD to enable certain attributes for the virtual network:

  • opencontrail.org/cidr: This CIDR defines the subnet for a virtual network.

  • opencontrail.org/ip_fabric_forwarding: This flag enables or disables the IP fabric forwarding feature.

  • opencontrail.org/ip_fabric_snat: This flag enables or disables the IP fabric SNAT feature.

In Contrail, the ip-fabric-forwarding feature enables IP fabric-based forwarding for the virtual network, without tunneling. When two virtual networks with ip_fabric_forwarding enabled communicate with each other, the overlay traffic is forwarded directly over the underlay.

With the Contrail ip-fabric-snat feature, pods in the overlay can reach the Internet without floating IPs or a logical router. The ip-fabric-snat feature uses the compute node's IP address to create a source NAT for reaching the required services.

Note that the IP fabric forwarding and IP fabric SNAT features are not covered in this book.

Alternatively, you can define a new virtual network by referring to an existing virtual network:
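An illustrative sketch of this form is shown below. We assume the opencontrail.org/network annotation carries the fully qualified name of a virtual network that already exists in Contrail; the domain, project, and network names here are placeholders:

```yaml
# Sketch: a NetworkAttachmentDefinition that points to a virtual
# network already created in Contrail (names are placeholders).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vn-existing
  namespace: default
  annotations:
    opencontrail.org/network: '{"domain": "default-domain", "project": "k8s-default", "name": "vn-left"}'
```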

In this book, we use the first template to define the virtual networks in all of our examples.