Enable Pods with Multiple Network Interfaces
Cloud-Native Contrail® Networking™ (CN2) supports multiple network interfaces for a pod within Kubernetes. Support for multiple network interfaces provides a variety of environment-specific functionality, including the ability to segment traffic over multiple interfaces.
Cloud-Native Contrail Networking natively supports multiple network interfaces for a pod. You can also enable multiple network interfaces in Cloud-Native Contrail Networking using Multus. Multus is a container network interface (CNI) plugin for Kubernetes developed by the Kubernetes Network Plumbing Working Group. Cloud-Native Contrail can interoperate with Multus to support pod interfaces provided by multiple CNIs.
This document provides the steps to enable multiple interfaces for a pod in environments using CN2. It includes information about when and how to enable multiple network interfaces. Multiple interface support for a pod was initially released in Contrail Networking Release 22.1.
Multiple Network Interfaces in Cloud-Native Contrail Benefits
Support for multiple network interfaces is useful or required in many cloud-networking environments. This list provides a few common examples:
- Pods routinely require a data interface to carry data traffic and a separate interface for management traffic.
- Virtualized network functions (VNFs) typically need three interfaces—a left, a right, and a management interface—to provide network functions. A VNF often can't provide its function with a single network interface.
- Cloud network topologies routinely need to support two or more network interfaces to isolate management networks from tenant networks.
- In customized or high-scale cloud-networking environments, you often must use a cloud-networking product that supports multiple network interfaces to meet a variety of environment-specific requirements.
A pod in a Kubernetes cluster using the default CNI has a single network interface for sending and receiving network traffic. You can use Cloud-Native Contrail Networking to provide multiple network interfaces. Cloud-Native Contrail Networking also supports Multus integration, allowing environments using Cloud-Native Contrail for networking to support multiple network interfaces using Multus.
Multiple Network Interfaces in Cloud-Native Contrail Overview
You can enable multiple network interfaces in Cloud-Native Contrail either with or without Multus. Multus is a container network interface (CNI) plugin for Kubernetes that enables support for multiple network interfaces on a pod as well as multihoming between pods. Multus can simultaneously support interfaces from multiple delegate CNIs. This multiple delegate CNI support allows for the creation of cloud-networking environments that are interconnected using CNIs from different vendors, including CN2. Multus is often called a "meta-plugin" because of this multi-vendor support.
The following lists describe when to use each method of enabling multiple network interfaces.
You should enable multiple network interfaces using the native Cloud-Native Contrail Networking support for the following reasons:
- You do not want the overhead of enabling and maintaining Multus in your environment.
- You are using Cloud-Native Contrail Networking as your only container network interface (CNI).
- You do not want to create and maintain Network Attachment Definition (NAD) objects to support multiple network interfaces in your environment.
You must create a NAD object to enable multiple network interfaces with Multus; you do not have to configure a NAD object if you are not using Multus. Notably, each NAD object creates a virtual network and a subnet that you must monitor and maintain.
You should enable multiple network interfaces using Multus for the following reasons:
- You are using Cloud-Native Contrail in an environment that is already using Multus. Multus is especially common in environments using OpenShift orchestration.
- You need the "meta-plugin" capabilities provided by Multus. For example, you are using Cloud-Native Contrail in an environment where a pod has multiple interfaces that are managed by both Cloud-Native Contrail and other CNIs.
- You are using Cloud-Native Contrail in an environment where it is integrated with Juniper Networks Apstra. You must enable Multus to enable Cloud-Native Contrail integration with Apstra.
  Cloud-Native Contrail integration with Apstra was introduced in Release 22.4. For more information regarding Cloud-Native Contrail integration with Apstra, see Extend Virtual Networks to Apstra.
- You need some of the other Multus features in your environment.
Cloud-Native Contrail Integration with Multus Overview
A Contrail vRouter is natively Multus-aware. No Cloud-Native Contrail Networking-specific configuration is required to enable Multus interoperability with Cloud-Native Contrail.
This list summarizes Cloud-Native Contrail interoperability with Multus:
- Cloud-Native Contrail is compatible with Multus CNI version 0.3.1.
- Cloud-Native Contrail must be configured as the primary/default CNI with Multus (see the configuration sketch after this list).
- Cloud-Native Contrail can be configured as a delegate CNI with Multus only when Cloud-Native Contrail is also configured as the primary/default CNI. Cloud-Native Contrail is not supported as a delegate CNI when another CNI is configured as the primary CNI.
- Cloud-Native Contrail supports interoperability with Multus when the vRouter is in kernel mode or Data Plane Development Kit (DPDK) mode.
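To illustrate the primary/default CNI requirement, the following is a minimal sketch of a Multus configuration in which contrail-k8s-cni is the default (first) delegate. The file path, kubeconfig path, and field values are assumptions for illustration only; the exact configuration depends on your Multus version and on how Multus was deployed. A file such as /etc/cni/net.d/00-multus.conf might contain:
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "cniVersion": "0.3.1",
      "name": "juniper-network",
      "type": "contrail-k8s-cni"
    }
  ]
}
In Multus, the first entry in the delegates list acts as the default cluster network, which is how Cloud-Native Contrail remains the primary CNI while additional interfaces are attached through NAD objects.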
Multus is a third-party plugin. You enable and configure Multus within Kubernetes but entirely outside of Cloud-Native Contrail. To enable Multus, you can apply the multus-daemonset.yml files provided by the Kubernetes Network Plumbing Working Group.
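For example, you might enable Multus by applying the daemonset manifest published in the Multus CNI repository and then verifying that the Multus pods are running. The URL below is illustrative; confirm the current path and file name in the Multus documentation for your release before applying it.
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml
kubectl get pods -n kube-system -o wide | grep multus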
For detailed information about Multus, see the Multus CNI Usage Guide from the Kubernetes Network Plumbing Working Group.
Create a Network Attachment Definition Object
You do not need to create a NetworkAttachmentDefinition (NAD) object to enable multiple interfaces using the native multiple interfaces support in Cloud-Native Contrail Networking. You can skip this section if you are not using Multus to enable multiple network interfaces in your environment. If you are not using NAD objects but need to create a virtual network, see Deploy VirtualNetworkRouter in Cloud-Native Contrail Networking.
This section illustrates how to create a NAD object using a YAML file. You add the Cloud-Native Contrail configuration to the NAD object using the juniper.net/networks annotation. A representative example of the YAML file that creates the NAD object and a table of field descriptions appear later in this section.
Be sure to include the juniper.net/networks annotation when you create the NetworkAttachmentDefinition object. If you define the YAML file that creates the NetworkAttachmentDefinition object without the juniper.net/networks annotation, the NetworkAttachmentDefinition object is treated as a third-party object. No Contrail-related objects, including the VirtualNetwork object and the Subnet object, are created in the network.
When you create a NetworkAttachmentDefinition object in a Kubernetes environment, the NAD controller processes it. The NAD controller runs in kube-manager and either creates a VirtualNetwork object or updates an existing VirtualNetwork object when a NetworkAttachmentDefinition object is successfully created. The NAD controller is enabled by default, but you can disable it; see Disable the Network Attachment Definition Controller.
Following is an example of the YAML file used to create a NetworkAttachmentDefinition object:
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: networkname-1 namespace: nm1 annotations: juniper.net/networks: '{ "ipamV4Subnet": "172.16.10.0/24", "ipamV6Subnet": "2001:db8::/64", "routeTargetList": ["target:23:4561"], "importRouteTargetList": ["target:10.2.2.2:561"], "exportRouteTargetList": ["target:10.1.1.1:561"], "fabricSNAT": true }' spec: config: '{ "cniVersion": "0.3.1", "name": "juniper-network", "type": "contrail-k8s-cni" }'
The NetworkAttachmentDefinition Object Fields table provides usage details for the variables in the NetworkAttachmentDefinition object file.
| Variable | Usage |
| --- | --- |
| ipamV4Subnet | (Optional) Specifies the IPv4 subnet address for the virtual network. |
| ipamV6Subnet | (Optional) Specifies the IPv6 subnet address for the virtual network. |
| routeTargetList | (Optional) Provides a list of route targets that are used as both import and export routes. |
| importRouteTargetList | (Optional) Provides a list of route targets that are used as import routes. |
| exportRouteTargetList | (Optional) Provides a list of route targets that are used as export routes. |
| fabricSNAT | (Optional) Enables or disables connectivity to the underlay network using the port-mapping capabilities provided by fabric source NAT. Set this parameter to true or false; the default is false. To allow connectivity to the underlay network, set the parameter to true. |
Note the following behaviors related to the NetworkAttachmentDefinition object:
- The NAD controller runs in kube-manager and handles processing of all NetworkAttachmentDefinition objects.
- You can monitor NAD controller updates in juniper.net/network-status (see the example after this list).
- IPAM updates to the NetworkAttachmentDefinition object are not allowed.
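The following is a hypothetical example of checking that status. It assumes that juniper.net/network-status is written as an annotation on the NetworkAttachmentDefinition object itself; verify where your release records this status before relying on it.
kubectl get net-attach-def networkname-1 -n nm1 -o yaml
Look under metadata.annotations in the output for the juniper.net/network-status entry.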
The network attachment definition object creates a virtual network. The Network Attachment Definition Object Impact on Virtual Networks table provides an overview of how events related to the network attachment definition object impact virtual networks.
| If | Then |
| --- | --- |
| You define a namespace for a network attachment definition object in a single-cluster topology | A VirtualNetwork is created in the same namespace as the network attachment definition. This VirtualNetwork has the same name as the NetworkAttachmentDefinition object. The NAD object is named using the name: field in the metadata: hierarchy. |
| You define a namespace for a network attachment definition object in a multi-cluster topology | The VirtualNetwork namespace is cluster-name-ns. |
| A namespace is not defined for a network attachment definition object in a multi-cluster topology | The VirtualNetwork namespace is cluster-name-default. |
| You delete a network attachment definition resource | The associated VirtualNetwork object is also deleted. |
| You delete a virtual network that was created by the network attachment definition resource | The network attachment definition controller reconciles the issue and re-creates the virtual network. |
Configure a Pod to Use Multiple Interfaces
You configure multiple interfaces in the pod object. If you are using Multus, you must also configure the NAD object as outlined in Create a Network Attachment Definition Object.
In the following example, you create two interfaces for network traffic in the juniper-pod-1 pod: tap1 and tap2.
apiVersion: v1
kind: Pod
metadata:
  name: juniper-pod-1
  namespace: juniper-ns
  annotations:
    k8s.v1.cni.cncf.io/networks: |-
      [
        {
          "name": "juniper-network1",
          "namespace": "juniper-ns",
          "cni-args": null,
          "ips": ["172.16.20.42"],
          "mac": "de:ad:00:00:be:ef",
          "interface": "tap1"
        },
        {
          "name": "juniper-network2",
          "namespace": "juniper-ns",
          "cni-args": null,
          "ips": ["172.16.21.42"],
          "mac": "de:ad:00:00:be:ee",
          "interface": "tap2"
        }
      ]
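A minimal sketch of applying the pod definition and verifying the additional interfaces follows. It assumes the manifest is saved as juniper-pod-1.yaml, that it also defines at least one container (omitted from the example above), and that the container image includes the ip utility.
kubectl apply -f juniper-pod-1.yaml
kubectl describe pod juniper-pod-1 -n juniper-ns
kubectl exec -n juniper-ns juniper-pod-1 -- ip addr show
The describe output shows the network annotations and any CNI-related events, and the ip addr show output should list the tap1 and tap2 interfaces in addition to the default pod interface.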
Disable the Network Attachment Definition Controller
The NAD controller is part of kube-manager. You enable and disable this controller using the enableNad: variable within the YAML file that defines the Kubemanager object. The NAD controller is enabled by default.
If you prefer to prevent the application of NetworkAttachmentDefinition objects, you can disable the NAD controller.
In the following example, the network attachment definition controller is disabled:
kind: Kubemanager
metadata:
  name: remote-cluster
  namespace: contrail
spec:
  common:
    nodeSelector:
      node-role.kubernetes.io/master: ""
  enableNad: false
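If the Kubemanager resource already exists in a running cluster, one way to make this change is to edit the resource in place and set spec.enableNad to false. The resource name used with kubectl below is an assumption based on the example above; confirm the exact resource and field names in your deployment.
kubectl edit kubemanager remote-cluster -n contrail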