CN2 Apstra Integration for SR-IOV-Based Workloads

SUMMARY This topic describes how to extend virtual networks in Juniper® Cloud-Native Contrail Networking to an Apstra-managed fabric for SR-IOV-enabled networks. Juniper Networks supports this integration feature using Contrail Networking Release 22.4 or later in a Kubernetes-orchestrated environment.

Overview

SR-IOV-enabled NICs on servers are used to deliver efficient I/O virtualization. The SR-IOV technology enables the physical NIC to be split into several virtual functions. These virtual NICs or virtual functions can transmit and receive packets directly as opposed to going through the vRouter. When you create workloads on SR-IOV servers and attach virtual functions to the pods, the workloads use the fabric underlay directly.

You use Juniper Apstra to provision the fabric to provide the required underlay connectivity for the SR-IOV workloads. Apstra is Juniper’s intent-based networking software that automates and validates the design, deployment, and operation of data center networks. You can integrate CN2 with Apstra software to provision the fabric underlay for SR-IOV pods.

Note:

We refer to primary nodes in our documentation. Kubernetes refers to master nodes. References in this guide to primary nodes correlate with master nodes in Kubernetes terminology.

Example: CN2 Kubernetes Deployment

Figure 1 shows an example of a CN2 Kubernetes deployment. This deployment uses Apstra to provision the fabric underlay for the SR-IOV pods. Table 1 describes the different components.

Figure 1: CN2 Kubernetes Deployment
Table 1: CN2 Apstra Components

  • SR-IOV worker nodes: The SR-IOV worker nodes connect to the leaf devices in the fabric. These nodes, which are part of the CN2 cluster, have SR-IOV-enabled NICs that can be split into virtual functions and attached to pods. When you create a pod on an SR-IOV worker node, the pod's interface is attached to a virtual function on the SR-IOV-enabled NIC.

  • CN2 Apstra plug-in: The CN2 Apstra plug-in extends the virtual network to the fabric. This plug-in listens for CN2 Kubernetes events, such as creating a NAD, attaching pods to the virtual network, and creating a Virtual Network Router (VNR). The plug-in then configures the fabric underlay through Apstra.

  • Apstra: Apstra provisions the fabric to provide the required underlay connectivity for SR-IOV pods. Apstra also provides topology information about which leaf port is connected to which worker node. The CN2 Apstra plug-in uses this information to configure virtual network membership on the relevant fabric ports, based on the worker node on which the SR-IOV pod is spawned.

Prerequisites

Before you configure SR-IOV devices in a CN2 Apstra configuration, make sure that you have the following in place:

  • Apstra software 4.0 or higher

  • A CN2 cluster with worker nodes that have SR-IOV-enabled NICs

  • The following CN2 Container Network Interface (CNI) plug-ins:

    • Multus Plug-In

    • SR-IOV Network Device Plug-In

    • CN2 Apstra Plug-In

    • CN2 IPAM Plug-In

  • Licenses on the switches you are using in your topology

    Juniper QFX switches require software licenses for advanced features. To ensure that your fabric has the required licenses, see the Juniper Networks Licensing Guide.

    Note:

    Make sure that you onboard the fabric to your Apstra blueprint as described in Step 4 in the Installation Workflow.

Considerations

Read through this list of considerations before you begin the installation:

  • This feature assumes:

    • CN2 single-cluster deployments

    • Communication between SR-IOV pods

    • Basic use cases for intra-VNI and inter-VNI communication between SR-IOV pods. Other forms of routing, such as hub-and-spoke routing, are not supported.

  • This feature assumes a simple spine-leaf topology in which each SR-IOV worker node is connected to only one leaf device. If an SR-IOV worker node is connected to multiple leaf ports, the plug-in configures all the leaf ports on all the leaf devices to which that worker node is connected.

  • In CN2, you must manually configure the routes on the pods for inter-VNI routing. For example, you can use the command ip route add 10.30.30.0/24 via 10.20.20.1 to reach another virtual network.

  • This feature assumes that overlapping IP addresses and bonded interfaces are not in use.

  • This feature assumes that only IPv4 addressing is in use.

Installation Workflow

Follow the steps in this procedure to install and configure the CN2 Apstra plug-in and its prerequisites:

  1. Install the Apstra Software.

    Install and configure Apstra software 4.0 or higher. See the Juniper Apstra Installation and Upgrade Guide.

    If you have an existing data center network, Apstra is already managing the fabric. Make sure that you assign the required resource pools such as ASNs and loopback IP addresses for the blueprint.

  2. Install a CN2 Cluster.

    Install and configure a CN2 cluster that contains Kubernetes worker nodes. See the Install sections in the CN2 Installation Guide for Upstream Kubernetes or CN2 Installation Guide for OpenShift Container Platforms for instructions.

  3. Install the Plug-Ins.

    1. Multus Plug-In:

      This plug-in enables you to attach multiple network interfaces to pods. See the Multus CNI for Kubernetes or Multus CNI for OpenShift documentation for installation instructions.

    2. SR-IOV Network Device Plug-In:

      This plug-in discovers and advertises networking resources for SR-IOV virtual functions on a Kubernetes host. See the SR-IOV Network Device Plugin for Kubernetes or SR-IOV Network Device Plugin for OpenShift documentation for instructions.

    3. CN2 Apstra Plug-In:

      This plug-in is installed as part of the CN2 deployer. See Install and Configure the CN2 Apstra Plug-In to install the plug-in.

    4. CN2 IPAM Plug-In:

      This plug-in allocates IP addresses for the pods. You install this plug-in on the SR-IOV nodes. See Install the CN2 IPAM Plug-In to install the plug-in.

  4. Onboard the Fabric in Apstra.

    You onboard the fabric in Apstra from the Apstra Web GUI. For onboarding instructions, see the Juniper Apstra User Guide.

    • Make sure that you assign the required resource pools such as ASNs and loopback IP addresses to the blueprint.

    • Make sure that the hostnames of the generic systems (that is, servers) in your Apstra blueprint match the hostnames of the corresponding CN2 nodes. You must also tag the SR-IOV links that connect the SR-IOV-enabled NICs on the worker nodes to the fabric ports. You'll enter this same value in the sriov_link_tag field in the CN2 Apstra plug-in CRD when you install the plug-in. The following diagram shows an example of a topology in an Apstra blueprint where the hostnames of the generic systems were edited to match the corresponding hostnames of the CN2 worker nodes. The diagram also shows the SR-IOV tags that were configured for these SR-IOV links.

  5. Verify Your Installation.

    See Verify Your Installation for instructions.

Install and Configure the CN2 Apstra Plug-In

This section describes how to install and configure the CN2 Apstra plug-in.

The CN2 Apstra plug-in is installed as part of the deployer. The CN2 Apstra plug-in extends the virtual network to the fabric, listens for CN2 Kubernetes events (such as NAD creation), and configures the fabric for the underlay through the Apstra SDK.

Depending on your installation, use the following files to install and configure the plug-in:

  • For Kubernetes, use the single_cluster_deployer_example.yaml file.

  • For OpenShift, copy all the files in the ocp/plugins directory to one level up in the directory structure.

To install and configure the CN2 Apstra plug-in:

  1. Uncomment the apstra-plugin-secret and contrail-apstra-plugin sections in your single_cluster_deployer_example.yaml file.

  2. Enter your credentials (username and password) in the apstra-plugin-secret section in the corresponding deployer file. Make sure that your credentials are base64 encoded.

    For example:
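
    A minimal sketch of the apstra-plugin-secret section (the exact keys and namespace in your deployer file might differ; the values shown are the base64 encodings of "admin" and "password"):

      apiVersion: v1
      kind: Secret
      metadata:
        name: apstra-plugin-secret
        namespace: contrail            # namespace is an assumption; use the namespace in your deployer file
      type: Opaque
      data:
        username: YWRtaW4=             # base64 of "admin"
        password: cGFzc3dvcmQ=         # base64 of "password"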

  3. Enter the parameters for the blueprint name, server_ip, and sriov_link_tag in the contrail-apstra-plugin section, as shown in the following example. Make sure that the value you enter for the sriov_link_tag parameter matches the tag that you specified in the Apstra software.
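
    A sketch of the contrail-apstra-plugin section with illustrative field names and values (run the kubectl explain apstraplugin.spec command described below for the authoritative field list):

      apiVersion: plugins.juniper.net/v1alpha1       # API group is an assumption
      kind: ApstraPlugin
      metadata:
        name: contrail-apstra-plugin
        namespace: contrail                          # namespace is an assumption
      spec:
        blueprint_name: dc1-blueprint                # name of your Apstra blueprint
        server_ip: 10.1.1.10                         # Apstra server IP address
        sriov_link_tag: sriov-link                   # must match the tag you configured in Apstra
        image: enterprise-hub.juniper.net/cn2/contrail-apstra-plugin:R22.4-5   # image URL is illustrative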

    This example also shows the image URL from which the deployer fetches the contrail-apstra-plugin image. You can edit the image URL if needed. For example, you can change the value of the release_number in the image to R22.4-5.

    For help in understanding what each field means, run the kubectl explain apstraplugin.spec command.

    Note:

    The output of the kubectl explain command is for informational purposes only. You can run this command only after you deploy the CN2 Apstra plug-in.

With the above steps, you have made the required changes in the deployer to install the CN2 Apstra plug-in. You can now proceed with the CN2 installation by following the instructions in the CN2 Installation Guide for Upstream Kubernetes or the CN2 Installation Guide for OpenShift Container Platforms.

Note:

Even after you have completed the CN2 installation, you can still edit the CN2 Apstra plug-in parameters in the deployer YAML(s) as mentioned in the above steps and then reinstall CN2.

Verify Your Installation

Run the following kubectl commands to verify that your installation is up and running. For example:
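
The following is a minimal sketch of the kinds of checks you can run (the contrail namespace and the apstraplugins resource name are assumptions; adjust them for your deployment):

  # Confirm that the CN2 Apstra plug-in pod is running
  kubectl get pods -n contrail | grep apstra

  # Confirm that the ApstraPlugin custom resource exists
  kubectl get apstraplugins -n contrail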

Install the CN2 IPAM Plug-In

Follow this procedure to install the CN2 IPAM plug-in for both Kubernetes and OpenShift deployments. This procedure assumes that CN2 is already installed on a Kubernetes cluster. In this procedure, we show a single-cluster deployment.

To install and configure the CN2 IPAM plug-in:

  1. Run the kubectl get nodes command to view the list of available nodes.
  2. Add the label sriov:"true" for each worker node with SR-IOV-enabled NICs. For example:
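    A sketch, assuming a worker node named worker-1 (replace with your node names):

      kubectl label node worker-1 sriov=true
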
  3. Add the sriovLabelSelector on the contrail-vrouters-nodes CRD.
    In the CRD, under the spec field, add the following information:
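    The following sketch shows the field to add (the field value format is an assumption; it must select the sriov: "true" label that you added in step 2):

      spec:
        sriovLabelSelector: "sriov=true"
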
  4. Verify the plug-in installation.

    Wait for the vRouter pod to restart on the primary node. Verify that the cn2-ipam and sriov binaries are installed, as shown in the following example:
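
    A sketch of this check on a Kubernetes node (see the note that follows for the OpenShift path):

      ls /opt/cni/bin/ | grep -E 'cn2-ipam|sriov'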

    Note:

    The default location of the binary files depends on whether you use Kubernetes or OpenShift:

    • For Kubernetes, the binaries reside in the /opt/cni/bin/ directory.

    • For OpenShift, the binaries reside in the /var/lib/cni/bin/ directory.

SR-IOV Use Cases

This section provides examples of SR-IOV use cases for intra-VNI and inter-VNI topologies.

Intra-VNI: Pods That Belong to the Same Virtual Network

In this Intra-VNI use case, the pods are attached to the same virtual network. By default, pods attached to the same virtual network can communicate with one another, regardless of whether:

  • The pods are spawned on the same SR-IOV worker nodes or on different SR-IOV worker nodes.

    Or

  • The SR-IOV worker nodes are connected to the same leaf device or to a different leaf device.

Figure 2 shows an example of an intra-VNI topology. This spine-leaf topology shows two SR-IOV worker nodes. Each node has a physical NIC with SR-IOV enabled. These physical NICs (ens801f2 and ens801f3) can be split into virtual functions and attached to the pods for direct input/output (I/O). When packets travel through these virtual functions, the packets are tagged with the correct VLAN. In this example, each pod belongs to the same virtual network, so the packets do not pass through the vRouter. Instead, the packets go directly to the fabric underlay provisioned by Apstra.

Figure 2: Example: Intra-VNI Topology

Inter-VNI Routing: Pods That Belong to Different Virtual Networks

Figure 3 shows an example of an inter-VNI topology. In this topology, the pods belong to two different virtual networks. To enable routing between these networks, you must create a Virtual Network Router (VNR) in CN2. For instructions, see Configure Inter-VNI Pod Communication.

Note:

For a list of supported Juniper devices for use in an Inter-VNI topology, see Layer 3 connectivity in an EVPN-VXLAN topology. Also, make sure that the QFX devices are running Junos OS version 20.2R2.11 or above.

Figure 3: Example: Inter-VNI Topology

Configure Intra-VNI Pod Communication

Follow this procedure to configure communication between pods that reside on the same virtual network.

To configure intra-VNI pod communication:

  1. Create a NetworkAttachmentDefinition (NAD) object to attach the pods to the virtual network.
    The following example shows the NetworkAttachmentDefinition.yaml file you use to create the NAD.
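    The following sketch is illustrative; the annotation and CNI configuration shown are assumptions that might differ in your CN2 release, and the 10.20.20.0/24 subnet is an assumption:

      # sriov_net20_nad.yaml
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: sriov-net20
        labels:
          juniper.net/plugin: apstra                  # extends the virtual network to the fabric
        annotations:
          juniper.net/networks: '{"ipamV4Subnet": "10.20.20.0/24"}'
          k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
      spec:
        config: '{
          "cniVersion": "0.3.1",
          "name": "sriov-net20",
          "type": "sriov",
          "ipam": { "type": "cn2-ipam" }
        }'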
    In this example, we use the label juniper.net/plugin: apstra to extend the CN2 virtual network to the fabric using the Apstra plug-in. You can use the same NAD definition for an OpenShift cluster.
    When you create this object, the CN2 Apstra plug-in listens to the NAD and extends the VirtualNetwork to the fabric through Apstra.
  2. Run the kubectl apply -f sriov_net20_nad.yaml command to create the NAD.
  3. Run the following command to verify that the NAD was created:
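      kubectl get network-attachment-definitions sriov-net20

    You can also use the short resource name net-attach-def with this command.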
    When the NAD is created, the CN2 Apstra plug-in listens for changes and provisions the fabric through the Apstra SDK.
  4. Next, create a Pod.yaml file. For example:
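    A minimal sketch (the pod name and container image are illustrative):

      apiVersion: v1
      kind: Pod
      metadata:
        name: sriov-pod-1
        annotations:
          k8s.v1.cni.cncf.io/networks: sriov-net20    # NAD created in step 1
      spec:
        containers:
        - name: app
          image: busybox
          command: ["sleep", "36000"]
          resources:
            requests:
              intel.com/intel_sriov_netdevice: '1'    # SR-IOV virtual function resource
            limits:
              intel.com/intel_sriov_netdevice: '1'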
    Note that we are referencing the NAD (sriov-net20) that we created in Step 1, in addition to the resource name (intel.com/intel_sriov_netdevice) for the virtual function.
  5. Finally, run the kubectl apply -f pod.yaml command to create the pod.
    When you create the pod, the CN2 Apstra plug-in listens to the pod-creation event and provisions the fabric to assign the relevant fabric ports to the VirtualNetwork.
You have now configured communication between pods on the same virtual network.

Configure Inter-VNI Pod Communication

Follow this procedure to configure communication between pods that belong to different virtual networks.

To configure Inter-VNI pod communication:

  1. Create a NAD object (sriov-net30 in this example) to attach pods to a second virtual network. You can use this same NAD definition for an OpenShift cluster.
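    A sketch of sriov_net30_nad.yaml, identical in structure to the sriov-net20 NAD except for the name and subnet (the 10.30.30.0/24 subnet is an assumption):

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: sriov-net30
        labels:
          juniper.net/plugin: apstra
        annotations:
          juniper.net/networks: '{"ipamV4Subnet": "10.30.30.0/24"}'
          k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
      spec:
        config: '{
          "cniVersion": "0.3.1",
          "name": "sriov-net30",
          "type": "sriov",
          "ipam": { "type": "cn2-ipam" }
        }'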
  2. Run the kubectl apply -f sriov_net30_nad.yaml command to create the NAD.
  3. Run the following command to verify that the NAD was created:
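      kubectl get network-attachment-definitions sriov-net30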
    When you create the NAD, the CN2 Apstra plug-in listens to the NAD event and extends the VirtualNetwork to the fabric through Apstra. You can create additional NADs as needed, following the same pattern.
  4. Create a Pod.yaml file to attach the pods to the different virtual networks. For example:
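    A minimal sketch for a pod on the second virtual network (the pod name and container image are illustrative; a pod on the first network references sriov-net20 in the same way):

      apiVersion: v1
      kind: Pod
      metadata:
        name: sriov-pod-2
        annotations:
          k8s.v1.cni.cncf.io/networks: sriov-net30    # NAD created in step 1
      spec:
        containers:
        - name: app
          image: busybox
          command: ["sleep", "36000"]
          resources:
            requests:
              intel.com/intel_sriov_netdevice: '1'
            limits:
              intel.com/intel_sriov_netdevice: '1'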

    Note that we are referencing the NAD (sriov-net30) created in Step 1, in addition to the resource name (intel.com/intel_sriov_netdevice) for the virtual function.

  5. Run the kubectl apply -f pod.yaml command to create the pod.
    When you create the pod, the CN2 Apstra plug-in listens to the pod-creation event and provisions the fabric to assign the relevant fabric ports to the VirtualNetwork.
  6. Create a VirtualNetworkRouter.yaml file to route between virtual networks that share a common label. In this example, the common label is web.
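    A sketch only; the VirtualNetworkRouter schema and the vn label key are assumptions, and the virtual networks to be routed must carry the matching web label:

      apiVersion: core.contrail.juniper.net/v1alpha1  # API group is an assumption
      kind: VirtualNetworkRouter
      metadata:
        name: vnr-web
      spec:
        type: mesh                                    # mesh connectivity between the selected networks
        virtualNetworkSelector:
          matchLabels:
            vn: web                                   # common label; the key is illustrative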
  7. Finally, configure the gateway manually on each pod for inter-VNI routing.
    For example, on Ubuntu, you can use the command ip route add 10.20.20.0/24 via 10.30.30.1 to configure the gateway.
You have now configured communication between pods that belong to different virtual networks.