Cloud-Native Router Operator Service Module: VPC Gateway

The Cloud-Native Router Operator Service Module is an operator framework that we use to develop cRPD applications and solutions. This section describes how to use the Service Module to implement a VPC gateway between your Amazon EKS cluster and your on-premises Kubernetes cluster.

Cloud-Native Router VPC Gateway Overview

We provide the Cloud-Native Router Operator Service Module to install JCNR (with a BYOL license) on an Amazon EKS cluster and to configure it to act as an EVPN-VXLAN VPC Gateway between a separate Amazon EKS cluster running MetalLB and an on-premises Kubernetes cluster (Figure 1).

Once you configure the VPC Gateway custom resource with information on your MetalLB cluster and your on-premises Kubernetes cluster, the VPC Gateway establishes a BGP session with your MetalLB cluster and establishes a BGP EVPN session with your on-premises Kubernetes cluster. Routes learned from the MetalLB cluster are re-advertised to the on-premises cluster using EVPN Type 5 routes. Routes learned from the on-premises cluster are leaked into the route tables of the routing instance for the MetalLB cluster.

The configuration example we'll use in this section connects workloads at 10.4.230.4/32 in the on-premises cluster to services at 10.14.220.1/32 in the MetalLB cluster.

Note:

Configuring connectivity between the AWS Cloud and your on-premises data center is not covered in this procedure. Use your preferred AWS method for connectivity.

Figure 1: Cloud-Native Router VPC Gateway
Note:

The VPC Gateway custom resource automatically installs Cloud-Native Router with a configuration that is specific to this application. You don't need to install Cloud-Native Router explicitly and you don't need to configure the Cloud-Native Router installation Helm chart.

Install the Cloud-Native Router VPC Gateway

This is the main procedure. Start here.

  1. Prepare the clusters.
    1. Prepare the Cloud-Native Router VPC Gateway cluster. See Prepare the Cloud-Native Router VPC Gateway Cluster.
    2. Prepare the MetalLB cluster. See Prepare the MetalLB Cluster.
    3. Prepare the on-premises cluster. See Prepare the On-Premises Cluster.
    After preparing the clusters, you can start installing the Cloud-Native Router VPC Gateway. Execute the remaining steps in the Cloud-Native Router VPC Gateway cluster.
  2. Download the Cloud-Native Router Service Module Helm chart.

    You can download the Cloud-Native Router Service Module Helm chart from the Juniper Networks software download site. See Cloud-Native Router Software Download Packages.

  3. Install the downloaded Helm chart.
    Note:

    The provided Helm chart installs the Cloud-Native Router VPC Gateway on cores 2, 3, 22, and 23. Therefore, ensure that the nodes in your cluster have at least 24 cores and that the specified cores are free to use.
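
    For example, if you downloaded the Service Module Helm chart as a .tgz package, an install command similar to the following would be typical. The release name jcnr-vpcgw-service-module and the package filename are placeholders; use the actual package name from the download site.

      helm install jcnr-vpcgw-service-module <service-module-chart>.tgz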

    Check that the controller-manager and the contrail-k8s-deployer pods are up.
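
    For example, with kubectl pointed at the Cloud-Native Router VPC Gateway cluster, you can list the operator pods across all namespaces:

      kubectl get pods -A | grep -E 'controller-manager|contrail-k8s-deployer'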

  4. Configure the Cloud-Native Router VPC Gateway custom resource.
    This custom resource contains information on the MetalLB cluster and the on-premises cluster.
    1. Create a YAML file that contains the desired configuration. We'll put our Cloud-Native Router VPC Gateway pods into a namespace that we'll call jcnr-gateway (the same namespace used for the MetalLB kubeconfig secret).
      Table 1 describes the main configuration fields for the spec section of this YAML file, and a complete example follows the table. In the spec definition, application refers to the MetalLB cluster and client refers to the on-premises cluster.
      Table 1: Spec Descriptions

      applicationTopology: This section contains information on the MetalLB cluster.

        applicationInterface: The name of the interface connecting to the MetalLB cluster.

        bgpSpeakerType: Specify metallb when connecting to the MetalLB cluster.

        clusters: Contains the following fields:

          kubeconfigSecretName: The secret containing the kubeconfig of the MetalLB cluster.

          name: The name of the MetalLB cluster.

        enableV6: (Optional) True or false. Enables or disables IPv6 in the MetalLB cluster. Default is false.

        neighbourDiscovery: (Optional) True or false. Governs how BGP neighbors (BGP speakers from the MetalLB cluster) are determined. When set to true, BGP neighbors with addresses specified in sessionPrefix or with addresses in the application interface's subnet are accepted. When set to false, the remote MetalLB cluster's cRPD pod IP is used as the BGP neighbor. Default is false.

        routePolicyOverride: (Optional) True or false. When set to true, a route policy called "export-onprem" is used to govern what MetalLB cluster routes are exported to the on-premises cluster. This gives you the opportunity to create your own export policy. You must create this policy manually and call it "export-onprem". Default is false, which means that all MetalLB cluster routes are exported to the on-premises cluster.

        sessionPrefix: (Optional) Used when neighbourDiscovery is set to true. When present, it indicates the CIDR from which BGP sessions from the MetalLB cluster are accepted. Default is to accept BGP sessions from BGP neighbors in the application interface's subnet.

      client: Information related to the on-premises cluster.

        address: The BGP speaker IP address of the on-premises cluster. The Cloud-Native Router VPC Gateway establishes a direct eBGP session with this address. This eBGP session is used to learn the route to the loopback address, which is used to establish the subsequent BGP EVPN session.

        asn: The AS number of the eBGP speaker in the client cluster. The Cloud-Native Router VPC Gateway validates this when establishing the direct eBGP session with the BGP speaker in the on-premises cluster.

        loopbackAddress: The loopback address of the BGP speaker in the on-premises cluster. The Cloud-Native Router VPC Gateway uses this IP address to establish a BGP EVPN session with the BGP speaker in the on-premises cluster.

        myASN: The local AS number that the Cloud-Native Router VPC Gateway uses for the direct eBGP session with the BGP speaker in the on-premises cluster.

        routeTarget: The route target for the EVPN routes in the on-premises cluster.

        vrrp: Always set to true. This enables VRRP on interfaces towards the on-premises cluster.

      clientInterface: The name of the interface connecting to the on-premises cluster.

      dpdkDriver: Set to vfio-pci.

      loopbackIPPool: The IP address pool used for assigning IP addresses to the cRPD instances created in the cluster (in CIDR format). Note: The number of addresses in the pool must be at least one more than the number of replicas.

      nodeSelector: (Optional) Used in conjunction with a node's labels to determine whether the VPC Gateway pod can run on a node. This selector must match a node's labels for the pod to be scheduled on that node.

      replicas: (Optional) The number of JCNRs created. Default is 1.

      Note:

      Armed with the MetalLB kubeconfig, the Cloud-Native Router VPC Gateway has sufficient information to configure BGP sessions automatically with the MetalLB cluster. You don't need to provide any parameters other than what's listed in the table.

      Here's an example of a working configuration:
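
      The sketch below illustrates what such a configuration might look like, using the values from our ongoing example (eth2 towards the MetalLB cluster, eth3 towards the on-premises cluster, an eBGP speaker at 10.14.205.158, and a loopback at 10.14.140.200). The apiVersion, kind, secret name, cluster name, AS numbers, route target, loopback pool, and exact field nesting shown here are illustrative assumptions only; use the resource definition and values that apply to your deployment.

        apiVersion: gateway.jcnr.juniper.net/v1    # assumption: use the apiVersion defined by your Service Module CRDs
        kind: VpcGateway                           # assumption: use the kind defined by your Service Module CRDs
        metadata:
          name: vpc-gateway
          namespace: jcnr-gateway
        spec:
          applicationTopology:
            applicationInterface: eth2             # interface towards the MetalLB cluster
            bgpSpeakerType: metallb
            clusters:
            - kubeconfigSecretName: metallb-kubeconfig   # assumption: secret created while preparing the cluster
              name: metallb-cluster                      # assumption: a name for the MetalLB cluster
          client:
            address: 10.14.205.158                 # eBGP speaker in the on-premises cluster
            asn: 64513                             # assumption: AS number of the on-premises eBGP speaker
            loopbackAddress: 10.14.140.200         # loopback of the on-premises BGP speaker (EVPN session)
            myASN: 64514                           # assumption: local AS for the direct eBGP session
            routeTarget: "64512:11"                # assumption: route target for the on-premises EVPN routes
            vrrp: true
          clientInterface: eth3                    # interface towards the on-premises cluster
          dpdkDriver: vfio-pci
          loopbackIPPool: 10.100.100.0/29          # assumption: must hold at least replicas + 1 addresses
          replicas: 1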
    2. Apply the YAML file to the cluster.
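      For example, with kubectl pointed at the Cloud-Native Router VPC Gateway cluster:

        kubectl apply -f vpcGateway.yaml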
      where vpcGateway.yaml is the YAML file defining the Cloud-Native Router VPC Gateway.
    3. Check the pods.
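      For example, assuming the jcnr-gateway namespace used in our configuration (the VPC Gateway components may also create pods in other namespaces):

        kubectl get pods -n jcnr-gateway
        kubectl get pods -A | grep -i crpd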
  5. Verify the generated configuration.
    Find the name of the configlet, then check how the configlet is configured, as shown in the example commands below.
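
    A sketch of the commands, assuming the configlets are exposed as a Configlet custom resource (adjust the resource name and namespace to your deployment):

      kubectl get configlets -A
      kubectl describe configlet <configlet-name> -n <namespace>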
  6. Verify your installation.
    1. Access the cRPD pod.
    2. Enter CLI mode.
    3. Check the BGP peers.
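      A sketch of these steps, assuming the cRPD pod runs in the jcnr-gateway namespace (substitute your actual pod name and namespace):

        kubectl exec -it <crpd-pod-name> -n jcnr-gateway -- bash    # access the cRPD pod
        cli                                                         # enter CLI mode
        show bgp summary                                            # check the BGP peers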
      The output shows that the Cloud-Native Router VPC Gateway has the following BGP sessions:
      • with the iBGP speaker in the on-premises cluster at 10.14.140.200 for EVPN routes

      • with the eBGP speaker in the on-premises cluster at 10.14.205.158 for the direct eBGP session

      • with the MetalLB cluster at 10.14.207.29

    4. Check the routes to the MetalLB cluster and the on-premises cluster.
      Check the route to the Nginx service in the MetalLB cluster and the route to the workloads in the on-premises cluster, as shown in the example below. With the routes successfully exchanged, the on-premises workloads at 10.4.230.4 can reach the Nginx service in the MetalLB cluster at 10.14.220.1.
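
      For example, from the cRPD CLI (the routing-instance and table names in your output depend on your configuration):

        show route 10.14.220.1    # route to the Nginx service in the MetalLB cluster
        show route 10.4.230.4     # route to the workloads in the on-premises cluster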

Prepare the MetalLB Cluster

The MetalLB cluster is the Amazon EKS cluster that you ultimately want to connect to your on-premises cluster. Follow this procedure to prepare your MetalLB cluster to establish a BGP session with the Cloud-Native Router VPC Gateway.

  1. Create the Amazon EKS cluster where you'll be running the MetalLB service.
  2. Deploy MetalLB on that cluster. MetalLB provides a network load balancer implementation for your cluster.
    See https://metallb.universe.tf/configuration/ for information on deploying MetalLB.
  3. Create the necessary MetalLB resources. At a minimum, you need to create the MetalLB IPAddressPool resource and the MetalLB BGPAdvertisement resource.
    1. Create the MetalLB IPAddressPool resource.

      Here's an example of a YAML file that defines the IPAddressPool resource.
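
      A minimal sketch, assuming MetalLB is installed in the metallb-system namespace and using jcnr-pool as the pool name (both are assumptions; adjust to your environment):

        apiVersion: metallb.io/v1beta1
        kind: IPAddressPool
        metadata:
          name: jcnr-pool
          namespace: metallb-system
        spec:
          addresses:
          - 10.14.220.0/24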

      In this example, MetalLB will assign load balancer IP addresses from the 10.14.220.0/24 range.

      Apply the above YAML to the cluster to create the IPAddressPool.
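
      For example:

        kubectl apply -f ipaddresspool.yaml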

      where ipaddresspool.yaml is the name of the YAML file defining the IPAddressPool resource.
    2. Create the MetalLB BGPAdvertisement resource.

      Here's an example of a YAML file that defines the BGPAdvertisement resource.
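
      A minimal sketch, referencing the pool name assumed above:

        apiVersion: metallb.io/v1beta1
        kind: BGPAdvertisement
        metadata:
          name: jcnr-bgpadvertisement
          namespace: metallb-system
        spec:
          ipAddressPools:
          - jcnr-pool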

      The BGPAdvertisement resource advertises your service IP addresses to external routers (for example, to your Cloud-Native Router VPC Gateway).

      Apply the above YAML to the cluster to create the BGPAdvertisement resource.
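
      For example:

        kubectl apply -f bgpadvertisement.yaml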

      where bgpadvertisement.yaml is the name of the YAML file defining the BGPAdvertisement resource.
  4. Create the LoadBalancer service. The LoadBalancer service provides the entry point for external workloads to reach the cluster. You can create any LoadBalancer service of your choice.

    Here's an example YAML for an Nginx LoadBalancer service.
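
    A minimal sketch that deploys Nginx and exposes it through a LoadBalancer service (the names and label selectors here are illustrative):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
              - containerPort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        ports:
        - port: 80
          targetPort: 80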

    Apply the above YAML to the cluster to create the Nginx LoadBalancer service.
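
    For example:

      kubectl apply -f nginx.yaml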

    where nginx.yaml is the name of the YAML file defining the Nginx service.
  5. Verify your installation.
    1. Take a look at the pods in your cluster.
      For example:
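      The following assumes kubectl is pointed at the MetalLB cluster:

        kubectl get pods -A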

      The output should show that both the MetalLB pods and the Nginx pod are up.

    2. Check the assigned external IP address for the Nginx service.
      For example:
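      Assuming the Nginx LoadBalancer service is named nginx and lives in the default namespace:

        kubectl get service nginx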

      In this example, MetalLB has assigned 10.14.220.1 to the Nginx LoadBalancer service. This is the overlay IP address that workloads in the on-premises cluster can use to reach services in the MetalLB cluster.

Prepare the Cloud-Native Router VPC Gateway Cluster

  1. Create the Amazon EKS cluster that you want to act as the Cloud-Native Router VPC Gateway.
    The cluster must meet the system requirements described in System Requirements for EKS Deployment.

    Since you're not installing Cloud-Native Router explicitly, you can ignore any requirement that relates to downloading the Cloud-Native Router software package or configuring the Cloud-Native Router Helm chart.

  2. Ensure all worker nodes in the cluster have identical interface names and identical root passwords.
    In this example, we'll use eth2 to connect to the MetalLB cluster and eth3 to connect to the on-premises cluster.
  3. Once the cluster is up, create a jcnr-secrets.yaml file. This file holds your Cloud-Native Router license and the base64-encoded root password for your nodes.
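    The file typically defines a namespace and a secret holding the license and root password. The structure below is a sketch; confirm the exact format against the Juniper documentation for your release.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: jcnr-system
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: jcnr-secrets
        namespace: jcnr-system
      data:
        root-password: <base64-encoded root password>
        crpd-license: <base64-encoded license>
      type: Opaque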
  4. Follow the steps in Installing Your License to install your Cloud-Native Router BYOL license in the jcnr-secrets.yaml file.
  5. Enter the base64-encoded form of the root password for your nodes into the jcnr-secrets.yaml file at the line designated for the root password.
    You must enter the password in base64-encoded format. To encode the password, create a file with the plain text password on a single line, then base64-encode that file (see the example command below) and copy the output into the designated location in jcnr-secrets.yaml.
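    For example, on Linux:

      base64 -w 0 <password-file>    # <password-file> contains the plain-text password on a single line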
  6. Apply jcnr-secrets.yaml to the cluster.
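    For example:

      kubectl apply -f jcnr-secrets.yaml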
  7. Create the secret for accessing the MetalLB cluster.
    1. Base64-encode the MetalLB cluster kubeconfig file.
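      For example, on Linux:

        base64 -w 0 <metalLB-kubeconfig>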
      where <metalLB-kubeconfig> is the kubeconfig file for the MetalLB cluster.

      The output of this command is the base64-encoded form of the MetalLB cluster kubeconfig.

    2. Create the YAML defining the MetalLB cluster kubeconfig secret. We'll use a namespace called jcnr-gateway, which we'll define later.
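      A sketch of the secret definition follows. The secret name metallb-kubeconfig and the data key kubeconfig are assumptions; whatever name you choose here is what you later reference in the kubeconfigSecretName field of the VPC Gateway custom resource. If the jcnr-gateway namespace does not exist yet, create it first (for example, kubectl create namespace jcnr-gateway).

        apiVersion: v1
        kind: Secret
        metadata:
          name: metallb-kubeconfig
          namespace: jcnr-gateway
        type: Opaque
        data:
          kubeconfig: <base64-encoded kubeconfig of MetalLB cluster>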
      where <base64-encoded kubeconfig of MetalLB cluster> is the base64-encoded output from the previous step.
    3. Apply the YAML.
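      For example:

        kubectl apply -f metallb-cluster-kubeconfig-secret.yaml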
      where metallb-cluster-kubeconfig-secret.yaml is the name of the YAML file defining the secret.
  8. Install webhooks.
  9. Create the jcnr-aws-configmap. See Cloud-Native Router ConfigMap for VRRP.
Your cluster is now ready for you to install the Cloud-Native Router VPC Gateway, but let's prepare the on-premises cluster first.

Prepare the On-Premises Cluster

The Cloud-Native Router VPC Gateway sets up an eBGP session and an iBGP session with the on-premises cluster:

  • The Cloud-Native Router VPC Gateway uses the eBGP session to learn the loopback IP address of the BGP speaker in the on-premises cluster. The Cloud-Native Router VPC Gateway then uses the loopback IP address to establish the subsequent iBGP session.

  • The Cloud-Native Router VPC Gateway uses the iBGP session to learn routes to the workloads in the on-premises cluster. For the iBGP session, you must configure the local and peer AS number to be 64512.

The Cloud-Native Router VPC Gateway does not impose any restrictions on the on-premises cluster as long as you configure it to establish the BGP sessions with the Cloud-Native Router VPC Gateway as described above and to expose routes to the desired workloads.

We don't cover configuring the on-premises cluster because that's very device-specific. You should configure the following, however, in order to be consistent with our ongoing example:

  • an eBGP speaker at 10.14.205.158 for the eBGP session

  • an iBGP speaker at 10.14.140.200 for exchanging EVPN routes

  • workloads reachable at 10.4.230.4/32