Install Juniper BNG CUPS

This section describes installation procedures and system requirements for Juniper BNG CUPS.

Juniper BNG CUPS disaggregates the broadband network gateway (BNG) function running in Junos OS into separate control plane and user plane components. The control plane is a cloud-native application that runs in a Kubernetes environment. The user plane component continues to run on Junos OS on a dedicated hardware platform.

The installation instructions in this guide are for the disaggregated control plane component of the Juniper BNG CUPS solution. In the Juniper BNG CUPS solution, the control plane is referred to as Juniper BNG CUPS Controller (BNG CUPS Controller). The BNG CUPS Controller component requires a Kubernetes cluster that consists of multiple nodes.

The BNG CUPS Controller can be installed on a single Kubernetes cluster or in a geographically redundant, multiple cluster setup. The installation requirements and installation processes for these two types of setups are different. See the following sections for your BNG CUPS Controller setup:

BNG CUPS Controller Installation Requirements

To install the BNG CUPS Controller, you must meet the hardware and software requirements listed in this section.

BNG CUPS Controller Requirements for a Single Geography Setup

The BNG CUPS Controller installs on a Kubernetes cluster that consists of physical or virtual machines. For availability, the cluster must have at least three nodes.

The BNG CUPS Controller requires, at minimum, the Kubernetes cluster resources listed in Table 1.

Table 1: Single Kubernetes Cluster Setup Requirements
Category Details

Cluster

A single cluster with 3 hybrid nodes.

Kubernetes node

The Kubernetes nodes require the following:

  • For the operating system, you can use either of the following:

    • Ubuntu 22.04 LTS (for a BBE Cloudsetup cluster)

    • Red Hat Enterprise Linux CoreOS (RHCOS) 4.16 or later (for an OpenShift Container Platform cluster)

  • CPU: 16 cores

  • Memory: 64 GB

  • Storage: 512 GB, partitioned as 128 GB root (/), 128 GB /var/lib/docker, and 256 GB /mnt/longhorn (application data)

  • Kubernetes role: Control plane etcd function and worker node

This specification establishes a cluster that can run the BNG CUPS Controller as well as its companion applications, such as BBE Event Collection and Visualization (ECAV) and Address Pool Manager (APM), simultaneously.

Jump host

The jump host requires the following:
  • Operating system: Ubuntu version 22.04 LTS

  • CPU: 2 cores

  • Memory: 8 gibibytes (GiB)

  • Storage: 128 gibibytes (GiB)

  • Installed software:

    • Python 3.10-venv

    • Helm utility

    • Docker utility

    • OpenShift CLI. Required if you are using a Red Hat OpenShift Container Platform cluster.

Cluster software

The cluster requires the following software:

  • RKE version 1.3.15 (Kubernetes 1.24.4)—Kubernetes distribution

  • MetalLB version 0.13.7—Network load balancer

  • Keepalived version 2.2.8—Kubelet HA VIP Controller

  • Longhorn version 1.2.6—CSI

  • Flannel version 0.15.1—CNI

  • Registry version 2.8.1—Container registry

  • OpenShift version 4.16 or later—Kubernetes distribution for RHOCP. Uses compatible versions of Longhorn (CSI), MetalLB, OVN (CNI), and the OpenShift Image Registry.

Jump host software

The jump host requires the following software:

  • Kubectl version 1.28.6+rke2r1—Kubernetes client

  • Helm version 3.12.3—Kubernetes package manager

  • Docker-ce version 20.10.21—Docker engine

  • Docker-ce-cli version 20.10.21—Docker engine CLI

  • OpenShift CLI (oc) version 4.16 or later—CLI client for RHOCP clusters.

Storage

A storage class named jnpr-bbe-storage.

Network load balancer address

Two addresses, for the TCP and UDP load balancing services.

Registry storage

Each BNG CUPS Controller release requires 2 gibibytes (GiB) of registry storage for its container images.

BNG CUPS Controller Requirements for a Multiple Geography Setup

A geographically redundant, multiple cluster BNG CUPS Controller setup consists of three separate Kubernetes clusters. The three clusters are geographically separated so that service-impacting events affecting one cluster do not affect the others. The clusters that make up the multiple cluster setup take on specific roles.

One cluster takes on the role of the management cluster, and the other two clusters take on the role of workload clusters. The workload clusters provide a redundant platform where most of the BNG CUPS Controller application runs. The management cluster runs software that monitors the health of the workload clusters and determines whether the BNG CUPS Controller should switch over from one workload cluster to the other.

Each Kubernetes cluster in the multiple cluster setup consists of physical or virtual machines (VMs).

The BNG CUPS Controller requires, at minimum, the Kubernetes cluster resources listed in Table 2.

Table 2: Multiple Geography Kubernetes Cluster Setup Requirements
Category Details

Cluster

The multiple cluster setup consists of three clusters, each with 3 hybrid nodes.

The three clusters must consist of one management cluster and two workload clusters.

Note:

Make sure that the pod and service CIDRs for each workload cluster do not overlap. The cluster internal networks of each workload cluster are connected by a Submariner IP tunnel, so the internal CIDRs must be distinct.

Management cluster, Kubernetes node

The management cluster Kubernetes node requires the following:

  • For the operating system, you can use either of the following:

    • Ubuntu 22.04 LTS (for a BBE Cloudsetup cluster)

    • Red Hat Enterprise Linux CoreOS (RHCOS) 4.16 or later (for an OpenShift Container Platform cluster)

  • CPU: 8 cores

  • Memory: 24 GB

  • Storage: 256 GB of storage partitioned according to the following:

    • On a Rancher Kubernetes Engine 2 (RKE2) system—64 GB root (/), 96 GB /var/lib/rancher, and 96 GB /var/lib/longhorn (application data)

    • On a RHOCP system—64 GB root (/), 96 GB /var/lib/containers, and 96 GB /var/lib/longhorn

  • Kubernetes role: Control plane etcd function and worker node

This specification establishes a cluster that can run Karmada and ECAV simultaneously.

Workload cluster, Kubernetes node

The workload cluster Kubernetes node requires the following:

  • For the operating system, you can use either of the following:

    • Ubuntu 22.04 LTS (for a BBE Cloudsetup cluster)

    • Red Hat Enterprise Linux CoreOS (RHCOS) 4.16 or later (for an OpenShift Container Platform cluster)

  • CPU: 16 cores

  • Memory: 64 GB

  • Storage: 512 GB of storage, partitioned according to the following:

    • On an RKE2 system—128 GB root (/), 128 GB /var/lib/rancher, and 256 GB /var/lib/longhorn (application data)

    • On a RHOCP system—128 GB root (/), 128 GB /var/lib/containers, and 256 GB /var/lib/longhorn

  • Kubernetes role: Control plane etcd function and worker node

This specification establishes a cluster that can run BNG CUPS Controller as well as its companion applications such as BBE Event Collection and Visualization and APM simultaneously.

Jump host

The jump host requires the following:
  • Operating system: Ubuntu version 22.04 LTS

  • CPU: 2 cores

  • Memory: 8 gibibytes (GiB)

  • Storage: 128 gibibytes (GiB)

  • Installed software:

    • Python 3.10-venv

    • Helm utility

    • Docker utility

    • OpenShift CLI. Required if you are using a Red Hat OpenShift Container Platform cluster.

Cluster software

The cluster requires the following software:

  • RKE2 version 1.28.6+rke2r1—Kubernetes distribution

    • MetalLB version 0.14.3—Network load balancer

    • Kube-vip version 0.7.1—Kubelet HA VIP Controller

    • Longhorn version 1.6.0—CSI

    • Canal—CNI (part of RKE2 distribution)

    • Registry version 2.8.1—Container registry

  • OpenShift version 4.16 or later—Kubernetes distribution for RHOCP. Uses compatible versions of Longhorn (CSI), MetalLB, OVN (CNI), and the OpenShift Image Registry.

  • Karmada version 1.13.1—Multiple cluster orchestration. Required for the management cluster.

  • Submariner version 0.20.0—Layer 3 tunneling.

Jump host software

The jump host requires the following software:

  • Kubectl version 1.28.6+rke2r1—Kubernetes client.

  • Helm version 3.12.3—Kubernetes package manager.

  • Docker-ce version 20.10.21—Docker engine.

  • Docker-ce-cli version 20.10.21—Docker engine CLI.

  • OpenShift CLI Tool (oc) version 4.16 or later—CLI client for RHOCP clusters.

  • Subctl version 0.20.0—Submariner CLI utility.

  • Kubectl Karmada version 1.13.1—Kubectl karmada plug-in.

Storage

A storage class named jnpr-bbe-storage

Network load balancer address

Two for the TCP and UDP load balancing services for each workload cluster.

One for the TCP load balancing service for the management cluster.

Registry storage

Each BNG CUPS Controller release requires 2.5 gibibytes (GiB) of registry storage for its container images on each cluster.

Note:

In a single geography BNG CUPS Controller setup, you can make some basic assumptions about the cluster's parameters, so you can use a quick-start tool such as BBE Cloudsetup to create the setup. Building a production BNG CUPS Controller setup with multiple geographies and multiple clusters requires much more input from you.

Additional Requirements

To use Juniper BNG CUPS, you must purchase a license for both the Juniper BNG CUPS Controller (control plane) and the Juniper BNG User Planes (user planes) associated with the Juniper BNG CUPS Controller. For information about how to purchase a software license, contact your Juniper Networks sales representative at https://www.juniper.net/in/en/contact-us/.

The MX Series devices that you are using in your Juniper BNG CUPS environment also require their own separate licenses. For information about how to purchase hardware, contact your Juniper Networks sales representative at https://www.juniper.net/in/en/contact-us/.

Confirm that you have a juniper.net user account with permissions to download the BNG CUPS Controller software package. Download and install the BNG CUPS Controller software from a machine that will not be part of the Kubernetes cluster.

Install Juniper BNG CUPS Controller in a Single Geography Setup

Use the procedures in this section to install Juniper BNG CUPS Controller in a single geography setup for the first time.

Before you begin, confirm that you have met the requirements for the BNG CUPS Controller installation (see Table 1).

You have two options for creating the Kubernetes cluster on which you install the BNG CUPS Controller:

  • BBE Cloudsetup—For installation instructions, see BBE Cloudsetup Installation Guide.

    Note:

    BBE Cloudsetup is a utility that you can use to quickly get started with BNG CUPS Controller. It is not a life cycle tool for the cluster: you cannot use it to expand the cluster, perform node maintenance, upgrade infrastructure components, and so on. A Kubernetes cluster for production purposes should be designed and constructed around the requirements of the production environment and with appropriate support to maintain its life cycle.

  • Red Hat OpenShift Container Platform—For installation instructions, see the Red Hat OpenShift Container Platform documentation.

Before starting the BNG CUPS Controller installation, make sure that you have the following information:

Required Information:

  • Container registry details:

    • If you are using a BBE Cloudsetup created cluster:

      • External registry address

      • External registry port number (usually 5000)

    • If you are using a Red Hat OpenShift Container Platform cluster:

      • External registry (FQDN)

      • Internal (Docker) registry address

      • Internal (Docker) registry port number

Optional Information:

  • Storage class name for persistent volume claim (PVC) creation (default is jnpr-bbe-storage).
  • PVC size (default is 90 MiB).
  • Syslog server details—Syslog server information is required if you are planning to export BNG CUPS Controller logs to an external syslog collector.

    • Syslog server address

    • Syslog server port number

Install the BNG CUPS Controller Application (Single Geography)

  1. Download the Juniper BNG CUPS software package from the Juniper Networks software download page, and save it to the jump host.

    BNG CUPS Controller is available as a compressed TAR file image (.tgz). The filename includes the release number, which has the format m.nZb.s.

    For example, the software release number 23.4R1.5 breaks down as follows:

    • m is the main release number of the product (for example, 23).

    • n is the minor release number of the product (for example, 4).

    • Z is the type of software release (for example, R for FRS or maintenance release).

    • b is the build number of the product (for example, 1, indicating the FRS release, rather than a maintenance release).

    • s is the spin number of the product (for example, 5).

  2. Unpack the BNG CUPS Controller TAR (.tgz) file on the jump host by entering:
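    For example, assuming a hypothetical package filename of juniper-bng-cups-controller-23.4R1.5.tgz (substitute the actual filename that you downloaded):

      # unpack the BNG CUPS Controller package in the current directory
      tar -xzf juniper-bng-cups-controller-23.4R1.5.tgz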
  3. Run the loader script after you unpack the TAR file.
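    The following is a sketch only. It assumes that the unpacked package provides a loader script named dbng_loader (the name is inferred from the dbng_loader output mentioned in the next step); run the script that is actually delivered in the package:

      # run the loader script from the directory where you unpacked the package (script name assumed)
      sudo -E ./dbng_loader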
  4. Use the sudo -E dbng link --context context-name --version software-release command to link to the cluster. The link command associates the loaded BNG CUPS Controller software package to the cluster in preparation for the setup.
    • context context-name—The Kubernetes context name.

    • version software-release—The BNG CUPS Controller software version, as displayed from the dbng_loader output.
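    For example, with placeholder values for the context name and the software release:

      # link the loaded 23.4R1.5 package to the cluster context named single-geo-cluster (placeholder values)
      sudo -E dbng link --context single-geo-cluster --version 23.4R1.5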

  5. If you are installing BNG CUPS Controller on a Red Hat OpenShift Container Platform cluster, log in with the OpenShift CLI as a user with admin privileges and then proceed to the next step.
    If you are installing BNG CUPS Controller on a BBE Cloudsetup created cluster, proceed to the next step.
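    For example, a typical OpenShift CLI login (the API server URL and user name are placeholders; this applies only to Red Hat OpenShift Container Platform clusters):

      # log in to the RHOCP cluster as a user with admin privileges (placeholder URL and user)
      oc login https://api.ocp-cluster.example.com:6443 -u admin-user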
  6. You must authenticate with the container registry to be able to push the BNG CUPS Controller container images. How you authenticate to the registry varies depending on whether you are installing BNG CUPS Controller on a BBE Cloudsetup created cluster or on a Red Hat OpenShift Container Platform cluster (see the respective documentation for details).

    If you are using a secure registry (created on a BBE Cloudsetup created cluster), you authenticate with the registry by issuing a docker login as the system user (the system user supplied in the BBE Cloudsetup configuration file) to the cluster's registry transport address (the FQDN supplied as the system address in the BBE Cloudsetup configuration file).
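    For example, assuming a registry FQDN of bbe-cluster.example.com and the default registry port of 5000 (both placeholders; use the system address and system user from your BBE Cloudsetup configuration file):

      # authenticate to the cluster's secure registry as the system user (placeholder values)
      docker login bbe-cluster.example.com:5000 -u system-user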

  7. Run setup to configure your installation. The setup command does the following:
    • Establishes the operational parameters for the Kubernetes deployment.

      If you did not use either the bbecloudsetup option or the template file-name option with the setup command, you need to complete these prompts during the setup:

      • If you are using BBE Cloudsetup to create your cluster:

        • External registry address.

        • External registry port number.

      • If you are using a Red Hat OpenShift Container Platform cluster:

        • External registry (fully qualified domain name)

        • Internal (Docker) registry address

        • Internal (Docker) registry port number

    Note:

    context context-name is the only required option for the setup command.

    The options that you can use with the setup command are listed in the following:

    • context context-name—The Kubernetes context name.

    • h, help—Shows the help message and exits.

    • l, log [error, warning, info, debug]—Adjusts the log level.

    • no-color—Prints messages without colors.

    • bbecloudsetup—Fills in operational parameters that align with a bbecloudsetup created cluster so that you do not have to interact with BNG CUPS Controller during the setup process (see the BBE Cloudsetup Installation Guide for cluster installation instructions).

      Note:

      Only use either the bbecloudsetup option or the template file-name option. Do not use both options.

    • update—You will only be prompted for missing values during setup.

    • ssh host:port—A hostname or IP address of the cluster (any of the cluster’s nodes) and open port used for SSH access to the CLI.

    • secrets—Updates the keys, certificates, and secrets used by the BNG CUPS Controller.

    • verbose—Provides a detailed description before each prompted question.

    • config file-name—The name of the initial configuration file that you want BNG CUPS Controller to use during startup.

    • template file-name—A YAML formatted file that contains a subset of the configuration file that is created during setup. The values that are entered in the template file are used automatically by the setup process. When you use the template option, you are not required to manually enter the information contained in the template file during the setup process. You should only use the template option when using Red Hat OpenShift Container Platform to create the cluster or when creating a multiple geographical cluster. Table 3 describes the information that you need to enter into the template configuration file.

      Note:

      Only use either the bbecloudsetup option or the template file-name option. Do not use both options.

    • mandatory—Only asks required questions during setup.

    • optional—Only asks questions that are not required during setup.
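    For example, a minimal setup invocation for a BBE Cloudsetup created cluster might look like the following sketch. The dbng setup subcommand spelling and the long-option form are assumptions based on the other dbng commands shown in this guide, and the context name is a placeholder:

      # configure the installation non-interactively for a BBE Cloudsetup created cluster (sketch)
      sudo -E dbng setup --context single-geo-cluster --bbecloudsetup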

    Table 3: Setup File Field Descriptions

    Field

    Description

    External registry address

    The external registry address is an FQDN that the container images are pushed to.

    APM TLS secret name

    The Transport Layer Security (TLS) secret name for Address Pool Manager (APM).

    APM certificate

    The certificate for APM.

    APM private key

    The private key for APM.

    APM root certificate

    The root certificate for APM.

    BBE observer connection TLS secret name

    The TLS secret name for the BBE observer connection.

    BBE observer connection certificate

    The certificate for the BBE observer connection.

    BBE observer connection private key

    The private key for the BBE observer connection.

    BBE observer connection root certificate

    The root certificate for the BBE observer connection.

    DTLS secret name

    The Datagram Transport Level Security (DTLS) secret name.

    DTLS certificate

    The DTLS certificate.

    DTLS private key

    The DTLS private key.

    DTLS root certificate

    The DTLS root certificate.

    Registry for k8s to pull from

    The transport address or FQDN:port that the container images are pulled from.

    Startup config to mount on rollout

    The configuration file to use for the initial configuration. If a configuration file is not provided, a factory-default configuration is used.

    (Optional) CPi Config Storage Name

    The name of the configured CPi storage class.

    (Optional) CPi Config Storage Size

    The size of the configured CPi storage class in mebibytes (MiB).

    (Optional) CPi Core Storage Name

    The name of the CPi core storage class.

    (Optional) CPi Core Storage Size

    The size of the CPi core storage class in mebibytes (MiB).

    (Optional) Scache Core Storage Name

    The name of the Scache core storage class.

    (Optional) Scache Core Storage size

    The size of the Scache core storage class in mebibytes (MiB).

  8. Verify the BNG CUPS Controller installation by running the dbng-version command.
    • context context-name—The Kubernetes context name.

    • detail—Displays all available software versions.
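    For example, the following sketch assumes that the version check follows the same dbng command pattern used elsewhere in this procedure (the exact subcommand spelling may differ on your system):

      # display the installed BNG CUPS Controller software versions (sketch; placeholder context name)
      sudo -E dbng version --context single-geo-cluster --detail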

Start BNG CUPS Controller in a Single Geography Setup

Use this procedure to configure and to start BNG CUPS Controller in a single geography setup.

  1. Enter rollout to start the BNG CUPS Controller installation. The BNG CUPS Controller utility allows you to roll out different software versions for all microservices that are part of BNG CUPS Controller. You must run the rollout command with sudo as root. The rollout command also validates that all the values needed for the new release are present and loads the new release container images to the registry. Use sudo -E dbng rollout --context context-name --version software-release --service service-name to start the BNG CUPS Controller services (see the example after the notes below).
    • context context-name—The Kubernetes context name.

    • service service-name—The microservice name to roll out (for example, scache or cpi).

    • version software-release—The software release to roll out (defaults to the release that is linked to the cluster).

    Note:

    On the first rollout, --service is not required. The --service option is used with --version to roll out (upgrade) specific versions of specific services.

    Note:

    By default, BNG CUPS Controller starts from factory-default. The configuration is reset to its initial state. Any persistent state and any persistent logs are cleared.
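    For example, a first rollout that starts all services using the release linked to the cluster (the context name and release number are placeholders; --service is not needed on the first rollout):

      # roll out all BNG CUPS Controller services for the linked release (placeholder values)
      sudo -E dbng rollout --context single-geo-cluster --version 23.4R1.5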

  2. Enter dbng status --context context-name --detail to verify that the BNG CUPS Controller services are up and running.
    Note:

    Collect the logs for a service and contact the Juniper Networks Technical Assistance Center (JTAC) when either of the following occurs:

    • The service is not running.

    • The service’s up time compared with other services indicates that it has restarted.

  3. You must add a control plane instance (CPi) to your BNG CUPS Controller. Run the CPi add command.
    • context-name—The Kubernetes context name.

    • version software-release—The software release for the new CPi pod. Enter a release.

    • cpi-label—The label uniquely names the CPi for identification purposes (for example, cpi-example-1).
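    The following is a sketch only. The cpi add subcommand and flag spellings are assumptions based on the option names listed above; the context name, release number, and label are placeholders:

      # add a CPi named cpi-example-1 to the BNG CUPS Controller (sketch; assumed flag spellings)
      sudo -E dbng cpi add --context single-geo-cluster --version 23.4R1.5 --cpi-label cpi-example-1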

  4. Verify that the CPi microservice is running by using the dbng status command.
    • context-name—The Kubernetes context name.

    • detail—Displays detailed information.

Install Juniper BNG CUPS Controller in a Multiple Geography Setup

Use the installation procedures in this section for a BNG CUPS Controller setup that consists of multiple clusters located in different geographical locations.

Before you begin, confirm that you have met the requirements for the BNG CUPS Controller installation (see Table 2).

Before starting the BNG CUPS Controller installation, make sure that you have the following information:

Note:

You must collect the information listed below for all three clusters.

  • The cluster context names of the workload clusters, the management cluster's Karmada context, and the management cluster's working context

  • Karmada kubeconfig secret name—The kubeconfig file for the Karmada context on the management cluster. You can extract the kubeconfig file for the Karmada context from the management cluster context in the karmada-system namespace.

    For an example of the command to run, see the sketch after this list.

  • Container registry details for each cluster:

    • External registry address

    • External registry port number (usually 5000)

  • Syslog server details—Syslog server information is required if you are planning to export BNG CUPS Controller logs to an external syslog collector.

    • Syslog server address

    • Syslog server port number

  • Kubeconfig for the management cluster.
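The following sketch shows one way to extract the Karmada kubeconfig from the management cluster (referenced in the Karmada kubeconfig secret name item above). The secret name karmada-kubeconfig and the data key kubeconfig are assumptions; check the secrets in the karmada-system namespace on your management cluster for the actual names:

  # extract the Karmada context kubeconfig from the karmada-system namespace (assumed secret and key names)
  kubectl --context management-context-name -n karmada-system get secret karmada-kubeconfig \
    -o jsonpath='{.data.kubeconfig}' | base64 -d > karmada-kubeconfig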

Install the BNG CUPS Controller Application (Multiple Geography Setup)

  1. Create the jnpr-bng-controller namespace/project in the management context.
    • context management-context-name—The context name for the management cluster. Be aware that this context is not the same as the Karmada context name that is associated with the management cluster.
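    For example, using kubectl against the management cluster context (on an RHOCP cluster you can use oc new-project jnpr-bng-controller instead):

      # create the BNG CUPS Controller namespace on the management cluster
      kubectl --context management-context-name create namespace jnpr-bng-controller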

  2. Create the karmada-kconf secret on the management cluster, in the BNG CUPS Controller namespace, using the management cluster’s kubeconfig.
    Note:

    The karmada-kconf secret contains the kubeconfig that is used by the observer to monitor the status of the CPi. If the secret is not created, the observer (and the BNG CUPS Controller) will not operate correctly.

    Run a command similar to the one shown after the following placeholder descriptions:

    • management-context-name—The context name for the management cluster. Be aware that this context is not the same as the Karmada context name that is associated with the management cluster.

    • karmada-secret-file—The management cluster’s kubeconfig file.
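    The following is a sketch of the command. It assumes that the kubeconfig is stored under a data key named kubeconfig (the key name is an assumption); karmada-secret-file and management-context-name are the placeholders described above:

      # create the karmada-kconf secret in the jnpr-bng-controller namespace on the management cluster (sketch)
      kubectl --context management-context-name -n jnpr-bng-controller \
        create secret generic karmada-kconf --from-file=kubeconfig=karmada-secret-file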

  3. Verify that the secret is created; a sample command follows the note below.
    Note:

    Depending on your setup, the number of bytes listed may vary.
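    For example, kubectl describe lists each data item in the secret along with its size in bytes:

      # confirm that the karmada-kconf secret exists and inspect its data sizes
      kubectl --context management-context-name -n jnpr-bng-controller describe secret karmada-kconf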

  4. Download the Juniper BNG CUPS software package from the Juniper Networks software download page, and save it to the jump host.

    BNG CUPS Controller is available as a compressed TAR file image (.tgz). The filename includes the release number, which has the format m.nZb.s.

    For example, the software release number 23.4R1.5 breaks down as follows:

    • m is the main release number of the product (for example, 23).

    • n is the minor release number of the product (for example, 4).

    • Z is the type of software release (for example, R for FRS or maintenance release).

    • b is the build number of the product (for example, 1, indicating the FRS release, rather than a maintenance release).

    • s is the spin number of the product (for example, 5).

  5. Unpack the BNG CUPS Controller TAR (.tgz) file on the jump host, as described in step 2 of the single geography installation procedure.
  6. Run the loader script after you unpack the TAR file.
  7. Use the sudo -E dbng link command to link to the cluster. In preparation for running setup, the link command takes the list of workload cluster contexts and an observer context and associates them with the loaded BNG CUPS Controller software package (an example appears after Figure 1).
    • context karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • workload-contexts workload-1-context-name workload-2-context-name—The two workload context names.

    • observer-context management-context-name—The context name for the management cluster. Be aware that this context is not the same as the Karmada context name that is associated with the management cluster.

    • version software-release—The BNG CUPS Controller software version, as displayed from the dbng_loader output.

    Note:

    During installation, Karmada is installed on the management cluster and Karmada creates its own context on the management cluster. The Karmada context is used to target any operations involving the workload clusters. The management cluster also has its own context that is used for running noncritical centralized workloads (for example, BBE Event Collection and Visualization). You use this management cluster context for the observer-context.

    Figure 1 shows where the different contexts are located in a multiple cluster setup.

    Figure 1: Multiple Cluster Contexts (high-level architecture showing the jump host, the Karmada and management contexts, and Workload Clusters A and B connected by a secure Layer 3 tunnel)
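    For example, with placeholder context names and release number:

      # link the loaded package to the Karmada, workload, and observer contexts (placeholder values)
      sudo -E dbng link --context karmada-context-name \
        --workload-contexts workload-1-context-name workload-2-context-name \
        --observer-context management-context-name --version 23.4R1.5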
  8. When using RHOCP clusters, you must authenticate to all three clusters (the management cluster and the two workload clusters) with the OpenShift CLI before you can interact with them.

    For an example of the command to run, see the following:
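    For example, a sketch that authenticates to each of the three clusters in turn (the API server URLs and user name are placeholders):

      # log in to the management cluster and both workload clusters (placeholder URLs and user)
      oc login https://api.mgmt-cluster.example.com:6443 -u admin-user
      oc login https://api.workload-1.example.com:6443 -u admin-user
      oc login https://api.workload-2.example.com:6443 -u admin-user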

  9. In order to push the BNG CUPS Controller container images, you must authenticate with the registry on each cluster in the multiple cluster setup. You authenticate with the registry by issuing a docker login as the system user (the system user entered in the BBE Cloudsetup configuration file) to the cluster's registry transport address (the FQDN supplied as the system address in the BBE Cloudsetup configuration file).
  10. Run setup to configure your installation. The setup command does the following:
    • Collects information about the cluster environment—The information collected consists of the following:

      • Names of storage classes or persistent volumes

      • Location of a container registry

      • Container and pod name of registry

      • TLS keys

      • Information relevant to supporting BNG CUPS Controller features.

    • Initializes the BNG CUPS Controller configuration.

    During the setup process you will be prompted for information. To avoid entering information during the setup process, you can use a template file (for a description of the template file, see template).

    If you did not use a template file, the following prompts may appear during the setup process:

    • The following information for the observer on the management cluster:

      • External registry address and port number for each cluster. Press Enter after entering the information for each cluster.

      • Registry address for the observer (management cluster) to pull from.

      • Karmada kubeconfig secret name

      • Enable TLS (default is False)

      • TLS secret name

    • The following information for the primary workload (workload-1) cluster. After entering the information, press Enter to continue to the backup workload cluster:

      • External registry address and port number for each cluster. Press Enter after entering the information for each cluster.

      • Observer TLS CA secret name

    • The following information for the backup (workload-2) workload cluster:

      • External registry address and port number for each cluster. Press Enter after entering the information for each cluster.

      • Observer TLS CA secret name

    • Backup workload cluster—Enter the name of the backup workload cluster

    • Backup workload registry address

    • Primary workload cluster—Enter the name of the primary workload cluster

    • APM TLS secret name

      Note: The name of the Kubernetes secret object in the jnpr-bng-controller namespace containing the TLS key and certificates. If a secret does not exist, leave this field empty and setup will prompt you for files containing the certificate, private key, and root certificate. The Kubernetes secret will then be created for you. If a secret is provided, setup will not prompt you for the certificate, private key, or root certificate files.
    • APM certificate

    • APM private key

    • APM root certificate

    • BBE observer connection TLS secret name

    • BBE observer connection certificate

    • BBE observer connection private key

    • BBE observer connection root certificate

    • DTLS secret name

      Note: The name of the Kubernetes secret object in the jnpr-bng-controller namespace containing the TLS key and certificates. If a secret does not exist, leave this field empty and setup will prompt you for files containing the certificate, private key, and root certificate. The Kubernetes secret will then be created for you. If a secret is provided, setup will not prompt you for the certificate, private key, or root certificate files.
    • DTLS certificate

    • DTLS private key

    • DTLS root certificate

    • Registry for k8s to pull from

    • Startup config to mount on rollout

    • CPi Config storage class name and size

    • CPi Core storage class name and size

    • Scache Core storage name

    • Scache Core storage size

    The options that you can use with the setup command are listed in the following:

    • context karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • h, help—Shows the help message and exits.

    • l, log [error, warning, info, debug]—Adjusts the log level.

    • no-color—Prints messages without colors.

    • update—You will only be prompted for missing values during setup.

    • ssh host:port-number—A hostname or IP address of the cluster (any of the cluster’s nodes) and open port used for SSH access to the CLI.

    • secrets—Updates the keys, certificates, and secrets used by the BNG CUPS Controller.

    • verbose—Provides a detailed description before each prompted question.

    • config file-name—The name of the initial configuration file that you want BNG CUPS Controller to use during startup.

    • template file-name—A YAML formatted file that contains a subset of the configuration file that is created during setup. The values that are entered in the template file are used automatically by the setup process. When you use the template option, you are not required to manually enter the information contained in the template file during the setup process. You should only use the template option when using Red Hat OpenShift Container Platform to create the cluster or when creating a multiple geographical cluster. Table 3 describes the information that you need to enter into the template configuration file.

    • mandatory—Only asks required questions during setup.

    • optional—Only asks questions that are not required during setup.
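    For example, a sketch of a setup invocation that uses a template file. The dbng setup subcommand spelling and long-option form are assumptions based on the other dbng commands in this guide, and multi-geo-template.yaml is a placeholder filename:

      # configure the multiple geography installation using values from a template file (sketch)
      sudo -E dbng setup --context karmada-context-name --template multi-geo-template.yaml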

  11. Verify the BNG CUPS Controller installation by running the dbng-version command.
    • context context-name—The Kubernetes context name.

    • detail—Displays all available software versions.

Start BNG CUPS Controller in a Multiple Geography Setup

Use this procedure to configure and to start BNG CUPS Controller in a multiple geography setup.

  1. Enter rollout to start the BNG CUPS Controller installation. The BNG CUPS Controller utility allows you to roll out different software versions for all microservices that are part of your BNG CUPS Controller multiple geography setup. You must run the rollout command with sudo as root. The rollout command also validates that all the values needed for the new release are present and loads the new release container images to the registry. Use sudo -E dbng rollout --context karmada-context-name --version software-release --service service-name to start the BNG CUPS Controller services.
    • karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • service service-name—The microservice name to roll out (for example, scache or cpi).

    • version software-release—The software release to roll out (defaults to the release that is linked to the cluster).

    Note:

    On the first rollout, --service is not required. The --service option is used with --version to roll out (upgrade) specific versions of specific services.

    Note:

    By default, BNG CUPS Controller starts from factory-default. The configuration is reset to its initial state. Any persistent state and any persistent logs are cleared.

  2. Enter dbng status --context karmada-context-name to verify that the BNG CUPS Controller services are up and running.
    Note: For a detailed output, you can use the dbng status --context karmada-context-name --detail command.
    • karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • detail—Displays detailed information.

    Note:

    Collect the logs for a service and contact the Juniper Networks Technical Assistance Center (JTAC) when either of the following occurs:

    • The service is not running.

    • The service’s up time compared with other services indicates that it has restarted.

  3. You must add a control plane instance (CPi) to your BNG CUPS Controller. Run the CPi add command.
    • context karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • version software-release—The software release for the new CPi pod. Enter a release.

    • ip-aaa ip-address—The IP address to deploy for CPi AAA actions. You use the ip-aaa option to specify a single external IP address to use for the RADIUS listener port on the CPi. The address remains the same across multiple geography switchovers. The ip-aaa option requires an L3-enabled MetalLB instance. MetalLB BGP peering is used to direct traffic to the CPi on the active workload cluster.

    • label—Specify a name for the new CPi.
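    The following is a sketch only. The cpi add subcommand and flag spellings are assumptions based on the option names listed above; the release number, IP address, and label are placeholders (192.0.2.10 is a documentation address):

      # add a CPi with a stable external AAA address and the label cpi-example-1 (sketch; assumed flag spellings)
      sudo -E dbng cpi add --context karmada-context-name --version 23.4R1.5 \
        --ip-aaa 192.0.2.10 --label cpi-example-1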

  4. Verify that the CPi microservice is running by using the dbng status command.

Install a BNG User Plane

The BNG User Planes that you use as part of Juniper BNG CUPS are the MX Series routers that you have installed in your network. BNG User Planes (MX Series routers) run Junos OS. If you need to install a BNG User Plane, see the Junos® OS Software Installation and Upgrade Guide.

Note: You must use a line card interface to communicate with the BNG CUPS Controller. The management interface (fxp0) on a BNG User Plane is not a supported interface for the SCi and CPRi services.