APM Installation

APM Installation Overview

Juniper Address Pool Manager (APM) is an automated, centralized, container-based cloud-native application that network operators and administrators use to manage IP prefix resources. APM works with managed broadband network gateways (BNGs) to monitor address pools on BNGs. When the number of free addresses drops below a set threshold, the BNG raises an alarm. The alarm triggers APM to allocate unused prefixes from its global list of prefixes and provision a subset of the prefixes to the BNG as new pools.

APM can be installed on a single Kubernetes cluster or on a multiple geography, multiple cluster setup. The installation requirements and installation process for these two types of setups are different. See the following sections for the requirements for your APM setup:

Note:

The term BNG in this document also applies to the BNG CUPS Controller.

You can deploy APM on any hardware that meets the requirements. The following sections describe:

  • APM installation requirements

  • How to install APM

  • How to adjust APM setup parameters

APM Installation Requirements

To install APM, you must meet the hardware and software requirements listed in this section.

APM Requirements for a Single Geography Setup

APM installs on a Kubernetes cluster composed of physical or virtual machines (VMs). For availability, you must have at least three nodes hosting the control plane's etcd function and three nodes hosting the worker function in the cluster.

APM has been qualified against the single geography cluster described in Table 1.

Table 1: Single Kubernetes Cluster Setup Requirements

Cluster

A single cluster with 3 hybrid nodes.

Kubernetes node

The Kubernetes nodes require the following:

  • For the operating system, you can use either of the following:

    • Ubuntu 22.04 LTS (for a BBE Cloudsetup cluster)

    • Red Hat Enterprise Linux CoreOS (RHCOS) 4.16 or later (for a Red Hat OpenShift Container Platform cluster)

  • CPU: 14 or 16 cores. Use a 16-core node if you plan to run other applications on the cluster (such as the BNG CUPS Controller application).

  • Memory: 64 GB

  • Storage: 512 GB, partitioned as 128 GB root (/), 128 GB /var/lib/docker, and 256 GB /mnt/longhorn (application data)

  • Kubernetes role: Control plane etcd function and worker node

This specification establishes a cluster that can run APM as well as its companion applications such as BBE Event Collection and Visualization and BNG CUPS Controller simultaneously.

Jump host

The jump host requires the following:
  • Operating system: Ubuntu version 22.04 LTS or 24.04 LTS

  • CPU: 2 cores

  • Memory: 8 gibibytes (GiB)

  • Storage: 128 gibibytes (GiB)

  • Installed software:

    • Python 3.10-venv (Ubuntu 22.04) or 3.12-venv (Ubuntu 24.04)

    • Helm utility

    • Docker utility

    • OpenShift CLI. Required if you are using a Red Hat OpenShift Container Platform (RHOCP) cluster.

Cluster software

The cluster requires the following software:

  • RKE version 1.3.15 (Kubernetes 1.24.4)—Kubernetes distribution

  • MetalLB version 0.13.7—Network load balancer

  • Keepalived version 2.2.8—Kubelet HA VIP Controller

  • Longhorn version 1.2.6—CSI

  • Flannel version 0.15.1—CNI

  • Registry version 2.8.1—Container registry

  • OpenShift version 4.16 or later—Kubernetes distribution for RHOCP. Uses compatible versions of Longhorn (CSI), MetalLB, OVN (CNI), and OpenShift Image Registry

Jump host software

The jump host requires the following software:

  • Kubectl version 1.28.6+rke2r1—Kubernetes client

  • Helm version 3.12.3—Kubernetes package manager

  • Docker-ce version 20.10.21—Docker engine

  • Docker-ce-cli version 20.10.21—Docker engine CLI

  • OpenShift CLI (oc) version 4.16 or later—OpenShift client for RHOCP clusters.

Storage

A storage class named jnpr-bbe-storage.

Network load balancer address

One for APMi.

Registry storage

Each APM release requires approximately 3 gibibytes (GiB) of container images.

APM Requirements for a Multiple Geography Setup

A geographically redundant, multiple cluster APM setup consists of three separate Kubernetes clusters. Each of the three clusters is geographically separated, so that service-impacting events affecting one cluster do not affect the other clusters. The clusters that comprise the multiple cluster setup take on specific roles.

One cluster takes on the role of the management cluster and the other two clusters take on the role of workload clusters. The workload clusters provide a redundant platform where most of the APM application runs. The management cluster hosts execution of the Karmada multiple cluster orchestration software. Karmada manages the propagation of APM workloads across the workload clusters.

Each Kubernetes cluster node in a multiple cluster can be constructed from a physical or virtual machine (VM).

APM has been qualified against the multiple geography cluster described in Table 2.

Table 2: Multiple Geography Kubernetes Cluster Setup Requirements

Cluster

The multiple cluster setup consists of three clusters, each consisting of 3 hybrid nodes.

The three clusters must consist of one management cluster and two workload clusters.

Note:

Make sure that the cluster and service CIDRs for each workload cluster do not overlap. The cluster internal networks of each workload cluster are connected by a Submariner IP tunnel, so the internal CIDRs must be distinct.

Management cluster Kubernetes node

The management cluster Kubernetes node requires the following:

  • For the operating system, you can use either of the following:

    • Ubuntu 22.04 LTS (for a BBE Cloudsetup cluster)

    • Red Hat Enterprise Linux CoreOS (RHCOS) 4.16 or later (for a RHOCP cluster)

  • CPU: 8 cores

  • Memory: 24 GB

  • Storage: 256 GB of storage partitioned according to the following:

    • On a Rancher Kubernetes Engine 2 (RKE2) system—64 GB root (/), 96 GB /var/lib/rancher, and 96 GB /var/lib/longhorn (application data)

    • On a RHOCP system—64 GB root (/), 96 GB /var/lib/containers, and 96 GB /var/lib/longhorn

  • Kubernetes role: Control plane etcd function and worker node

This specification establishes a cluster that can run Karmada and BBE Event Collection and Visualization (ECAV) simultaneously.

Workload cluster Kubernetes node

The workload cluster Kubernetes node requires the following:

  • For the operating system, you can use either of the following:

    • Ubuntu 22.04 LTS (for a BBE Cloudsetup cluster)

    • Red Hat Enterprise Linux CoreOS (RHCOS) 4.16 or later (for a RHOCP cluster)

  • CPU: 14 or 16 cores. Use a 16-core node if you plan to run other applications on the cluster (such as the BNG CUPS Controller application).

  • Memory: 64 GB

  • Storage: 512 GB of storage, partitioned according to the following:

    • On an RKE2 system—128 GB root (/), 128 GB /var/lib/rancher, and 256 GB /var/lib/longhorn

    • On a RHOCP system—128 GB root (/), 128 GB /var/lib/containers, and 256 GB /var/lib/longhorn

  • Kubernetes role: Control plane etcd function and worker node

This specification establishes a cluster that can run APM as well as its companion applications such as BBE Event Collection and Visualization and BNG CUPS Controller simultaneously.

Jump host

The jump host requires the following:
  • Operating system: Ubuntu version 22.04 LTS or 24.04 LTS

  • CPU: 2 cores

  • Memory: 8 gibibytes (GiB)

  • Storage: 128 gibibytes (GiB)

  • Installed software:

    • Python 3.10-venv (Ubuntu 22.04) or 3.12-venv (Ubuntu 24.04)

    • Helm utility

    • Docker utility

    • OpenShift CLI. Required if you are using a RHOCP cluster.

Cluster software

The cluster requires the following software:

  • RKE2 version 1.28.6+rke2r1—Kubernetes distribution

    • MetalLB version 0.14.3—Network load balancer

    • Kube-vip version 0.7.1—Kubelet HA VIP Controller

    • Longhorn version 1.6.0—CSI

    • Canal—CNI (part of RKE2 distribution)

    • Registry version 2.8.1—Container registry

  • OpenShift version 4.16 or later—Kubernetes distribution for RHOCP. Uses compatible versions of Longhorn (CSI), MetalLB, OVN (CNI), and OpenShift Image Registry.

  • Karmada version 1.13.1—Multiple cluster orchestration. Required for the management cluster.

  • Submariner version 0.20.0—Layer 3 tunneling.

Jump host software

The jump host requires the following software:

  • Kubectl version 1.28.6+rke2r1—Kubernetes client.

  • Helm version 3.12.3—Kubernetes package manager.

  • Docker-ce version 20.10.21—Docker engine.

  • Docker-ce-cli version 20.10.21—Docker engine CLI.

  • OpenShift CLI tool (oc) version 4.16 or later—OpenShift client for RHOCP clusters.

  • Subctl version 0.20.0—Submariner CLI utility.

  • Kubectl Karmada version 1.13.1—Kubectl karmada plug-in.

Storage

A storage class named jnpr-bbe-storage.

Network load balancer addresses

One on each workload cluster for APMi and one on the management cluster for bbe-observer.

Registry storage

Each APM release requires approximately 3 gibibytes (GiB) of container images. Required for each cluster.

Note:

In a single geography APM setup, you can make some basic assumptions about the cluster's parameters, so you can use a quick start tool like BBE Cloudsetup to create the setup. Constructing a production environment APM setup with multiple geographies and multiple clusters requires much more input from you.

Additional Requirements

The BNG is a Juniper Networks MX Series router running Junos OS or a Juniper BNG CUPS Controller.

We recommend the following releases:
  • Junos OS Release 23.4R2-S5 or later

  • BNG CUPS Controller 24.4R2 or later

For APM, confirm that you have a juniper.net user account with permissions to download the APM software package. Download and install the APM software from a machine that will not be part of the Kubernetes cluster.

Install a Single Geography APM

Use the procedures in this section to install a single geography APM for the first time.

Before you begin, confirm that you have met the requirements for the APM installation.

We recommend that you use a secure connection between APM and the BNG.

You have the following two options for installing APM:

  • Install a Single Geography APM Using the APM Installation Utility—You can install APM using the APM utility, which streamlines the installation process.

    Note:

    BBE Cloudsetup is a utility that you can use to quickly get started with APM. It is not a life cycle tool for the cluster: you cannot use it to expand the cluster, perform node maintenance, upgrade infrastructure components, and so on. A Kubernetes cluster for production purposes should be designed and constructed to the requirements of the production environment and with appropriate support to maintain its life cycle. (For information about BBE Cloudsetup, see the BBE Cloudsetup Installation Guide.)

  • Install a Single Geography APM Without Using the APM Utility—You can install APM on a preexisting Kubernetes cluster of your choice. This process is a manual process and does not use the APM utility that comes with the APM installation package.

Before starting the APM installation, make sure that you have the following information:

Required Information:

  • Container registry details:
    • If you are using a BBE Cloudsetup created cluster:

      • External registry address.

      • External registry port number (usually 5000).

    • If you are using a Red Hat OpenShift Container Platform cluster:

      • External registry (fully qualified domain name)

      • Internal (Docker) registry address

      • Internal (Docker) registry port number

Optional Information:

  • APM initial configuration file. If a configuration file is not supplied, a basic configuration file is automatically generated.
  • Storage class name for persistent volume claim (PVC) creation (default is jnpr-bbe-storage).
  • PVC Size (default is 90 MiB).
  • Archival configuration details. This is required if you are planning to mirror a copy of the APM configuration to an external server.
    • Either the name of the SSH private key file or the name of the Kubernetes secret that is present in the jnpr-apm namespace containing the SSH private key.

    • The Secure Copy Protocol (SCP) URL of the server where the configuration file will be archived. An SCP URL takes the form of scp://user-login@server-fqdn:server-port/absolute-file-path (for example, scp://user@host1.mydomain.com:30443/home/user/configs/apm).

  • Syslog server details. This is required if you are planning to export APM logs to an external syslog collector.
    Note:

    If BBE Event Collection and Visualization is detected running on the target cluster, the address and port values of the ECAV deployment will be suggested as the default.

    • Syslog server address.

    • Syslog server port number.

  • Network load balancer details. This is required if you are planning to use a specific network load balancer pool and address for APMi.
    • Network load balancer pool name.

    • Network load balancer pool address.

  • APMi Details:
    • Port (default is 20557)
    • TLS details. You will need one of the following:
      • None (insecure)

      • Either the key and certificate files or the name of the Kubernetes secret that is present in the jnpr-apm namespace that contains the key and certificate information.

  • Service Account Name—The name of the Kubernetes service account used to bind certain operational privileges to the mgmt microservice. If a service account name is not provided, APM creates a service account named apm-svca during rollout.

  • SSH service type—If SSH access to the mgmt microservice is specified (--ssh <ip>:<port>), you must specify whether the service should be created as a node port (NodePort) service or a load balancer (LoadBalancer) service. If LoadBalancer is selected, a MetalLB pool is created containing the supplied external IP address. The load balancer service created at rollout is assigned the external IP address from the newly created MetalLB pool.

  • DBSync service type—The apm multicluster status APM utility command collects the state to display from the DBSync microservice through a Kubernetes service. By default, a node port service is created for this purpose. If you select LoadBalancer, you are prompted for an external IP address and a MetalLB pool is created containing the supplied external IP address. The LoadBalancer service created at rollout is assigned the external IP address from the newly created MetalLB pool.

  • Number of worker processes for the provman microservice (default is 3).

Install a Single Geography APM Using the APM Installation Utility

Use the procedure in this section to install a single geography APM with the APM Installation Utility.

  1. Download the APM software package from the Juniper Networks software download page to the jump host.

    APM is available as a compressed TAR (.tgz) file. The filename includes the release number as part of the name. The release number has the format: <Major>.<Minor>.<Maintenance>

    • major is the main release number of the product.
    • minor is the minor release number of the product.
    • maintenance is the revision number.
  2. Unpack the APM TAR (.tgz) file on the jump host by entering:
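    For example, assuming a package file named apm-<release>.tgz (the actual filename varies by release):

      tar zxvf apm-<release>.tgz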
  3. Run the loader script after you unpack the TAR file.
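    For example, assuming the unpacked package places a loader script named apm_loader in the current directory:

      sudo -E ./apm_loader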
  4. Use the sudo -E apm link --context context-name --version apm-version command to link to the cluster. The link command associates the loaded APM software package to the cluster context in preparation for the setup.
    • context-name is the Kubernetes context.

    • apm-version is the software version.
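    For example (my-cluster is a hypothetical context name):

      sudo -E apm link --context my-cluster --version <apm-version>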

  5. If you are installing APM on a Red Hat OpenShift Container Platform cluster, log in with the OpenShift CLI and then proceed to the next step.
    If you are installing APM on a BBE Cloudsetup created cluster, proceed to the next step.
  6. You must authenticate with the container registry to be able to push the APM container images. How you authenticate to the registry varies depending on whether you are installing APM on a BBE Cloudsetup created cluster or on a Red Hat OpenShift Container Platform cluster (see the respective documentation for details).
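    For example, on a BBE Cloudsetup created cluster, you might log in to the external registry with Docker (the registry address is hypothetical):

      docker login myregistry.example.net:5000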
  7. Run setup to configure your installation. The setup command does the following:
    • Collects information about the cluster environment such as: container registry contact information, keys and certificates needed to secure external interfaces, persistent storage resources, and other information relevant to supporting APM features.

    • Establishes the operational parameters for the Kubernetes deployment.

      If you did not use either the --bbecloudsetup option or the --template file-name option with the setup command, you need to complete these prompts during the setup:

      • If you are using BBE Cloudsetup to create your cluster:

        • External registry address.

        • External registry port number.

      • If you are using a Red Hat OpenShift Container Platform cluster:

        • External registry (fully qualified domain name)

        • Internal (Docker) registry address

        • Internal (Docker) registry port number

    Note:

    When running setup, you can interact with the setup process by entering ^d.

    If you want to change a value after entering it, enter ^d. The value you previously entered is removed, and the default value is used for the question. You can use the ^d operation for any setup question that is optional or for which a list of values can be provided.

    Note:

    --context context-name is the only required option for the setup command.
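    For example, a minimal interactive setup invocation (my-cluster is a hypothetical context name):

      sudo -E apm setup --context my-cluster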

    The options that you can use with the setup command are the following:

    • --context context-name—The Kubernetes context name.

    • -h, --help—Shows the help message and exits.

    • -l, --log [error, warning, info, debug]—Adjusts the log level.

    • --no-color—Prints messages without colors.

    • --bbecloudsetup—Fills in operational parameters that align with a BBE Cloudsetup created cluster so that you do not have to interact with APM during the setup process (see the BBE Cloudsetup Installation Guide for cluster installation instructions).

      Note:

      Use either the --bbecloudsetup option or the --template file-name option. Do not use both options.

    • --update—You will only be prompted for missing values during setup.

    • --ssh host:port—A hostname or IP address of the cluster (any of the cluster’s nodes) and an open port used for SSH access to the CLI.

      Note:

      Enabling SSH access requires the MGMT microservice to run in privileged mode.

    • --secrets—Updates the keys, certificates, and secrets used by APM.

    • --verbose—Provides a detailed description before each prompted question.

    • --config config-file-path-name—The name of the initial configuration file that you want APM to use during startup.

      Note:

      You can use an initial configuration file to start and roll out APM. You supply the configuration file through the --config config-file-path-name switch on the utility script’s setup command.

      When APM is started or rolled out, the configuration file that you supply during setup is used to initialize APM. If you do not supply a configuration file, APM starts with the factory defaults. If the BBE Event Collection and Visualization application is running on the cluster, the factory defaults include the bbe-ecav syslog server configuration.

      The supplied configuration file is stored in the jump host’s context repository. This allows the configuration to be preserved across APM start and stop events. Commits to the initial configuration are not automatically saved to the persistent location on the jump host. To update the configuration at the persistent location, use the utility script’s save-config command.

      Using the save-config command ensures that the latest configuration is used the next time that APM is started and rolled out. To restore the initial configuration to its factory default, run setup interactively and enter ^d at the startup config ... question.

      The action in the parentheses changes to remove. Press Enter to accept the removal of the deployed configuration. APM reverts to the factory default configuration after a stop and then rollout command sequence.

      When you change the initial configuration file using the utility script’s setup command, you must perform a stop and then rollout command sequence for the change to take effect.

    • --template file-name—A YAML formatted file that contains a subset of the operational parameters file that is created during setup. The values entered in the template file are used automatically by the setup process, so you are not required to manually enter that information during setup. Use the --template option when using Red Hat OpenShift Container Platform to create the cluster or when creating a multiple geography cluster. Table 3 describes the information that you can enter into the template configuration file.

      Note:

      Use either the --bbecloudsetup option or the --template file-name option. Do not use both options.

    • --mandatory—Only asks required questions during setup.

    • --optional—Only asks questions that are not required during setup.

    Table 3: Setup File Field Descriptions

    External registry address

    The external registry address is a fully qualified domain name (FQDN) that the container images are pushed to.

    Internal (Docker) registry transport address (fqdn:port)

    The internal registry transport address is the address from which the container images are pulled during rollout. This address is typically different from the external registry address.

    (Optional) Initial APM configuration file

    The configuration file that is used at APM startup.

    (Optional) Cluster storage-class name

    The name of the Kubernetes storage class to use for creating persistent volume claims (PVCs). The management microservice uses a PVC to record the configuration state.

    (Optional) Cluster storage size

    The PVC size in mebibytes (MiB).

    (Optional) Configuration archival server

    When you configure the Configuration archival server option, APM archives a copy of the updated configuration to an external server after each successful commit.

    To configure the server information where configuration file changes are archived, you must enter the following information:

    • ssh-key information. Provide information for one of the following:

      • The name of a Kubernetes Secret in the APM namespace that contains the SSH private key data.

      • The name of the SSH private-key file.

      Note:

      If a secret name is supplied, you will not be prompted for the SSH private-key file.

    • The Secure Copy Protocol (SCP) URL of the server where the configuration file will be archived.

      Note:

      The URL must use the following format: scp://user-login@server-fqdn:server-port/absolute-file-path (for example, scp://user@host1.mydomain.com:30443/home/user/configs/apm).

      Upon successful commit, the candidate configuration is transferred by SCP to the archival URL as a compressed file with the name:

      apm-identifier_YYYYMMDD_HHMMSS_juniper.conf.n.gz

      • apm-identifier is the external IP address of the APMi interface.

      • YYYYMMDD_HHMMSS is the time stamp in Coordinated Universal Time (UTC).

      • n is the number designation of the compressed configuration rollback file.

    (Optional) Syslog Details

    If you want to export APM log information to an external syslog collector, enter the following syslog server information:

    • IP address or fully qualified domain name

    • Port number

    Syslog information is included in the generated factory default configuration file. If you did not use the generated factory default configuration file, and used your own initial configuration file, you must include the system syslog host stanza containing the connection details for the syslog server.

    (Optional) Network Load Balancer Pool

    If you want the APMi external address to be allocated from a specific network load balancer address pool, enter the following network load balancer pool information:

    • Network load balancer pool annotation (for example: metallb.io/address-pool: myMetalIpAddressPool)

    • Network load balancer address annotation (for example: metallb.io/loadBalancerIPs: 10.1.1.3)

    (Optional) APMi port

    The APMi port number (default is 20557).

    (Optional) APMi secrets

    To secure the APMi (recommended), enter one of the following:

    • The name of a Kubernetes secret in the APM namespace that contains the TLS secret data (root Certificate Authority certificate, certificate, private-key)

    • Key files (root Certificate Authority certificate, certificate, and private key)
    Note:

    If a secret is provided, you will not be prompted for the Key files during installation.

    (Optional) Service Account Name

    The name of the Kubernetes service account used to bind certain operational privileges to the mgmt microservice. If a service account name is not provided, APM creates a service account named apm-svca during rollout.

    (Optional) Type of apm SSH service

    If SSH access to the mgmt microservice is specified (--ssh <ip>:<port>), you must specify whether the service should be created as a node port (NodePort) service or a load balancer (LoadBalancer) service. If LoadBalancer is selected, a MetalLB pool is created containing the supplied external IP address. The load balancer service created at rollout is assigned the external IP address from the newly created MetalLB pool.

    (Optional) DBSync service type

    The apm multicluster status APM utility command collects the state to display from the DBSync microservice through a Kubernetes service. By default, a node port service is created for this purpose. If you select LoadBalancer, you are prompted for an external IP address and a MetalLB pool is created containing the supplied external IP address. The LoadBalancer service created at rollout is assigned the external IP address from the newly created MetalLB pool.

    (Optional) Number of worker processes

    The number of provman worker processes determines how many simultaneous processes provman deploys to handle the entity workload. We suggest that you plan for 20 entities per process. Each process can consume a CPU core on the node it is running on. Therefore, the nodes in the cluster must have sufficient CPU cores to support the number of provman processes (plus any other workloads that may be running on a node).

    You can configure 1 to 10 worker processes (default is 3).

    (Optional) Provide service account for the Observer's Controller Manager

    The service account for the controller manager. If you select True for this prompt, you are prompted for a service account name. If you select False, the bbe-observer-controller-manager service account is created.

    (Optional) Provide service account for the Observer's gRPC server

    The service account for the gRPC server. If you select True for this prompt, you are prompted for a service account name. If you select False, the bbe-observer-grpc-server service account is created.

  8. Verify the APM installation by running the apm version command.
    • --context context-name—The Kubernetes context name.

    • --detail—Displays all available software versions.
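    For example (my-cluster is a hypothetical context name):

      apm version --context my-cluster --detail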

Start APM in a Single Geography Setup

Use this procedure to configure and to start APM in a single geography setup.

  1. Enter rollout to start the APM installation. You must run the rollout command with sudo (as root). The rollout command also validates that all the values needed for the new release are present and loads the new release container images to the registry. Use sudo -E apm rollout --context context-name to start the APM services.
    • --context context-name—The Kubernetes context.
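    For example (my-cluster is a hypothetical context name):

      sudo -E apm rollout --context my-cluster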

    Note:

    By default, APM starts with the values that you provided during setup. Unless the configuration was saved, the initial configuration is what was entered during setup. All other persistent states (logs, database keys, and so on) are cleared.

  2. Enter apm status --context context-name [-o|--output json] [--detail] to verify that the APM services are up and running. For example:
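    Assuming a hypothetical context named my-cluster:

      apm status --context my-cluster --detail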
    Note:

    Collect the logs for a service and contact the Juniper Networks Technical Assistance Center (JTAC) when either of the following occurs:

    • The service is not running.

    • The service’s uptime compared with other services indicates that it has restarted.

Install a Single Geography APM Without Using the APM Utility

The instructions in this section describe the steps for installing a single geography APM on a preexisting Kubernetes cluster of your choice. This is a manual process and does not use the APM utility that comes with the APM installation package.

  1. Download the APM software package from the Juniper Networks software download page to the jump host.

    APM is available as a compressed TAR (.tgz) file. The filename includes the release number as part of the name. The release number has the format: <Major>.<Minor>.<Maintenance>

    • major is the main release number of the product.
    • minor is the minor release number of the product.
    • maintenance is the revision number.
  2. Unpack the APM TAR (.tgz) file on the jump host by entering:
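    For example, assuming a package file named apm-<release>.tgz:

      tar zxvf apm-<release>.tgz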
  3. The container images needed by APM are stored in the images subdirectory. You must push the images to the registry from which the scheduled application images will be pulled. Depending on the type of container registry being used, the commands may differ. The following commands illustrate one method of pushing container images to the registry:
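    The following sketch assumes a Docker-compatible registry at the hypothetical address myregistry.example.net:5000 and image archives under the images subdirectory; adjust the names to your environment and repeat for each image:

      # Load an image archive from the package into the local Docker engine
      docker load -i images/<image-archive>.tar

      # Tag the loaded image for the target registry, then push it
      docker tag <image>:<tag> myregistry.example.net:5000/<image>:<tag>
      docker push myregistry.example.net:5000/<image>:<tag>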
  4. To prepare APM for deployment, you must create a YAML configuration file for each microservice. Each microservice's configuration file contains the specific configuration settings for the microservice. The YAML configuration file is called values.yaml and is located under the charts subdirectory for each microservice. You should create a separate values.yaml (for example, new-values.yaml) specific to your configuration for each microservice. Table 4 describes the fields in the microservices' configuration files (values.yaml).
    Note:

    If you do not want to create multiple values.yaml files, you can create a single values.yaml that contains information for all the microservices. The single values.yaml is located under the umbrella chart in the apm/apm/charts/address_pool_manager folder. The procedures in this section describe how to configure an individual YAML configuration file for each microservice.

    Create a new values.yaml file for each of the microservices by making a copy of the file and then saving the new file, as shown in the example after the following list. Update each file according to your Kubernetes cluster's information.
    Following are the microservices and their values.yaml file location:
    • redis microservice—Located at apm/apm/charts/redis

    • mgmt microservice—Located at apm/apm/charts/mgmt

    • addrman microservice—Located at apm/apm/charts/addrman

    • entman microservice—Located at apm/apm/charts/entman

    • provman microservice—Located at apm/apm/charts/provman
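    For example, to create a copy for the redis microservice (the new filename is your choice):

      cp apm/apm/charts/redis/values.yaml apm/apm/charts/redis/new-values.yaml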

    Table 4: Microservices Configuration File Field Descriptions

    The microservices that each field applies to are shown in parentheses.

    • APMi port (provman)—The APMi exposed port number.

    • APMi secrets (provman):

      • name—The namespace secret to mount

      • certificate—Certificate file name

      • key—Private key file name

      • rootca—CA certificate file name

    • apmInitVersion (mgmt, redis)—APM init software version.

    • archivalUrl (mgmt)—The Secure Copy Protocol (SCP) URL of the server where the configuration file is archived.

    • db master updateStrategy (redis)—Only RollingUpdate is supported.

    • evictionToleration (addrman, entman, mgmt, provman, redis)—The node's unreachable tolerance (in seconds).

    • log_level (addrman, entman, mgmt, provman, redis)—The default logging level.

    • nlbPoolAnnotation (provman)—The network load balancer pool name.

    • nlbPoolIpAnnotation (provman)—The network load balancer IP address.

    • pvcs config (mgmt):

      • meta—Persistent volume claim (PVC) for configuration file storage.

      • size—PVC size (MiB).

    • registry (addrman, entman, mgmt, provman, redis)—Registry information:

      • host—The registry contact for cluster pulls.

      • port—The registry port number for cluster pulls.

    • resourceRequestsEnabled (addrman, entman, mgmt, provman, redis)—Whether or not to accept the resource request.

    • resourceRanges (addrman, entman, mgmt, provman, redis)—Required resource ranges:

      • cpuRequest—The minimum millicores required to operate the system.

      • memRequest—The minimum mebibytes (MiB) required to operate the system.

    • sentinelCount (redis)—The number of sentinels to start.

    • startup config (mgmt)—The configuration to use for system startup.

    • storage_class (mgmt)—Name of the storage class for the PVC.

    • tlsEnabled (provman)—Indicates whether TLS is enabled.

    • workerProcs (provman)—The number of worker processes that you want started.

  5. After you have made all the desired changes to your new values.yaml files for each microservice, deploy the microservices with the new values.yaml files.

    Run the following commands:
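    The following sketch assumes that the target namespace is jnpr-apm, that the Helm release names match the microservice names, and that each chart directory contains your new-values.yaml; none of these names come from the APM package itself:

      helm install redis apm/apm/charts/redis --namespace jnpr-apm --values apm/apm/charts/redis/new-values.yaml
      helm install mgmt apm/apm/charts/mgmt --namespace jnpr-apm --values apm/apm/charts/mgmt/new-values.yaml
      helm install addrman apm/apm/charts/addrman --namespace jnpr-apm --values apm/apm/charts/addrman/new-values.yaml
      helm install entman apm/apm/charts/entman --namespace jnpr-apm --values apm/apm/charts/entman/new-values.yaml
      helm install provman apm/apm/charts/provman --namespace jnpr-apm --values apm/apm/charts/provman/new-values.yaml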

  6. To complete the configuration of the provman microservice, you must run the helm upgrade command twice. The double pass is required because the provman microservice uses the external IP address assigned to the APMi load balancer service in the protocol exchanges with its entities. The first helm upgrade command establishes the external IP address, and the second helm upgrade command passes the external IP address to the provman microservice, which allows it to initialize.

    Run the following commands:
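    The following sketch uses the same hypothetical namespace, release name, and values file as the previous step. The same command runs twice:

      # First pass: creates the APMi load balancer service, which is assigned an external IP address
      helm upgrade provman apm/apm/charts/provman --namespace jnpr-apm --values apm/apm/charts/provman/new-values.yaml

      # Second pass: passes the now-known external IP address to provman so that it can initialize
      helm upgrade provman apm/apm/charts/provman --namespace jnpr-apm --values apm/apm/charts/provman/new-values.yaml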

  7. Verify the APM installation by running the Kubernetes Command Line Tool command kubectl get pods and verifying that the APM pods are running.
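    For example, assuming the pods run in the jnpr-apm namespace:

      kubectl get pods -n jnpr-apm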
  8. Verify that the services are present. Run the Kubernetes Command Line Tool command kubectl get services.
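    For example:

      kubectl get services -n jnpr-apm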

Install APM in a Multiple Geography Setup

Use the installation procedures in this section for an APM setup that consists of multiple APMs that are located in different geographical locations.

Before you begin, confirm that you meet the requirements for the APM installation (see Table 2).

Prerequisites

Before starting the APM installation, make sure that you have the following information:

For descriptions of the following information, see Table 3.

Required Information:

  • The cluster context names of the workload clusters, the management cluster's Karmada context, and the management cluster's working context.

    For example, your context output might look like the following:
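    (All context and cluster names below are hypothetical; karmada-apiserver is the Karmada context created on the management cluster.)

      kubectl config get-contexts
      CURRENT   NAME                CLUSTER             AUTHINFO
      *         mgmt-cluster        mgmt-cluster        mgmt-admin
                karmada-apiserver   karmada-apiserver   karmada-admin
                workload1           workload1           wl1-admin
                workload2           workload2           wl2-admin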

  • Karmada kubeconfig—The kubeconfig file for the Karmada context on the management cluster. You can extract the kubeconfig file for the Karmada context from the management cluster context in the karmada-system namespace.

    For an example of the command to run, see the following:
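    The following sketch assumes that Karmada stores the kubeconfig in a secret named karmada-kubeconfig, with a data key named kubeconfig, in the karmada-system namespace:

      kubectl --context management-context-name --namespace karmada-system get secret karmada-kubeconfig --output jsonpath='{.data.kubeconfig}' | base64 -d > karmada-kubeconfig.yaml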

  • Container registry details for each cluster:

    Note:

    You must collect the following information for all three clusters.

    • External registry address

    • External registry port number (usually 5000)

Optional Information:

  • Service account for the controller manager. If you select True for this prompt, you are prompted for a service account name. If you select False, a service account named bbe-observer-controller-manager is created.

  • Service account for the gRPC server. If you select True for this prompt, you are prompted for a service account name. If you select False, a service account named bbe-observer-grpc-server is created.

  • APM initial configuration file. If a configuration file is not supplied, a basic configuration file is automatically generated.
  • Storage class name for persistent volume claim (PVC) creation (default is jnpr-bbe-storage).
  • PVC Size (default is 90 MiB).
  • Archival configuration details. This is required if you are planning to mirror a copy of the APM configuration to an external server.
    • Either the name of the SSH private key file or the name of the Kubernetes secret that is present in the jnpr-apm namespace containing the SSH private key.

    • The Secure Copy Protocol (SCP) URL of the server where the configuration file will be archived. An SCP URL takes the form of scp://user-login@server-fqdn:server-port/absolute-file-path (for example, scp://user@host1.mydomain.com:30443/home/user/configs/apm).

  • Syslog server details. This is required if you are planning to export APM logs to an external syslog collector.
    Note:

    If BBE Event Collection and Visualization is detected running on the target cluster, the address and port values of the ECAV deployment will be suggested as the default.

    • Syslog server address.

    • Syslog server port number.

  • Network load balancer details. This is required if you are planning to use a specific network load balancer pool and address for APMi.
    • Network load balancer pool name.

    • Network load balancer pool address.

  • APMi Details:
    • Port (default is 20557)
    • TLS details. You will need one of the following:
      • None (insecure)

      • Either the key and certificate files or the name of the Kubernetes secret that is present in the jnpr-apm namespace that contains the key and certificate information.

  • Service account name—The name of the Kubernetes service account used to bind certain operational privileges to the mgmt microservice. If a service account name is not provided, APM creates a service account named apm-svca during rollout.

  • DBSync service type—The apm multicluster status APM utility command collects the state to display from the DBSync microservice through a Kubernetes service. By default, a node port service is created for this purpose. If you select LoadBalancer, you are prompted for an external IP address and a MetalLB pool is created containing the supplied external IP address. The LoadBalancer service created at rollout is assigned the external IP address from the newly created MetalLB pool.

  • Number of worker processes for the provman microservice (default is 3).

Install the APM Application (Multiple Geography Setup)

  1. Create the jnpr-apm namespace/project in the management context.
    • context management-context-name—The context name for the management cluster. Be aware that this context is not the same as the Karmada context name that is associated with the management cluster.
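    For example:

      kubectl --context management-context-name create namespace jnpr-apm

    On an RHOCP cluster, you can use the equivalent oc new-project jnpr-apm command instead.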

  2. Create the karmada-kconf secret. Create the secret on the management cluster in the APM namespace using the management cluster’s kubeconfig.
    Note:

    The karmada-kconf secret contains the kubeconfig that is used by the observer to monitor the status of the CPi. If the secret is not created, the observer (and APM) will not operate correctly.

    • context management-context-name—The context name for the management cluster. Be aware that this context is not the same as the Karmada context name that is associated with the management cluster.

    • karmada-secret-file—The management cluster’s kubeconfig file.
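    For example, a sketch that assumes the kubeconfig data key is named kubeconfig:

      kubectl --context management-context-name --namespace jnpr-apm create secret generic karmada-kconf --from-file=kubeconfig=karmada-secret-file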

  3. Verify that the secret was created.
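    For example:

      kubectl --context management-context-name --namespace jnpr-apm get secret karmada-kconf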
  4. Download the APM software package from the Juniper Networks software download page to the jump host.

    APM is available as a compressed TAR (.tgz) file. The filename includes the release number as part of the name. The release number has the format: <Major>.<Minor>.<Maintenance>

    • major is the main release number of the product.
    • minor is the minor release number of the product.
    • maintenance is the revision number.
  5. Unpack the APM TAR (.tgz) file on the jump host by entering:
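    For example, assuming a package file named apm-<release>.tgz:

      tar zxvf apm-<release>.tgz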
  6. Run the loader script after you unpack the TAR file.
  7. Use the sudo -E apm link command to link to the cluster. In preparation for running setup, the link command takes the list of workload cluster contexts and an observer context and associates them to the loaded APM software package.
    • --context karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • --workload-contexts workload-1-context-name workload-2-context-name—The two workload context names.

    • --observer-context management-context-name—The context name for the management cluster. Be aware that this context is not the same as the Karmada context name that is associated with the management cluster.

    • --version software-release—The APM software version, as displayed in the apm_loader output.
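    For example (the context names are hypothetical):

      sudo -E apm link --context karmada-apiserver --workload-contexts workload1 workload2 --observer-context mgmt-cluster --version <software-release>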

    Note:

    During installation, Karmada is installed on the management cluster and Karmada creates its own context on the management cluster. The Karmada context is used to target any operations involving the workload clusters. The management cluster also has its own context that is used for running noncritical centralized workloads (for example, BBE Event Collection and Visualization). You use this management cluster context for the observer-context.

    Figure 1 shows where the different contexts are located in a multiple cluster setup.

    Figure 1: Multiple Cluster Contexts
  8. When using RHOCP clusters, authenticate with the three RHOCP clusters (the management cluster and the two workload clusters) using the OpenShift CLI before interacting with them.

    For an example of the command to run, see the following:
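    The following sketch logs in to one cluster's API endpoint (the URL and credentials are hypothetical); repeat the login for each of the three clusters:

      oc login https://api.mgmt-cluster.example.net:6443 -u kubeadmin -p <password>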

  9. To push the APM container images, you must authenticate with the registry on each cluster in the multiple cluster setup. Authenticate with the registry by issuing a docker login as the system user (the system user entered in the BBE Cloudsetup configuration file) to the cluster's registry transport address (the FQDN supplied as the system address in the BBE Cloudsetup configuration file).

    For an example of the command to run, see the following:
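    For example (the registry address is hypothetical):

      docker login registry.workload1.example.net:5000 -u <system-user>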

  10. Run setup to configure your installation. The setup command does the following:
    • Collects the following information about the cluster environment:

      • Names of storage classes or persistent volumes

      • Location of a container registry

      • Container and pod name of registry

      • TLS keys

      • Information relevant to supporting APM features.

    • Initializes the APM configuration.

    The following prompts will appear during the setup process:

    • Enter the following information for the management cluster:

      • External registry address and port number for each cluster. Press the Enter key after entering the information for each cluster.

      • Registry address and port number, for the observer (management cluster) to pull from.

      • Karmada kubeconfig secret name

      • Enable TLS (default is False)

      • TLS secret name

    • Enter the following for the primary workload cluster. After entering the information, press the Enter key to enter the information for the backup workload cluster:

      • Name of cluster

      • Cluster registry address and port number

    • Enter the following for the backup workload cluster:

      • Name of cluster

      • Cluster registry address and port number

    Note:

    To set up CLI access through SSH in a multiple geography deployment, you must use a template file (use the template option in the setup command). This enables you to configure the two workload cluster addresses.

    To configure SSH, add the following information (YAML formatted) for each workload cluster to the template file that you provide during the setup process:

    For more information regarding the SSH configuration in the template file, see the following:

    • ip-address—The IP address you use to manage the cluster from the jump host.

    • cluster-name—The name of the workload cluster as it appears in the output of the kubectl get clusters command.

    • available-port-value—If the NodePort option is entered in the type field, the port value must be a TCP port that is not used on any of the workload cluster's nodes. A best practice is to choose a port that avoids ports already in use (like the often-used SSH port 22) and that is below the ephemeral port range (ports 49152 and higher). This avoids possible port contention with the node itself.

    • service-name—The name you want the created service to use. A best practice is to include the application name, the purpose, and the workload cluster in the name (for example, apm-ssh-workload1).

    The options that you can use with the setup command are the following:

    • --context karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • -h, --help—Shows the help message and exits.

    • -l, --log [error, warning, info, debug]—Adjusts the log level.

    • --no-color—Prints messages without colors.

    • --update—You will only be prompted for missing values during setup.

    • --secrets—Updates the keys, certificates, and secrets used by APM.

    • --verbose—Provides a detailed description before each prompted question.

    • --config file-name—The name of the initial configuration file that you want APM to use during startup.

    • --template file-name—A YAML formatted file that contains a subset of the configuration file that is created during setup. The values entered in the template file are used automatically by the setup process, so you are not required to manually enter that information during setup. Use the --template option only when using Red Hat OpenShift Container Platform to create the cluster or when creating a multiple geography cluster. Table 3 describes the information that you need to enter into the template configuration file.

    • --mandatory—Only asks required questions during setup.

    • --optional—Only asks questions that are not required during setup.

  11. Verify the APM installation by running the apm version command.
    • --context karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • --detail—Displays all available software versions.
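    For example (karmada-apiserver is a hypothetical context name):

      apm version --context karmada-apiserver --detail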

  12. Create the karmada-kubeconfig secret from the kubeconfig file for the Karmada context (for additional information, see Prerequisites). The secret is necessary for the APM observer microservice, which runs in the management context, to monitor the workload cluster scheduling events in order to calculate a generation number used by APM in the cluster switchover process.
    Use the following command to create the karmada-kubeconfig secret (name the secret karmada-kconf):
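    The following sketch assumes the kubeconfig was extracted to karmada-kubeconfig.yaml (see Prerequisites) and that the data key is named kubeconfig:

      kubectl --context management-context-name --namespace jnpr-apm create secret generic karmada-kconf --from-file=kubeconfig=karmada-kubeconfig.yaml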

Start APM in a Multiple Geography Setup

Use this procedure to configure and to start APM in a multiple geography setup.

  1. Enter rollout to start the APM installation. The APM utility allows you to roll out different software versions for all microservices that are part of your APM multiple geography setup. You must run the rollout command with sudo (as root). The rollout command also validates that all the values needed for the new release are present and loads the new release container images to the registry. Use sudo -E apm rollout --context karmada-context-name --version software-release --service service-name to start the APM services.
    • --context karmada-context-name—The context name of the Karmada context that is created when Karmada is installed on the management cluster.

    • --service service-name—The microservice to roll out.

    • --version software-release—The software release to roll out (defaults to the release that is linked to the cluster).
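    For example (karmada-apiserver is a hypothetical context name):

      sudo -E apm rollout --context karmada-apiserver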

    Note:

    On the first rollout, the --service option is not required. Use --service with --version to roll out (upgrade) specific versions of specific services.

    Note:

    By default, APM starts with the values that you provided during setup. Unless the configuration was saved, the initial configuration is what was provided during setup. All other persistent states (logs, database keys, and so on) are cleared.

  2. Enter apm status --context karmada-context-name --detail to verify that the APM services are up and running. For example:
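    Assuming a hypothetical Karmada context named karmada-apiserver:

      apm status --context karmada-apiserver --detail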
    Note:

    Collect the logs for a service and contact the Juniper Networks Technical Assistance Center (JTAC) when either of the following occurs:

    • The service is not running.

    • The service’s uptime compared with other services indicates that it has restarted.