Deploy the Cluster

This topic describes the procedure to deploy a Routing Director deployment cluster on the VMs.

After you have created the node VMs and prepared them, configure the cluster parameters on the VMs and deploy the Routing Director deployment cluster. The steps to configure and deploy the cluster are the same regardless of the hypervisor you deploy the cluster on.

Perform the following steps to configure and deploy the Routing Director deployment cluster:

  1. Configure the node VMs.

  2. Deploy the cluster.

Configure the node VMs

When all the node VMs are created, perform the following steps to configure the VMs.

  1. Connect to the Web console of the first node VM. You are logged in as root automatically.

  2. You are prompted to change your password immediately. Enter and re-enter the new password. You are automatically logged out of the VM.

    Note:

    We recommend that you use the same password for all the VMs. If you configure different passwords for the VMs, you must enter each VM's password correctly when you generate SSH keys for the cluster nodes during cluster deployment.

  3. When prompted, log in again as root user with the newly configured password.

  4. Configure the following information when prompted.

    Table 1: VM Configuration Wizard

    Prompt: Do you want to set up a Hostname? (y/n)
    Action: Enter y to configure a hostname.

    Prompt: Please specify the Hostname
    Action: Enter an identifying hostname for the VM. For example, Primary1. The hostname must be fewer than 64 characters and can include alphanumeric characters and some special characters. If you do not enter a hostname, a default hostname in the format controller-<VM-IP-address-4th-octet> is assigned.

    Note:

    Because you deploy the cluster from one node and enter the IP addresses of the other nodes during the cluster configuration process, node roles are assigned automatically. The first three nodes that you configure are the primary and worker nodes, and the last node is the worker-only node.

    The hostnames (and whether or not they match the roles of the nodes) do not affect the operation of the cluster. However, for management purposes, we recommend that you pay attention to how you name the nodes and the order in which you enter their addresses during the cluster creation steps.

    We do not support changing the hostname after the cluster is installed.

    Prompt: Do you want to set up Static IP (preferred)? (y/n)
    Action: Enter y to configure an IP address for the VM.

    Prompt: Please specify the IP address in CIDR notation
    Action: Enter the IP address in CIDR notation. For example, 10.1.2.3/24. Your node VMs can be in the same subnet or in different subnets.

    Note:

    If you enter 10.1.2.3 instead of 10.1.2.3/24, you will see an Invalid IP address error message.

    Prompt: Please specify the Gateway IP
    Action: Enter the gateway IP address. Ensure that the gateway IP address corresponds to the network of your node VM.

    Prompt: Please specify the Primary DNS IP
    Action: Enter the primary DNS IP address.

    Prompt: Please specify the Secondary DNS IP
    Action: Enter the secondary DNS IP address.

    Prompt: Do you want to set up IPv6? (y/n)
    Action: Enter y to configure IPv6 addresses. If you don't want to configure IPv6 addresses, enter n and proceed to Step 5.

    Prompt: Please specify the IPv6 address in CIDR notation
    Action: Enter the IPv6 address in CIDR notation. For example, 2001:db8:1:2::3/64. Your node VMs can be in the same subnet or in different subnets.

    Note:

    If you enter 2001:db8:1:2::3 instead of 2001:db8:1:2::3/64, you will see an Invalid IP address error message.

    Prompt: Please specify the Gateway IPv6
    Action: Enter the gateway IPv6 address. Ensure that the gateway IPv6 address corresponds to the network of your node VM.

    Prompt: Please specify the Primary DNS IPv6
    Action: Enter the primary DNS IPv6 address.

    Prompt: Please specify the Secondary DNS IPv6
    Action: Enter the secondary DNS IPv6 address.

  5. When you are prompted to confirm that you want to proceed, review the displayed information, type y, and press Enter.

    You are logged in to Deployment Shell.

  6. Repeat steps 1 through 5 for the other VMs.

  7. (Optional) Verify connectivity between the nodes. Log in to all the node VMs. If you have been logged out, log in again as root with the previously configured password. You are placed in Deployment Shell operational mode. Type exit to enter the Linux root shell of the node. From each node, ping the other three nodes by using the ping static-ipv4-address command to verify that the nodes can reach each other.
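
    For example, from the first node (assuming the other nodes use the illustrative addresses 10.1.2.4 through 10.1.2.6):

      exit
      ping -c 5 10.1.2.4
      ping -c 5 10.1.2.5
      ping -c 5 10.1.2.6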

  8. (Optional) Before you proceed to deploy the cluster, verify that the NTP servers are reachable. On any one of the cluster nodes, type start shell. At the # prompt, ping each server by using the ping ntp-servers-name-or-address command. If the ping is unsuccessful, use an alternate NTP server.
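
    For example, where ntp.example.net is a placeholder for your NTP server:

      start shell
      ping -c 5 ntp.example.net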

You have completed the node preparation steps and are ready to deploy the cluster.

Deploy the cluster

Perform the following steps to configure and deploy the Routing Director deployment cluster by using the Deployment Shell CLI.

  1. Go back to the first node VM (Primary1). If you have been logged out, log in again as root with the previously configured password. You are placed in Deployment Shell operational mode.

  2. To configure the cluster, enter the configuration mode in Deployment Shell.

  3. Configure the following cluster parameters.

    Where:

    The IP addresses of the Kubernetes nodes with indexes 1 through 4 must match the static IP addresses that are configured on the node VMs. The Kubernetes nodes with indexes 1, 2, and 3 are the primary and worker nodes, and the node with index 4 is the worker-only node. The node IP addresses can be on the same subnet or on different subnets. If you are configuring a three-node cluster, skip configuring the Kubernetes node with index 4.

    ntp-servers is the NTP server to synchronize to.

    web-admin-user and web-admin-password are the e-mail address and password that the first user can use to log in to the Web GUI.

    ingress-vip is the VIP address for generic common ingress and is used to connect to the Web GUI.

    test-agent-gateway-vip is the VIP address for the Active Assurance Test Agent gateway (TAGW).

    The VIP addresses are added to the outbound SSH configuration that is required for a device to establish a connection with Routing Director.

    Note:

    In a multi-subnet cluster installation, the VIP addresses must not be in the same subnet as the cluster nodes.
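
    For example, a four-node configuration might look similar to the following. This sketch is illustrative only: the leaf parameter names are the ones described above, but the statement paths, addresses, and credentials are placeholders, so confirm the exact hierarchy in your release before you commit.

      # illustrative statement paths and placeholder values
      set paragon cluster nodes kubernetes index 1 address 10.1.2.3
      set paragon cluster nodes kubernetes index 2 address 10.1.2.4
      set paragon cluster nodes kubernetes index 3 address 10.1.2.5
      set paragon cluster nodes kubernetes index 4 address 10.1.2.6
      set paragon cluster ntp ntp-servers ntp.example.net
      set paragon cluster web-admin-user admin@example.com
      set paragon cluster web-admin-password <password>
      set paragon cluster common-services ingress ingress-vip 10.1.2.10
      set paragon cluster common-services ingress test-agent-gateway-vip 10.1.2.11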

  4. Configure the PCE server VIP address.

    Where:

    pce-server-vip is the VIP address that the PCE server uses to establish Path Computation Element Protocol (PCEP) sessions between Routing Director and the devices. The VIP address can be on the same subnet as the cluster nodes or on a different subnet. The VIP address can also be on a different subnet from the other VIP addresses.

    Note:

    Configure the PCE server VIP address to view your network topology updates in real time.

    You can also configure the VIP address at any time after cluster deployment. For information on how to configure the PCE server VIP address after cluster deployment, see Configure a PCE Server.
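
    For example, with a placeholder address and an illustrative statement path:

      # illustrative statement path
      set paragon cluster applications pathfinder pce-server-vip 10.1.2.12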

  5. (Optional) Configure the routing observability feature and the VIP addresses used to establish BGP Monitoring Protocol (BMP) sessions and to collect IPFIX data.

    Where:

    install-routingbot enables the routing observability feature.

    routingbot-crpd-vip is the VIP address that external network devices use as the BMP station IP address to establish the BMP session.

    routingbot-ipfix-vip is the VIP address to view predictor events.

    Warning: The bare minimum resources required to configure routing observability features are listed in Hardware Requirements. However, to get an estimate of the resources required to configure the routing observability feature on your production deployment, contact your Juniper Partner or Juniper Sales Representative.
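
    A hedged example, with placeholder addresses and an assumed statement path:

      # illustrative statement paths and values
      set paragon cluster applications install-routingbot true
      set paragon cluster applications routingbot-crpd-vip 10.1.2.13
      set paragon cluster applications routingbot-ipfix-vip 10.1.2.14
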
  6. (Optional) Enable the artificial intelligence and machine learning (AI/ML) feature to automatically monitor Key Performance Indicators (KPIs) related to a device's health and to detect blackholes on the device.

    Where:

    install-aiml enables AI/ML features. This is disabled by default.

    enable-device-health configures monitoring of device-health using AI/ML features.

    enable-blackhole enables detecting blackholes (packet drops) on a device using AI/ML features.

    Warning: Monitoring device health and detecting blackholes using AI/ML are Beta features in this release. The bare minimum resources required to configure AI/ML are listed in Hardware Requirements. However, to get an estimate of the resources required to configure the AI/ML feature on your production deployment, contact your Juniper Partner or Juniper Sales Representative.
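
    A hedged example, using the parameter names listed above with an assumed statement path:

      # illustrative statement paths
      set paragon cluster applications install-aiml true
      set paragon cluster applications enable-device-health true
      set paragon cluster applications enable-blackhole true
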
  7. (Optional) Configure IPv6 addresses.

    Where:

    cluster-ipv6-enabled enables the use of IPv6 addresses for the cluster, making the cluster dual stack.

    ingress-vip-ipv6 is the IPv6 VIP address for generic common ingress and is used to connect to the Web GUI.

    test-agent-gateway-vip-ipv6 is the IPv6 VIP address for the Active Assurance TAGW.

    prefer-ipv6 configures preference for IPv6 addresses over IPv4 addresses. When set to true, and if hostnames are not configured, IPv6 VIP addresses are added to the outbound SSH configuration.

    The VIP addresses can be on the same subnet as the cluster nodes or on a different subnet. The VIP addresses can also be on different subnets from each other.
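
    For example, with placeholder IPv6 addresses and an illustrative statement path:

      # illustrative statement paths
      set paragon cluster cluster-ipv6-enabled true
      set paragon cluster common-services ingress ingress-vip-ipv6 2001:db8:1:2::10
      set paragon cluster common-services ingress test-agent-gateway-vip-ipv6 2001:db8:1:2::11
      set paragon cluster prefer-ipv6 true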

  8. (Optional) If you want to use multiple VIP addresses for generic ingress, configure the additional VIP addresses for NETCONF and gNMI.

    Where:

    ingress-vip configures an additional VIP address to be used for NETCONF and gNMI. When more than one ingress-vip address is defined, you can configure one VIP address to be used to connect to the GUI and the additional VIP address to be used for NETCONF and gNMI access.

    oc-term-host is the VIP address that you want to use for NETCONF.

    gnmi-term-host is the VIP address that you want to use for gNMI.

    The address configured for NETCONF and gNMI is added to the outbound SSH configuration used to adopt devices.
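
    For example, using 10.1.2.15 as a placeholder for the additional VIP address (the statement paths shown are illustrative):

      # illustrative statement paths
      set paragon cluster common-services ingress ingress-vip 10.1.2.15
      set paragon cluster common-services ingress oc-term-host 10.1.2.15
      set paragon cluster common-services ingress gnmi-term-host 10.1.2.15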

  9. (Optional) If your cluster nodes are in different subnets, configure BGP peering between the ToR routers and the cluster nodes by using the MetalLB agent running on each cluster node. In this example, as illustrated in Figure 2, cluster nodes 1 and 2 are served by ToR1 and cluster nodes 3 and 4 are served by ToR2.

    Where:

    enable-l3-vip enables L3 VIP addresses for cluster nodes and VIP addresses in different subnets.

    metallb-bgp-peer and metallb-bgp-peer-ipv6 are the IPv4 and IPv6 addresses of the ToR routers, respectively.

    peer-asn is the ToR AS number.

    local-asn is the AS number of the cluster nodes. The AS number remains the same for all the cluster nodes.

    local-nodes are the cluster node IP addresses that you configured in step 3.
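
    A sketch of this configuration for one ToR peer, with placeholder addresses and AS numbers and an assumed statement path:

      # illustrative statement paths and values
      set paragon cluster metallb enable-l3-vip true
      set paragon cluster metallb metallb-bgp-peer 10.1.2.1
      set paragon cluster metallb metallb-bgp-peer-ipv6 2001:db8:1:2::1
      set paragon cluster metallb peer-asn 65001
      set paragon cluster metallb local-asn 65500
      set paragon cluster metallb local-nodes 10.1.2.3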

  10. (Optional) If you want to configure hostnames for generic ingress and Active Assurance TAGW, configure the following:

    Where:

    system-hostname is the hostname for the generic ingress virtual IP (VIP) address.

    test-agent-gateway-hostname is the hostname for the Active Assurance TAGW VIP address.

    When you configure hostnames, the hostnames take precedence over VIP addresses and are added to the outbound SSH configuration. The hostnames can resolve to either IPv4 or IPv6 VIP addresses or both.
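
    For example, with placeholder hostnames (the statement paths shown are illustrative):

      # illustrative statement paths
      set paragon cluster common-services ingress system-hostname routing-director.example.com
      set paragon cluster common-services ingress test-agent-gateway-hostname tagw.example.com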

  11. (Optional) Configure the following settings for SMTP-based user management.

    Where:

    smtp-allowed-sender-domains are the e-mail domains from which Routing Director sends e-mails to users.

    smtp-relayhost is the name of the SMTP server that relays messages.

    smtp-relayhost-username (optional) is the username to access the SMTP (relay) server.

    smtp-relayhost-password (optional) is the password for the SMTP (relay) server.

    smtp-sender-email is the e-mail address that appears as the sender's e-mail address to the e-mail recipient.

    smtp-sender-name is the name that appears as the sender’s name in the e-mails sent to users from Routing Director.

    papi-local-user-management enables or disables local-user management.

    mail-server smtp-enabled enables or disables SMTP.

    Note:

    SMTP configuration is optional at this point. You can also configure SMTP settings after the cluster is deployed. For information on how to configure SMTP after cluster deployment, see Configure SMTP Settings in Paragon Shell.
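
    For example, with placeholder server names and addresses (the statement paths shown are illustrative):

      # illustrative statement paths and values
      set paragon cluster mail-server smtp-enabled true
      set paragon cluster mail-server smtp-relayhost smtp.example.com
      set paragon cluster mail-server smtp-relayhost-username relayuser
      set paragon cluster mail-server smtp-relayhost-password <password>
      set paragon cluster mail-server smtp-allowed-sender-domains example.com
      set paragon cluster mail-server smtp-sender-email noreply@example.com
      set paragon cluster mail-server smtp-sender-name "Routing Director"
      set paragon cluster papi-local-user-management true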

  12. (Optional) Install custom user certificates. Before you install user certificates, you must copy the custom certificate file and certificate key file to the /root/epic/config folder in the Linux root shell of the node from which you are deploying the cluster.

    Where:

    user-certificate-filename is the user certificate filename.

    user-certificate-key-filename is the user certificate key filename.

    Note:

    Installing certificates is optional at this point. You can also configure Routing Director to use custom user certificates after cluster deployment. For information on how to install user certificates after cluster deployment, see Install User Certificates.
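
    For example, assuming the files copied to /root/epic/config are named server.crt and server.key (illustrative statement path and filenames):

      # illustrative statement paths
      set paragon cluster common-services ingress user-certificate-filename server.crt
      set paragon cluster common-services ingress user-certificate-key-filename server.key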

  13. (Optional) Configure and enforce security between the PCE server and Path Computation Clients (PCCs) by using system-generated certificates.

    Where:

    pce-server-global-default-tls-mode enables PCEP security. You can set it to auto-detect or strict-enable. It is set to strict-disable, by default.

    Note:

    Enabling PCEP security is optional at this point. You can also configure Routing Director to enforce PCEP security after cluster deployment. Additionally, you can enforce PCEP security by using custom certificates. For information on enabling PCEP security using system-generated or custom certificates after cluster deployment, see Enable PCEP Security.
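
    For example, to enforce PCEP security strictly (illustrative statement path):

      # illustrative statement path
      set paragon cluster applications pathfinder pce-server-global-default-tls-mode strict-enable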

  14. (Optional) Configure the scale size of your cluster. If your cluster is configured with the bare minimum resources required to install a cluster, the scale mode of the cluster is small. The scale mode is set to small by default, and you can skip this step.

    If you want to install a cluster that supports more devices and you have at least 32 vCPUs and 64 GB of RAM, you must change the scale mode to large.
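
    A hedged example, assuming a scale-mode statement under the cluster hierarchy (illustrative):

      # illustrative statement path
      set paragon cluster scale-mode large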

  15. Commit the configuration and exit configuration mode.
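
    For example, in configuration mode:

      commit
      exit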

  16. Generate the configuration files.

    The inventory file contains the IP addresses of the VMs.

    The config.yml file contains minimum Routing Director deployment cluster configuration parameters that are required to deploy a cluster.

    The request deployment config command also generates a config.cmgd file in the config directory. The config.cmgd file contains all the set commands that you have executed. If the config.yml file is inadvertently edited or corrupted, you can redeploy your cluster by using the load set config/config.cmgd command in configuration mode.
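
    For example, generate the files from operational mode, and, if you later need to recover from an edited or corrupted config.yml file, reload the saved set commands (a minimal sketch based on the commands described above):

      request deployment config

      configure
      load set config/config.cmgd
      commit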

  17. Generate SSH keys on the cluster nodes.

    When prompted, enter the SSH password for the VMs. Enter the same password that you configured to log in to the VMs.

    Note:

    If you have configured different passwords for the VMs, ensure that you enter corresponding passwords when prompted.

  18. Deploy the cluster.

    The cluster deployment begins and takes over an hour to complete.

  19. (Optional) Monitor the progress of the deployment onscreen.

    The progress of the deployment is displayed. Deployment is complete when you see an output similar to this onscreen.

    Alternatively, if you did not choose to monitor the progress of the deployment onscreen using the monitor command, you can view the contents of the log file by using the file show /epic/config/log command. The last few lines of the log file should look similar to the sample output. We recommend that you check the log file periodically to monitor the progress of the deployment.

  20. Upon successful completion of the deployment, the application cluster is created. Log out of the VM and log in again to Deployment Shell.

    The console output displays the Deployment Shell welcome message and the IP addresses of the four nodes (called Controller-1 through Controller-4), the Active Assurance TAGW VIP address, the Web admin user e-mail address, and Web GUI IP address. If IPv6 addresses are configured, the welcome message displays the IPv6 VIP addresses as well.

    The CLI command prompt displays your login username and the node hostname that you configured previously. For example, if you entered Primary1 as the hostname of your primary node, the command prompt is root@Primary1 >.

You can now verify the cluster deployment and log in to the Web GUI. If you are accessing the Web GUI from an external IP address, outside the Routing Director network, you must use NAT to map the external IP address to the Web GUI IP address. Go to Log in to the Web GUI.