Deploy the Cluster
This topic describes the procedure to deploy a Routing Director deployment cluster on the VMs.
After you have created the node VMs and prepared them, configure the cluster parameters on the VMs and deploy the Routing Director deployment cluster. The steps to configure and deploy the cluster are the same regardless of the hypervisor you deploy the cluster on.
Perform the following steps to configure and deploy the Routing Director deployment cluster:
Configure the node VMs
After all the node VMs are created, perform the following steps to configure them.
Connect to the Web console of the first node VM. You are logged in as root automatically.
You are prompted to change your password immediately. Enter and re-enter the new password. You are automatically logged out of the VM.
Note: We recommend that you enter the same password for all the VMs. If you configure different passwords for the VMs, make sure that you enter the password corresponding to each VM when you generate SSH keys for the cluster nodes while deploying the cluster.
When prompted, log in again as the root user with the newly configured password.
Configure the following information when prompted.
Table 1: VM Configuration Wizard Prompts

Prompt: Do you want to set up a Hostname? (y/n)
Action: Enter y to configure a hostname.

Prompt: Please specify the Hostname
Action: Enter an identifying hostname for the VM. For example: Primary1. The hostname must be under 64 characters and can include alphanumeric characters and some special characters.
If you do not enter a hostname, a default hostname in the format controller-<VM-IP-address-4th-octet> is assigned.
Note: Because you deploy the cluster from one node and enter the IP addresses of the other nodes during the cluster configuration process, the roles are assigned automatically. The first three nodes to be configured are the primary and worker nodes, and the last node is the worker-only node.
The hostnames (and whether or not they match the role of the node) do not affect the operation of the cluster. However, for management purposes, we recommend that you pay attention to how you name the nodes and the order in which you enter their addresses during the cluster creation steps.
We do not support changing the hostname after the cluster has been installed.

Prompt: Do you want to set up Static IP (preferred)? (y/n)
Action: Enter y to configure an IP address for the VM.

Prompt: Please specify the IP address in CIDR notation
Action: Enter the IP address in CIDR notation. For example, 10.1.2.3/24. Your node VMs can be in the same subnet or in different subnets.
Note: If you enter 10.1.2.3 instead of 10.1.2.3/24, you will see an Invalid IP address error message.

Prompt: Please specify the Gateway IP
Action: Enter the gateway IP address. Ensure that you enter the gateway IP address that corresponds to the network of your node VM.

Prompt: Please specify the Primary DNS IP
Action: Enter the primary DNS IP address.

Prompt: Please specify the Secondary DNS IP
Action: Enter the secondary DNS IP address.

Prompt: Do you want to set up IPv6? (y/n)
Action: Enter y to configure IPv6 addresses. If you don't want to configure IPv6 addresses, enter n and proceed to Step 5.

Prompt: Please specify the IPv6 address in CIDR notation
Action: Enter the IPv6 address in CIDR notation. For example, 2001:db8:1:2::3/64. Your node VMs can be in the same subnet or in different subnets.
Note: If you enter 2001:db8:1:2::3 instead of 2001:db8:1:2::3/64, you will see an Invalid IP address error message.

Prompt: Please specify the Gateway IPv6
Action: Enter the gateway IPv6 address. Ensure that you enter the gateway IPv6 address that corresponds to the network of your node VM.

Prompt: Please specify the Primary DNS IPv6
Action: Enter the primary DNS IPv6 address.

Prompt: Please specify the Secondary DNS IPv6
Action: Enter the secondary DNS IPv6 address.
When prompted to confirm that you want to proceed, review the displayed information, type y, and press Enter.
You are logged into Deployment Shell.
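For reference, a completed wizard session might look like the following sketch; all values shown are hypothetical and the exact prompt rendering can differ:

Do you want to set up a Hostname? (y/n): y
Please specify the Hostname: Primary1
Do you want to set up Static IP (preferred)? (y/n): y
Please specify the IP address in CIDR notation: 10.1.2.11/24
Please specify the Gateway IP: 10.1.2.1
Please specify the Primary DNS IP: 10.1.2.2
Please specify the Secondary DNS IP: 10.1.2.3
Do you want to set up IPv6? (y/n): n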
(Optional) Verify connectivity between the nodes. Log in to all the node VMs; if you have been logged out, log in again as root with the previously configured password. You are placed in Deployment Shell operational mode. Type exit to enter the Linux root shell of the node. From each node, ping the other three nodes using the ping static-ipv4-address command to verify that the nodes can connect to each other.
(Optional) Before you proceed to deploy the cluster, verify that the NTP server(s) are reachable. On any one of the cluster nodes, type start shell. At the # prompt, ping the server using the ping ntp-server-name-or-address command. If the ping is unsuccessful, use an alternate NTP server.
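For example, a minimal connectivity and NTP check from one node might look like the following sketch, assuming hypothetical node addresses 10.1.2.11 through 10.1.2.14 and pool.ntp.org as the NTP server:

root@Primary1> exit
# ping -c 3 10.1.2.12
# ping -c 3 10.1.2.13
# ping -c 3 10.1.2.14
# ping -c 3 pool.ntp.org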
You have completed the node preparation steps and are ready to deploy the cluster.
Deploy the cluster
Perform the following steps to configure and deploy the Routing Director deployment cluster using the Deployment Shell CLI.
Go back to the first node VM (Primary1). If you have been logged out, log in again as root with the previously configured password. You are placed in Deployment Shell operational mode.
*********************************************************************
WELCOME TO Routing Director SHELL!
You will now be able to execute Routing Director CLI commands!
*********************************************************************
root@eop>

To configure the cluster, enter configuration mode in Deployment Shell.
root@eop> configure
Entering configuration mode
[edit]
Configure the following cluster parameters.
root@eop# set deployment cluster nodes kubernetes 1 address node1-IP
[edit]
root@eop# set deployment cluster nodes kubernetes 2 address node2-IP
[edit]
root@eop# set deployment cluster nodes kubernetes 3 address node3-IP
[edit]
root@eop# set deployment cluster nodes kubernetes 4 address node4-IP
[edit]
root@eop# set deployment cluster ntp ntp-servers pool.ntp.org
[edit]
root@eop# set deployment cluster common-services ingress ingress-vip generic-ingress-vIP
[edit]
root@eop# set deployment cluster applications active-assurance test-agent-gateway-vip TAGW-vIP
[edit]
root@eop# set deployment cluster applications web-ui web-admin-user "user-admin@juniper.net"
[edit]
root@eop# set deployment cluster applications web-ui web-admin-password Userpasswd
[edit]
Where:
- The IP addresses of the kubernetes nodes with indexes 1 through 4 must match the static IP addresses that are configured on the node VMs. The Kubernetes nodes with indexes 1, 2, and 3 are the primary and worker nodes; the node with index 4 is the worker-only node. The node IP addresses can be on the same subnet or on different subnets. If you are configuring a three-node cluster, skip configuring the Kubernetes node with index 4.
- ntp-servers is the NTP server to synchronize to.
- web-admin-user and web-admin-password are the e-mail address and password that the first user can use to log in to the Web GUI.
- ingress-vip is the VIP address for generic common ingress and is used to connect to the Web GUI.
- test-agent-gateway-vip is the VIP address for the Active Assurance Test Agent gateway (TAGW).

The VIP addresses are added to the outbound SSH configuration that is required for a device to establish a connection with Routing Director.
Note: In a multi-subnet cluster installation, the VIP addresses must not be in the same subnet as the cluster nodes.
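For illustration only, here is a minimal sketch of these parameters with hypothetical values: four nodes on 10.1.2.0/24, with the VIP addresses on the same subnet (valid for a single-subnet installation):

root@eop# set deployment cluster nodes kubernetes 1 address 10.1.2.11
root@eop# set deployment cluster nodes kubernetes 2 address 10.1.2.12
root@eop# set deployment cluster nodes kubernetes 3 address 10.1.2.13
root@eop# set deployment cluster nodes kubernetes 4 address 10.1.2.14
root@eop# set deployment cluster ntp ntp-servers pool.ntp.org
root@eop# set deployment cluster common-services ingress ingress-vip 10.1.2.100
root@eop# set deployment cluster applications active-assurance test-agent-gateway-vip 10.1.2.101
root@eop# set deployment cluster applications web-ui web-admin-user "admin@example.net"
root@eop# set deployment cluster applications web-ui web-admin-password Ch4ngeMe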
Configure the PCE server VIP address.
root@eop# set deployment cluster applications pathfinder pce-server pce-server-vip PCE-vIP
[edit]
Where:
pce-server-vip is the VIP address that the PCE server uses to establish Path Computation Element Protocol (PCEP) sessions between Routing Director and the devices. The VIP address can be on the same subnet as the cluster nodes or on a different subnet, and can be on a different subnet from the other VIP addresses.
Note: Configure the PCE server VIP address to view your network topology updates in real time.
You can also configure the VIP address at any time after cluster deployment. For information on how to configure the PCE server VIP address after cluster deployment, see Configure a PCE Server.
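For context, the device side of a PCEP session toward this VIP is typically configured on a Junos router along the following lines. This is a sketch only; the PCE name (rd-pce) and VIP address are hypothetical, and it is not part of the Routing Director configuration itself:

set protocols pcep pce rd-pce destination-ipv4-address 10.1.3.12
set protocols pcep pce rd-pce destination-port 4189
set protocols pcep pce rd-pce pce-type active stateful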
(Optional) Configure the routing observability feature and the VIP addresses used to establish a BGP Monitoring Protocol (BMP) session and IPFIX data collection.
root@eop# set deployment cluster applications routingbot install-routingbot true
[edit]
root@eop# set deployment cluster applications routingbot routingbot-crpd-vip v4-crpd-address
[edit]
root@eop# set deployment cluster applications routingbot routingbot-ipfix-vip v4-ipfix-term-address
[edit]
Where:
- install-routingbot enables the routing observability feature.
- routingbot-crpd-vip is the VIP address that external network devices use as the BMP station IP address to establish the BMP session (see the device-side sketch after this list).
- routingbot-ipfix-vip is the VIP address used to view predictor events.

Warning: The bare minimum resources required to configure the routing observability feature are listed in Hardware Requirements. However, to get an estimate of the resources required to configure the routing observability feature on your production deployment, contact your Juniper Partner or Juniper Sales Representative.
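On a Junos device, the BMP session toward routingbot-crpd-vip might be set up along these lines; this is a sketch, and the station name, address, and port shown are hypothetical illustrations rather than values mandated by Routing Director:

set routing-options bmp station rd-bmp station-address 10.1.3.20
set routing-options bmp station rd-bmp station-port 5000
set routing-options bmp station rd-bmp connection-mode active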
(Optional) Enable the AI/ML (artificial intelligence [AI] and machine learning [ML]) feature to automatically monitor Key Performance Indicators (KPIs) related to a device's health and detect blackholes on the device.
root@eop# set deployment cluster applications aiops install-aiml true
[edit]
root@eop# set deployment cluster applications aiops enable-device-health true
[edit]
root@eop# set deployment cluster applications aiops enable-blackhole true
[edit]
Where:
- install-aiml enables the AI/ML features. This is disabled by default.
- enable-device-health configures monitoring of device health using the AI/ML features.
- enable-blackhole enables detection of blackholes (packet drops) on a device using the AI/ML features.

Warning: Monitoring device health and detecting blackholes using AI/ML are Beta features in this release. The bare minimum resources required to configure AI/ML are listed in Hardware Requirements. However, to get an estimate of the resources required to configure the AI/ML feature on your production deployment, contact your Juniper Partner or Juniper Sales Representative.
(Optional) Configure IPv6 addresses.
root@eop# set deployment cluster kubernetes address-family cluster-ipv6-enabled true
[edit]
root@eop# set deployment cluster common-services ingress ingress-vip-ipv6 generic-ingress-IPv6
[edit]
root@eop# set deployment cluster applications active-assurance test-agent-gateway-vip-ipv6 TAGW-IPv6
[edit]
root@eop# set deployment cluster install prefer-ipv6 true
[edit]
Where:
- cluster-ipv6-enabled enables the use of IPv6 addresses for the cluster, making the cluster dual-stack.
- ingress-vip-ipv6 is the IPv6 VIP address for generic common ingress and is used to connect to the Web GUI.
- test-agent-gateway-vip-ipv6 is the IPv6 VIP address for the Active Assurance TAGW.
- prefer-ipv6 configures a preference for IPv6 addresses over IPv4 addresses. When set to true, and if hostnames are not configured, IPv6 VIP addresses are added to the outbound SSH configuration.

The VIP addresses can be on the same subnet as the cluster nodes or on a different subnet. The VIP addresses can also be on different subnets from each other.
(Optional) If you want to use multiple VIP addresses for generic ingress, configure the additional VIP addresses for NETCONF and gNMI.
root@eop# set deployment cluster common-services ingress ingress-vip netconf-gnmi-vIP
[edit]
root@eop# set deployment cluster applications oc-term oc-term-host address netconf-gnmi-vIP
[edit]
root@eop# set deployment cluster applications gnmi-term gnmi-term-host address netconf-gnmi-vIP
[edit]
Where:
- ingress-vip configures an additional VIP address to be used for NETCONF and gNMI. When more than one ingress-vip address is defined, you can configure one VIP address to connect to the GUI and the additional VIP address for NETCONF and gNMI access.
- oc-term-host is the VIP address that you want to use for NETCONF.
- gnmi-term-host is the VIP address that you want to use for gNMI.

The address configured for NETCONF and gNMI is added to the outbound SSH configuration used to adopt devices. A concrete sketch follows this list.
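For illustration only, a sketch with hypothetical addresses, where 10.1.3.10 serves the GUI and 10.1.3.11 serves NETCONF and gNMI:

root@eop# set deployment cluster common-services ingress ingress-vip 10.1.3.10
root@eop# set deployment cluster common-services ingress ingress-vip 10.1.3.11
root@eop# set deployment cluster applications oc-term oc-term-host address 10.1.3.11
root@eop# set deployment cluster applications gnmi-term gnmi-term-host address 10.1.3.11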
(Optional) If your cluster nodes are in different subnets, configure BGP peering between the ToR router and the cluster nodes using the MetalLB agent running in each cluster node. In this example, as illustrated in Figure 2, cluster nodes 1 and 2 are served by ToR1 and cluster nodes 3 and 4 are served by ToR2.
root@eop# set deployment cluster install enable-l3-vip true
[edit]
root@eop# set deployment cluster common-services metallb metallb-bgp-peer peer-ip ToR1-IP peer-asn ToR1-asn local-asn node-asn local-nodes node1_IP
[edit]
root@eop# set deployment cluster common-services metallb metallb-bgp-peer peer-ip ToR1-IP peer-asn ToR1-asn local-asn node-asn local-nodes node2_IP
[edit]
root@eop# set deployment cluster common-services metallb metallb-bgp-peer peer-ip ToR2-IP peer-asn ToR2-asn local-asn node-asn local-nodes [node3_IP node4_IP]
[edit]
root@eop# set deployment cluster common-services metallb metallb-bgp-peer-ipv6 peer-ip ToR1-IPv6 peer-asn ToR1-asn local-asn node-asn local-nodes [node1_IP node2_IP]
[edit]
root@eop# set deployment cluster common-services metallb metallb-bgp-peer-ipv6 peer-ip ToR2-IPv6 peer-asn ToR2-asn local-asn node-asn local-nodes [node3_IP node4_IP]
[edit]
Where:
- enable-l3-vip enables L3 VIP addresses, allowing cluster nodes and VIP addresses to be in different subnets.
- metallb-bgp-peer and metallb-bgp-peer-ipv6 take the IPv4 and IPv6 addresses of the ToR routers, respectively.
- peer-asn is the AS number of the ToR router.
- local-asn is the AS number of the cluster nodes. The AS number is the same for all the cluster nodes.
- local-nodes are the cluster node IP addresses configured in step 3.

A matching ToR-side sketch follows this list.
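For context, the ToR1 side of this peering might look like the following Junos sketch; the ASNs and addresses are hypothetical (65001 stands in for ToR1-asn, 65010 for node-asn, and the neighbors for node1_IP and node2_IP), and your ToR platform's syntax may differ:

set routing-options autonomous-system 65001
set protocols bgp group metallb type external
set protocols bgp group metallb peer-as 65010
set protocols bgp group metallb neighbor 10.1.2.11
set protocols bgp group metallb neighbor 10.1.2.12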
(Optional) If you want to configure hostnames for generic ingress and Active Assurance TAGW, configure the following:
root@eop# set deployment cluster common-services ingress system-hostname ingress-vip-dns-hostname
[edit]
root@eop# set deployment cluster applications active-assurance test-agent-gateway-hostname nginx-ingress-controller-hostname
[edit]
Where:
- system-hostname is the hostname for the generic ingress virtual IP (VIP) address.
- test-agent-gateway-hostname is the hostname for the Active Assurance TAGW VIP address.

When you configure hostnames, the hostnames take precedence over VIP addresses and are added to the outbound SSH configuration. The hostnames can resolve to IPv4 VIP addresses, IPv6 VIP addresses, or both, as in the DNS sketch below.
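For illustration only, DNS records that would satisfy this requirement, with hypothetical names and addresses:

ingress.example.net.   IN A      10.1.3.10
ingress.example.net.   IN AAAA   2001:db8:1:3::10
tagw.example.net.      IN A      10.1.3.11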
(Optional) Configure the following settings for SMTP-based user management.
root@eop# set deployment cluster mail-server smtp-relayhost smtp.relayhost.com
[edit]
root@eop# set deployment cluster mail-server smtp-relayhost-username relayuser
[edit]
root@eop# set deployment cluster mail-server smtp-relayhost-password relaypassword
[edit]
root@eop# set deployment cluster mail-server smtp-allowed-sender-domains routingdirector.net
[edit]
root@eop# set deployment cluster mail-server smtp-sender-email no-reply@routingdirector.net
[edit]
root@eop# set deployment cluster mail-server smtp-sender-name "Juniper Routing Director"
[edit]
root@eop# set deployment cluster papi papi-local-user-management false
[edit]
root@eop# set deployment cluster mail-server smtp-enabled true
[edit]
Where:
- smtp-relayhost is the name of the SMTP server that relays messages.
- smtp-relayhost-username (optional) is the username used to access the SMTP (relay) server.
- smtp-relayhost-password (optional) is the password for the SMTP (relay) server.
- smtp-allowed-sender-domains are the e-mail domains from which Routing Director sends e-mails to users.
- smtp-sender-email is the e-mail address that appears as the sender's e-mail address to the e-mail recipient.
- smtp-sender-name is the name that appears as the sender's name in the e-mails sent to users from Routing Director.
- papi-local-user-management enables or disables local user management.
- smtp-enabled enables or disables SMTP.

Note: SMTP configuration is optional at this point; you can also configure SMTP settings after the cluster has been deployed. For information on how to configure SMTP after cluster deployment, see Configure SMTP Settings in Paragon Shell.
(Optional) Install custom user certificates. Before you install user certificates, you must copy the custom certificate file and certificate key file to the Linux root shell of the node from which you are deploying the cluster. Copy the files to the /root/epic/config folder, for example as shown in the sketch below.
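A minimal sketch of the copy step, assuming the files are on your workstation and node1-IP is the node you deploy from (the filenames and address are illustrative):

scp certificate.cert.pem certificate.key.pem root@node1-IP:/root/epic/config/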
root@eop# set deployment cluster common-services ingress user-certificate use-user-certificate true
[edit]
root@eop# set deployment cluster common-services ingress user-certificate user-certificate-filename "certificate.cert.pem"
[edit]
root@eop# set deployment cluster common-services ingress user-certificate user-certificate-key-filename "certificate.key.pem"
[edit]
Where:
- user-certificate-filename is the filename of the user certificate.
- user-certificate-key-filename is the filename of the user certificate key.

Note: Installing certificates is optional at this point; you can also configure Routing Director to use custom user certificates after cluster deployment. For information on how to install user certificates after cluster deployment, see Install User Certificates.
(Optional) Configure and enforce security between the PCE server and Path Computation Clients (PCCs) using system-generated certificates.
root@eop# set deployment cluster applications pathfinder pce-server pce-server-global-default-tls-mode strict-enable
[edit]
Where:
pce-server-global-default-tls-mode enables PCEP security. You can set it to auto-detect or strict-enable; it is set to strict-disable by default.
Note: Enabling PCEP security is optional at this point; you can also configure Routing Director to enforce PCEP security after cluster deployment, including with custom certificates. For information on enabling PCEP security using system-generated or custom certificates after cluster deployment, see Enable PCEP Security.
(Optional) Configure the scale size of your cluster. The scale mode is set to small by default, which is appropriate if your cluster is configured with the bare minimum resources required to install a cluster; in that case, you can skip this step.
If you want to install a cluster that supports more devices and you have at least 32 vCPUs and 64-GB RAM, you must change the scale mode to large.
root@eop# set deployment cluster install scale-mode large
[edit]
Commit the configuration and exit configuration mode.
root@eop# commit
commit complete
[edit]
root@eop# exit
Exiting configuration mode
root@eop>
Generate the configuration files.
root@eop> request deployment config
Deployment inventory file saved at /epic/config/inventory
Deployment config file saved at /epic/config/config.yml
The inventory file contains the IP addresses of the VMs.
The config.yml file contains the minimum Routing Director deployment cluster configuration parameters that are required to deploy a cluster.
The request deployment config command also generates a config.cmgd file in the config directory. The config.cmgd file contains all the set commands that you have executed. If the config.yml file is inadvertently edited or corrupted, you can redeploy your cluster using the load set config/config.cmgd command in configuration mode.
Generate SSH keys on the cluster nodes.
When prompted, enter the SSH password for the VMs. Enter the same password that you configured to log in to the VMs.
root@eop> request deployment ssh-key
Setting up public key authentication for ['node1-IP','node2-IP','node3-IP','node4-IP']
Please enter SSH username for the node(s): root
Please enter SSH password for the node(s):
checking server reachability and ssh connectivity ...
Connectivity ok for node1-IP
Connectivity ok for node2-IP
Connectivity ok for node3-IP
Connectivity ok for node4-IP
SSH key pair generated in node1-IP
SSH key pair generated in node2-IP
SSH key pair generated in node3-IP
SSH key pair generated in node4-IP
copied from node1-IP to node1-IP
copied from node1-IP to node2-IP
copied from node1-IP to node3-IP
copied from node1-IP to node4-IP
copied from node2-IP to node1-IP
copied from node2-IP to node2-IP
copied from node2-IP to node3-IP
copied from node2-IP to node4-IP
copied from node3-IP to node1-IP
copied from node3-IP to node2-IP
copied from node3-IP to node3-IP
copied from node3-IP to node4-IP
copied from node4-IP to node1-IP
copied from node4-IP to node2-IP
copied from node4-IP to node3-IP
copied from node4-IP to node4-IP
Note: If you have configured different passwords for the VMs, ensure that you enter the corresponding password for each VM when prompted.
Deploy the cluster.
root@eop> request deployment deploy cluster
Process running with PID: 231xx03
To track progress, run 'monitor start /epic/config/log'
After successful deployment, please exit Deployment-shell and then re-login to the host to finalize the setup
The cluster deployment begins and takes over an hour to complete.
(Optional) Monitor the progress of the deployment onscreen.
root@eop> monitor start /epic/config/log
The progress of the deployment is displayed. Deployment is complete when you see an output similar to this onscreen.
<output snipped>
PLAY RECAP *********************************************************************
node1-IP : ok=2397 changed=1015 unreachable=0 failed=0 rescued=0 ignored=15
node2-IP : ok=192  changed=96   unreachable=0 failed=0 rescued=0 ignored=0
node3-IP : ok=192  changed=96   unreachable=0 failed=0 rescued=0 ignored=0
node4-IP : ok=186  changed=95   unreachable=0 failed=0 rescued=0 ignored=0

Monday 16 June 2025 22:00:57 +0000 (0:00:00.183) 1:01:53.469 *****
===============================================================================
user-registry : Push Docker Images from local registry to paragon registry - 335.76s
kubernetes/addons/rook : Wait for Object-Store ------------------------ 213.36s
kubernetes/multi-master-rke2 : start rke2 server on 1st master -------- 212.18s
jcloud/papi : wait for papi rest api ---------------------------------- 122.19s
jcloud/airflow2 : Install Helm Chart ---------------------------------- 117.00s
Check if kafka container is up ---------------------------------------- 108.86s
Install Helm Chart ---------------------------------------------------- 106.30s
kubernetes/addons/postgres-operator : Make sure postgres is fully up and accepting request using regular user -- 73.73s
systemd ---------------------------------------------------------------- 62.14s
kubernetes/addons/postgres-operator-rb : RB >>> Make sure postgres is fully up and accepting request using regular user -- 52.96s
kubernetes/multi-master-rke2 : start rke2 server on other master ------- 51.71s
Create Kafka Topics ---------------------------------------------------- 51.48s
systemcheck : Get Disk IOPS -------------------------------------------- 49.09s
delete existing install config-map - if any ---------------------------- 43.10s
paa/timescaledb : Make sure postgres is fully up and accepting request using regular user -- 41.96s
kubernetes/multi-master-rke2 : start rke2 server on other master ------- 41.90s
Save installer config to configmap ------------------------------------- 41.47s
Install Helm Chart ----------------------------------------------------- 35.92s
Verify healthbot victoriametrics --------------------------------------- 35.17s
kubernetes/addons/vm-operator-rb : Wait for vm storage statefulset and pods -- 31.15s
Playbook run took 0 days, 1 hours, 1 minutes, 53 seconds
root@eop>
Alternatively, if you did not monitor the progress of the deployment onscreen using the monitor command, you can view the contents of the log file using the file show /epic/config/log command. The last few lines of the log file should look similar to the sample output. We recommend that you check the log file periodically to monitor the progress of the deployment.
Upon successful completion of the deployment, the application cluster is created. Log out of the VM and log in again to Deployment Shell.
The console output displays the Deployment Shell welcome message and the IP addresses of the four nodes (called Controller-1 through Controller-4), the Active Assurance TAGW VIP address, the Web admin user e-mail address, and Web GUI IP address. If IPv6 addresses are configured, the welcome message displays the IPv6 VIP addresses as well.
Welcome to Juniper Routing Director Shell
This VM is part of a Juniper Routing Director Cluster with IPv6 enabled
=======================================================================================
Controller IP  : node1-IP, node2-IP, node3-IP, node4-IP
PAA Virtual IP : TAGW-vIP, TAGW-vIPv6
UI             : https://generic-ingress-vIP, https://[generic-ingress-vIPv6]
Web Admin User : admin-user@juniper.net
=======================================================================================
ova: 20250526_1117
build: eop-2.6.0.8952.gbef82aec6b
***************************************************************
WELCOME TO Routing Director SHELL!
You will now be able to execute Routing Director CLI commands!
***************************************************************
root@Primary1>

The CLI command prompt displays your login username and the node hostname that you configured previously. For example, if you entered Primary1 as the hostname of your primary node, the command prompt is root@Primary1>.
You can now verify the cluster deployment and log in to the Web GUI. If you are accessing the Web GUI from an external IP address, outside the Routing Director network, you must use NAT to map the external IP address to the Web GUI IP address. Go to Log in to the Web GUI.