Install Paragon Automation

This topic describes the installation of the Paragon Automation cluster. The order of installation tasks is shown at a high level in Figure 1. Ensure that you have completed all the pre-configuration and preparation steps described in Installation Prerequisites before you begin installation.

Figure 1: High-Level Process Flow for Installing Paragon Automation

Download the Software


  • You need a Juniper account to download the Paragon Automation software.

  1. Log in to the control host.
  2. Create a directory in which you download the software.

    This directory is referred to as pa-download-dir in this guide.
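For example, a minimal sketch (the directory name and path are placeholders; any location with sufficient space works):

```shell
# Create a download directory on the control host; "pa-download-dir" is the
# placeholder name used throughout this guide.
mkdir -p ~/pa-download-dir
cd ~/pa-download-dir
```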

  3. From the Version drop-down list on the Paragon Automation software download page, select the version number.
  4. Download the Paragon Automation Setup installation files to the download folder using the wget command.

    The Paragon Automation Setup installation bundle consists of the following scripts and tar files to install each of the component modules:

    • davinci.tar.gz, which is the primary installer file.

    • run script, which executes the installer image.

    • infra.tar, which installs the Kubernetes infrastructure components including Docker and Helm.

    • ems.tar, which installs the EMS component.

    • northstar.tar, which installs the Paragon Pathfinder and Paragon Planner components.

    • healthbot.tar, which installs the Paragon Insights and the UI components.
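After the download completes, a quick sanity check such as the following sketch can confirm that all bundle files are present (the directory path is an assumption carried over from the earlier step):

```shell
# Verify that every file of the Setup bundle exists in the download directory.
mkdir -p ~/pa-download-dir && cd ~/pa-download-dir
for f in davinci.tar.gz run infra.tar ems.tar northstar.tar healthbot.tar; do
  if [ -f "$f" ]; then echo "found: $f"; else echo "missing: $f"; fi
done
# The run script is downloaded as a plain file; make it executable.
chmod +x run 2>/dev/null || true
```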

Install Paragon Automation

  1. Use the run script to create and initialize a configuration directory with the configuration template files.

    config-dir is a user-defined directory on the control host that contains configuration information for a particular installation. The init command automatically creates the directory if it does not exist. Alternatively, you can create the directory before you execute the init command.

    If you are using the same control host to manage multiple installations of Paragon Automation, you can differentiate between installations by using differently named configuration directories.
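As a sketch, assuming the run script was downloaded to the directory from the earlier step (config-dir is a placeholder name; the guard only lets the sketch run on a machine without the bundle):

```shell
# Initialize a configuration directory from the bundle's run script.
cd ~/pa-download-dir 2>/dev/null || true
if [ -x ./run ]; then
  ./run -c ~/config-dir init    # creates ~/config-dir if it does not exist
else
  echo "run script not found; download the Setup bundle first"
fi
```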

  2. Use a text editor to customize the inventory file with the IP addresses or hostnames of the cluster nodes, as well as the usernames and authentication information that are required to connect to the nodes.

    The inventory file describes the cluster nodes on which Paragon Automation will be installed.

    Edit the following groups in the inventory file.

    1. Add the IP addresses of the Kubernetes primary and worker nodes of the cluster.

      The master group identifies the primary nodes, and the node group identifies the worker nodes. The same IP address cannot be in both master and node groups. If you have configured hostnames that can be resolved to the required IP addresses, you can also add hostnames in the inventory file.

      For example:
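A hypothetical inventory fragment in the default INI format (the addresses are placeholders; replace them with the addresses or resolvable hostnames of your nodes):

```ini
[master]
10.0.0.11

[node]
10.0.0.12
10.0.0.13
```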

      For lab installations, if you want to install a single-node setup, include the IP address or hostname of the node as the primary node, and do not edit the worker node group.

    2. Define the nodes that run Elasticsearch in the elasticsearch_cluster group.

      These nodes host Elasticsearch for log collection and use /var/lib/elasticsearch to store logging data. These nodes can be the same as the worker nodes, or they can be a different set of nodes. Elasticsearch uses a lot of disk space; if you use the worker nodes, you must ensure that there is sufficient space for log collection.

      For example:
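A hypothetical fragment (placeholder addresses; here the Elasticsearch nodes are the same as the worker nodes):

```ini
[elasticsearch_cluster]
10.0.0.12
10.0.0.13
```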

    3. Define the nodes that have disk space available for applications under the local_storage_nodes:children group.

      Services such as Postgres, Zookeeper, Kafka, and MinIO use local storage or disk space partitioned inside export/local-volumes. By default, worker nodes have local storage available. If you require the primary nodes to run applications as well, add master to this group. If you do not add the master here, you can run only applications that do not require local storage on the primary nodes.

      For example:
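A hypothetical fragment, assuming the default group names; master is included only if the primary nodes must also run applications that use local storage:

```ini
[local_storage_nodes:children]
node
master
```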

    4. Configure the user account and authentication methods that the installer uses to authenticate with the cluster nodes under the [all:vars] group. Set the ansible_user variable to the user account that logs in to the cluster nodes. The account must be root or, for non-root users, must have superuser (sudo) privileges. Use any one of the following methods to specify user account passwords.

      • Use an ssh-key for authentication. Configure the ansible_ssh_private_key_file variable in the inventory file.

        If you use an SSH key, you must perform the following steps on the control host.

        1. Generate an SSH key.

        2. Copy the private key to the config-dir directory, where the inventory file is saved.

        3. To allow authentication using the SSH key, copy the public key to the cluster nodes. Repeat this step for all cluster nodes.
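The three steps above can be sketched as follows (the key type, file names, and node address are assumptions, not required values):

```shell
# 1. Generate an SSH key pair on the control host, stored in the
#    configuration directory beside the inventory file.
mkdir -p ~/config-dir
ssh-keygen -t ed25519 -N "" -f ~/config-dir/ansible_ed25519
# 2. Reference the private key in the inventory, for example:
#      ansible_ssh_private_key_file=ansible_ed25519
# 3. Copy the public key to every cluster node (placeholder address):
# ssh-copy-id -i ~/config-dir/ansible_ed25519.pub root@10.0.0.11
```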

      • Enter the ansible_user name and password in the master and node groups of the inventory file.

      • Use the ansible-vault encrypt_string command supported by Ansible to encrypt passwords.

        1. Execute the run -c config-dir ansible-vault encrypt_string command.

        2. Enter a vault password and confirm the password when prompted.

        3. Copy and paste the encrypted password into the inventory file.

        For example:
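A hypothetical encrypted entry as it would appear in a YAML-format inventory (the hexadecimal payload below is an illustrative placeholder, not a real vault blob):

```yaml
ansible_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          6662303161386435616265383437333766646133646562356265623166326262
          3837346235616631353862356265343964383163326265650a36353337623439
          6265
```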

        In this example, the encrypted password is the text starting from "!vault |" up to and including "6265". If you are encrypting multiple passwords, enter the same password for all.

        For more information, see the Ansible vault documentation.


        The default inventory file is in the INI format. If you choose to encrypt passwords using the Vault method, you must convert the inventory file to the YAML format. For information about inventory files, see the Ansible inventory documentation.
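For example, a hypothetical YAML equivalent of a minimal INI inventory (placeholder addresses and user):

```yaml
all:
  vars:
    ansible_user: root
  children:
    master:
      hosts:
        10.0.0.11:
    node:
      hosts:
        10.0.0.12:
        10.0.0.13:
```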

      • Enter the authentication password directly in the inventory file as the value of ansible_password. We do not recommend this option.

        If ansible_user is not root, the configured user must be able to use sudo to execute privileged commands. If sudo requires a password, also add ansible_become_password=password to the inventory file. For more information about how to configure the Ansible inventory, see the Ansible documentation.

    5. (Optional) Specify a name for your Kubernetes cluster in the kubernetes_cluster_name group.

  3. Use the conf script to configure the installer.

    The conf script runs an interactive installation wizard that allows you to choose the components to be installed and configure a basic Paragon Automation setup. The script populates the config.yml file with your input configuration. For advanced configuration, you must edit the config.yml file manually.

    Enter the information as prompted by the wizard. Use the cursor keys to move the cursor, the space key to select an option, and a or i to select or deselect all options. Press Enter to move to the next configuration option. You can skip a configuration option by entering a period (.). You cannot go back and change choices that you have already made in the current workflow without exiting the wizard altogether; when you exit, the installer allows you to either save the choices that you have made or discard them and restart from the beginning.
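Assuming the conf script is invoked through the run script, as the other subcommands in this guide are (a sketch, not the authoritative syntax):

```shell
# Launch the interactive installation wizard; guarded so the sketch runs
# even on a machine without the Setup bundle.
cd ~/pa-download-dir 2>/dev/null || true
if [ -x ./run ]; then
  ./run -c ~/config-dir conf
else
  echo "run script not found; download the Setup bundle first"
fi
```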

    The configuration options that the conf script prompts for are listed in Table 1:

    Table 1: conf Script Options



    Select components

    You can install one or more of the Infrastructure, Pathfinder, Insights, and EMS components. By default, all components are selected.


    In Paragon Automation Release 21.1, you must install the EMS and Paragon Insights components. Installation of the other components is optional, based on your requirements.

    Infrastructure Options

    These options are displayed only if you selected to install the Infrastructure component in the previous prompt.

    • Install Kubernetes Cluster—Installs the required Kubernetes cluster. If you are installing Paragon Automation on an existing cluster, you can clear this selection.

    • Install MetalLB LoadBalancer—Installs an internal load balancer for the Kubernetes cluster. By default, this option is already selected. If you are installing Paragon Automation on an existing cluster with preconfigured load balancing, you can clear this selection.

    • Install Chrony NTP Client—NTP is required to synchronize the clocks of the cluster nodes. If NTP is already installed and configured, you need not install Chrony. All nodes must run NTP or another time-synchronization service at all times.

    • Allow Master Scheduling—Master scheduling determines how the primary nodes are used. If you select this option, the primary nodes are used as both the control plane and worker nodes, which means that you can run application workloads on the primary nodes as well. This allows for better resource allocation and management in the cluster. However, you also run the risk that a misbehaving workload can exhaust resources and affect the stability of the whole cluster.

      If you do not allow master scheduling, the primary nodes are used only as the control plane. The primary nodes can have fewer resources, because they do not run any application workloads. However, if you have multiple primary nodes or nodes with high capacity and disk space, you risk wasting their resources by not utilizing them completely.

    Kubernetes Master Virtual IP address

    Configure a virtual IP address (VIP) for the primary nodes in a multi-primary node setup. The VIP must be in the same broadcast domain as the primary and worker nodes.


    This option is displayed only when the inventory file is updated with more than one primary node.

    Install Loadbalancer for Master Virtual IP address

    Install a load balancer for clusters with multiple primary nodes. The load balancer is responsible for the primary node’s VIP and is not used for externally accessible services. By default, the load balancer is internal, but you can also use external load balancers.

    For more information, see

    List of NTP servers

    Enter a comma-separated list of NTP servers.

    LoadBalancer IP address ranges

    Enter a comma-separated list of IP addresses or address ranges that are reserved for the load balancer. The externally accessible services are handled through MetalLB, which needs one or more IP address ranges that are accessible from outside the cluster. VIPs for the different servers are selected from these ranges of addresses. The address ranges can be (but need not be) in the same broadcast domain as the cluster nodes.

    For ease of management, because devices in the network topologies need access to the Insights services and to the PCE server, we recommend that you select the VIPs for these services from the same address range.

    For more information, see VIP Considerations.

    Virtual IP address for ingress controller

    Enter a VIP to be used for Web access of the Kubernetes cluster or the Paragon Automation user interface. This must be an unused IP address that is managed by the load balancer.

    Virtual IP address for Insights services

    Enter a VIP for Paragon Insights services. This must be an unused IP address that is managed by the load balancer.

    Virtual IP address for Pathfinder PCE server

    Enter a VIP to be used for Paragon Pathfinder PCE server access. This must be an unused IP address that is managed by the load balancer.

    Hostname of Main web application

    Enter an IP address or a hostname. If you enter an IP address, it must be the same as the VIP that you entered for the ingress controller. If you enter a hostname, it must resolve to the VIP for the ingress controller and must be preconfigured in the DNS server.

    BGP autonomous number of CRPD peer

    Set up the Containerized Routing Protocol Daemon (cRPD) autonomous systems and the nodes with which cRPD creates its BGP sessions.

    If Paragon Pathfinder is installed, you must configure a cRPD to peer with a BGP-LS router in the network to import the network topology. For a single autonomous system, you must configure the ASN of the network.

    Comma separated list of CRPD peers

    List of CRPD peers. The CRPD instance running as part of a cluster opens a BGP connection to the specified peer routers and imports topology data using BGP-LS.

    The following example shows the configuration of the BGP peer in the connected network topology.

    [edit groups northstar]
    root@system# show protocols bgp group northstar
    type internal;
    family traffic-engineering {
        unicast;
    }
    export TE;
    allow 10.xx.43.0/24;

    [edit groups northstar]
    root@system# show policy-options policy-statement TE
    from family traffic-engineering;
    then accept;

    In this example, the cluster hosts are in the 10.xx.43.0/24 network, and the router will accept BGP sessions from any host in this network.

  4. Click Yes to save the configuration information.

    This configures a basic setup, and the information is saved in the config.yml file in the config-dir directory.

  5. (Optional) For more advanced configuration of the cluster, use a text editor to manually edit the config.yml file.

    The config.yml file consists of an essential section at the beginning of the file that corresponds to the configuration options that the installation wizard prompts you to enter. The file also has an extensive list of sections under the essential section that allows you to enter complex configuration values directly in the file. Save and exit the file after you finish editing it.

  6. (Optional) If you want to deploy custom SSL certificates signed by a recognized certificate authority (CA), store the private key and certificate in the config-dir directory. Save the private key as ambassador.key.pem and the certificate as ambassador.cert.pem.

    By default, ambassador uses a locally generated certificate signed by the Kubernetes cluster-internal CA.


    If the certificate is about to expire, save the new certificate as ambassador.cert.pem in the same directory, and execute the ./run -c config-dir deploy -t ambassador command.

  7. Install the Paragon Automation cluster based on the information that you configured in the config.yml and inventory files.

    The time taken to install the configured cluster depends on the complexity of the cluster. A basic setup installation takes at least 20 minutes to complete.
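A sketch of this step, assuming the run script's deploy subcommand (shown earlier for certificate redeployment) drives the installation, and using the placeholder paths from the previous steps:

```shell
# Deploy the cluster from the saved config.yml and inventory files; guarded
# so the sketch runs even on a machine without the Setup bundle.
cd ~/pa-download-dir 2>/dev/null || true
if [ -x ./run ]; then
  ./run -c ~/config-dir deploy
else
  echo "run script not found; download the Setup bundle first"
fi
```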

  8. Log in to the worker nodes.

    Use a text editor to configure the following recommended information for Paragon Insights in the limits.conf and sysctl.conf files.

    Repeat this step for all worker nodes.
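The specific recommended values are not reproduced here. As an illustration only, entries in the two files take the following form (the values below are placeholders, not the recommended settings):

```
# /etc/security/limits.conf -- one limit per line:
#   <domain>  <type>  <item>   <value>
*       soft    nofile   65536

# /etc/sysctl.conf -- one kernel parameter per line,
# applied with "sysctl -p":
vm.max_map_count = 262144
```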

Log in to the Paragon Automation UI

After you install Paragon Automation, log in to the Paragon Automation UI.

  1. Open a browser, and in the URL field, enter either the hostname of the main web application or the VIP of the ingress controller that you configured in the installation wizard.
    For example, https://vip-of-ingress-controller-or-hostname-of-main-web-application. The Paragon Automation login page is displayed.
  2. For first-time access, enter admin as the username and Admin123! as the password to log in.
    The Set Password page is displayed. To access the Paragon Automation setup, you must set a new password.
  3. Set a new password that meets the password requirements.
    The password must be 6 to 20 characters long and must be a combination of uppercase letters, lowercase letters, numbers, and special characters. Confirm the new password, and click OK.
    The Dashboard page is displayed. You have successfully installed and logged in to the Paragon Automation UI.
  4. You need a license to activate the graphical user interface (GUI). Navigate to Administration > License Management to add a license. After you successfully add a license for a component (Paragon Insights, Paragon Pathfinder, or Paragon Planner), you can see the related GUI pages. The availability of features in Paragon Insights, Paragon Pathfinder, and Paragon Planner is based on the license that you purchased.
  5. Update the URL to access the Paragon Automation UI in Administration > Authentication > Portal Settings to ensure that the activation e-mail sent to users for activating their account contains the correct link to access the GUI. For more information, see Configure Portal Settings.