Install Single-Node Cluster on Ubuntu

You can also install Paragon Automation on a single node that acts as both the primary node and the worker node. Use a single-node setup only as a proof of concept (POC) or for lab deployments, not for production deployments.

Read the following topics to learn how to install Paragon Automation on a single node, with Ubuntu as the base OS. Figure 1 shows a summary of installation tasks at a high level. Ensure that you've completed all the preconfiguration and preparation steps described in Installation Prerequisites on Ubuntu before you begin installation.

Figure 1: Installation Sequence - Infographic

To view a higher-resolution image in your Web browser, right-click the image and open in a new tab. To view the image in PDF, use the zoom option to zoom in.

Download the Software

Prerequisite

  • You need a Juniper account to download the Paragon Automation software.

  1. Log in to the control host.
  2. Create a directory in which you'll download the software.

    We refer to this directory as pa-download-dir in this guide.

  3. Select the version number from the Version list on the Paragon Automation software download page at https://support.juniper.net/support/downloads/?p=pa.
  4. Download the Paragon Automation setup installation files to the download directory using the wget "http://cdn.juniper.net/software/file-download-url" command.
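
    For example, assuming the download directory is named pa-download-dir and using the placeholder download URL shown above (replace it with the actual file-download URL of each file listed on the download page):

    cd pa-download-dir
    wget "http://cdn.juniper.net/software/file-download-url"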

    The Paragon Automation setup installation bundle consists of the following scripts and TAR files to install each of the component modules:

    • davinci.tar.gz, which is the primary installer file.

    • infra.tar, which installs the Kubernetes infrastructure components including Docker and Helm.

    • ems.tar, which installs the base platform component.

    • northstar.tar, which installs the Paragon Pathfinder and Paragon Planner components.

    • healthbot.tar, which installs the Paragon Insights component.

    • paragon_ui.tar, which installs the Paragon Automation UI component.

    • run script, which executes the installer image.

    • addons.tar, which installs infrastructure components that are not part of the base Kubernetes installation. The infrastructure components include IAM, Kafka, ZooKeeper, cert-manager, Ambassador, Postgres, Metrics, Kubernetes Dashboard, Open Distro for Elasticsearch, Fluentd, Reloader, ArangoDB, and Argo.

    • helm-charts.tar, which contains all the helm-charts required for installation.

    • rhel-84-airgap.tar.gz, which installs Paragon Automation using the air-gap method only on nodes where the base OS is Red Hat Enterprise Linux (RHEL).

    Note:

    The Paragon Automation setup installation bundle includes a foghorn.tar file. However, Foghorn is not supported in Release 23.1.

Install Paragon Automation on a Single Node

  1. Make the run script executable in the pa-download-dir directory.
  2. Use the run script to create and initialize a configuration directory with the configuration template files.

    config-dir is a user-defined directory on the control host that contains configuration information for a particular installation. The init command automatically creates the config-dir directory if it does not exist. Alternatively, you can create the directory before you execute the init command.

    Ensure that you include the dot and slash (./) with the run command.

    If you are using the same control host to manage multiple installations of Paragon Automation, you can differentiate between installations by using differently named configuration directories.
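
    For example, a minimal sketch of these two steps, assuming that the init command follows the same ./run -c config-dir command pattern used by the other installer commands in this topic:

    cd pa-download-dir
    chmod +x run
    ./run -c config-dir init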

  3. Ensure that the control host can connect to the cluster node through SSH using the install-user account.

    Copy the private key that you generated in Configure SSH client authentication to the user-defined config-dir directory. The installer allows the Docker container to access the config-dir directory. The SSH key must be available in the directory for the control host to connect to the cluster nodes.

    Ensure that you include the dot "." with the copy command.
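
    For example, assuming the private key generated earlier is stored at ~/.ssh/id_rsa (a hypothetical path):

    cp ~/.ssh/id_rsa config-dir/.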

  4. Customize the inventory file, created under the config-dir directory, with the IP address or hostname of the single cluster node, as well as the username and authentication information required to connect to the node. The inventory file is in the YAML format and describes the cluster nodes on which Paragon Automation will be installed. You can edit the file using the inv command or a Linux text editor such as vi.
    1. Customize the inventory file using the inv command:

      The following table lists the configuration options that the inv command prompts for.

      Table 1: inv Command Options

      Kubernetes master nodes

      Enter the IP address of the single Kubernetes cluster node.

      Kubernetes worker nodes

      Leave this field empty for a single-node cluster.

      Local storage nodes

      The local storage node is prepopulated with the IP address of the single cluster node.

      This field defines the node that has disk space available for applications that require local storage. Services such as Postgres, ZooKeeper, and Kafka use local storage, that is, disk space partitioned inside export/local-volumes.

      This storage is different from Ceph storage.

      External registry nodes

      (Optional) Configure an existing external user registry.

      Kubernetes nodes' username (for example, root)

      Configure the user account and authentication methods that the installer uses to authenticate with the cluster node. The user account must be root or, for non-root users, an account with superuser (sudo) privileges.

      SSH private key file (optional)

      If you chose SSH key authentication for the control host to authenticate with the node during the installation process, configure the location of the ansible_ssh_private_key_file, that is, the id_rsa file in the config-dir directory, as "{{ config-dir }}/id_rsa".

      Kubernetes nodes' password (optional)

      If you chose password authentication for the control host to authenticate with the node during the installation process, enter the authentication password directly.

      Warning: The password is written in plain text. We do not recommend using this option for authentication.

      Kubernetes cluster name (optional)

      Enter a name for your Kubernetes cluster.

      Write inventory file?

      Click Yes to save the inventory information.

      For example:
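
      (A minimal sketch, assuming that the inv command follows the same ./run -c config-dir command pattern used by the other installer commands in this topic.)

      ./run -c config-dir inv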

    2. Alternatively, you can customize the inventory file manually using a text editor.

      Edit the following groups in the inventory file; an illustrative sketch follows these steps.

      1. Add the IP address of the single Kubernetes node in the master group only.

        The master group identifies the primary nodes, and the node group identifies the worker nodes. You cannot have the same IP address in both master and node groups.

        To create a single-primary-node setup, include the IP address or hostname of the node that will be acting as both primary and worker under the master group. Do not add any IP address or hostname under the node group.

      2. Add the address or hostname of the single Kubernetes node under the local_storage_nodes:children group under master. Do not add anything to the local_storage_nodes:children group under node.

      3. Configure the user account and authentication methods to authenticate the installer in the Ansible control host with the cluster node under the vars group.

      4. (Optional) Specify a name for your Kubernetes cluster in the kubernetes_cluster_name group.
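
      The following is a purely illustrative sketch of what these groups might look like for a single-node cluster after editing. The template generated by the init command defines the authoritative group names and layout, which may differ from this hypothetical example; the IP address, username, and cluster name shown here are placeholders.

      master:                       # single node acting as both primary and worker
        hosts:
          10.1.2.3:
      node:
        hosts:                      # leave empty for a single-node cluster
      local_storage_nodes:
        children:
          master:                   # local storage is provided by the single node
      vars:
        ansible_user: root
        ansible_ssh_private_key_file: "{{ config-dir }}/id_rsa"
      kubernetes_cluster_name: single-node-poc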

  5. Configure the installer using the conf command.

    The conf command runs an interactive installation wizard that enables you to choose the components to be installed and configure a basic Paragon Automation setup. The command populates the config.yml file with your input configuration. For advanced configuration, you must edit the config.yml file manually.

    Enter the information as prompted by the wizard. Use the cursor keys to move the cursor, use the space key to select an option, and use a or i to toggle selecting or clearing all options. Press Enter to move to the next configuration option. You can skip a configuration option by entering a period (.). You cannot go back and redo choices that you already made in the current workflow; to change them, exit the wizard, either saving the choices made so far or discarding them, and restart it from the beginning.

    The following table lists the configuration options that the conf command prompts for:

    Table 2: conf Command Options

    Select components

    You can install one or more of the Infrastructure, Pathfinder, Insights, and base platform components. By default, all components are selected.

    You can choose whether to install Pathfinder based on your requirements. However, you must install all the other components, apart from Foghorn.

    Note:

    Foghorn is not supported in Release 23.1. You must not select the Foghorn option. Installation fails if you select Foghorn.

    Infrastructure Options

    The wizard displays these options only if you selected to install the Infrastructure component at the preceding prompt.

    • Install Kubernetes Cluster—Install the required single-node Kubernetes cluster.

    • Install MetalLB LoadBalancer—Install an internal load balancer for the single-node Kubernetes cluster. By default, this option is already selected.

    • Install Nginx Ingress Controller—Install the Nginx Ingress Controller, which is a load-balancing proxy for the Pathfinder components.

    • Install Chrony NTP Client—Install the Chrony NTP client. The node must run NTP or some other time-synchronization protocol at all times. If NTP is already installed and configured, you need not install Chrony.

    • Allow Master Scheduling—Master scheduling determines how nodes acting as primary nodes are used. Master is another term for a node acting as primary.

      If you select this option, the primary nodes can also act as worker nodes, which means they not only act as control plane but can run application workloads as well. If you do not select this option, the primary nodes are used only as the control plane.

      Note:

      For single-node cluster installations, you must allow master scheduling. If you don't, installation will fail.

    List of NTP servers

    Enter a comma-separated list of NTP servers. The wizard displays this option only if you chose to install Chrony NTP at the preceding prompt.

    Virtual IP address(es) for ingress controller

    Enter a VIP address to be used for Web access of the Kubernetes cluster or the Paragon Automation user interface. This address must be an unused IP address that is managed by the MetalLB load balancer pool.

    Virtual IP address for Infrastructure Nginx Ingress Controller

    Enter a VIP address for the Nginx Ingress Controller. This address must be an unused IP address that is managed by the MetalLB load balancer pool. This address is used for NetFlow traffic.

    Virtual IP address for Insights services

    Enter a VIP address for Paragon Insights services. This address must be an unused IP address that is managed by the MetalLB load balancer pool.

    Virtual IP address for SNMP trap receiver (optional)

    Enter a VIP address for the SNMP trap receiver proxy only if this functionality is required.

    If you do not need this option, enter a dot (.).

    Pathfinder Options

    Select to install netflowd. You can configure a VIP address for netflowd or use a proxy for netflowd (the same as the VIP address for the Infrastructure Nginx Ingress Controller).

    If you choose not to install netflowd, you cannot configure a VIP address for netflowd.

    Use netflowd proxy

    Enter Y to use a netflowd proxy. This option appears only if you chose to install netflowd.

    If you chose to use a netflowd proxy, you need not configure a VIP address for netflowd. The VIP address for the Infrastructure Nginx Ingress Controller is used as the proxy for netflowd.

    Virtual IP address for Pathfinder Netflowd

    Enter a VIP address to be used for Paragon Pathfinder netflowd. This option appears only if you chose not to use a netflowd proxy.

    PCE Server Proxy

    Select the proxy mode for the PCE server. Select either None or Nginx-Ingress.
    Virtual IP address for Pathfinder PCE server

    Enter a VIP address to be used for Paragon Pathfinder PCE server access. This address must be an unused IP address that is managed by the load balancer.

    If you selected Nginx-Ingress as the PCE Server Proxy, this VIP address is not necessary. The wizard does not prompt you to enter this address, and PCEP uses the same address as the VIP address for the Infrastructure Nginx Ingress Controller.

    Note:

    The addresses for ingress controller, Infrastructure Nginx Ingress Controller, Insights services, and PCE server must be unique. You cannot use the same address for all four VIP addresses.

    All these addresses are listed automatically in the LoadBalancer IP address ranges option.

    LoadBalancer IP address ranges

    The LoadBalancer IP addresses are prepopulated from your VIP addresses range. You can edit these addresses. The externally accessible services are handled through MetalLB, which needs one or more IP address ranges that are accessible from outside the cluster. VIPs for the different servers are selected from these ranges of addresses.

    The address ranges can be (but need not be) in the same broadcast domain as the cluster nodes. For ease of management, because devices in the network topology need access to the Insights services and the PCE server, we recommend that you select the VIP addresses from the same range.

    For more information, see Virtual IP Address Considerations.

    Addresses can be entered as comma-separated values (CSV), as a range, or as a combination of both. For example:

    • 10.x.x.1, 10.x.x.2, 10.x.x.3

    • 10.x.x.1-10.x.x.3

    • 10.x.x.1, 10.x.x.3-10.x.x.5

    • 10.x.x.1-3 is not a valid format

    Is user external registry

    Configure an existing external user registry. For information on configuring external registries, see Configure External Docker Registry.

    Hostname of Main web application

    Enter a hostname for the ingress controller. You can configure the hostname as an IP address or as a fully qualified domain name (FQDN). For example, you can enter 10.12.xx.100 or www.paragon.juniper.net (DNS name). Do not include http:// or https://.

    Note:

    You will use this hostname to access the Paragon Automation Web UI from your browser. For example, https://hostname or https://IP-address.

    BGP autonomous system number of CRPD peer

    Set up the Containerized Routing Protocol Daemon (cRPD) autonomous systems and the nodes with which cRPD creates its BGP sessions.

    You must configure the autonomous system number of the network to allow cRPD to peer with one or more BGP-LS routers in the network. By default, the autonomous system number is 64500.

    Note:

    While you can configure the autonomous system number at the time of installation, you can also modify the cRPD configuration later. See #modify-crpd.

    Comma separated list of CRPD peers

    You must configure cRPD to peer with at least one BGP-LS router in the network to import the network topology. For a single autonomous system, configure the address of the BGP-LS router(s) that will peer with cRPD to provide topology information to Paragon Pathfinder. The cRPD instance running as part of the cluster initiates a BGP-LS connection to the specified peer router(s) and imports topology data once the session is established. If more than one peer is required, add the peers as comma-separated values, as a range, or as a combination of both, similar to how the LoadBalancer IP addresses are added.

    Note:

    While you can configure the peer IP addresses at the time of installation, you can also modify the cRPD configuration later, as described in #modify-crpd.

    You must configure the BGP peer routers to accept BGP connections initiated from cRPD. The BGP session will be initiated from cRPD using the address of the worker where the bmp pod is running, as the source address. In the single node deployment case, cRPD will be running on the only worker configured. If new workers are added to the cluster later, you must allow connections from the addresses of any of the workers (the current worker, and any additional worker).

    You can allow the range of IP addresses that the worker address belongs to (for example, 10.xx.43.0/24), or the specific IP address of the worker (for example, 10.xx.43.1/32). You could also configure this using the neighbor command combined with the passive option to prevent the router from attempting to initiate the connection.

    The following example shows the options to configure a Juniper device to allow BGP-LS connections from cRPD.

    The following commands configure the router to accept BGP-LS sessions from any host in the 10.xx.43.0/24 network, where the worker is connected. This will accommodate any worker that is added to the cluster later.

    [edit groups northstar]
    root@system# show protocols bgp group northstar
    type internal;
    family traffic-engineering {
        unicast;
    }
    export TE;
    allow 10.xx.43.0/24;
    
    [edit groups northstar]
    root@system# show policy-options policy-statement TE
    from family traffic-engineering;
    then accept;
    

    The following commands configure the router to accept BGP-LS sessions from 10.xx.43.1 only. Additional allow commands can be added later on, if new workers are added to the cluster.

    [edit protocols bgp group BGP-LS]
    root@vmx101# show | display set 
    set protocols bgp group BGP-LS family traffic-engineering unicast
    set protocols bgp group BGP-LS peer-as 11
    set protocols bgp group BGP-LS allow 10.xx.43.1
    set protocols bgp group BGP-LS export TE
    

    The following commands also configure the router to accept BGP-LS sessions from 10.xx.43.1 only. The passive option was added to prevent the router from attempting to initiate a BGP-LS session with cRPD. The router will wait for the session to be initiated by this BGP cRPD. Additional neighbor commands can be added later on, if new workers are added to the cluster.

    [edit protocols bgp group BGP-LS]
    root@vmx101# show | display set
    set protocols bgp group BGP-LS family traffic-engineering unicast
    set protocols bgp group BGP-LS peer-as 11
    set protocols bgp group BGP-LS neighbor 10.xx.43.1
    set protocols bgp group BGP-LS passive
    set protocols bgp group BGP-LS export TE
    

    You will also need to enable OSPF/ISIS and MPLS traffic engineering as shown:

    set protocols rsvp interface interface.unit
    
    set protocols isis interface interface.unit
    set protocols isis traffic-engineering igp-topology
    Or
    set protocols ospf area area interface interface.unit
    set protocols ospf traffic-engineering igp-topology
    
    set protocols mpls interface interface.unit
    set protocols mpls traffic-engineering database import igp-topology
    

    For more information, see https://www.juniper.net/documentation/us/en/software/junos/mpls/topics/topic-map/mpls-traffic-engineering-configuration.html.

    Finish and write configuration to file

    Click Yes to save the configuration information.

    This configures a basic setup, and the information is saved in the config.yml file in the config-dir directory.

    For example:
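
    (A minimal sketch, assuming that the conf command follows the same ./run -c config-dir command pattern used by the other installer commands in this topic.)

    ./run -c config-dir conf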
  6. (Optional) For more advanced configuration of the cluster, use a text editor to manually edit the config.yml file.

    The config.yml file consists of an essential section at the beginning of the file that corresponds to the configuration options that the installation wizard prompts you to enter. The file also has an extensive list of sections under the essential section that allow you to enter complex configuration values directly in the file.

    The following options are available; an illustrative excerpt follows the list.

    • Configure Open Distro, which is used to consolidate and index application logs. To configure Open Distro, set install_opendistro_es and install_fluentd to true.

    • Set the opendistro_es_admin_password password to log in to the Kibana application. Kibana is a visualization tool used to search logs using keywords and filters.

      By default, the username is preconfigured as admin in #opendistro_es_admin_user: admin, and the install_opendistro_es option is set to true to replace the Elasticsearch version with Open Distro. Use admin as the username and this password to log in to Kibana.

      In a production deployment, data is retained on the disks for seven days by default before being purged. You can reduce the number of days in opendistro_es_retain if your disk size is low.

      If you do not configure the opendistro_es_admin_password password, the installer generates a random password. You can retrieve the password using the command:

      # kubectl -n kube-system get secret opendistro-es-account -o jsonpath={..password} | base64 -d

    • Set the iam_skip_mail_verification configuration option to true for user management without SMTP by Identity Access Management (IAM). By default, this option is set to false for user management with SMTP. You must configure SMTP in Paragon Automation so that the Paragon Automation users can be notified when their account is created, activated, locked, or when the password is changed for their account.

    • Configure the callback_vip option with an IP address different from that of the VIP for the ingress controller. You can configure a separate IP address, which is a part of the MetalLB pool of addresses, to enable segregation of management and data traffic from the southbound and northbound interfaces. By default, callback_vip is assigned the same or one of the addresses of the ingress controller.
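
    For illustration, the options described in this step might appear in config.yml along the following lines. Treat this as a hedged sketch: the file generated by the installer defines the authoritative key names and layout, and the password and address values shown here are placeholders.

    install_opendistro_es: true
    install_fluentd: true
    #opendistro_es_admin_user: admin
    opendistro_es_admin_password: "replace-with-your-password"
    opendistro_es_retain: 7
    iam_skip_mail_verification: false
    callback_vip: 10.x.x.10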

    Save and exit the file after you finish editing it.

  7. (Optional) If you want to deploy custom SSL certificates signed by a recognized certificate authority (CA), store the private key and certificate in the config-dir directory. Save the private key as ambassador.key.pem and the certificate as ambassador.cert.pem.

    By default, Ambassador uses a locally generated certificate signed by the Kubernetes cluster-internal CA.
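
    For example, assuming your CA-signed certificate and key are in files named mycert.pem and mykey.pem (hypothetical names):

    cp mycert.pem config-dir/ambassador.cert.pem
    cp mykey.pem config-dir/ambassador.key.pem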

    Note:

    If the certificate is about to expire, save the new certificate as ambassador.cert.pem in the same directory, and execute the ./run -c config-dir deploy -t ambassador command.

  8. Install the Paragon Automation cluster based on the information that you configured in the config.yml and inventory files.

    The time required to install the configured cluster depends on the complexity of the cluster. A basic setup installation takes at least 45 minutes to complete.
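
    A minimal sketch of starting the deployment, assuming that the deploy command follows the same ./run -c config-dir command pattern shown elsewhere in this topic:

    ./run -c config-dir deploy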

    NTP synchronization is checked at the start of deployment. If clocks are out of sync, deployment fails. If you are installing Paragon Automation on an existing Kubernetes cluster, the deploy command upgrades the currently deployed cluster to the latest Kubernetes version. The command also upgrades the Docker CE version, if required. If Docker EE is already installed on the nodes, the deploy command does not overwrite it with Docker CE. When upgrading the Kubernetes version or the Docker version, the command performs the upgrade sequentially on one node at a time: each node is cordoned off and removed from scheduling, the upgrades are performed, Kubernetes is restarted on the node, and the node is finally uncordoned and brought back into scheduling.

  9. When deployment is completed, log in to the worker node.

    Use a text editor to configure the recommended settings for Paragon Insights in the limits.conf and sysctl.conf files. These values set the soft and hard limits that InfluxDB requires for memory and open files. If you do not set these limits, you might see errors such as “out of memory” or “too many open files” because of default system limits.
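
    The recommended values themselves are listed in the release documentation. Purely as an illustration of the kind of entries involved (the limits below are hypothetical, not Juniper's recommended values), the edits look along these lines:

    # /etc/security/limits.conf (illustrative values only)
    root soft nofile 65536
    root hard nofile 65536

    # /etc/sysctl.conf (illustrative value only)
    fs.file-max = 65536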



Log in to the Paragon Automation UI

After you install Paragon Automation, log in to the Paragon Automation UI.

  1. Open a browser and, in the URL field, enter either the hostname of the main Web application or the VIP address of the ingress controller that you configured in the installation wizard.
    For example, https://vip-of-ingress-controller-or-hostname-of-main-web-application. The Paragon Automation login page appears.
  2. For first-time access, enter admin as username and Admin123! as the password to log in. You must change the password immediately.
    The Set Password page appears. To access the Paragon Automation setup, you must set a new password.
  3. Set a new password that meets the password requirements.
    The password must be 6 to 20 characters long and must be a combination of uppercase letters, lowercase letters, numbers, and special characters. Confirm the new password, and click OK.
    The Dashboard page appears. You have successfully installed and logged in to the Paragon Automation UI.
  4. Update the URL to access the Paragon Automation UI in Administration > Authentication > Portal Settings to ensure that the activation e-mail sent to users for activating their account contains the correct link to access the GUI. For more information, see Configure Portal Settings.
    For high-level tasks that you can perform after you log in to the Paragon Automation GUI, see Paragon Automation Getting Started.