Install Multi-Node Cluster on CentOS

This topic describes the installation of Paragon Automation on a multi-node cluster. Figure 1 shows a high-level summary of the installation tasks. Ensure that you have completed all the preconfiguration and preparation steps described in Installation Prerequisites on CentOS before you begin installation.

Figure 1: Installation Sequence - Infographic


Download the Software

Prerequisite

  • You need a Juniper account to download the Paragon Automation software.

  1. Log in to the control host.
  2. Create a directory in which you download the software.

    This directory is referred to as pa-download-dir in this guide.

  3. From the Version drop-down list on the Paragon Automation software download page at https://support.juniper.net/support/downloads/?p=pa, select the version number.
  4. Download the Paragon Automation Setup installation files to the download folder using the wget "http://cdn.juniper.net/software/file-download-url" command.

    The Paragon Automation Setup installation bundle consists of the following scripts and tar files to install each of the component modules:

    • davinci.tar.gz, which is the primary installer file.

    • infra.tar, which installs the Kubernetes infrastructure components including Docker and Helm.

    • ems.tar, which installs the base platform component.

    • northstar.tar, which installs the Paragon Pathfinder and Paragon Planner components.

    • healthbot.tar, which installs the Paragon Insights component.

    • paragon_ui.tar, which installs the Paragon Automation UI component.

    • run script, which executes the installer image.

Install Paragon Automation

  1. Make the run script executable in the pa-download-dir directory.
  2. Use the run script to create and initialize a configuration directory with the configuration template files.

    config-dir is a user-defined directory on the control host that contains configuration information for a particular installation. The init command automatically creates the directory if it does not exist. Alternatively, you can create the directory before you execute the init command.

    Ensure that you include the "./" when issuing the run command.

    If you are using the same control host to manage multiple installations of Paragon Automation, you can differentiate between installations by using differently named configuration directories.
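
    For example, a minimal sketch of steps 1 and 2, assuming that you run the commands from inside pa-download-dir and name your configuration directory config-dir; the init subcommand follows the ./run -c config-dir convention shown later in this guide:

    # chmod +x run
    # ./run -c config-dir init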

  3. Ensure that the control host can connect to the cluster nodes through SSH using the install user account.

    Copy the private key that you generated in Install SSH Client Authentication to the user defined config-dir directory. The installer allows the Docker container to access the config-dir directory. The SSH key must be available in the directory for the control host to connect to the cluster nodes.

    Ensure that you include the dot "." when issuing the copy command.
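
    For example, assuming that the private key was generated as id_rsa in your home directory (the source path is illustrative):

    # cp ~/.ssh/id_rsa config-dir/.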

  4. Customize the inventory file, created under the config-dir directory, with the IP addresses or hostnames of the cluster nodes, as well as the usernames and authentication information that are required to connect to the nodes. The inventory file is in the YAML format and describes the cluster nodes on which Paragon Automation will be installed. You can edit the file using the inv command or a Linux text editor such as vi.

    1. Customize the inventory file using the inv command:
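
      For example, following the run-command convention used elsewhere in this guide (the exact subcommand syntax is an assumption):

      # ./run -c config-dir inv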

      The configuration options that the inv command prompts for are listed in the following table.

      Table 1: inv Command Options
      inv Command Prompts

      Description
      Kubernetes master nodes Enter IP addresses of the Kubernetes primary nodes.
      Kubernetes worker nodes Enter IP addresses of the Kubernetes worker nodes.
      Local storage nodes

      Define the nodes that have disk space available for applications. The local storage nodes are prepopulated from the primary and worker node IP addresses. You can edit these addresses. Enter IP addresses of the nodes on which you want to run applications that require local storage.

      Services such as Postgres, ZooKeeper, and Kafka use local storage, that is, disk space partitioned inside export/local-volumes. By default, worker nodes have local storage available. If you do not add primary nodes here, you can run only applications that do not require local storage on the primary nodes.

      This is different from Ceph storage.

      Kubernetes nodes' username (e.g. root) Configure the user account and authentication methods to authenticate the installer with the cluster nodes. The user account must be root; a non-root account must have superuser (sudo) privileges.
      SSH private key file (optional)

      If you chose SSH key authentication, configure the location of the private key that the control host uses to authenticate with the nodes during the installation process. Set ansible_ssh_private_key_file to the id_rsa file in the config-dir directory, that is, "{{ config-dir }}/id_rsa".

      Kubernetes nodes' password (optional)

      If you chose password authentication for the control host to authenticate with the nodes during the installation process, enter the authentication password directly.

      Warning: The password is written in plain text. We do not recommend using this option for authentication.

      Kubernetes cluster name (optional) Enter a name for your Kubernetes cluster.
      Write inventory file?

      Click Yes to save the inventory information.

      For example:
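
      The following is a hypothetical transcript; the prompts are those listed in Table 1, and the addresses and names are for illustration only:

      Kubernetes master nodes: 10.xx.43.1
      Kubernetes worker nodes: 10.xx.43.2, 10.xx.43.3
      Local storage nodes: 10.xx.43.2, 10.xx.43.3
      Kubernetes nodes' username (e.g. root): root
      SSH private key file (optional): {{ config-dir }}/id_rsa
      Kubernetes nodes' password (optional): .
      Kubernetes cluster name (optional): my-cluster
      Write inventory file? Yes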

    2. Alternatively, you can customize the inventory file manually using a text editor.

      Edit the following groups in the inventory file. A sketch of the resulting file follows these steps.

      1. Add the IP addresses of the Kubernetes primary and worker nodes of the cluster.

        The master group identifies the primary nodes, and the node group identifies the worker nodes. The same IP address cannot be in both master and node groups.

        To create a multi-primary setup, list the addresses or hostnames of all the nodes that will be acting as primary under the master group. Add the addresses or hostnames of the nodes that will be acting as workers under the node group.

      2. Define the nodes that have disk space available for applications under the local_storage_nodes:children group.

      3. Configure the user account and authentication methods to authenticate the installer on the Ansible control host with the cluster nodes under the vars group.

      4. (Optional) Specify a name for your Kubernetes cluster in the kubernetes_cluster_name group.
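
      Putting these groups together, a hypothetical inventory file might look like the following sketch. The group names come from the steps above; the exact YAML layout of your template, and the variable names under vars, may differ:

        master:
          hosts:
            10.xx.43.1:
        node:
          hosts:
            10.xx.43.2:
            10.xx.43.3:
        local_storage_nodes:
          children:
            node:
        vars:
          ansible_user: root
          ansible_ssh_private_key_file: "{{ config-dir }}/id_rsa"
        kubernetes_cluster_name: my-cluster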

  5. Configure the installer using the conf command.

    The conf command runs an interactive installation wizard that allows you to choose the components to be installed and configure a basic Paragon Automation setup. The command populates the config.yml file with your input configuration. For advanced configuration, you must edit the config.yml file manually.
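
    For example, following the same run-command convention as the other subcommands in this guide (the exact syntax is an assumption):

    # ./run -c config-dir conf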

    Enter the information as prompted by the wizard. Use the cursor keys to move the cursor, the space key to select an option, and a or i to toggle selecting or clearing all options. Press Enter to move to the next configuration option. You can skip a configuration option by entering a period (.). You cannot go back and redo choices that you already made in the current workflow; to re-enter your choices, exit the wizard (saving the choices you have made so far, if you want to keep them) and restart it from the beginning.

    The configuration options that the conf command prompts for are listed in the following table:

    conf Command Prompts

    Description/Options

    Select components

    You can install one or more of the Infrastructure, Pathfinder, Insights, and base platform components. By default, all components are selected.

    Installation of the Pathfinder component is optional, based on your requirements. All other components must stay selected and be installed.

    Infrastructure Options

    These options are displayed only if you selected to install the Infrastructure component in the previous prompt.

    • Install Kubernetes Cluster—Install the required Kubernetes cluster. If you are installing Paragon Automation on an existing cluster, you can clear this selection.

    • Install MetalLB LoadBalancer—Install an internal load balancer for the Kubernetes cluster. By default, this option is already selected. If you are installing Paragon Automation on an existing cluster with preconfigured load balancing, you can clear this selection.

    • Install Nginx Ingress Controller—Install the Nginx Ingress Controller, which is a load-balancing proxy for the Pathfinder components.
    • Install Chrony NTP Client—Install Chrony NTP. NTP is required to synchronize the clocks of the cluster nodes. If NTP is already installed and configured, you need not install Chrony. All nodes must run NTP or some other time-synchronization service at all times.

    • Allow Master Scheduling—Master scheduling determines how the nodes acting as primary are used. Master is another term for a node acting as primary.

      If you select this option, the primary nodes can also act as worker nodes, which means they not only act as control plane but can run application workloads as well. If you do not select master scheduling, the primary nodes are used only as the control plane.

      Master scheduling makes the resources of the nodes acting as primary available for workloads. However, you run the risk that a misbehaving workload exhausts resources on a primary node and affects the stability of the whole cluster. Without master scheduling, if you have multiple primary nodes with high capacity and disk space, you risk wasting their resources by not utilizing them completely.

      Note:

      This option is required for Ceph storage redundancy.

    List of NTP servers

    Enter a comma-separated list of NTP servers. This option is displayed only if you chose to install Chrony NTP.

    Kubernetes Master Virtual IP address

    Enter a VIP for the Kubernetes API Server for a multi-primary node deployment only. The VIP must be in the same Layer 2 domain as the primary nodes. This VIP is not part of the LoadBalancer pool of VIPs.

    This option is presented only when multiple primary nodes have been configured in the inventory file (multi-primary installation).

    Install LoadBalancer for Master Virtual IP address

    (Optional) Select to install keepalived LoadBalancer for the Master VIP.

    This option is presented only when multiple primary nodes have been configured in the inventory file (multi-primary installation).

    Virtual IP address(es) for ingress controller

    Enter a VIP to be used for Web access of the Kubernetes cluster or the Paragon Automation user interface. This must be an unused IP address that is managed by the MetalLB load balancer pool.

    Virtual IP address for Infrastructure Nginx Ingress Controller

    Enter a VIP for the Nginx Ingress Controller. This must be an unused IP address that is managed by the MetalLB load balancer pool. This address is used for NetFlow traffic.

    Virtual IP address for Insights services

    Enter a VIP for Paragon Insights services. This must be an unused IP address that is managed by the MetalLB load balancer pool.

    Virtual IP address for SNMP trap receiver (optional)

    Enter a VIP for the SNMP trap receiver proxy only if this functionality is required. If you do not need this option, enter a dot ".".

    PCE Server Proxy

    Select the proxy mode for the PCE server: NONE, HA proxy, or Nginx-Ingress.

    Virtual IP address for Pathfinder PCE server

    Enter a VIP to be used for Paragon Pathfinder PCE server access. This must be an unused IP address that is managed by the load balancer.

    If you selected Nginx-Ingress or HA proxy as the PCE Server Proxy, this VIP is not necessary. You will not be prompted for this address, and PCEP will use the same address as the VIP for the Infrastructure Nginx Ingress Controller.

    Note:

    The addresses for the ingress controller, Infrastructure Nginx Ingress Controller, Insights services, and PCE server must be unique; you cannot use the same address for more than one of these four VIPs.

    All these addresses are listed automatically in the LoadBalancer IP address ranges option.

    LoadBalancer IP address ranges

    The LoadBalancer IP addresses are prepopulated from your VIP addresses range. You can edit these addresses. The externally accessible services are handled through MetalLB, which needs one or more IP address ranges that are accessible from outside the cluster. VIPs for the different servers are selected from these ranges of addresses.

    The address ranges can be (but need not be) in the same broadcast domain as the cluster nodes. For ease of management, because devices in the network topology need access to the Insights services and the PCE server, we recommend that the VIPs for these services be selected from the same range.

    For more information, see Virtual IP Address Considerations.

    Addresses can be entered as comma-separated values, as a range, or as a combination of both. For example:

    • 10.x.x.1, 10.x.x.2, 10.x.x.3

    • 10.x.x.1-10.x.x.3

    • 10.x.x.1, 10.x.x.3-10.x.x.5

    • Note that 10.x.x.1-3 is not a valid format.

    Hostname of Main web application

    Enter a hostname for the ingress controller. This can be configured as an IP address or as a hostname (FQDN). For example, you can enter 10.12.xx.100 or www.paragon.juniper.net (DNS name). Do not include http:// or https://.

    Note:

    You will use this hostname to access the Paragon Automation Web UI from your browser. For example, https://hostname or https://IP-address.

    BGP autonomous system number of CRPD peer

    Set up the Containerized Routing Protocol Daemon (cRPD) autonomous system and the nodes with which cRPD creates its BGP sessions.

    You must configure the autonomous system number of the network to allow cRPD to peer with one or more BGP-LS routers in the network. By default, the autonomous system number is 64500.

    Note:

    While you can configure the autonomous system number at the time of installation, you can also modify the cRPD configuration later. See Modify cRPD Configuration.

    Comma separated list of CRPD peers

    You must configure cRPD to peer with at least one BGP-LS router in the network to import the network topology. For a single autonomous system, configure the address of the BGP-LS router(s) that will peer with cRPD to provide topology information to Paragon Pathfinder. The cRPD instance running as part of the cluster initiates a BGP-LS connection to the specified peer router(s) and imports topology data once the session is established. If more than one peer is required, add the peers as comma-separated values, as a range, or as a combination of both, similar to how the LoadBalancer IP addresses are added.

    Note:

    While you can configure the peer IP addresses at the time of installation, you can also modify the cRPD configuration later, as described in Modify cRPD Configuration.

    You must configure the BGP peer routers to accept BGP connections initiated from cRPD. The BGP session is initiated from cRPD using the address of the worker node where the bmp pod is running as the source address.

    Because cRPD could be running on any of the workers at a given time, you must allow connections from any of these addresses. You can allow the range of IP addresses that the worker addresses belong to (for example, 10.xx.43.0/24), or the specific IP address of each worker (for example, 10.xx.43.1/32, 10.xx.43.2/32, and 10.xx.43.3/32). You can also configure this using the neighbor command combined with the passive option to prevent the router from attempting to initiate the connection.

    If you choose to enter each individual worker address, either with the allow command or the neighbor command, make sure that you include all the workers, because any worker could be running cRPD at a given time. Only one BGP session is initiated. If the node running cRPD fails, the bmp pod that contains the cRPD container is re-created on a different node, and the BGP session is re-initiated.

    The following example shows the options to configure a Juniper device to allow BGP-LS connections from cRPD.

    The following commands configure the router to accept BGP-LS sessions from any host in the 10.xx.43.0/24 network, where all the workers are connected.

    [edit groups northstar]
    root@system# show protocols bgp group northstar
    type internal;
    family traffic-engineering {
        unicast;
    }
    export TE;
    allow 10.xx.43.0/24;
    
    [edit groups northstar]
    root@system# show policy-options policy-statement TE
    from family traffic-engineering;
    then accept;
    

    The following commands configure the router to accept BGP-LS sessions from 10.xx.43.1, 10.xx.43.2, and 10.xx.43.3 (the addresses of the three workers in the cluster) only.

    [edit protocols bgp group BGP-LS]
    root@vmx101# show | display set 
    set protocols bgp group BGP-LS family traffic-engineering unicast
    set protocols bgp group BGP-LS peer-as 11
    set protocols bgp group BGP-LS allow 10.xx.43.1
    set protocols bgp group BGP-LS allow 10.xx.43.2
    set protocols bgp group BGP-LS allow 10.xx.43.3
    set protocols bgp group BGP-LS export TE

    The BGP session is initiated by cRPD. Only one session is established at a time, and it is initiated using the address of the worker node currently running cRPD. If you choose to configure specific IP addresses instead of using the allow option, configure the addresses of all the worker nodes for redundancy.

    The following commands also configure the router to accept BGP-LS sessions from 10.xx.43.1, 10.xx.43.2, and 10.xx.43.3 only (the addresses of the three workers in the cluster). The passive option prevents the router from attempting to initiate a BGP-LS session with cRPD; the router waits for the session to be initiated from any of these three addresses.

    [edit protocols bgp group BGP-LS]
    root@vmx101# show | display set
    set protocols bgp group BGP-LS family traffic-engineering unicast
    set protocols bgp group BGP-LS peer-as 11
    set protocols bgp group BGP-LS neighbor 10.xx.43.1
    set protocols bgp group BGP-LS neighbor 10.xx.43.2
    set protocols bgp group BGP-LS neighbor 10.xx.43.3
    set protocols bgp group BGP-LS passive
    set protocols bgp group BGP-LS export TE

    You will also need to enable OSPF/ISIS and MPLS traffic engineering as shown:

    set protocols rsvp interface interface.unit
    
    set protocols isis interface interface.unit
    set protocols isis traffic-engineering igp-topology
    Or
    set protocols ospf area area interface interface.unit
    set protocols ospf traffic-engineering igp-topology
    
    set protocols mpls interface interface.unit
    set protocols mpls traffic-engineering database import igp-topology
    

    For more information, see https://www.juniper.net/documentation/us/en/software/junos/mpls/topics/topic-map/mpls-traffic-engineering-configuration.html.

    If you need to modify the cRPD configuration later (for example, to add a new neighbor), you need to modify the BMP configuration file, kube-cfg.yml, located in the /etc/kubernetes/po/bmp/ directory on the Paragon Automation primary node.

    For example:

    root@primary:/etc/kubernetes/po/bmp# vi kube-cfg.yml
    
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: northstar
      name: crpd-config
    data:
      config: |
        protocols {
          bgp {
            group northstar {
              neighbor 172.16.xx.105;
              neighbor 10.1.x.2;
            }
          }
        }
    
    Finish and write configuration to file

    Click Yes to save the configuration information.

    This configures a basic setup, and the information is saved in the config.yml file in the config-dir directory.

    For example:
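
    A hypothetical excerpt of the generated config.yml, showing some of the options discussed in step 6; the password value is illustrative:

    install_opendistro_es: true
    #opendistro_es_admin_user: admin
    opendistro_es_admin_password: my-secret-password
    opendistro_es_retain: 7
    iam_skip_mail_verification: false
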
  6. (Optional) For more advanced configuration of the cluster, use a text editor to manually edit the config.yml file.

    The config.yml file consists of an essential section at the beginning of the file that corresponds to the configuration options that the installation wizard prompts you to enter. The file also has an extensive list of sections, below the essential section, that allow you to enter complex configuration values directly in the file.

    The following options are available.

    • Set opendistro_es_admin_password, the password used to log in to the Kibana application. Open Distro is used to consolidate and index application logs, and Kibana is the visualization tool that enables you to search logs using keywords and filters.

      By default, the username is preconfigured as admin in #opendistro_es_admin_user: admin, and the install_opendistro_es option is set to true to replace the Elasticsearch version with Open Distro. Use admin as the username and this password to log in to Kibana.

      By default, in a production deployment, data is retained on the disks for seven days before being purged. You can edit opendistro_es_retain to a smaller number of days if your disk size is low.

      If you do not configure the opendistro_es_admin_password, the installer will generate a random password. You can retrieve the password using the command:

      # kubectl -n kube-system get secret opendistro-es-account -o jsonpath={..password} | base64 -d

    • Set the iam_skip_mail_verification configuration option to true to allow Identity Access Management (IAM) to manage users without SMTP. By default, this option is set to false for user management with SMTP. You must configure SMTP in Paragon Automation so that Paragon Automation users can be notified when their account is created, activated, or locked, or when the password for their account is changed.

    • Configure the callback_vip option with an IP address different from that of the VIP for the ingress controller. You can configure a separate IP address, which is part of the MetalLB pool of addresses, to segregate management and data traffic on the southbound and northbound interfaces. By default, callback_vip is assigned the same address as, or one of the addresses of, the ingress controller.

    • If you want to use an interface other than the default interface for inter-cluster communication, set the kubernetes_system_interface variable. The current setting is "{{ ansible_default_ipv4.interface }}" which is the interface used by the default route. The kubernetes_system_interface variable configures the Kubernetes API server and Calico.

      To view the default interface, run this command on a primary node:
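
      For example, assuming that the iproute2 tools are installed (the exact command and output may differ in your environment):

      # ip route show default
      default via 10.xx.43.254 dev ens3 proto static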

      In this example, ens3 is the default interface for this machine.

      If you want to use an interface different from the default one and the same interface can be used on all cluster nodes, configure the kubernetes_system_interface in the config.yml file. For example:

      kubernetes_system_interface: ens4

      If you want to use an interface different from the default one but the interface is different on different nodes, you must remove kubernetes_system_interface from the config.yml file. Instead, configure the interface names in the inventory file. For example:
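
      A hypothetical inventory excerpt, assuming that host entries accept per-host variables; the interface names are for illustration only:

        master:
          hosts:
            10.xx.43.1:
              kubernetes_system_interface: ens4
        node:
          hosts:
            10.xx.43.2:
              kubernetes_system_interface: ens5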

      Note that calico_ip_autodetect is set to "interface={{ kubernetes_system_interface }}"; it takes the same value as kubernetes_system_interface and does not need to be changed explicitly if the default interface is changed.

    Save and exit the file after you finish editing it.

  7. (Optional) If you want to deploy custom SSL certificates signed by a recognized certificate authority (CA), store the private key and certificate in the config-dir directory. Save the private key as ambassador.key.pem and the certificate as ambassador.cert.pem.

    By default, ambassador uses a locally generated certificate signed by the Kubernetes cluster-internal CA.

    Note:

    If the certificate is about to expire, save the new certificate as ambassador.cert.pem in the same directory, and execute the ./run -c config-dir deploy -t ambassador command.

  8. Install the Paragon Automation cluster based on the information that you configured in the config.yml and inventory files.
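
    For example:

    # ./run -c config-dir deploy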

    The time required to install the configured cluster depends on the complexity of the cluster. A basic setup installation takes at least 45 minutes to complete.

    The installer checks NTP synchronization at the beginning of installation. If clocks are out of sync, installation will fail.

    For multi-primary node deployments only, the installer checks the disk IOPS at the beginning of installation. If the IOPS is below 300, installation will fail. To disable the disk IOPS check, use the # ./run -c config-dir deploy -e ignore_iops_check=yes command and rerun the deployment.

    If you are installing Paragon Automation on an existing Kubernetes cluster, the deploy command upgrades the currently deployed cluster to the latest Kubernetes version. The command also upgrades the Docker CE version, if required. If Docker EE is already installed on the nodes, the deploy command does not overwrite it with Docker CE. When upgrading the Kubernetes version or the Docker version, the command performs the upgrade sequentially on one node at a time. Each node is cordoned off and removed from scheduling, the upgrades are performed, Kubernetes is restarted on the node, and the node is finally uncordoned and brought back into scheduling.

  9. When deployment is completed, log in to the worker nodes.

    Use a text editor to configure the following recommended settings for Paragon Insights in the limits.conf and sysctl.conf files. These values set the soft and hard limits that InfluxDB requires. If you do not set these limits, you might see errors such as “out of memory” or “too many open files” because of default system limits.
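
    A hypothetical sketch of such settings; the file entries, user name, and values are illustrative only and are not taken from the Paragon Insights documentation:

    # /etc/security/limits.conf -- raise the soft and hard open-file limits
    # for the user that runs InfluxDB (user name illustrative)
    influxdb  soft  nofile  65536
    influxdb  hard  nofile  65536

    # /etc/sysctl.conf -- raise the system-wide file-handle limit
    fs.file-max = 65536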



Log in to the Paragon Automation UI

After you install Paragon Automation, log in to the Paragon Automation UI.

  1. Open a browser and, in the URL field, enter the hostname of the main Web application or the VIP of the ingress controller that you configured in the installation wizard.
    For example, https://vip-of-ingress-controller-or-hostname-of-main-web-application. The Paragon Automation login page is displayed.
  2. For first-time access, enter admin as the username and Admin123! as the password to log in. You must change the password immediately.
    The Set Password page is displayed. To access the Paragon Automation setup, you must set a new password.
  3. Set a new password that meets the password requirements.
    The password must be 6 to 20 characters long and must be a combination of uppercase letters, lowercase letters, numbers, and special characters. Confirm the new password, and click OK.
    The Dashboard page is displayed. You have successfully installed and logged in to the Paragon Automation UI.
  4. Update the URL to access the Paragon Automation UI in Administration > Authentication > Portal Settings to ensure that the activation e-mail sent to users for activating their account contains the correct link to access the GUI. For more information, see Configure Portal Settings.
    For high-level tasks that you can perform after you log in to the Paragon Automation GUI, see Paragon Automation Getting Started.

Modify cRPD Configuration

During installation of Paragon Automation, you can configure the address of the BGP-LS router(s) that peer with cRPD to provide topology information to Paragon Pathfinder. You can also modify the cRPD configuration after installation, in the following ways:

  • You can edit the BMP configuration file (kube-cfg.yml) located in the Paragon Automation primary node /etc/kubernetes/po/bmp/ directory, and then apply the new configuration.

    To edit the BMP configuration file and add a new neighbor (a command sketch follows these steps):

    1. Edit the kube-cfg.yml file.

    2. Apply the changes in the kube-cfg.yml file.

    3. Connect to the cRPD container.

    4. Verify that the changes are applied.

      Note:

      Any additional neighbor will be added under a configuration group named extra. Hence, you need to add "| display inheritance" to see the new neighbor.
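
    A sketch of these steps, assuming that kubectl is available on the primary node; the bmp pod name (bmp-0) and container name (crpd) are hypothetical:

      # vi /etc/kubernetes/po/bmp/kube-cfg.yml
      # kubectl apply -f /etc/kubernetes/po/bmp/kube-cfg.yml
      # kubectl exec -it -n northstar bmp-0 -c crpd -- cli
      root@bmp-0> show configuration protocols bgp | display inheritance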

  • Connect to the cRPD container and edit the configuration like you would on any Junos device.

    To connect to cRPD and add a new neighbor or change the autonomous system number (a command sketch follows these steps):

    1. Connect to the cRPD container and enter configuration mode.

    2. View the current BGP configuration and autonomous system number.

    3. Change the autonomous system number.

    4. Add a new neighbor.

      Note:

      You could also add the neighbor under the configuration group extra, but if the pod is restarted, this change will be overwritten by the configuration in the kube-cfg.yml file.

    5. Commit your configuration changes.
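
    A minimal sketch of these steps, using the same hypothetical pod and container names as above; the autonomous system number and neighbor address are illustrative:

      # kubectl exec -it -n northstar bmp-0 -c crpd -- cli
      root@bmp-0> configure
      root@bmp-0# show protocols bgp
      root@bmp-0# set routing-options autonomous-system 64501
      root@bmp-0# set protocols bgp group northstar neighbor 10.1.x.3
      root@bmp-0# commit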