Installation Prerequisites

To successfully install and deploy a Paragon Automation cluster, you must have a dedicated machine that functions as the control host and installs the distribution software on a number of cluster nodes. You download the distribution software onto the control host, and then create and configure the installation files and run the installation from the control host. The control host must have internet access to download the packages. The cluster nodes must also have internet access to download any additional software, such as Docker and OS patches.

Before you download and install the distribution software, you must preconfigure the control host and the cluster nodes as described in this topic.

Prepare the Control Host

The control host is a dedicated machine that is used to orchestrate the installation of a Paragon Automation cluster. You must download the installer packages on the control host. The control host carries out the Ansible operations that run the software installer and install the software on the cluster nodes, as illustrated in Figure 1. The control host also installs any additional packages, such as optional OS packages, Docker, and Elasticsearch, on the cluster nodes. The control host requires internet access to download software. All microservices, including third-party microservices, are downloaded onto the control host, and do not access any public registries during installation. The control host can be on a different broadcast domain from the cluster nodes, but it needs SSH access to the nodes.

Figure 1: Control Host

Once installation is complete, the control host plays no role in the functioning of the cluster. However, you need the control host to update the software or any component, make changes to the cluster, or reinstall it if a node fails. You can also use the control host to archive configuration files. We recommend that you keep the control host available after installation and do not repurpose it.

Ensure that the control host meets the following prerequisites:

  • A base OS of any Linux distribution that allows installation of Docker CE or Docker EE must be installed.

  • Docker must be installed and configured on the control host to implement the Linux container environment. Paragon Automation Release 21.2 supports Docker EE in addition to Docker CE. The Docker version you choose to install in the control host is independent of the Docker version you plan to use in the cluster nodes.

    If you want to install Docker EE, ensure that you have a trial or subscription before installation. For more information on Docker EE, supported systems, and installation instructions, see https://www.docker.com/blog/docker-enterprise-edition/.

    To download and install Docker CE, perform the following steps:

    • On RHEL:

      The following commands will install the latest stable version on x86 machines.
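      A representative sequence, based on Docker's published CE installation steps (the repository URL and package names are Docker's upstream defaults, shown here as an example; adapt them to your environment):

        # dnf install -y dnf-plugins-core      # provides the config-manager subcommand
        # dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo      # add Docker's upstream repository
        # dnf install -y docker-ce docker-ce-cli containerd.io      # install the Docker engine, CLI, and container runtime
        # systemctl enable --now docker      # start Docker and enable it at boot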

      To verify that Docker is installed and running, use the # docker run hello-world command.

      To verify the Docker version installed, use the # docker version or # docker --version commands.

    • On Ubuntu OS:

      The following commands install the latest stable version on x86 machines.
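      A representative sequence, based on Docker's published CE installation steps for Ubuntu (the repository URL and GPG key location are Docker's upstream defaults, shown here as an example; adapt them to your environment):

        # apt-get update
        # apt-get install -y ca-certificates curl gnupg software-properties-common      # prerequisites for adding the repository
        # curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -      # trust Docker's package signing key
        # add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"      # add Docker's upstream repository
        # apt-get update
        # apt-get install -y docker-ce docker-ce-cli containerd.io      # install the Docker engine, CLI, and container runtime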

      To verify that Docker is installed and running, use the # docker run hello-world command.

      To verify the Docker version installed, use the # docker version or # docker --version commands.

      For full instructions and more information, see https://docs.docker.com/engine/install/ubuntu/.

    • On CentOS:

      The following commands will install the latest stable version on x86 machines.
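      A representative sequence, based on Docker's published CE installation steps for CentOS (the repository URL and package names are Docker's upstream defaults; adapt them to your environment):

        $ sudo yum install -y yum-utils      # provides the yum-config-manager tool
        $ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo      # add Docker's upstream repository
        $ sudo yum install -y docker-ce docker-ce-cli containerd.io      # install the Docker engine, CLI, and container runtime
        $ sudo systemctl enable --now docker      # start Docker and enable it at boot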

      To verify that Docker is installed and running, use the $ docker run hello-world command.

      To verify the Docker version installed, use the $ docker version or $ docker --version commands.

      For full instructions and more information, see https://docs.docker.com/engine/install/centos/.

  • The installer running on the control host must be able to connect to the cluster nodes through SSH using the install user account.
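    If you use key-based authentication, a minimal way to set this up (the user name and node address are placeholders) is to generate a key pair on the control host and copy the public key to each cluster node:

      $ ssh-keygen -t rsa                          # generate an SSH key pair on the control host
      $ ssh-copy-id <install-user>@<node-address>  # copy the public key; repeat for each cluster node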

  • The wget package must be installed. Use the wget tool to download the Paragon Automation distribution software.

    • On RHEL, use the $ dnf install wget command.

    • On Ubuntu OS, use the # apt install wget command.

    • On CentOS, use the $ yum install wget command.

Prepare Cluster Nodes

Paragon Automation is installed on a Kubernetes cluster of one or more primary nodes (control plane nodes) and one or more worker nodes (compute nodes) as illustrated in Figure 2. The control plane manages the cluster, and the compute nodes run the application workloads. The primary and worker nodes are collectively called the cluster nodes.

Figure 2: Cluster Nodes

Ensure that the cluster nodes meet the following prerequisites:

  • Each cluster node must have a static, unique IP address. We recommend that all the nodes be in the same broadcast domain. The node hostnames can contain only lowercase letters and no special characters other than “-” and “.”.

    For cluster nodes in different broadcast domains, see Load Balancing Configuration for additional load-balancing configuration.

    The cluster nodes need not be accessible from outside the cluster. Access to the Kubernetes cluster is managed by separate virtual IP addresses. For more information, see Virtual IP Address Considerations.

  • A base OS of Ubuntu version 18.04.4 or later, or RHEL version 8.3, or CentOS version later than 7.1, must be installed on each node. To verify the installed OS version, use the lsb_release -a command.

  • The cluster nodes must have raw storage block devices with unpartitioned disks or unformatted disk partitions attached. The nodes can also be partitioned such that a portion of the available disk space is used for the root partition and other file systems. The remaining space must be unpartitioned, with no file systems, and reserved for Ceph to use. For redundancy, you must have a minimum of three cluster nodes with storage space attached. Installation fails if disks are unavailable. For more information, see Disk Partition Requirements.

    Ceph requires a relatively recent kernel. If your Linux kernel is very old, consider upgrading it or reinstalling a newer one. For a list of minimum Linux kernel versions supported by Ceph for your OS, see https://docs.ceph.com/en/latest/start/os-recommendations.
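    To confirm that a disk or partition is available for Ceph, you can list the block devices and any file systems on them; devices or partitions with an empty FSTYPE column are unformatted. You can also check the running kernel version:

      $ lsblk -f    # list block devices, partitions, and file systems; an empty FSTYPE column means unformatted
      $ uname -r    # display the running kernel version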

  • All nodes must run NTP or another time-synchronization protocol at all times.
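    On systemd-based distributions, one way to verify that time synchronization is active is to check the system clock status (chronyc or ntpq status commands work as well):

      $ timedatectl status    # look for "System clock synchronized: yes"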

  • The install user must be a root user or have superuser (sudo) privileges.

  • An SSH server must be running on all nodes. The installer running on the control host connects to the cluster nodes through SSH using the install user account. You might need to edit the sshd_config file to allow root login, depending on the authentication method selected. See 2.d of the installation process.
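    For example, to allow root login, a minimal change is to set the PermitRootLogin directive in /etc/ssh/sshd_config and then restart the SSH daemon:

      # vi /etc/ssh/sshd_config    # set PermitRootLogin yes
      # systemctl restart sshd     # on Ubuntu, the service is named ssh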

  • Select one of the following Docker versions to install.

    • Docker CE—If you want to use Docker CE, you need not explicitly install it on the cluster nodes. The deploy script installs Docker CE on the nodes during installation of Paragon Automation.

    • Docker EE—If you want to use Docker EE, you must install Docker EE on all the cluster nodes. If you install Docker EE on the nodes, the deploy script uses the installed version and does not attempt to install Docker CE in its place. For more information on Docker EE, supported systems, download, and installation instructions, see https://www.docker.com/blog/docker-enterprise-edition/.

    The Docker version you choose to install in the cluster nodes is independent of the Docker version installed in the control host.

  • Kubernetes requires iptables rules to accept forwarding traffic. Installing Docker might create firewall rules that prevent forwarding traffic. Configure the iptables firewall settings to accept all packets by default using the iptables -P FORWARD ACCEPT command.

    Inter-cluster communication between the nodes must be allowed. In particular, the ports listed in Table 1 must be kept open for communication.

    Table 1: Ports That Must Be Allowed by External Firewalls

    Port Numbers                              Purpose
    6443, 2379-2380, 10250, 10252, 10255      Kubernetes internal communication (TCP)
    30000-32767                               Kubernetes port assignment range

    UI access:
    22                                        SSH daemon
    80                                        HTTP
    443                                       HTTPS
    7000                                      Paragon Planner communications

    Communication between network elements:
    7804                                      NETCONF callback
    161                                       SNMP (UDP)

  • Python must be installed on the cluster nodes. If Python is not preinstalled with your OS, install Python 3 on the cluster nodes:

    • On RHEL:

      To install Python 3, use the # yum install python3 command.

      To verify the Python version installed, use the $ python3 --version command.

    • On Ubuntu OS:

      To install Python 3.8, use the # apt install python3.8 command.

      To verify the Python version installed, use the # python3 -V or # python3 --version commands.

    • On CentOS:

      To install Python 3, use the $ yum install -y python3 command.

      To verify the Python version installed, use the $ python3 -V or $ python3 --version commands.

    Python 2.7 is also supported.

Virtual IP Address Considerations

Access to the Paragon Automation cluster from outside the cluster is through virtual IP addresses (VIPs) that are managed by a load balancer. You require up to five VIPs for a cluster. The VIPs can be within the same broadcast domain as the cluster nodes or in a different broadcast domain.

You must identify the following VIPs before you install Paragon Automation.

  • In case of a multi-primary node setup, you need one VIP in the same broadcast domain as the cluster nodes. This IP address is used for communication between the primary and worker nodes. This IP address is referred to as the Kubernetes Master Virtual IP address in the installation wizard.

  • You also need a VIP for each of the following load-balanced services:

    • Ingress controller—Paragon Automation includes a common Web server that provides access to the installed applications. Access to the server is managed through the Kubernetes Ingress Controller. This VIP is used for Web access of the Paragon Automation GUI.

    • Paragon Insights services—This VIP is used for Paragon Insights services such as SNMP, syslog, and DHCP relay.

    • Paragon Pathfinder PCE server—Used to establish PCEP sessions with devices in the network.

  • SNMP trap receiver proxy (Optional)—You should configure a VIP for the SNMP trap receiver proxy only if this functionality is required.

Load Balancing Configuration

VIPs are managed in Layer 2 by default. When all cluster nodes are in the same broadcast domain, each VIP is assigned to one cluster node at a time. If the cluster nodes are in different broadcast domains, you must configure a load balancer in Layer 3 to load balance between the nodes.

You must configure a BGP router to advertise the VIP to the network. The BGP router should be configured to use ECMP to balance TCP/IP sessions between different hosts. Connect the BGP router directly to the cluster nodes.

To configure load balancing on the cluster nodes, edit the config.yml file. For example:
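The following snippet is an illustrative sketch only; it follows MetalLB's BGP configuration format, and the exact key names, AS numbers, and addresses are assumptions that you must adapt to the config.yml template shipped with the installer.

# Illustrative sketch only: key names, AS numbers, and addresses are placeholders.
metallb_config:
  peers:
    - peer-address: 192.x.x.1        # BGP router that peers with the cluster nodes
      peer-asn: 64500                # AS number of the BGP router (placeholder)
      my-asn: 64501                  # AS number used by the cluster nodes (placeholder)
  address-pools:
    - name: default
      protocol: bgp
      addresses:
        - 10.x.x.0/24                # VIP range advertised to the rest of the network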

In this example, the BGP router at 192.x.x.1 is responsible for advertising reachability for the VIPs in the 10.x.x.0/24 prefix to the rest of the network. The cluster allocates VIPs from this range and advertises each address from the cluster nodes that can handle it.

DNS Server Configuration (Optional)

You can access the main Web gateway either through the ingress controller VIP or through a hostname that is configured in DNS to resolve to the ingress controller VIP. You need to configure DNS only if you want to use a hostname to access the Web gateway.

Add the hostname to DNS as an A, AAAA, or CNAME record. For lab and POC setups, you can add the hostname to the /etc/hosts file on the cluster nodes.
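For example, a minimal /etc/hosts entry maps the ingress controller VIP to the hostname (both values are placeholders):

  <ingress-controller-VIP>   <hostname>    # the hostname resolves to the ingress controller VIP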