Installation Prerequisites on CentOS
To successfully install and deploy a Paragon Automation cluster, you must have a control host and install the distribution software on one or more cluster nodes. You download the distribution software on the control host, and then create and configure the installation files to run the installation from the control host. The control host must have internet access to download the packages. The cluster nodes must also have internet access to download any additional software, such as Docker and OS patches.
The order of installation tasks is shown at a high level in Figure 1.

Before you download and install the distribution software, you must preconfigure the control host and the cluster nodes as described in this topic.
Prepare the Control Host
The control host is a dedicated machine that is used to orchestrate the installation and upgrade of a Paragon Automation cluster. It carries out the Ansible operations that run the software installer and install the software on the cluster nodes, as illustrated in Figure 2.
You must download the installer packages on the Ansible control host. As part of the Paragon Automation installation process, the control host installs any additional packages required on the cluster nodes, including optional OS packages, Docker, and Elasticsearch. Hence, the control host requires internet access to download software. All microservices, including third-party microservices, are downloaded onto the control host; no public registries are accessed during installation.
The control host can be in a different broadcast domain from the cluster nodes, although we recommend that it be in the same broadcast domain. In either case, you must ensure that the control host can connect to all the cluster nodes over SSH.
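For example, one common way to set up this access, shown here as a sketch rather than a required procedure, is key-based SSH from the control host. The node addresses are placeholders taken from the examples later in this topic, and root access is assumed:

```sh
# Generate an SSH key pair on the control host (skip if one already exists)
# and copy it to each cluster node so that Ansible can log in without a password.
# The node addresses are placeholders; substitute your own.
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
for node in 10.1.x.3 10.1.x.4; do
  ssh-copy-id root@"$node"      # or a non-root user with sudo privileges
  ssh root@"$node" hostname     # verify that passwordless login works
done
```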

Once installation is complete, the control host plays no role in the functioning of the cluster. However, you need the control host to update the software or any component, make changes to the cluster, or reinstall it if a node fails. You can also use the control host to archive configuration files. We recommend that you keep the control host available, and do not repurpose it, after installation.
You must prepare the control host as described in this section before you begin the installation process.
Prepare Cluster Nodes
The primary and worker nodes are collectively called cluster nodes. Each cluster node must have at least one static, unique IP address, as illustrated in Cluster Nodes Functions. When configuring the hostnames, use only lowercase letters, and do not include any special characters other than "-" and ".". If the implementation has a separate IP network for communication between the Paragon Automation components, as described in the overview section, these IP addresses do not need to be reachable from outside the cluster. However, in this case the worker nodes need a second set of IP addresses that are reachable from outside the cluster, to allow communication between Paragon Automation and the managed devices, and between Paragon Automation and the network administrator.
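For illustration only, the following sketch shows how a CentOS node could be given a compliant hostname and a static address. The hostname, connection name, address, and gateway are placeholders, not values required by Paragon Automation:

```sh
# Set a hostname that uses only lowercase letters, "-", and "." (name is a placeholder).
hostnamectl set-hostname worker1.paragon.example

# Assign a static IPv4 address with NetworkManager (connection name, address,
# and gateway are placeholders; adjust to your environment).
nmcli connection modify eth0 ipv4.method manual \
  ipv4.addresses 10.1.x.3/24 ipv4.gateway 10.1.x.1
nmcli connection up eth0
```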
We recommend that all the nodes be in the same broadcast domain. If the cluster nodes are in different broadcast domains, see Load Balancing Configuration for the additional configuration that is required.

As described in Paragon Automation System Requirements, you can install Paragon Automation as a single node or a multinode deployment. The node installation prerequisites are the same for both multinode and single-node deployments, except for storage requirements.
Virtual IP Address Considerations
The Kubernetes worker nodes host the pods that handle the application workload.
A pod is the smallest deployable unit of computing that is created and managed in Kubernetes. A pod contains one or more containers, with shared storage and network resources, and specific instructions on how to run the applications. Containers are the lowest level of processing and are where applications or microservices are executed.
The primary node in the cluster determines which worker node hosts a particular pod and its containers.
All features of Paragon Automation are implemented using a combination of microservices. Some of these microservices need to be accessible from outside the cluster because they provide services to end users (managed devices) and administrators. For example, the pceserver service needs to be accessible to establish PCEP sessions between PE routers and Paragon Automation.
These services need to be exposed outside of the Kubernetes cluster with specific addresses that are reachable from the external devices. Because a service can be running on any of the worker nodes at a given time, the external addresses should be Virtual IP Addresses (VIPs), and not the address of any given worker node.

In this example:
- WORKER1_IP = 10.1.x.3 and WORKER2_IP = 10.1.x.4
- SERVICE IP = PCEP VIP = 10.1.x.200
- PCC_IP = 10.1.x.100
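As a hedged illustration with these example addresses, after installation you could confirm from a host outside the cluster that the PCEP VIP, rather than an individual worker address, accepts connections on the PCEP port (4189, listed later in this topic):

```sh
# Confirm that the PCEP VIP (not an individual worker address) accepts TCP
# connections on the PCEP port from a host outside the cluster.
# 10.1.x.200 is the example PCEP VIP; 4189 is the PCEP port.
nc -vz 10.1.x.200 4189
```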
The services in Paragon Automation are configured to employ one of two methods of exposing services outside the cluster:
- Load Balancer: Each load balancer is associated with a specific IP address and routes external traffic to a specific service in the cluster. This is the default method for many Kubernetes installations in the cloud. It supports multiple protocols and multiple ports per service. Each service has its own load balancer and IP address. Paragon Automation uses MetalLB.
- Ingress: Ingress acts as a proxy to bring traffic into the cluster, and then uses internal service routing to route the traffic to its destination. Under the hood, Ingress also uses a LoadBalancer service to expose itself to the world so that it can act as that proxy. Paragon Automation uses:
  - Ambassador
  - Nginx
  - HAProxy
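After installation, one hedged way to see which exposure method a given service uses is to list LoadBalancer services and Ingress resources with kubectl; the exact namespaces and service names depend on the release:

```sh
# Services exposed directly through MetalLB show type LoadBalancer and an
# EXTERNAL-IP column containing the assigned VIP.
kubectl get svc --all-namespaces | grep LoadBalancer

# Services reached through an ingress proxy appear as Ingress resources instead.
kubectl get ingress --all-namespaces
```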
The following services need to be accessible and thus require a VIP address.
| Required VIP Address | Description | Load Balancer/Proxy |
| --- | --- | --- |
| Ingress controller | Used for Web access of the Paragon Automation GUI. Paragon Automation provides a common Web server through which you access the components and applications. Access to the server is managed through the Kubernetes Ingress Controller. | Ambassador, MetalLB |
| Paragon Insights services | Used for Insights services such as syslog, DHCP relay, and JTI. | MetalLB |
| Paragon Pathfinder PCE server | Used to establish PCEP sessions with devices in the network. | MetalLB |
| SNMP trap receiver proxy (Optional) | Used for the SNMP trap receiver proxy, only if this functionality is required. | MetalLB |
| Virtual IP address for Infrastructure Nginx Ingress Controller | Used as a proxy for the Paragon Pathfinder netflowd server and, optionally, for the Paragon Pathfinder PCE server. The Nginx Ingress Controller needs a VIP within the MetalLB load balancer pool, so you must include this address in the LoadBalancer IP address ranges when you create the configuration file during installation. | Nginx, MetalLB |
Ports used by Ambassador:
- HTTP 80 (TCP), redirected to HTTPS
- HTTPS 443 (TCP)
- Paragon Planner 7000 (TCP)
- DCS/NETCONF initiated 7804 (TCP)
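As a hedged post-installation check, you can verify from outside the cluster that the ingress controller VIP answers on these ports; the VIP shown is a placeholder:

```sh
# 10.1.x.201 is a placeholder for the ingress controller VIP.
curl -skI http://10.1.x.201/  | head -n 1   # expect a redirect to HTTPS
curl -skI https://10.1.x.201/ | head -n 1   # expect the GUI to answer over HTTPS
```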

Ports used by Insights Services, PCE server, and SNMP (see the example after this list):
- Insights Services
  - JTI 4000 (UDP)
  - DHCP (ZTP) 67 (UDP)
  - SYSLOG 514 (UDP)
  - SNMP proxy 162 (UDP)
- PCE Server
  - PCEP 4189 (TCP)
- SNMP
  - SNMP Trap Receiver 162 (UDP)
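For example, a hedged way to exercise the syslog port on the Insights services VIP after installation; the VIP is a placeholder, and in normal operation the managed devices are the syslog senders:

```sh
# Send a test syslog message over UDP 514 to the Insights services VIP
# (10.1.x.202 is a placeholder for that VIP).
logger --server 10.1.x.202 --port 514 --udp "paragon insights syslog reachability test"
```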

Ports used by Nginx Controller:
- NetFlow 9000 (UDP)
- PCEP 4189 (TCP)
Using Nginx for PCEP: During the installation process, you are asked whether you want to enable an ingress proxy for PCEP:
- If you select "None" or "HAProxy" as the proxy for the PCE server, you must configure the VIP for the PCE server as described in the table.
- If you select "Nginx-Ingress" as the proxy for the PCE server, you do not need to configure the VIP for the PCE server described in the table. In this case, the Virtual IP address for Infrastructure Nginx Ingress Controller is used for both netflowd and the PCE server.
Note: The benefit of using Nginx is that a single IP address can be used for multiple services.

VIP for Multi-Primary Node Deployment
If you are deploying a multi-primary node setup, you need an additional VIP in the same broadcast domain as the cluster nodes. This address is used for communication between the elected primary node and the worker nodes.
In a single-primary setup, the workers communicate with the primary function using the IP address configured on the interface of the node acting as primary.
In a multi-primary setup, the workers communicate with the primary function using this VIP instead of the address assigned to any of the nodes acting as primary.
This IP address is referred to as the Kubernetes Master Virtual IP address in the installation wizard. This VIP must not be a part of the MetalLB load balancer pool of VIPs.
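As a hedged check after a multi-primary installation, you can confirm that the Kubernetes API answers on this VIP. The address is a placeholder, and port 6443 is the Kubernetes default rather than a value stated in this topic:

```sh
# Query the Kubernetes API server through the Kubernetes Master Virtual IP address.
# 10.1.x.10 is a placeholder for that VIP; 6443 is the default Kubernetes API port.
curl -k https://10.1.x.10:6443/version
```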

You must identify all the required VIPs before you start the Paragon Automation installation process. You are asked to enter these addresses as part of the installation process.
Load Balancing Configuration
VIPs are managed in Layer 2 by default. When all cluster nodes are in the same broadcast domain, each VIP is assigned to one cluster node at a time. Layer 2 mode provides failover of the VIP but does not provide actual load balancing. For true load balancing between the cluster nodes, or if the nodes are in different broadcast domains, you must configure load balancing in Layer 3.
You must configure a BGP router to advertise the VIP to the network. The BGP router should be configured to use ECMP to balance TCP/IP sessions between different hosts. Connect the BGP router directly to the cluster nodes.
To configure load balancing on the cluster nodes, edit the config.yml file. For example:
```yaml
metallb_config:
  peers:
    - peer-address: 192.x.x.1   # address of BGP router
      peer-asn: 64501           # autonomous system number of BGP router
      my-asn: 64500             # ASN of cluster
  address-pools:
    - name: default
      protocol: bgp
      addresses:
        - 10.x.x.0/24
```
In this example, the BGP router at 192.x.x.1 is responsible for advertising reachability for the VIPs in the 10.x.x.0/24 prefix to the rest of the network. The cluster allocates VIPs from this range and advertises each address from the cluster nodes that can handle it.
DNS Server Configuration (Optional)
You can access the main Web gateway either through the ingress controller VIP or through a hostname that is configured in the DNS that resolves to the ingress controller VIP. You need to configure DNS only if you want to use a hostname to access the Web gateway.
Add the hostname to the DNS as an A, AAAA, or CNAME record. For lab and POC setups, you can add the hostname to the /etc/hosts file on the cluster nodes.
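For example, a minimal lab-only sketch, assuming a hypothetical hostname paragon.example and an ingress controller VIP of 10.1.x.201:

```sh
# Lab/POC only: map the Web gateway hostname to the ingress controller VIP
# on each cluster node. Both values are placeholders.
echo "10.1.x.201  paragon.example" >> /etc/hosts
```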