
Solution Architecture

The solution architecture deploys a spine-leaf fronthaul access topology and midhaul/backhaul ring topologies, combined with the aggregation and core roles and the services gateway to form the complete xHaul infrastructure.

The architecture of this solution is fundamentally the same as the Low Latency QoS Design for 5G Solution architecture, with the addition of Paragon Automation and a DHCP server, which enhance the solution with advanced automation capabilities. The final goal of this JVD is to demonstrate how automation can enhance and optimize an existing network design. Therefore, the most important part of this JVD is not the network design but Paragon Automation.

Figure 1 of the network topology shows the location of the automation systems (Paragon Automation and the DHCP server). The automation systems are located at the service edge and are connected via an out-of-band (OOB) network to every aggregation and core node. There is no OOB connectivity to any of the access nodes; management and telemetry traffic for access nodes is carried entirely in-band.

Figure 1: OOB and IB Networks

In the above figure:

  • Paragon Automation insertion point is at the Service Edge location.
  • DHCP server insertion point is also at the Service Edge location.
  • Management traffic to Access Nodes (including DHCP) and Telemetry traffic is in-band.
  • Aggregation and Core nodes have out-of-band (OOB) access to the DHCP server and Paragon Automation.

Paragon Automation

Paragon Automation is a collection of microservices that interact with one another through APIs and run within containers in a Kubernetes cluster. A Kubernetes cluster is a set of nodes or machines running containerized applications. Each node is a single machine, either physical (bare-metal server) or virtual (VMs).

A Kubernetes cluster comprises one or more primary and worker nodes.

  • Control plane node (primary): The primary node performs the Kubernetes control plane functions.
  • Compute node (worker): The worker node provides resources to run the pods. Worker nodes do not perform control plane functions.

During installation, you specify the role of each node, and the installation playbooks install the corresponding components on each node accordingly.

Hardware Requirements

This section describes the minimum hardware resources that are required on each VM node in the Paragon Automation cluster. The compute, memory, and disk requirements of the cluster nodes can vary based on the intended capacity of the system. The intended capacity depends on the number of devices to be onboarded and monitored, types of sensors, and frequency of telemetry messages. If you increase the number of devices, you need higher CPU and memory capacities.

Note:

To get scale and size estimates for a production deployment and to discuss detailed dimensioning requirements, contact your Juniper representative. Juniper Paragon Automation Release 2.3.0 supports a maximum of 800 devices.

Table 1: Paragon Automation Hardware Requirements

Role                          Description                       Value
Primary/Worker cluster node   Number of vCPUs                   16
Primary/Worker cluster node   Memory (GB)                       32
Primary/Worker cluster node   Hard disk (GB) [SSD mandatory]    750 + 250

Software Requirements

Use VMware ESXi 8.0 to deploy Paragon Automation. To install Paragon Automation for this JVD, you must create 4x virtual machine (VM) instances using the OVA (or OVF and .vmdk) files downloaded from the Juniper Software Downloads page (Type Paragon Automation in the All Products field and select the Paragon Automation Installation OVA from the application package section). These files contain the base operating system and all necessary software packages for VM creation and deployment of your Paragon Automation cluster on an ESXi server.

Table 2: Paragon Automation Software Details

Software                  Description                     Release                              Caveats
Paragon Automation        Paragon Automation version      2.3.0                                Early Adopters release
Paragon cluster node OS   Cluster node operating system   Ubuntu 22.04 LTS (Jammy Jellyfish)   Long Term Support

Management Network Requirements

The management network requirements are as follows:

  • All 4x Paragon nodes must be able to communicate with each other through SSH.
  • All 4x Paragon nodes must be able to sync to an NTP server.
  • SSH is enabled automatically during the VM creation, and you are asked to enter the NTP server address during the cluster creation. Ensure no firewall blocks NTP or SSH traffic between the nodes in case they are on different servers.
  • A total of seven IP addresses is required for the installation of the Paragon Automation software suite (all in the same subnet):
    • 4x interface IP addresses.
    • 3x virtual IP addresses for:
      • Generic IP address shared between gNMI, OC-TERM (SSH connections from devices), and the Web GUI. This is a general-purpose VIP address that is shared between multiple services and used to access Paragon Automation from outside the cluster.
      • Paragon Active Assurance Test Agent Gateway (TAGW). This VIP address serves HTTP based traffic to the Paragon Active Assurance Test Agent endpoint. However, it is currently out of the scope of this JVD.
      • PCE server. This VIP address is used to establish Path Computation Element Protocol (PCEP) sessions between Paragon Automation and the devices. The PCE server VIP configuration is necessary to view live topology updates in your network. However, it is also out of the scope of this JVD.

The VIP addresses are added to the outbound SSH configuration that a device requires to establish a connection with Paragon Automation. The outbound SSH commands for OC-TERM and gNMI both use VIP addresses.
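For illustration, such an outbound SSH stanza might look like the following sketch. The client name, device ID, secret, VIP address, and port below are placeholders; the real values are generated by Paragon Automation when the device is onboarded, and a similar client stanza pointing at the same VIP is used for gNMI.

    set system services outbound-ssh client oc-term device-id <org-id>.<site-id>.<serial>
    set system services outbound-ssh client oc-term secret "<adoption-secret>"
    set system services outbound-ssh client oc-term services netconf
    set system services outbound-ssh client oc-term 192.0.2.10 port 2200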

Hostnames mapped to the VIP addresses can enable your devices to connect to Paragon Automation using hostnames. However, you must ensure that the host names and the VIP addresses are correctly mapped in the DNS and your device is able to connect to the DNS server. If you configure Paragon Automation to use hostnames, the hostnames take precedence over VIP addresses and are added to the outbound SSH configuration used during onboarding devices.

Note:

The Paragon Automation cluster can be configured using IPv6 addresses in addition to the existing IPv4 addresses. With IPv6 addressing, you can use IPv6 addresses for OC-TERM, gNMI, the Active Assurance TAGW, and access to the Web GUI. For more information, see the Paragon Automation v2.3.0 Installation Guide.

In addition to the listed IP addresses and hostnames, you need the following information at the time of installation:

  • Primary and secondary DNS server addresses for IPv4 (and IPv6 if needed).
  • NTP server information.

For more information about the intra-cluster communication between nodes and communications from outside the cluster (ports that firewalls must allow), see the Paragon Automation v2.3.0 Installation Guide.

System Requirements

Once the Paragon Automation cluster has been successfully created and deployed, further configuration is necessary to fully integrate and optimize the system. The following steps are essential for completing the setup process:

  1. Create an Organization in the Paragon cluster. An organization in Paragon Automation represents a customer.
  2. Create different Sites within the Organization. A Site is the location of a device in an organization. While a site can have more than one device, a device can be associated with only one site.
  3. Fetch onboarding details such as the Organization ID (orgID) and Site ID (siteID) for each access node, to be used later during the ZTP process.

Use Cases

Paragon Automation focuses on targeted use cases and specifications gained from years of network operations experience. As described earlier, the product is offered as license-based, predefined use cases.

In this JVD, Paragon Automation focuses on the following use cases:

  • Onboarding
  • Trust and Compliance
  • Device Lifecycle Management
  • Observability

Figure 2: Paragon Automation Base

This includes:

  • Automated onboarding process using ZTP.
  • Device health visibility.
  • Configuration, inventory, and software management.

By implementing an automated onboarding process, new devices can be seamlessly integrated with an existing or greenfield network and onboarded into the management system, while minimizing manual configuration effort and reducing the risk of human error, which causes delays and increases operational costs. This automated process guarantees that:

  • Correct configuration is deployed.
  • There is configuration consistency across devices acting in the same role.
  • Devices are running the correct software images.
  • There is version consistency across multiple devices.

As a result, not only is the deployment of new devices improved, but troubleshooting complexities are also reduced, and any necessary changes are completed faster and consistently across one or more devices.

Zero Touch Provisioning (ZTP)

Zero Touch Provisioning technology automates the setup and configuration of new devices remotely, eliminating the need for manual intervention. This streamlined process allows consistent, simultaneous, and automated configuration of network devices. Devices can be shipped factory-fresh to remote sites, and local operators can connect them to the network without installing any image or configuring them.

ZTP in this JVD automates the onboarding process for access nodes in the metro network, accelerating the deployment of network infrastructure closer to customer sites. In conjunction with Paragon Automation, engineers can create standardized template configurations and deploy them to access devices using ZTP. This eliminates manual intervention, ensuring consistent configurations and minimizing human error. By streamlining the provisioning process, ZTP significantly reduces the time required to bring up network services, enabling faster service delivery to customers.

Pre-requisites

The pre-requisites to validate this automation JVD are shown in Figure 3, a flowchart depicting the different steps involved in designing the topology for enabling ZTP provisioning:

Figure 3: Flowchart of ZTP Pre-requisites

In the previous figure, onboarding details such as the Organization ID (orgID) and Site ID (siteID) are fetched for each access node to be provisioned through ZTP. These onboarding details are used to generate device-specific configuration templates.

Onboarding Configuration Template (DHCP Server)

DHCP is a network management protocol used to automate the process of configuring devices on IP networks. It enables devices to be automatically assigned IP addresses and other network settings, thereby greatly simplifying the management of networks. DHCP is essential in environments where devices frequently connect and disconnect from the network, such as in offices, homes, and public spaces.

When a device connects to a network, it sends out a DHCP Discover message to find available DHCP servers. The DHCP server responds with an offer, providing an IP address and other network settings such as subnet mask, default gateway, and DNS servers. The device then requests the offered configuration, and the DHCP server acknowledges the request, finalizing the configuration process. This seamless interaction ensures that devices can quickly and efficiently join the network without manual intervention.

The DHCP server used in this JVD is an Ubuntu server running Ubuntu 22.04.4 LTS connected to the OOB network with the following packages installed:

  • vsftpd
  • isc-dhcp-server

Both packages must be present on the server.
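A quick way to confirm this (a sketch; exact package versions vary) is to query the package manager and the service status:

    dpkg -l | grep -E 'isc-dhcp-server|vsftpd'
    systemctl status isc-dhcp-server vsftpd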

DHCP Server Configuration

Following is a sample configuration file that sets up a DHCP server to provide network parameters and configuration instructions to devices on the network:
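A minimal sketch of the general settings portion of /etc/dhcp/dhcpd.conf, as described in the next section, is shown below; the vendor string value here is an assumption:

    # /etc/dhcp/dhcpd.conf -- general server settings (sketch)
    option vendor-class-identifier "Juniper";   # vendor-specific identification string (assumed value)
    max-lease-time 7200;                        # clients can lease addresses for at most 7200 seconds (2 hours)
    log-facility local7;                        # send log messages to the local7 syslog facility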

General DHCP Server Settings

The following are the general DHCP server settings:

  • Vendor-Specific Information: The server identifies itself with a specific vendor string.
  • Lease Time: Clients can lease IP addresses for a maximum of 7200 seconds (2 hours).
  • Logging: Log messages are sent to the local system's log facility (local7).

DHCP Relay Agent

The following are the steps to configure the AG nodes (for example, ACX7509) as a DHCP relay agent:

  1. Ensure proper routing is established between the AG node (for example, ACX7509) and the DHCP server.
  2. Configure the DHCP server (file: /etc/dhcp/dhcpd.conf) to:
    1. Process DHCP requests from ZTP clients.
    2. Relay DHCP requests from pre-aggregation nodes (ACX7509).
    3. Assign IP addresses to ZTP clients.
    4. Provide necessary configuration options (for example, FTP path, filename) through DHCP offers.
  3. Create and store the necessary template configurations for the onboarding JVD in the /tftpboot/ directory, which is the default FTP path.
  4. Ensure the ZTP client is plugged into the same relay interface for which the server configuration was built, to ensure smoother interaction with ZTP.

DHCP Relay Agent Configuration

Use the following sample configuration to set up a network device (AG1.x nodes) as a DHCP relay agent, forwarding DHCP requests from client devices to a specific DHCP server.
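The following is a minimal sketch of such a relay setup; the server-group name, DHCP server address, and client-facing interface are assumptions for illustration:

    set forwarding-options dhcp-relay server-group ZTP-DHCP 8.1.1.6
    set forwarding-options dhcp-relay active-server-group ZTP-DHCP
    set forwarding-options dhcp-relay forward-only
    set forwarding-options dhcp-relay group ZTP-CLIENTS interface et-0/0/10.0

When the relay runs inside a management VRF, as described later in this document, the same statements are placed under the corresponding routing-instances hierarchy.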

ZTP Client Configuration for Access Node

In this implementation of the Zero Touch Provisioning (ZTP) process, the client initiating the request requires two essential configuration files:

  • Onboarding Configuration: Defined under prerequisites in the system configuration as the minimum required configuration for a Juniper device, usually referred to as the device baseline configuration (root password and the configuration required to establish an SSH outbound connection to Paragon Automation).
  • Client-Specific Configuration: Relevant to the specific client. For example, the access node configuration as outlined in Low Latency QoS Design for 5G Solution.

The following two methods can be employed to deliver these configurations:

  • Merged Configuration:
    • Combine both configurations into a single "device.conf" file.
    • Store this file on the DHCP server.
    • Specify the "device.conf" filename within the server's dhcpd.conf file.
  • Python Automation (current implementation):
    • Store the base configuration and JVD configuration for each access node in the /tftpboot/ directory on the DHCP server.
    • When a ZTP request is made, the DHCP options trigger the transfer of a Python script to the access node.
    • This script executes on-box automation that initiates FTP transfers for both the base and JVD configuration files and loads them on the device. Because the onboarding configuration is part of the base configuration stored on the server, the device is also onboarded. A sketch of such a script is shown after this list.
    • This approach enables the re-use of base and JVD configuration templates as needed.
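The following is a minimal sketch of such an on-box script, assuming Junos PyEZ is available on the access node; the FTP server address and file names are placeholders, and the script actually used in the JVD may differ:

    # ztp-an.py -- hedged sketch of the on-box ZTP helper (names and addresses assumed)
    from ftplib import FTP

    from jnpr.junos import Device
    from jnpr.junos.utils.config import Config

    FTP_SERVER = "8.1.1.6"                       # assumed in-band address of the DHCP/FTP server
    FILES = ["an1-base.conf", "an1-jvd.conf"]    # assumed base + JVD templates for this node

    def fetch(name):
        """Download one configuration file from the FTP server into /var/tmp."""
        local = "/var/tmp/" + name
        with FTP(FTP_SERVER) as ftp:
            ftp.login()                          # anonymous login assumed
            with open(local, "wb") as fh:
                ftp.retrbinary("RETR " + name, fh.write)
        return local

    def main():
        paths = [fetch(name) for name in FILES]
        # Load and commit both files; the base file carries the Paragon onboarding
        # (outbound SSH) stanzas, so committing it starts the adoption process.
        with Device() as dev:                    # on-box connection, no credentials needed
            cu = Config(dev)
            for path in paths:
                cu.load(path=path, format="text", merge=True)
            cu.commit()

    if __name__ == "__main__":
        main()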

ZTP Specific Options

Alongside basic IP address allocation, DHCP includes various options that provide additional information, ensuring seamless network operation. The following ZTP options are used; a sketch of how they might be expressed in dhcpd.conf follows the list:

  • Option 150: Specifies the IP address of the TFTP server, where configuration files are stored.
  • Options 66 and 67: These options are commonly used for vendor-specific configuration information, but their specific usage depends on the vendor's implementation.
  • Option 143: This option indicates that the following options are specific to ZTP.
    • Options 0-7: These options define configuration file details, such as its name, type, and transfer method (FTP).
    • Option 6: This option specifies the subscriber ID, which can be used for device identification and policy enforcement.
    • Option 43: This option encapsulates the ZTP-specific options, ensuring they are delivered to the client.
    • Network Subnet (Subnet 20.6.0.60): This subnet is configured for a specific device or group of devices, namely the ZTP Client. It assigns IP addresses from a specific range (20.6.0.61) and specifies the TFTP server address and configuration file name.
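The following dhcpd.conf fragment sketches how these options might be declared; the option-space name, subnet mask, server address, and file name are assumptions:

    # ZTP option declarations (sketch)
    option space ztp-ops;
    option ztp-ops.config-file-name code 1 = text;                  # configuration (or script) file name
    option ztp-ops.transfer-mode code 3 = text;                     # transfer method, for example "ftp"
    option ztp-ops-encapsulation code 43 = encapsulate ztp-ops;     # delivers the sub-options to the client
    option option-150 code 150 = ip-address;                        # TFTP/FTP server address (option 150)

    # Subnet serving the ZTP client
    subnet 20.6.0.60 netmask 255.255.255.252 {
        range 20.6.0.61 20.6.0.61;                                  # address handed to the ZTP client
        option tftp-server-name "8.1.1.6";                          # option 66: file server (assumed address)
        option option-150 8.1.1.6;
        option ztp-ops.config-file-name "ztp-an.py";                # assumed file stored under /tftpboot/
        option ztp-ops.transfer-mode "ftp";
    }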

ZTP Process

Following is the ZTP process that operates after the devices are set to factory-default settings:

  1. The ZTP client (access node) boots up and sends a DHCP Discover message through its connected interfaces.
  2. The pre-aggregation node (ACX7509) receives one of the DHCP Discover messages from the ZTP client on one of its connected interfaces. It then forwards this message to the DHCP server, acting as a relay agent. This allows the ZTP client to communicate with the DHCP server even if they are not on the same network segment.
    Note:

    Communication between the client and the relay (and vice versa) follows this pattern throughout the ZTP lifecycle.

  3. The DHCP server processes the request and sends a DHCP offer message to the ZTP client through the relay. The offer includes:
    1. IP address.
    2. Subnet mask.
    3. Default gateway.
    4. DNS server address.
    5. FTP server address.
    6. Filename of the configuration file.
  4. The ZTP client accepts the offer and sends a DHCP request for the offered details, which is relayed to the server.
  5. The DHCP server acknowledges the request, assigns the IP address to the client, and updates the lease table.
  6. The ZTP client retrieves the configuration file from the FTP server using the provided information.

Figure 4 describes the ZTP process:

Figure 4: DHCP Flowchart

In the above figure:

  1. After downloading the configuration, the ZTP client (the access node) installs the configuration as retrieved from the server. This configuration contains the base configuration plus the onboarding configuration that the Paragon cluster needs to adopt the device (SSH outbound connection), including the OrgId and SiteId to which the device is assigned. Therefore, once the configuration is committed, the device adoption process begins, and within a few minutes the device is adopted successfully in Paragon.
    Figure 5: Paragon Inventory with Device Successfully Onboarded
  2. During the adoption process, a Configuration Template (CT) is associated with the device profile being onboarded. A network architect creates a device profile based on the role of the device in the network (an access node in this case; only access nodes are onboarded as part of this solution). Because device profiles can also be used to modify configurations after a device is onboarded, the solution takes advantage of this approach to configure the final service of the access nodes. As the full-service configuration is not created from Paragon Automation, this approach uses automation to provision the whole service configuration required by the access node through another configuration file pulled by the device at ZTP time. This file contains all the service configurations (for example, /var/tmp/jvd.configs).
Figure 6: Device Profile with a Configuration Template Associated
Note:

Figure 7 shows a process to create a configuration template to load a configuration file including the full-service configuration from LLQ JVD.

Figure 7: Configuration Template to Load LLQ Full-Service Configuration
Note:

Once the configuration template has been created, it can be seen in the configuration template inventory. In this case, the configuration template is a workaround to provision the service configuration of the AN node stored in the /var/tmp/jvd.configs file downloaded by the Python script during ZTP.

Figure 8: List of Configuration Templates (Including Family and Type)

Networking Proposal

The proposed configuration establishes a dedicated Virtual Routing and Forwarding (VRF) instance named L3VPN_MGD for telemetry data collection and device management from the Paragon Automation servers. The VRF is specifically designed to facilitate data collection using gRPC communication, outbound SSH, static routing, and interface association.

Note:

The following is the configuration related to the L3VPN_MGD instance from one of the AG nodes:

AG nodes L3VPN_MGD Configuration

The AG device (an aggregation node), referred to in the previous section, functions as a DHCP relay. It can operate within the default routing instance (in forward-only mode) or within a VRF that encapsulates all network traffic, including ZTP/DHCP requests. The described configuration uses a VRF (MGMT) for both DHCP relaying and forwarding management traffic to the SAG device.
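A minimal sketch of that instance on the AG node is shown below, using the L3VPN_MGD name from this section; the client-facing interface, DHCP server address, route distinguisher, and route target values are assumptions:

    set routing-instances L3VPN_MGD instance-type vrf
    set routing-instances L3VPN_MGD interface et-0/0/10.0
    set routing-instances L3VPN_MGD route-distinguisher 65000:101
    set routing-instances L3VPN_MGD vrf-target target:65000:100
    set routing-instances L3VPN_MGD forwarding-options dhcp-relay server-group ZTP-DHCP 8.1.1.6
    set routing-instances L3VPN_MGD forwarding-options dhcp-relay active-server-group ZTP-DHCP
    set routing-instances L3VPN_MGD forwarding-options dhcp-relay group ZTP-CLIENTS interface et-0/0/10.0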

Some DUTs in the JVD establish management sessions (SSH, telemetry) via the out-of-band management network. The assumption is that the access nodes (AN) cannot reach the out-of-band network via a physical connection. Because of this, a separate in-band management network is configured via the above-mentioned VRF instance (L3VPN_MGD). On the AN device, separate loopback interfaces are configured with new units (other than the default unit 0 in the main routing instance) and placed in the management VRF. This VRF exists on all routers in the topology, particularly on the SED (Service Edge Device) that is connected to the Paragon Automation cluster of servers. Routing information for the new loopback interfaces is propagated to the SED device via the VRF signaling mechanism. In addition, static routing configuration is applied to the Paragon cluster virtual machines so that IP routing connectivity can be achieved between the Paragon Automation cluster and the new loopback address of the AN node. Because of this design, management network resiliency is achieved: any link failure from the AN to the AGN node can be mitigated by an alternate underlay path (except in the case of total node isolation).

This configuration creates a VRF instance named VR-ONBOARD, which is a virtual router that maintains its own routing tables and forwarding tables, separate from the global routing table, on the AN router. The loopback interface, lo0 unit 1, is associated with the VRF instance to allow it to participate in the VRF routing and forwarding decisions. The loopback interface is configured with an IP address 11.11.11.11/32 to be used for identification. The route distinguisher and VRF target are set for the VRF instance to control the import and export of routes. This IP address can be used as the source or destination address for routing and streaming network data on the AN router.

After the ZTP process is finished, the management VRF is provisioned on the AN device and Paragon Automation onboarding starts.

Service Edge Configuration for Forwarding the Telemetry Data to Paragon Automation

The service edge configuration for forwarding the telemetry data to Paragon Automation includes the following elements:

  • The gRPC extension service is set to operate within the L3VPN_MGD VRF.
  • Outbound SSH traffic is set to use the L3VPN_MGD VRF.
  • L3VPN_MGD is defined as a VRF instance, enabling traffic isolation and improved security.
  • A static route within the L3VPN_MGD VRF points to the telemetry collector, ensuring that telemetry traffic is correctly routed. In this case, the telemetry collector is the ingress VIP of the Paragon cluster.
  • The interface xe-0/1/5:0.0 is associated with the L3VPN_MGD VRF and is used for traffic related to telemetry collection.
  • A route distinguisher (RD) is assigned to the L3VPN_MGD VRF. The RD is a unique identifier used in VPN environments to distinguish routes belonging to different VRFs.
  • A route target (RT) is configured for the L3VPN_MGD VRF. The RT controls route import and export between different VRFs.
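A consolidated sketch of these elements is shown below; the collector VIP, next hop, RD, and RT values are assumptions, and the exact placement of the gRPC and outbound SSH routing-instance statements can vary by Junos release:

    set system services extension-service request-response grpc routing-instance L3VPN_MGD
    set system services outbound-ssh client paragon routing-instance L3VPN_MGD
    set routing-instances L3VPN_MGD instance-type vrf
    set routing-instances L3VPN_MGD interface xe-0/1/5:0.0
    set routing-instances L3VPN_MGD route-distinguisher 65000:102
    set routing-instances L3VPN_MGD vrf-target target:65000:100
    set routing-instances L3VPN_MGD routing-options static route 8.1.1.10/32 next-hop 8.1.1.1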

Configuration on Access Node

Because the access nodes are not part of the OOB network with respect to the Paragon cluster, they cannot be accessed directly; instead, use the loopback interface as a stable endpoint for management traffic and routing protocols. By using the loopback interface, the access node remains reachable even if individual physical or management interfaces are down.
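A minimal sketch of this configuration is shown below; the route distinguisher and route target values are assumptions, while the instance name, loopback unit, and address come from the description that follows:

    set interfaces lo0 unit 2 family inet address 11.11.11.11/32
    set routing-instances VR-ONBOARD instance-type vrf
    set routing-instances VR-ONBOARD interface lo0.2
    set routing-instances VR-ONBOARD route-distinguisher 65000:211
    set routing-instances VR-ONBOARD vrf-target target:65000:100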

The above sample configuration creates a VRF instance named VR-ONBOARD, which is a virtual router that maintains its own routing and forwarding tables, separate from the global routing table, on the ACX7100 router. The loopback interface, lo0 unit 2, is associated with the VRF instance to allow it to participate in the VRF routing and forwarding decisions. The loopback interface is configured with an IP address of 11.11.11.11/32 to be used for identification. The route distinguisher and VRF target are set for the VRF instance to control the import and export of routes. This IP address can be used as the source or destination address for routing and streaming network data on the ACX7100 router.

Configuration on SAG Node

The following sample configuration is required on the SAG node connecting Paragon and DHCP server with the L3VPN_MGD routing instance running from AG nodes to SAG.
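A sketch of what this might look like on the SAG node follows; the interface name, RD, and RT values are assumptions, while the 8.1.1.1 gateway address matches the cluster addressing described in the next section:

    set interfaces xe-0/0/1 unit 0 family inet address 8.1.1.1/24
    set routing-instances L3VPN_MGD instance-type vrf
    set routing-instances L3VPN_MGD interface xe-0/0/1.0
    set routing-instances L3VPN_MGD route-distinguisher 65000:103
    set routing-instances L3VPN_MGD vrf-target target:65000:100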

Configuration on Paragon Cluster Nodes

Following is the sample network configuration on each Paragon cluster node (8.1.1.x), where x = 1 is the configured gateway and x = 2, 3, 4, and 5 are the cluster node IPs. Also, 4.1.x is AN1, 5.1.x is AN3, and 6.1.x is AN4, and 11.11.11.11, 33.33.33.33, and 44.44.44.44 are the loopback IPs for the AN1, AN3, and AN4 nodes.
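A sketch of such a node configuration, using netplan on Ubuntu 22.04, is shown below; the interface name and prefix length are assumptions, while the node address, gateway, and loopback routes follow the description above:

    # /etc/netplan/01-paragon.yaml (sketch; apply with "sudo netplan apply")
    network:
      version: 2
      ethernets:
        ens192:                       # assumed interface name
          addresses:
            - 8.1.1.2/24              # cluster node IP (.3, .4, and .5 on the other nodes)
          routes:
            - to: default
              via: 8.1.1.1            # configured gateway
            - to: 11.11.11.11/32      # AN1 management loopback
              via: 8.1.1.1
            - to: 33.33.33.33/32      # AN3 management loopback
              via: 8.1.1.1
            - to: 44.44.44.44/32      # AN4 management loopback
              via: 8.1.1.1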