Prepare the Cluster Nodes

This topic describes the steps you must perform to prepare the Paragon Automation cluster nodes for deployment.

To prepare your cluster nodes for deployment on ESXi 8.0, KVM, and Proxmox VE hypervisors, perform the following steps:

  1. Download the OVA File.

  2. Create the Node VMs.

    The creation method differs based on the bare-metal hypervisor on which you want to deploy the cluster.

Download the OVA File

Download and verify the integrity of the OVA file.

  1. Download the Paragon Automation Installation OVA file from the Juniper Paragon Automation software download site. The OVA is used to create the node VMs and deploy your cluster.

    Note that the actual filename includes the release date, for example, paragon-2.4.0-builddate.ova.

    The file is large, and downloading it and then creating the VMs from your computer might take considerable time. We recommend that you create a local installer VM, which can be a basic Ubuntu desktop VM, either on the same server where you want to install Paragon Automation or on a different server. You must be able to download the OVA file to this local installer VM, and the VM must have enough space to store the file. Configure connectivity to the management IP addresses of the servers as shown in Figure 1.

    Figure 1: Local Installer VM to download the OVA/OVF files

    Alternatively, you can use the wget "http://cdn.juniper.net/software/file-download-url/paragon-2.4.0-builddate.ova" command to download the OVA directly onto your hypervisor.

  2. (Optional) Validate the integrity of the OVA file. If you are using an Ubuntu desktop, use the following command:
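
      sha512sum paragon-2.4.0-builddate.ova    # substitute the actual name of the file that you downloaded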

    Verify that the number displayed onscreen is the same as the SHA512 checksum number available on the Juniper Paragon Automation software download site. Click Checksums to view the valid SHA512 checksum.

  3. On ESXi 8.0

    If you are using ESXi 8.0, you can use the OVA directly to create the VMs.

    You can also extract and use the OVF and .vmdk files from the OVA to create your VMs. To extract the files, use the following command:
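
      tar -xvf paragon-2.4.0-builddate.ova    # substitute the actual name of the file that you downloaded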

    If your installation desktop is running Windows, you can download and use the tar utility from https://gnuwin32.sourceforge.net/packages/gtar.htm to extract the files.

    Note:

    If you are using a stand-alone ESXi 8.0 server without vCenter, due to a limitation of the VMware host client, you cannot upload large OVA files to the client. In such cases, you must extract and use the OVF, .vmdk, and .nvram files to create your VMs.

    On KVM and Proxmox VE

    If you are using KVM, you must extract the .vmdk files from the OVA using the tar -xvf paragon-2.4.0-builddate.ova command.

    The rest of this document assumes that the OVA is downloaded to a single KVM server. If you have multiple servers, download the files onto all the servers. The steps described in this document are general guidelines to create the VMs using the CLI method of deployment. You can also use GUI-based deployment and alter the steps to match your hypervisor requirements. Network configuration of the hypervisor is outside the scope of this document.

Now you can create the node VMs.

Create the Node VMs

After verifying the integrity of the OVA file, create the node VMs. Use one of the following methods to create the node VMs based on the hypervisor on which you are deploying the cluster.

On ESXi 8.0

Perform the following steps on ESXi 8.0 hypervisors.

  1. From your Web browser, connect and log in to the VMware ESXi 8.0 server where you will install Paragon Automation.

    If you are using a local installer VM, use the browser in the VM to connect to the VMware ESXi server.

  2. Create the node VMs.

    Perform the following steps to create the VMs.

    1. Right-click the Host icon and select Create/Register VM.

      The New virtual machine wizard appears.

    2. On the Select creation type page, select Deploy a virtual machine from an OVF or OVA file.

      Click Next.

    3. On the Select OVF and VMDK files page, enter a name for the node VM.

      Click to upload or drag and drop the OVA file (or the OVF and .vmdk files).

      Review the list of files to be uploaded and click Next.

    4. On the Select storage page, select a datastore that can accommodate 300 GB of SSD storage for the node VM. Note that SSD storage is mandatory.

      Click Next. The extraction of files takes a few minutes.

    5. On the Deployment options page:

      • Select the virtual network to which the node VM will be connected.

      • Select the Thick disk provisioning option.

      • Enable the VM to power on automatically.

      Click Next.

    6. On the Ready to complete page, review the VM settings.

      Click Finish to create the node VM.

      Note:

      If you used the OVF and .vmdk files to create your VMs and the VM creation failed, retry creating the VMs with the .nvram file. In step 2.c, upload the .nvram file along with the OVF and .vmdk files. For standalone ESXi 8.0 servers without vCenter, you must upload the .nvram file as well.

    7. To power on the VM, right-click the newly created VM on the Inventory page, and click Power > Power on.

    8. Repeat steps 2.a through 2.g for the other three node VMs.

      Alternatively, if you are using VMware vCenter, you can right-click the VM, and click the Clone > Clone to Virtual Machine option to clone the newly created VM. Clone the VM three times to create the remaining node VMs.

      Enter appropriate VM names when prompted.

    9. (Optional) Verify the progress of the VM creation in the Recent tasks section at the bottom of the page. When a VM is created, it appears in the VMware Host Client inventory under Virtual Machines.

    10. When all the VMs have been created, verify that the VMs have the correct specifications and are powered on.

You have completed the node preparation steps and created all the VMs. You are ready to configure and deploy the cluster. Go to Deploy the Cluster Nodes.

On KVM

Perform the following steps on KVM hypervisors with RHEL 8.1.0 host OS.

In this example, we are deploying the cluster on a single hypervisor server with the following location and naming parameters:

  • Create two VMs in each of two data locations (SSDs) on the same hypervisor:

    • /data1/paragon1/ for VM1 and /data1/paragon2/ for VM2

    • /data2/paragon3/ for VM3 and /data2/paragon4/ for VM4

    Here, VM1, VM2, VM3, and VM4 are the four cluster node VMs. We recommend using two SSDs for the node VMs to optimize the IOPS rate.

    Note: While this example showcases VMs on a single hypervisor server, for server and node high availability, you must create one VM per hypervisor server.

  • VM1 is named paragon1, VM2 is named paragon2, VM3 is named paragon3, and VM4 is named paragon4.

  • The two disk images for each VM are named paragon-disk1.img and paragon-disk2.img. Both disk images for each VM are located in the corresponding paragonx directory, where x is the VM number (1 through 4).

  • For all VMs, the CPU = 16, RAM = 32-GB, and Mode = host cpu. These are the bare minimum hardware resources that are required on each node VM.

  • VMs are attached to the br-ex bridged network.

Perform the following steps to create the node VMs.

  1. Log in to the KVM hypervisor CLI.

  2. Ensure that you have the required libvirt, libvirt-daemon-kvm, qemu-kvm, and bridge-utils packages installed.

    Use the dnf update and dnf install libvirt libvirt-daemon-kvm qemu-kvm bridge-utils commands to install the packages.

  3. Convert the .vmdk files to the raw format.
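
    A sketch using qemu-img; substitute the actual .vmdk filenames extracted from the OVA:

      qemu-img convert -f vmdk -O raw paragon-2.4.0-builddate-disk1.vmdk paragon-disk1.img    # main disk
      qemu-img convert -f vmdk -O raw paragon-2.4.0-builddate-disk2.vmdk paragon-disk2.img    # Ceph disk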

    Here, paragon-disk1.img is the main disk and paragon-disk2.img is the Ceph disk.

  4. Adjust and expand the disk size. In this example, we have used the bare minimum hardware resources that are required on each node VM.
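
    For example, with qemu-img resize; the 300G value is only an illustrative placeholder, not a documented requirement, so use the disk sizes that your deployment requires:

      qemu-img resize paragon-disk1.img 300G    # example size only; replace with your required main disk size
      qemu-img resize paragon-disk2.img 300G    # example size only; replace with your required Ceph disk size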

  5. Create the folders where you want the VMs to be located.

    • /data1/paragon1/ for VM1

    • /data1/paragon2/ for VM2

    • /data2/paragon3/ for VM3

    • /data2/paragon4/ for VM4
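
    For example:

      mkdir -p /data1/paragon1 /data1/paragon2 /data2/paragon3 /data2/paragon4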

  6. Copy both the paragon-disk1.img and paragon-disk2.img raw disk images to the location of each VM using the cp command. For example, root@kvm:~/paragon# cp paragon-disk1.img /data1/paragon1/.

    Once copied, the folder and file structure will be similar to:
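
      /data1/paragon1/paragon-disk1.img
      /data1/paragon1/paragon-disk2.img
      /data1/paragon2/paragon-disk1.img
      /data1/paragon2/paragon-disk2.img
      /data2/paragon3/paragon-disk1.img
      /data2/paragon3/paragon-disk2.img
      /data2/paragon4/paragon-disk1.img
      /data2/paragon4/paragon-disk2.img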

  7. Generate and customize an XML file for configuring each VM.

    Configure the machine type as q35 and emulator binary as /usr/libexec/qemu-kvm.

    A sample XML file for RHEL 8.1.0 is described here:
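
    The following is a minimal sketch of such a domain definition, assembled from the parameters listed below; it is not the exact file shipped with the product, so adapt the names, paths, and VNC listen address to your environment:

    <domain type='kvm'>
      <name>paragon1</name>
      <!-- 32-GB RAM and 16 vCPUs, the bare minimum resources per node VM -->
      <memory unit='GiB'>32</memory>
      <vcpu placement='static'>16</vcpu>
      <os>
        <type arch='x86_64' machine='q35'>hvm</type>
        <boot dev='hd'/>
      </os>
      <!-- host cpu mode -->
      <cpu mode='host-passthrough'/>
      <devices>
        <emulator>/usr/libexec/qemu-kvm</emulator>
        <!-- main disk -->
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/data1/paragon1/paragon-disk1.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <!-- Ceph disk -->
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/data1/paragon1/paragon-disk2.img'/>
          <target dev='vdb' bus='virtio'/>
        </disk>
        <!-- attach the VM to the br-ex bridge -->
        <interface type='bridge'>
          <source bridge='br-ex'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <!-- VNC console on port 5911; set autoport='yes' to assign ports dynamically -->
        <graphics type='vnc' port='5911' autoport='no' listen='0.0.0.0'/>
      </devices>
    </domain>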

    Save this file as /root/paragon/paragon1.xml. In this example:

    • VM1 is named paragon1.

    • VM1 CPU = 16, VM1 RAM = 32-GB, VM1 Mode = host cpu

    • VM1 has two disk images: /data1/paragon1/paragon-disk1.img and /data1/paragon1/paragon-disk2.img.

    • The VM is attached to the bridged network named br-ex.

    • The VNC port for the graphical console listens on port 5911. If you want to dynamically assign ports, set autoport='yes'.

  8. Define the VM using the XML file.
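
    For example, using the file saved in step 7:

      virsh define /root/paragon/paragon1.xml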

  9. Verify that the VM is registered.
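
    For example:

      virsh list --all    # paragon1 should appear in the list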

  10. Set the VM to autostart if the KVM is rebooted.
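
    For example:

      virsh autostart paragon1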

  11. Power on the VM.
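
    For example:

      virsh start paragon1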

  12. Connect to the VM console in one of the following ways:

    • Using the serial console.

      Connect to the VM through its serial console (see the example after this list).

    • Using a VNC client.

      Use any VNC-compatible client and connect to kvm-ip::5911.

      Ensure that the firewall allows port 5911 for communication between your computer and the VM.
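
    For the serial console option, the connection from the hypervisor CLI might look like this:

      virsh console paragon1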

  13. Repeat steps 7 through 12 for the remaining three VMs.

    When you create XML files for the remaining three VMs, change the VM name and disk locations as appropriate. Also, change the graphical console port numbers for each VM.

You have completed the node preparation steps and created all the VMs. You are ready to configure and deploy the cluster. Go to Deploy the Cluster Nodes.

On Proxmox VE

Perform the following steps on Proxmox VE hypervisors.

In this example, we are installing Paragon Automation on a single server with three provisioned datastores: data0, data1, and data2. data0 is used to save the OVA file, data1 to save the first disk image, and data2 to save the second disk image. The VMs are named VM1, VM2, VM3, and VM4. We are also configuring the bare minimum hardware resources required to deploy a cluster.

Perform the following steps to create the node VMs.

  1. Log in to the Proxmox VE server CLI.

  2. Create the data0, data1, and data2 datastores.

  3. Copy the OVA to the data0 datastore and extract the .vmdk files.

    In this example, we have created a folder called ova in data0, copied the paragon-2.4.0-builddate.ova file to the ova folder, and used the tar -xvf paragon-2.4.0-builddate.ova command to extract the files.

  4. Create the first VM with a VirtIO network interface (net0), and configure the VM name, ID, and memory.

    Here, the VM ID is 100, the VM name is VM1, and the VM memory is 32 GB.

    Ensure that the VM ID is a unique identifier and is not shared by any other VM on the same server or in the same Proxmox cluster.
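
    A sketch using qm create; the bridge name vmbr0 is an assumption (the Proxmox VE default), so substitute the bridge used in your network setup. The --memory value is in megabytes (32768 = 32 GB):

      qm create 100 --name VM1 --memory 32768 --net0 virtio,bridge=vmbr0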

  5. Configure the total number of vCPUs as 16.
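
    For example:

      qm set 100 --cores 16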

  6. Configure storage for the VM.

  7. Configure the CPU type as host.
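
    For example:

      qm set 100 --cpu host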

  8. Navigate to the data0/ova folder.

  9. Import disk 1 to the VM as a raw image. Here, import the paragon-2.4.0-builddate-disk1.vmdk file to the data1 datastore.
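
    For example:

      qm importdisk 100 paragon-2.4.0-builddate-disk1.vmdk data1 --format raw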

  10. Import disk 2 to the VM as a raw image. Here, import the paragon-2.4.0-builddate-disk2.vmdk file to the data2 datastore.
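
    For example:

      qm importdisk 100 paragon-2.4.0-builddate-disk2.vmdk data2 --format raw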

  11. Assign the raw disks to virtio0 and virtio1, and configure the IO thread, backup, discard, and replication options as shown in the following example.
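
    A sketch; the volume IDs shown (data1:vm-100-disk-0 and data2:vm-100-disk-0) are placeholders, so use the volume IDs that qm importdisk reported in the previous steps:

      qm set 100 --virtio0 data1:vm-100-disk-0,iothread=1,backup=0,discard=on,replicate=0
      qm set 100 --virtio1 data2:vm-100-disk-0,iothread=1,backup=0,discard=on,replicate=0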

  12. Assign a boot device for the VMs.

    Disk 1 is the boot device and Disk 2 is used for Ceph storage.
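
    For example:

      qm set 100 --boot order=virtio0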

  13. (Optional) Configure the VM to be a non-ballooning device.
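
    For example:

      qm set 100 --balloon 0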

  14. (Optional) If you are not using a GUI for the OS, set the tablet option to 0 to save on CPU and memory.
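
    For example:

      qm set 100 --tablet 0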

  15. (Optional) Set the display option to Standard VGA to save on CPU and memory.
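
    For example:

      qm set 100 --vga std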

  16. (Optional) If you want to connect to the VM console using the CLI, configure a serial terminal using a socket.
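
    For example:

      qm set 100 --serial0 socket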

  17. Power on the VM.
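
    For example:

      qm start 100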

  18. Launch the VM console.

    • Using the CLI—If you have configured a serial terminal, you can access the VM console through the CLI (see the example after this list).

      The VM console appears.

    • Using the GUI—Log in to the Proxmox VE GUI. Select the powered-on VM1, and click >_ Console. The VM console appears.
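
    If you configured a serial terminal in step 16, a CLI connection to the console might look like this:

      qm terminal 100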

  19. Repeat steps 4 through 18 for the other three VMs. Enter appropriate unique VM IDs and names.

You have completed the node preparation steps and created all the VMs. You are ready to configure and deploy the cluster. Go to Deploy the Cluster Nodes.