Installing the proNX Optical Director Software (Release 2.2)



Before you begin, ensure that:

  • The control machine is set up. See Setting Up the Control Machine (Examples).

  • You have copied the proNX Optical Director software package into the directory on the control machine where you want to run the installation scripts. This directory is called the <install-dir> in the examples in this procedure. See Downloading the proNX Optical Director Software.

  • The cluster servers are set up with the proper, freshly-installed operating system and are accessible by the control machine. See Installing the Host OS on the Cluster Server.


    The operating system must be freshly installed. If you are currently running an existing version of the proNX Optical Director software and you would like to install the latest version, you must reinstall the operating system first.


If you are currently running an existing version of the proNX Optical Director software, close all browser windows to that proNX Optical Director before you install the new version.

Use this procedure on the control machine to perform a fresh installation of the proNX Optical Director software on all the servers in the cluster. This procedure applies to the installation of release 2.2 only. You do not require Internet access to use this procedure.


This procedure can take two to three hours or longer since it takes time for the servers to synchronize their databases.


To make the examples in this procedure generic to all releases, the example output does not show release numbers in the filenames.

  1. Untar and uncompress the downloaded archive, for example:
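    A representative invocation, assuming the archive is named pod-installer-<version>.tar.gz (an assumed filename; substitute the actual name of the downloaded archive) and you are in <install-dir>:

```shell
# Extract the installer archive into the current directory.
# The archive name is illustrative; use the actual downloaded filename.
tar -xzvf pod-installer-<version>.tar.gz
```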
  2. Specify the cluster members.

    Use a text editor to edit the <install-dir>/pod-installer-<version>/inventory file and specify one of the cluster servers as the master node and all of the cluster servers (including the master) as cluster nodes. It does not matter which server you specify as the master.

    The inventory file contains a [masters] section and a [nodes] section. Specify the master node in the [masters] section and all nodes (including the master node) in the [nodes] section.

    For example, to set up a three-node cluster, list one of the three servers in the [masters] section and all three servers in the [nodes] section. These three nodes must have the required Atomic Host OS installed and must be on the same subnet.
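    Using illustrative addresses (10.1.1.1 through 10.1.1.3 are placeholders, not addresses from a real deployment), such an inventory file might look like this:

```
[masters]
10.1.1.1

[nodes]
10.1.1.1
10.1.1.2
10.1.1.3
```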


    Although the above example uses IP addresses, it is recommended that you use resolvable hostnames instead.

  3. Install the software. The supplied script installs the software on all cluster servers. You do not need to install the software on each server individually.

    When you install the software, you have to specify a virtual IP address for the cluster. The virtual IP address is the IP address that users and devices use to communicate with the proNX Optical Director. It is virtual in that the IP address is not permanently associated with an individual server. Because the IP address is decoupled from the hardware, users and devices have a consistent address to use regardless of which server is handling the communication, which makes individual server failures easier to tolerate.

    • To install the software with a virtual IP address:
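    The exact command depends on the script name shipped in the package; a representative invocation, assuming a script named deploy-pod.sh in the installer directory (a hypothetical name; use the script provided in your installer package), would be:

```shell
# Run the installer from the directory the archive was extracted into.
# "deploy-pod.sh" is a hypothetical placeholder for the shipped script.
cd <install-dir>/pod-installer-<version>
./deploy-pod.sh
```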


    The installation script must execute in a bash shell. In most cases this happens automatically because the script directs the current shell to re-execute it in bash. If the script fails with a syntax error, rerun it by explicitly specifying the bash shell. For example:
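    For instance, with a hypothetical installer script named deploy-pod.sh (substitute the script name from your package):

```shell
# Explicitly invoke the installer under bash if the default shell
# produced a syntax error. The script name is a placeholder.
bash ./deploy-pod.sh
```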

    You are prompted for the following:

    • A username and password to log in to the cluster servers. This user must be root or a user with no-password sudo access. See Installing the Host OS on the Cluster Server for more information.

    • The virtual IP address to use. This virtual IP address must be on the same subnet as the cluster servers.

    • A shared secret to be used by the proNX Optical Director to secure internal data. The secret is used internally by Kubernetes pods and should be retained for advanced debugging purposes. It is stored as a Kubernetes secret and can be retrieved with standard kubectl commands after installation if it is accidentally lost.


    The control machine does not store the username, password, or shared secret.

    When the script finishes, it outputs configuration information that you can use to populate the kubectl configuration file on the control machine. This configuration appears between the [BEGIN] and [END] tags in the output of the install command.

    Here is the command and an example of the script output:
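    The exact output varies by release; an illustrative sketch of the relevant portion of the output (every value below is a placeholder, not real output) looks like this:

```
...
[BEGIN]
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-ca-certificate>
    server: https://<master-node-address>:6443
  name: cluster.local
...
[END]
```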


    If the installation script returns an error, ensure that the control machine has been set up correctly.

    The installation log file is located at <install-dir>/pod-installer-<version>/logs/deploy-cluster.log.

  4. Optionally, create and populate the kubectl configuration file.

    The kubectl configuration file is required by the kubectl utility to manage the nodes in the cluster. You can run the kubectl utility in any of the following places:

    • on the master node (automatically set up as part of the installation)

    • on the control machine (if you have the kubectl utility set up)

    • on another machine (for example, on a machine that you currently use to manage your other Kubernetes installations)

    If you only want to run the kubectl utility on the master node, then you can skip this step. If you want to run the kubectl utility on either the control machine or on another machine, then you will need to create or modify the kubectl configuration file on that machine.

    1. Create or edit the configuration file on the machine where you are running the kubectl utility. By default, kubectl looks for the config file in the ~/.kube directory in your home directory; you might need to create the .kube directory and the config file if one or both do not exist.
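    For example, on a Linux control machine, using the default kubectl paths:

```shell
# Create the default kubectl config directory if it does not exist
mkdir -p ~/.kube
# Then open ~/.kube/config in your preferred text editor, for example:
# vi ~/.kube/config
```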

    2. Copy and paste the output of the install command into this file. The text to copy is between the [BEGIN] and [END] tags. Do not copy the [BEGIN] and [END] tags themselves. If you are modifying an existing config file that is used to manage other Kubernetes installations, append the output to the end of the existing config file.

    3. Save the file.

  5. Verify that the installation is successful. Perform this step from the master node or from the machine where you modified the config file in step 4.

    On the master node, the kubectl utility is automatically set up in the path, so you can issue the command from any directory. The following command displays the state of all pods on all nodes.

    # kubectl get pods -o wide

    On other machines, if the kubectl utility is not set up in the path, you must issue kubectl commands from the directory where you installed the kubectl utility, for example:
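    For instance, if kubectl was installed into your current working directory rather than onto the path:

```shell
# Run kubectl from its installation directory by prefixing "./"
./kubectl get pods -o wide
```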


    If you issue this command on another machine and it returns an error or prompts you for a username and password, verify that you copied the output into the kubectl config file correctly.

    As the proNX Optical Director starts up, you will see the STATUS of some Kubernetes pods cycle between Running and other values. This is normal. A Kubernetes pod is operational when it has a STATUS of Running and a READY state of 1/1 (number of containers in the pod that are ready / total number of containers in the pod). The installation is complete once all the Kubernetes pods become operational.
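    To monitor the pods until they all become operational, you can rerun the command periodically or use kubectl's standard --watch flag:

```shell
# Stream pod status updates continuously until interrupted (Ctrl+C)
kubectl get pods -o wide --watch
```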

You have now completed the proNX Optical Director installation. You can connect to the user interface by pointing your browser to http://<ip_address> where <ip_address> is the virtual IP address that you configured.


If you did not close all browser windows to the previous version of the proNX Optical Director before installing this version, refresh each open browser window.