Installing a NorthStar Cluster Using a HEAT Template

This topic describes installing a NorthStar cluster in an OpenStack environment using a HEAT template. These instructions assume that you are using one of the provided HEAT templates.

System Requirements

In addition to the system requirements for installing the NorthStar Controller in a two-VM environment, a cluster installation also requires that:

  • Each compute node hosts only one NorthStar Controller VM and one JunosVM. You can ensure this by launching the NorthStar Controller VM into a specific availability zone and compute node, or by using a server group (OS::Nova::ServerGroup) with an anti-affinity rule (see the sketch following this list).

  • The cluster has a single virtual IP address for the client-facing connection. If promiscuous mode is disabled in OpenStack (blocking the virtual IP address), you can use the allowed-address-pairs attribute of the Neutron port to permit the additional address.
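
If you set up anti-affinity outside of the HEAT template, the equivalent manual workflow with the nova CLI looks roughly like the following sketch; the group, image, flavor, and network values are placeholders.

    # Create a server group whose anti-affinity policy keeps members on separate compute nodes
    nova server-group-create northstar-anti-affinity anti-affinity

    # Launch a NorthStar Controller VM as a member of that group
    # (image, flavor, and network values are placeholders)
    nova boot --image centos6-cloud --flavor northstar.large \
      --nic net-id=<management-net-uuid> \
      --hint group=<server-group-uuid> northstar-vm-01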

Launch the Stack

Create a stack from the HEAT template file using the heat stack-create command.
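
For example, assuming the provided template file is named northstar_cluster.yaml and the stack is to be called northstar-cluster (both names are placeholders; add -P options for any parameters your template expects):

    # Launch the cluster stack from the HEAT template
    heat stack-create -f northstar_cluster.yaml northstar-cluster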

Obtain the Stack Attributes

  1. Ensure that the stack creation is complete by examining the output of the heat stack-show command.
  2. Obtain the UUID of the NorthStar Controller VM and JunosVM instances for each node in the cluster by examining the output of the heat resource-list command.
  3. Using the UUIDs obtained from the heat resource-list output, obtain the associated IP addresses by executing the nova interface-list command for each UUID.
  4. Verify that each compute node in the cluster hosts only one NorthStar Controller VM and only one JunosVM by checking the host of each instance (for example, with the nova show command; see the sketch following this list).
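
A minimal sketch of these steps, assuming the stack is named northstar-cluster (a placeholder) and that your OpenStack credentials can display the hypervisor hostname:

    # 1. Confirm that the stack status is CREATE_COMPLETE
    heat stack-show northstar-cluster | grep stack_status

    # 2. List the stack resources and note the UUID of each NorthStar Controller VM and JunosVM
    heat resource-list northstar-cluster

    # 3. List the IP addresses attached to an instance (repeat for each UUID)
    nova interface-list <instance-uuid>

    # 4. Check which compute node hosts the instance (repeat for each UUID); each
    #    compute node should appear for only one Controller VM and one JunosVM
    nova show <instance-uuid> | grep hypervisor_hostname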

Configure the Virtual IP Address

  1. Find the UUID of the virtual IP port that is defined in the HEAT template by examining the output of the heat resource-list command.
  2. Find the assigned virtual IP address for that UUID by examining the output of the neutron port-show command.
  3. Find the UUID of each public-facing NorthStar Controller port by examining the output of the neutron port-list command.
  4. Update each public-facing NorthStar Controller port to accept the virtual IP address by executing the neutron port-update command for each port (see the sketch following this list).

  5. Wait until each NorthStar Controller VM finishes booting, at which point you should be able to ping its public IP address. You can also use the nova console-log command to monitor the boot status of the NorthStar Controller VM.
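
A minimal sketch of steps 1 through 4, assuming the stack is named northstar-cluster and the virtual IP port resource is named vip_port in the template (both names are placeholders):

    # 1. Find the UUID of the virtual IP port defined in the HEAT template
    heat resource-list northstar-cluster | grep vip_port

    # 2. Find the virtual IP address assigned to that port
    neutron port-show <vip-port-uuid> | grep fixed_ips

    # 3. Find the UUID of each public-facing NorthStar Controller port
    neutron port-list | grep <controller-public-ip>

    # 4. Allow the virtual IP address on each public-facing port
    neutron port-update <controller-port-uuid> \
      --allowed-address-pairs type=dict list=true ip_address=<virtual-ip>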

Resize the Image

The official CentOS 6 cloud image does not resize correctly for the selected OpenStack flavor, so the NorthStar Controller VM filesystem is set at 8 GB instead of the size actually specified by the flavor. Using the following procedure, you can adjust the filesystem to match the allocated disk size. Alternatively, you can hold off on resizing until after you complete the NorthStar RPM bundle installation and use the resize-vm script in /opt/northstar/utils/.

Caution

The fdisk command can have undesirable effects if used inappropriately. We recommend that you consult with your system administrator before proceeding with this workaround, especially if you are unfamiliar with the fdisk command.

Use the following procedure for each NorthStar Controller VM. Replace XX in the commands with the number of the VM (01, 02, 03, and so on).

  1. Determine whether the filesystem size of the VM matches the disk size specified by the flavor. If it does, you do not need to proceed with the resizing.
  2. Use the fdisk command to recreate the partition.
  3. Reboot the VM to apply the partition changes.
  4. Wait until the NorthStar Controller VM has returned to an up state.
  5. Reconnect to the VM using SSH.
  6. Check the partition size again to verify that the partition was resized.
  7. If the filesystem size is still incorrect, use the resize2fs command to grow the filesystem to fill the resized partition (see the sketch following this list).
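
A minimal sketch of this procedure, assuming the root filesystem is on /dev/vda1 (a common layout for cloud images; verify the device and partition numbers on your own VM before changing anything):

    # 1. Compare the filesystem and disk sizes against the flavor
    df -h /
    fdisk -l /dev/vda

    # 2. Re-create the root partition so that it spans the whole disk: in fdisk,
    #    delete partition 1, create a new primary partition 1 starting at the same
    #    first sector and ending at the last sector, then write the partition table
    fdisk /dev/vda

    # 3. Reboot to load the new partition table
    reboot

    # 6. After reconnecting over SSH, confirm that the partition now covers the disk
    fdisk -l /dev/vda

    # 7. If the filesystem is still small, grow it to fill the resized partition
    resize2fs /dev/vda1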

Install the NorthStar Controller RPM Bundle

Install the NorthStar Controller RPM bundle for an OpenStack environment. The procedure uses the rpm and install-vm.sh commands.
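
A rough sketch of the commands, assuming the bundle RPM has already been copied to the NorthStar Controller VM; the file and directory names below depend on the NorthStar release and are placeholders:

    # Install the NorthStar bundle RPM
    rpm -Uvh NorthStar-Bundle-<version>.x86_64.rpm

    # Run the install-vm.sh installer (location is a placeholder;
    # adjust to the directory where the bundle unpacks)
    cd /opt/northstar/northstar_bundle_<version>/
    ./install-vm.sh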

Configure the JunosVM

For security reasons, the JunosVM does not come with a default configuration. Use the following procedure to manually configure the JunosVM using the OpenStack novnc client.

  1. Obtain the novnc client URL.
  2. Configure the JunosVM as you would in a fresh install of the Junos OS.
  3. Copy the NorthStar Controller VM root user's SSH public key to the JunosVM. This allows the NorthStar Controller VM to configure the JunosVM over an SSH key-based connection.
  4. On the NorthStar Controller VM, run the net_setup.py script and select option B to complete the configuration of the JunosVM. When the setup completes, you should be able to ping the JunosVM IP address from the NorthStar Controller VM.
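
For step 1, a hedged example of retrieving the novnc console URL with the nova CLI, using the JunosVM instance UUID obtained earlier from heat resource-list:

    # Print the browser URL for the JunosVM console
    nova get-vnc-console <junosvm-uuid> novnc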

Configure SSH Key Exchange

Use the following procedure to configure SSH key exchange between the NorthStar Controller VM and the JunosVM. For High Availability (HA) in a cluster, this must be done for every pair of VMs.

  1. Log in to the NorthStar Controller server and display the contents of the id_rsa.pub file by executing the cat command.

    You will need the ssh-rsa string from the output.

  2. Log in to the JunosVM and replace the configured ssh-rsa string with the one from the id_rsa.pub file.
  3. On the NorthStar Controller server, update the known hosts file so that the JunosVM host key is trusted (see the sketch following this list).
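
A minimal sketch of the exchange, assuming the key pair belongs to the root user on the NorthStar Controller VM and that the JunosVM login user is northstar (both user names are assumptions; substitute the accounts configured in your environment):

    # 1. On the NorthStar Controller VM: print the public key and copy the ssh-rsa string
    cat /root/.ssh/id_rsa.pub

    # 2. On the JunosVM: install the public key under the login user, then commit
    configure
    set system login user northstar authentication ssh-rsa "ssh-rsa AAAA...example... root@northstar"
    commit and-quit

    # 3. On the NorthStar Controller VM: drop any stale host key and record the new one
    ssh-keygen -R <junosvm-ip>
    ssh northstar@<junosvm-ip> exit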

Configure the HA Cluster

HA on the NorthStar Controller is an active/standby solution. That means that there is only one active node at a time, with all other nodes in the cluster serving as standby nodes. All of the nodes in a cluster must be on the same local subnet for HA to function. On the active node, all processes are running. On the standby nodes, those processes required to maintain connectivity are running, but NorthStar processes are in a stopped state.

If the active node experiences a hardware- or software-related connectivity failure, the NorthStar HA_agent process elects a new active node from amongst the standby nodes. Complete failover is achieved within five minutes. One of the factors in the selection of the new active node is the user-configured priorities of the candidate nodes.

All processes are started on the new active node, and the node acquires the virtual IP address that is required for the client-facing interface. This address is always associated with the active node, even if failover causes the active node to change.

See the NorthStar Controller User Guide for further information on configuring and using the HA feature.