Using an Ansible Playbook to Automate NorthStar Installation

An Ansible playbook is bundled with the NorthStar download package, which you can obtain from the NorthStar download page. The playbook automates NorthStar software installation and is appropriate for both lab and production systems. If you are not familiar with the Ansible open-source automation tool, information is readily available online.

The Ansible playbook installs NorthStar with cRPD. There is no playbook available for a Junos VM installation.

See Installing the NorthStar Controller for installation procedures not using the Ansible playbook.

The Ansible playbook requires a host machine (a VM, laptop, or desktop) from which the installation is initiated. This host is called the “control node”. The status of the installation is maintained on the control node to facilitate future configuration changes, which also makes it a good place to keep the inventory file (hosts.yml) and license files for a future reinstallation or update. You install the control node’s public SSH keys on the hosts targeted for installation (the “managed nodes”) so that Ansible can communicate with those nodes.

Before You Begin

To prepare for executing the Ansible playbook:

  1. Install the following on your control node:

    • Linux operating system

    • Python

    • Python-pip (for installing Ansible)

    • SSH

    • Ansible

    We recommend using virtualenv to create an isolated Python environment in which to install Ansible. It creates a folder with all the necessary executables. You can install Ansible using the pip command within the virtualenv. You could also use pip to install Ansible in the system environment.

    Here is an example of Ansible installation using virtualenv:

  2. Identify all the managed nodes where NorthStar software is to be installed. Ensure that each node has the following:

    • Basic operating system (Red Hat Enterprise Linux 7.x or CentOS 7.x)

    • Network connectivity

    • SSH server

  3. Ensure that you can SSH from the control node to all of the managed nodes.

    To execute the playbook, you must be able to connect to each managed node and become root. You can set this up by adding your SSH key to the ~/.ssh/authorized_keys file on each managed node either for root directly, or for another account that can become root by executing sudo without a password.

    An alternate method is to use variables to specify the username/password to connect to the managed nodes and the sudo password in the inventory file. These are the variables:

    • ansible_user

      The user to connect to, such as root. The default is the current user.

    • ansible_password

      Password that authenticates the ansible_user.

    • ansible_become

      Set this to true if ansible_user is not root.

    • ansible_sudo_password

      Password provided to sudo.
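As a sketch, these connection variables might appear in the vars subsection of the all group in the inventory file (the account name and passwords below are placeholders):

```yaml
all:
  vars:
    ansible_user: admin              # account used to connect; not root in this example
    ansible_password: "changeme"     # placeholder; consider encrypting with ansible-vault
    ansible_become: true             # required because ansible_user is not root
    ansible_sudo_password: "changeme"
```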

  4. Copy the Ansible playbook for NorthStar and the NorthStar-Bundle.rpm to the control node and change to that directory.

Creating the Ansible Inventory File

Create a custom inventory file for your NorthStar installation. The inventory is a group of lists that define the managed nodes in your planned NorthStar installation. The Ansible playbook for NorthStar contains a sample inventory file named hosts.yml.sample that you can use as a template to start a custom inventory file. The default name for the inventory file is hosts.yml. Use a text editor to customize the inventory file.

The template inventory file is organized into several groups:

  • all

    Contains the subsection vars to define variables that apply to all managed nodes. For example, ansible_user defines the account name used to connect to the managed nodes.

  • northstar

Defines nodes and variables for managed nodes that will run NorthStar services such as PCS, TopoServer, and the web front end. Nodes in the northstar group should define a northstar_license variable that contains the license information for that node.

  • northstar_ha

    Contains nodes or subgroups of nodes that are configured for NorthStar high availability.

  • northstar_analytics

    Contains nodes and variables for analytics.

  • northstar_collector

    Contains nodes and variables for analytics secondary collectors.

This example shows a portion of an inventory file including some of these groups:
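A minimal sketch of such a file, assuming placeholder hostnames and license paths (the exact layout in hosts.yml.sample may differ):

```yaml
all:
  vars:
    ansible_user: root
northstar:
  hosts:
    node1:
      northstar_license: licenses/node1.lic   # per-node license file (placeholder path)
    node2:
      northstar_license: licenses/node2.lic
northstar_ha:
  hosts:
    node1:
    node2:
```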

You can encrypt secret variables, such as northstar_password or ansible_password, using the ansible-vault encrypt_string command. More information is available in the Ansible documentation for ansible-vault.

Executing the Playbook

After defining the inventory file, execute the ansible -m ping all command to verify that all managed nodes are defined correctly, are reachable, and that SSH login was successful.

Execute the ./install.yml (or ansible-playbook install.yml) command to execute the installation playbook and install all managed nodes as defined in the inventory file. You can add optional arguments to the install.yml command. Some useful examples include:

  • -e key=value

Extra variables, supplied on the command line as key=value pairs.

  • -i inventory-file

    Use a different inventory file. You might utilize this, for example, if you use the control node to install software for independent clusters.

  • -l limit

Limit execution to a subset of managed nodes, given as a comma-separated list of node or group names. For example, limiting to two node names would install on only those two managed nodes.

  • -t taglist

    Limit execution to a set of tagged tasks. For example, -t northstar would only install the NorthStar application.

  • --ask-vault-pass

    Ask for the decryption key for embedded secrets.

Installing Data Collectors and Secondary Collectors for Analytics

You can install NorthStar data collectors to support either of two analytics configurations:

  • Analytics co-hosted with the NorthStar application

    For this configuration, add the same managed nodes to the northstar_analytics inventory group that are in the northstar inventory group.

  • External analytics node or cluster

    For this configuration, add one or more managed nodes to the northstar_analytics inventory group.

Install analytics secondary collectors by adding managed nodes to the northstar_collector inventory group. In order to successfully install secondary collectors, the installation script needs access to a node running the NorthStar application. The primary node must either be installed together with analytics/collector nodes, or it must be running before the analytics/collector nodes are installed. The script takes the required information from the northstar inventory group, but you can override that by using the variable northstar_primary.
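Sketching the external-analytics case in the inventory file (hostnames are placeholders, and northstar_primary is shown only to illustrate the override):

```yaml
northstar:
  hosts:
    app1:
northstar_analytics:
  hosts:
    analytics1:
    analytics2:
northstar_collector:
  hosts:
    collector1:
      northstar_primary: app1   # optional; defaults to the first member of the northstar group
```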


The variables provided specifically for use with the NorthStar playbook are listed in Table 1.

Table 1: NorthStar Ansible Playbook Variables

Variable Name



Name of bundle RPM to install.


List of NTP servers.

If you do not specify any NTP servers, the managed nodes are configured to synchronize with the following four NTP servers:






Per-node NorthStar license file.


Per-node NorthStar license (inline).


ASN for cRPD route reflector.

By default, the ASN in the NorthStar configuration script is not modified; that default ASN is 64512.


cRPD license file.


cRPD license (inline).


Managed nodes that are part of HA.

By default, all members of the northstar_ha inventory group are included.


Virtual IP address for HA.


Name of a geo-HA site.

The default is site1.


Per-node HA priority.

The default is 100.


The NorthStar application node used to configure remote analytics and collector nodes. The primary node must either be installed together with analytics/collector nodes, or it must be running before the analytics/collector nodes are installed.

The default is the first member of the northstar inventory group.


Managed nodes running the NorthStar application.

By default, all members of the northstar inventory group are included.


Per-node HA priority.

The default is 100.


Virtual IP address for the analytics cluster.

Table 2 lists some other useful variables.




User to connect to the managed node.


Password used to connect to the managed node.


Set this to true if ansible_user is not root.


Password provided to sudo.