NorthStar Controller System Requirements

 

You can install the NorthStar Controller in the following ways:

  • Installation on a physical server

  • Two-VM installation in an OpenStack environment (JunosVM is not bundled with the NorthStar Controller software)

Before you install the NorthStar Controller software, ensure that your system meets the requirements described in Table 1.

Table 1: Hardware Requirements for NorthStar Servers

Server Type | RAM | HDD | Core Processor | Host Must Support Hardware Virtualization (VT-d)
NorthStar Application Only | 48 GB | 500 GB | Intel i5/i7 | Yes
NorthStar Application with Analytics | 64 GB | 1.5 TB | Intel i5/i7 | Yes
Analytics Only | 32 GB | 1 TB | Intel i5/i7 | No
Secondary Collector Only | 12 GB | 100 GB | Intel i5/i7 | No

In addition to the hardware requirements, ensure that:

  • You use a supported version of CentOS Linux or Red Hat Enterprise Linux. These are our Linux recommendations:

    • Use a CentOS Linux or Red Hat Enterprise Linux 6.10, 7.2, 7.5, or 7.6 image; earlier CentOS versions are not supported.

    • Install your chosen supported Linux version by using the minimal ISO.

    • For CentOS Linux or Red Hat Enterprise Linux release 7.x, manually add the following utilities to your installation:

    CentOS can be downloaded from https://www.centos.org/download/.

  • The ports listed in Table 2 must be allowed by any external firewall being used. The ports with the word cluster in their purpose descriptions are associated with high availability (HA) functionality. If you are not planning to configure an HA environment, you can ignore those ports. The ports with the word Analytics in their purpose descriptions are associated with the Analytics feature. If you are not planning to use Analytics, you can ignore those ports. The remaining ports listed must be kept open in all configurations.

    Table 2: Ports That Must Be Allowed by External Firewalls

    Port | Purpose
    179 | BGP: JunosVM for router BGP-LS (not needed if IGP is used for topology acquisition)
    161 | SNMP
    450 | NTAD
    830 | NETCONF communication between NorthStar Controller and routers
    1514 | Syslog: Default Junos Telemetry Interface reports for RPM probe statistics (supports Analytics)
    2000 | JTI: Default Junos Telemetry Interface reports for IFD (supports Analytics)
    2001 | JTI: Default Junos Telemetry Interface reports for IFL (supports Analytics)
    2002 | JTI: Default Junos Telemetry Interface reports for LSP (supports Analytics)
    2888 | Zookeeper cluster
    3000 | JTI: In previous NorthStar releases, three JTI ports were required (2000, 2001, and 2002). Starting with Release 4.3.0, this single port can be used instead.
    3888 | Zookeeper cluster
    4189 | PCEP: PCC (router) to NorthStar PCE server
    5672 | RabbitMQ
    6379 | Redis
    7000 | Communications port to NorthStar Planner
    7001 | Cassandra database cluster
    8091 | Web: Web client/REST to web server (HTTP)
    8124 | Health Monitor
    8443 | Web: Web client/REST to secure web server (HTTPS)
    9000 | NetFlow
    9201 | Elasticsearch
    9300 | Elasticsearch cluster
    10001 | BMP passive mode: By default, the monitor listens on this port for incoming connections from the network.
    17000 | Cassandra database cluster
    50051 | PRPD: NorthStar application to router network
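    For illustration only, the following sketch shows how a few of these ports could be opened on a Linux host running firewalld. The ports shown are examples taken from Table 2; adapt the list to the features you use, and apply equivalent rules on whatever external firewall sits between NorthStar and the network.

        # Allow PCEP from PCC routers, NETCONF to routers, and secure web/REST access
        firewall-cmd --permanent --add-port=4189/tcp
        firewall-cmd --permanent --add-port=830/tcp
        firewall-cmd --permanent --add-port=8443/tcp
        firewall-cmd --reload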

    Figure 1 shows the direction of data flow through the ports when node clusters are not being used. Figure 2 and Figure 3 show the additional flows for NorthStar application HA clusters and analytics HA clusters, respectively.

    Figure 1: NorthStar Main Port Map

    Figure 2: NorthStar Application HA Port Map

    Figure 3: Analytics HA Port Map
Note

When upgrading NorthStar Controller, files are backed up to the /opt directory.

System Requirements for VMDK Deployment

The following requirements apply when preparing to run the NorthStar Controller on VMware ESXi by outputting a VMDK file of the NorthStar disk from the VMware build software:

  • ESXi 5.5, 6.0, and 6.5 are supported.

Analytics Requirements

In addition to ensuring that ports 2000, 2001, 2002, and 1514 are kept open, using the NorthStar analytics features requires that you counter the effects of Reverse Path Filtering (RPF). If your kernel performs RPF by default, you must do one of the following (see the example after this list):

  • Disable RPF.

  • Ensure there is a route to the source IP address of the probes pointing to the interface where those probes are received.

  • Specify loose-mode reverse path filtering (if the source address is routable with any of the routes on any of the interfaces).
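For example, on a CentOS or Red Hat host you can disable strict RPF or switch to loose mode with sysctl. This is a minimal sketch under the assumption that the probes arrive on an interface named ens3; substitute the interface that actually receives the probes, and persist the settings in /etc/sysctl.conf (or a file under /etc/sysctl.d/) if you want them to survive a reboot.

    # Option 1: disable reverse path filtering entirely
    sysctl -w net.ipv4.conf.all.rp_filter=0
    sysctl -w net.ipv4.conf.default.rp_filter=0

    # Option 3: use loose-mode reverse path filtering instead
    sysctl -w net.ipv4.conf.all.rp_filter=2
    sysctl -w net.ipv4.conf.ens3.rp_filter=2    # ens3 is a placeholder interface name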

Two-VM Installation Requirements

A two-VM installation is one in which the JunosVM is not bundled with the NorthStar Controller software.

Disk and Memory Requirements

The disk and memory requirements for installing NorthStar Controller in an OpenStack or other hypervisor environment are described in Table 3.

Table 3: Disk and Memory Requirements for NorthStar OpenStack Installation

VM | Virtual CPUs | Virtual RAM | Disk Size | Virtual NICs
NorthStar Application VM | 4 | 32 GB | 100 GB | 2 minimum
NorthStar-JunosVM | 1 | 4 GB | 20 GB | 2 minimum

See Table 1 for analytics and secondary collector server requirements.

VM Image Requirements

  • The NorthStar Controller application is installed on top of a Linux VM, so a Linux VM is required. You can obtain a Linux VM image in either of the following ways:

    • Use the generic version provided by most Linux distributors. Typically, these are cloud-based images for use in a cloud-init-enabled environment, and do not require a password. These images are fully compatible with OpenStack.

    • Create your own VM image. Some hypervisors, such as generic KVM, allow you to create your own VM image. We recommend this approach if you are not using OpenStack and your hypervisor does not natively support cloud-init.

  • The JunosVM is provided in Qcow2 format when it is bundled with the NorthStar Controller software. If you download the JunosVM separately (not bundled with NorthStar) from the NorthStar download site, it is provided in VMDK format.

  • The JunosVM image is compatible only with IDE disk controllers. You must configure the hypervisor to use the IDE controller type rather than SATA for the JunosVM disk image. In OpenStack, for example, you can set the bus type on the image with a glance command such as the following (the image ID placeholder is illustrative):

    glance image-update --property hw_disk_bus=ide --property hw_cdrom_bus=ide <image-id>

JunosVM Version Requirements

If you have, and want to continue using, a JunosVM version earlier than Release 17.2R1, you can change the NorthStar configuration to support it, but segment routing support will not be available. See Installing the NorthStar Controller 5.0.0 for the configuration steps.

VM Networking Requirements

The following networking requirements must be met for the two-VM installation approach to be successful:

  • Each VM requires the following virtual NICs:

    • One connected to the external network

    • One for the internal connection between the NorthStar application and the JunosVM

    • One connected to the management network, if separate router-facing and client-facing interfaces are required

  • We recommend a flat or routed network without any NAT for full compatibility.

  • A virtual network with one-to-one NAT (usually referenced as a floating IP) can be used as long as BGP-LS is used as the topology acquisition mechanism. If IS-IS or OSPF adjacency is required, it should be established over a GRE tunnel.

    Note

    A virtual network with n-to-one NAT is not supported.

Server Sizing Guidance

The guidance in this section should help you to configure your servers with sufficient memory to efficiently and effectively support the NorthStar Controller functions. The recommendations in this section are the result of internal testing combined with field data.

Server Requirements

The baseline server specifications presented here apply when the NorthStar application (including the NorthStar Planner and JunosVM) is co-located on the same server with analytics and the collector workers. Also included are server specifications for the NorthStar application, analytics, and secondary collectors on separate servers in the network.

Table 4 describes the server specifications we recommend for various network sizes.

Note

See our recommendations later in this section for additional disk space to accommodate JTI analytics in ElasticSearch, storing network events in Cassandra, and secondary collector (celery) memory requirements.

Table 4: Server Specifications by Network Size

Network size categories:

  • Extra Small: < 50 nodes, 20 PCCs, 10K LSPs
  • Small: < 150 nodes, 50 PCCs, 20K LSPs
  • Medium: < 500 nodes, 150 PCCs, 80K LSPs
  • Large: < 1000 nodes, 350 PCCs, 160K LSPs
  • Extra Large: < 2000 nodes, 650 PCCs, 320K LSPs

Baseline Configuration (all-in-one)

  • Extra Small: CPU 4 core, 2.4 GHz; RAM 16 GB; HD 50 GB
  • Small: CPU 8 core, 2.4 GHz; RAM 64 GB; HD 500 GB
  • Medium: CPU 16 core, 2.6 GHz; RAM 128 GB; HD 500 GB
  • Large: CPU 24 core, 2.6 GHz; RAM 192 GB; HD 1 TB
  • Extra Large: CPU 24 core, 2.8 GHz; RAM 288 GB; HD 1 TB

NorthStar Application Server

  • Extra Small: CPU 4 core, 2.4 GHz; RAM 8 GB; HD 50 GB
  • Small: CPU 8 core, 2.4 GHz; RAM 32 GB; HD 500 GB
  • Medium: CPU 16 core, 2.6 GHz; RAM 32 GB; HD 500 GB
  • Large: CPU 24 core, 2.6 GHz; RAM 96 GB; HD 1 TB
  • Extra Large: CPU 24 core, 2.8 GHz; RAM 144 GB; HD 1 TB

Analytics Server

  • Extra Small: CPU 2 core, 2.4 GHz; RAM 8 GB; HD 50 GB
  • Small: CPU 4 core, 2.4 GHz; RAM 64 GB; HD 500 GB
  • Medium: CPU 8 core, 2.6 GHz; RAM 64 GB; HD 500 GB
  • Large: CPU 16 core, 2.6 GHz; RAM 96 GB; HD 1 TB
  • Extra Large: CPU 16 core, 2.8 GHz; RAM 144 GB; HD 500 GB

Secondary Collectors (installed with collector.sh)

  • Extra Small: CPU 2 core, 2.4 GHz; RAM 4 GB; HD 50 GB
  • Small: CPU 4 core, 2.4 GHz; RAM 8 GB; HD 500 GB
  • Medium: CPU 8 core, 2.6 GHz; RAM 16 GB; HD 500 GB
  • Large: CPU 16 core, 2.6 GHz; RAM 16 GB; HD 1 TB
  • Extra Large: CPU 16 core, 2.8 GHz; RAM 32 GB; HD 1 TB

Note

An extra small all-in-one server is rarely sufficient for a production network, but could be suitable for a demo or trial.

Additional Disk Space for JTI Analytics in ElasticSearch

Considerable storage space is needed to support JTI analytics in ElasticSearch. Each JTI record event requires approximately 330 bytes of disk space. A reasonable estimate of the number of events generated is (number-of-interfaces + number-of-LSPs) ÷ reporting-interval-in-seconds = events per second.

So for a network with 500 routers, 50K interfaces, and 60K LSPs, with a configured five-minute reporting interval (300 seconds), you can expect roughly 366 events per second to be generated. At 330 bytes per event, that comes out to 366 events per second x 330 bytes x 86,400 seconds in a day = over 10 GB of disk space per day, or 3.65 TB per year. For the same size network, but with a one-minute reporting interval (60 seconds), you would have a much larger disk space requirement: over 50 GB per day, or 18 TB per year.
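The following is a minimal sketch that automates this estimate; the interface, LSP, and interval values are the ones from the example above and should be replaced with figures from your own network.

    # Rough estimate of daily ElasticSearch disk usage for JTI analytics
    INTERFACES=50000
    LSPS=60000
    INTERVAL=300            # reporting interval in seconds
    BYTES_PER_EVENT=330     # approximate size of one JTI record event

    EVENTS_PER_SEC=$(( (INTERFACES + LSPS) / INTERVAL ))
    DAILY_BYTES=$(( EVENTS_PER_SEC * BYTES_PER_EVENT * 86400 ))
    echo "~${EVENTS_PER_SEC} events/s, ~$(( DAILY_BYTES / 1000000000 )) GB per day"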

There is an additional roll-up event created per hour per element for data aggregation. In a network with 50K interfaces and 60K LSPs (a total of 110K elements), you would have 110K roll-up events per hour. In terms of disk space, that is 110K events per hour x 330 bytes per event x 24 hours per day = almost 1 GB of disk space required per day.

For a typical network of about 100K elements (interfaces + LSPs), we recommend that you allow for an additional 11 GB of disk space per day if you have a five-minute reporting interval, or 51 GB per day if you have a one-minute reporting interval.

See NorthStar Analytics Raw and Aggregated Data Retention in the NorthStar Controller User Guide for information about customizing data aggregation and retention parameters to reduce the amount of disk space required by ElasticSearch.

Additional Disk Space for Network Events in Cassandra

The Cassandra database is another component that requires additional disk space for storage of network events.

Using that same example of 50K interfaces and 60K LSPs (110K elements) and estimating one event every 15 minutes (900 seconds) per element, there would be 122 events per second. The storage needed would then be 122 events per second x 300 bytes per event x 86,400 seconds per day = about 3.2 GB per day, or 1.2 TB per year.

Using one event every 5 minutes per element as an estimate instead of every 15 minutes, the additional storage requirement is closer to 9.6 GB per day, or 3.6 TB per year.

For a typical network of about 100K elements (interfaces + LSPs), we recommend that you allow for an additional 3 to 10 GB of disk space per day, depending on the rate of event generation in your network.

By default, NorthStar keeps event history for 35 days. To customize the number of days event data is retained (see the example after this list):

  1. Modify the dbCapacity parameter in /opt/northstar/data/web_config.json.
  2. Restart the pruneDB process using the supervisorctl restart infra:prunedb command.
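A minimal sketch of these steps is shown below, reducing retention to 14 days. The exact JSON formatting of the dbCapacity parameter in web_config.json is an assumption here, so inspect the file and adjust the command (or simply edit the value with a text editor) before running it.

    # Set event retention to 14 days (assumes the parameter appears as "dbCapacity": <number>)
    sed -i 's/"dbCapacity"[[:space:]]*:[[:space:]]*[0-9]*/"dbCapacity": 14/' /opt/northstar/data/web_config.json

    # Restart the pruneDB process so the new value takes effect
    supervisorctl restart infra:prunedb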

Collector (Celery) Memory Requirements

When you use the collector.sh script to install secondary collectors on a server separate from the NorthStar application (for distributed collection), the script installs the default number of collector workers described in Table 5. The number of celery processes started by each worker is the number of CPU cores plus one. In a 32-core server, for example, the one default worker installed would start 33 celery processes. Each celery process uses about 50 MB of RAM.
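As a quick illustration, the following sketch estimates the celery process count and memory footprint for a single worker on the local server, using the 50 MB per-process figure from this section.

    # Estimate celery processes and RAM for one collector worker on this server
    CORES=$(nproc)              # number of CPU cores
    PROCESSES=$(( CORES + 1 ))  # each worker starts cores + 1 celery processes
    echo "1 worker -> ${PROCESSES} celery processes, ~$(( PROCESSES * 50 )) MB RAM"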

Table 5: Default Workers, Processes, and Memory by Number of CPU Cores

CPU Cores | Workers Installed | Total Worker Processes | Minimum RAM Required
1-4 | 4 | (CPUs + 1) x 4 = 20 | 1 GB
5-8 | 2 | (CPUs + 1) x 2 = 18 | 1 GB
16 | 1 | (CPUs + 1) x 1 = 17 | 1 GB
32 | 1 | (CPUs + 1) x 1 = 33 | 2 GB

See Secondary Collector Installation for Distributed Data Collection for more information about distributed data collection and secondary workers.

The default number of workers installed is intended to optimize server resources, but you can change the number by using the provided config_celery_workers.sh script. See Collector Worker Installation Customization for more information. You can use this script to balance the number of workers installed with the amount of memory available on the server.

Note

This script is also available to change the number of workers installed on the NorthStar application server from the default, which also follows the formulas shown in Table 5.