New Features

The following new features are introduced in NorthStar Release 4.2.0:

  • Bandwidth Sizing Controlled by NorthStar

    NorthStar Controller can now be configured to periodically compute a new planned bandwidth for each bandwidth sizing-enabled LSP, based on aggregated LSP traffic statistics. This feature is not to be confused with auto-bandwidth, which is performed on the router side.
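
    The following Python sketch is only a conceptual illustration of deriving a planned bandwidth from aggregated traffic samples; the percentile and minimum-change threshold shown are hypothetical and do not reflect the actual NorthStar sizing algorithm or its parameters.

      # Conceptual sketch only; the 95th percentile and 10% change threshold
      # below are hypothetical, not NorthStar's actual sizing parameters.
      def compute_planned_bandwidth(samples_bps, current_bps, percentile=95,
                                    min_change=0.10):
          """Return a new planned bandwidth in bps, or None if no resize is needed."""
          if not samples_bps:
              return None
          ordered = sorted(samples_bps)
          idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
          candidate = ordered[idx]
          if current_bps and abs(candidate - current_bps) < min_change * current_bps:
              return None               # change too small to be worth resignaling
          return candidate

      # Example: aggregated samples peak near 12 Mbps, so the LSP is resized.
      # compute_planned_bandwidth([8e6, 9e6, 11e6, 12e6], current_bps=5e6) -> 12000000.0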

    For bandwidth sizing to occur, you must:

    • Enable NorthStar analytics

      NorthStar supports bandwidth sizing for all LSPs for which it can obtain LSP statistics, so you must enable NorthStar analytics and confirm that NorthStar is receiving traffic statistics from the LSPs.

    • Configure LSPs so their bandwidth sizing attribute is set to yes (bandwidth sizing enabled). LSPs without this setting are not sized.

    • Create and schedule a bandwidth sizing task in the Task Scheduler.

    For more information, see the following NorthStar Controller User Guide topics:

    • Provision LSPs

    • Bandwidth Sizing

  • Fail-Safe Mode

    NorthStar Controller fail-safe mode has been introduced to prevent a complete loss of visibility into the network if the Cassandra database becomes inaccessible. Previously, when connectivity to Cassandra was lost, the web UI and REST API were completely unusable and you had no visibility into the state of the network. With fail-safe mode, a view-only version of the web UI is accessible through a fail-safe web UI landing page and Admin login credentials. Fail-safe mode works even if only one node in the NorthStar cluster is up and running.

    In fail-safe mode:

    • A stored snapshot of the network topology can be loaded from the file system (see the sketch after this list).

    • The HA agent is able to elect a new active node if necessary. The NorthStar processes on the new active node start in fail-safe mode because Cassandra is not available.

    • Existing delegated or PCE-initiated LSPs can be rerouted by the PCS in the event of network outages. New LSPs cannot be created and LSPs cannot be deleted in NorthStar. LSPs can still be created on the router and delegated to NorthStar.

    • The status of the NorthStar cluster is displayed for all users via a banner in the web UI. The NorthStar health reporting function also reports the status of nodes, even when they are down.
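
    The following Python sketch illustrates only the general idea of falling back to a stored topology snapshot when the database is unreachable. The snapshot path, file format, and fallback logic shown here are assumptions for illustration and do not reflect the actual NorthStar implementation.

      import json

      # Illustrative sketch only; the snapshot path and the read-only fallback
      # shown here are hypothetical, not the actual NorthStar implementation.
      SNAPSHOT_PATH = "/opt/northstar/topology_snapshot.json"  # hypothetical path

      def load_topology(read_from_cassandra):
          """Return (topology, view_only), preferring the live database."""
          try:
              return read_from_cassandra(), False      # normal mode
          except ConnectionError:
              with open(SNAPSHOT_PATH) as f:           # fail-safe mode
                  return json.load(f), True            # view-only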

    See NorthStar Controller Fail-Safe Mode in the NorthStar Controller User Guide for more information.

  • Telemetry Data Aggregation

    Telemetry data is now rolled up (aggregated) every hour and retained in Elasticsearch for a user-configurable number of days. Aggregation makes longer data retention feasible within limited disk space. When you modify the retention parameters, keep in mind the impact on your storage resources.
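
    As a conceptual illustration only, the following Python sketch rolls raw samples up into hourly buckets; the field names and the choice of aggregates (mean and maximum) are assumptions, not the NorthStar or Elasticsearch schema.

      from collections import defaultdict
      from statistics import mean

      # Conceptual hourly rollup; aggregate names are illustrative assumptions.
      def roll_up_hourly(samples):
          """samples: iterable of (epoch_seconds, bits_per_second) tuples."""
          buckets = defaultdict(list)
          for ts, bps in samples:
              buckets[ts - ts % 3600].append(bps)      # truncate to the hour
          return {hour: {"avg_bps": mean(v), "max_bps": max(v)}
                  for hour, v in buckets.items()}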

    See the following NorthStar Controller User Guide topics for more information:

    • NorthStar Analytics Raw and Aggregated Data Retention

    • Introduction to the Task Scheduler

  • Resource Optimization for Collector Worker Installation

    When you install the NorthStar application, a default number of collector workers is installed on the NorthStar server, based on the number of cores in the CPU. This default is now calculated differently to optimize server resources, but you can change the number of workers by using the provided config_celery_workers.sh script. The same behavior applies when you install slave worker groups for distributed data collection. Each installed worker starts a number of celery processes equal to the number of cores in the CPU plus one.
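
    As a quick illustration of the stated relationship between CPU cores and celery processes, the following Python snippet computes the per-worker process count for the local machine; the actual count is set during installation, not by a snippet like this.

      import os

      # One celery process per CPU core, plus one, as stated above.
      celery_processes_per_worker = (os.cpu_count() or 1) + 1
      print(celery_processes_per_worker)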

    For more information, see the following NorthStar Controller Getting Started Guide topics:

    • Collector Worker Installation Customization

    • Slave Collector Installation for Distributed Data Collection

  • Server Sizing Guidance Documentation

    We are now providing server sizing guidance in the NorthStar Controller Getting Started Guide to help customers configure their servers with sufficient memory to effectively support the NorthStar Controller functions. The recommendations are the result of internal testing combined with field data.

    Server specifications for various network sizes are included, along with special considerations related to JTI analytics in Elasticsearch, storing network events in Cassandra, and slave collector (celery) memory requirements.

    For more information, see NorthStar Controller System Requirements in the NorthStar Controller Getting Started Guide.

  • Documentation for Configuring the Cassandra Database in a Multiple Data Center Environment

    NorthStar Controller uses the Cassandra database to manage database replicas in a NorthStar cluster. The default setup of Cassandra assumes a single data center. In other words, Cassandra knows only the total number of nodes; it knows nothing about the distribution of nodes within data centers.

    But in a production environment, it is typical to have multiple data centers with one or more NorthStar nodes in each data center. In that environment, it is preferable for Cassandra to have awareness of the data center topology and to take that into consideration when placing database replicas.

    We now provide instructions for configuring Cassandra for use in a multiple data center environment. In NorthStar 4.3.0, these manual instructions will be automated and part of the regular installation procedures for the NorthStar software.

    Because Apache Cassandra is open source software, Cassandra usage, terminology, and best practices are well documented elsewhere; our documentation focuses specifically on the NorthStar use of Cassandra.
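
    As generic Cassandra background rather than the NorthStar-specific procedure, data-center-aware replica placement is expressed with the NetworkTopologyStrategy replication class. The following Python sketch uses the open source DataStax driver; the contact point, keyspace name, and per-data-center replica counts are placeholders only.

      from cassandra.cluster import Cluster  # open source DataStax Python driver

      # Generic illustration; the address, keyspace, and replica counts below
      # are placeholders, not NorthStar values or the documented procedure.
      session = Cluster(["192.0.2.10"]).connect()
      session.execute(
          "ALTER KEYSPACE example_keyspace WITH replication = "
          "{'class': 'NetworkTopologyStrategy', 'dc1': 2, 'dc2': 1}"
      )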

    For more information, see Configuring the Cassandra Database in a Multiple Data Center Environment in the NorthStar Controller Getting Started Guide.

  • NorthStar Multilayer Support Using Open ROADM Interface

    In the past, NorthStar supported the Juniper Networks proNX Service Manager product as a transport controller. At that time, the proNX Service Manager used a TE interface to integrate with NorthStar. The proNX Service Manager (now called proNX Optical Director) no longer uses the TE interface, so NorthStar has added support for the Open ROADM interface in order to continue to integrate with the proNX Optical Director.

    For more information, see the following topics in the NorthStar Controller User Guide:

    • Multilayer Feature Overview

    • Configuring the Multilayer Feature

    • Linking IP and Transport Layers

    • Managing Transport Domain Data Display Options

    See https://www.juniper.net/documentation/product/en_US/pronx-optical-director for Juniper Networks proNX Optical Director documentation.