Component-specific and service-specific NorthStar settings previously maintained in the northstar.cfg file are now maintained in an internal cache and are configurable using the NorthStar CLI. The NorthStar CLI is very similar to the Junos OS CLI. Certain bootstrap and infrastructure configuration settings continue to be maintained in the northstar.cfg file.
To launch the NorthStar CLI:
[root@ns]# /opt/northstar/utils/cmgd_cli
root@ns>
The high-level command categories include:
root@ns1# set northstar ?
Possible completions:
> analytics                  General configuration parameters related to analytics
> config-server              Config Server run time parameters
> mladapter                  General configuration parameters related to ML Adapter. Common configuration parameters like amqp or database are taken from amqpSettings, but can be overridden for MLAdapter.
> netconfd                   General configuration parameters related to netconfd
> path-computation-server    Path computation server run time parameters
> peer-engineering           General configuration parameters for EPE and IPE
> programmable-rpd-client    General configuration parameters related to the PRPD client
> system
> topology-server            General configuration parameters related to the Topology Server. Common configuration parameters like amqp or database are taken from amqpSettings, but can be overridden for the Topology Server.
See Configuring NorthStar Settings Using the NorthStar CLI in the NorthStar Getting Started Guide for more information.
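Because the CLI is Junos-like, configuration changes are made with set commands and then committed. The session below is a sketch only: the parameter shown is one documented later in these release notes, and Junos-style commit behavior is assumed.

```shell
# Launch the NorthStar CLI (path documented above).
/opt/northstar/utils/cmgd_cli

# Inside the CLI. The parameter shown here is taken from the
# Segment Routing section of these notes; the commit step is an
# assumption based on the CLI's Junos-like behavior.
# root@ns# set northstar path-computation-server lsp-to-path-computation-instance lsp-request-discriminator-SR-test instance-type SRPCServer
# root@ns# commit
```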
Remote Server for NorthStar Planner
You can install NorthStar Controller with a remote Planner server (a server separate from the NorthStar application server) to distribute the NorthStar Operator and NorthStar Planner server loads. This also helps ensure that the processes of each do not interfere with those of the other. Both the web Planner and the desktop Planner application are then run from the remote server. You must still log in from the NorthStar Controller web UI login page.
Using a remote server for NorthStar Planner does not make NorthStar Planner independent of NorthStar Controller. As of now, there is no standalone Planner.
We recommend using a remote Planner server if any of the following are true:
Your network has more than 250 nodes
You typically run multiple Planner users and/or multiple concurrent Planner sessions
You work extensively with Planner simulations
Install and set up the remote Planner server after you have successfully installed NorthStar and run the net_setup.py setup utility. On the remote Planner server, run the install-remote_planner.sh installation script followed by the setup_remote_planner.py setup utility. These two programs configure both the application server and the remote Planner server and ensure the two servers can communicate. For HA cluster networks, there is one remote Planner server for the entire cluster, configured to use the VIP address of the cluster for communication with the application servers. You run the setup_remote_planner.py setup utility on the remote Planner server once for each node in the cluster, ending with the active node.
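Sketched as a shell sequence, the remote-server setup might look like the following. The script names come from these notes, but their location depends on where you extracted the installation bundle, so the path here is an assumption.

```shell
# On the remote Planner server, after NorthStar has been installed and
# net_setup.py has been run on the application server.
cd /path/to/northstar_bundle        # hypothetical location of the bundle
./install-remote_planner.sh         # install the remote Planner software
./setup_remote_planner.py           # configure both servers so they
                                    # can communicate

# HA clusters: one remote Planner server serves the whole cluster,
# using the cluster VIP. Run setup_remote_planner.py once for each
# node in the cluster, ending with the active node.
```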
See Using a Remote Server for NorthStar Planner in the NorthStar Getting Started Guide for more information.
Interactive Simulation in Web Planner
Interactive simulation allows you to specify the nodes, links, and facilities for which you want to run failure simulation, and see how the network would be impacted. This is different from exhaustive failure simulation for which you use a different tool in the web Planner. To run interactive simulation, enter the new Simulation Mode from the left side of the upper tool bar, after you have opened a network or session. The network information table tabs and tools change to support interactive simulation:
Tabs in which you select or deselect the elements you want to fail by clicking the corresponding table rows. At the bottom of the network information table for these tabs, click Run to complete the simulation and view the changes both in the topology map and in the network information table. Click Reset Simulation to start over.
Optional tabs in which you can see the changes resulting from the simulated failures.
You can download reports in .csv format for all simulation data. You can also view resulting changes in the topology map.
Traffic Aggregation in Web Planner
Traffic aggregation is now available in the NorthStar Planner web UI. In the Traffic Aggregation window, you select the traffic aggregation parameters that suit your purpose. For example, you can choose which types of traffic to include (interface, tunnel, demand), what range of dates you want to cover, and what aggregation series type you want to use (hour of day, time series, time series-hourly). The traffic aggregation process on the server requests the analytics database to aggregate the performance data according to the selections you provided, and the data are stored to the corresponding traffic files. The generated traffic results are optionally displayed in the Planner network information table. Note that performing data collection in the NorthStar Controller using Network Archive, LDP Collection, and SNMP Collection tasks is a prerequisite.
See Web Planner Traffic Aggregation in the NorthStar Planner Web UI Guide for more information.
Database for Network and Session in Web Planner
In this release, the web-based NorthStar Planner moves from the file system to a database for the storage and management of networks and sessions. A database has advantages in accessibility, preservation of data, and support for NorthStar’s future microservices-based direction. Planner working sessions still leverage the file system implementation, but instead of separate directories for input data (specs directory) and output data (sessions directory), all of the data is now consolidated in the sessions directory.
The change to a database is largely transparent to users, but there are some special notes for users who are upgrading from an earlier release to NorthStar 6.1.0. There are two options for migrating pre-6.1.0 Planner networks to the database from the file system:
You can use the Import Network Wizard in the Web Planner UI. The wizard can step you through the process of uploading a tarred network from your local machine.
Alternatively, you can copy your existing network directory containing spec files to the NorthStar server (under /opt/northstar/data/specs), use the File Browser in the web Planner UI to open a spec file (which starts a session), and then save the session as a network to create a new network entry in the database. You can reach the File Browser by clicking the More Options icon (three vertical dots) in the upper right corner of the Planner window and selecting Browse Files. Open the specs directory. A spec file that can be launched as a network displays a launch icon beside it when you hover your mouse over the file name.
Be aware of the following caveats regarding migrating existing networks to the database:
Be sure the directory and file permissions make the contents readable by the pcs user.
The directory needs to be flat; NorthStar is unable to support nested directories at this time.
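As a sketch, the copy described above might look like this. The network directory name is hypothetical, and ownership by the pcs user is shown as one way to satisfy the readability requirement.

```shell
# Copy an existing, flat network directory of spec files to the
# NorthStar server ("my_network" and the server hostname are
# hypothetical names used for illustration).
scp -r ./my_network root@northstar-server:/opt/northstar/data/specs/

# On the NorthStar server: ensure everything is readable by the pcs
# user. Ownership is one way to do it; read permission is what matters.
chown -R pcs:pcs /opt/northstar/data/specs/my_network
```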
Auto-Save and Restore Feature in Web Planner
Session data is intended to last as long as the session remains open. However, it can happen that a session is interrupted by some sort of failure. The auto-save feature prevents the loss of session data in such circumstances by serving as a recovery mechanism. Auto-save checks for changes to the network model every five minutes when there is an attached session. If a change is detected, the session data is automatically saved to the database, separately from the network data.
To restore from an auto-saved session, click the network icon (world) in the upper right corner of the Planner window for a drop-down list of saving and closing options. Select Restore Network. This action overwrites the current network with the last auto-saved network. A dialog box is displayed listing the timestamp of the current data and the timestamp of the last auto-saved data, so you can compare and be sure of which data you want to keep. Be aware that proceeding with the restore action from the dialog box means the current data would be lost and you would not be able to undo the action.
Link Latency, SRLGs, and Affinities Included in Network Archives
Link delay, SRLG, and affinity information available in the NorthStar Controller is now included in network archives and made available in NorthStar Planner (both the desktop application and the web UI). This better supports offline modeling of NorthStar Controller behavior.
NorthStar Controller Tile Map Improvements
This release features the following tile map improvements:
In the NorthStar Controller topology settings, you can now select from several tile map providers. The standard “NorthStar” map is bundled with NorthStar and served locally. All other maps listed are served by a tile provider and require an internet connection from your client (web browser). We recommend that you explore the map styles to find the one that best suits your needs.
You can also now add your own tile map provider by supplying a user-defined JSON file.
Zooming is now faster and smoother with the new seamless zoom feature, providing an improved user experience.
Diagnostics Manager
The Diagnostics Manager allows you to run CLI commands on the routers in the network from the NorthStar Controller UI without manually logging in to the routers. You can select the routers, select the commands, specify variable command parameters, execute the commands, and view or save the results. This provides a unified way to manage ping and traceroute results and is a very useful tool for troubleshooting. Juniper, Cisco, Alcatel, and Huawei command sets are provided by default, and you can add other vendor command sets as needed.
You can access Diagnostics from Applications > Diagnostics, from the topology map, or from the network information table. See Diagnostics Manager in the NorthStar Controller User Guide (in the Troubleshooting section) for more information.
Topology Filter
The topology filter service allows you to limit the nodes appearing in your NorthStar topology to a subset of the nodes in your network. This capability might be important if your network contains more nodes than your NorthStar license covers and you want to control which nodes NorthStar recognizes. You might also want to filter out nodes that are not important for traffic engineering management, such as aggregation-layer nodes or route reflectors. The topology filter service is only available in NorthStar installations where BMP (as opposed to NTAD) is the topology acquisition method.
In the web UI, access the topology filter by navigating to Administration > Topology Filter where you can create a series of rules, each one consisting of the field to search on (condition field), the value to look for (condition value), and the action to take if the value is matched (action). The rules are applied in sequence order, the results are displayed in a table, and the topology is updated accordingly.
See Topology Filter in the NorthStar Controller User Guide for more information.
Ingress Peer Engineering (IPE)
The goal of Ingress Peer Engineering (IPE) in NorthStar is to influence the ingress links at which traffic enters the NorthStar-managed network from other domains, in order to steer traffic away from congested links. To do that, you configure a BGP policy to be applied to an ingress ASBR. The policy (conditions and actions) is inserted as the first item in the export list to ensure the policy is applied. You can have one policy per ingress ASBR, with support for multiple terms (rules) within the policy. Conditions can include route filters on prefixes (you can specify different prefixes for each term); you define a route filter list which is then referenced in the condition. Conditions can also include regular expressions on AS paths.
Actions can include:
Prepending of the AS path with a local AS number. This results in diverting traffic away from the ingress link, but does not influence where the traffic goes instead. With this action, the shortest AS path is preferred.
Multi-Exit Discriminator (MED). MED allows you to influence the choice of link for incoming traffic. This action prefers the path with the lowest MED metric.
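For illustration only, a Junos-style export policy combining the two action types might look like the sketch below. The policy, term, and route-filter-list names are hypothetical, and the statements NorthStar actually generates may differ.

```
policy-options {
    route-filter-list IPE-PREFIXES {          /* hypothetical name */
        10.1.0.0/16 orlonger;
    }
    policy-statement IPE-EXPORT {             /* inserted first in the export list */
        term divert-traffic {
            from {
                route-filter-list IPE-PREFIXES;
            }
            then {
                as-path-prepend "65000";      /* prepend local AS to deflect traffic */
                metric 200;                   /* MED: lowest value is preferred */
                accept;
            }
        }
    }
}
```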
The NorthStar web UI supports creating IPE policies, viewing IPE policy traffic, and IPE demand report generation.
As of this release, NorthStar does not automatically apply the BGP policies based on traffic threshold crossings; you must apply them manually, through the REST API.
See NorthStar Ingress Peer Engineering in the NorthStar Controller User Guide for more information.
Support for Anycast Groups
You can now visualize anycast groups in the NorthStar Controller UI:
There is a tab in the network information table for Anycast Groups.
Click an anycast group in the table to highlight the group in the topology map.
Anycast groups are derived from the node prefixes and are therefore read-only in NorthStar; you cannot add, modify, or delete them.
Anycast group support in this release also includes the ability to add an anycast group SID as a loose hop for an LSP. In the Provision LSP window (Path tab), when you are defining required loose hops, you can see available anycast groups in the drop-down options.
Anycast group support will continue to evolve in future releases.
SNMP Enhancements
The following SNMP-related enhancements have been added to NorthStar Release 6.1.0:
NorthStar now supports the SNMPv3 user-based security model (USM) for data collection and device connectivity testing. You can configure device profiles with SNMPv3 parameters, including V3 authentication (None, MD5, SHA-1) and V3 privacy (None, DES, 3DES, or AES with 128-bit encryption only). See Device Profile and Connectivity Testing for more information.
When you create an SNMP data collection task, you can now opt to collect Class of Service (CoS) data. CoS data is not collected unless you enable it by clicking the check box in the Create New Task window.
Additional OIDs to support Huawei devices are now included in SNMP collection tasks.
The following metric constraints (if not overridden in NorthStar) are now supported:
Used as Routing Method (yes/no)
Hop count supported (corresponds to constant in the NorthStar web UI)
Path delay metric supported (corresponds to delay in the NorthStar web UI)
Segment ID (SID) depth
SID depth is always minimized in the dedicated SR path computation engine.
Segment Routing Enhancements
The following Segment Routing (SR) enhancements are introduced in NorthStar 6.1.0:
In addition to the PCC-wide MSD (maximum SID depth), the per-LSP SID depth from RFC 8664 is enforced by the path computation engines.
SR-anycast prefixes are managed as separate resources and can be visualized in the NorthStar web UI.
A new dedicated multipath SR routing and label stack compression path computation is now available.
This dedicated path computation engine provides SR ECMP routing with label stack compression. The label stack compression is node-SID, anycast-SID, and adjacency-SID aware. The engine must be explicitly enabled from the NorthStar CLI using one of the following configurations:
set northstar path-computation-server lsp-to-path-computation-instance lsp-request-discriminator-SR-nodeSID instance-type SRPCServer
set northstar path-computation-server lsp-to-path-computation-instance lsp-request-discriminator-SR-test instance-type SRPCServer
After changing that setting, the dedicated instance needs to be restarted, using supervisorctl restart northstar_pcs:SRPCServer.
If lsp-request-discriminator-SR-nodeSID is used, the new engine is used for SR LSPs configured with “Use Node Sid For Path Computation (requires specific NS Global Config)” in the LSP design tab (useNodeSIDs in the REST data model).
If lsp-request-discriminator-SR-test is used, the new engine is used for all SR LSPs.
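Putting the enable and restart steps together, the sequence might look like this; the commit step is an assumption, based on the CLI's Junos-like behavior.

```shell
# In the NorthStar CLI:
/opt/northstar/utils/cmgd_cli
# root@ns# set northstar path-computation-server \
#     lsp-to-path-computation-instance lsp-request-discriminator-SR-test \
#     instance-type SRPCServer
# root@ns# commit    # assumed; the CLI is described as Junos-like

# Back in the Linux shell, restart the dedicated instance:
supervisorctl restart northstar_pcs:SRPCServer
```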
The default path computation engine has the following limitations:
Anycast segments are not supported
Label stack compression is not supported
The new path computation engine has the following limitations compared to the default engine:
Bandwidth constraints are ignored
Best-effort diversity is not supported
ECMP routing takes precedence over diversity constraint
All LSPs are provisioned using PCEP
Symmetric LSP pairs are not supported
Maximum delay, hop, and user-cost constraints are not supported
Analytics-based rerouting is not supported
The LSPs are not part of the global optimization
Scheduling parameters are ignored
Binding SID and color are not supported
In both engines, diversity is only considered for LSPs computed by that same engine.