
Known Issues


This section lists known issues in Juniper Networks CSO Release 5.1.0.

SD-WAN

  • The addition and deletion of mesh tags are not captured in the DVPN audit logs.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32252

  • When you add or remove any intent on the SD-WAN Policy page, a +0 is added after every element even though you selected only one element.

    Workaround: This issue does not have any functional impact. The +0s disappear when you refresh the page.

    Bug Tracking Number: CXU-32068

  • Traffic from a spoke site that has a dynamic SLA policy enabled and is connected to an MX Series cloud hub device takes asymmetric paths; that is, different paths for upstream and downstream traffic.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32506

  • On a gateway site, when there are no non-data-center departments, the SD-WAN policy deploy job may fail with the following message:

    No update of SD-WAN policy configuration on device due to missing required information.

    Workaround: There is no functional impact; the deploy job completes successfully after a non-data-center department with a LAN segment is deployed on the gateway site.

    Bug Tracking Number: CXU-31365

  • An SD-WAN policy deployment job may fail if a policy intent involves a data center department or a department without any LAN segment. This does not affect SD-WAN policy deployment for other sites.

    Workaround: Use more specific SD-WAN intents (department, or department with site) to exclude data center departments and departments without LAN segments.

    Bug Tracking Number: CXU-31313

  • In a bandwidth-optimized, hub-and-spoke topology where network segmentation is enabled, a new LAN segment that has an existing department added to it might cause a deploy to fail.

    Workaround: Delete the LAN segment and retry the deploy. If there are policy dependencies, remove the dependencies before you delete the LAN segment.

    Bug Tracking Number: CXU-25968

  • OAM configurations remain on an MX Series device that you have deactivated as a cloud hub in CSO.

    Workaround: Manually remove the configuration from the device.

    Bug Tracking Number: CXU-25412

  • If the Internet breakout WAN link of the cloud hub is not used for provisioning the overlay tunnel by at least one spoke site in a tenant, then traffic from sites to the Internet is dropped.

    Workaround: Ensure that you configure a firewall policy to allow traffic from security zone trust-tenant-name to zone untrust-wan-link, where tenant-name is the name of the tenant and wan-link is the name of the Internet breakout WAN link.
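    Such a policy can be sketched in the Junos CLI as follows; the zone names trust-tenant1 and untrust-wan1 and the policy name allow-internet are placeholders that you would replace with your tenant and WAN link names:

    ```
    set security policies from-zone trust-tenant1 to-zone untrust-wan1 policy allow-internet match source-address any
    set security policies from-zone trust-tenant1 to-zone untrust-wan1 policy allow-internet match destination-address any
    set security policies from-zone trust-tenant1 to-zone untrust-wan1 policy allow-internet match application any
    set security policies from-zone trust-tenant1 to-zone untrust-wan1 policy allow-internet then permit
    ```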

    Bug Tracking Number: CXU-21291

  • If a WAN link on a CPE device goes down, the WAN tab of the Site-Name page (in Administration Portal) displays the corresponding link metrics as N/A.

    Workaround: None.

    Bug Tracking Number: CXU-23996

  • If you delete a cloud hub that is created in Release 3.3.1, CSO does not delete the stage-2 configuration.

    Workaround: You must manually delete the stage-2 configuration from the device.

    Bug Tracking Number: CXU-25764

SD-LAN

  • At times, recall with the recovery configuration fails to revert EX2300 and EX3400 devices to the recovery configuration because some devices do not have the /var/db/scripts/events directory.

    Workaround: Keep a copy of the recovery configuration and use the load override recovery filename command to revert the devices to the required configuration.
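    For example, assuming the recovery configuration was saved to /var/tmp/recovery.conf (a placeholder path), run the following in configuration mode:

    ```
    load override /var/tmp/recovery.conf
    commit
    ```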

    Bug Tracking Number: CXU-34430

  • For an EX Series switch, the Maximum Power field on the Configuration Template page is not validated. The valid range for Maximum Power is 0 through 30 watts; the deployment fails if you specify any other value.

    Workaround: Specify a value within the range (0 through 30 watts).

    Bug Tracking Number: CXU-38850

  • ZTP of an EX Series switch fails if the switch is added behind an enterprise hub.

    Workaround: To onboard an EX Series switch behind an enterprise hub, manually configure the stage-1 configuration.

    Bug Tracking Number: CXU-38994

  • For an EX Series switch, if you enable or disable a port from the UI, the port status is reflected in the Port Chassis View and Port Grid only after approximately 5 minutes.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-37846

  • For an EX Series switch, you cannot filter or search for the device ports on the Resources > Devices > Device-Name > Ports tab.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-38564

  • If you reboot an NFX250 device, the EX Series switch behind the NFX250 device might not renew the DHCP request, and the operational status of the switch might be displayed as down.

    Workaround: On the EX Series switch, manually run the request dhcp client renew all command.

    Bug Tracking Number: CXU-39127

  • The phone-home process might not be triggered if you zeroize an EX Series switch and disable the management interface on the switch.

    Workaround: To trigger the phone-home process, run the delete chassis auto-image-upgrade command and commit the delete operation.

    Bug Tracking Number: CXU-39129

  • If you are using an EX Series switch with Junos OS Release 18.3R1.9, the Current System Users widget always displays the login time as Jan 1, 1970.

    Workaround: Upgrade the EX Series switch to Junos OS Release 18.4R2.7.

    Bug Tracking Number: CXU-38647

  • The deployment of a port profile fails if the values you have configured for the firewall filter are not supported on the device running Junos OS.

    Workaround:

    1. Edit the firewall filter.

    2. Update the values according to the supported configuration specified for a firewall filter.

    3. Redeploy the port profile.

    Bug Tracking Number: CXU-39629

  • The Chassis View page for an EX Series switch is not automatically refreshed to display the status of the newly configured ports.

    Workaround: Manually refresh the Device-Name page. Alternatively, navigate to another page in the UI and then return to the Device-Name page to view the status of the newly configured ports in the chassis view.

  • The Zero Touch Provisioning toggle button is displayed for EX4600 and EX4650 switches although these switches do not support ZTP.

    Workaround: Disable the Zero Touch Provisioning toggle button and manually configure the stage-1 configuration on the switches.

    Bug Tracking Number: CXU-41608

  • The Chassis View page for an EX Series Virtual Chassis incorrectly displays member 0 as the primary member although the Virtual Chassis was successfully provisioned without member 0, through ZTP.

    Workaround: Add an EX Series device as member 0 before provisioning the Virtual Chassis.

    Bug Tracking Number: CXU-40322

  • If you upgrade a CSO Release 5.0.3 site with an EX Series switch to CSO Release 5.1, the port profile configuration (or manual configuration of a port profile) on an already configured port may not work as expected.

    Workaround: Delete and re-create the site with an EX Series switch.

    Bug Tracking Number: CXU-41763

CSO High Availability

  • In an HA setup, some VRRs are incorrectly reported as down even though they are up and running. This problem occurs because some alarms that are raised when the VRRs go down after a power failure are not cleared even after the VRRs come back online.

    Workaround: Though this issue does not have any functional impact, we recommend that you restart the VRR to clear the alarms.

    Bug Tracking Number: CXU-31448

  • In an HA setup, deployment of NAT and firewall policies fails if the secmgt-sm pods fail to initialize after a snapshot process and remain in the 0/1 Running state.

    Workaround: Run the following curl command from the microservices VM and make sure that the secmgt-sm pods come up in the 1/1 Running state:

    curl -XPOST "https://<central-vip>/api/juniper/sd/csp-web/database-initialize" -H 'Content-Type: application/json' -H 'Accept: application/json' -H "X-Auth-Token: token"

    Bug Tracking Number: CXU-31446

  • In a multinode CSO installation, CSO workflows do not work as expected if you restart any of the three available servers. This is because of a Cassandra database issue.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41620

Security Management

  • If a cloud hub is used by two tenants, one with public key infrastructure (PKI) authentication enabled and the other with preshared key (PSK) authentication enabled, the commit configuration operation fails. This is because only one IKE gateway can point to a policy, and if you define a policy with a certificate, the preshared key does not work.

    Workaround: Ensure that the tenants sharing a cloud hub use the same type of authentication (either PKI or PSK) as the cloud hub device.

    Bug Tracking Number: CXU-23107

  • If UTM Web-filtering categories are installed manually (by using the request security utm web-filtering category install command from the CLI) on an NFX150 device, intent-based firewall policy deployment from CSO fails.

    Workaround: Uninstall the UTM Web-filtering category that you installed manually by executing the request security utm web-filtering category uninstall command on the NFX150 device and then deploy the firewall policy.

    Bug Tracking Number: CXU-23927

  • If SSL proxy is configured on a dual CPE device and the traffic path changes from one node to the other, the following issues occur:

    • For cacheable applications, if there is no cache entry, the first session might fail to be established.

    • For non-cacheable applications, the traffic flow is impacted.

    Workaround: None.

    Bug Tracking Number: CXU-25526

Site and Tenant Workflow

  • On a site with an NFX250 device and EX Series switch, the EX Series switch is not detected if there are no LAN segments.

    Workaround: Onboard the site with at least one LAN segment.

    Bug Tracking Number: CXU-38960

General

  • App Visibility functionality for NFX250 and NFX150 Hybrid WAN Managed Internet CPE devices may not work as expected because application tracking is not enabled by default.

    Workaround: Enable application tracking through the device configuration in the CSO UI. Go to Devices, select an NFX250 or NFX150 site, select Configuration > Zones > Edit Untrust Zone, select the Application-Tracking check box, and deploy the configuration.
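    The device configuration that this UI workflow deploys can be sketched as follows; the zone name untrust is an assumption based on the Untrust Zone referenced above:

    ```
    set security zones security-zone untrust application-tracking
    ```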

    Bug Tracking Number: CXU-37713

  • When a WAN link that is configured with DHCP is used as a DVPN tunnel endpoint, a change in the DHCP IP address of the WAN link causes the DVPN tunnel to be down.

    Workaround: Delete the DVPN tunnel from the Resources > Resource Name > WAN tab and create a new tunnel.

    Bug Tracking Number: CXU-36761

  • The display name field of the monitor object deleted alarm shows the UUID of the deleted site instead of the site name.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-36367

  • In next-generation firewall sites with LAN, the recall of EX2300 and EX3400 devices with the zeroize option does not work, because EX2300 and EX3400 devices do not support the zeroize option.

    Workaround: Manually clean up the EX2300 and EX3400 devices.

    Bug Tracking Number: CXU-35208

  • For Hybrid sites that use NFX150 or NFX250 CPE, you cannot use default configuration templates to configure physical interfaces, zones, or routing instances.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-35021

  • You cannot filter the device ports for SRX Series devices while adding an on-premise spoke site or while adding a switch.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32826

  • UTM Web filtering fails at times even though the Enhanced Web Filtering (EWF) server is up and online.

    Workaround: From the device, configure the EWF Server with the IP address 116.50.57.140 as shown in the following example:

    root@SRX-1# set security utm feature-profile web-filtering juniper-enhanced server host 116.50.57.140

    Bug Tracking Number: CXU-32731

  • After you do an RMA of a spoke device, the LAN segment fails to connect to the enterprise hub.

    Workaround: Reboot the spoke device.

    Bug Tracking Number: CXU-35379

  • On the Shared Objects page, if you edit a custom application or application group settings, the firewall policies or SD-WAN policies are marked as Pending Deployment even though there are no changes to the policies.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-38706

  • When you configure and deploy IPS on a firewall rule on an NFX150 device running Junos OS Release 18.2X85-D12, IDP does not detect attacks and passes the traffic if a dynamic application is configured.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-38388

  • If you create or delete a DVPN tunnel, you cannot reach the LAN interface on the SRX Series device.

    Workaround: Reboot the spoke or execute the following commands and then roll back the changes.

    • set groups dept-configuration interfaces ge-0/0/4 vlan-tagging

    • set groups dept-configuration interfaces ge-0/0/5 vlan-tagging

    Bug Tracking Number: CXU-35379

  • If you click a specific application in the Top applications widget (Resources > Sites Management > WAN tab), the Link Performance widget does not display any data.

    Workaround: You can view the data on the Monitoring > Application Visibility page or the Monitoring > Traffic Logs page.

    Bug Tracking Number: CXU-39167

  • While adding a spoke site, if you add and associate one or more departments with one or more LAN segments, the departments' VRF tables sometimes might not be created at the enterprise hub. As a result, the enterprise hub's 0/0 (default) route is missing from the spoke site departments' VRF tables.

    Workaround: Delete and redeploy the LAN segments.

    Bug Tracking Number: CXU-37770

  • The Contrail health check fails for a non-HA deployment after you run the deploy.sh script on a startup server.

    Workaround: Reboot the Contrail Analytics Node (CAN). Wait for 10 minutes and rerun the components_health_check.sh script to verify that all components are healthy. If all the components are healthy, proceed with the installation.

    Bug Tracking Number: CXU-41463

  • On a newly installed CSO setup, core files are generated in the CAN virtual machines (VMs).

    Workaround: There is no known workaround. However, to verify that the processes are running as expected, check the Contrail status in all the Docker containers.

    Bug Tracking Number: CXU-41338

  • When DVPN tunnels (GRE over IPsec tunnels) are established between a pair of SRX3XX devices that have Internet WAN links behind NAT, the GRE OAM status of the tunnels is displayed as down; therefore, the tunnels are marked as down and are not usable for traffic.

    Workaround: Disable the GRE OAM keepalive configuration to make the tunnel usable for traffic.

    Bug Tracking Number: CXU-41281

  • The health check in the CAN node fails while you run the deploy.sh script on the startup server during the HA deployment. This is because the Kafka process is inactive in one of the CAN nodes.

    Workaround:

    1. Log in to the CAN node.
    2. Run the docker restart analyticsdb analytics controller command and wait for around 10 minutes.
    3. Rerun the components_health_check.sh script on the startup server.
    4. If the CAN node components are still unhealthy, repeat steps 2 and 3.

    If all the components are healthy, then proceed with the installation.

    Bug Tracking Number: CXU-41232

  • Alarms are not generated if the date and time are not in sync with the NTP server.

    Workaround: CSO and devices must be NTP-enabled. Make sure that the CSO and device times are in sync.
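    For example, a minimal NTP configuration on a device can be sketched as follows; 10.0.0.1 is a placeholder for your NTP server address:

    ```
    set system ntp server 10.0.0.1
    ```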

    Bug Tracking Number: CXU-40815

  • UTM Web filtering is not supported in an active-active SRX Series chassis cluster. The UTM Web filter is up on only one node of the cluster; the up status depends on which node was able to set up a connection to the cloud server from the PFE.

    Workaround: None

    Bug Tracking Number: CXU-32738

  • The bootstrap process remains in the In Progress state because the phone-home server fails to receive the bootstrap completion notification from the phone-home client.

    Workaround: Reconfigure the name server and the phone-home server (https://redirect.juniper.net), and restart the phone-home client.

    Bug Tracking Number: CXU-41449

  • Signature database installation might fail for an SRX Series device, with the following error message:

    Application signature version 3229 install failed for device 4100HAEH. Error copy on device/node failed : file copy /tmp/application_groups2.xml.gz node0:/var/db/idpd/nsm-download/application_groups2.xml.gz error: put-file failed error: could not send local copy of file {primary:node0} cspuser@4100HAEH.4100HAEH

    Workaround: Run the following commands as the root user on the device shell:

    • chmod -R 777 /var/db/idpd/nsm-download

    • chmod -R 777 /var/db/appid/sec-download

    For dual CPE devices, you must run these commands on node 0 and node 1.

    Bug Tracking Number: CXU-41678

  • The firewall policy deployment fails if the system has more than 10,000 addresses.

    Workaround: In the elasticsearch.yml file, update the index.max_result_window parameter to 20000.
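    For example, the entry in elasticsearch.yml would look like the following sketch; the rest of the file is left unchanged:

    ```
    index.max_result_window: 20000
    ```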

    Bug Tracking Number: CXU-41678

  • The bootstrap job for a device remains in the In Progress state for a considerable time. This is because CSO fails to receive the bootstrap completion notification from the device.

    Workaround: If the bootstrap job is in the In Progress state for more than 10 minutes, add the following configuration to the device:

    set system phone-home server https://redirect.juniper.net

    Bug Tracking Number: CXU-35450

  • After Network Address Translation (NAT), only one DVPN tunnel is created between two spoke sites if the WAN interfaces (with link type Internet) of one of the spoke sites have the same public IP address.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41210

  • You cannot edit a device profile for an NFX150 device.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41719

  • On an SRX Series device, the deployment fails if you use the same IP address in both the Global FW policy and the Zone policy.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41259

  • The deployment of the MariaDB pod fails if you are installing CSO on an installation server or a startup server.

    Workaround: Redeploy the MariaDB pod by running the deploy script again.

    Bug Tracking Number: CXU-41734

  • In case of an AppQoE event (packet drop or latency), the application may not switch to the best available path among the available links.

    Workaround: Reboot the device.

    Bug Tracking Number: CXU-41922

  • You must have access to the Internet while you are installing CSO at the customer premises.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41920

  • The IP addresses for the Contrail Analytics Nodes (CAN) are not populated in the HAproxy service during the deployment.

    Workaround: Log in to the startup server and run the following command:

    salt -C 'G@roles:haproxy_confd' state.apply haproxy_confd saltenv='central'

    Bug Tracking Number: CXU-41914

  • You cannot delete a LAN segment in a site that is associated with an EX Series standalone switch.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41907

  • ZTP of an SRX Series device with a WAN link fails if the SRX Series device is connected to a provider data hub that was onboarded in a release earlier than CSO Release 5.1.

    Workaround:

    As a user with SP admin or OpCo admin privileges, perform the following steps:

    1. For an On-prem instance, create a custom stage-2 template with the following command:
      set policy-options community ipvpn-community members target:192.168.0.1:10
    2. Associate the stage-2 template with the provider hub.

      Note: For a SaaS instance, Juniper Networks has created the P-HUB-UPGRADE stage-2 template and associated it with the SRX as SD-WAN hub device template.

    3. Upgrade the provider data hub to CSO Release 5.1.0 by using the REST API (POST https://cso-ui/tssm/upgrade-site).

      The following is a sample for using the REST API:

      1. Obtain the UUID of the provider hub site by using the GET https://cso-ui/tssm/site/ API.

      2. Upgrade the site by using the POST https://cso-ui/tssm/upgrade-site API.
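      The two calls can be sketched with curl as follows; cso-ui and token are placeholders from the text above, the request body is not specified in these notes, and the exact payload depends on the CSO API version:

      ```
      curl -k -X GET "https://cso-ui/tssm/site/" -H "X-Auth-Token: token"
      curl -k -X POST "https://cso-ui/tssm/upgrade-site" -H "Content-Type: application/json" -H "X-Auth-Token: token" -d '<request-body>'
      ```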

    Bug Tracking Number: CXU-41861

  • While you are using a remote console for a tenant device, if you press the Up Arrow or Down Arrow key, irrelevant text (including the device name and the tenant name) appears on the console instead of the command history.

    Workaround: To clear the irrelevant text, press the Down Arrow key a few times and then press Enter.

    Bug Tracking Number: CXU-41666

  • In the case of central breakout (traffic breaking out over hub links), if the GRE tunnel on the hub device is down, the hub device may not forward return traffic from the Internet to the spoke device.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41239

  • While you are editing a tenant, if you modify the Tenant-owned Public IP Pool field under Advanced Settings (optional), the changes are not reflected after the edit tenant job completes.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41139

  • The TAR file installation of a distributed deployment fails. This issue occurs if the version of the bare-metal server that you are using is later than the recommended version.

    Workaround: You must install the python2.7-dev package before running the deploy.sh script.

    After you extract the CSO TAR file on the bare-metal server:

    1. Navigate to the /etc/apt directory and execute the following commands:

      • cp sources.list sources.list.cso

      • cp orig-sources.list sources.list

    2. Install the python2.7-dev package by running the following commands:

      • apt-get update && apt-get install python2.7-dev

      • cp sources.list.cso sources.list

    3. Navigate to the /root/Contrail_Service_Orchestration_5.1.0 folder and then run the deploy.sh script.

    Bug Tracking Number: CXU-41845

  • If you create two users with the same name but different roles (SP administrator and OpCo administrator) and delete one of the users, the Users page continues to display the name of the deleted user. This is because the Users page is not automatically refreshed.

    Workaround: Manually refresh the page.

    Bug Tracking Number: CXU-41793

  • After ZTP of an NFX Series device, the status of some tunnels is displayed as down. This issue occurs if you are using the subnet 192.168.2.0 on WAN links, which causes an internal IP address conflict.

    Workaround: Avoid using the 192.168.2.0 subnet on WAN links.

    Bug Tracking Number: CXU-41511

  • If you have installed CSO Release 5.1 on a single node and there is a power failure, the UI is not accessible even after power resumes.

    Workaround:

    1. On the infra services virtual machine (VM):
      1. Stop Kubernetes and Docker by running the service kubelet stop and service docker stop commands.

      2. Navigate to the /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt folder and take a backup of the meta.db file.

        root@k8-infra1-vm:~# cd /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/
        root@k8-infra1:/var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt# mv meta.db meta.db.bak
      3. Navigate to the /var/lib/docker folder and take a backup of the network file.

        root@k8-infra1-vm:/var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt# cd /var/lib/docker
        root@k8-infra1:/var/lib/docker# mv network network_bkp
    2. On the microservices VM:
      1. Stop Kubernetes and Docker by running the service kubelet stop and service docker stop commands.

      2. Navigate to the /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt folder and take a backup of the meta.db file.

        root@k8-microservices_1:~# cd /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/
        root@k8-microservices_1:/var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt# mv meta.db meta.db.bak
      3. Navigate to the /var/lib/docker folder and take a backup of the network file.

        root@k8-microservices_1:/var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt# cd /var/lib/docker
        root@k8-microservices_1:/var/lib/docker# mv network network_bkp
    3. Restart Kubernetes and Docker on both the infra services and microservices VMs by running the service docker start and service kubelet start commands.
    4. Navigate to the Contrail_Service_Orchestration_5.1.0 folder and run the setup_NAT_rule.sh script on the bare-metal server to enable traffic flow from outside the network.
      root@ccra-68:~/Contrail_Service_Orchestration_5.1.0/ci_cd# ./setup_NAT_rule.sh
    5. On the startup server, run the kubectl delete pods --all -n central && kubectl delete pods --all -n regional command to restart the CSO microservices.

    Bug Tracking Number: CXU-41460