
Known Issues


This section lists known issues in Juniper Networks CSO Release 4.1.0.

Audit Logs

  • For purge audit log jobs, recurrence does not work as expected.

    Workaround: Schedule separate jobs for each of the recurring instances.

    Bug Tracking Number: CXU-32608

  • The Run Now option does not work when you select it while editing a scheduled purge audit log job to run the job immediately.

    Workaround: Do one of the following:

    • Create a new job and select the Run Now option.

    • While editing a scheduled job to run immediately, instead of using the Run Now option, modify the schedule to use the current time.

    Bug Tracking Number: CXU-32604

  • The audit log does not contain job IDs for the following tasks:

    • Reboot

    • License push

    You can view the job details from the Jobs page.

    Workaround: For license push jobs, you can use the license name and the timestamp from the audit logs to view the corresponding job details from the Jobs page.

    Bug Tracking Number: CXU-29488

  • Addition and deletion of mesh tags are not captured in the DVPN audit logs.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32252

  • If remote archiving of logs fails because of an incorrect path, the job log does not state why the archiving failed. This issue occurs during an audit log purge with remote archiving.

    Workaround: Before you initiate an audit log purge with remote archiving, ensure that the remote server path is valid and the server is accessible from CSO.

    Bug Tracking Number: CXU-32454
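The path check in this workaround can be sketched as a small shell pre-flight. This is an illustrative sketch, not part of CSO: the function name is hypothetical, the check only verifies that the path is absolute, and the commented ssh probe stands in for an actual reachability test.

```shell
# Hypothetical pre-check before an audit log purge with remote archiving.
# validate_archive_path only verifies that the path is absolute; the
# commented ssh line is an assumed way to confirm the server is reachable.
validate_archive_path() {
  case "$1" in
    /*) echo valid ;;    # absolute path: acceptable as a remote archive path
    *)  echo invalid ;;  # relative path: would cause the purge job to fail
  esac
}
# ssh "user@archive-server" "test -d /var/archive"   # remote reachability probe
validate_archive_path "/var/archive/auditlogs"
validate_archive_path "var/archive"
```

Run the real reachability check from the CSO VM itself, since that is where the purge job connects from.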

  • On the Audit Logs page, the Username and Role columns do not display the actual name and the role of the user, respectively. Instead, the name of the user is displayed as Admin and role of the user is displayed as _member_.admin.

    Workaround: None.

    Bug Tracking Number: CXU-25189

AWS Spoke

  • The AWS device activation process takes up to 30 minutes. If the process does not complete in 30 minutes, a timeout might occur and you must retry the process. You do not need to download the CloudFormation template again.

    To retry the process:

    1. Log in to Customer Portal.
    2. Access the Activate Device page, enter the activation code, and click Next.
    3. After the CREATE_COMPLETE message is displayed on the AWS server, click Next on the Activate Device page to proceed with device activation.

    Bug Tracking Number: CXU-19102

CSO High Availability

  • In an HA setup, if CAN goes down because of a power outage, the contrail-database-nodemgr process in the analyticsdb container remains in the down state even after CAN comes back online.

    Workaround: Run the nodetool repair command and make sure that Cassandra is up and running.

    Bug Tracking Number: CXU-32214

  • In an HA setup, some of the virtual route reflectors (VRRs) are incorrectly reported as down even though those VRRs are up and running. This problem occurs because some of the alarms that are generated when VRRs are down after a power failure fail to be cleared even after the VRRs come back online.

    Workaround: Though this issue does not have any functional impact, we recommend that you restart the VRR to clear the alarms.

    Bug Tracking Number: CXU-31448

  • In an HA setup, deployment of NAT and firewall policies fails if secmgt-sm pods fail to initialize after a snapshot process and remain in the 0/1 Running state.

    Workaround: Run the following curl command from the microservices VM and make sure that the secmgt-sm pods reach the 1/1 Running state:

    curl -XPOST "https://<central-vip>/api/juniper/sd/csp-web/database-initialize" -H 'Content-Type: application/json' -H 'Accept: application/json' -H "X-Auth-Token: <token>"

    Bug Tracking Number: CXU-31446

  • In an HA setup, report generation may not work as expected when one of the infrastructure nodes or servers is down.

    Workaround: Recover the infrastructure node that is down and generate the reports.

    Bug Tracking Number: CXU-31443

  • In an HA setup with three load-balancer VMs, if the master load balancer goes down, one of the remaining load-balancer VMs takes over as the master. However, after the original load-balancer VM comes back up, it takes over as the master again.

    Workaround: There is no functional impact and no known workaround.

    Bug Tracking Number: CXU-15441

  • After a power failure, CAN installed on a physical server does not come back online correctly.

    Workaround: Follow these steps to restore CAN installed on a physical server:

    1. Log in to the CAN server and scp the /root/can_bkp folder to installer VM.
    2. Reimage the server.
    3. Navigate to the Contrail_Service_Orchestration_4.1.0 folder.
    4. Execute DEPLOYMENT_ENV=central ./ on installer VM until it starts deploying NTP.
    5. Execute salt '*contrail*' network.hw_addr eth0.

      csp-contrailanalytics-3.4D5UTX.central: 52:54:00:2c:a6:4d

      csp-contrailanalytics-1.4D5UTX.central: 52:54:00:2b:4f:da

      csp-contrailanalytics-2.4D5UTX.central: 52:54:00:ea:ee:67

    6. Open deployments/central/roles.conf. Search for can1, can2, and can3. Make sure that the MAC addresses listed in the output of step 5 match the hardware_address field for each of can1, can2, and can3.
    7. Open deployments/central/topology.conf, and under [TARGETS], edit the servers entry: servers = csp-contrailanalytics-1, csp-contrailanalytics-2, csp-contrailanalytics-3.
    8. Execute DEPLOYMENT_ENV=central ./
    9. Check health of CAN by executing ./
    10. To restore the data back, copy the backed up can_bkp folder from installer VM to the respective CAN servers under root/.
    11. Execute the following steps on all three CAN servers:
      1. root@sspt-ubuntu5-vm7:~# docker exec controller service cassandra stop
      2. root@sspt-ubuntu5-vm7:~# docker exec analyticsdb service cassandra stop
      3. root@sspt-ubuntu5-vm7:~# docker exec -it controller bash
      4. root@sspt-ubuntu5-vm7(controller):/var/lib/cassandra# rm -rf *
      5. root@sspt-ubuntu5-vm7(controller): exit
      6. root@sspt-ubuntu5-vm7:~# docker exec -it analyticsdb bash
      7. root@sspt-ubuntu5-vm7(analyticsdb):/var/lib/cassandra# rm -rf *
      8. root@sspt-ubuntu5-vm7(analyticsdb): exit
      9. root@sspt-ubuntu5-vm7:~# cd can_bkp/analyticsdb_old/
      10. root@sspt-ubuntu5-vm7:~/can_bkp/analyticsdb_old/cassandra# docker cp commitlog/ analyticsdb:/var/lib/cassandra
      11. root@sspt-ubuntu5-vm7:~/can_bkp/analyticsdb_old/cassandra# docker cp data/ analyticsdb:/var/lib/cassandra
      12. root@sspt-ubuntu5-vm7:~/can_bkp/analyticsdb_old/cassandra# docker cp saved_caches/ analyticsdb:/var/lib/cassandra
      13. root@sspt-ubuntu5-vm7:~# cd can_bkp/controller_old/
      14. root@sspt-ubuntu5-vm7:~/can_bkp/controller_old/cassandra# docker cp commitlog/ controller:/var/lib/cassandra
      15. root@sspt-ubuntu5-vm7:~/can_bkp/controller_old/cassandra# docker cp data/ controller:/var/lib/cassandra
      16. root@sspt-ubuntu5-vm7:~/can_bkp/controller_old/cassandra# docker cp saved_caches/ controller:/var/lib/cassandra
      17. root@sspt-ubuntu5-vm7:~# docker exec controller chown -R cassandra:cassandra /var/lib/cassandra/
      18. root@sspt-ubuntu5-vm7:~# docker exec analyticsdb chown -R cassandra:cassandra /var/lib/cassandra/
      19. root@sspt-ubuntu5-vm7:~# docker exec analyticsdb service cassandra start

      20. root@sspt-ubuntu5-vm7:~# docker exec controller service cassandra start
    12. Check health of CAN by executing ./
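The MAC cross-check in steps 5 and 6 of the procedure above can be sketched as follows. The salt output format and the hardware_address field name follow the procedure; the MAC values and the sample files under /tmp are illustrative only, not deployment data.

```shell
# Captured output of: salt '*contrail*' network.hw_addr eth0 (sample data)
cat > /tmp/salt_macs.txt <<'EOF'
csp-contrailanalytics-1.4D5UTX.central: 52:54:00:2b:4f:da
csp-contrailanalytics-2.4D5UTX.central: 52:54:00:ea:ee:67
csp-contrailanalytics-3.4D5UTX.central: 52:54:00:2c:a6:4d
EOF
# Sample fragment of deployments/central/roles.conf
cat > /tmp/roles.conf <<'EOF'
[can1]
hardware_address = 52:54:00:2b:4f:da
[can2]
hardware_address = 52:54:00:ea:ee:67
[can3]
hardware_address = 52:54:00:2c:a6:4d
EOF
# Compare the two sorted MAC sets; a match means roles.conf needs no edits.
salt_macs=$(awk '{print $2}' /tmp/salt_macs.txt | sort)
conf_macs=$(awk -F' = ' '/hardware_address/ {print $2}' /tmp/roles.conf | sort)
[ "$salt_macs" = "$conf_macs" ] && echo "MACs match" || echo "edit roles.conf"
```

On a real deployment, replace the sample files with the live salt output and the actual roles.conf.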
  • As part of the VRR recovery process after a power failure, a tenant named recovery is created to restore the VRR configuration. However, if the configuration to be recovered is large, the recovery tenant creation times out and fails even though the configuration is successfully restored to the VRR in due course.

    Workaround: No workaround is required because the configuration is usually restored to the VRR even if the recovery tenant creation times out.

    Bug Tracking Number: CXU-27197

  • In a CSO HA environment, two RabbitMQ nodes are clustered together, but the third RabbitMQ node does not join the cluster. This might occur just after the initial installation, if a virtual machine reboots, or if a virtual machine is powered off and then powered on.

    Workaround: Do the following:

    1. Log in to the installer VM.
    2. Navigate to the current deployment directory for CSO—for example, /root/Contrail_Service_Orchestration_4.1.0/.
    3. Execute the ./ command.
    4. Specify the option to recover RabbitMQ and press Enter.
    5. In the RabbitMQ dashboards for the central and regional microservices VMs, confirm that all the available infrastructure nodes are present in the cluster.

    Bug Tracking Number: CXU-12107

  • When a high availability (HA) setup comes back up after a power outage, MariaDB instances do not come back up on the VMs.

    Workaround: Perform the following steps to recover the MariaDB instances:

    1. Log in to the installer VM.
    2. Navigate to the current deployment directory for CSO; for example, /root/Contrail_Service_Orchestration_4.1.0/.
    3. Execute the sed -i "s@/var/lib/mysql/grastate.dat@/mnt/data/mysql/grastate.dat@g" recovery/components/ command
    4. Execute the ./ command.
    5. Specify the option to recover MariaDB and press Enter.

    Bug Tracking Number: CXU-20260
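The effect of the sed command in step 3 can be illustrated on a sample file. The sample file name and contents below are assumptions standing in for the recovery component script; only the substitution itself comes from the procedure above.

```shell
# Create a stand-in for the recovery component script that references the
# old grastate.dat location.
cat > /tmp/recover_sample <<'EOF'
GRASTATE=/var/lib/mysql/grastate.dat
EOF
# The same in-place substitution as step 3: repoint grastate.dat to the
# /mnt/data/mysql location. sed's s@@@ form avoids escaping the slashes.
sed -i "s@/var/lib/mysql/grastate.dat@/mnt/data/mysql/grastate.dat@g" /tmp/recover_sample
cat /tmp/recover_sample
```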


SD-WAN

  • Traffic from a spoke site that has a dynamic SLA policy enabled and is connected to an MX Series device functioning as a cloud hub device takes asymmetric paths, that is, different paths for upstream and downstream traffic.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32506

  • SLA and DVPN reports do not contain data for tenants that have bandwidth-optimized SD-WAN deployments.

    Workaround: Exclude SLA and DVPN reports when you create SD-WAN report definitions.

    Bug Tracking Number: CXU-32496

  • The firewall policy status changes to undeployed after the create DVPN and certificate renewal events even though the policy remains active on devices.

    Workaround: No workaround required as the functionality is not affected.

    Bug Tracking Number: CXU-32464

  • When there are multiple breakout rules that apply to any application (when the ANY option is selected), even if you delete one of the rules, the corresponding configuration is not removed from the device.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32380

  • Class-of-service configuration is not deployed if the gateway site has only a data center department.

    Workaround: Deploy at least one department other than the data center department on the gateway site and apply SD-WAN policies for the department.

    Bug Tracking Number: CXU-30365

  • The tenant name appears as default project when you generate a report based on the predefined template, SD-WAN Performance Report.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-31653

  • On a gateway site, when there are no non-datacenter departments, the SD-WAN policy deploy job may fail and return the following message:

    No update of SD-WAN policy configuration on device due to missing required information.

    Workaround: There is no functional impact; the deploy job completes successfully when a non-datacenter department with a LAN segment is deployed on the gateway site.

    Bug Tracking Number: CXU-31365

  • An SD-WAN policy deployment job may fail if a policy intent involves a datacenter department or a department without any LAN segment. This does not impact SD-WAN policy deployment for other sites.

    Workaround: Use more specific SD-WAN intents (department, or department with site) to exclude datacenter departments and departments without LAN segments.

    Bug Tracking Number: CXU-31313

  • In a bandwidth-optimized, hub-and-spoke topology where network segmentation is enabled, a new LAN segment that has an existing department added to it might cause a deploy job to fail.

    Workaround: Delete the LAN segment and retry the deploy job. If there are policy dependencies, remove the dependencies before you delete the LAN segment.

    Bug Tracking Number: CXU-25968

  • OAM configurations remain on an MX Series device that you have deactivated as a cloud hub from CSO.

    Workaround: Manually remove the configuration from the device.

    Bug Tracking Number: CXU-25412

  • SD-WAN policies fail to deploy if a site is not upgraded from CSO Release 4.0.2 to CSO Release 4.1.0.

    Workaround: Upgrade CSO Release 4.0.2 sites to CSO Release 4.1.0 before you deploy SD-WAN policies.

    Bug Tracking Number: CXU-32289

  • When the WAN link endpoints are of different types and if overlay tunnels are created based on matching mesh tags, the static policy for site-to-site or central Internet breakout traffic might give preference to the remote link type instead of the local link type.

    Bug Tracking Number: CXU-28358

  • On the Site SLA Performance page, applications with different SLA scores are plotted at the same coordinate on the x-axis.

    Workaround: None.

    Bug Tracking Number: CXU-19768

  • If the Internet breakout WAN link of the cloud hub is not used for provisioning the overlay tunnel by at least one spoke site in a tenant, then traffic from sites to the Internet is dropped.

    Workaround: Ensure that you configure a firewall policy to allow traffic from security zone trust-tenant-name to zone untrust-wan-link, where tenant-name is the name of the tenant and wan-link is the name of the Internet breakout WAN link.

    Bug Tracking Number: CXU-21291
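As a hedged illustration of the zone naming in this workaround, assuming a tenant named acme and an Internet breakout WAN link named wan_0 (both names are hypothetical), the firewall policy on the device might look like this:

```
set security policies from-zone trust-acme to-zone untrust-wan_0 policy allow-breakout match source-address any
set security policies from-zone trust-acme to-zone untrust-wan_0 policy allow-breakout match destination-address any
set security policies from-zone trust-acme to-zone untrust-wan_0 policy allow-breakout match application any
set security policies from-zone trust-acme to-zone untrust-wan_0 policy allow-breakout then permit
```

Substitute your actual tenant name and Internet breakout WAN link name for acme and wan_0.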

  • If a WAN link on a CPE device goes down, the WAN tab of the Site-Name page (in Administration Portal) displays the corresponding link metrics as N/A.

    Workaround: None.

    Bug Tracking Number: CXU-23996

  • If you delete a cloud hub that is created in Release 3.3.1, CSO does not delete the stage-2 configuration.

    Workaround: You must manually delete the stage-2 configuration from the device.

    Bug Tracking Number: CXU-25764

Security Management

  • UTM web filtering fails even though the Enhanced Web Filtering (EWF) server is up and online.

    Workaround: From the device, configure the EWF Server with the IP address as shown in the following example:

    root@SRX-1# set security utm feature-profile web-filtering juniper-enhanced server host

    Bug Tracking Number: CXU-32731

  • On the Active Database page in Customer Portal, the wrong installed device count is displayed. The count displayed is for all tenants and not for a specific tenant.

    Workaround: None.

    Bug Tracking Number: CXU-20531

  • If a cloud hub is used by two tenants, one with public key infrastructure (PKI) authentication enabled and the other with preshared key (PSK) authentication enabled, the commit configuration operation fails. This is because only one IKE gateway can point to one policy, and if you define a policy with a certificate, then the preshared key does not work.

    Workaround: Ensure that the tenants sharing a cloud hub use the same type of authentication (either PKI or PSK) as the cloud hub device.

    Bug Tracking Number: CXU-23107

  • If UTM Web-filtering categories are installed manually (by using the request security utm web-filtering category install command from the CLI) on an NFX150 device, the intent-based firewall policy deployment from CSO fails.

    Workaround: Uninstall the UTM Web-filtering category that you installed manually by executing the request security utm web-filtering category uninstall command on the NFX150 device, and then deploy the firewall policy.

    Bug Tracking Number: CXU-23927

  • If SSL proxy is configured on a dual CPE device and if the traffic path is changed from one node to another node, the following issue occurs:

    • For cacheable applications, if there is no cache entry the first session might fail to establish.

    • For non-cacheable applications, the traffic flow is impacted.

    Workaround: None.

    Bug Tracking Number: CXU-25526

  • The UTM policy configuration is not deployed on an SD-WAN site with the SRX device model SRX345-DUAL-AC.

    Workaround:

    1. Add the SRX345-DUAL-AC device model to the schema file.


      In the schema-svc docker, the schema file is available at /opt/csp-schema-data/*configuration.json.

    2. Restart the pod.

    Bug Tracking Number: CXU-25706

Site and Tenant Workflow

  • Status of an overlay link is not always updated in the WAN tab of the Site page.

    Workaround: Reload the page to refresh the link status information.

    Bug Tracking Number: CXU-32812

  • For sites that were provisioned in Release 4.0.2 and upgraded to 4.1.0, the Create and Delete threshold values for DVPN tunnels are shown as undefined in the WAN tab of the Site page.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32753

  • After a site upgrade on an NFX150-S1E device, the ge-1/0/8 interface fails to initialize.

    Workaround: On the device, configure the following statement: set vmhost virtualization-options interfaces ge-1/0/8.

    Bug Tracking Number: CXU-32747

  • For dual CPE sites activated in Release 4.1.0 and for older sites that were activated in Release 4.0.2, the Sites column on the Alarms page does not display the site name. The site name, however, is shown in the Description column and in the detailed alarm view.

    Workaround: There is no functional impact.

    Bug Tracking Number: CXU-32681

  • For centralized deployments, the site status is shown as down even though there is no impact to traffic and the VNF is operational.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32663

  • Site upgrade job remains in the In Progress state.

    Workaround: Before you run the site upgrade job, run the request system storage cleanup command to clean up the device.

    Bug Tracking Number: CXU-32643

  • WAN links for an NFX250 site that is upgraded to Release 4.1.0 and has local breakout enabled appear in red in the WAN tab of the Site page.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32450

  • Sometimes, sites created with PKI authentication may fail to activate.

    Workaround: Delete the site by

    Bug Tracking Number: CXU-32350

  • After a revert action following an upgrade failure or backup restore, the user is unable to onboard tenants. This problem occurs because, sometimes after the revert action, csp-service-lookup fails because flannel is unable to provide a subnet lease.

    Workaround: Run service flanneld restart on all microservices VMs.

    Bug Tracking Number: CXU-32110

  • A dual CPE site is shown as down after a site upgrade.

    Workaround: Delete and re-add the site.

    Bug Tracking Number: CXU-31941

  • Porting of cloud hub sites to tenants fails if the cloud hub site names exceed 15 characters.

    Workaround: Ensure that cloud hub site names do not exceed 15 characters even though you can have cloud hub site names of up to 256 characters in the global instance.

    Bug Tracking Number: CXU-28078

  • During site activation, activation of NFX250 dual CPE devices connected to an MX Series cloud hub device may fail with the following error message: No existing device_initiated device connection.

    Workaround: Retry the failed ZTP job from the administration portal.

    Bug Tracking Number: CXU-27902

  • After a site upgrade, the status of policies that are associated with the site appears as pending deployment even though they are already deployed.

    Workaround: Trigger a policy deployment job to deploy the policies. CSO does not deploy the policies unless there are updates to the policies, but the status of the policies is appropriately updated after you run a deployment job.

    Bug Tracking Number: CXU-27528

  • SLA profiles created by a tenant are not deleted when the tenant is deleted.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-27054

  • If you create a new tenant with the name of a tenant that was deleted, certain inconsistencies such as policy deployment failure are noticed.

    Workaround: When you create a tenant, ensure that you do not use the same name as that of a deleted tenant.

    Bug Tracking Number: CXU-26886

  • Site upgrade for hub sites that were created using a custom device profile or a cloned device profile is incomplete.

    Workaround:

    • After the upgrade, access the tssm core Docker container by entering the following command: docker exec -it docker-name bash

    • In the container, run the following command: root@csp:/# cd /opt/meta_data/

    • From /opt/meta_data, run cp SRX_Advanced_SDWAN_HUB_option_1_upgrade.yaml custom_device_profile_upgrade.yaml

    Bug Tracking Number: CXU-26532

  • The tenant delete operation fails when CSO is installed with an external Keystone.

    Workaround: You must manually delete the tenant from the Contrail OpenStack user interface.

    Bug Tracking Number: CXU-9070

  • If you try to activate a branch SRX Series device with the factory-default configuration, the stage-1 configuration commit might fail when there are active DHCP server bindings on the device. This is because of the default DHCP server settings present in factory-default configuration.

    Workaround: When you are pre-staging the CPE device for activation, remove the DHCP server-related configuration from the device by executing the following commands on the Junos OS CLI:

    delete system services dhcp-local-server group jdhcp-group interface fxp0.0
    delete system services dhcp-local-server group jdhcp-group interface irb.0

    Bug Tracking Number: CXU-13446

  • In some cases, if automatic license installation is enabled in the device profile, the license might not be installed on the CPE device after ZTP is complete, even though the license key is configured successfully.

    Workaround: Reinstall the license on the CPE device by using the Licenses page on the Administration Portal.

    Bug Tracking Number: PR1350302.

  • For a tenant, LAN segments with overlapping IP prefixes across sites are not supported.

    Workaround: Create LAN segments with unique IP prefixes across sites for the tenant.

    Bug Tracking Number: CXU-20494

  • When the primary and backup interfaces of the CPE device use the same WAN interface of the hub, the backup underlay might be used for Internet or site-to-site traffic even though the primary links are available.

    Workaround: Ensure that you connect the WAN links of each CPE device to unique WAN links of the hub.

    Bug Tracking Number: CXU-20564

  • After you configure a site, you cannot modify the configuration either before or after activation.

    Workaround: None.

    Bug Tracking Number: CXU-21165

  • On an NFX250 device, if you disable (detach) a failed service successfully and then try to delete the site, the site is not deleted.

    Workaround: None.

    Bug Tracking Number: CXU-24355

  • If you try to activate a site with an MPLS link by using DHCP, the default route pointing to the MPLS gateway is added to the hub device, which results in Internet traffic from the hub taking the MPLS link.

    Workaround: None.

    Bug Tracking Number: CXU-24666

  • If you trigger the tenant creation workflow, the tenant might be displayed in the CSO GUI even before the job is completed. If you then try to trigger workflows for that tenant, the subsequent jobs fail because the tenant creation job is not completed.

    Workaround: Wait for the tenant creation job to complete successfully before triggering any workflows for the tenant.

    Bug Tracking Number: CXU-24783

  • The Configure Site operation fails if you import a cloud hub with a name that is different from the name used when the cloud hub was onboarded for the global service provider.

    Workaround: While you are importing a cloud hub, specify the same name that is used while onboarding a cloud hub for a global service provider.

    Bug Tracking Number: CXU-25740

  • You cannot configure a site with dual CPE devices if WAN links are used exclusively for local breakout traffic.

    Workaround: While you are creating a site and enabling the link for local breakout, instead of selecting the Use only for breakout traffic option, select Use for breakout & WAN traffic. Also, while you are configuring a site ensure that the WAN link is connected to a hub.

    Bug Tracking Number: CXU-25776

  • The sites performance report does not show any data if you select all sites for the report.

    Workaround: Select specific sites for generating the site performance report.



  • Status of the pods appear as Unknown or Pending instead of Running.

    Workaround: Run the reinitialize_pods.py script. The script calculates the pod count per node and, if the pods are not properly distributed across the nodes, deletes and redeploys the affected services, pods, and deployments.

    Bug Tracking Number: CXU-32574

  • During CSO installation from the installer UI, infrastructure services deployment fails and returns an error message.

    Workaround: Add the following entry to /etc/hosts on the installer VM:

    installer-IP installervm your-local-domain

    After you add this entry, click the Retry button.

    Bug Tracking Number: CXU-32721
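The hosts-file step can be sketched as an idempotent shell snippet. A temporary file stands in for /etc/hosts, and the IP address and domain below are placeholder values; substitute your installer VM's IP address and local domain.

```shell
# Stand-in for /etc/hosts on the installer VM (safe to run anywhere).
HOSTS=/tmp/hosts.test
echo "127.0.0.1 localhost" > "$HOSTS"
# Placeholder values: replace with the real installer IP and local domain.
ENTRY="192.0.2.5 installervm installervm.example.local"
# Append only if the entry is not already present, so reruns are harmless.
grep -qF "$ENTRY" "$HOSTS" || echo "$ENTRY" >> "$HOSTS"
tail -1 "$HOSTS"
```

On the installer VM itself, point HOSTS at /etc/hosts and run the snippet as root before clicking Retry.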

  • ZTP for SRX300 line (SRX3xx) devices may fail during the default trust certificate installation.

    Workaround: Because default trust certificates are used for the application firewall, which is not a supported feature on SRX300 and SRX320 devices, disable installation of default trust certificates in the device template for SRX300 and SRX320 devices.

    For SRX340 and SRX345 devices, retry the failed ZTP job. If the application firewall is not required, you can also consider disabling the installation of default trust certificates for SRX340 and SRX345 devices.

    Bug Tracking Number: CXU-32627

  • LAN segments added after a site is activated are not monitored for alarm events. Because of this, link down events for the LAN port are not reported by CSO.

    Workaround: Add LAN segments while you create a site.

    Bug Tracking Number: CXU-32508

  • Information related to deleted VMs remains on an ESXi server.

    Workaround: Before you install CSO on an ESXi server, manually delete any VM folder that is available under /vmfs/volumes/datastore/vm_folder.

    Bug Tracking Number: CXU-32337
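The manual cleanup can be sketched as follows; a directory under /tmp stands in for the real /vmfs/volumes/datastore/vm_folder path on the ESXi server, and the datastore and folder names are illustrative.

```shell
# Stand-in for the ESXi datastore path (illustrative names, safe to run anywhere).
DATASTORE=/tmp/vmfs/volumes/datastore1
mkdir -p "$DATASTORE/old-cso-vm"   # simulate a folder left over from a deleted VM
# Remove the leftover VM folder before installing CSO on the ESXi server.
rm -rf "$DATASTORE/old-cso-vm"
ls -A "$DATASTORE" | wc -l          # datastore stand-in is now empty
```

On the real ESXi server, run the rm against /vmfs/volumes/<datastore>/<vm_folder> for each leftover folder.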

  • When you delete a tenant in CSO without deleting the corresponding virtual user account in the JIMS server, JIMS keeps attempting CSO authorization for the deleted tenant. Authorization failures are recorded in the CSO audit logs. This causes spamming of the CSO audit logs and might cause deterioration of database performance.

    Workaround: Before you delete a tenant from CSO, delete information specific to that tenant from JIMS.

    Bug Tracking Number: CXU-32315

  • After a power shutdown of the servers, SD-WAN site configuration fails because of issues with ArangoDB.

    Workaround: Delete the failed site and retry the site configuration.

    Bug Tracking Number: CXU-32169

  • If a restore operation fails, subsequent restore attempts, even from a healthy backup, fail and return an error message.

    Workaround:

    1. Run a health check: cso_backuprestore -b health check.
    2. Fix any components that are in stopped or failing state.
    3. When CSO is in healthy state, run the restore operation again.

    Bug Tracking Number: CXU-32064

  • When you configure a site for SRX Series devices, the stage-1 configuration might fail if you use the fully qualified domain name (FQDN) for NTP server configuration.

    Workaround: Use the IP address of the NTP server instead of its FQDN.

    Bug Tracking Number: CXU-31415

  • Stage-1 configuration on an NFX Series device might fail if no OAM-and-data link is configured during site configuration.

    Workaround: Delete and add back the site after configuring an OAM-and-data link.

    Bug Tracking Number: CXU-31304

  • The LAN segment state is changed to VPN attached if the LAN segment deployment failed because of network connectivity issues.

    Workaround: Delete and redeploy the LAN segment after you resolve the network connectivity issue.

    Bug Tracking Number: CXU-31039

  • Signature installation fails on some sites when you attempt to install signatures on more than a hundred sites in a single deploy job.

    Workaround: Install signatures separately on the site where the installation failed.

    Bug Tracking Number: CXU-28923

  • The Revert to Default function does not restore default APN settings if the SIM is already connected to a network with a custom APN.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-28724

  • MySQL fails to come back online after an abnormal shutdown or restart of an infrastructure VM.

    Workaround: Use the recovery utilities to recover MySQL.

    Bug Tracking Number: CXU-32046

  • Source tunnel information is missing for SD-WAN link switch events when traffic switches:

    • From the overlay tunnel toward the gateway or hub to site-to-site DVPN tunnels.

    • From site-to-site DVPN tunnels to the overlay tunnel toward the gateway or hub.

    Workaround: There is no functional impact.

    Bug Tracking Number: CXU-31714

  • Images and licenses that customers uploaded are lost during a disaster recovery.

    Workaround: Upload the images and licenses again.


  • Reverting from CSO Release 4.1.0 to Release 4.0.2 fails because the controller container on the contrail_analytics node is in the Exited state.

    Workaround:

    1. Log in to the CAN VM.
    2. Run docker ps -a.
    3. If the status of the controller container is Exited, run docker restart controller.

    Bug Tracking Number: CXU-31469

  • Import POP and tenant onboarding jobs remain in the In Progress state for a long time and then fail.

    Workaround: Clear the data files in /var/lib/zookeeper/version-2 and restart ZooKeeper.

    Bug Tracking Number: CXU-29856
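The workaround can be sketched as a short shell sequence. A stand-in directory under /tmp is used so the sketch is safe to run outside a CSO VM; on the real VM the path is /var/lib/zookeeper/version-2, and the restart command shown as a comment is an assumption about the service name.

```shell
# Stand-in for /var/lib/zookeeper/version-2 (safe to run anywhere).
ZKDIR=/tmp/zk/version-2
mkdir -p "$ZKDIR"
touch "$ZKDIR/log.100" "$ZKDIR/snapshot.99"  # simulated ZooKeeper data files
# Clear the data files, as the workaround describes.
rm -rf "$ZKDIR"/*
ls -A "$ZKDIR" | wc -l                       # directory is now empty
# service zookeeper restart                  # assumed restart command; run on the CSO VM
```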

  • If multiple sites use the same MX Series cloud hub, IPsec overlay tunnels for some of the WAN links may fail to come up and show the following error: Negotiation failed with error code NO_PROPOSAL_CHOSEN received from peer (5 times).

    Workaround: Clear the IPsec session from the connected MX Series cloud hub by executing the clear services ipsec-vpn ipsec security-associations command.

    Bug Tracking Number: CXU-27638

  • ZTP for SRX devices fails. This problem occurs if the SRX device was connected to clients on the LAN side before ZTP and has bindings that are not cleared during ZTP.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-27376

  • ZTP of an NFX250 device over PPPoE fails, and an incomplete configuration is pushed to the device.

    Workaround: Use links other than PPPoE for ZTP of NFX250 devices.

    Bug Tracking Number: CXU-27357

  • On an Ubuntu VNF spawned on an NFX250 device, the ping command to a website address (fully qualified domain name) does not work.

    Workaround:

    1. In Resource Designer, clone the existing ubuntu-fw-NFX250 template for the NFX250 device.
    2. Edit the template and ensure that offloads are disabled for the Left Interface.
    3. Click Next and complete the edit operation.

    Bug Tracking Number: CXU-24985

  • If you create VNF instances in the Contrail cloud by using Heat Version 2.0 APIs, a timeout error occurs after 120 instances are created.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-15033

  • The provisioning of CPE devices fails if all VRRs within a redundancy group are unavailable.

    Workaround: Recover the VRR that is down and retry the provisioning job.

    Bug Tracking Number: CXU-19063

  • After the upgrade, the health check on the standalone Contrail Analytics Node (CAN) fails.

    Workaround:

    1. Log in to the CAN VM.
    2. Execute the docker exec analyticsdb service contrail-database-nodemgr restart command.
    3. Execute the docker exec analyticsdb service cassandra restart command.

    Bug Tracking Number: CXU-20470

  • The load services data operation or health check of the infrastructure components might fail if the data in the Salt server cache is lost because of an error.

    Workaround: If you encounter a Salt server-related error, do the following:

    1. Log in to the installer VM.
    2. Execute the salt '*' deployutils.get_role_ips 'cassandra' command to confirm whether one or more Salt minions have lost the cache.
      • If the output returns the IP address for all the Salt minions, this means that the Salt server cache is fine; proceed to step 7.

      • If the IP address for some minions is not present in the output, this means that the Salt server has lost its cache for those minions and must be rebuilt as explained from step 3.

    3. Navigate to the current deployment directory for CSO; for example, /root/Contrail_Service_Orchestration_4.1.0/.
    4. Redeploy the central infrastructure services (up to the NTP step):
      1. Execute the DEPLOYMENT_ENV=central ./ command.
      2. Press Ctrl+c when the console output indicates that the NTP step has been reached.
    5. Redeploy the regional infrastructure services (up to the NTP step):
      1. Execute the DEPLOYMENT_ENV=regional ./ command.
      2. Press Ctrl+c when you see a message similar to the one for the central infrastructure services.
    6. Execute the salt '*' deployutils.get_role_ips 'cassandra' command and confirm that the output displays the IP addresses of all the Salt minions.
    7. Re-run the load services data operation or the health component check that had previously failed.
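
    As an illustration only, the check in step 2 can be scripted. The helper below is hypothetical (not part of CSO) and assumes a simple output format for the salt command in which each minion ID line is followed by an indented value line containing the minion's IP address when the cache is intact:

```python
import re

def minions_missing_cache(salt_output: str) -> list:
    """Return minion IDs whose entry lacks an IPv4 address.

    Hypothetical helper: assumes the salt output lists one
    'minion-id:' line followed by an indented value line that
    contains the minion's IP address when the cache is intact.
    """
    missing = []
    lines = salt_output.strip().splitlines()
    for i, line in enumerate(lines):
        if line.endswith(":") and not line.startswith(" "):
            value = lines[i + 1] if i + 1 < len(lines) else ""
            if not re.search(r"\d+\.\d+\.\d+\.\d+", value):
                missing.append(line.rstrip(":"))
    return missing

# Example: one minion returns an IP, the other does not.
sample = """\
csp-central-infravm1:
    192.0.2.11
csp-regional-infravm1:
    None
"""
print(minions_missing_cache(sample))  # ['csp-regional-infravm1']
```

    Minions reported by such a check are the ones whose cache must be rebuilt as described in steps 3 through 5.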

    Bug Tracking Number: CXU-20815

  • For an MX Series cloud hub device, if you have configured the Internet link type as OAM_and_DATA, the reverse traffic fails to reach the spoke device if you do not configure additional parameters by using the Junos OS CLI on the MX Series device.

    Workaround: Do the following:

    1. Log in to the MX Series device and access the Junos OS CLI.
    2. Find the next-hop-service outside-service-interface multiservices interface as follows:
      1. Execute the show configuration | display set | grep outside-service-interface command.
      2. In the output of the command, look for the multiservices (ms-) interface corresponding to the service set that CSO created on the device.

        The name of the service set is in the format ssettenant-name_DefaultVPN-tenant-name, where tenant-name is the name of the tenant.

        The following is an example of the command and output:

        show configuration | display set | grep outside-service-interface

        In this example, the tenant name is Acme and the multiservices interface used is ms-1/0/0.4008.

    3. After you determine the correct interface, add the following configuration on the device: set routing-instances WAN_0 interface ms-interface

      where ms-interface is the name of the multiservices interface obtained in the preceding step.

    4. Commit the configuration.
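
    As an illustration only, the lookup in step 2 can be scripted. The helper below is hypothetical (not part of CSO or Junos OS) and assumes display set output lines of the form shown in the example above, with the service-set name format described in the text:

```python
import re

def find_ms_interface(display_set_output: str, tenant: str):
    """Return the ms- interface configured as the
    outside-service-interface for the tenant's service set,
    or None if it is not found.

    Hypothetical helper: assumes 'display set'-style lines
    containing 'service-set sset<tenant>_DefaultVPN-<tenant>'
    followed by 'outside-service-interface ms-...'.
    """
    pattern = (
        rf"service-set sset{re.escape(tenant)}_DefaultVPN-{re.escape(tenant)}"
        rf"\b.*?outside-service-interface (ms-\S+)"
    )
    match = re.search(pattern, display_set_output)
    return match.group(1) if match else None

# Example line modeled on the tenant name used in the text (Acme).
sample = (
    "set services service-set ssetAcme_DefaultVPN-Acme "
    "next-hop-service outside-service-interface ms-1/0/0.4008\n"
)
print(find_ms_interface(sample, "Acme"))  # ms-1/0/0.4008
```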

    Bug Tracking Number: CXU-21818

  • In Resource Designer, if you add a VNF that does not require a password and trigger the Add VNF Manager workflow, you are asked to enter a password even though the VNF does not require it.

    Workaround: Even for VNFs that do not require a password, enter a dummy password in Resource Designer when you are creating a VNF package.

    Bug Tracking Number: CXU-21845

  • In a full mesh topology, the simultaneous deletion of LAN segments on all sites is not supported.

    Workaround: Delete LAN segments on one site at a time.

    Bug Tracking Number: CXU-21936

  • On a CSO setup with secure OAM configured, if you bring up the FortiGate VNF and then apply the license on the VNF, the VNF reboots. However, after rebooting, sometimes the VNF does not come back up.

    Workaround: To ensure that the VNF comes back up, deactivate the VNF and then reactivate it by performing the following steps:

    1. Log in to the JDM CLI of the NFX Series device and access configuration mode.
    2. Deactivate the VNF by executing the deactivate virtual-network-functions Fortinet-oob-2-Firewall command.
    3. Commit the changes by executing the commit command.
    4. Roll back the changes by executing the rollback 1 command.
    5. Commit the changes by executing the commit command.
    6. Exit the configuration mode by executing the quit command.
    7. Execute the show virtual-network-functions command and confirm that the status is Running alive, which means that the VNF is up.
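
    Consolidated, the workaround steps correspond to the following JDM CLI sequence (prompts are illustrative; substitute your VNF name for the one shown):

```
[edit]
user@jdm# deactivate virtual-network-functions Fortinet-oob-2-Firewall
user@jdm# commit
user@jdm# rollback 1
user@jdm# commit
user@jdm# quit
user@jdm> show virtual-network-functions
```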

    Bug Tracking Number: CXU-23371

  • If you are using the GUI installer to install CSO, the installation page sometimes freezes (the percentage completion on the VMs does not change) during the installation because of a REST API timeout.

    Workaround: Reload the CSO installation page in the browser, which updates the status of the installation.

    Bug Tracking Number: CXU-24471

  • When you reboot a device from the Tenant Devices or Devices pages, the reboot job fails because the connectivity is lost during the reboot.

    Workaround: Check the operational status of the device on the Tenant Devices or Devices page. During the reboot phase, the operational status of the device is Down. After the device is successfully rebooted and connectivity is restored, the operational status of the device changes to Up. You can now trigger operations on the device by using the CSO GUI.

    Bug Tracking Number: CXU-24512

  • If you are using the GUI installer to install CSO, sometimes the UI freezes during the installation and no installation progress is seen. However, the installation continues in the backend.

    Workaround: Perform the following tasks:

    1. Reload the installation UI page in the browser.

      If the UI page loads successfully, no further action is needed. If the UI page does not load, proceed to step 2.

    2. Log in to the installer VM as root.
    3. Kill the existing processes triggered by the GUI installer by executing the kill $(sudo lsof -t -i:8080) command.
    4. Navigate to the /root/cso_dl/Contrail_Service_Orchestration_4.1.0/ directory.
    5. Restart the Flask server by executing the bash command.
    6. After you see the ==== INFO Installer App initialized ===== message on the console, reload the installation UI page in the browser.

    Bug Tracking Number: CXU-24552

  • For an NFX250 device, the Ubuntu VNF service chain configuration is incorrect if you set SINGLE_SSH_TO_NFX to False and then instantiate a service.

    Workaround: None.

    Bug Tracking Number: CXU-25018

  • An error occurs while EEPROM contents for copper ports are being read.

    Workaround: None.

    Bug Tracking Number: PR1372217

  • Because of an insufficient buffer size, vSRX performs queue scheduling incorrectly and drops packets.

    Workaround: Set the buffer size to 3000 microseconds by executing the set class-of-service schedulers scheduler-name buffer-size temporal 3000 command.
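
    In configuration-statement form, the workaround corresponds to a fragment like the following (the scheduler name is a placeholder; apply the scheduler through your scheduler map as usual):

```
set class-of-service schedulers my-scheduler buffer-size temporal 3000
```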

    Bug Tracking Number: PR1361720