
Known Issues


This section lists known issues in Juniper Networks CSO Release 4.1.1.

Audit Logs

  • For the purge audit log job, recurrence does not work as expected.

    Workaround: Schedule separate jobs for each of the recurring instances.

    Bug Tracking Number: CXU-32608

  • The Run Now option does not work when you select it while editing a scheduled purge audit log job to run immediately.

    Workaround:

    • Create a new job and select the Run Now option.

      or

    • While editing a scheduled job to run immediately, instead of using the Run Now option, modify the schedule to use the current time.

    Bug Tracking Number: CXU-32604

  • The audit log does not contain job IDs for the following tasks:

    • Reboot

    • License push

    You can view the job details from the Jobs page.

    Workaround: For license push jobs, you can use the license name and the timestamp from the audit logs to view the corresponding job details from the Jobs page.

    Bug Tracking Number: CXU-29488

  • Addition and deletion of mesh tags are not captured in the DVPN audit logs.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32252

AWS Spoke

  • The AWS device activation process takes up to 30 minutes. If the process does not complete in 30 minutes, a timeout might occur and you must retry the process. You do not need to download the CloudFormation template again.

    To retry the process:

    1. Log in to Customer Portal.
    2. Access the Activate Device page, enter the activation code, and click Next.
    3. After the CREATE_COMPLETE message is displayed on the AWS server, click Next on the Activate Device page to proceed with device activation.

    Bug Tracking Number: CXU-19102

CSO High Availability

  • In an HA setup, when one of the CAN nodes is down, some of the widgets do not show link metrics.

    Workaround: Restart the CAN node to view link metrics for all widgets.

    Bug Tracking Number: CXU-30813

  • In an HA setup, if CAN goes down because of a power outage, the contrail-database-nodemgr service in the analyticsdb container remains in the down state even after CAN comes back online.

    Workaround: Run nodetool repair and make sure that Cassandra is up and running.
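    A minimal sketch of this workaround, assuming the analyticsdb container name used elsewhere in this section:

    docker exec -it analyticsdb bash
    nodetool repair
    nodetool status
    exit

    nodetool repair repairs the Cassandra keyspaces on the node, and nodetool status confirms that the nodes report UN (Up/Normal).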

    Bug Tracking Number: CXU-32214

  • In an HA setup, some of the virtual route reflectors (VRRs) are incorrectly reported as down even though those VRRs are up and running. This problem occurs because some of the alarms that are generated when VRRs are down after a power failure fail to be cleared even after the VRRs come back online.

    Workaround: Though this issue does not have any functional impact, we recommend that you restart the VRR to clear the alarms.

    Bug Tracking Number: CXU-31448

  • In an HA setup, with three load-balancer VMs, if the primary load balancer goes down, one of the remaining load-balancer VMs is switched over as the primary. However, after the original load-balancer VM comes up, it is switched over as the primary again.

    Workaround: There is no functional impact and no known workaround.

    Bug Tracking Number: CXU-15441

  • After a power failure, CAN installed on a physical server does not come back online correctly.

    Workaround:

    Follow these steps to restore CAN installed on a physical server:

    1. Log in to the CAN server and copy (by using scp) the /root/can_bkp folder to the installer VM.
    2. Reimage the server.
    3. Navigate to the Contrail_Service_Orchestration_4.1.1 folder.
    4. Execute DEPLOYMENT_ENV=central ./deploy_infra_services.sh on the installer VM and let it run until it starts deploying NTP.
    5. Execute salt '*contrail*' network.hw_addr eth0 to list the MAC address of each CAN node. For example:

      csp-contrailanalytics-3.4D5UTX.central: 52:54:00:2c:a6:4d

      csp-contrailanalytics-1.4D5UTX.central: 52:54:00:2b:4f:da

      csp-contrailanalytics-2.4D5UTX.central: 52:54:00:ea:ee:67

    6. Open deployments/central/roles.conf and search for can1, can2, and can3. Make sure that the MAC addresses listed in step 5 match the hardware_address field for each of can1, can2, and can3.
    7. Open deployments/central/topology.conf, and edit the servers entry under [TARGETS]: servers = csp-contrailanalytics-1, csp-contrailanalytics-2, csp-contrailanalytics-3.
    8. Execute DEPLOYMENT_ENV=central ./deploy_infra_services.sh.
    9. Check the health of CAN by executing ./components_health.sh.
    10. To restore the data, copy the backed-up can_bkp folder from the installer VM to the respective CAN servers under /root/.
    11. Execute the following steps on all three CAN servers:
      1. root@sspt-ubuntu5-vm7:~# docker exec controller service cassandra stop
      2. root@sspt-ubuntu5-vm7:~# docker exec analyticsdb service cassandra stop
      3. root@sspt-ubuntu5-vm7:~# docker exec -it controller bash
      4. root@sspt-ubuntu5-vm7(controller):/var/lib/cassandra# rm -rf *
      5. root@sspt-ubuntu5-vm7(controller): exit
      6. root@sspt-ubuntu5-vm7:~# docker exec -it analyticsdb bash
      7. root@sspt-ubuntu5-vm7(analyticsdb):/var/lib/cassandra# rm -rf *
      8. root@sspt-ubuntu5-vm7(analyticsdb): exit
      9. root@sspt-ubuntu5-vm7:~# cd can_bkp/analyticsdb_old/cassandra
      10. root@sspt-ubuntu5-vm7:~/can_bkp/analyticsdb_old/cassandra# docker cp commitlog/ analyticsdb:/var/lib/cassandra
      11. root@sspt-ubuntu5-vm7:~/can_bkp/analyticsdb_old/cassandra# docker cp data/ analyticsdb:/var/lib/cassandra
      12. root@sspt-ubuntu5-vm7:~/can_bkp/analyticsdb_old/cassandra# docker cp saved_caches/ analyticsdb:/var/lib/cassandra
      13. root@sspt-ubuntu5-vm7:~# cd can_bkp/controller_old/cassandra
      14. root@sspt-ubuntu5-vm7:~/can_bkp/controller_old/cassandra# docker cp commitlog/ controller:/var/lib/cassandra
      15. root@sspt-ubuntu5-vm7:~/can_bkp/controller_old/cassandra# docker cp data/ controller:/var/lib/cassandra
      16. root@sspt-ubuntu5-vm7:~/can_bkp/controller_old/cassandra# docker cp saved_caches/ controller:/var/lib/cassandra
      17. root@sspt-ubuntu5-vm7:~# docker exec controller chown -R cassandra:cassandra /var/lib/cassandra/
      18. root@sspt-ubuntu5-vm7:~# docker exec analyticsdb chown -R cassandra:cassandra /var/lib/cassandra/
      19. root@sspt-ubuntu5-vm7:~# docker exec analyticsdb service cassandra start

      20. root@sspt-ubuntu5-vm7:~# docker exec controller service cassandra start
    12. Check the health of CAN by executing ./components_health.sh.
  • When a high availability (HA) setup comes back up after a power outage, MariaDB instances do not come back up on the VMs.

    Workaround:

    Perform the following steps to recover the MariaDB instances:

    1. Log in to the installer VM.
    2. Navigate to the current deployment directory for CSO; for example, /root/Contrail_Service_Orchestration_4.1.1/.
    3. Execute the sed -i "s@/var/lib/mysql/grastate.dat@/mnt/data/mysql/grastate.dat@g" recovery/components/recover_mariadb.py command.
    4. Execute the ./recovery.sh command.
    5. Specify the option to recover MariaDB and press Enter.

    Bug Tracking Number: CXU-20260

SD-WAN

  • SD-WAN policies that use the Cloud-Zscaler profile fail to deploy if the Internet traffic type is not enabled in the traffic profile.

    Workaround: Create a custom cloud breakout profile and use it instead of the default cloud breakout profile in the SD-WAN policy.

    Bug Tracking Number: CXU-35901

  • Traffic from a spoke site that has a dynamic SLA policy enabled and is connected to an MX Series device functioning as a cloud hub takes asymmetric paths; that is, different paths for upstream and downstream traffic.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32506

  • The firewall policy status changes to undeployed after DVPN creation and certificate renewal events, even though the policy remains active on devices.

    Workaround: No workaround is required because functionality is not affected.

    Bug Tracking Number: CXU-32464

  • Class-of-service configuration is not deployed if the gateway site has only a data center department.

    Workaround: Deploy at least one department other than the data center department on the gateway site and apply SD-WAN policies for the department.

    Bug Tracking Number: CXU-30365

  • The tenant name appears as default project when you generate a report based on the predefined template, SD-WAN Performance Report.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-31653

  • On a gateway site, when there are no non-data center departments, the SD-WAN policy deploy job might fail and return the following message:

    No update of SD-WAN policy configuration on device due to missing required information.

    Workaround: There is no functional impact; the deploy job completes successfully when a non-data center department with a LAN segment is deployed on the gateway site.

    Bug Tracking Number: CXU-31365

  • An SD-WAN policy deployment job might fail if a policy intent involves a data center department or a department without any LAN segment. This does not impact SD-WAN policy deployment for other sites.

    Workaround: Use more specific SD-WAN intents (department, or department with site) to exclude data center departments and departments without LAN segments.

    Bug Tracking Number: CXU-31313

  • In a bandwidth-optimized, hub-and-spoke topology where network segmentation is enabled, a new LAN segment that has an existing department added to it might cause a deploy job to fail.

    Workaround: Delete the LAN segment and retry the deploy job. If there are policy dependencies, remove the dependencies before you delete the LAN segment.

    Bug Tracking Number: CXU-25968

  • OAM configurations remain on an MX Series device that you have deactivated as cloud hub from CSO.

    Workaround: Manually remove the configuration from the device.

    Bug Tracking Number: CXU-25412

  • When the WAN link endpoints are of different types and if overlay tunnels are created based on matching mesh tags, the static policy for site-to-site or central Internet breakout traffic might give preference to the remote link type instead of the local link type.

    Bug Tracking Number: CXU-28358

  • If the Internet breakout WAN link of the cloud hub is not used for provisioning the overlay tunnel by at least one spoke site in a tenant, then traffic from sites to the Internet is dropped.

    Workaround: Ensure that you configure a firewall policy to allow traffic from security zone trust-tenant-name to zone untrust-wan-link, where tenant-name is the name of the tenant and wan-link is the name of the Internet breakout WAN link.
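    The following is an illustrative sketch of such a firewall policy in the Junos CLI, assuming a hypothetical tenant named Acme and an Internet breakout WAN link named wan_0:

    set security policies from-zone trust-Acme to-zone untrust-wan_0 policy allow-breakout match source-address any
    set security policies from-zone trust-Acme to-zone untrust-wan_0 policy allow-breakout match destination-address any
    set security policies from-zone trust-Acme to-zone untrust-wan_0 policy allow-breakout match application any
    set security policies from-zone trust-Acme to-zone untrust-wan_0 policy allow-breakout then permit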

    Bug Tracking Number: CXU-21291

  • If a WAN link on a CPE device goes down, the WAN tab of the Site-Name page (in Administration Portal) displays the corresponding link metrics as N/A.

    Workaround: None.

    Bug Tracking Number: CXU-23996

  • If you delete a cloud hub that is created in Release 3.3.1, CSO does not delete the stage-2 configuration.

    Workaround: You must manually delete the stage-2 configuration from the device.

    Bug Tracking Number: CXU-25764

Security Management

  • UTM web filtering fails even though the Enhanced Web Filtering (EWF) server is up and online.

    Workaround: From the device, configure the EWF Server with the 116.50.57.140 IP address as shown in the following example:

    root@SRX-1# set security utm feature-profile web-filtering juniper-enhanced server host 116.50.57.140

    Bug Tracking Number: CXU-32731

  • On the Active Database page in Customer Portal, the wrong installed device count is displayed. The count displayed is for all tenants and not for a specific tenant.

    Workaround: None.

    Bug Tracking Number: CXU-20531

  • If a cloud hub is used by two tenants, one with public key infrastructure (PKI) authentication enabled and the other with preshared key (PSK) authentication enabled, the commit configuration operation fails. This is because only one IKE gateway can point to one policy, and if you define a policy with a certificate, the preshared key does not work.

    Workaround: Ensure that the tenants sharing a cloud hub use the same type of authentication (either PKI or PSK) as the cloud hub device.

    Bug Tracking Number: CXU-23107

  • If UTM Web-filtering categories are installed manually (by using the request security utm web-filtering category install command from the CLI) on an NFX150 device, the intent-based firewall policy deployment from CSO fails.

    Workaround: Uninstall the UTM Web-filtering category that you installed manually by executing the request security utm web-filtering category uninstall command on the NFX150 device and then deploy the firewall policy.

    Bug Tracking Number: CXU-23927

  • If SSL proxy is configured on a dual CPE device and if the traffic path is changed from one node to another node, the following issue occurs:

    • For cacheable applications, if there is no cache entry the first session might fail to establish.

    • For non-cacheable applications, the traffic flow is impacted.

    Workaround: None.

    Bug Tracking Number: CXU-25526

  • The UTM policy configuration is not deployed on an SD-WAN site with the SRX device model SRX345-DUAL-AC.

    Workaround:

    1. Add the SRX345-DUAL-AC device model to the schema file.

      Note

      In the schema-svc docker, the schema file is available at /opt/csp-schema-data/*configuration.json.

    2. Restart the pod.
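    If kubectl is available on the central microservices VM (an assumption, as is the grep pattern), you can force Kubernetes to re-create the schema-svc pod:

    kubectl get pods | grep schema
    kubectl delete pod pod-name

    where pod-name is the name of the schema-svc pod displayed in the output of the first command.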

    Bug Tracking Number: CXU-25706

Site and Tenant Workflow

  • After you do an RMA for a site, alarms for Zscaler tunnels may not work.

    Workaround: Recreate the Zscaler tunnels from Configuration > SD-WAN > Breakout Profiles > Cloud Breakout settings.

    Bug Tracking Number: CXU-36000

  • If the PKI server used by a tenant fails, you cannot renew or revoke the certificates that are used by the sites or create new sites for the tenant.

    Workaround: Delete the tenant and create a new tenant with the new PKI server credentials.

    Bug Tracking Number: CXU-35644

  • After you upgrade CSO from Release 4.0.2 to Release 4.1.1, RMA does not work for sites that are not upgraded to Release 4.1.1.

    Workaround: Delete the device from the site and add it back to the site.

    Bug Tracking Number: CXU-35049

  • After you upgrade CSO to Release 4.1.1, hybrid WAN CPEs show a major alarm.

    Workaround: Upgrade the sites that show the alarm to CSO Release 4.1.1.

    Bug Tracking Number: CXU-34162

  • For centralized deployments, the site status is shown as down. However, there is no impact to traffic and the VNF is operational.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32663

  • After a revert action following an upgrade failure or backup restore, the user is unable to onboard tenants. This problem occurs because, after the revert action, csp-service-lookup sometimes fails as flannel is unable to provide a subnet lease.

    Workaround: Run service flanneld restart on all microservices VMs.
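    For example, from the installer VM you can use Salt to restart flannel on all the microservices VMs in one pass (the minion target pattern is an assumption; adjust it to match your topology):

    salt '*microservices*' cmd.run 'service flanneld restart'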

    Bug Tracking Number: CXU-32110

  • On a site deployed with only application-specific breakout policies (that is, no breakout policy is configured for the ANY application), traffic fails when all overlay tunnels are down.

    Workaround: Deploy a breakout policy for "ANY" application.

    Bug Tracking Number: CXU-28436

  • During site activation, activation of an NFX250 dual CPE device connected to an MX Series cloud hub device might fail with the following error message: No existing device_initiated device connection.

    Workaround: Retry the failed ZTP job from the Administration Portal.

    Bug Tracking Number: CXU-27902

  • After a site upgrade, the status of policies that are associated with the site appears as pending deployment even though the policies are already deployed.

    Workaround: Trigger a policy deployment job to deploy the policies. CSO does not deploy the policies unless there are updates to them, but the status of the policies is appropriately updated after you run a deployment job.

    Bug Tracking Number: CXU-27528

  • If you create a new tenant with the name of a tenant that was deleted, certain inconsistencies such as policy deployment failure are noticed.

    Workaround: When you create a tenant, ensure that you do not use the same name as that of a deleted tenant.

    Bug Tracking Number: CXU-26886

  • The site upgrade for hub sites that were created by using a custom or cloned device profile is incomplete.

    Workaround:

    1. After the upgrade, access the tssm core docker by entering the following command: docker exec -it docker-name bash, where docker-name is the name of the tssm core docker.

    2. In the docker, go to the metadata directory by running the following command: cd /opt/meta_data/

    3. From /opt/meta_data, run cp SRX_Advanced_SDWAN_HUB_option_1_upgrade.yaml custom_device_profile_upgrade.yaml

    Bug Tracking Number: CXU-26532

  • The tenant delete operation fails when CSO is installed with an external Keystone.

    Workaround: You must manually delete the tenant from the Contrail OpenStack user interface.

    Bug Tracking Number: CXU-9070

  • If you try to activate a branch SRX Series device with the factory-default configuration, the stage-1 configuration commit might fail when there are active DHCP server bindings on the device. This is because of the default DHCP server settings present in the factory-default configuration.

    Workaround: When you are pre-staging the CPE device for activation, remove the DHCP server-related configuration from the device by executing the following commands on the Junos OS CLI:

    delete system services dhcp-local-server group jdhcp-group interface fxp0.0
    delete system services dhcp-local-server group jdhcp-group interface irb.0

    Bug Tracking Number: CXU-13446

  • In some cases, if automatic license installation is enabled in the device profile, the license might not be installed on the CPE device after ZTP is complete, even though the license key is configured successfully.

    Workaround: Reinstall the license on the CPE device by using the Licenses page on the Administration Portal.

    Bug Tracking Number: PR1350302

  • For a tenant, LAN segments with overlapping IP prefixes across sites are not supported.

    Workaround: Create LAN segments with unique IP prefixes across sites for the tenant.

    Bug Tracking Number: CXU-20494

  • When the primary and backup interfaces of the CPE device use the same WAN interface of the hub, the backup underlay might be used for Internet or site-to-site traffic even though the primary links are available.

    Workaround: Ensure that you connect the WAN links of each CPE device to unique WAN links of the hub.

    Bug Tracking Number: CXU-20564

  • After you configure a site, you cannot modify the configuration either before or after activation.

    Workaround: None.

    Bug Tracking Number: CXU-21165

  • On an NFX250 device, if you disable (detach) a failed service successfully and then try to delete the site, the site is not deleted.

    Workaround: None.

    Bug Tracking Number: CXU-24355

  • If you try to activate a site with an MPLS link by using DHCP, the default route pointing to the MPLS gateway is added to the hub device, which results in Internet traffic from the hub taking the MPLS link.

    Workaround: None.

    Bug Tracking Number: CXU-24666

  • If you trigger the tenant creation workflow, the tenant might be displayed in the CSO GUI even before the job is completed. If you then try to trigger workflows for that tenant, the subsequent jobs fail because the tenant creation job is not completed.

    Workaround: Wait for the tenant creation job to complete successfully before triggering any workflows for the tenant.

    Bug Tracking Number: CXU-24783

  • The Configure Site operation fails if you import a cloud hub with a name that is different from the name used when the cloud hub was onboarded for the global service provider.

    Workaround: While importing a cloud hub, specify the same name that was used while onboarding the cloud hub for the global service provider.

    Bug Tracking Number: CXU-25740

  • You cannot configure a site with dual CPE devices if WAN links are used exclusively for local breakout traffic.

    Workaround: While creating a site and enabling the link for local breakout, select the Use for breakout & WAN traffic option instead of the Use only for breakout traffic option. Also, while configuring the site, ensure that the WAN link is connected to a hub.

    Bug Tracking Number: CXU-25776

General

  • You cannot reboot a single node in an SRX cluster device.

    Workaround: Wait for 10 minutes and retry the operation.

    Bug Tracking Number: CXU-36844

  • SRX cluster deletion fails and returns the following error message: /var/db/scripts/event/load-recovery.slax: Permission denied. This problem occurs if load-recovery.slax is present on the device.

    Workaround: Before you delete an SRX cluster, rename the load-recovery.slax file on both devices in the cluster.

    To rename the file:

    1. Log in to the SRX cluster device.

    2. To go to /var/db/scripts/event, enter the following command:

      cd /var/db/scripts/event

    3. Rename load-recovery.slax. For example:

      mv load-recovery.slax load-recovery.slax.old

    4. Repeat these steps on both nodes.

    Bug Tracking Number: CXU-36384

  • CSO UI navigation becomes slow or unresponsive for 8 to 10 minutes when a server fails.

    Workaround: Wait for 10 minutes and retry the operation.

    Bug Tracking Number: CXU-35769

  • You cannot deploy a new VRR after CSO is upgraded to Release 4.1.1.

    Workaround: Copy the vrr, baseMs, and Baseinfra images from the CSO/artifacts directory to /var/www/html/csp_components before you try to deploy the new VRR.

    Bug Tracking Number: CXU-33967

  • The status of the pods appears as Unknown or Pending instead of Running.

    Workaround: Run the reinitialize_pods.py script. The script calculates the pod count per node and, if the pods are not properly distributed across the nodes, deletes and redeploys services, pods, and deployments.

    Bug Tracking Number: CXU-32574

  • ZTP for SRX3xx devices might fail during the default trust certificate installation.

    Workaround: Because default trust certificates are used for application firewall, which is not a supported feature on SRX300 and SRX320 devices, disable installation of default trust certificates in the device template for SRX300 and SRX320 devices.

    For SRX340 and SRX345 devices, retry the failed ZTP job. If the application firewall is not required, you can consider disabling the installation of default trust certificates for SRX340 and SRX345 devices as well.

    Bug Tracking Number: CXU-32627

  • LAN segments added after a site is activated are not monitored for alarm events. Because of this, link down events for the LAN port are not reported by CSO.

    Workaround: Add LAN segments while you create a site.

    Bug Tracking Number: CXU-32508

  • Information related to deleted VMs remains on an ESXi server.

    Workaround: Before you install CSO on an ESXi server, manually delete any VM folder that is available under /vmfs/volumes/datastore/vm_folder.
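    For example, from the ESXi host shell (where datastore is your datastore name and vm_folder is the folder of the deleted VM):

    rm -rf /vmfs/volumes/datastore/vm_folder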

    Bug Tracking Number: CXU-32337

  • When you delete a tenant in CSO without deleting the corresponding virtual user account in the JIMS server, JIMS keeps attempting CSO authorization for the deleted tenant. Authorization failures are recorded in the CSO audit logs. This causes spamming of the CSO audit logs and might cause deterioration of database performance.

    Workaround: Before you delete a tenant from CSO, delete information specific to that tenant from JIMS.

    Bug Tracking Number: CXU-32315

  • If a restore operation fails, subsequent restore attempts fail even from a healthy backup.

    Workaround:

    1. Run a health check: cso_backuprestore -b health check.
    2. Fix any components that are in stopped or failing state.
    3. When CSO is in healthy state, run the restore operation again.

    Bug Tracking Number: CXU-32064

  • When you configure a site for SRX Series devices, the stage-1 configuration might fail if you use the fully qualified domain name (FQDN) for NTP server configuration.

    Workaround: Use the IP address of the NTP server instead of its FQDN.

    Bug Tracking Number: CXU-31415

  • The LAN segment state changes to VPN attached if the LAN segment deployment fails because of network connectivity issues.

    Workaround: Delete and redeploy the LAN segment after you resolve the network connectivity issue.

    Bug Tracking Number: CXU-31039

  • Signature installation fails on some sites when you attempt to install signatures on more than a hundred sites in a single deploy job.

    Workaround: Install signatures separately on the sites where the installation failed.

    Bug Tracking Number: CXU-28923

  • The Revert to Default function does not restore default APN settings if the SIM is already connected to a network with a custom APN.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-28724

  • MySQL fails to come back online after an abnormal shutdown or restart of an infrastructure VM.

    Workaround: Run the following script and select 1 from the options:

    root@installervm:~/Contrail_Service_Orchestration_4.1.1# ./recovery.sh

    Bug Tracking Number: CXU-32046

  • Images and licenses that customers uploaded are lost during a disaster recovery.

    Workaround: Upload the images and licenses again.

    Bug Tracking Number: CXU-31533

  • Reverting from CSO Release 4.1.0 to Release 4.0.2 fails because the controller container on the contrail_analytics node is in Exited mode.

    Workaround:

    1. Log in to the CAN VM.
    2. Run docker ps -a.
    3. If the status of the controller container is Exited, run docker restart controller.

    Bug Tracking Number: CXU-31469

  • Import POP or onboard tenant jobs remain in the In Progress state for a long time and then fail.

    Workaround: Clear data files in /var/lib/zookeeper/version-2 and restart zookeeper.
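    A minimal sketch, assuming ZooKeeper runs as the zookeeper service on the affected VM (an assumption):

    service zookeeper stop
    rm -rf /var/lib/zookeeper/version-2/*
    service zookeeper start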

    Bug Tracking Number: CXU-29856

  • If multiple sites use the same MX Series cloud hub, IPsec overlay tunnels for some of the WAN links might fail to come up and show the following error: Negotiation failed with error code NO_PROPOSAL_CHOSEN received from peer (5 times).

    Workaround: Clear the IPsec session from the connected MX Series cloud hub by executing the clear services ipsec-vpn ipsec security-associations command.

    Bug Tracking Number: CXU-27638

  • ZTP for SRX devices fails. This problem occurs if the SRX device was connected to clients on the LAN side before ZTP and has bindings that are not cleared during ZTP.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-27376

  • On an Ubuntu VNF spawned on an NFX250 device, the ping command to a website address (fully qualified domain name) does not work.

    Workaround:

    1. In Resource Designer, clone the existing ubuntu-fw-NFX250 template for the NFX250 device.
    2. Edit the template and ensure that offloads are disabled for the Left Interface.
    3. Click Next and complete the edit operation.

    Bug Tracking Number: CXU-24985

  • If you create VNF instances in the Contrail cloud by using Heat Version 2.0 APIs, a timeout error occurs after 120 instances are created.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-15033

  • The provisioning of CPE devices fails if all VRRs within a redundancy group are unavailable.

    Workaround: Recover the VRR that is down and retry the provisioning job.

    Bug Tracking Number: CXU-19063

  • After the upgrade, the health check on the standalone Contrail Analytics Node (CAN) fails.

    Workaround:

    1. Log in to the CAN VM.
    2. Execute the docker exec analyticsdb service contrail-database-nodemgr restart command.
    3. Execute the docker exec analyticsdb service cassandra restart command.

    Bug Tracking Number: CXU-20470

  • The load services data operation or health check of the infrastructure components might fail if the data in the Salt server cache is lost because of an error.

    Workaround: If you encounter a Salt server-related error, do the following:

    1. Log in to the installer VM.
    2. Execute the salt '*' deployutils.get_role_ips 'cassandra' command to confirm whether one or more Salt minions have lost the cache.
      • If the output returns the IP address for all the Salt minions, this means that the Salt server cache is fine; proceed to step 7.

      • If the IP address for some minions is not present in the output, the Salt server has lost its cache for those minions, and the cache must be rebuilt as explained starting with step 3.

    3. Navigate to the current deployment directory for CSO; for example, /root/Contrail_Service_Orchestration_4.1.1/.
    4. Redeploy the central infrastructure services (up to the NTP step):
      1. Execute the DEPLOYMENT_ENV=central ./deploy_infra_services.sh command.
      2. Press Ctrl+c when the console indicates that the NTP deployment step has started.
    5. Redeploy the regional infrastructure services (up to the NTP step):
      1. Execute the DEPLOYMENT_ENV=regional ./deploy_infra_services.sh command.
      2. Press Ctrl+c when the console indicates that the NTP deployment step has started, as for the central infrastructure services.
    6. Execute the salt '*' deployutils.get_role_ips 'cassandra' command and confirm that the output displays the IP addresses of all the Salt minions.
    7. Re-run the load services data operation or the health component check that had previously failed.

    Bug Tracking Number: CXU-20815

  • For an MX Series cloud hub device, if you have configured the Internet link type as OAM_and_DATA, the reverse traffic fails to reach the spoke device if you do not configure additional parameters by using the Junos OS CLI on the MX Series device.

    Workaround:

    1. Log in to the MX Series device and access the Junos OS CLI.
    2. Find the next-hop-service outside-service-interface multiservices interface as follows:
      1. Execute the show configuration | display set | grep outside-service-interface command.
      2. In the output of the command, look for the multiservices (ms-) interface corresponding to the service set that CSO created on the device.

        The name of the service set is in the format ssettenant-name_DefaultVPN-tenant-name, where tenant-name is the name of the tenant.

        The following is an example of the command and output:

        show configuration | display set | grep outside-service-interface
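        The output line of interest would resemble the following (illustrative, reconstructed from the naming convention described in this step):

        set services service-set ssetAcme_DefaultVPN-Acme next-hop-service outside-service-interface ms-1/0/0.4008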

        In this example, the tenant name is Acme and the multiservices interface used is ms-1/0/0.4008.

    3. After you determine the correct interface, add the following configuration on the device: set routing-instances WAN_0 interface ms-interface

      where ms-interface is the name of the multiservices interface obtained in the preceding step.

    4. Commit the configuration.

    Bug Tracking Number: CXU-21818

  • In Resource Designer, if you add a VNF that does not require a password and trigger the Add VNF Manager workflow, you are asked to enter a password even though the VNF does not require it.

    Workaround: Even for VNFs that do not require a password, enter a dummy password in Resource Designer when you are creating a VNF package.

    Bug Tracking Number: CXU-21845

  • In a full mesh topology, the simultaneous deletion of LAN segments on all sites is not supported.

    Workaround: Delete LAN segments on one site at a time.

    Bug Tracking Number: CXU-21936

  • On a CSO setup with secure OAM configured, if you bring up the FortiGate VNF and then apply the license on the VNF, the VNF reboots. However, after rebooting, sometimes the VNF does not come back up.

    Workaround: To ensure that the VNF comes back up, deactivate the VNF and then reactivate it by performing the following steps:

    1. Log in to the JDM CLI of the NFX Series device and access configuration mode.
    2. Deactivate the VNF by executing the deactivate virtual-network-functions Fortinet-oob-2-Firewall command.
    3. Commit the changes by executing the commit command.
    4. Roll back the changes by executing the rollback 1 command.
    5. Commit the changes by executing the commit command.
    6. Exit the configuration mode by executing the quit command.
    7. Execute the show virtual-network-functions command and confirm that the status is Running alive, which means that the VNF is up.

    Bug Tracking Number: CXU-23371

  • When you reboot a device from the Tenant Devices or Devices pages, the reboot job fails because the connectivity is lost during the reboot.

    Workaround: Check the operational status of the device on the Tenant Devices or Devices page. During the reboot phase, the operational status of the device is Down. After the device is successfully rebooted and connectivity is restored, the operational status of the device changes to Up. You can now trigger operations on the device by using the CSO GUI.

    Bug Tracking Number: CXU-24512

  • For an NFX250 device, the Ubuntu VNF service chain configuration is incorrect if you set SINGLE_SSH_TO_NFX to False and then instantiate a service.

    Workaround: None.

    Bug Tracking Number: CXU-25018

  • An error occurs while EEPROM contents for copper ports are being read.

    Workaround: None.

    Bug Tracking Number: PR1372217

  • Because of insufficient buffer size, vSRX performs queue scheduling incorrectly and drops packets.

    Workaround: Set the buffer size to 3000 microseconds by executing the set class-of-service schedulers scheduler-name buffer-size temporal 3000 command.

    Bug Tracking Number: PR1361720