Known Issues

This section lists known issues in Juniper Networks CSO Release 5.1.2.

SD-WAN

  • If the Internet breakout WAN link of the provider hub is not used for provisioning the overlay tunnel by at least one spoke site in a tenant, then traffic from sites to the Internet is dropped.

    Workaround: Ensure that you configure a firewall policy to allow traffic from security zone trust-tenant-name to zone untrust-wan-link, where tenant-name is the name of the tenant and wan-link is the name of the Internet breakout WAN link.
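
    For example, a minimal sketch of such a firewall policy, assuming the tenant is named tenant1 and the Internet breakout WAN link is named wan_0 (adjust the zone names to match your deployment):

      set security policies from-zone trust-tenant1 to-zone untrust-wan_0 policy allow-internet-breakout match source-address any destination-address any application any

      set security policies from-zone trust-tenant1 to-zone untrust-wan_0 policy allow-internet-breakout then permit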

    Bug Tracking Number: CXU-21291

  • After you upgrade from CSO Release 4.1.1 to CSO Release 5.1.2, the Firewall Policy and SD-WAN Policy pages show an incorrect count of undeployed intents.

    Workaround: Modify the policy and redeploy.

    Bug Tracking Number: CXU-45171

  • After you delete the last LAN segment of a site, you cannot view the WAN links on the Monitor > Geographic Map and Site Management > Site Site-Name > WAN pages. This issue applies only to sites added before the upgrade to CSO Release 5.1.2, not to sites added after the upgrade.

    Workaround: Add a LAN segment and redeploy.

    Bug Tracking Number: CXU-48454

  • For an SD-WAN site with a Zscaler tunnel, if the IKE source IP address for Zscaler tunnels is a pool of IP addresses and if you reboot the spoke device, the Zscaler tunnel may fail to come up.

    Workaround: Re-apply the configuration for Zscaler groups on the device.

    Bug Tracking Number: CXU-47338

  • You cannot upgrade an enterprise hub site if one of the enterprise hubs is not reachable and is in the configured state.

    Workaround: Upgrade the enterprise hub site after deleting the enterprise hub that is not reachable and is in the configured state.

    Bug Tracking Number: CXU-50060

SD-LAN

  • The deployment of a port profile fails if the values you have configured for the firewall filter are not supported on the device running Junos OS.

    Workaround:

    • Edit the firewall filter.

    • Update the values according to the supported firewall filter configuration specified in this link.

    • Redeploy the port profile.

    Bug Tracking Number: CXU-39629

  • CSO is unable to configure access ports on the EX4600 and EX4650 devices after you zeroize the device because a default VLAN is configured on all the ports after zeroizing.

    Workaround: If you zeroize an EX4600 or EX4650 device, load the factory-default configuration, or delete the default VLAN configuration from all the ports of the members by using commands such as # wildcard range delete interfaces xe-0/0/[0-23].
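
    For example, a sketch of the configuration-mode sequence, assuming the access ports are xe-0/0/0 through xe-0/0/23 (adjust the interface range for your device):

      configure

      wildcard range delete interfaces xe-0/0/[0-23]

      commit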

    Bug Tracking Number: CXU-42865

  • When you add a switch to an already provisioned site, the site state is set to Provisioned in CSO. Therefore, the link to copy the stage-1 configuration for manually activating the EX Series device does not appear. The site state should be set to Provisioned only when all the devices in the site are provisioned.

    Workaround: Delete the device from CSO, rectify the cause of the provisioning failure, and then add the device again.

    Bug Tracking Number: CXU-40647

  • The chassis view for an EX2300 Virtual Chassis appears blank when the device resources are used up and the request for getting a response from the device times out.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-42866

  • In an on-premises installation, if the deployment of a port profile configuration fails on an EX4650 switch, CSO displays the management status of the site with the EX4650 switch as Provisioned even though the ZTP job fails on the switch.

    Workaround: Ensure that no port profile is deployed on an EX4650 switch during ZTP.

    Bug Tracking Number: CXU-42181

  • ZTP of an EX Series switch fails if you add the switch behind an enterprise hub.

    Workaround: For onboarding an EX Series switch behind an enterprise hub, manually configure the stage-1 configuration on the switch.

    Bug Tracking Number: CXU-38994

CSO High Availability

  • In an HA setup, if a power failure occurs, certain workflows, such as onboarding a tenant or configuring a site, may fail randomly with a ReadTimeout error.

    Workaround: Contact JTAC for the recovery procedure.

    Bug Tracking Number: CXU-43001

  • When the virtual route reflector (VRR) is down or not reachable from CSO, you cannot delete a site or tenant from CSO.

    Workaround: Recover the VRR and retry deleting the site or tenant.

    Bug Tracking Number: CXU-43724

  • After you restart all three infrastructure nodes, MariaDB is not restored properly.

    Workaround: Execute the recovery.sh script on the startup server and select the MariaDB option to restore MariaDB completely.

    Bug Tracking Number: CXU-42125

  • In an HA installation, during infrastructure deployment, sometimes services inside the Contrail Analytics Node remain in the initializing state. Because of this, you cannot configure the Contrail Analytics Node and the infrastructure deployment fails.

    Workaround: There is no known workaround. You must delete all the virtual machines spawned and start the deployment all over again.

    Bug Tracking Number: CXU-42965

  • While you are installing CSO 5.1.2, the Contrail_analytics component is reported as unhealthy when you run the deploy.sh script for the first time.

    Workaround:

    1. Reboot the Contrail Analytics Node and wait for around 10 minutes.
    2. Run the ./components_health.sh script to check the health of the components.
    3. If all components are healthy, then run the following commands:

      ./python.sh micro_services/deploy_micro_services.py

      ./python.sh micro_services/load_services_data.py

    Bug Tracking Number: CXU-48269

  • After you reboot the server, the docker containers within the Contrail Analytics Node are not started.

    Workaround:

    To restart the docker containers:

    1. Run the sudo fsck /dev/vda1 command.
    2. Reboot the Contrail Analytics Node.
    3. After you reboot the server, check the status of the docker container by running the docker ps command.
    4. To ensure that all services inside the Contrail Analytics Node are running properly, run the components_health.sh script from the CSO folder on the startup server.

    Bug Tracking Number: CXU-48126

  • When you reboot a server, the status of all microservices is initially displayed as Pending. Only when the node is ready does the status of the microservices change to Running. However, the secmgnt and monitoring pods are occasionally not up and running.

    Workaround: Restart the pods manually by running the kubectl delete pods pod-name -n namespace command.
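
    For example, a sketch that first looks up the exact pod name and namespace and then deletes the pod (the pod name and namespace shown are placeholders):

      kubectl get pods --all-namespaces | grep -E 'secmgnt|monitoring'

      kubectl delete pod pod-name -n namespace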

    Bug Tracking Number: CXU-48125

  • When you reboot a server, if you run the kubectl get nodes command, the status of all nodes is displayed as Not Ready. This is because the docker containers in the infraservices or microservices are not automatically started and the status is displayed as Loaded.

    Workaround: To bring back the node status to Ready, log in to the infraservices or microservices node, and run the following commands:

    rm -f /var/lib/docker/network/files/local-kv.db

    service docker start

    Bug Tracking Number: CXU-48027

  • After you reboot the server, a few pods in kube-system are in the CrashLoopBackOff state.

    Workaround: You must replace the entire k8 master node.

    1. Log in to the startup server.
    2. From the CSO folder, run the ./deploy.sh -r replace_vm command.
    3. Select the k8 virtual machine that is corrupted and must be replaced.
    4. After the virtual machine is spawned, run the following command from the startup server to ensure that all the pods are in the Running state:

      root@startupserver1:~/Contrail_Service_Orchestration_5.1.2# kubectl get pods -n kube-system -o wide

    Bug Tracking Number: CXU-46754

  • After you reboot the server, ArangoDB is corrupted and the arangodb3 service is not in the Running state.

    Workaround: Log in to Infra Node and execute the following commands:

    cd /mnt/data/arangodb3/cluster/dbserver8530/data/engine-rocksdb/journals

    mkdir -p /mnt/data/arango/

    mv * /mnt/data/arango/archive1

    systemctl start arangodb3.service

    Bug Tracking Number: CXU-47812

  • The installation of CSO Release 5.1.2 fails while you are configuring Contrail Analytics Node.

    Workaround: On the startup server, run the following commands and then rerun the ./deploy.sh script:

    salt -C "G@roles:contrail_analytics" state.apply contrail_analytics.post_configure saltenv='central'

    salt 'contrail_analytics1' cmd.run "server-manager provision --cluster_id demo-cluster contrail_networking_docker --no_confirm"

    Bug Tracking Number: CXU-48745

  • After you reboot the bare-metal servers, occasionally the Contrail Analytics Node processes are not running as expected.

    Workaround:

    • If there is an NTP Server-related issue, run the following command on the CSO installer VM:

      salt *contrail* state.apply ntp saltenv=central

    • If there is a RabbitMQ-related issue, run the following commands on all three Contrail Analytics Nodes:

      docker exec -it controller bash

      service rabbitmq-server stop

      ps -ef | grep epmd

      rm -rf /var/lib/rabbitmq/mnesia/

      service rabbitmq-server start

      docker restart analytics && docker restart analyticsdb && docker restart controller

    Bug Tracking Number: CXU-48572

  • After you reboot the startup server, the status of some pods (for example, etcd, kube-api, and so on) in kube-system and the infra node is displayed as CrashLoopBackOff.

    Workaround: You need to replace the k8 master node.

    1. Log in to the startup server.

    2. From the CSO folder, run the following command:

      ./deploy.sh -r replace_vm

    3. Select the appropriate k8 VM or the infra node that is corrupted and must be replaced.

    4. After the VM is spawned, run the following command from the startup server to ensure that all the pods are in the Running state:

      root@startupserver1:~/Contrail_Service_Orchestration_5.1.2# kubectl get pods -n kube-system -o wide

    Bug Tracking Number: CXU-46754

  • After you reboot one of the startup servers, the Add Site or Delete Site workflows might fail with the following error:

    vhost '/' is down or inaccessible.

    Workaround: Restart all RabbitMQ pods sequentially. Log in to the startup server and run the following commands:

    kubectl delete pod rabbitmq-ha-0 -n infra

    kubectl delete pod rabbitmq-ha-1 -n infra

    kubectl delete pod rabbitmq-ha-2 -n infra

    Bug Tracking Number: CXU-46490

  • During the server reboot, the status of the Calico pod is displayed incorrectly for a considerable time. This is because the IP_AUTODETECTION_METHOD environment variable is not set.

    Workaround: Reboot the server again.

    Bug Tracking Number: CXU-44871

  • You cannot view the latest csplogs in Kibana.

    Workaround:

    1. On the installer VM, navigate to the CSO 5.1.2 directory.

    2. Replace content in the deployments/central/file_root/elk_elasticsearch/configs/csplogs_template.json file with the following:

    3. Open upgrade_logstash.sls in the deployments/central/file_root/upgrade/ folder, and replace ^-Xmx3g with ^-Xmx2g.

    4. Run the following command:

      salt -C "G@roles:elk_logstash" state.apply upgrade.upgrade_elk_logstash saltenv='central'

    Bug Tracking Number: CXU-49881

Security Management

  • If a provider hub is used by two tenants, one with public key infrastructure (PKI) authentication enabled and the other with preshared key (PSK) authentication enabled, the commit configuration operation fails. This is because an IKE gateway can point to only one IKE policy, and if you define the policy with a certificate, the preshared key does not work.

    Workaround: Ensure that the tenants sharing a provider hub use the same type of authentication (either PKI or PSK) as the provider hub device.

    Bug Tracking Number: CXU-23107

  • If UTM Web-filtering categories are installed manually (by using the request security utm web-filtering category install command from the CLI) on an NFX150 device, the intent-based firewall policy deployment from CSO fails.

    Workaround: Uninstall the UTM Web-filtering category that you installed manually by executing the request security utm web-filtering category uninstall command on the NFX150 device and then deploy the firewall policy.
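
    For example (the device prompt is illustrative):

      root@NFX150> request security utm web-filtering category uninstall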

    Bug Tracking Number: CXU-23927

Site and Tenant Workflow

  • When you perform ZTP on more than one enterprise hub at the same time, ZTP for one or the other enterprise hub may fail.

    Workaround: Perform ZTP on the enterprise hubs one after the other; that is, start ZTP on the next enterprise hub only after ZTP of the first enterprise hub completes successfully. You can also retry the failed ZTP job.

    Bug Tracking Number: CXU-42985

  • When onboarding a next-generation firewall and switch, the CSO GUI may temporarily show that provisioning the firewall has failed when a license is not present, although the ZTP task completes and the site is provisioned.

    Workaround: Refresh the page to view the final status of onboarding the next-generation firewall.

    Bug Tracking Number: CXU-43024

General

  • UTM Web filtering fails at times even though the Enhanced Web Filtering (EWF) server is up and online.

    Workaround: From the device, configure the EWF Server with the IP address 116.50.57.140 as shown in the following example:

    root@SRX-1# set security utm feature-profile web-filtering juniper-enhanced server host 116.50.57.140

    Bug Tracking Number: CXU-32731

  • If you click a specific application in the Top applications widget on the Resources > Site Management > WAN tab, the Link Performance widget does not display any data.

    Workaround: You can view the data on the Monitoring > Application Visibility page or the Monitoring > Traffic Logs page.

    Bug Tracking Number: CXU-39167

  • The bootstrap job for a device remains in the In Progress state for a considerable time. This is because CSO fails to receive the bootstrap completion notification from the device.

    Workaround: If the bootstrap job is in the In Progress state for more than 10 minutes, add the following configuration to the device:

    set system phone-home server https://redirect.juniper.net

    Bug Tracking Number: CXU-35450

  • After Network Address Translation (NAT), only one DVPN tunnel is created between two spoke sites if the WAN interfaces (with link type Internet) of one of the spoke sites have the same public IP address.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41210

  • On an SRX Series device, the deployment fails if you use the same IP address in both the Global FW policy and the Zone policy.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41259

  • In case of an AppQoE event (packet drop or latency), the application may not switch to the best path among the available links.

    Workaround: Reboot the device.

    Bug Tracking Number: CXU-41922

  • While you are using a remote console for a tenant device, if you press the Up Arrow or Down Arrow key, then instead of the command history, irrelevant text (which includes the device name and the tenant name) appears on the console.

    Workaround: To clear the irrelevant text, press the Down Arrow key a few times and then press Enter.

    Bug Tracking Number: CXU-41666

  • While you are editing a tenant, if you modify the Tenant-owned Public IP Pool field under Advanced Settings (optional), the changes that you made to the Tenant-owned Public IP Pool field are not reflected after the edit tenant job completes.

    Note

    You cannot add Tenant-owned Public IP pool after you create an SD-WAN site for the tenant.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-41139

  • The TAR file installation of a distributed deployment fails. This issue occurs if the version of the bare-metal server that you are using is later than the recommended version.

    Workaround: You must install the python2.7-dev package before running the deploy.sh script.

    After you extract the CSO TAR file on the bare-metal server:

    1. Navigate to the /etc/apt directory and execute the following commands:

      • cp sources.list sources.list.cso

      • cp orig-sources.list sources.list

    2. Install the python2.7-dev package by running the following commands:

      • apt-get update && apt-get install python2.7-dev

      • cp sources.list.cso sources.list

    3. Navigate to the /root/Contrail_Service_Orchestration_5.1.2 folder and then run the deploy.sh script.

    Bug Tracking Number: CXU-41845

  • The Users page continues to display the name of the user that you deleted. This is because the Users page is not automatically refreshed.

    Workaround: Manually refresh the page.

    Bug Tracking Number: CXU-41793

  • After ZTP of an NFX Series device, the status of some tunnels is displayed as Down. This issue occurs if you are using the subnet IP address 192.168.2.0 on WAN links, which causes an internal IP address conflict.

    Workaround: Avoid using the 192.168.2.0 subnet on WAN links.

    Bug Tracking Number: CXU-41511

  • In the CSO GUI, in the LAN tab of a next-generation firewall site with a LAN switch, when you click the arrow icon next to a LAN segment, the ports displayed in the Switch Ports field disappear.

    Workaround: Hover over the +number of ports link in the Switch Ports column to view the list of ports on the LAN.

    Bug Tracking Number: CXU-42608

  • Installation of licenses on an SRX4200 dual CPE cluster by using CSO fails.

    Workaround: Install the licenses manually. To install the licenses manually:

    1. Copy the license files for both the devices to the primary node of the cluster.
    2. Install the license on the primary device.
    3. Copy the license file of the backup node to the backup node.
    4. Log in to the backup node and install the license.
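
    For example, a minimal sketch of installing a license file from the Junos CLI, assuming the license files were copied to /var/tmp on each node (the hostnames and file names are placeholders):

      root@srx4200-node0> request system license add /var/tmp/license-node0.txt

      root@srx4200-node1> request system license add /var/tmp/license-node1.txt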

    Bug Tracking Number: CXU-40522

  • When you configure a DVPN tunnel between an Internet link that is behind NAT and an Internet link that is not behind NAT, the IPsec tunnel may not come up.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-43217

  • You cannot change PKI properties (CA Server URL, Password, CRL Server, Auto Renew) on the Tenant Settings page.

    Workaround: Only a tenant administrator can change PKI properties on the Administration > Certificate Management > VPN Authentication page.

    Bug Tracking Number: CXU-41231

  • Even though you successfully upgrade a spoke site from CSO Release 4.1.1 to CSO Release 5.1.2, the MPLS flow mode settings are not applied. This issue does not apply if the MPLS flow mode settings were already applied to the CPE device in CSO Release 4.1.1 through stage-2 templates.

    Workaround: Reboot the server.

    Bug Tracking Number: CXU-42670

  • CSO does not support cluster-level Return Material Authorization (RMA) for SRX dual CPE devices. Only cluster node-level RMA is supported.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-32157

  • CSO Release 5.1.2 does not support the installation of third-party certificates.

    Workaround: There is no known workaround.

    Bug Tracking Number: CXU-49827

  • You cannot delete the last LAN segment from the department if there is no connectivity between:

    • The spoke device and CSO, or

    • The designated hub of the spoke device and CSO.

    Workaround: Delete the last LAN segment from the department after the connectivity is restored.

    Bug Tracking Number: CXU-49439