
Known Issues

AWS Spoke

  • The AWS device activation process takes up to 30 minutes. If the process does not complete within 30 minutes, a timeout might occur and you must retry the process. You do not need to download the CloudFormation template again.

    To retry the process:

    1. Log in to the Customer Portal.
    2. Access the Activate Device page, enter the activation code, and click Next.
    3. After the CREATE_COMPLETE message is displayed on the AWS server, click Next on the Activate Device page to proceed with device activation.
  • For an AWS spoke, during the activation process, the device status on the Activate Device page is displayed as Detected even though the device is down.

    Workaround: None.

    Bug Tracking Number: CXU-19779.

CSO HA

  • In a CSO HA setup, two RabbitMQ nodes are clustered together, but the third RabbitMQ node does not join the cluster. This might occur just after the initial installation, if a virtual machine reboots, or if a virtual machine is powered off and then powered on.

    Workaround: Do the following:

    1. Log in to the RabbitMQ dashboard for the central microservices VM (http://central-microservices-vip:15672) and the regional microservices VM (http://regional-microservices-vip:15672).
    2. Check the RabbitMQ overview in the dashboards to see if all the available infrastructure nodes are present in the cluster.
    3. If an infrastructure node is not present in the cluster, do the following:
      1. Log in to the VM of that infrastructure node.
      2. Open a shell prompt and execute the following commands sequentially:

        rabbitmqctl stop_app

        service rabbitmq-server stop

        rm -rf /var/lib/rabbitmq/mnesia/

        service rabbitmq-server start

        rabbitmqctl start_app

    4. In the RabbitMQ dashboards for the central and regional microservices VMs, confirm that all the available infrastructure nodes are present in the cluster.

    Bug Tracking Number: CXU-12107
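The command sequence in the workaround above can be wrapped in a small helper. This is a sketch only, not part of the documented procedure; it assumes you run it as root on the affected infrastructure node VM, and it defaults to a dry run (printing the commands) for safety.

```shell
# Sketch of the RabbitMQ node reset sequence; RUN=echo (the default) gives a
# dry run that only prints the commands. Set RUN="" to actually execute them.
# Assumption: run as root on the infrastructure node that failed to join.
reset_rabbitmq_node() {
  local RUN="${RUN:-echo}"
  $RUN rabbitmqctl stop_app
  $RUN service rabbitmq-server stop
  $RUN rm -rf /var/lib/rabbitmq/mnesia/
  $RUN service rabbitmq-server start
  $RUN rabbitmqctl start_app
}
```

Deleting the mnesia directory discards the node's stale local cluster state, which is what allows the node to rejoin the cluster cleanly on restart.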

  • CSO may not come up after a power failure.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-16530

  • In some cases, when the power fails, the ArangoDB cluster does not form.

    Workaround:

    1. Log in to the centralinfravm3 VM.
    2. Execute the service arangodb.cluster stop command.
    3. Log in to the centralinfravm2 VM.
    4. Execute the service arangodb.cluster stop command.
    5. Log in to the centralinfravm1 VM.
    6. Execute the service arangodb.cluster stop command.
    7. On the centralinfravm1 VM, execute the service arangodb.cluster start command and wait 20 seconds for the command to complete.
    8. On the centralinfravm2 VM, execute the service arangodb.cluster start command and wait 20 seconds for the command to complete.
    9. On the centralinfravm3 VM, execute the service arangodb.cluster start command and wait 20 seconds for the command to complete.

    Bug Tracking Number: CXU-20346.
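The stop/start ordering above can be expressed as two loops. This is a sketch under assumptions not stated in the release notes (passwordless root SSH from one host to the three central infrastructure VMs); by default it only prints the commands.

```shell
# Dry-run sketch of the ArangoDB cluster restart order: stop vm3, vm2, vm1,
# then start vm1, vm2, vm3 with a 20-second pause after each start.
# RUN=echo (the default) prints the commands; set RUN="" to execute.
# Assumption: passwordless root SSH to the central infrastructure VMs.
restart_arango_cluster() {
  local RUN="${RUN:-echo}" vm
  for vm in centralinfravm3 centralinfravm2 centralinfravm1; do
    $RUN ssh root@"$vm" service arangodb.cluster stop
  done
  for vm in centralinfravm1 centralinfravm2 centralinfravm3; do
    $RUN ssh root@"$vm" service arangodb.cluster start
    $RUN sleep 20   # give each node time to come up before starting the next
  done
}
```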

  • In an HA setup, the time configured for the CAN VMs might not be synchronized with the time configured for the other VMs in the setup. This can cause issues in the throughput graphs.

    Workaround:

    1. Log in to can-vm1 as root.
    2. Modify the /etc/ntp.conf file to point to the desired NTP server.
    3. Restart the NTP process.

    After the NTP process restarts successfully, can-vm2 and can-vm3 automatically re-synchronize their times with can-vm1.

    Bug Tracking Number: CXU-15681
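Step 2 of the workaround is a one-line edit of /etc/ntp.conf. The helper below is a sketch: ntp.example.com is a placeholder (the release notes do not name an NTP server), and the optional conf-file argument exists only so the edit can be tried against a copy of the file first.

```shell
# Point every "server" line in an ntp.conf-style file at a single NTP server.
# Note: if the file lists multiple servers, they all become the same entry,
# so review the result before restarting NTP.
set_ntp_server() {
  server="$1"
  conf="${2:-/etc/ntp.conf}"
  sed -i "s/^server .*/server ${server} iburst/" "$conf"
}
# Typical use on can-vm1 (as root), followed by the NTP restart from step 3:
#   set_ntp_server ntp.example.com && service ntp restart
```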

SD-WAN

  • For CSO Release 3.3, the LTE link can only be a backup link. Therefore, the SLA metrics are not applicable, and default values of zero might be displayed on the Application SLA Performance page; these values can be ignored.

    Workaround: None.

    Bug Tracking Number: CXU-19943

  • In a dual CPE spoke, non-cacheable applications do not work when the initial path is on CPE0 and the APBR-selected path is on CPE1.

    Workaround: None.

    Bug Tracking Number: PR1340331

  • In an SRX dual CPE site, when the application traffic takes the Z-mode path, the application throughput reported in the Administration Portal GUI is lower than the actual data throughput.

    Workaround: None.

    Bug Tracking Number: PR1347723.

  • If all the active links, including OAM connectivity to CSO, are down and traffic is using the LTE link, and the DHCP address changes to a new subnet, the traffic is dropped because CSO is unable to reconfigure the device.

    Workaround: None.

    Bug Tracking Number: CXU-19080.

  • On the Site SLA Performance page, applications with different SLA scores are plotted at the same coordinate on the X-axis.

    Workaround: None.

    Bug Tracking Number: CXU-19768.

  • When all local breakout links are down, site-to-Internet traffic fails even though there is an active overlay to the hub.

    Workaround: None.

    Bug Tracking Number: CXU-19807

  • When the CPE is unable to reach CSO, DHCP address changes on WAN interfaces might not be detected, and the interfaces are not reconfigured.

    Workaround: None.

    Bug Tracking Number: CXU-19856

  • When the OAM link is down, the communication between the CPE devices and CSO does not work even though CSO can be reached over other WAN links. There is no impact to the traffic.

    Workaround: None.

    Bug Tracking Number: CXU-19881.

  • In a full mesh topology, the GRE IPsec down alarms are not created for some overlays during link failures.

    Workaround: None.

    Bug Tracking Number: CXU-20403.

  • If you specify an MPLS link without local breakout capability as the backup link, then Internet breakout traffic is dropped because the overlay link to hub will not be used for Internet traffic if local breakout is enabled for the site.

    Workaround: Configure an Internet or an LTE link as the backup link.

    Bug Tracking Number: CXU-20447.

  • If you define an SLA profile for a static SD-WAN policy but do not remove the default values for the SLA parameters and deploy the policy, the policy is deployed as a dynamic SD-WAN policy.

    Workaround: When you define the SLA profile for a static SD-WAN policy, ensure that you remove the default values for the SLA parameters.

    Bug Tracking Number: CXU-20499.

  • When you modify the path preference of an existing SLA profile that has already been deployed and redeploy the SD-WAN policy, the path of the SLA profile is not updated on the CPE device.

    Workaround: Modify the path preference in an SLA profile that is not yet deployed.

    Bug Tracking Number: CXU-20540.

  • In a hub and spoke topology, on link switchover, traffic for non-cacheable applications between the hub and spoke might, in some cases, take an incorrect physical path because the existing session flow is not updated. However, there is no traffic loss.

    Workaround: None.

    Bug Tracking Number: PR1341274

  • In the bandwidth-optimized SD-WAN mode, when the same SLA is used in the SD-WAN policy for different departments and an SLA violation occurs, two link switch events are displayed that appear identical because the department name is missing from the event details.

    Workaround: None.

    Bug Tracking Number: CXU-20529.

Security Management

  • If you create a firewall policy with a large number of firewall policy intents and deploy the policy on a tenant that has a large number of sites with LAN segments and monitoring enabled, the policy deployment fails.

    Workaround: None.

    Bug Tracking Number: CXU-20292

  • When you create a NAT pool, specify the Translation as Port/Range, configure the port as a range, and enter an incorrect starting port number, you cannot enter the ending port number and the NAT pool is created with a single port value instead of a range.

    Workaround: When you create a NAT pool with a port range, ensure that the starting port number is from 1,024 through 65,535, and then enter a corresponding ending port number in the same range.

    Bug Tracking Number: CXU-20366.

Site and Tenant Workflow

  • ZTP fails on an SRX 3xx Series CPE device because DHCP bindings already exist on the CPE.

    Workaround: Manually clear the DHCP bindings on the CPE and restart ZTP.

    Bug Tracking Number: CXU-13446

  • The tenant delete operation fails when CSO is installed with an external Keystone.

    Workaround: You must manually delete the tenant from the Contrail OpenStack user interface.

    Bug Tracking Number: CXU-9070

  • When both the OAM and data interfaces are untagged, ZTP fails when an NFX Series platform is used as the CPE.

    Workaround: Use tagged interfaces for both OAM and data.

    Bug Tracking Number: CXU-15084

  • The tenant creation job might fail if connectivity from CSO to the VRR is lost during job execution.

    Workaround: If the tenant creation job fails and the tenant is created in CSO, delete the tenant and retrigger the tenant creation.

    Bug Tracking Number: CXU-16884

  • If the tenant name exceeds 16 characters, the activation of the SRX hub device fails.

    Workaround: Delete the tenant, re-create the tenant with a name of 16 characters or fewer, and retry the activation.

    Bug Tracking Number: PR1344369.

  • For tenants with a large number of spoke sites, the tenant deletion job fails because of token expiry.

    Workaround: Retry the tenant delete operation.

    Bug Tracking Number: CXU-19990.

  • In some cases, on the Monitor Overview page (Monitoring > Overview) for a site, the ZTP status is displayed incorrectly when you hover over the site.

    Workaround: None.

    Bug Tracking Number: CXU-20226.

  • In some cases, if automatic license installation is enabled in the device profile, after ZTP is complete, the license might not be installed on the CPE device even though the license key is configured successfully.

    Workaround: Reinstall the license on the CPE device by using the Licenses page on the Administration Portal.

    Bug Tracking Number: PR1350302.

  • In the scenario where the redirect service from Juniper (redirect.juniper.net) is not being used, after you upgrade an NFX device to Junos OS Release 15.1X53-D472, the device is unable to connect to the regional server because the phone home server certificate (phd-ca.crt) is reverted to the factory default.

    Workaround: Manually copy the regional certificate to the NFX device.

    Bug Tracking Number: PR1350492.

  • In a hub and spoke topology with multi-tenancy (network segmentation) enabled, the reverse traffic from the hub to the originating spoke might not take the same path as the traffic in the forward direction. There is no traffic loss.

    Workaround: None.

    Bug Tracking Number: CXU-20494.

  • In the Configure Site workflow for a full mesh topology with multitenancy enabled, the option to connect the CPEs only to the hub is not supported; that is, if you specify false for the used_for_meshing parameter, this option is ignored.

    Workaround: None.

    Bug Tracking Number: CXU-20495.

  • For hybrid WAN tenants, during site creation, all the VIMs in the system are displayed even though a specific VIM is already assigned during the tenant creation.

    Workaround: None.

    Bug Tracking Number: CXU-20371.

  • When you use DHCP for the activation of a dual CPE device, ZTP might fail because the device takes longer than expected to connect to the Device Connectivity Service (DCS).

    Workaround: Retry the failed ZTP job.

    Bug Tracking Number: CXU-20467.

  • During site addition, if you create a department but do not assign a LAN segment to that department, during the site activation, the firewall policy deployment fails.

    Workaround: Do one of the following:

    • Go to the Site-Name page, and on the LAN tab, add a new LAN segment to the department that did not have any LAN segments assigned during site creation.
    • Alternatively, during site addition, when you create a department, ensure that you assign at least one LAN segment to that department.

    Bug Tracking Number: CXU-20502.

Topology

  • When a spoke is recalled, the configuration remains on the hub. When the spoke is reprovisioned, the activation fails and an error message indicating that the source and destination addresses of the tunnel cannot be the same is displayed in the logs.

    Workaround: Clean up the configuration of the recalled spoke in the hub and reprovision the spoke with a new name.

    Bug Tracking Number: CXU-20441.

User Interface

  • When you bring down or bring up an AWS availability zone, there might be a momentary slowdown in the response time of the Administration Portal GUI and some in-progress jobs might be affected.

    Workaround: Wait for five minutes and retry the failed jobs.

    Bug Tracking Number: CXU-20463.

General

  • If you create VNF instances in the Contrail cloud by using Heat Version 2.0 APIs, a timeout error occurs after 120 instances are created.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-15033

  • When you upgrade the gateway router (GWR) by using the CSO GUI, after the upgrade completes and the gateway router reboots, the gateway router configuration reverts to the base configuration and loses the IPsec configuration added during Zero Touch Provisioning (ZTP).

    Workaround: Before you upgrade the gateway router by using the CSO GUI, ensure that you do the following:

    1. Log in to the Juniper Device Manager (JDM) CLI of the NFX Series device.
    2. Execute the virsh list command to obtain the name of the gateway router (GWR_NAME).
    3. Execute the request virtual-network-functions GWR_NAME restart command, where GWR_NAME is the name of the gateway router obtained in the preceding step.
    4. Wait a few minutes for the gateway router to come back up.
    5. Log out of the JDM CLI.
    6. Proceed with the upgrade of the gateway router by using the CSO GUI.

    Bug Tracking Number: CXU-11823.
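The pre-upgrade restart in steps 1 through 3 can be scripted from a management host. This is a dry-run sketch under assumptions the release notes do not state: SSH access to the JDM of the NFX Series device, and a `cli -c` invocation to run the Junos command from the JDM shell.

```shell
# Dry-run sketch of the pre-upgrade GWR restart. RUN=echo (the default)
# prints the commands; set RUN="" to execute them.
# Assumptions: SSH access to JDM; "cli -c" runs a CLI command from the shell.
restart_gwr() {
  local RUN="${RUN:-echo}"
  jdm_host="$1"
  gwr_name="$2"   # the GWR name as reported by "virsh list" on the JDM
  $RUN ssh root@"$jdm_host" virsh list
  $RUN ssh root@"$jdm_host" "cli -c 'request virtual-network-functions $gwr_name restart'"
}
```

After running the restart, wait a few minutes for the gateway router to come back up before starting the upgrade from the CSO GUI.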

  • The reboot of the central infrastructure VM is not supported.

    Workaround: If the VM reboots, contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-17242.

  • If you run the script to revert the upgraded setup to CSO Release 3.2.1, in some cases, the status of the ArangoDB cluster becomes unhealthy.

    Workaround:

    1. Log in to the centralinfravm3 VM.
    2. Execute the service arangodb3 stop command and wait for 30 seconds.
      • If the command executes successfully, proceed to step 3.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk {'print $2'}` command.
    3. Log in to the centralinfravm2 VM.
    4. Execute the service arangodb3 stop command and wait for 30 seconds.
      • If the command executes successfully, proceed to step 5.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk {'print $2'}` command.
    5. Log in to the centralinfravm1 VM.
    6. Execute the service arangodb3 stop command and wait for 30 seconds.
      • If the command executes successfully, proceed to step 7.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk {'print $2'}` command.
    7. On the centralinfravm3 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    8. On the centralinfravm2 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    9. On the centralinfravm1 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    10. Execute the netstat -tuplen | grep arangod command on all three central infrastructure VMs to check the status of the ArangoDB cluster. If the port binding is successful for all the central infrastructure VMs, then the status of the ArangoDB cluster is healthy.

      The following is a sample output.

          tcp6 0 0 :::8528 :::* LISTEN 0 54213 9220/arangodb
          tcp6 0 0 :::8529 :::* LISTEN 0 44018 9327/arangod
          tcp6 0 0 :::8530 :::* LISTEN 0 91216 9289/arangod
          tcp6 0 0 :::8531 :::* LISTEN 0 42530 9232/arangod 

    Bug Tracking Number: CXU-20397.
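The stop-with-fallback logic in steps 2, 4, and 6 can be sketched for a single VM. This is a sketch only: the `timeout` wrapper stands in for the manual 30-second wait and Ctrl+C, and the dry-run default means the commands are printed rather than executed.

```shell
# Sketch of one "stop arangodb3, kill it if it hangs" step.
# RUN=echo (the default) prints the stop command; set RUN="" to execute.
stop_arangodb3() {
  local RUN="${RUN:-echo}"
  if ! timeout 30 $RUN service arangodb3 stop; then
    # [a]rangod in the pattern keeps grep from matching its own process,
    # replacing the "grep -v grep" filter from the documented command.
    $RUN kill -9 $(ps -ef | grep '[a]rangod' | awk '{print $2}')
  fi
}
```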

  • On a CPE configured with an LTE backup link, LTE link flaps are observed when the CPE has been running for a long period.

    Workaround: None.

    Bug Tracking Number: PR1349613.

  • In an HA environment, when you upgrade from CSO Release 3.2.1 to CSO Release 3.3, the Kubernetes system pods for the central or regional load balancer VM are in the Terminating state. This causes the load balancer VM to be in the Not Ready state, which causes the health check to fail during the upgrade.

    Workaround:

    1. On the installer VM:
      • If the central load balancer VM is in the Not Ready state, execute the following command: salt 'csp-central-lbvm*' cmd.run 'reboot'.
      • If the regional load balancer VM is in the Not Ready state, execute the following command: salt 'csp-regional-lbvm*' cmd.run 'reboot'.
    2. Wait for some time until the nodes are in the Ready state.
    3. Rerun the upgrade.sh script to continue with the upgrade.

    Bug Tracking Number: CXU-20271.
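Step 1 of the workaround can be wrapped in a small helper run on the installer VM. A sketch, dry run by default; the helper name and argument are illustrative, not part of the documented procedure.

```shell
# Dry-run sketch of rebooting Not Ready load-balancer VMs via Salt.
# RUN=echo (the default) prints the command; set RUN="" to execute.
# Assumption: run on the installer VM, where the Salt master is configured.
reboot_lbvms() {
  local RUN="${RUN:-echo}"
  scope="${1:?usage: reboot_lbvms central|regional}"
  $RUN salt "csp-${scope}-lbvm*" cmd.run 'reboot'
}
```

After the reboot, wait until the nodes report Ready, then rerun upgrade.sh as in steps 2 and 3.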

  • The provisioning of CPE devices fails if all the VRRs within a redundancy group are unavailable.

    Workaround: Recover the VRR that is down and retry the provisioning (ZTP) job.

    Bug Tracking Number: CXU-19063

  • In the centralized deployment, after you import a POP, the CPU, memory, and storage allocation are displayed as zero.

    Workaround: Refresh the UI and the correct information is displayed.

    Bug Tracking Number: CXU-19105

  • The CSO health check displays the following error message: ERROR: ONE OR MORE KUBE-SYSTEM PODS ARE NOT RUNNING

    Workaround:

    1. Log in to the central microservices VM.
    2. Execute the kubectl get pods --namespace=kube-system command.
    3. If the kube-proxy process is not in the Running state, execute the kubectl apply -f /etc/kubernetes/manifests/kube-proxy.yaml command.

    Bug Tracking Number: CXU-20275.
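The check-and-reapply in steps 2 and 3 can be combined into one helper. A sketch, assuming kubectl is configured on the central microservices VM; the apply command is only printed unless RUN is cleared.

```shell
# If the kube-proxy pod is not in the Running state, reapply its manifest.
# RUN=echo (the default) prints the apply command; set RUN="" to execute.
check_kube_proxy() {
  local RUN="${RUN:-echo}"
  # Third column of "kubectl get pods" output is the pod STATUS.
  state="$(kubectl get pods --namespace=kube-system 2>/dev/null \
           | awk '/kube-proxy/ {print $3; exit}')"
  if [ "$state" != "Running" ]; then
    $RUN kubectl apply -f /etc/kubernetes/manifests/kube-proxy.yaml
  fi
}
```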

  • In a department, when there are two LAN segments with DHCP enabled, only one DHCP server setting is deployed on the device.

    Workaround: Enable DHCP only for one LAN segment in a department.

    Bug Tracking Number: CXU-20519.

  • For LAN segments that contain overlapping IP addresses:
    • If you create two or more LAN segments that contain overlapping IP addresses and click Deploy (regardless of whether you select one of the overlapping LAN segments or none of them), the deployment is triggered for all LAN segments that are not in the VPN attached state.

      Workaround: To create two or more LAN segments with overlapping IP addresses, create and trigger the deploy operation for one LAN segment at a time. After the deployment is successful, create and trigger the deploy operation for the next LAN segment with the overlapping IP address, and so on.

    • If you delete two or more LAN segments that contain overlapping IP addresses, select the deleted LAN segments and then trigger the deploy operation, an incorrect error message indicating that the IP prefixes are the same is displayed.

      Workaround: To delete and deploy two or more LAN segments with overlapping IP addresses, delete the LAN segments and then trigger the deploy operation without selecting any LAN segments.

    Bug Tracking Number: CXU-20365.

  • The Grant RMA operation fails for a multihome hub device.

    Workaround: None.

    Bug Tracking Number: CXU-20457.

  • After the upgrade, the health check on the standalone Contrail Analytics Node (CAN) fails.

    Workaround:

    1. Log in to the CAN VM.
    2. Execute the docker exec analyticsdb service contrail-database-nodemgr restart command.
    3. Execute the docker exec analyticsdb service cassandra restart command.

    Bug Tracking Number: CXU-20470.
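The two restarts in the workaround can be grouped. A dry-run sketch, assuming Docker access on the CAN VM and that the container is named analyticsdb as in the documented commands.

```shell
# Restart the CAN database services inside the analyticsdb container.
# RUN=echo (the default) prints the commands; set RUN="" to execute.
restart_can_db_services() {
  local RUN="${RUN:-echo}"
  $RUN docker exec analyticsdb service contrail-database-nodemgr restart
  $RUN docker exec analyticsdb service cassandra restart
}
```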

  • When the LTE modem is disconnected or disabled on the NFX250 CPE device, an alarm is triggered. However, the underlay link status on the Sites page might not display the alarm.

    Workaround: None.

    Bug Tracking Number: CXU-20492.

Modified: 2018-04-01