Known Issues

AWS Spoke

  • The AWS device activation process takes up to 30 minutes. If the process does not complete within 30 minutes, a timeout might occur and you must retry the process. You do not need to download the CloudFormation template again.

    To retry the process:

    1. Log in to the Customer Portal.
    2. Access the Activate Device page, enter the activation code, and click Next.
    3. After the CREATE_COMPLETE message is displayed on the AWS server, click Next on the Activate Device page to proceed with device activation.
  • For an AWS spoke, the device status on the Activate Device page is displayed as Detected even though the device is down.

    Workaround: None.

    Bug Tracking Number: CXU-19779.

  • When you create a cloud spoke site, the default links and backup link fields are not applicable.

CSO HA

  • In a three-node setup, two nodes are clustered together, but the third node is not part of the cluster. In addition, in some cases, the RabbitMQ nodes are also not part of the cluster. This is a rare scenario, which can occur just after the initial installation, if a virtual machine reboots, or if a virtual machine is powered off and then powered on.

    Workaround: Do the following:

    1. Log in to the RabbitMQ dashboard for the central microservices VM (http://central-microservices-vip:15672) and the regional microservices VM (http://regional-microservices-vip:15672).
    2. Check the RabbitMQ overview in the dashboards to see if all the available infrastructure nodes are present in the cluster.
    3. If an infrastructure node is not present in the cluster, do the following:
      1. Log in to the VM of that infrastructure node.
      2. Open a shell prompt and execute the following commands sequentially:

        rabbitmqctl stop_app

        service rabbitmq-server stop

        rm -rf /var/lib/rabbitmq/mnesia/

        service rabbitmq-server start

        rabbitmqctl start_app

    4. In the RabbitMQ dashboards for the central and regional microservices VMs, confirm that all the available infrastructure nodes are present in the cluster.
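    As a sketch, the reset commands from step 3 can be wrapped in a dry-run guard so you can review them before running them on the affected node. The guard itself is an illustrative addition, not part of the documented workaround:

```shell
#!/bin/sh
# Sketch of the RabbitMQ node reset from step 3. DRY_RUN=1 (the default
# here) only prints each command; set DRY_RUN=0 on the affected
# infrastructure node to actually run them.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # preview mode: show the command only
  else
    "$@"                 # live mode: execute on the node
  fi
}

run rabbitmqctl stop_app                namespace=""  2>/dev/null || true
run rabbitmqctl stop_app                # stop the RabbitMQ application
run service rabbitmq-server stop        # stop the server process
run rm -rf /var/lib/rabbitmq/mnesia/    # clear the stale cluster state
run service rabbitmq-server start       # start the server again
run rabbitmqctl start_app               # rejoin the cluster
```

    After a live run, recheck the RabbitMQ dashboards to confirm the node rejoined the cluster.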

    Bug Tracking Number: CXU-12107

  • Data in the MariaDB instances in cluster mode can go out of sync when a central infrastructure node fails.

    Workaround: You must manually synchronize the MariaDB instances. Contact Juniper Networks Technical Support for instructions.

    Bug Tracking Number: CXU-13128

  • In an HA deployment, when one of the central infrastructure hosts goes down, the SD-WAN workflow fails.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-16273

  • CSO may not come up after a power failure.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-16530

Installation

  • In an HA setup, the time configured for the CAN VMs might not be synchronized with the time configured for the other VMs in the setup. This can cause issues in the throughput graphs.

    Workaround:

    1. Log in to can-vm1 as root.
    2. Modify the /etc/ntp.conf file to point to the desired NTP server.
    3. Restart the NTP process.

    After the NTP process restarts successfully, can-vm2 and can-vm3 automatically re-synchronize their times with can-vm1.
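    A minimal sketch of the can-vm1 steps follows. The NTP server name and the sed pattern (which assumes an existing server line in /etc/ntp.conf) are illustrative assumptions; DRY_RUN=1 (the default here) only prints each command:

```shell
#!/bin/sh
# Sketch of the can-vm1 time-sync workaround. NTP_SERVER is a
# placeholder; replace it with your desired NTP server. Set DRY_RUN=0
# on can-vm1 to actually run the commands.
DRY_RUN=${DRY_RUN:-1}
NTP_SERVER=${NTP_SERVER:-pool.ntp.org}   # placeholder, not from the procedure
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # preview mode: show the command only
  else
    "$@"                 # live mode: execute on can-vm1
  fi
}

# Point /etc/ntp.conf at the desired server, then restart the NTP process.
run sed -i "s/^server .*/server $NTP_SERVER/" /etc/ntp.conf
run service ntp restart
```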

    Bug Tracking Number: CXU-15681

Policy Deployment

  • Automatic policy deployments (for example, auto NAT, firewall, and SD-WAN policies) that are triggered when a new site is added can sometimes fail because trusted certificates are being installed on the device in parallel.

    Workaround: To redeploy the failed job, open the Configuration > Deployments > History window, select the failed job, and click Re-Deploy.

    Bug Tracking Number: CXU-16652

  • If you create a firewall policy and deploy it to the device, and subsequently create one or more firewall policy intents without redeploying the policy, the firewall policy is automatically deployed to the device when there is a change in the topology, such as the addition of a new site, department, or LAN segment.

    Workaround: Create firewall policy intents only when you intend to deploy them to the device, and redeploy the policy after creating the intents.

    Bug Tracking Number: CXU-15794

SD-WAN

  • The Application SLA Performance page displays dummy SLA performance data for LTE links. This is because an LTE link can only be a backup link, which means that SLA metrics are not applicable.

    Workaround: None.

    Bug Tracking Number: CXU-19943

  • In a dual CPE spoke, non-cacheable applications do not work when the initial path is on CPE0 and the APBR-selected path is on CPE1.

    Workaround: None.

    Bug Tracking Number: PR1348889

  • For AppQoE-based tenants, the Application SLA Performance page does not display the application-specific round-trip time (RTT), jitter, and throughput parameters against the SLA profiles. Instead, these parameters are displayed against the SLA profile named Default.

    Workaround: None.

    Bug Tracking Number: CXU-20162

  • If all the active links, including OAM connectivity to CSO, are down and the LTE link is used for traffic, and the DHCP address changes to a new subnet, the traffic is dropped because CSO is unable to reconfigure the device.

    Workaround: None.

    Bug Tracking Number: CXU-19080.

  • If you specify an MPLS link as the backup link, then CSO enables local breakout on the MPLS link, which causes packets to be dropped without notice.

    Workaround: Configure an Internet or LTE link as the backup link.

    Bug Tracking Number: CXU-20447.

Site and Tenant Workflow

  • ZTP for SRX Series devices will not work with a redirect server because a BOOTSTRAP complete message is not received when ZTP is initiated through a redirect server.

    Workaround: Use a CSO regional server instead of a redirect server for CPE activation.

    Bug Tracking Number: CXU-14099

  • ZTP fails on SRX3xx Series CPE devices because DHCP bindings already exist on the CPE.

    Workaround: Manually clear the DHCP bindings on the CPE and restart ZTP.

    Bug Tracking Number: CXU-13446

  • The tenant delete operation fails when CSO is installed with an external Keystone.

    Workaround: You must manually delete the tenant from the Contrail OpenStack user interface.

    Bug Tracking Number: CXU-9070

  • When both the OAM and data interfaces are untagged, ZTP fails when an NFX Series platform is used as the CPE.

    Workaround: Use tagged interfaces for both OAM and data.

    Bug Tracking Number: CXU-15084

  • The tenant creation job may fail if connectivity from CSO to VRR is lost during job execution.

    Workaround: If the tenant creation job fails and the tenant is created in CSO, delete the tenant and retrigger the tenant creation.

    Bug Tracking Number: CXU-16884

  • If the tenant name exceeds 16 characters, the activation of the SRX hub device fails.

    Workaround: Delete the tenant, re-create it with a name of 16 characters or fewer, and retry the activation.

    Bug Tracking Number: PR1344369.

  • In some cases, on the Monitor Overview page (Monitoring > Overview) for a site, the site information displays the device status message as stage 2 passed, stage 2 failed, or stage 2 in-progress depending on the department deployment status.

    Workaround: None.

    Bug Tracking Number: CXU-20226.

  • In some cases, even though automatic license installation is enabled in the device profile and the license key is configured successfully, the license might not be installed on the CPE device after ZTP is complete.

    Workaround: Reinstall the license on the CPE device by using the Licenses page on the Administration Portal.

    Bug Tracking Number: PR1350302.

  • After you upgrade an NFX device to Junos OS Release 15.1X53-D472, the device is unable to connect to the regional server because the phone home server certificate (phd-ca.crt) is reverted to the factory default.

    Workaround: Manually copy the regional certificate to the NFX device.

    Bug Tracking Number: PR1350492.

Topology

  • When configuring the SRX spoke in the multihoming topology with a cloud hub and enterprise hub, the administration portal displays a Primary Hub and Secondary Hub must belong to a same Device Family error message.

    Workaround: You can ignore this error message; click OK to dismiss it.

    Bug Tracking Number: CXU-16662

  • On link failover, in some cases, the traffic between the hub and spoke takes an incorrect physical path because the existing session flow is not updated with the new generic routing encapsulation (GRE) interface information. However, there is no traffic loss.

    Workaround: None.

    Bug Tracking Number: PR1341274

User Interface

  • Sorting by Administrator on the Tenants page displays an error message.

    Workaround: This is an invalid error message. Click OK to continue.

    Bug Tracking Number: CXU-16642

  • If sites are removed without first undeploying the associated policies, the removal of SLA profiles fails.

    Workaround: Delete and deploy all the associated SD-WAN policies before removing sites.

    Bug Tracking Number: CXU-13179

General

  • If you create VNF instances in the Contrail cloud by using Heat Version 2.0 APIs, a timeout error occurs after 120 instances are created.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-15033

  • When you upgrade the gateway router (GWR) by using the CSO GUI, after the upgrade completes and the gateway router reboots, the gateway router configuration reverts to the base configuration and loses the IPsec configuration added during Zero Touch Provisioning (ZTP).

    Workaround: Before you upgrade the gateway router by using the CSO GUI, ensure that you do the following:

    1. Log in to the Juniper Device Manager (JDM) CLI of the NFX Series device.
    2. Execute the virsh list command to obtain the name of the gateway router (GWR_NAME).
    3. Execute the request virtual-network-functions GWR_NAME restart command, where GWR_NAME is the name of the gateway router obtained in the preceding step.
    4. Wait a few minutes for the gateway router to come back up.
    5. Log out of the JDM CLI.
    6. Proceed with the upgrade of the gateway router by using the CSO GUI.
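    The JDM-side steps above can be sketched as follows. The cli -c wrapper for the request command and the placeholder GWR name are assumptions; substitute the actual name shown by virsh list. DRY_RUN=1 (the default here) only prints each command:

```shell
#!/bin/sh
# Sketch of the pre-upgrade GWR restart, run from the JDM shell of the
# NFX Series device. Set DRY_RUN=0 to actually run the commands.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # preview mode: show the command only
  else
    "$@"                 # live mode: execute in JDM
  fi
}

run virsh list                          # step 2: find the gateway router name
GWR_NAME="vjunos0"                      # placeholder; use the name from virsh list
run cli -c "request virtual-network-functions $GWR_NAME restart"   # step 3
# Step 4: wait a few minutes for the gateway router to come back up,
# then log out of JDM and proceed with the upgrade from the CSO GUI.
```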

    Bug Tracking Number: CXU-11823.

  • The reboot of the central infrastructure VM is not supported.

    Workaround: If the VM reboots, contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-17242.

  • If you run the script to revert the upgraded setup to CSO Release 3.2.1, in some cases, the status of the ArangoDB cluster becomes unhealthy.

    Workaround:

    1. Log in to the centralinfravm3 VM.
    2. Execute the service arangodb3 stop command and wait for 30 seconds.
      • If the command executes successfully, proceed to step 3.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk '{print $2}'` command.
    3. Log in to the centralinfravm2 VM.
    4. Execute the service arangodb3 stop command and wait for 30 seconds.
      • If the command executes successfully, proceed to step 5.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk '{print $2}'` command.
    5. Log in to the centralinfravm1 VM.
    6. Execute the service arangodb3 stop command and wait for 30 seconds.
      • If the command executes successfully, proceed to step 7.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk '{print $2}'` command.
    7. On the centralinfravm3 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    8. On the centralinfravm2 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    9. On the centralinfravm1 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    10. Execute the netstat -tuplen | grep arangod command on all three central infrastructure VMs to check the status of the ArangoDB cluster. If the port binding is successful on all the central infrastructure VMs, the ArangoDB cluster is healthy.

      The following is a sample output.

          tcp6 0 0 :::8528 :::* LISTEN 0 54213 9220/arangodb
          tcp6 0 0 :::8529 :::* LISTEN 0 44018 9327/arangod
          tcp6 0 0 :::8530 :::* LISTEN 0 91216 9289/arangod
          tcp6 0 0 :::8531 :::* LISTEN 0 42530 9232/arangod 
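    The per-VM stop step above can be sketched as a small shell function, where timeout 30 stands in for the manual "wait 30 seconds, then press Ctrl+C" step. DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0 on the VM itself to actually run it:

```shell
#!/bin/sh
# Sketch of the per-VM ArangoDB stop/recover step. Run the same step on
# centralinfravm3, then centralinfravm2, then centralinfravm1.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # preview mode: show the command only
  else
    "$@"                 # live mode: execute on the VM
  fi
}

stop_arangodb() {
  # Try a clean stop; if it does not finish within 30 seconds,
  # fall back to killing the arangod processes, as in the procedure.
  if ! run timeout 30 service arangodb3 stop; then
    run kill -9 $(ps -ef | grep arangod | grep -v grep | awk '{print $2}')
  fi
}

stop_arangodb
# Once all three VMs are stopped, bring arangodb3 back up on each VM and
# verify with: netstat -tuplen | grep arangod
# Ports 8528-8531 in the LISTEN state indicate a healthy cluster.
```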

    Bug Tracking Number: CXU-20397.

  • The OAM status of the GRE tunnel is shown as Down even though the tunnel destination is reachable.

    Workaround: Use a GRE over IPsec tunnel.

    Bug Tracking Number: PR1348721

  • On a CPE configured with an LTE backup link, LTE link flaps are observed when the CPE has been running for a long period.

    Workaround: None.

    Bug Tracking Number: PR1349613.

  • When you upgrade from CSO Release 3.2.1 to CSO Release 3.3 for a trial HA environment using KVM, the Kubernetes system pods for the regional load balancer VM are stuck in the Terminating state. This causes the VM to be in the Not Ready state, which in turn causes the health check to fail during the upgrade.

    Workaround:

    1. On the installer VM, execute the salt 'csp-regional-lbvm*' cmd.run 'reboot' command.
    2. Wait for some time until the nodes are in the Ready state.
    3. Rerun the upgrade.sh script to continue with the upgrade.

    Bug Tracking Number: CXU-20271.

  • When you create a network service by using different types of VNFs, Network Services Designer displays an incorrect resource requirement even though CSO uses the exact resources configured.

    Workaround: None.

    Bug Tracking Number: CXU-14864

  • Provisioning of the CPE device fails if one of the VRRs fails.

    Workaround: Recover the VRR that is down and retry the provisioning (ZTP) job.

    Bug Tracking Number: CXU-19063

  • In the centralized deployment, after you import a POP, the CPU, memory, and storage allocation are displayed as zero.

    Workaround: Refresh the UI and the correct information is displayed.

    Bug Tracking Number: CXU-19105

  • For AWS deployments, in some cases, when the central infrastructure deployment is in progress, the MariaDB user creation or permission modification fails because of network latency issues.

    Workaround: Re-run the central infrastructure deployment.

    Bug Tracking Number: CXU-19806.

  • For AWS deployments, the import VRR HA operation fails for some VRRs because of network latency issues.

    Workaround: Re-run the load data services script.

    Bug Tracking Number: CXU-20485.

Modified: 2018-03-31