Known Issues

AWS Spoke

  • The AWS device activation process takes up to 30 minutes. If the process does not complete within 30 minutes, a timeout might occur and you must retry the process. You do not need to download the CloudFormation template again.

    To retry the process:

    1. Log in to the Customer Portal.
    2. Access the Activate Device page, enter the activation code, and click Next.
    3. After the CREATE_COMPLETE message is displayed on the AWS server, click Next on the Activate Device page to proceed with device activation.
  • For an AWS spoke, during the activation process, the device status on the Activate Device page is displayed as Detected even though the device is down.

    Workaround: None.

    Bug Tracking Number: CXU-19779.

CSO HA

  • In a CSO HA setup, two RabbitMQ nodes are clustered together, but the third RabbitMQ node does not join the cluster. This might occur just after the initial installation, if a virtual machine reboots, or if a virtual machine is powered off and then powered on.

    Workaround: Do the following:

    1. Log in to the RabbitMQ dashboards for the central microservices VM (http://central-microservices-vip:15672) and the regional microservices VM (http://regional-microservices-vip:15672).
    2. Check the RabbitMQ overview in the dashboards to see if all the available infrastructure nodes are present in the cluster.
    3. If an infrastructure node is not present in the cluster, do the following:
      1. Log in to the VM of that infrastructure node.
      2. Open a shell prompt and execute the following commands sequentially:

        rabbitmqctl stop_app

        service rabbitmq-server stop

        rm -rf /var/lib/rabbitmq/mnesia/

        service rabbitmq-server start

        rabbitmqctl start_app

    4. In the RabbitMQ dashboards for the central and regional microservices VMs, confirm that all the available infrastructure nodes are present in the cluster.
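
    The command sequence in step 3 can also be run as a short script on the affected node. The following is a minimal sketch, assuming root access and the default mnesia path:

        # On the infrastructure node that failed to join the cluster:
        rabbitmqctl stop_app               # stop the RabbitMQ application
        service rabbitmq-server stop       # stop the broker service
        rm -rf /var/lib/rabbitmq/mnesia/   # remove stale cluster state
        service rabbitmq-server start      # restart the broker service
        rabbitmqctl start_app              # rejoin the cluster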

    Bug Tracking Number: CXU-12107

  • CSO might not come up after a power failure.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-16530

  • In some cases, when the power fails, the ArangoDB cluster does not form.

    Workaround:

    1. Log in to the centralinfravm3 VM.
    2. Execute the service arangodb.cluster stop command.
    3. Log in to the centralinfravm2 VM.
    4. Execute the service arangodb.cluster stop command.
    5. Log in to the centralinfravm1 VM.
    6. Execute the service arangodb.cluster stop command.
    7. On the centralinfravm1 VM, execute the service arangodb.cluster start command and wait 20 seconds for the command to complete.
    8. On the centralinfravm2 VM, execute the service arangodb.cluster start command and wait 20 seconds for the command to complete.
    9. On the centralinfravm3 VM, execute the service arangodb.cluster start command and wait 20 seconds for the command to complete.
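
    The same sequence can be scripted from a single host. The following is a sketch, assuming root SSH access to all three central infrastructure VMs:

        # Stop the ArangoDB cluster service on all three VMs, then start it
        # again one VM at a time, pausing 20 seconds between starts.
        for vm in centralinfravm3 centralinfravm2 centralinfravm1; do
          ssh root@"$vm" 'service arangodb.cluster stop'
        done
        for vm in centralinfravm1 centralinfravm2 centralinfravm3; do
          ssh root@"$vm" 'service arangodb.cluster start'
          sleep 20
        done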

    Bug Tracking Number: CXU-20346.

  • In an HA setup, the time configured for the CAN VMs might not be synchronized with the time configured for the other VMs in the setup. This can cause issues in the throughput graphs.

    Workaround:

    1. Log in to can-vm1 as root.
    2. Modify the /etc/ntp.conf file to point to the desired NTP server.
    3. Restart the NTP process.

    After the NTP process restarts successfully, can-vm2 and can-vm3 automatically re-synchronize their times with can-vm1.
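
    The following is a sketch of the change on can-vm1, assuming the NTP daemon is managed as the ntp service and using a placeholder server name (ntp.example.com):

        # On can-vm1, as root: replace the configured NTP servers with the
        # desired one (ntp.example.com is a placeholder).
        sed -i '/^server /d' /etc/ntp.conf
        echo 'server ntp.example.com iburst' >> /etc/ntp.conf

        # Restart the NTP process and verify peer synchronization.
        service ntp restart
        ntpq -p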

    Bug Tracking Number: CXU-15681

SD-WAN

  • For CSO Release 3.3, the LTE link can only be a backup link. Therefore, the SLA metrics are not applicable, and default values of zero might be displayed on the Application SLA Performance page; you can ignore these values.

    Workaround: None.

    Bug Tracking Number: CXU-19943

  • In a dual CPE spoke, non-cacheable applications do not work when the initial path is on CPE0 and the APBR path selected is on CPE1.

    Workaround: None.

    Bug Tracking Number: PR1340331

  • If all the active links, including OAM connectivity to CSO, are down and the LTE link is used for traffic, then if the DHCP address changes to a new subnet, the traffic is dropped because CSO is unable to reconfigure the device.

    Workaround: None.

    Bug Tracking Number: CXU-19080.

  • If you specify an MPLS link without local breakout capability as the backup link, Internet breakout traffic is dropped, because the overlay link to the hub is not used for Internet traffic when local breakout is enabled for the site.

    Workaround: Configure an Internet or an LTE link as the backup link.

    Bug Tracking Number: CXU-20447.

  • When all local breakout links are down, site-to-Internet traffic fails even though there is an active overlay path to the hub.

    Workaround: None.

    Bug Tracking Number: CXU-19807

  • When the CPE is not able to reach CSO, DHCP address changes on WAN interfaces might not be detected, and the interfaces might not be reconfigured.

    Workaround: None.

    Bug Tracking Number: CXU-19856

  • When the OAM link is down, communication between the CPE devices and CSO does not work even though CSO can be reached over other WAN links. There is no impact on traffic.

    Workaround: None.

    Bug Tracking Number: CXU-19881.

Site and Tenant Workflow

  • ZTP fails on an SRX 3xx Series CPE device because DHCP bindings already exist on the CPE.

    Workaround: Manually clear the DHCP bindings on the CPE and restart ZTP.

    Bug Tracking Number: CXU-13446

  • The tenant delete operation fails when CSO is installed with an external Keystone.

    Workaround: You must manually delete the tenant from the Contrail OpenStack user interface.

    Bug Tracking Number: CXU-9070

  • When both the OAM and data interfaces are untagged, ZTP fails when an NFX Series platform is used as the CPE.

    Workaround: Use tagged interfaces for both OAM and data.

    Bug Tracking Number: CXU-15084

  • The tenant creation job might fail if connectivity from CSO to the VRR is lost during job execution.

    Workaround: If the tenant creation job fails and the tenant is created in CSO, delete the tenant and retrigger the tenant creation.

    Bug Tracking Number: CXU-16884

  • If the tenant name exceeds 16 characters, the activation of the SRX hub device fails.

    Workaround: Delete the tenant, re-create the tenant with a name of 16 characters or fewer, and retry the activation.

    Bug Tracking Number: PR1344369.

  • In some cases, on the Monitor Overview page (Monitoring > Overview) for a site, the ZTP status is displayed incorrectly when you hover over the site.

    Workaround: None.

    Bug Tracking Number: CXU-20226.

  • In some cases, if automatic license installation is enabled in the device profile, the license might not be installed on the CPE device after ZTP is complete, even though the license key is configured successfully.

    Workaround: Reinstall the license on the CPE device by using the Licenses page in the Administration Portal.

    Bug Tracking Number: PR1350302.

  • If the redirect service from Juniper (redirect.juniper.net) is not used, then after you upgrade an NFX device to Junos OS Release 15.1X53-D472, the device is unable to connect to the regional server because the phone-home server certificate (phd-ca.crt) reverts to the factory default.

    Workaround: Manually copy the regional certificate to the NFX device.

    Bug Tracking Number: PR1350492.

  • In a hub and spoke topology with multi-tenancy (network segmentation) enabled, the reverse traffic from the hub to the originating spoke might not take the same path as the traffic in the forward direction. There is no traffic loss.

    Workaround: None.

    Bug Tracking Number: CXU-20494.

Topology

  • In a hub and spoke topology, on link switchover, traffic for non-cacheable applications between the hub and spoke might, in some cases, take an incorrect physical path because the existing session flow is not updated. However, there is no traffic loss.

    Workaround: None.

    Bug Tracking Number: PR1341274

General

  • If you create VNF instances in the Contrail cloud by using Heat Version 2.0 APIs, a timeout error occurs after 120 instances are created.

    Workaround: Contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-15033

  • When you upgrade the gateway router (GWR) by using the CSO GUI, after the upgrade completes and the gateway router reboots, the gateway router configuration reverts to the base configuration and loses the IPsec configuration added during Zero Touch Provisioning (ZTP).

    Workaround: Before you upgrade the gateway router by using the CSO GUI, ensure that you do the following:

    1. Log in to the Juniper Device Manager (JDM) CLI of the NFX Series device.
    2. Execute the virsh list command to obtain the name of the gateway router (GWR_NAME).
    3. Execute the request virtual-network-functions GWR_NAME restart command, where GWR_NAME is the name of the gateway router obtained in the preceding step.
    4. Wait a few minutes for the gateway router to come back up.
    5. Log out of the JDM CLI.
    6. Proceed with the upgrade of the gateway router by using the CSO GUI.
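
    For reference, steps 2 and 3 from the JDM CLI look like the following sketch, where GWR_NAME is a placeholder for the gateway router name reported by virsh list:

        virsh list                                          # note the gateway router name (GWR_NAME)
        request virtual-network-functions GWR_NAME restart  # restart the gateway router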

    Bug Tracking Number: CXU-11823.

  • The reboot of the central infrastructure VM is not supported.

    Workaround: If the VM reboots, contact Juniper Networks Technical Support.

    Bug Tracking Number: CXU-17242.

  • If you run the script to revert the upgraded setup to CSO Release 3.2.1, in some cases, the status of the ArangoDB cluster becomes unhealthy.

    Workaround:

    1. Log in to the centralinfravm3 VM.
    2. Execute the service arangodb3 stop command and wait 30 seconds.
      • If the command executes successfully, proceed to step 3.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk '{print $2}'` command.
    3. Log in to the centralinfravm2 VM.
    4. Execute the service arangodb3 stop command and wait 30 seconds.
      • If the command executes successfully, proceed to step 5.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk '{print $2}'` command.
    5. Log in to the centralinfravm1 VM.
    6. Execute the service arangodb3 stop command and wait 30 seconds.
      • If the command executes successfully, proceed to step 7.
      • If there is no progress after 30 seconds:
        1. Press Ctrl+C to abort the command.
        2. Execute the kill -9 `ps -ef | grep arangod | grep -v grep | awk '{print $2}'` command.
    7. On the centralinfravm3 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    8. On the centralinfravm2 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    9. On the centralinfravm1 VM, execute the service arangodb3 start command and wait 20 seconds for the command to complete.
    10. Execute the netstat -tuplen | grep arangod command on all three central infrastructure VMs to check the status of the ArangoDB cluster. If the port binding is successful for all the central infrastructure VMs, then the status of the ArangoDB cluster is healthy.

      The following is a sample output.

          tcp6 0 0 :::8528 :::* LISTEN 0 54213 9220/arangodb
          tcp6 0 0 :::8529 :::* LISTEN 0 44018 9327/arangod
          tcp6 0 0 :::8530 :::* LISTEN 0 91216 9289/arangod
          tcp6 0 0 :::8531 :::* LISTEN 0 42530 9232/arangod 
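
    If the service arangodb3 stop command hangs, the kill fallback in steps 2, 4, and 6 can also be expressed more compactly. The following one-liner is a sketch of an equivalent fallback, assuming pkill is available on the VM:

        # Force-kill any remaining arangod processes (equivalent to the
        # kill -9 fallback in the steps above).
        pkill -9 -f arangod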

    Bug Tracking Number: CXU-20397.

  • The OAM status of the GRE tunnel is shown as Down even though the tunnel destination is reachable.

    Workaround: Use a GRE over IPsec tunnel.

    Bug Tracking Number: PR1348721

  • On a CPE configured with an LTE backup link, LTE link flaps are observed when the CPE has been running for a long period.

    Workaround: None.

    Bug Tracking Number: PR1349613.

  • For a trial HA environment (using KVM), when you upgrade from CSO Release 3.2.1 to CSO Release 3.3, the Kubernetes system pods for the regional load balancer VM are in the Terminating state. This causes the VM to be in the Not Ready state, which causes the health check to fail during the upgrade.

    Workaround:

    1. On the installer VM, execute the salt 'csp-regional-lbvm*' cmd.run 'reboot' command.
    2. Wait for some time until the nodes are in the Ready state.
    3. Rerun the upgrade.sh script to continue with the upgrade.
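
    The following is a minimal sketch of this recovery, assuming the salt master runs on the installer VM and that node status is checked with kubectl from the central microservices VM:

        # On the installer VM: reboot the regional load-balancer VMs.
        salt 'csp-regional-lbvm*' cmd.run 'reboot'

        # On the central microservices VM: watch until the nodes report Ready.
        kubectl get nodes --watch

        # On the installer VM: rerun the upgrade script (from the directory
        # that contains upgrade.sh).
        ./upgrade.sh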

    Bug Tracking Number: CXU-20271.

  • When you create a network service by using different types of VNFs, Network Services Designer displays an incorrect resource requirement even though CSO uses the exact resources configured.

    Workaround: None.

    Bug Tracking Number: CXU-14864

  • The provisioning of CPE devices fails if all VRRs within a redundancy group are unavailable.

    Workaround: Recover the VRR that is down and retry the provisioning (ZTP) job.

    Bug Tracking Number: CXU-19063

  • In the centralized deployment, after you import a POP, the CPU, memory, and storage allocations are displayed as zero.

    Workaround: Refresh the UI to display the correct information.

    Bug Tracking Number: CXU-19105

  • After the upgrade, the health check on the Contrail Analytics Node (CAN) fails.

    Workaround:

    1. Log in to the CAN VM.
    2. Execute the docker exec analyticsdb service contrail-database-nodemgr restart command.
    3. Execute the docker exec analyticsdb service cassandra restart command.
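
    The following is a sketch of the restart sequence, with an optional verification step that assumes the contrail-status utility is available inside the analyticsdb container:

        # On the CAN VM: restart the analytics database services.
        docker exec analyticsdb service contrail-database-nodemgr restart
        docker exec analyticsdb service cassandra restart

        # Optional: verify that the analytics services report active.
        docker exec analyticsdb contrail-status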

  • The CSO health check displays the following error message: ERROR: ONE OR MORE KUBE-SYSTEM PODS ARE NOT RUNNING

    Workaround:

    1. Log in to the central microservices VM.
    2. Execute the kubectl get pods --namespace=kube-system command.
    3. If the kube-proxy pod is not in the Running state, execute the kubectl apply -f /etc/kubernetes/manifests/kube-proxy.yaml command.
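
    Steps 2 and 3 can be combined into a single conditional check. The following is a sketch, assuming kubectl is configured on the central microservices VM:

        # Redeploy kube-proxy only if it is not in the Running state.
        if ! kubectl get pods --namespace=kube-system | grep kube-proxy | grep -q Running; then
          kubectl apply -f /etc/kubernetes/manifests/kube-proxy.yaml
        fi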

    Bug Tracking Number: CXU-20275.

Modified: 2018-03-31