Known Issues

This section lists known issues for the Juniper Networks Cloud CPE Solution Release 3.1.

  • The device profile srx-deployment-option-1 assigns OAM traffic to the fxp0 interface, which is not available on the SRX Series Services Gateway.

    Workaround: Edit the stage 1 configuration for the CPE device in Customer Portal to use an interface other than fxp0 for OAM traffic. [CXU-3779]

  • The traffic rate value does not change when you monitor a service on an SRX Series Services Gateway in Administration Portal.

    Workaround: None. [CXU-3822]

  • The NFX250 device does not receive communications to block unicast reverse path forwarding (uRPF) because two interfaces on the NFX250 device communicate separately with one interface on the regional microservices server.

    Workaround: Disable the uRPF check in JDM on all interfaces for each NFX Series device. [CXU-3854]

  • You cannot edit the settings for a customer in Administration Portal.

    Workaround: Use Administration Portal to import a JSON file that contains the correct data for the customer. [CXU-4538]

  • You can view only one network service at a time on a CPE device in Customer Portal. [CXU-4551]

  • During activation, the NFX250 device reboots and prompts you to enter the activation code twice, because the Linux kernel on the device has no default value for the HugePages count.

    Workaround: Before you activate the NFX250 device, specify the HugePages count on the device and reboot it. [CXU-5601, PR1254219]

  • After you install CSO, the username and password that CSO uses to access Contrail Analytics might not match the corresponding username and password in Contrail Analytics.

    Workaround: Complete the following actions:

    1. Log in to the CSO central infrastructure VM as root.
    2. View the username that CSO uses to access Contrail Analytics.
      root@host:~/# etcdctl get /csp/infra/contrailanalytics/queryuser
      <username>

      <username> is the actual value that the query returns.

    3. View the password that CSO uses to access Contrail Analytics.
      root@host:~/# etcdctl get /csp/infra/contrailanalytics/querypassword
      <password>

      <password> is the actual value that the query returns.

    4. Log in to the CSO regional infrastructure VM as root.
    5. Repeat Steps 2 and 3 to verify the username and password on the regional host.

      If the username and password on both the central and regional infrastructure VMs match the values configured in Contrail Analytics, you do not need to take further action. The default username configured in Contrail Analytics is admin and the default password is contrail123.

    If the username and password on the central or regional infrastructure VMs do not match the values in Contrail Analytics, update them as follows:

    1. Log in to the appropriate CSO infrastructure VM as root.
    2. Update the username and password with the values configured in Contrail Analytics.
      root@host:~/# etcdctl set /csp/infra/contrailanalytics/queryuser <contrail-analytics-username>
      root@host:~/# etcdctl set /csp/infra/contrailanalytics/querypassword <contrail-analytics-password>

      <contrail-analytics-username> is the actual username, and <contrail-analytics-password> is the actual password.

    [CXU-5873]
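As an illustration only, the comparison performed in Steps 2 and 3 can be sketched in Python. The defaults are the ones cited in the workaround above; the `needs_update` helper is hypothetical and not part of CSO:

```python
# Defaults configured in Contrail Analytics, as stated in the workaround.
CONTRAIL_DEFAULTS = {"queryuser": "admin", "querypassword": "contrail123"}

def needs_update(cso_values, contrail_values=CONTRAIL_DEFAULTS):
    """Return the etcd keys whose CSO value differs from Contrail Analytics."""
    return [key for key, expected in contrail_values.items()
            if cso_values.get(key) != expected]

# Example: CSO stored a stale password on one infrastructure VM.
stale = {"queryuser": "admin", "querypassword": "wrong-password"}
print(needs_update(stale))  # ['querypassword']
```

Only the keys returned by the helper need to be rewritten with `etcdctl set`.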

  • The configuration for a CPE device might not be removed when you deactivate the device in Administration Portal.

    Workaround: To deactivate the CPE device, first delete the configuration from the CPE device with Customer Portal, and then deactivate the device with Administration Portal. [CXU-6059]

  • You cannot edit the Deployment Type (vCPE-Only or uCPE-Only) in a request that you create with Network Service Designer.

    Workaround: Create a new request. [CXU-6474]

  • Performance metrics for the NFX Series device are collected through the HTTP interface.

    Workaround: None. [CXU-8710]

  • In the detailed view of a site on the Sites page (Sites > Site Management), the Overlay Links tab displays only GRE links and not GRE over IPsec links.

    Workaround: None. [CXU-10170]

  • In some cases, when multiple system log messages (syslogs) or queries are being processed, the Contrail Analytics Node (CAN) crashes (Docker restarts).

    Workaround: Do the following:

    1. Log in to the CAN as root.
    2. Restart the analytics and controller Docker containers.
    3. Log in to the analytics Docker container by using the docker exec -it analytics-docker-id bash command, where analytics-docker-id is the ID of the analytics Docker container.
    4. Verify that the following entries are present in the /etc/contrail/contrail-collector.conf file:

      [STRUCTURED_SYSLOG_COLLECTOR]

      # TCP & UDP port to listen on for receiving structured syslog messages

      port=3514

    5. Verify that the following entries are available in the /etc/contrail/contrail-analytics-api.conf file:

      aaa_mode=no-auth

    6. If either of the entries is not present, restart the services by executing the following commands:

      service contrail-collector restart

      service contrail-analytics-api restart

    7. Log in to the controller Docker container by using the docker exec -it controller_docker_id bash command, where controller_docker_id is the ID of the controller Docker container.
    8. Verify that the following entry is present in the /usr/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py file, under def __init__(self, args_str=None):

      self.aaa_mode = 'no-auth'

    [CXU-10838]
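The file checks in Steps 4 and 5 can be sketched in Python. This is an illustrative sketch only: the file contents are inlined samples rather than read from /etc/contrail/, and the variable names are hypothetical:

```python
import configparser

# Sample of /etc/contrail/contrail-collector.conf with the required entries.
collector_conf = """
[STRUCTURED_SYSLOG_COLLECTOR]
# TCP & UDP port to listen on for receiving structured syslog messages
port=3514
"""

parser = configparser.ConfigParser()
parser.read_string(collector_conf)
syslog_port_ok = parser.get("STRUCTURED_SYSLOG_COLLECTOR", "port", fallback=None) == "3514"

# Sample fragment of /etc/contrail/contrail-analytics-api.conf.
analytics_api_conf = "aaa_mode=no-auth\n"
aaa_mode_ok = "aaa_mode=no-auth" in analytics_api_conf

print(syslog_port_ok, aaa_mode_ok)  # True True
```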

  • On the NAT Rules page, if you try to search or use the column filters for departments named Internet or Corporate Network, the search does not work.

    Workaround: None. [CXU-10406]

  • Traffic not classified by SD-WAN policies follows the default routing behavior.

    Workaround: Configure SD-WAN policies for all the traffic originating from the LAN. [CXU-10716]

  • The Activate Device link is enabled even though activation is in progress.

    Workaround: After clicking the Activate Device link, wait for the activation process to complete; the Activate Device link is disabled after activation completes. [CXU-10760]

  • In some cases, an operation might fail with the Authentication required error message because of an expired token.

    Workaround: Retry the operation. [CXU-10809]

  • If one of the CSO infrastructure nodes (virtual machines) fails (shuts down or restarts), the topology service repeatedly sends out the error message “Failed to consume message from queue: 'NoneType' object has no attribute 'itervalues'” to the logs.

    Workaround: After the infrastructure node fails over to the backup node, wait for ten minutes before using any workflows. [CXU-11004]

  • If a user who belongs to more than one tenant is deleted, the user is deleted from all tenants.

    Workaround: None. [CXU-11201]

  • On the Site-Name page (Sites > Site-Name), when you click the Device link (in the Connectivity & Devices box on the Overview tab), you are navigated to the Devices page (Resources > Devices) where all available devices are displayed instead of only the device that you selected.

    Workaround: None. [CXU-11339]

  • If you modify a site group that is not used in any policy, a GUI notification incorrectly indicates that policies need to be deployed.

    Workaround: Deploy the indicated policies on the device. Because there are no actual changes, no configuration is pushed to the device. [CXU-11395]

  • If an infrastructure node (virtual machine) goes down, the backup node takes over the tasks handled by the primary node. However, after the node that was down recovers, it does not join the cluster and is stuck in slave mode.

    Workaround: Ensure that both nodes are up before you perform the following tasks:

    1. Log in to the infrastructure node as root.
    2. Open a shell prompt and access the redis CLI by executing the redis-cli -h node-IP -p 6379 -c command, where node-IP is the IP address of the infrastructure node that went down.
    3. At the redis CLI prompt, execute the CLUSTER FAILOVER command.
    4. Execute the INFO command.
    5. In the output, under replication, ensure that the node is displayed as Master.
    6. Exit the redis CLI by executing the QUIT command.
    7. Log out of the infrastructure node.

    [CXU-11711]
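Step 5 (confirming that the node is displayed as Master) amounts to reading the role field from the INFO output. A minimal illustrative parser, using a made-up sample in the standard Redis INFO format:

```python
# Sample of the Replication section printed by the redis-cli INFO command.
sample_info = """\
# Replication
role:master
connected_slaves:1
"""

def redis_role(info_text):
    """Return the value of the role field from Redis INFO output, or None."""
    for line in info_text.splitlines():
        if line.startswith("role:"):
            return line.split(":", 1)[1].strip()
    return None

print(redis_role(sample_info))  # master
```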

  • If you create a LAN segment with the name LAN0, LAN1, or LAN2, the deployment of the LAN segment fails.

    Workaround: Do not use the names LAN0, LAN1, or LAN2 when you create a LAN segment. [CXU-11743]

  • In the device template, if the ZTP_ENABLED and ACTIVATION_CODE_ENABLED flags are set to true, you cannot proceed with device activation.

    Workaround: Set the ZTP_ENABLED flag to true and the ACTIVATION_CODE_ENABLED flag to false before proceeding with device activation. [CXU-11794]

  • When you remove a cloud hub from a tenant, the corresponding router is removed from the All Tenants scope.

    Workaround: None. [CXU-11796]

  • When you upgrade the gateway router (GWR) by using the CSO GUI, after the upgrade completes and the gateway router reboots, the gateway router configuration reverts to the base configuration and loses the IPsec configuration added during Zero Touch Provisioning (ZTP).

    Workaround: Before you upgrade the gateway router by using the CSO GUI, ensure that you do the following:

    1. Log in to the Juniper Device Manager (JDM) CLI of the NFX Series device.
    2. Execute the virsh list command to obtain the name of the gateway router (GWR_NAME).
    3. Execute the request virtual-network-functions GWR_NAME restart command, where GWR_NAME is the name of the gateway router obtained in the preceding step.
    4. Wait a few minutes for the gateway router to come back up.
    5. Log out of the JDM CLI.
    6. Proceed with the upgrade of the gateway router by using the CSO GUI.

    [CXU-11823]

  • On rare occasions, the Logspout microservice causes the Docker daemon to consume excessive CPU on the microservices virtual machine.

    Workaround: Restart the Logspout microservice by doing the following:

    1. Log in to the central microservices virtual machine as root.
    2. At the shell prompt, run the kubectl get pod command to find out the name of the Logspout pod.
    3. Restart the pod by executing the kubectl delete pod pod-name command, where pod-name is the name of the Logspout pod.

    [CXU-11863]
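Steps 2 and 3 boil down to finding the Logspout pod name in the kubectl get pod output and passing it to kubectl delete pod. A minimal sketch over a made-up sample of the standard kubectl column format (the pod names are hypothetical):

```python
# Hypothetical sample output of `kubectl get pod`.
sample = """\
NAME                        READY   STATUS    RESTARTS   AGE
csp-logspout-4fj2k          1/1     Running   0          3d
csp-topology-service-9xk1p  1/1     Running   0          3d
"""

def find_pod(output, keyword):
    """Return the first pod name containing keyword, or None."""
    for line in output.splitlines()[1:]:   # skip the header row
        name = line.split()[0]
        if keyword in name:
            return name
    return None

print(find_pod(sample, "logspout"))  # csp-logspout-4fj2k
```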

  • If the central infrastructure node that created the SAML 2.0 session goes down and you log out of the CSO GUI, the SAML 2.0 session log out fails.

    Workaround:

    1. Reload the CSO login page.
    2. Enter the username and press the Tab key, or click outside the username field.

      You are automatically logged in to the CSO GUI.

    3. Click the Logout link in the banner to log out.

      You are logged out of the CSO GUI and the SAML 2.0 session.

    [CXU-11867]

  • When the deployment of a LAN segment on a device fails, the device status changes from PROVISIONED to PROVISION_FAILED. However, when you redeploy the LAN segment and the deployment succeeds, the device status does not change back to PROVISIONED. Therefore, when you attempt to deploy an SD-WAN or firewall policy on the device, the deployment fails with the error message "[get_policy_info_list: method execution failed]Site/ Device information not found".

    Workaround: None. [CXU-11874]

  • When you trigger a device activation on the Activate Device page, the displayed status reflects the activation steps completed so far. However, the activation process takes 20 to 30 minutes, and if you click OK to close the Activate Device page, you cannot return to the page to check the status.

    Workaround:

    1. Wait for 20 to 30 minutes after you trigger the activation.
    2. On the Sites page, hover over the device icon to see the status:
      • If the device status is PROVISIONED, it means that the activation was successful.
      • If the device status is EXPECTED or PROVISION_FAILED, then the device activation has failed.

        Contact your service provider for further assistance.

    [CXU-11878]

  • When a tunnel goes down, the event generated displays different information for the NFX Series and SRX Series devices:
    • When the GRE over IPsec tunnel goes down:
      • The event generated for the vSRX device (running on the NFX Series device) has the description ['Tunnel-id ID is inactive'].
      • The event generated for the SRX Series device has the description GRE over IPSEC is Down.
    • When the GRE-only tunnel goes down:
      • The event generated for the vSRX device (running on the NFX Series device) has the description tunnel-oam-down.
      • The event generated for the SRX device has the description GRE tunnel down.

    Workaround: None. [CXU-11895]

  • If you try to delete one or more LAN segments, the confirmation dialog box does not display the list of LAN segments selected for deletion. However, when you click OK to confirm the deletion, the LAN segments are deleted successfully.

    Workaround: None. [CXU-11896]

  • If the role of an existing user is changed from MSP Operator to MSP Administrator and that user tries to switch the tenant by using the scope switcher in the banner, the tenant switching fails.

    Workaround: Delete the existing user and add an MSP user with the MSP Administrator role. The new user will be able to perform the tenant switch. [CXU-11898]

  • In Cloud CPE Solution Release 3.1, editing a site is not supported. When you try to edit a site, the message "unable to retrieve the router info" is displayed.

    Workaround: Delete the site and add the site again with the modified settings. [CXU-11912]

  • If you edit an existing LAN segment that was previously added during site creation, the Department field is changed to Default.

    Workaround: When you edit a LAN segment, ensure that you select the correct department before saving your changes. [CXU-11914]

  • If you apply an APBR policy on a vSRX device or an SRX Series device, in some cases, the APBR rule is not active on the device.

    Workaround:

    1. Log in to the vSRX or SRX Series device in configuration mode.
    2. Deactivate the APBR policy by executing the delete apply-groups srx-gwr-apbr-policy-config command.
    3. Commit the configuration by executing the commit command.
    4. Activate the APBR policy by executing the set apply-groups srx-gwr-apbr-policy-config command.
    5. Commit the configuration by executing the commit command.
    6. Log out of the device.

    [CXU-11920]

  • In some cases, traffic fails to flow over an overlay tunnel.

    Workaround: Reboot the vSRX or SRX Series device to ensure that the traffic flows normally. [CXU-11921]

  • When you log in to CSO as a Tenant Administrator user, the Configure Site workflow is not available.

    Workaround: Log in as an MSP Administrator user and switch to the tenant for which you want to configure the site. The Configure Site workflow becomes available. [CXU-11922]

  • ZTP activation of an SRX Series device by using the phone home client (PHC) fails.

    Workaround: If the activation fails with the error message Ztp-activation finished incomplete for ems-device, and no other error messages are present in the activation logs, the MSP Administrator (in the All Tenants scope) can retry the activation job. Navigate to the Jobs page (Monitor > Jobs), select the failed job, and click the Retry Job button. [CXU-11926]

  • On the Monitor > Overview page:
    • The number of hub devices is reported as zero even though a cloud hub exists.

      Workaround: The expanded view displays the correct data.

    • When you collapse and expand the map view, the number of links reported is incorrect.

      Workaround: Refresh the page to display the correct data.

    [CXU-11931]

  • For GRE-over-IPsec overlays, in some cases, the event-options configuration fails to re-enable the gr-0/0/0 interface. As a result, the traffic between the spoke and hub overlay stops flowing even though the overlay is up.

    Workaround: Do the following:

    1. Find out which gr-0/0/0 interfaces are down on the spoke device:
      1. Log in to the spoke device as root.
      2. Execute the show interfaces gr-0/0/0.* terse command.

        An example of the command and output follows:

        user@host> show interfaces gr-0/0/0.* terse
        Interface   Admin Link Proto Local         Remote
        gr-0/0/0.1  up    up   inet  192.0.2.1/31
                               mpls
        gr-0/0/0.2  down  up   inet  192.0.2.2/31
                               mpls
        gr-0/0/0.3  up    up   inet  192.0.2.3/31
                               mpls
        gr-0/0/0.4  up    up   inet  192.0.2.4/31
                               mpls
        gr-0/0/0.5  down  up   inet  192.0.2.5/31
                               mpls
        gr-0/0/0.6  up    up   inet  192.0.2.6/31
                               mpls
        
    2. For each down interface on the spoke, execute the show configuration | display set | match "disable" | match "gr-interface-name unit unit-number" command to find out whether any disable statements are present in the configuration, where gr-interface-name is the name of the interface and unit-number is the unit number obtained in Step 1.

      An example of the command and output follows:

      user@host>show configuration | display set | match "disable" | match "gr-0/0/0 unit 5"
      set groups NFX-5-SPOKE_CPE1_WAN_2_GRE_IPSEC_0 interfaces gr-0/0/0 unit 5 disable
    3. Find out whether there is a problem by doing the following:
      1. Execute the show interfaces st0* terse operational command to find out the status of the IPsec tunnels and the IP addresses:

        An example of the command and output follows:

        user@host> show interfaces st0* terse
        Interface Admin Link Proto Local Remote
        st0 up up
        st0.1 up down inet 192.0.2.7/31
        st0.2 up up inet 192.0.2.8/31  # [IP address match]
        st0.3 up up inet 192.0.2.9/31
        
      2. Execute the show interfaces gr-down-interface-name command, where gr-down-interface-name is the name of the interface that was down (obtained in Step 1).

        Note: You must execute this command for all the interfaces that were down.

        An example of the command and output follows:

        user@host> show interfaces gr-0/0/0.5
          Logical interface gr-0/0/0.5 (Index 139) (SNMP ifIndex 579)
            Flags: Down
        Down Point-To-Point SNMP-Traps 0x4000
        IP-Header 192.0.2.10/31:192.0.2.8/31:47:df:64:0000000000000000  # [IP address match]
        Encapsulation: GRE-NULL
            Gre keepalives configured: Off, Gre keepalives adjacency state: down
            Input packets : 0
            Output packets: 0
            Security: Zone: trust
            Allowed host-inbound traffic : bootp bfd bgp dns dvmrp igmp ldp msdp nhrp
            ospf ospf3 pgm pim rip ripng router-discovery rsvp sap vrrp dhcp finger ftp
            tftp ident-reset http https ike netconf ping reverse-telnet reverse-ssh
            rlogin rpm rsh snmp snmp-trap ssh telnet traceroute xnm-clear-text xnm-ssl
            lsping ntp sip dhcpv6 r2cp webapi-clear-text webapi-ssl tcp-encap
            Protocol inet, MTU: 9168
              Flags: Sendbcast-pkt-to-re
              Addresses, Flags: Dest-route-down Is-Preferred Is-Primary
                Destination: 192.0.2.11/31, Local: 192.0.2.5/31
            Protocol mpls, MTU: 9156, Maximum labels: 3
        
      3. For each st0 IP address obtained in Substep a, search the output from Substep b for a match. (In the sample outputs, the IP address of the st0.2 interface (192.0.2.8/31) matches the IP address in the IP-Header parameter.)
      4. If the st0 interface corresponding to the matched IP address is up and the corresponding gr-0/0/0 interface is down, then there is a problem with the configuration.
    4. Modify the configuration on the spoke as follows:
      1. Log in to the spoke device as root.
      2. Delete the disable statements found in Step 2 by executing the delete command. An example is provided below:
        user@host# delete groups NFX-5-SPOKE_CPE1_WAN_2_GRE_IPSEC_0 interfaces gr-0/0/0 unit 5 disable
      3. Commit the configuration by executing the commit command.
    5. Repeat Step 1 through Step 4 for the hub device.
    6. Verify that the links are up by following the procedure in Step 1.

    [CXU-11996]
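Step 1 of the procedure above scans the terse output for logical interfaces whose Admin column reads down. A minimal illustrative parser over a sample of that output (the parsing helper is not part of any Juniper tool):

```python
# Sample lines in the format of `show interfaces gr-0/0/0.* terse`.
terse_output = """\
gr-0/0/0.1 up   up inet 192.0.2.1/31
gr-0/0/0.2 down up inet 192.0.2.2/31
gr-0/0/0.3 up   up inet 192.0.2.3/31
gr-0/0/0.5 down up inet 192.0.2.5/31
"""

def down_interfaces(output):
    """Return the gr- logical interfaces whose Admin column is 'down'."""
    down = []
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0].startswith("gr-") and fields[1] == "down":
            down.append(fields[0])
    return down

print(down_interfaces(terse_output))  # ['gr-0/0/0.2', 'gr-0/0/0.5']
```

Each interface returned is a candidate for the disable-statement check in Step 2.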

  • If an SLA profile is defined with only the throughput metric specified, in some cases, the SLA profile is assigned to a link that is down.

    Workaround: In addition to the throughput metric, ensure that you specify at least one more metric (for example, packet loss or latency) for the SLA profile. [CXU-11997]

  • You see an import error message when you use the license tool, because the tssmclient package is missing from the license tool files.

    Workaround: Complete the following procedure:

    1. Copy the tssmclient.tar.gz file from https://www.juniper.net/support/downloads/?p=cso#sw to the csoVersion/licenseutil/csoclients/ directory on the installer VM, where csoVersion is the name of the installer directory created when you extract the TAR file for the installation package.
    2. Access the csoVersion/licenseutil/csoclients/ directory and extract the tssmclient directory from the TAR file.

      For example, if the installer directory is called csoVersion:

      root@host:~/# cd csoVersion/licenseutil/csoclients/
      root@host:~/csoVersion/licenseutil/csoclients# tar xvzf tssmclient.tar.gz

      This command replaces the tssmclient folder and its contents.

    3. Run the license tool using the instructions in the Deployment Guide.

    [CXU-12054]

  • In a three-node setup, two nodes are clustered together, but the third node is not part of the cluster. In addition, in some cases, the RabbitMQ nodes are also not part of the cluster. This is a rare scenario, which can occur just after the initial installation, if a virtual machine reboots, or if a virtual machine is powered off and then powered on.

    Workaround: Do the following:

    1. Log in to the RabbitMQ dashboard for the central microservices VM (http://central-microservices-vip:15672) and the regional microservices VM (http://regional-microservices-vip:15672).
    2. Check the RabbitMQ overview in the dashboards to see if all the available infrastructure nodes are present in the cluster.
    3. If an infrastructure node is not present in the cluster, do the following:
      1. Log in to the VM of that infrastructure node.
      2. Open a shell prompt and execute the following commands sequentially:

        service rabbitmq-server stop

        rabbitmqctl stop_app

        rm -rf /var/lib/rabbitmq/mnesia/

        service rabbitmq-server start

        rabbitmqctl start_app

    4. In the RabbitMQ dashboards for the central and regional microservices VMs, confirm that all the available infrastructure nodes are present in the cluster.

    [CXU-12107]

  • Some variables in the CSO and NSC installer packages do not have the correct values.

    Workaround: After you extract the TAR file for the installation package, access the installer directory, which has the same name as the installer package, and execute the following sed commands.

    For example, if the name of the installer package is csoVersion.tar.gz:

    root@host:~/# cd csoVersion
    root@host:~/csoVersion# sed -i s@gcr.io/google_containers@csp-installer:10000@g salt/file_root/kubeminion/files/manifests/kube-proxy.yaml
    root@host:~/csoVersion# sed -i s@"timedatectl set-ntp"@ntpdate@g salt/file_root/ntp/init.sls
    root@host:~/csoVersion# sed -i s@"add_admin_portal_documentation(server_dict\['ip'\]"@"#add_admin_portal_documentation(server_dict\['ip'\])"@g micro_services/core.py

    You can then use the files and tools in the installer directory to perform operations such as provisioning VMs, creating configuration files, and installing the solution.

    [CXU-12113]
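For illustration, the first sed command is a plain textual substitution. The same replacement expressed in Python, applied to a hypothetical sample line from kube-proxy.yaml (the image tag is made up):

```python
# Hypothetical line from kube-proxy.yaml before the fix.
line = "image: gcr.io/google_containers/kube-proxy:v1.5.2"

# Mirrors: sed -i s@gcr.io/google_containers@csp-installer:10000@g
fixed = line.replace("gcr.io/google_containers", "csp-installer:10000")
print(fixed)  # image: csp-installer:10000/kube-proxy:v1.5.2
```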

  • When you try to install the Distributed_Cloud_CPE_Network_Service_Controller_3.1 package, the load services data module fails with an import error in the publish_data_to_design_tools function.

    Workaround:

    1. Navigate to the untar-dir/micro_services/ directory, where untar-dir is the directory where you extracted the installation TAR file.
    2. Open the load_services_data.py file in an editor and comment out the publish_data_to_design_tools function in the file as follows:
          ''' 
          # publish data to design-tools 
          publish_data_to_design_tools( 
              token, 
              regions['central']['vip_ip'], 
              regions['central']['vip_port'], 
              data_str, 
              http_protocol 
          ) 
          '''
      
    3. Save the load_services_data.py file.
    4. Run the load_service_data.sh script to continue with the installation.

    [CXU-12137]

  • When you deploy a firewall policy, the deployment fails with the message Fail to invoke mapper to create snapshot with reason null.

    Workaround: Do the following:

    1. Log in to the central infrastructure virtual machine (VM) as root.
    2. Start the ZooKeeper CLI by executing the /usr/share/zookeeper/bin/zkCli.sh command.
    3. Execute the delete /secmgt/sm_initialized command.
    4. Exit the ZooKeeper CLI by executing the quit command.
    5. Log out of the central infrastructure VM.
    6. Log in to the central microservice VM as root.
    7. At the shell prompt, execute the kubectl get pods | grep secmgt-sm command to find out the name of the security management pod.
    8. Restart the pod by executing the kubectl delete pod pod-name command, where pod-name is the name of the security management pod.
    9. Wait until the security management pod is in the 1/1 running state.
    10. Log out of the central microservice VM.
    11. Re-deploy the firewall policy.

    [CXU-12151]

Modified: 2017-09-26